Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
14,610,533
https://en.wikipedia.org/wiki/Outer%20membrane%20protein%20W%20family
The outer membrane protein W (OmpW) family is a family of evolutionarily related proteins found in the bacterial outer membrane. The family includes outer membrane protein W (OmpW) proteins from a variety of bacterial species. In Escherichia coli, OmpW may serve as the receptor for S4 colicins.
Outer membrane protein W family
[ "Biology" ]
72
[ "Protein families", "Protein domains", "Protein classification" ]
14,611,016
https://en.wikipedia.org/wiki/Imaging%20informatics
Imaging informatics, also known as radiology informatics or medical imaging informatics, is a subspecialty of biomedical informatics that aims to improve the efficiency, accuracy, usability and reliability of medical imaging services within the healthcare enterprise. It is devoted to the study of how information about and contained within medical images is retrieved, analyzed, enhanced, and exchanged throughout the medical enterprise. Because radiology is an inherently data-intensive and technology-driven specialty, radiologists have become leaders in imaging informatics. However, as digitized images have proliferated across medicine into fields such as cardiology, ophthalmology, dermatology, surgery, gastroenterology, obstetrics, gynecology and pathology, advances in imaging informatics are also being tested and applied in other areas of medicine. Industry players and vendors involved with medical imaging, along with IT experts and other biomedical informatics professionals, are contributing to this expanding field.

Imaging informatics exists at the intersection of several broad fields:
biological science – bench sciences such as biochemistry, microbiology, physiology and genetics
clinical services – the practice of medicine, bedside research (including outcomes and cost-effectiveness studies), and public health policy
information science – the acquisition, retrieval, cataloging, and archiving of information
medical physics / biomedical engineering – the use of equipment and technology for a medical purpose
cognitive science – human-computer interaction, usability, and information visualization
computer science – computer algorithms for applications such as computer-assisted diagnosis and computer vision

The diversity of industry players and professional fields involved in imaging informatics created demand for new standards and protocols, including DICOM (Digital Imaging and Communications in Medicine), Health Level 7 (HL7), International Organization for Standardization (ISO) standards, and emerging protocols for artificial intelligence. Current research in imaging informatics focuses on artificial intelligence (AI) and machine learning (ML), which are being used to develop automation methods, disease classification, advanced visualization techniques, and improvements in diagnostic accuracy. However, the integration of AI and ML faces several challenges in data management and security.

History

Medical imaging to imaging informatics

While the field of imaging informatics is built on modern computing, its roots trace back to the dawn of the 20th century. On November 8, 1895, the German physicist Wilhelm Conrad Röntgen observed a new imaging phenomenon during his experiments, which he named "X-rays". This discovery gave rise to the field of medical imaging. X-rays remained the only medical imaging technology for several decades after their discovery; the mid 20th century then brought a rapid expansion of the field.
The new modalities included:
computed tomography (CT), which visualizes soft tissue at high resolution
magnetic resonance imaging (MRI), a modern standard for soft tissue imaging
ultrasound, which uses sound waves to create less expensive visualizations
nuclear imaging and hybrid scanners, which combine multiple modalities for functional imaging and higher spatial resolution

As these imaging techniques became more sophisticated, the amount of information that medical imaging professionals were expected to process also increased, and the digital revolution of the mid to late 20th century further increased the data these techniques could gather. The main limiting factor for the medical imaging field thus became the human inability to accurately interpret large amounts of data, and the need arose for computerized assistance with complex digital image analysis, storage and manipulation. Modern imaging informatics developed to meet these needs.

Imaging informatics development

Imaging informatics is a broad field with numerous areas of interest, and its development is a culmination of the development of various individual technologies. Several key innovations are as follows:

Picture archiving and communication system (PACS)
The development of PACS popularized the use of image storage and retrieval systems in medical practices, and this new technology in turn demanded others. Given the impact PACS had on the medical community, it quickly became clear that digital imaging standards were needed. In response, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) created the Digital Imaging and Communications Standards Committee, which later became DICOM.

Information technology integration
The digital age's impact on radiology produced a large influx of data that needed to be managed. To manage it, information technology was incorporated in the form of systems such as the Radiology Information System (RIS) and the Hospital Information System (HIS). These systems work in tandem with PACS and other imaging technology to streamline patient data management.

Computer-aided detection and diagnosis
The idea behind computer-aided detection (CAD) and computer-aided diagnosis (CADx) is that the analysis and interpretation of medical image data can be automated, potentially with higher accuracy than human detection and diagnosis. Interest in this subject dates back to 1966, when radiology imaging first became digitized. The first successful implementation of a CAD system came in 1994 at the University of Chicago, for use in mammography, followed in 1998 by the first commercial CAD system, the ImageChecker M1000. Since the start of the 21st century, machine learning techniques have been used to implement CAD and CADx systems. Further development of these technologies is attractive because it offers a solution to human limitations in medical image processing. Although a highly accurate and fully automated CAD system has yet to be realized, recent advances in artificial intelligence may allow for functioning implementations.

Standards and protocols

In imaging informatics it is essential that information about industry standards and data-sharing protocols stays current.
Rapid advancement in the field demands vigilance to maintain uniformity, foster interoperability, and guarantee the effective exchange of imaging data. Several key facets warrant consideration:

Digital imaging and communications in medicine (DICOM) standards
The Digital Imaging and Communications in Medicine (DICOM) standard defines a structural scheme that integrates medical imaging data with the relevant patient identifiers into unified data sets, analogous to the metadata embedded in JPEG images. A DICOM object consists of many attributes, notably the pixel data, which depending on the imaging modality may correspond to a single image or to an array of frames representing motion or volumetric data, as in cine loops or multi-dimensional scans in nuclear medicine. This architecture allows complex, multi-faceted data to be assimilated into a single DICOM file. The standard supports a range of pixel data compression algorithms, including JPEG and JPEG 2000, and also allows the data set as a whole to be compressed. DICOM specifies three encodings for data elements; explicit value representation is preferred, apart from specific exceptions elaborated in Part 5 of the DICOM standard. Uniformly across applications, the file format usually incorporates a header containing essential attributes and data about the originating application.

The proposed workflow integrates DICOM Structured Reporting (SR), in which essential measurements are encoded as DICOM SR objects. These objects are used to fill a predefined SR template, producing a standardized report composed of discrete data elements, which is then transmitted to the Electronic Medical Record (EMR) system. The discrete data extracted from these reports support the longitudinal monitoring of individual patient metrics, can be forwarded to data registries, or can be used for clinical research.

Health level 7 (HL7) standards
One example of HL7 standards in practice is DDInteract, a tool designed to support collaboration between healthcare practitioners and patients in finding the therapeutic approach that minimizes the risks posed by potential drug-drug interactions. Its user interface is organized into four distinct segments. Medication data can be represented across several Fast Healthcare Interoperability Resources (FHIR) resource types, which DDInteract must analyze carefully: MedicationRequest covers medications prescribed to the patient; MedicationDispense covers medications physically provided to the patient; and MedicationStatement covers medications the patient reports having taken or currently taking. A single medication may be represented in multiple resource forms, and potential duplicates are merged into a single record based on the most recent date and a defined hierarchy among the resource types. To make retrieval from the FHIR server efficient, not every medication instance is considered: only resources that are currently active or were active within the past 100 days are included, reflecting the common U.S. practice of dispensing at most a three-month supply of medication.
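A minimal sketch of the de-duplication rule just described, assuming a recency-plus-hierarchy merge over simplified records; the hierarchy ordering, field names and helper function are illustrative assumptions, not DDInteract's actual implementation.

```python
# Sketch of merging FHIR medication resources as described above.
# Assumptions (not from the source): the hierarchy ranks MedicationRequest
# over MedicationDispense over MedicationStatement, and each resource is
# reduced to a (name, type, date) record.
from datetime import date, timedelta

HIERARCHY = {"MedicationRequest": 0, "MedicationDispense": 1, "MedicationStatement": 2}

def merge_medications(resources, today=None, window_days=100):
    """Collapse duplicate medication resources, keeping one record per drug."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    merged = {}
    for res in resources:
        # Only resources active now or within the past 100 days are considered.
        if res["date"] < cutoff:
            continue
        prev = merged.get(res["name"])
        # Prefer the most recent date; break ties with the resource-type hierarchy.
        if prev is None or (res["date"], -HIERARCHY[res["type"]]) > (prev["date"], -HIERARCHY[prev["type"]]):
            merged[res["name"]] = res
    return list(merged.values())

meds = [
    {"name": "warfarin", "type": "MedicationStatement", "date": date(2024, 3, 1)},
    {"name": "warfarin", "type": "MedicationRequest", "date": date(2024, 3, 1)},
    {"name": "amiodarone", "type": "MedicationDispense", "date": date(2023, 1, 5)},
]
print(merge_medications(meds, today=date(2024, 3, 15)))  # amiodarone falls outside the window
```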
International organization for standardization (ISO) standards
A Quality Management System (QMS) encompasses the organizational structure, resources, personnel expertise, and documented procedures that together assure and improve the quality of an organization's products and services. It defines a set of coordinated activities for controlling and improving quality. The ISO 9000 family is the most widely adopted framework for QMS implementations, while the ISO 15189 standard provides a specialized framework designed for clinical laboratory settings.

Artificial intelligence in imaging informatics

A systematic review assessed the design, reporting standards, risk of bias, and validity of claims in studies comparing diagnostic deep learning algorithms in medical imaging against clinicians. Drawing on data from major databases covering 2010 to June 2019, the review targeted studies involving convolutional neural networks (CNNs), which are notable for their capacity to learn the features needed for image classification in medical contexts. The review found a notable shortage of randomized clinical trials on the subject, identifying only ten such studies, of which merely two had been published; those two exhibited low risk of bias and good adherence to reporting standards. Of the 81 non-randomized studies located, only a minority were prospective or validated in real clinical settings, and the majority presented a high risk of bias, poor compliance with reporting norms, and little accessibility of data and code. The review underscores the need for more prospective studies and randomized trials, advocating reduced bias, greater clinical relevance, improved transparency, and tempered conclusions in the growing field of deep learning for medical imaging.

The exponential growth in digital data, alongside enhanced computing capabilities, has markedly accelerated advances in artificial intelligence (AI), which are now progressively being incorporated into healthcare. These AI applications aim to refine diagnosis, treatment, and prognosis through classification and prediction models. Their evolution is impeded, however, by a lack of rigorous reporting standards for data sourcing, model architecture, and the methods used in model evaluation and validation. In response, researchers have proposed MINIMAR (Minimum Information for Medical AI Reporting), an initiative to establish the minimum information needed to understand AI-driven predictions, the populations they target, their inherent biases, and their generalizability. The initiative urges the adoption of standardized protocols so that AI implementations in healthcare are reported accurately and responsibly, facilitating the development and deployment of clinical decision-support tools while addressing concerns about accuracy and bias.
At a minimum, the proposed standard should meet several essential criteria:
First, it should include comprehensive details about the population from which the training data are derived, stating the data sources and the methods used for cohort selection.
Second, the demographics of the training data should be explicitly documented, so they can be meaningfully compared with the demographics of the population on which the model is intended to operate.
Third, the model's architecture and development process should be fully disclosed, to allow clear interpretation of the model's intended purpose, comparison with analogous models, and exact replication.
Fourth, the process of model evaluation, optimization, and validation must be transparently reported, to clarify how local model optimization is achieved and to support replication and the sharing of resources.

Evaluation of artificial intelligence in imaging informatics

Advantages
Improved diagnostic accuracy: Artificial intelligence, particularly through convolutional neural networks (CNNs), has transformed medical imaging by significantly improving diagnostic accuracy. These technologies excel at autonomously identifying pertinent features in imaging data, thereby strengthening diagnostic, prognostic, and therapeutic strategies.
Operational efficiency: AI can analyze extensive imaging datasets faster than humans can, offering the potential to shorten the interval between imaging and diagnosis, to the benefit of patient care.
Consistency and replicability: Initiatives such as MINIMAR promote standardized reporting and deployment of AI in healthcare, improving the consistency and replicability of AI-driven diagnostic tools across clinical environments.

Disadvantages
Inadequate clinical validation: The limited number of randomized clinical trials directly comparing AI systems with human clinicians highlights a significant gap in clinical validation, and many existing studies show high risk of bias and poor adherence to established reporting standards.
Accessibility of resources: Limited access to the datasets and algorithms used in AI research impedes the broader scientific community's ability to validate, replicate, and build on existing studies.
Transparency and ethical concerns: AI development in medical imaging faces challenges of transparency in how models are built, trained, and validated. There is also pressing concern that, if not properly checked, these models may propagate existing biases or introduce new ones.

Recommendations for future development
Expansion of rigorous trials: The field requires a substantial increase in prospective, well-designed randomized trials to thoroughly assess and validate AI applications in clinical settings.
Standardization of reporting: Comprehensive reporting standards, as proposed by initiatives like MINIMAR, will address transparency issues, reduce biases, and enhance the generalizability of AI applications, ensuring they meet rigorous scientific and ethical standards.
Promotion of open data practices: More open access to AI datasets and modeling code will foster a collaborative environment that enhances the scrutiny, replication, and advancement of AI technologies, solidifying their role in healthcare.

In summary, while AI offers significant opportunities for advancing imaging informatics, exploiting these opportunities fully requires stringent validation, adherence to robust reporting frameworks, and a sustained commitment to addressing ethical considerations. These steps are pivotal if AI-driven tools are to deliver on their promise of more efficient and effective medical diagnostics.

Areas of interest

Key areas relevant to imaging informatics include:
Picture archiving and communication system (PACS) and component systems
Imaging informatics for the enterprise
Image-enabled electronic medical records
Radiology Information Systems (RIS) and Hospital Information Systems (HIS)
Digital image acquisition
Image processing and enhancement
Radiomics
Image data compression
3D visualization and multimedia
Speech recognition
Computer-aided diagnosis (CAD)
Imaging facilities design
Imaging vocabularies and ontologies
Data mining of medical image databases
Transforming the Radiological Interpretation Process (TRIP)
DICOM, HL7, FHIR and other standards
Workflow and process modeling and simulation
Quality assurance
Archive integrity and security
Teleradiology
Radiology informatics education
Digital imaging

Applications

Imaging informatics has numerous applications within the medical field.

Radiology
Imaging informatics is most prominent within the field of radiology. Using AI-based tools, radiologists can save time and effort when analyzing images. A study published in Current Medical Imaging found that in AI-assisted CT imaging, radiologists' reading time to detect lung nodules and pleural effusions was reduced by more than 44%.

Cardiology
Imaging informatics in cardiology aids in the molecular phenotyping of cardiovascular (CV) diseases and in the unification of CV knowledge. Through data extraction and imaging, followed by machine learning analysis of these data and images, researchers can categorize diseases based on the characteristics or features discovered, and can then unify this CV information into one platform for continued analysis and information retrieval.

Pathology
Imaging informatics in pathology enables a wide range of disease detection and analysis, most prominently the detection and analysis of different forms of cancer. Diagnosing cancer manually is a painstaking and subjective process that can involve examining millions of cells. Through various clinical decision support systems (CDSS), professionals can reduce the manual labor of tissue region selection, using whole-slide imaging (WSI) tools to maximize the information analyzed. Several predictive models aim to identify regions of interest within WSIs; these require training before use. Unsupervised models are being introduced but are currently less prominent. One example of an unsupervised approach detects tissue folds by clustering the pixels of an image according to the difference between the saturation and intensity values of every pixel; a minimal sketch follows below. Being unsupervised, this method has some limitations.
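A minimal sketch of that saturation-minus-intensity clustering idea, assuming two k-means clusters and that the higher-scoring cluster corresponds to folds; these choices, and the function name, are illustrative rather than taken from a specific published implementation.

```python
# Sketch: unsupervised tissue-fold detection on a whole-slide image tile.
# Assumptions (not from the source): two k-means clusters suffice, and the
# cluster with the higher mean saturation-minus-intensity score is treated
# as the candidate "fold" region.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

def detect_tissue_folds(rgb_tile: np.ndarray) -> np.ndarray:
    """Return a boolean mask of likely tissue-fold pixels for an RGB tile."""
    hsv = rgb2hsv(rgb_tile)                # channels: hue, saturation, value
    diff = hsv[..., 1] - hsv[..., 2]       # saturation minus intensity (value)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diff.reshape(-1, 1))
    labels = labels.reshape(diff.shape)
    # Folds are thicker, more saturated and darker, so they score higher on diff.
    fold_cluster = int(np.argmax([diff[labels == k].mean() for k in (0, 1)]))
    return labels == fold_cluster

mask = detect_tissue_folds(np.random.rand(32, 32, 3))  # toy tile for demonstration
```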
In particular, the method has low sensitivity to the different types of tissue folds that can appear within an image, and low specificity on images without tissue folds.

Training

In the US and some other countries, radiologists who wish to pursue sub-specialty training in this field can undergo fellowship training in imaging informatics. Medical imaging informatics fellowships are undertaken after completion of board certification in diagnostic radiology, and may be pursued concurrently with other sub-specialty radiology fellowships. The American Board of Imaging Informatics (ABII) also administers a certification examination for imaging informatics professionals, and PARCA (PACS Administrators Registry and Certification Association) certifications likewise exist for imaging informatics professionals. The American Board of Preventive Medicine (ABPM) offers a certification examination in clinical informatics for physicians who hold a primary board certification with the American Board of Medical Specialties, a medical license and a medical degree. There are two pathways to become eligible to sit for the examination: the Practice Pathway (open through 2022), for those who have not completed ACGME-accredited fellowship training in clinical informatics, and the ACGME-Accredited Fellowship Pathway, requiring a fellowship of at least 24 months in duration.

Recent innovations

Integration of DICOM standards (late 1990s to early 2000s)
The expansion of the DICOM standards facilitated the widespread adoption of Picture Archiving and Communication Systems (PACS), a milestone in the digital transformation of imaging informatics. This standardization, which began to take hold in the late 1990s and was established by the early 2000s, has enhanced the ability to store, retrieve, and share medical images across different systems, improving the efficiency of medical imaging practices.

Structured and automated reporting (early 2010s)
The adoption of structured reporting aimed to make reports concise and uniform, influencing patient care. A notable example is the introduction of BI-RADS (Breast Imaging–Reporting and Data System), which has improved consistency across mammography reports. This milestone spans several years, as these systems were refined and more widely adopted throughout the early 2010s.

Advancements in AI and deep learning (2012)
The realization that graphics processing units (GPUs) could be used to accelerate neural networks occurred around 2012. This advance led to the rapid development of deep learning techniques, speeding up tasks such as image segmentation, feature recognition, and the creation of algorithms from large datasets of annotated images (a minimal illustration appears at the end of this section). This era of AI has enabled high-performance algorithms capable of assisting in hundreds of diagnostic tasks.

Rise of radiomics (late 2010s)
The field of radiomics, which extracts quantitative features from medical images that are invisible to the human eye, grew significantly towards the late 2010s. This approach enables a deeper analysis of imaging data, which can be correlated with genomic patterns and other medical data to enhance diagnostic and predictive accuracy.

Photon-counting CT detectors (2022)
The development and FDA clearance of photon-counting detectors (PCD) for computed tomography (CT) scans in 2022 was an important innovation. These detectors convert X-rays to electrical signals more efficiently, allowing better material differentiation and potentially reducing the radiation dose for patients.
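As the illustration promised above, the following is a minimal convolutional classifier of the kind used for diagnostic imaging tasks; the architecture, layer sizes, and two-class setup are illustrative assumptions, not a model described in this article.

```python
# Minimal CNN image classifier sketch (illustrative sizes, not from the source).
import torch
import torch.nn as nn

class TinyDiagnosticCNN(nn.Module):
    """Toy two-class classifier for single-channel (e.g., X-ray-like) images."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Forward pass on a random batch of 64x64 grayscale images.
model = TinyDiagnosticCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

In practice such models are trained on large sets of annotated images; the GPU acceleration noted above is what made that training tractable.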
Current research and future directions

Current research in imaging informatics is primarily focused on the integration and advancement of artificial intelligence (AI) and machine learning (ML) within medical imaging technologies. Efforts are concentrated on enhancing diagnostic precision, improving predictive analytics, and automating image analysis. Deep learning, a subset of ML, is particularly pivotal in transforming radiological imaging, with algorithms increasingly being developed for tasks such as tumor detection, organ segmentation, and anomaly identification. These advancements aim not only to increase the efficiency and accuracy of diagnoses but also to reduce the workload on radiologists by automating routine tasks.

Looking ahead, imaging informatics is expected to further embrace interdisciplinary approaches, incorporating genetics, pathology, and data from wearable devices to offer more holistic views of patient health. The concept of "radiogenomics", which links imaging features with genomic data, is an area of growing interest, potentially leading to more personalized and precise medical treatments. The ongoing development of interoperability standards and secure data exchange protocols will also be crucial in enabling the seamless integration of imaging data across healthcare platforms, enhancing collaborative research and clinical practice globally.

Challenges in imaging informatics

There are several challenges in the field of imaging informatics:
Data management: The sheer volume of data generated by large numbers of high-quality images poses storage and efficiency problems. Efficient management, storage, and retrieval of these images is critical, a challenge both for infrastructure and for the development of systems capable of handling and processing large datasets efficiently.
Integration: Healthcare is slow to adopt change, because all systems must be thoroughly tested and must work in tandem with existing systems without any issues.
Security: Personal security and safe data management are always concerns, and the concern is elevated in healthcare, where security standards and regulations are much stricter. Because medical imaging often involves sharing sensitive patient data across networks, robust security measures are essential to protect against data breaches and ensure privacy compliance. These include secure transmission, encryption of data at rest, and rigorous access controls.
Integration of artificial intelligence: While AI offers significant potential to enhance diagnostic accuracy and efficiency in imaging, its integration into clinical workflows is fraught with challenges. These include the need for high-quality, annotated datasets for training AI models, the risk of algorithmic bias, and the black-box nature of some AI systems, which can obscure how decisions are made. There is also skepticism among healthcare professionals regarding the reliability and accuracy of AI, which can hinder its adoption.
Ethical and legal issues: The deployment of advanced imaging technologies raises ethical questions about the extent to which AI should be involved in patient diagnosis and the potential for AI to replace human radiologists. Legal implications, particularly concerning malpractice and liability when AI is used, remain unresolved.
These issues call for clear guidelines and robust ethical frameworks to govern the use of AI in medical imaging. Addressing these challenges requires a coordinated effort among technology developers, healthcare providers, regulatory bodies, and other stakeholders. Advances in technology must be balanced with considerations of practicality, ethics, and equity to ensure that imaging informatics fulfills its promise of enhancing patient care and treatment outcomes.

Technological advances

Software innovations
Recent years have seen significant advances in software technologies relevant to imaging informatics. One notable development is the integration of machine learning algorithms into imaging software, enabling automated analysis and interpretation of medical images. For instance, Rajpurkar et al. (2017) demonstrated the effectiveness of deep learning algorithms for pneumonia detection on chest X-rays, showcasing the potential of machine learning in medical imaging analysis. Such algorithms have shown promising results in tasks such as lesion detection, disease classification, and treatment response assessment. Moreover, the implementation of natural language processing (NLP) techniques has facilitated the extraction of valuable insights from unstructured radiology reports, enhancing the efficiency of data analysis and decision-making.

Hardware developments
Advances in hardware technology have also played a pivotal role in shaping the landscape of imaging informatics. The evolution of imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) has improved image resolution, acquisition speed, and diagnostic accuracy. The miniaturization of imaging devices has enabled point-of-care imaging, allowing real-time assessment of patients in various clinical settings; the development of handheld ultrasound devices, for example, has revolutionized point-of-care imaging by giving clinicians portable and easy-to-use tools for bedside examinations (Smith, 2018). The rise of wearable devices and mobile health applications has further expanded the scope of imaging informatics, facilitating remote imaging and patient monitoring using sensors and cameras.

Methodological advancements
Alongside technological innovations, methodological advancements have expanded the capabilities of imaging informatics. One development is the integration of multimodal imaging techniques, which combine data from multiple imaging modalities to provide complementary information about anatomical and physiological structures. Recent studies have demonstrated the effectiveness of combining MRI, CT, and ultrasound data for improved diagnosis and treatment planning in oncology patients (Gupta et al., 2020). By fusing data from these sources, clinicians can obtain a more comprehensive understanding of a patient's condition, leading to more accurate diagnoses and personalized treatment plans.

External links
The Society for Imaging Informatics in Medicine
American Board of Imaging Informatics
Imaging informatics
[ "Engineering", "Biology" ]
5,393
[ "Bioinformatics", "Biological engineering" ]
14,612,089
https://en.wikipedia.org/wiki/Desertec
DESERTEC is a non-profit foundation that focuses on the production of renewable energy in desert regions. The project aims to create a global renewable energy plan based on harnessing sustainable power from sites where renewable sources of energy are most abundant and transferring it through high-voltage direct current transmission to consumption centers. The foundation also works on concepts involving green hydrogen. Multiple types of renewable energy sources are envisioned, but the plan centers on the natural climate of the deserts.

The Desertec Industrial Initiative evolved in several steps. The Foundation's first idea focused on the transmission of renewable power from the MENA region to Europe, while the next focused on meeting domestic demand. The project stalled twice over transmission difficulties and cost-inefficiency. The initiative was revived in 2020 with a focus on green hydrogen, catering to both domestic demand and exports to foreign markets.

Organizations, milestones, and activities

DESERTEC was developed by the Trans-Mediterranean Renewable Energy Cooperation (TREC), a voluntary organisation founded in 2003 by the Club of Rome and the National Energy Research Center Jordan, made up of scientists and experts from across Europe, the Middle East and North Africa (EU-MENA). From this network the DESERTEC Foundation later emerged as a non-profit organisation and began promoting its solutions around the world. Founding members of the foundation were the German Association of the Club of Rome, members of the TREC network of scientists, and committed private supporters and long-time promoters of the DESERTEC idea. In 2009, the DESERTEC Foundation founded the Munich-based industrial initiative together with partners from the industrial and finance sectors, aiming to accelerate the implementation of the DESERTEC Concept in the focus region EU-MENA.

Scientific studies done by the German Aerospace Center (DLR) between 2004 and 2007 demonstrated that desert sun could meet rising power demand in the MENA region while also helping to power Europe, reduce carbon emissions across the EU-MENA region, and power desalination plants to provide freshwater to the MENA region. Dii published a further study, Desert Power 2050, in June 2012. It found that the MENA region could meet its own needs for power with renewable energy while exporting its excess power, creating an export industry with an annual volume of more than €60 billion; by importing desert power, Europe could save around €30 per megawatt-hour. By taking into account land and water use, DESERTEC intends to offer an integrated and comprehensive solution to food and water shortages.

TREC

The DESERTEC concept originated with Dr Gerhard Knies, a German particle physicist and founder of the Trans-Mediterranean Renewable Energy Cooperation (TREC) network of researchers. In 1986, in the wake of the Chernobyl nuclear accident, he was searching for a potential alternative source of clean energy and arrived at a striking conclusion: in six hours, the world's deserts receive more energy from the sun than humankind consumes in a year. The DESERTEC concept was developed further by TREC, an international network of scientists, experts and politicians in the field of renewable energy founded in 2003 by the Club of Rome and the National Energy Research Center Jordan. One of its most famous members was Prince Hassan bin Talal of Jordan.
In 2009, TREC evolved into the non-profit DESERTEC Foundation.

DESERTEC Foundation

The DESERTEC Foundation was founded on 20 January 2009 with the aim of promoting the implementation of the DESERTEC Concept for clean power from deserts all over the world. It is a non-profit organisation based in Hamburg. The founding members were the German Association of the Club of Rome, members of the TREC network of scientists, and committed private supporters and long-time promoters of the DESERTEC idea. The foundation works to accelerate the implementation of the DESERTEC Concept by:
supporting knowledge transfer and scientific co-operation
fostering exchange and co-operation with the private sector
promoting the establishment of the necessary framework conditions (for example, co-operation with JREF in Asia: in March 2012, a year after the nuclear disaster in Fukushima, the DESERTEC Foundation and the Japan Renewable Energy Foundation (JREF) signed a memorandum of understanding aiming to accelerate the deployment of renewable energy in Asia and to provide secure and sustainable alternatives to fossil and nuclear power by implementing the DESERTEC Concept in Greater East Asia through the Asia Super Grid Initiative)
evaluating and initiating projects that could serve as models
informing the public about DESERTEC

Dii GmbH

To help accelerate the implementation of the DESERTEC idea in EU-MENA, the non-profit DESERTEC Foundation and a group of 12 European companies led by Munich Re founded an industrial initiative, Dii GmbH, in Munich on 30 October 2009. The other companies included Deutsche Bank, E.ON, RWE and Abengoa. Like the DESERTEC Foundation, Dii GmbH did not intend to build power plants itself. Instead it focused on four core objectives in EU-MENA:
developing long-term perspectives for the period up to 2050 to provide investment and financing guidance
carrying out specific in-depth studies
developing a framework for feasible investments in renewable energy and interconnected grids in EU-MENA
originating reference projects to prove feasibility

Dii GmbH aimed to create a positive investment climate for renewable energy and an interconnected power grid in North Africa and the Middle East by encouraging the necessary technological, economic, political and market frameworks. This included the development of a long-term implementation perspective, Desert Power 2050, with guidance on investment and funding. Dii GmbH initiated selected reference projects to demonstrate overall feasibility and reduce overall system costs. On 24 November 2011, a memorandum of understanding (MoU) was signed between the Medgrid consortium and Dii to study, design and promote an interconnected electrical grid linking the DESERTEC and Medgrid projects. Medgrid together with DESERTEC would serve as the backbone of the European super grid, and the benefits of investing in HVDC technology are being assessed to reach the final goal, the supersmart grid. The activities of Dii and Medgrid were covered by the Mediterranean Solar Plan (MSP), a political initiative within the framework of the Union for the Mediterranean (UfM).

Consortium

The company was formed by the DESERTEC Foundation and a consortium of international companies. As of March 2014, Dii consisted of 20 shareholders (listed below) and 17 associate partners.
ABB
Abengoa Solar
ACWA Power
Cevital
Deutsche Bank
Enel Green Power
E.ON
First Solar
Flagsol
HSH Nordbank
Munich Re
Nareva
Red Eléctrica de España
RWE
Avancis
Schott Solar
Terna
Terna Energy SA
UniCredit
State Grid Corporation of China

The Managing Director of Dii GmbH has been Paul van Son, a senior international energy manager. At the end of 2014 most shareholders left Dii, which has been described both as a "failure" and as a reorientation of project objectives. RWE, State Grid Corporation of China, ACWA Power and a number of partner companies stayed on board to drive the new mission of Dii: "To facilitate the rapid deployment of utility-scale renewable energy projects in desert areas, and to integrate them in the interconnected power systems".

Concept details

Description

DESERTEC is a global renewable energy solution based on harnessing sustainable power from the sites where renewable sources of energy are at their most abundant. These sites can be exploited thanks to low-loss high-voltage direct current transmission. All kinds of renewables are envisaged in the DESERTEC Concept, but the sun-rich deserts of the world play a special role. The original and first region for the assessment and application of this concept is EU-MENA (the European Union, Middle East and Northern Africa). The DESERTEC organisations promote the generation of electricity in North Africa, the Middle East and Europe from renewable sources such as solar power plants and wind parks, and the development of a Euro-Mediterranean electricity network made up primarily of high-voltage direct current (HVDC) transmission cables. Despite its name, DESERTEC's proposal would see most of the power plants located not in the Sahara Desert itself but in the surrounding areas: the more accessible northern and southern steppes and woodlands, as well as the relatively moist Atlantic coastal desert.

Under the DESERTEC proposal, concentrating solar power systems, photovoltaic systems and wind parks would be spread over the wide desert regions of North Africa, such as the Sahara Desert and all its subdivisions. The generated electricity would be transmitted to European and African countries by a super grid of high-voltage direct current cables. It would provide a considerable part of the electricity demand of the MENA countries and furthermore provide continental Europe with 15% of its electricity needs. Exported desert power would complement Europe's transition to renewables, which would be based primarily on harnessing domestic sources of energy, increasing its energy independence. According to a scenario by the German Aerospace Center (DLR), by 2050 investments in solar plants and transmission lines would total €400 billion. An exact proposal for how to realise this scenario, including technical and financial requirements, was to be designed by 2012/2013 (see Desert Power 2050).

In March 2012, the DESERTEC Foundation started working in a further focus region. A year after the nuclear disaster in Fukushima, the DESERTEC Foundation and the Japan Renewable Energy Foundation (JREF) signed an MoU to exchange knowledge and know-how, coordinate their work to develop suitable framework conditions for the deployment of renewables, and establish transnational cooperation in Greater East Asia. The aim is to accelerate the deployment of renewable energy in Asia to provide secure and sustainable alternatives to fossil and nuclear power.
As part of its mission, JREF promotes the Asia Super Grid Initiative to facilitate an electricity system based fully on renewable energy. The DESERTEC Foundation sees such a grid as an important step towards the implementation of DESERTEC in Greater East Asia and has already conducted a feasibility study on potential grid corridors to make best use of the region's desert sun.

Studies about DESERTEC

DLR studies

The DESERTEC Concept was developed by an international network of politicians, academics and economists called TREC. The governmental research institutes for renewable energy of Morocco (CDER), Algeria (NEAL), Libya (CSES), Egypt (NREA), Jordan (NERC) and Yemen (the Universities of Sana'a and Aden), as well as the German Aerospace Center (DLR), made significant contributions to the development of the DESERTEC Concept. The basic studies relating to DESERTEC were led by DLR scientist Dr. Franz Trieb of the Institute for Technical Thermodynamics at the DLR, and were funded by the German Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety (BMU). The studies were conducted between 2004 and 2007.

The studies concluded that the extremely high solar radiation in the deserts of North Africa and the Middle East outweighs the 10–15% transmission losses between the desert regions and Europe. This means that solar thermal power plants in the desert regions are more economical than the same kinds of plants in southern Europe. The German Aerospace Center calculated that if solar thermal power plants were constructed in large numbers in the coming years, the estimated cost of electricity would come down from 0.09–0.22 euro/kWh to about 0.04–0.05 euro/kWh.

The Sahara Desert was chosen as an ideal location for solar farms because it is exposed to bright sunshine nearly all the time, roughly between 80% and 97% of daylight hours in the best cases; it is the sunniest year-round area on the planet. In the world's largest hot desert there is an extremely vast area, covering almost the whole desert, that receives more than 3,600 hours of sunshine per year, and a very large area in excess of 4,000 hours of sunshine annually. The highest solar radiation received on the planet is in the Sahara Desert, under the Tropic of Cancer, the result of a general, strong lack of cloud cover year-round and a geographical position in the tropics. The annual average insolation, which represents the total amount of solar radiation energy received over a given area and period, is about 2,500 kWh/(m² year) across the region, and can soar to almost 3,000 kWh/(m² year) in the best cases. The Sahara's weather, above all its insolation, is thus exceptionally favourable for solar power. If the whole desert were covered in solar panels, annual electricity production in this sun-drenched area could reach a theoretical maximum of about 1,300,000 TWh (a rough consistency check of this figure follows below). The desert is also extremely vast, covering some 9,000,000 km² (3,474,920 sq mi), almost as large as China or the United States, and it is sparsely populated, making it possible to set up large solar farms without a negative impact on the region's inhabitants. Lastly, sand deserts can provide silicon, a raw material essential to the production of solar panels.
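As promised above, here is a back-of-the-envelope check of the 1,300,000 TWh figure, using the article's own area and insolation numbers; the module efficiency and ground-coverage ratio are assumptions chosen for illustration, not source values.

```python
# Order-of-magnitude check of the ~1,300,000 TWh/year figure quoted above.
# Area and insolation come from the article; efficiency (15%) and ground
# coverage ratio (40%) are illustrative assumptions.
area_m2 = 9.0e6 * 1e6          # ~9,000,000 km^2 expressed in m^2
insolation_kwh_m2_yr = 2500    # annual average insolation from the article
module_efficiency = 0.15       # assumed PV conversion efficiency
ground_coverage = 0.40         # assumed fraction of land actually covered by panels

solar_energy_twh = area_m2 * insolation_kwh_m2_yr / 1e9   # kWh -> TWh
electricity_twh = solar_energy_twh * module_efficiency * ground_coverage

print(f"Incident solar energy: {solar_energy_twh:,.0f} TWh/year")   # ~22,500,000
print(f"Potential electricity: {electricity_twh:,.0f} TWh/year")    # ~1,350,000
```

With these assumed values the estimate lands within a few percent of the quoted maximum, suggesting the figure reflects realistic panel efficiency and spacing rather than perfect conversion of all incident sunlight.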
The great African desert is relatively cloud-free all year long, but the harsh desert climate also has some negative features, such as extreme heat and the dust- or sand-laden winds that frequently blow over the desert and can even produce severe dust storms or sandstorms. Both phenomena reduce solar electricity output and the efficiency of the solar panels.

Desert Power 2050

Dii announced that it would introduce a roll-out plan in late 2012 containing concrete recommendations on how to enable investments in renewable energy and interconnected power grids. Dii claims to work with all key stakeholders from the international scientific and business communities, as well as policy-makers and civil society, to enable two or three concrete reference projects that demonstrate the feasibility of the long-term vision. Dii developed a strategic framework for a fully integrated and decarbonized power system based on renewable energies for the entire North Africa, Middle East and Europe (EUMENA) region in 2050. To that end, Dii investigated, from the standpoint of technology and geography, the optimal mix of renewable energies to supply the EUMENA region with sustainable energy. In July 2012 Dii presented the first part of its study, "Desert Power 2050 – Perspectives on a Sustainable Power System for EUMENA".

Key findings

Desert Power 2050 demonstrates that the abundance of sun and wind in the EUMENA region would enable the creation of a joint power network comprising more than 90 percent renewables. According to the study, such a joint power network involving North Africa, the Middle East and Europe (EUMENA) offers clear benefits to all involved. The nations of the Middle East and North Africa (MENA) could meet their expanding needs for power with renewable energy while developing an export industry from their excess power that could reach an annual volume worth more than 60 billion euros. By importing up to 20 percent of its power from the deserts, Europe could save up to 30 euros for each megawatt-hour of desert power. The north and the south would become the powerhouses of this joint network, supported by wind and hydropower in Scandinavia and by wind and solar energy in the MENA region. Supply and demand would complement one another, both regionally and seasonally, according to the findings of Desert Power 2050. With its constant supply of wind and solar energy throughout the year, the MENA region could cover Europe's energy needs without the latter having to build costly excess capacity. A further benefit of the power network is the enhanced security of supply for all nations concerned: a renewables-based network would lead to mutual reliance among the countries involved, complemented by inexpensive imports from the south and the north.

Methodology

Desert Power 2050 presents the full perspective of the EUMENA region, which includes, for instance, the growing consumption of power in the MENA states. The power requirements of the MENA states are likely to more than quadruple by 2050, totalling more than 3,000 terawatt-hours. Unlike in Europe, the population will also grow considerably by the middle of the century, heightening the demand for new jobs. Analysing the design of a power system built to include more than 90% renewables 40 years into the future is necessarily subject to major uncertainties on a range of assumptions.
To address these uncertainties, Dii analysed so-called sensitivities, or perspectives, showing how the results react to changed parameters. Dii analysed a total of 18 perspectives on the EUMENA power supply in 2050, covering a wide range of major factors affecting the attractiveness of power system integration. The main message of the study: grid integration across the Mediterranean is valuable under all foreseeable circumstances.

Second phase

Desert energy could be a stimulus for growth and make an important contribution to coping with the social and economic challenges in North Africa and the Middle East. Dii announced that a second phase of Desert Power 2050, Getting Started, would examine this topic in greater depth over the following months, in discussions including political, scientific and industrial stakeholders. The objective is to formulate recommendations for the regulatory steps required in the years to come.

Benefits

More energy falls on the world's deserts in six hours than the world consumes in a year, and the Sahara is virtually uninhabited and close to Europe. Supporters say that the project will keep Europe "at the forefront of the fight against climate change and help North African and European economies to grow within greenhouse gas emission limits". DESERTEC officials say the project could one day deliver 15 percent of Europe's electricity and a considerable part of MENA's electricity demand. According to the DESERTEC Foundation, the project has strong job-creation potential and could improve stability in the region. According to a report by the Wuppertal Institute for Climate, Environment and Energy and the Club of Rome, the project could create 240,000 German jobs and generate €2 trillion worth of electricity by 2050.

Technology

Concentrated solar power

Concentrated solar power (also called concentrating solar power and CSP) systems use mirrors or lenses to concentrate a large area of sunlight, or solar thermal energy, onto a small area. Electrical power is produced when the concentrated light is converted to heat, which drives a heat engine (usually a steam turbine) connected to an electrical power generator. Molten salt can be employed as a thermal energy store to retain the heat collected by a solar tower or solar trough, so that electricity can be generated in bad weather or at night. Since solar fields feed their heat into a conventional generation unit with a steam turbine, they can readily be combined with fossil fuel plants in hybrid power stations. This hybridisation secures the energy supply in unfavourable weather and at night without the need to ramp up costly compensating plants. A technical challenge is the cooling that every thermal power system requires; Dii is therefore reliant on an adequate water supply, coastal facilities, or improved cooling technology.

Photovoltaics

Dii also considers photovoltaics (PV) a technology suitable for desert power plants. Photovoltaics generates electrical power by converting solar radiation into direct-current electricity using semiconductors. Photovoltaic power generation employs solar panels composed of a number of solar cells containing a photovoltaic material. Materials presently used for photovoltaics include monocrystalline silicon, polycrystalline silicon, amorphous silicon, cadmium telluride, and copper indium gallium selenide/sulfide.
Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics has declined steadily since the first solar cells were manufactured. In 2010, First Solar, a producer of thin-film solar panels, joined Dii as an associated partner. The US-based company already has experience with huge PV installations, having constructed the 550-megawatt Desert Sunlight Solar Farm and Topaz Solar Farm in California, at the time the two biggest PV installations in the world.

Wind energy

Because parts of the desert regions in the Middle East and North Africa (MENA) also have high wind potential, Dii is examining which geographic regions are suitable for the installation of wind farms. Wind turbines produce electricity when the wind turns their blades, which spin a shaft connected to a generator. The Sahara Desert is one of the windiest areas on the planet, especially on the western coast, where the Atlantic coastal desert lies along Western Sahara and Mauritania. The annual average wind speed at the ground greatly exceeds 5 m/s in most of the desert and even approaches 8 m/s or 9 m/s along the western ocean coast; wind speed also increases with height. The regularity and constancy of winds in arid regions are major assets for wind energy, too: the winds blow nearly constantly over the desert, and there are generally no windless days throughout the year. The desert of North Africa is therefore also an ideal location for large-scale wind parks and wind turbines with very good productivity.

High-voltage direct current (HVDC)

To export renewable energy produced in the MENA desert region, a high-voltage direct current (HVDC) electric power transmission system is needed. HVDC technology is a proven and economical method of power transmission over very long distances, as well as a trusted method of connecting asynchronous grids or grids of different frequencies, and it allows energy to be transported in both directions. For long-distance transmission, HVDC suffers lower electrical losses than alternating current (AC) transmission. Because of the higher solar radiation in MENA, energy production there remains advantageous over production in southern Europe even with transmission losses included. Very long distance projects have already been realised with technological cooperation from ABB and Siemens, both shareholders of Dii; notably the 800 kV HVDC Xiangjiaba–Shanghai transmission system, commissioned by the State Grid Corporation of China (SGCC) in June 2010. This HVDC link is the most powerful and longest transmission of its kind implemented anywhere in the world and, at the time of commissioning, transmitted 6,400 MW of power over a distance of nearly 2,000 kilometres, longer than would be needed to link MENA and Europe. Siemens Energy equipped the sending converter station Fulong for this link with ten DC converter transformers, including five rated at 800 kV. A second HVDC project, also for SGCC and with cooperation from ABB, is a 3,000 MW link over 920 kilometres completed in 2010 from Hulunbeir, in Inner Mongolia, to Shenyang in the north-eastern province of Liaoning. Another project, scheduled for commissioning in 2014, is the construction of an ±800 kV North-East UHVDC link from the north-eastern and eastern regions of India to the city of Agra, across a distance of 1,728 kilometres. Another project of this type is the Rio Madeira HVDC link in Brazil.
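As a rough illustration of why HVDC suits such long links, the sketch below applies the loss figure cited later in this article (about 3% per 1,000 km for HVDC transmission) to hypothetical desert-to-Europe distances; the plant output, the distances, the compounding treatment of losses, and the omission of converter-station losses are all illustrative assumptions.

```python
# Illustrative HVDC loss arithmetic (assumptions, not source data): losses are
# compounded per 1,000 km using the ~3%/1,000 km figure cited later in the
# article; converter-station losses are ignored.
def delivered_power(p_sent_mw: float, distance_km: float, loss_per_1000km: float = 0.03) -> float:
    """Power remaining after HVDC line losses over the given distance."""
    return p_sent_mw * (1.0 - loss_per_1000km) ** (distance_km / 1000.0)

sent = 1000.0  # hypothetical 1 GW desert plant output
for d in (1000, 2000, 3000):
    print(f"{d} km: {delivered_power(sent, d):.0f} MW delivered")
# 1000 km: 970 MW, 2000 km: ~941 MW, 3000 km: ~913 MW -- consistent with the
# article's "3% per 1,000 km (10% per 3,000 km)" rule of thumb.
```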
Projects

The Sahara Desert covers huge parts of Algeria, Chad, Egypt, Libya, Mali, Mauritania, Morocco, Niger, Western Sahara, Sudan and Tunisia. It is one of three distinct physiographic provinces of the African massive physiographic division. The first solar and wind power projects in North Africa have already begun. In 2011, Algeria initiated a unique hybrid power generation project, the Hassi R'Mel integrated solar combined cycle power station, which combines a 25 MW concentrating solar power array with a 130 MW combined-cycle gas turbine plant. Other countries, such as Morocco, have set up ambitious plans for the implementation of renewable energy: the Ouarzazate solar power station in Morocco, for example, with a capacity of 500 MW, will be one of the largest concentrated solar power plants in the world.

In 2011, the DESERTEC Foundation started to evaluate projects that could serve as models for the implementation of DESERTEC according to its sustainability criteria. The first of these is the TuNur solar power plant in Tunisia, planned to have 2 GW of capacity and to create up to 20,000 direct and indirect local jobs; its plants include dry-cooling systems that reduce water usage by up to 90%. Construction is planned to begin in 2014, with power exports to Italy by 2016.

Talks with the Moroccan government were successful, and Dii confirmed that its first reference project would be in Morocco. As a partner in a nascent partnership between Europe and MENA, Morocco is especially well suited, since a grid connection from Morocco via Gibraltar to Spain already exists and the Moroccan government has enacted a programme to support renewable energy. In June 2011, Dii signed a Memorandum of Understanding with the Moroccan Agency for Solar Energy (MASEN). MASEN will act as project developer and will be responsible for all important project steps in Morocco, while Dii will promote the project and its financing with the European Union in Brussels as well as with national governments. This reference project, with a total capacity of 500 MW, will be a combination of concentrated solar power plants (400 MW) and photovoltaics (100 MW). The first available power from the joint Dii/MASEN project could be fed into the Moroccan and Spanish grids between 2014 and 2016, depending on the selected technology and market conditions. Based on current estimates, total costs are €2 billion (a short arithmetic check of these project figures follows at the end of this section). In April 2010, Dii emphasised that the power plant would not be installed in the region of Western Sahara, which is administered by Morocco. An official spokesperson of Dii made the following confirmation: "Our reference projects will not be located in the region. When looking for project sites, the DII will also take political, ecological or cultural issues into consideration. This procedure is in line with the funding policies of international development banks."

In Tunisia, STEG Énergies Renouvelables, a subsidiary of the Tunisian state utility company STEG, and Dii are currently working on a pre-feasibility study. The study focuses on substantial solar and wind energy projects in Tunisia. Research will address the technical and regulatory conditions for the supply of energy to local networks and for the export of power to neighbouring countries as well as Europe. The project's financing will also be analysed.
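The arithmetic check flagged above, on two figures quoted in this section; the calculations are illustrative only, derived solely from the capacities and cost given in the article.

```python
# Quick arithmetic on two project figures quoted above.

# Hassi R'Mel hybrid plant: solar fraction of total capacity.
csp_mw, ccgt_mw = 25.0, 130.0
solar_share = csp_mw / (csp_mw + ccgt_mw)
print(f"Hassi R'Mel solar share: {solar_share:.1%}")  # ~16.1%

# Dii/MASEN reference project: implied specific cost.
total_cost_eur = 2.0e9   # €2 billion estimate from the article
capacity_kw = 500_000    # 500 MW (400 MW CSP + 100 MW PV)
print(f"Implied cost: €{total_cost_eur / capacity_kw:,.0f} per kW installed")  # €4,000/kW
```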
Algeria, which offers excellent conditions for renewable energy, is considered a potential location for a further reference project. In December 2011, the Algerian energy supplier Sonelgaz and Dii signed a Memorandum of Understanding on their future collaboration in the presence of EU Energy Commissioner Günther Oettinger and the Algerian Minister for Energy and Mining Youcef Yousfi. The focus of this cooperation will be the strengthening and exchange of technical expertise, joint efforts in market development, and the progress of renewable energy in Algeria as well as in foreign countries. Since the Euro-Mediterranean projects Medgrid and DESERTEC both attempt to generate solar energy from deserts and complement each other, a MoU was signed on 24 November 2011 between Medgrid and Dii to study, design and promote an interconnected electrical grid linking both projects. The plan is to build five interconnections at a cost of around 5 billion euros ($6.7 billion), including one between Tunisia and Italy. The activities of Dii and Medgrid are covered by the Mediterranean Solar Plan (MSP), a political initiative within the framework of the Union for the Mediterranean (UfM). In March 2012, Dii, Medgrid, Friends of the Supergrid and the Renewables Grid Initiative signed a joint declaration to support the effective and complete integration, in a single electricity market, of renewable energy from both large-scale and decentralised sources, which shall not be played off against each other in Europe and in its neighbouring regions. Obstacles Some experts – such as Professor Tony Day, director of the Centre for Efficient and Renewable Energy in Building at London South Bank University, Henry Wilkinson of Janusian Security Risk Management, and Wolfram Lacher of the Control Risks consultancy – are concerned about political obstacles to the project. Generating so much of the electricity consumed in Europe and in Africa would create a political dependency on North African countries, which suffered from corruption before the Arab Spring and from a lack of cross-border coordination. Moreover, DESERTEC would require extensive economic and political cooperation between Algeria and Morocco, which is at risk as the border between the two countries is closed due to a disagreement over the Western Sahara. Cooperation between the states of Europe and the states of the Middle East and North Africa is also certain to be challenging. Given the large-scale cooperation necessary between the EU and the North African nations, the project may be delayed by bureaucratic red tape and other factors such as expropriation of assets. There are also concerns that the water required by the solar plants – to clean dust off panels and as turbine coolant – may be detrimental to local populations in terms of the demand it would place on the local water supply. An EU-supported innovation project, however, resulted in the development of a silicone-based film with a nano-dendrite surface structure. The film is fused on top of the solar panels, and the nano-dendrite structure prevents sand, water, salt, bacteria, moulds and the like from attaching to the photovoltaic panels. Against this, studies point to the generation of fresh water by the solar thermal plants. Furthermore, no significant amount of water is needed for cleaning and cooling, since alternative technologies (dry cleaning, dry cooling) can be used. 
However, dry cooling is more expensive, technologically challenging and less efficient than the water cooling currently planned. Plans for water desalination for cooling purposes are not part of the DESERTEC business plan or cost estimates as proposed. The late Hermann Scheer (Eurosolar) pointed out that the doubled solar radiation in the Sahara cannot be the only criterion, especially given the continuous trade winds there. Transmitting energy over long distances has been criticised, with questions raised over the cost of cabling compared with energy generation, and over electricity losses. However, studies and currently operating technology show that electricity losses using high-voltage direct current transmission amount to only 3% per 1,000 km (10% per 3,000 km); a short numerical sketch of these figures follows at the end of this article. Investment may be required within Europe in a "supergrid". In response, one proposal is to cascade power between neighbouring states, so that each state draws on the power generation of its neighbours rather than on distant desert sites. One key question will be the cultural aspect, as Middle Eastern and African nations may need assurance that they will own the project rather than having it imposed from Europe. See also Medgrid European super grid Intermittent energy source List of HVDC projects North Sea Offshore Grid Renewable energy in Morocco Relative cost of electricity generated by different sources Solar energy in Israel Solel SuperSmart Grid Wind power in Morocco SunCable Australia - Singapore References External links DESERTEC Foundation Dii GmbH Proposed electric power transmission systems Renewable energy in the European Union Energy in Africa 2009 establishments in Germany Energy companies of Germany Macro-engineering Foundations based in Germany
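The transmission-loss figures quoted above lend themselves to a quick numerical check. The following Python sketch is only a back-of-the-envelope illustration, not an engineering model: the ~3% per 1,000 km line loss comes from the article, while the per-station converter loss is an illustrative assumption.

```python
def delivered_fraction(distance_km, line_loss_per_1000km=0.03, converter_loss=0.007):
    """Rough fraction of sent power delivered over an HVDC link.

    Uses the ~3% per 1,000 km line loss quoted above, compounded with distance,
    plus an assumed 0.7% loss at each of the two converter stations.
    """
    line_efficiency = (1.0 - line_loss_per_1000km) ** (distance_km / 1000.0)
    converter_efficiency = (1.0 - converter_loss) ** 2  # one station at each end
    return line_efficiency * converter_efficiency

# A MENA-to-Europe scale link, comparable to the ~2,000 km Xiangjiaba-Shanghai line:
for d in (1000, 2000, 3000):
    print(f"{d} km: {delivered_fraction(d):.1%} of sent power delivered")
```

For 3,000 km this yields roughly 90% delivered power, consistent with the ~10% total loss per 3,000 km cited above.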
Desertec
[ "Engineering" ]
6,439
[ "Macro-engineering" ]
14,612,385
https://en.wikipedia.org/wiki/Distribution%20law
Distribution law or Nernst's distribution law gives a generalisation which governs the distribution of a solute between two immiscible solvents. The law was first given by Nernst, who studied the distribution of several solutes between different appropriate pairs of solvents. If C1 denotes the concentration of solute X in solvent A and C2 denotes the concentration of solute X in solvent B, Nernst's distribution law can be expressed as C1/C2 = Kd, where Kd is called the distribution coefficient or the partition coefficient. This law is only valid if the solute is in the same molecular form in both solvents. Sometimes the solute dissociates or associates in a solvent. In such cases the law is modified as D (the distribution factor) = (concentration of solute in all forms in solvent 1)/(concentration of solute in all forms in solvent 2). Further reading Martin's Physical Pharmacy & Pharmaceutical Sciences, fifth edition, Patrick J. Sinko, Lippincott Williams & Wilkins. References Equilibrium chemistry Walther Nernst
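As a minimal numerical sketch of the law (all concentrations below are hypothetical, chosen only for illustration), the coefficient and the modified distribution factor can be computed directly:

```python
def distribution_coefficient(c1, c2):
    """Nernst distribution law: Kd = C1 / C2.

    Valid only when the solute has the same molecular form in both solvents.
    """
    return c1 / c2

def distribution_factor(total1, total2):
    """Modified form for solutes that associate or dissociate:
    D = (solute in all forms in solvent 1) / (solute in all forms in solvent 2).
    """
    return total1 / total2

# Hypothetical example: a solute partitioned between solvent A and solvent B
c1, c2 = 8.5e-3, 1.0e-4  # mol/L in A and B respectively; illustrative values only
print(f"Kd = {distribution_coefficient(c1, c2):.0f}")  # Kd = 85
```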
Distribution law
[ "Chemistry" ]
310
[ "Equilibrium chemistry" ]
7,273,383
https://en.wikipedia.org/wiki/Gamma%20ray%20logging
Gamma ray logging is a method of measuring naturally occurring gamma radiation to characterize the rock or sediment in a borehole or drill hole. It is a wireline logging method used in mining, mineral exploration, water-well drilling, for formation evaluation in oil and gas well drilling and for other related purposes. Different types of rock emit different amounts and different spectra of natural gamma radiation. In particular, shales usually emit more gamma rays than other sedimentary rocks, such as sandstone, gypsum, salt, coal, dolomite, or limestone because radioactive potassium is a common component in their clay content, and because the cation-exchange capacity of clay causes them to absorb uranium and thorium. This difference in radioactivity between shales and sandstones/carbonate rocks allows the gamma ray tool to distinguish between shales and non-shales. But it cannot distinguish between carbonates and sandstone as they both have similar deflections on the gamma ray log. Thus gamma ray logs cannot be said to make good lithological logs by themselves, but in practice, gamma ray logs are compared side-by-side with stratigraphic logs. The gamma ray log, like other types of well logging, is done by lowering an instrument down the drill hole and recording gamma radiation variation with depth. In the United States, the device most commonly records measurements at 1/2-foot intervals. Gamma radiation is usually recorded in API units, a measurement originated by the petroleum industry. Gamma rays attenuate according to the diameter of the borehole mainly because of the properties of the fluid filling the borehole, but because gamma logs are generally used in a qualitative way, amplitude corrections are usually not necessary. Three elements and their decay chains are responsible for the radiation emitted by rock: potassium, thorium and uranium. Shales often contain potassium as part of their clay content and tend to absorb uranium and thorium as well. A common gamma-ray log records the total radiation and cannot distinguish between the radioactive elements, while a spectral gamma ray log (see below) can. For standard gamma-ray logs, the measured value of gamma-ray radiation is calculated from concentration of uranium in ppm, thorium in ppm, and potassium in weight percent: e.g., GR API = 8 × uranium concentration in ppm + 4 × thorium concentration in ppm + 16 × potassium concentration in weight percent. Due to the weighted nature of uranium concentration in the GR API calculation, anomalous concentrations of uranium can cause clean sand reservoirs to appear shaley. For this reason, spectral gamma ray is used to provide an individual reading for each element so that anomalous concentrations can be found and properly interpreted. An advantage of the gamma log over some other types of well logs is that it works through the steel and cement walls of cased boreholes. Although concrete and steel absorb some of the gamma radiation, enough travels through the steel and cement to allow for qualitative determinations. In some places, non-shales exhibit elevated levels of gamma radiation. For instance, sandstones can contain uranium minerals, potassium feldspar, clay filling, or lithic fragments that cause the rock to have higher than usual gamma readings. Coal and dolomite may contain absorbed uranium. Evaporite deposits may contain potassium minerals such as sylvite and carnallite. When this is the case, spectral gamma ray logging should be done to identify the source of these anomalies. 
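The linear GR relation above is straightforward to evaluate. A minimal Python sketch, using hypothetical concentrations, also illustrates how an anomalous uranium concentration can make a clean sand read "shaley":

```python
def gr_api(uranium_ppm, thorium_ppm, potassium_wt_pct):
    """Total gamma ray response in API units from the standard linear relation:
    GR = 8*U(ppm) + 4*Th(ppm) + 16*K(wt%).
    """
    return 8.0 * uranium_ppm + 4.0 * thorium_ppm + 16.0 * potassium_wt_pct

# A clean sand with an anomalous uranium concentration (illustrative values):
print(gr_api(uranium_ppm=6.0, thorium_ppm=2.0, potassium_wt_pct=0.5))  # 64.0
# The same sand without the uranium anomaly reads much lower:
print(gr_api(uranium_ppm=0.5, thorium_ppm=2.0, potassium_wt_pct=0.5))  # 20.0
```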
Spectral logging Spectral logging is the technique of measuring the spectrum, or number and energy, of gamma rays emitted via the natural radioactivity of the rock formation. There are three main sources of natural radioactivity on Earth: potassium (40K), thorium (principally 232Th and 230Th), and uranium (principally 238U and 235U). These radioactive isotopes each emit gamma rays that have a characteristic energy level, measured in MeV. The quantity and energy of these gamma rays can be measured by a scintillometer. A log of the spectroscopic response to natural gamma ray radiation is usually presented as a total gamma ray log that plots the weight fraction of potassium (%), thorium (ppm) and uranium (ppm). The primary standards for the weight fractions are geological formations with known quantities of the three isotopes. Natural gamma ray spectroscopy logs became routinely used in the early 1970s, although they had been studied since the 1950s. The characteristic gamma ray lines associated with each radioactive component are: Potassium: gamma ray energy 1.46 MeV Thorium series: gamma ray energy 2.61 MeV Uranium–radium series: gamma ray energy 1.76 MeV Another example of the use of spectral gamma ray logs is to identify specific clay types, such as kaolinite or illite. This may be useful for interpreting the environment of deposition, as kaolinite can form from feldspars in tropical soils by leaching of potassium, and low potassium readings may thus indicate the presence of one or more paleosols. The identification of specific clay minerals is also useful for calculating the effective porosity of reservoir rock. Use in mineral exploration Gamma ray logs are also used in mineral exploration, especially exploration for phosphates, uranium, and potassium salts. References Well logging Petroleum technology
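As a sketch of how the characteristic lines above could be used to attribute a measured spectral peak to its source isotope, the following Python helper bins a peak energy to the nearest line; the ±0.1 MeV matching tolerance is an illustrative assumption, not a tool specification:

```python
CHARACTERISTIC_LINES_MEV = {
    "potassium (40K)": 1.46,
    "uranium-radium series": 1.76,
    "thorium series": 2.61,
}

def identify_source(peak_energy_mev, tolerance_mev=0.1):
    """Return the component whose characteristic gamma line lies closest to
    the measured peak, or None if no line falls within the assumed tolerance."""
    name, line = min(CHARACTERISTIC_LINES_MEV.items(),
                     key=lambda item: abs(item[1] - peak_energy_mev))
    return name if abs(line - peak_energy_mev) <= tolerance_mev else None

print(identify_source(1.47))  # potassium (40K)
print(identify_source(2.58))  # thorium series
```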
Gamma ray logging
[ "Chemistry", "Engineering" ]
1,078
[ "Petroleum engineering", "Petroleum technology", "Well logging" ]
7,273,549
https://en.wikipedia.org/wiki/Spontaneous%20potential%20logging
Spontaneous potential log, commonly called the self potential log or SP log, is a passive measurement taken by oil industry well loggers to characterise rock formation properties. The log works by measuring small electric potentials (in millivolts) between a movable electrode at depth within the borehole and a grounded electrode at the surface. Conductive borehole fluids are necessary to create an SP response, so the SP log cannot be used in non-conductive drilling muds (e.g. oil-based mud) or air-filled holes. The change in voltage through the well bore is caused by a buildup of charge on the well bore walls. Clays and shales (which are composed predominantly of clays) will generate one charge, while permeable formations such as sandstone will generate the opposite one. Spontaneous potentials occur when two aqueous solutions with different ionic concentrations are placed in contact through a porous, semi-permeable membrane; in nature, ions tend to migrate from high to low ionic concentrations. In the case of SP logging, the two aqueous solutions are the well bore fluid (drilling mud) and the formation water (connate water). The potential opposite shales is called the baseline, and typically shifts only slowly over the depth of the borehole. The relative salinity of the mud and the formation water determines which way the SP curve deflects opposite a permeable formation. Generally, if the ionic concentration of the well bore fluid is less than that of the formation fluid, the SP reading will be more negative (usually plotted as a deflection to the left). If the formation fluid has an ionic concentration less than that of the well bore fluid, the voltage deflection will be positive (usually plotted as an excursion to the right). The amplitude of the SP deflection varies from formation to formation and does not give a definitive measure of the permeability or the porosity of the formation being logged. The presence of hydrocarbons (e.g. oil, natural gas, condensate) will reduce the response on an SP log, because the contact between the interstitial water and the well bore fluid is reduced. This phenomenon is called hydrocarbon suppression and can be used to assess rocks for commercial potential. The SP curve is usually 'flat' opposite shale formations, because their low-permeability, low-porosity (tight) properties allow no ion exchange, thus creating a baseline. Tight rocks other than shale (e.g. tight sandstones, tight carbonates) will also produce poor or no response on the SP curve because of the lack of ion exchange. The SP tool is one of the simplest tools and is generally run as standard when logging a hole, along with the gamma ray tool. SP data can be used to find: Depths of permeable formations The boundaries of these formations Correlation of formations when compared with data from other analogue wells Values for the formation-water resistivity The SP curve can be influenced by various factors, both in the formation and introduced into the wellbore by the drilling process. These factors can cause the SP curve to be muted or even inverted, depending on the situation: Formation bed thickness Resistivities in the formation bed and the adjacent formations Resistivity and make-up of the drilling mud Wellbore diameter The depth of invasion by the drilling mud into the formation Mud invasion into the permeable formation can cause the deflections in the SP curve to be rounded off and can reduce the amplitude of thin beds. 
Like mud filtrate invasion, a smaller wellbore will cause the deflections on the SP curve to be rounded off and will decrease the amplitude opposite thin beds, while a larger-diameter wellbore has the opposite effect. If the salinity of the mud filtrate is greater than that of the formation water, the SP currents will flow in the opposite direction; in that case the SP deflection will be positive, towards the right. Positive deflections are observed for fresh-water-bearing formations. References Petroleum production Well logging
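The qualitative deflection rules described in this article can be condensed into a small decision helper. This is only a sketch: the function name and the use of salinity as a proxy for ionic concentration are simplifying assumptions, and real SP interpretation must also account for the bed thickness, invasion and borehole effects listed above.

```python
def sp_deflection(mud_salinity, formation_water_salinity, permeable=True):
    """Predict the qualitative SP behaviour opposite a formation.

    Returns 'baseline' for tight (impermeable) formations such as shale,
    'negative' (deflection to the left) when the borehole fluid is fresher
    than the formation water, and 'positive' (to the right) otherwise.
    """
    if not permeable:
        return "baseline"  # no ion exchange, so the curve stays at the shale baseline
    if mud_salinity < formation_water_salinity:
        return "negative"  # typical case: saline formation water, fresher mud
    return "positive"      # formation water fresher than the mud filtrate

print(sp_deflection(5_000, 80_000))         # negative
print(sp_deflection(30_000, 2_000))         # positive
print(sp_deflection(5_000, 80_000, False))  # baseline
```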
Spontaneous potential logging
[ "Engineering" ]
821
[ "Petroleum engineering", "Well logging" ]
7,280,692
https://en.wikipedia.org/wiki/Beta-M
The Beta-M is a radioisotope thermoelectric generator (RTG) that was used in Soviet-era lighthouses and beacons. Design The Beta-M contains a core made up of strontium-90, which has a half-life of 28.79 years. The service life of these generators is initially 10 years and can be extended by another 5 to 10 years. The core is also known as radioisotope heat source 90 (RHS-90). In its initial state after manufacture, the generator is capable of producing 10 watts of electricity. The strontium-90 source has a heating power of 250 W and 1,480 TBq of radioactivity. Mass-scale production of RTGs in the Soviet Union was the responsibility of a plant called Baltiyets, in Narva, Estonia. Safety incidents Some Beta-M generators have been subject to incidents of vandalism, when scavengers disassembled the units while searching for non-ferrous metals. In December 2001 a radiological accident occurred when three residents of Lia, Georgia, found parts of an abandoned Beta-M in the forest while collecting firewood. The three suffered burns and symptoms of acute radiation syndrome as a result of their exposure to the strontium-90 contained in the Beta-M. The disposal team that removed the radiation sources consisted of 25 men, who were restricted to 40 seconds' worth of exposure each while transferring the canisters to lead-lined drums. References External links Norwegian environmental concerns over Beta-M generators still in use RTG Master Plan Development Results and Priority Action Plan Elaboration for its Implementation Electrical generators Strontium Nuclear technology in the Soviet Union Energy in the Soviet Union
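Given the 28.79-year half-life quoted above, the decline of the source's activity and heat output over its service life follows simple exponential decay. The Python sketch below assumes the thermal output scales directly with the remaining activity, which ignores in-growth of the yttrium-90 daughter and other engineering details:

```python
import math

HALF_LIFE_YEARS = 28.79        # strontium-90
INITIAL_ACTIVITY_TBQ = 1480.0  # RHS-90 heat source, as quoted above
INITIAL_HEAT_W = 250.0

def remaining_fraction(years):
    """Fraction of the original Sr-90 remaining after the given time."""
    return math.exp(-math.log(2.0) * years / HALF_LIFE_YEARS)

for t in (0, 10, 20):  # nominal 10-year service life, plus a 10-year extension
    f = remaining_fraction(t)
    print(f"after {t:2d} y: {INITIAL_ACTIVITY_TBQ * f:6.0f} TBq, "
          f"~{INITIAL_HEAT_W * f:5.1f} W thermal")
```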
Beta-M
[ "Physics", "Technology" ]
372
[ "Physical systems", "Electrical generators", "Machines" ]
17,404,454
https://en.wikipedia.org/wiki/Engineering%20fit
Engineering fits are generally used as part of geometric dimensioning and tolerancing when a part or assembly is designed. In engineering terms, the "fit" is the clearance between two mating parts, and the size of this clearance determines whether the parts can, at one end of the spectrum, move or rotate independently from each other or, at the other end, are temporarily or permanently joined. Engineering fits are generally described as a "shaft and hole" pairing, but are not necessarily limited to just round components. ISO is the internationally accepted standard for defining engineering fits, but ANSI is often still used in North America. ISO and ANSI both group fits into three categories: clearance, location or transition, and interference. Within each category are several codes to define the size limits of the hole or shaft – the combination of which determines the type of fit. A fit is usually selected at the design stage according to whether the mating parts need to be accurately located, free to slide or rotate, separated easily, or resist separation. Cost is also a major factor in selecting a fit, as more accurate fits will be more expensive to produce, and tighter fits will be more expensive to assemble. Methods of producing work to the required tolerances to achieve a desired fit range from casting, forging and drilling for the widest tolerances through broaching, reaming, milling and turning to lapping and honing at the tightest tolerances. ISO system of limits and fits Overview The International Organization for Standardization system splits the three main categories into several individual fits based on the allowable limits for hole and shaft size. Each fit is allocated a code, made up of a number and a letter, which is used on engineering drawings in place of upper & lower size limits to reduce clutter in detailed areas. Hole and shaft basis A fit is either specified as shaft-basis or hole-basis, depending on which part has its size controlled to determine the fit. In a hole-basis system, the size of the hole remains constant and the diameter of the shaft is varied to determine the fit; conversely, in a shaft-basis system the size of shaft remains constant and the hole diameter is varied to determine the fit. The ISO system uses an alpha-numeric code to illustrate the tolerance ranges for the fit, with the upper-case representing the hole tolerance and lower-case representing the shaft. For example, in H7/h6 (a commonly-used fit) H7 represents the tolerance range of the hole and h6 represents the tolerance range of the shaft. These codes can be used by machinists or engineers to quickly identify the upper and lower size limits for either the hole or shaft. The potential range of clearance or interference can be found by subtracting the smallest shaft diameter from the largest hole, and largest shaft from the smallest hole. Types of fit The three types of fit are: Clearance: The hole is larger than the shaft, enabling the two parts to slide and / or rotate when assembled, e.g. piston and valves Location / transition: The hole is fractionally smaller than the shaft and mild force is required to assemble / disassemble, e.g. Shaft key Interference: The hole is smaller than the shaft and high force and / or heat is required to assemble / disassemble, e.g. 
Bearing bush Clearance fits For example, using an H8/f7 close-running fit on a 50 mm diameter: H8 (hole) tolerance range = +0.000 mm to +0.039 mm f7 (shaft) tolerance range = −0.050 mm to −0.025 mm Potential clearance will be between +0.025 mm and +0.089 mm Transition fits For example, using an H7/k6 similar fit on a 50 mm diameter: H7 (hole) tolerance range = +0.000 mm to +0.025 mm k6 (shaft) tolerance range = +0.002 mm to +0.018 mm Potential clearance / interference will be between +0.023 mm and −0.018 mm Interference fits For example, using an H7/p6 press fit on a 50 mm diameter: H7 (hole) tolerance range = +0.000 mm to +0.025 mm p6 (shaft) tolerance range = +0.026 mm to +0.042 mm Potential interference will be between −0.001 mm and −0.042 mm (these limit calculations are illustrated in the short code sketch at the end of this article). Useful tolerances Common tolerances for sizes ranging from 0 to 120 mm ANSI fit classes (US only) Interference fits Interference fits, also known as press fits or friction fits, are fastenings between two parts in which the inner component is larger than the outer component. Achieving an interference fit requires applying force during assembly. After the parts are joined, the mating surfaces press against each other, and slight deformation of the completed assembly will be observed. Force fits Force fits are designed to maintain a controlled pressure between mating parts, and are used where forces or torques are being transmitted through the joining point. Like interference fits, force fits are achieved by applying a force during component assembly. Force fits range from FN 1 to FN 5. Shrink fits Shrink fits serve the same purpose as force fits, but are achieved by heating one member to expand it while the other remains cool. The parts can then be easily put together with little applied force, but after cooling and contraction, the same dimensional interference exists as for a force fit. Like force fits, shrink fits range from FN 1 to FN 5. Location fits Location fits are for parts that do not normally move relative to each other. Location interference fit LN 1 to LN 3 Location transition fit LT 1 to LT 6 A location fit provides a comparatively closer fit than a slide fit. Location clearance fit LC 1 to LC 11 RC fits The smaller RC numbers have smaller clearances for tighter fits; the larger numbers have larger clearances for looser fits. RC1: close sliding fits Fits of this kind are intended for the accurate location of parts which must assemble without noticeable play. RC2: sliding fits Fits of this kind are intended for accurate location, but with greater maximum clearance than class RC1. Parts made to this fit turn and move easily, but the fit is not designed for free running. Sliding fits in larger sizes may seize with small temperature changes, owing to the little allowance made for thermal expansion or contraction. RC3: precision running fits Fits of this kind are about the closest fits which can be expected to run freely. Precision running fits are intended for precision work at low speeds, low bearing pressures, and light journal pressures. RC3 is not suitable where noticeable temperature differences occur. RC4: close running fits Fits of this kind are mostly for running fits on accurate machinery with moderate surface speeds, bearing pressures, and journal pressures where accurate location and minimum play are desired. Fits of this kind can also be described as having smaller clearances with higher requirements for precision of fit. 
RC5 and RC6: medium running fits Fits of this kind are designed for machines running at higher speeds, with considerable bearing pressures and heavy journal pressures. Fits of this kind can also be described as having greater clearances with ordinary requirements for precision of fit. RC7: free running fits Fits of this kind are intended for use where accuracy is not essential. They are suitable for large temperature variations, and for use where there are no special requirements for the precise guiding of shafts into certain holes. RC8 and RC9: loose running fits Fits of this kind are intended for use where wide commercial tolerances may be required on the shaft. These fits combine large clearances with wide tolerances. Loose running fits may be exposed to the effects of corrosion, contamination by dust, and thermal or mechanical deformations. See also Coiled spring pins Engineering tolerance Geometric dimensioning and tolerancing Interchangeable parts Statistical interference References Mechanical engineering Metalworking terminology
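The limit calculations shown in the worked fit examples above reduce to simple subtraction of deviations. A minimal Python sketch (the function name and the (lower, upper) tuple convention are illustrative choices, not part of any standard):

```python
def fit_limits(hole_range_mm, shaft_range_mm):
    """Potential clearance/interference band for a shaft-and-hole fit.

    Each range is (lower_deviation, upper_deviation) in mm relative to the
    nominal size. Positive results mean clearance, negative interference.
    """
    hole_lo, hole_hi = hole_range_mm
    shaft_lo, shaft_hi = shaft_range_mm
    max_clearance = hole_hi - shaft_lo  # largest hole against smallest shaft
    min_clearance = hole_lo - shaft_hi  # smallest hole against largest shaft
    return round(min_clearance, 3), round(max_clearance, 3)

# The three 50 mm examples from this article:
print(fit_limits((0.000, 0.039), (-0.050, -0.025)))  # H8/f7 -> (0.025, 0.089)
print(fit_limits((0.000, 0.025), (0.002, 0.018)))    # H7/k6 -> (-0.018, 0.023)
print(fit_limits((0.000, 0.025), (0.026, 0.042)))    # H7/p6 -> (-0.042, -0.001)
```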
Engineering fit
[ "Physics", "Engineering" ]
1,622
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
17,411,030
https://en.wikipedia.org/wiki/Internal%20drainage%20board
An internal drainage board (IDB) is a type of operating authority established in areas of special drainage need in England and Wales, with permissive powers to undertake work to secure clean water drainage and water level management within drainage districts. The area of an IDB is not determined by county or metropolitan council boundaries, but by water catchment areas within a given region. IDBs are geographically concentrated in the Broads, the Fens in East Anglia and Lincolnshire, the Somerset Levels and Yorkshire. In comparison with public bodies in other countries, IDBs are most similar to the Waterschappen of the Netherlands, the Consorzi di bonifica e irrigazione of Italy, the wateringen of Flanders and Northern France, the Watershed Districts of Minnesota, United States, and the Marsh Bodies of Nova Scotia, Canada. Responsibilities Much of their work involves the maintenance of rivers, drainage channels (rhynes), ordinary watercourses, pumping stations and other critical infrastructure; facilitating the drainage of new developments; the ecological conservation and enhancement of watercourses; and monitoring and advising on planning applications to make sure that any development is carried out in line with legislation (the NPPF). IDBs are not responsible for watercourses designated as main rivers within their drainage districts; the supervision of these watercourses is undertaken by the Environment Agency. The precursors to internal drainage boards date back to 1252; however, the majority of today's IDBs were established by the national government following the passing of the Land Drainage Act 1930, and today they predominantly operate under the Land Drainage Act 1991, under which an IDB is required to exercise general supervision over all matters relating to the water level management of land within its district. Some IDBs may also have other duties, powers and responsibilities under specific legislation for the district (for instance, the Middle Level Commissioners are also a navigation authority). IDBs are responsible to Defra, from whom all legislation and regulations affecting them are issued. The work of an IDB is closely linked with that of the Environment Agency, which has a range of functions giving it a supervisory role over IDBs. Regulation Defra brought IDBs under the jurisdiction of the Local Government Ombudsman (LGO) from 1 April 2004, and introduced a model complaints procedure for IDBs to operate. This move was aimed at increasing the accountability of IDBs to the general public, who have an interest in the way that IDBs are run and operate, by providing an independent means of review. At this time Defra also revised and re-issued the model statutory rules and procedures under which IDBs operate. Current internal drainage boards of England There are 112 internal drainage boards in England covering 1.2 million hectares (9.7% of England's total land area). Areas around The Wash, the Lincolnshire coast, the lower reaches of the Trent and the Yorkshire Ouse, the Somerset Levels and the Fens have concentrations of adjacent IDBs covering broad areas of lowland. In other parts of the country IDBs stretch in narrow 'fingers' up river valleys, separated by less low-lying areas, especially in Norfolk and Suffolk, Sussex, Kent, West Yorkshire, Herefordshire/Shropshire and the northern Vale of York. The largest IDB (Lindsey Marsh DB) covers 52,757 hectares and the smallest (Cawdle Fen IDB) 181 hectares. 
24 of the county councils in England include one or more IDBs in their area, as do six metropolitan districts and 109 unitary authorities or district councils. The Association of Drainage Authorities holds a definitive record of all IDBs within England and Wales and their boundaries. The Environment Agency acts as the internal drainage board for one internal drainage district in East Sussex. In Wales, internal drainage districts are managed by Natural Resources Wales. Water level management and flood risk IDBs have an important role in reducing flood risk through the management of water levels and drainage in their districts. The water level management activities of internal drainage boards cover 1.2 million hectares of England, which represents 9.7% of the total land area. This reduces the flood risk to roughly 600,000 people who live or work in IDB districts and to roughly 879,000 properties located within them, while many thousands of people outside these boundaries also derive reduced flood risk from IDB water level management activities. Several forms of critical infrastructure fall within IDB districts, including 56 major power stations (28%), 68 other major industrial premises and 208 km of motorway. A recent publication by the Association of Drainage Authorities identified that 53% of the installed capacity (potential maximum power output) of major power stations in England and Wales is located within an IDB district. Although of much reduced significance since the 1980s, many IDB districts in Yorkshire and Nottinghamshire lie in areas of coal reserves, and drainage has been significantly affected by subsidence from mining. IDBs have played an important role in monitoring and mitigating the effects of this activity and have worked in close collaboration with the coal companies and the Coal Authority. Maintenance of watercourses The fundamental role of an internal drainage board is to manage the water level within its district. The majority of lowland rivers and watercourses have been heavily modified by man or are totally artificial channels. All are engineered structures designed and constructed for the primary function of conveying surplus run-off to their outfall efficiently and safely, managing water levels to sustain a multitude of land functions. As with any engineered structure, they must be maintained in order to function at or near their design capacity. Annual or bi-annual vegetation clearance and periodic de-silting (dredging) of these rivers and watercourses are therefore essential components of the whole life cycle of these watercourses. Accommodating sustainability within the design and maintenance process for lowland rivers and watercourses has to address three essential elements: year-round conveyance of flows, storage of flood peaks, and retention and protection of the flora and fauna dependent on or resident in the water corridor. Many IDBs are redesigning watercourses to create a two-stage or bermed channel. These have been extensively created in the Lindsey Marsh Drainage Board area of East Lincolnshire to accommodate the three elements of lowland watercourse sustainability. Berms are created at or near to the normal retained water level in the system. A berm is sometimes replanted with vegetation removed from the watercourse prior to improvement works, but is often left to re-colonise naturally. In all cases this additional part of the channel profile allows enhanced environmental value to develop. 
The area created above the berm also provides additional flood storage capacity, whilst the low-level channel can be maintained in such a manner that design conveyance conditions are achieved and flood risk controlled. By widening the channel and the berm, the berm can safely be used as access for machinery carrying out channel maintenance. The in-channel habitat that develops can be retained for a much longer period during the summer months, flood storage is provided for rare or extreme events, and a buffer zone between the channel and any adjacent land use is created. The timing of vegetation clearance works is essential to striking a sustainable balance in lowland watercourses. The Conveyance Estimating System (CES) is a modelling tool developed through a Defra / Environment Agency research collaboration. IDBs use CES to estimate the seasonal variation of conveyance owing to vegetation growth and other physical parameters, which they use to assess the impact of varying the timing of vegetation clearance operations. This is critical during the spring and early summer: the prime nesting season for aquatic birds, the breeding season for many protected mammal species such as water voles, and the season when many rare species of plant life flower and seed. Many IDBs have developed vegetation control strategies in co-ordination with Natural England. Pumping stations 111 IDB districts require pumping to some degree for water level management and 79 are purely gravity boards (where no pumping is required). 53 IDBs have more than 95% of their area dependent on pumping; in England, the land relying on pumping amounts to almost 51% of the total area of IDB districts. A new pumping station was commissioned in April 2011 by the Middle Level Commissioners at Wiggenhall St Germans, Norfolk. The station replaced its 73-year-old predecessor and is vital to the flood risk management of the surrounding Fenland and 20,000 residential properties. When running at full capacity, it is capable of draining five Olympic-size swimming pools every 2 minutes. Emergency actions During times of heavy rainfall and high river levels IDBs: liaise with the Environment Agency (in England) or Natural Resources Wales (in Wales) over developing flood conditions check sensitive locations and remove restrictions take actions, where possible, to reduce the risk of flooding to property advise local authorities on the developing situation in order that local authorities can execute their emergency plans effectively for the protection of people, property and critical infrastructure assist where possible in any post-flood remedial and clearance operations assess flooding incidents to determine if new works can be undertaken to reduce the effect of future flooding incidents An IDB's priorities during flooding are: ensuring the board's systems are working efficiently protection of people and residential properties protection of commercial properties protection of agricultural land and ecologically sensitive sites Some IDBs are able to provide a 24-hour contact number and most extend office hours during severe emergencies. Planning guidance Associated with their powers to regulate activities that may impede drainage, IDBs provide comments to local planning authorities on developments in their district and, when asked, make recommendations on measures required to manage flood risk and to provide adequate drainage. 
Environmental responsibilities Internal drainage boards in England have responsibilities associated with 398 Sites of Special Scientific Interest, plus other designated environmental areas, in coordination with Natural England. Slow-flowing drainage channels such as those managed by IDBs can form an important habitat for a diverse community of aquatic and emergent plants, invertebrates and higher organisms. IDB channels form one of the last refuges in the UK of the BAP-registered spined loach (Cobitis taenia), a small nocturnal bottom-feeding fish that has been recorded only in the lower parts of the Trent and Great Ouse catchments, and in some small rivers and drains in Lincolnshire and East Anglia. All IDBs are currently engaging with their own individual biodiversity action plans, which will further enhance their environmental role. Many IDBs are involved with assisting major wetland biodiversity projects with organisations such as the RSPB, the National Trust and the Wildfowl and Wetlands Trust. Many smaller conservation projects are co-ordinated with Wildlife Trusts and local authorities. Current projects include: The Great Fen Project (Middle Level Commissioners), Newport Wetlands Reserve (Caldicot and Wentlooge Levels IDB) and WWT Welney (MLC). The Middle Level Commissioners launched a three-year Otter Recovery Project in December 2007, which will build 33 otter holts and 15 other habitat areas. Drainage rates All properties within a drainage district are deemed to derive benefit from the activities of an IDB. Every property is therefore subject to a drainage rate paid annually to the IDB. For the purposes of rating, properties are divided into: Agricultural land and buildings Other land (such as domestic houses, factories, shops etc.) Occupiers of all "other land" pay Council Tax or non-domestic rates to the local authority, which is then charged by the board; this charge is called the "Special Levy". The board therefore only demands drainage rates directly on agricultural land and buildings. The basis of this is that each property has been allotted an "annual value"; these values were last revised in the early 1990s. The annual value is an amount equal to the yearly rent, or the rent that might reasonably be expected if the property were let on a tenancy from year to year commencing 1 April 1988. The annual value remains the same from year to year. Each year the board lays a rate "in the £" to meet its estimated expenditure. This is multiplied by the annual value to produce the amount of drainage rate due on each property (a small worked sketch of this calculation appears below, after the section on district drainage commissioners). Precepts Under Section 141 of the Water Resources Act 1991 the Environment Agency may issue a precept to an IDB to recover a contribution that the agency considers fair towards its expenses. Under Section 57 of the Land Drainage Act 1991, in cases where a drainage district receives water from land at a higher level, the IDB may apply to the Environment Agency for a contribution towards the expenses of dealing with that water. District drainage commissioners District drainage commissioners (DDCs) are internal drainage boards set up under local legislation rather than the Land Drainage Act 1991 and its predecessor legislation. The majority of the provisions of the Land Drainage Acts do, however, apply to such commissioners, and they are statutory public bodies. The most important in terms of size and revenue is the Middle Level Commissioners. 
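The rating arithmetic described under "Drainage rates" above is a single multiplication of the annual value by the rate in the £. A minimal Python sketch with entirely hypothetical figures (the holdings and the 4.5p rate are invented for illustration, not real board data):

```python
def drainage_rate_due(annual_value_gbp, rate_in_the_pound):
    """Drainage rate due = annual value x the board's rate 'in the GBP'."""
    return annual_value_gbp * rate_in_the_pound

# Hypothetical agricultural holdings and an illustrative rate of 4.5p in the GBP:
holdings = {"Manor Farm": 1200.0, "Fen End Field": 350.0}  # annual values, GBP
rate = 0.045
for name, annual_value in holdings.items():
    print(f"{name}: GBP {drainage_rate_due(annual_value, rate):.2f} due")
```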
Association of Drainage Authorities The majority of internal drainage boards are members of the Association of Drainage Authorities (ADA), their representative organisation. Through ADA, the collective views of drainage authorities and other members involved in water level management are represented to government, regulators, other policy makers and stakeholders. At a European level, ADA represents IDBs through EUWMA. In 2013 it was announced that the Caldicot and Wentlooge Levels Internal Drainage Board was to be abolished in April 2015, after officials at the Wales Audit Office detailed a series of failings, including the overpayment of its chief executive, misuse of public funds, financial irregularities, and unlawful actions. References External links Association of Drainage Authorities Defra Flood and Coastal Risk Management European Union of Water Management Associations Internal drainage board websites Bedford Group of Drainage Boards Black Sluice Internal Drainage Board Caldicot and Wentlooge Levels Internal Drainage Board Downham Market Group of Internal Drainage Boards Ely Group of Internal Drainage Boards Lower Aire & Don Consortia of Drainage Boards Lower Ouse Internal Drainage Board Lower Severn Internal Drainage Board Lindsey Marsh Drainage Board Market Weighton Internal Drainage Board Medway Internal Drainage Boards Middle Level Commissioners Newark Area Internal Drainage Board North East Lindsey Internal Drainage Board North Level District Internal Drainage Board River Stour (Kent) Internal Drainage Board Romney Marshes Area Internal Drainage Board Shire Group of Internal Drainage Boards Somerset Drainage Boards Consortium Vale of Pickering Internal Drainage Boards Upper Witham Internal Drainage Board Water Management Alliance Welland and Deepings Internal Drainage Board West Mendip Internal Drainage Board Whittlesey Consortium of Internal Drainage Boards Witham First District Internal Drainage Board Witham Third District Internal Drainage Board Witham 4th District Internal Drainage Board York Consortium of Drainage Boards Department for Environment, Food and Rural Affairs Public bodies and task forces of the United Kingdom government Water management authorities in the United Kingdom Hydrology Hydraulic engineering Land drainage in the United Kingdom Internal Drainage Boards
Internal drainage board
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,958
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
339,024
https://en.wikipedia.org/wiki/Length%20contraction
Length contraction is the phenomenon that a moving object's length is measured to be shorter than its proper length, which is the length as measured in the object's own rest frame. It is also known as Lorentz contraction or Lorentz–FitzGerald contraction (after Hendrik Lorentz and George Francis FitzGerald) and is usually only noticeable at a substantial fraction of the speed of light. Length contraction occurs only in the direction in which the body is travelling. For standard objects, this effect is negligible at everyday speeds and can be ignored for all regular purposes, only becoming significant as the object approaches the speed of light relative to the observer. History Length contraction was postulated by George FitzGerald (1889) and Hendrik Antoon Lorentz (1892) to explain the negative outcome of the Michelson–Morley experiment and to rescue the hypothesis of the stationary aether (the Lorentz–FitzGerald contraction hypothesis). Although both FitzGerald and Lorentz alluded to the fact that electrostatic fields in motion were deformed (the "Heaviside ellipsoid", after Oliver Heaviside, who derived this deformation from electromagnetic theory in 1888), it was considered an ad hoc hypothesis, because at this time there was no sufficient reason to assume that intermolecular forces behave the same way as electromagnetic ones. In 1897 Joseph Larmor developed a model in which all forces are considered to be of electromagnetic origin, and length contraction appeared to be a direct consequence of this model. Yet it was shown by Henri Poincaré (1905) that electromagnetic forces alone cannot explain the electron's stability. So he had to introduce another ad hoc hypothesis: non-electric binding forces (Poincaré stresses) that ensure the electron's stability, give a dynamical explanation for length contraction, and thus hide the motion of the stationary aether. Lorentz believed that length contraction represented a physical contraction of the atoms making up an object. He envisioned no fundamental change in the nature of space and time. Lorentz expected that length contraction would result in compressive strains in an object that should produce measurable effects. Such effects would include optical effects in transparent media, such as optical rotation and the induction of double refraction, and the induction of torques on charged condensers moving at an angle with respect to the aether. Lorentz was perplexed by experiments such as the Trouton–Noble experiment and the experiments of Rayleigh and Brace, which failed to validate his theoretical expectations. For mathematical consistency, Lorentz proposed a new time variable, the "local time", called that because it depended on the position of a moving body, following the relation t′ = t − vx/c². Lorentz considered local time not to be "real"; rather, it represented an ad hoc change of variable. Impressed by Lorentz's "most ingenious idea", Poincaré saw more in local time than a mere mathematical trick: it represented the actual time that would be shown on a moving observer's clocks. On the other hand, Poincaré did not consider this measured time to be the "true time" that would be exhibited by clocks at rest in the aether. Poincaré made no attempt to redefine the concepts of space and time. To Poincaré, the Lorentz transformation described the apparent states of the field for a moving observer; true states remained those defined with respect to the ether. 
Albert Einstein (1905) is credited with removing the ad hoc character from the contraction hypothesis by deriving this contraction from his postulates instead of from experimental data. Hermann Minkowski gave the geometrical interpretation of all relativistic effects by introducing his concept of four-dimensional spacetime. Basis in relativity First it is necessary to consider carefully the methods for measuring the lengths of resting and moving objects. Here, "object" simply means a distance with endpoints that are always mutually at rest, i.e., that are at rest in the same inertial frame of reference. If the relative velocity between an observer (or his measuring instruments) and the observed object is zero, then the proper length L0 of the object can simply be determined by directly superposing a measuring rod. However, if the relative velocity is greater than zero, then one can proceed as follows: The observer installs a row of clocks that are synchronized either a) by exchanging light signals according to the Poincaré–Einstein synchronization, or b) by "slow clock transport", that is, one clock is transported along the row of clocks in the limit of vanishing transport velocity. Now, when the synchronization process is finished, the object is moved along the clock row and every clock stores the exact time when the left or the right end of the object passes by. After that, the observer only has to look at the position of a clock A that stored the time when the left end of the object was passing by, and a clock B at which the right end of the object was passing by at the same time. Clearly, the distance AB is equal to the length L of the moving object. Using this method, the definition of simultaneity is crucial for measuring the length of moving objects. Another method is to use a clock indicating its proper time T0, which travels from one endpoint of the rod to the other in time T as measured by clocks in the rod's rest frame. The length of the rod can be computed by multiplying its travel time by its velocity, thus L0 = T · v in the rod's rest frame or L = T0 · v in the clock's rest frame. In Newtonian mechanics, simultaneity and time duration are absolute, and therefore both methods lead to the equality of L and L0. Yet in relativity theory the constancy of light velocity in all inertial frames, in connection with the relativity of simultaneity and time dilation, destroys this equality. In the first method an observer in one frame claims to have measured the object's endpoints simultaneously, but the observers in all other inertial frames will argue that the object's endpoints were not measured simultaneously. In the second method, the times T and T0 are not equal, due to time dilation, resulting in different lengths too. The deviation between the measurements in all inertial frames is given by the formulas for the Lorentz transformation and time dilation (see Derivation). It turns out that the proper length remains unchanged and always denotes the greatest length of an object; the length of the same object measured in another inertial reference frame is shorter than the proper length. 
This contraction occurs only along the line of motion, and can be represented by the relation L = L0/γ(v), where L is the length observed by an observer in motion relative to the object, L0 is the proper length (the length of the object in its rest frame), and γ(v) is the Lorentz factor, defined as γ(v) = 1/√(1 − v²/c²), where v is the relative velocity between the observer and the moving object and c is the speed of light. Replacing the Lorentz factor in the original formula leads to the relation L = L0 √(1 − v²/c²). In this equation both L and L0 are measured parallel to the object's line of movement. For the observer in relative movement, the length of the object is measured by subtracting the simultaneously measured distances of both ends of the object. For more general conversions, see the Lorentz transformations. An observer at rest observing an object travelling very close to the speed of light would observe the length of the object in the direction of motion as very near zero. Then, at a speed of 0.0447c (30 million mph), the contracted length is 99.9% of the length at rest; at a speed of 0.141c (95 million mph), the length is still 99%. As the magnitude of the velocity approaches the speed of light, the effect becomes prominent. Symmetry The principle of relativity (according to which the laws of nature are invariant across inertial reference frames) requires that length contraction be symmetrical: if a rod is at rest in an inertial frame S, it has its proper length in S and its length is contracted in S′. However, if a rod rests in S′, it has its proper length in S′ and its length is contracted in S. This can be vividly illustrated using symmetric Minkowski diagrams, because the Lorentz transformation geometrically corresponds to a rotation in four-dimensional spacetime. Magnetic forces Magnetic forces are caused by relativistic contraction when electrons are moving relative to atomic nuclei. The magnetic force on a moving charge next to a current-carrying wire is a result of relativistic motion between electrons and protons. In 1820, André-Marie Ampère showed that parallel wires carrying currents in the same direction attract one another. In the electrons' frame of reference, the moving wire contracts slightly, causing the protons of the opposite wire to be locally denser. As the electrons in the opposite wire are moving as well, they do not contract (as much). This results in an apparent local imbalance between electrons and protons; the moving electrons in one wire are attracted to the extra protons in the other. The reverse can also be considered: in the static proton's frame of reference, the electrons are moving and contracted, resulting in the same imbalance. The electron drift velocity is relatively very slow, on the order of a meter an hour, but the force between an electron and a proton is so enormous that even at this very slow speed the relativistic contraction causes significant effects. This effect also applies to magnetic particles without current, with the current being replaced by electron spin. Experimental verifications Any observer co-moving with the observed object cannot measure the object's contraction, because he can judge himself and the object to be at rest in the same inertial frame, in accordance with the principle of relativity (as demonstrated by the Trouton–Rankine experiment). So length contraction cannot be measured in the object's rest frame, but only in a frame in which the observed object is in motion. 
In addition, even in such a non-co-moving frame, direct experimental confirmations of length contraction are hard to achieve, because (a) at the current state of technology, objects of considerable extension cannot be accelerated to relativistic speeds, and (b) the only objects traveling at the speeds required are atomic particles, whose spatial extensions are too small to allow a direct measurement of contraction. However, there are indirect confirmations of this effect in a non-co-moving frame: It was the negative result of a famous experiment that required the introduction of length contraction: the Michelson–Morley experiment (and later also the Kennedy–Thorndike experiment). In special relativity its explanation is as follows: in its rest frame the interferometer can be regarded as at rest in accordance with the relativity principle, so the propagation time of light is the same in all directions. Although in a frame in which the interferometer is in motion the transverse beam must traverse a longer, diagonal path with respect to the non-moving frame, thus making its travel time longer, the factor by which the longitudinal beam would be delayed, taking times L/(c−v) and L/(c+v) for the forward and reverse trips respectively, is even larger. Therefore, in the longitudinal direction the interferometer is supposed to be contracted, in order to restore the equality of both travel times in accordance with the negative experimental result(s). Thus the two-way speed of light remains constant, and the round-trip propagation time along perpendicular arms of the interferometer is independent of its motion and orientation. Given the thickness of the atmosphere as measured in Earth's reference frame, muons' extremely short lifespan should not allow them to make the trip to the surface, even at the speed of light, but they do nonetheless. From the Earth reference frame, this is made possible only by the muon's time being slowed down by time dilation; in the muon's frame, however, the effect is explained by the atmosphere being contracted, shortening the trip. Heavy ions that are spherical when at rest should assume the form of "pancakes" or flat disks when traveling nearly at the speed of light – and in fact, the results obtained from particle collisions can only be explained when the increased nucleon density due to length contraction is considered. The ionization ability of electrically charged particles with large relative velocities is higher than expected. In pre-relativistic physics the ability should decrease at high velocities, because the time in which ionizing particles in motion can interact with the electrons of other atoms or molecules is diminished; in relativity, however, the higher-than-expected ionization ability can be explained by length contraction of the Coulomb field in frames in which the ionizing particles are moving, which increases their electrical field strength normal to the line of motion. In synchrotrons and free-electron lasers, relativistic electrons are injected into an undulator, so that synchrotron radiation is generated. In the proper frame of the electrons, the undulator is contracted, which leads to an increased radiation frequency. Additionally, to find the frequency as measured in the laboratory frame, one has to apply the relativistic Doppler effect. So, only with the aid of length contraction and the relativistic Doppler effect can the extremely small wavelength of undulator radiation be explained. 
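As a numerical check on the contraction factors quoted earlier in the article, a minimal Python sketch:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def contracted_length(proper_length, v):
    """Length of an object measured in a frame where it moves at speed v:
    L = L0 / gamma(v)."""
    return proper_length / gamma(v)

for beta in (0.0447, 0.141, 0.9, 0.99):
    ratio = contracted_length(1.0, beta * C)
    print(f"v = {beta:6.4f}c -> L/L0 = {ratio:.4f}")
# 0.0447c -> 0.9990 (99.9%) and 0.141c -> 0.9900 (99%), matching the text.
```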
Reality of length contraction In 1911 Vladimir Varićak asserted that, according to Lorentz, one sees length contraction in an objective way, while, according to Einstein, it is "only an apparent, subjective phenomenon, caused by the manner of our clock-regulation and length-measurement". Einstein published a rebuttal. He also argued in that paper that length contraction is not simply the product of arbitrary definitions concerning the way clock regulations and length measurements are performed. He presented the following thought experiment: let A'B' and A"B" be the endpoints of two rods of the same proper length L0, as measured on x' and x" respectively. Let them move in opposite directions along the x* axis, considered at rest, at the same speed with respect to it. Endpoints A'A" then meet at point A*, and B'B" meet at point B*. Einstein pointed out that length A*B* is shorter than A'B' or A"B", which can also be demonstrated by bringing one of the rods to rest with respect to that axis. Paradoxes Due to superficial application of the contraction formula, some paradoxes can occur. Examples are the ladder paradox and Bell's spaceship paradox. However, those paradoxes can be solved by a correct application of the relativity of simultaneity. Another famous paradox is the Ehrenfest paradox, which proves that the concept of rigid bodies is not compatible with relativity, reducing the applicability of Born rigidity and showing that for a co-rotating observer the geometry is in fact non-Euclidean. Visual effects Length contraction refers to measurements of position made at simultaneous times according to a coordinate system. This could suggest that if one could take a picture of a fast-moving object, the image would show the object contracted in the direction of motion. However, such visual effects are completely different measurements, as such a photograph is taken from a distance, while length contraction can only directly be measured at the exact location of the object's endpoints. It was shown by several authors, such as Roger Penrose and James Terrell, that moving objects generally do not appear length contracted on a photograph. This result was popularized by Victor Weisskopf in a Physics Today article. For instance, for a small angular diameter, a moving sphere remains circular and is rotated. This kind of visual rotation effect is called Penrose–Terrell rotation. Derivation Length contraction can be derived in several ways: Known moving length In an inertial reference frame S, let x1 and x2 denote the endpoints of an object in motion. In this frame the object's length L is measured, according to the above conventions, by determining the simultaneous positions of its endpoints at t1 = t2. Meanwhile, the proper length of this object, as measured in its rest frame S', can be calculated by using the Lorentz transformation. Transforming the time coordinates from S into S' results in different times, but this is not problematic, since the object is at rest in S', where it does not matter when the endpoints are measured. Therefore, the transformation of the spatial coordinates suffices, which gives x′1 = γ(x1 − vt1) and x′2 = γ(x2 − vt2). Since t1 = t2, and by setting L = x2 − x1 and L′0 = x′2 − x′1, the proper length in S' is given by L′0 = γL. Therefore, the object's length, measured in the frame S, is contracted by the factor γ: L = L′0/γ. Likewise, according to the principle of relativity, an object that is at rest in S will also be contracted in S'. 
By exchanging the above signs and primes symmetrically, it follows that $L_0 = \gamma L'$. Thus an object at rest in S, when measured in S', will have the contracted length $L' = L_0 / \gamma$. Known proper length Conversely, if the object rests in S and its proper length is known, the simultaneity of the measurements at the object's endpoints has to be considered in another frame S', as the object constantly changes its position there. Therefore, both spatial and temporal coordinates must be transformed: $\Delta x' = \gamma(\Delta x - v\,\Delta t)$ (1) and $\Delta t' = \gamma(\Delta t - v\,\Delta x / c^2)$ (2). Computing the length interval $\Delta x'$ as well as assuming simultaneous time measurement $\Delta t' = 0$, and by plugging in proper length $\Delta x = L_0$, it follows: Equation (2) gives $\Delta t = v\,\Delta x / c^2$, which, when plugged into (1), demonstrates that $\Delta x'$ becomes the contracted length $L_0 / \gamma$. Likewise, the same method gives a symmetric result for an object at rest in S': $L = L_0 / \gamma$. Using time dilation Length contraction can also be derived from time dilation, according to which the rate of a single "moving" clock (indicating its proper time $T_0$) is lower with respect to two synchronized "resting" clocks (indicating $T$). Time dilation was experimentally confirmed multiple times, and is represented by the relation: $T = \gamma T_0$. Suppose a rod of proper length $L_0$ at rest in $S$ and a clock at rest in $S'$ are moving along each other with speed $v$. Since, according to the principle of relativity, the magnitude of relative velocity is the same in either reference frame, the respective travel times of the clock between the rod's endpoints are given by $T = L_0 / v$ in $S$ and $T_0' = L' / v$ in $S'$, thus $L_0 = T v$ and $L' = T_0' v$. By inserting the time dilation formula, the ratio between those lengths is: $L' / L_0 = T_0' / T = 1 / \gamma$. Therefore, the length measured in $S'$ is given by $L' = L_0 / \gamma$. So since the clock's travel time across the rod is longer in $S$ than in $S'$ (time dilation in $S$), the rod's length is also longer in $S$ than in $S'$ (length contraction in $S'$). Likewise, if the clock were at rest in $S$ and the rod in $S'$, the above procedure would give $L = L_0' / \gamma$. Geometrical considerations Additional geometrical considerations show that length contraction can be regarded as a trigonometric phenomenon, with analogy to parallel slices through a cuboid before and after a rotation in E3 (see the left half of the figure described below). This is the Euclidean analog of boosting a cuboid in E1,2. In the latter case, however, we can interpret the boosted cuboid as the world slab of a moving plate. Figure caption: Left: a rotated cuboid in three-dimensional Euclidean space E3. The cross section is longer in the direction of the rotation than it was before the rotation. Right: the world slab of a moving thin plate in Minkowski spacetime (with one spatial dimension suppressed) E1,2, which is a boosted cuboid. The cross section is thinner in the direction of the boost than it was before the boost. In both cases, the transverse directions are unaffected and the three planes meeting at each corner of the cuboids are mutually orthogonal (in the sense of E1,2 at right, and in the sense of E3 at left). In special relativity, Poincaré transformations are a class of affine transformations which can be characterized as the transformations between alternative Cartesian coordinate charts on Minkowski spacetime corresponding to alternative states of inertial motion (and different choices of an origin). Lorentz transformations are Poincaré transformations which are linear transformations (preserving the origin). Lorentz transformations play the same role in Minkowski geometry (the Lorentz group forms the isotropy group of the self-isometries of the spacetime) that is played by rotations in Euclidean geometry.
Indeed, special relativity largely comes down to studying a kind of non-Euclidean trigonometry in Minkowski spacetime. References External links Physics FAQ: Can You See the Lorentz–Fitzgerald Contraction? Or: Penrose-Terrell Rotation; The Barn and the Pole Special relativity Length Hendrik Lorentz
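The contraction factor itself is easy to evaluate numerically. Below is a minimal Python sketch illustrating the muon scenario discussed earlier; the muon speed and atmosphere depth are illustrative round numbers, not values taken from the article:

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2), where beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def contracted_length(proper_length: float, beta: float) -> float:
    """Length of a moving object: L = L0 / gamma."""
    return proper_length / lorentz_gamma(beta)

# Illustrative numbers (assumptions, not from the article):
beta = 0.995             # muon speed as a fraction of c
atmosphere_m = 10_000.0  # proper thickness of the atmosphere in metres

print(f"gamma = {lorentz_gamma(beta):.2f}")  # ~10.0
print(f"contracted thickness = {contracted_length(atmosphere_m, beta):.0f} m")  # ~1000 m
```

In the muon's frame the 10 km atmosphere shrinks to roughly 1 km, which is why the trip becomes survivable within the muon's short proper lifetime.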
Length contraction
[ "Physics", "Mathematics" ]
4,197
[ "Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Special relativity", "Length", "Theory of relativity", "Wikipedia categories named after physical quantities" ]
339,350
https://en.wikipedia.org/wiki/Black%20hole%20thermodynamics
In physics, black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. As the study of the statistical mechanics of black-body radiation led to the development of the theory of quantum mechanics, the effort to understand the statistical mechanics of black holes has had a deep impact upon the understanding of quantum gravity, leading to the formulation of the holographic principle. Overview The second law of thermodynamics requires that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. The increase of the entropy of the black hole more than compensates for the decrease of the entropy carried by the object that was swallowed. In 1972, Jacob Bekenstein conjectured that black holes should have an entropy proportional to the area of the event horizon; in the same year, he also proposed no-hair theorems. In 1973 Bekenstein suggested a specific value for the constant of proportionality, asserting that if the constant was not exactly this, it must be very close to it. The next year, in 1974, Stephen Hawking showed that black holes emit thermal Hawking radiation corresponding to a certain temperature (Hawking temperature). Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at $k_{\mathrm B}/(4\ell_{\mathrm P}^2)$: $S_{\mathrm{BH}} = \frac{k_{\mathrm B} A}{4 \ell_{\mathrm P}^{2}}$, where $A$ is the area of the event horizon, $k_{\mathrm B}$ is the Boltzmann constant, and $\ell_{\mathrm P} = \sqrt{G\hbar/c^{3}}$ is the Planck length. This is often referred to as the Bekenstein–Hawking formula. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon $A$. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle. This area relationship was generalized to arbitrary regions via the Ryu–Takayanagi formula, which relates the entanglement entropy of a boundary conformal field theory to a specific surface in its dual gravitational theory. Although Hawking's calculations gave further thermodynamic evidence for black hole entropy, until 1995 no one was able to make a controlled calculation of black hole entropy based on statistical mechanics, which associates entropy with a large number of microstates. In fact, so-called "no-hair" theorems appeared to suggest that black holes could have only a single microstate. The situation changed in 1995 when Andrew Strominger and Cumrun Vafa calculated the right Bekenstein–Hawking entropy of a supersymmetric black hole in string theory, using methods based on D-branes and string duality. Their calculation was followed by many similar computations of entropy of large classes of other extremal and near-extremal black holes, and the result always agreed with the Bekenstein–Hawking formula. However, for the Schwarzschild black hole, viewed as the most far-from-extremal black hole, the relationship between micro- and macrostates has not been characterized. Efforts to develop an adequate answer within the framework of string theory continue. In loop quantum gravity (LQG) it is possible to associate a geometrical interpretation with the microstates: these are the quantum geometries of the horizon.
LQG offers a geometric explanation of the finiteness of the entropy and of its proportionality to the area of the horizon. It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (1st law), the Unruh temperature and the distribution that yields Hawking entropy. The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes. The calculation of Bekenstein–Hawking entropy from the point of view of loop quantum gravity has also been discussed. The currently accepted microstate ensemble for black holes is the microcanonical ensemble. The partition function for black holes results in a negative heat capacity. In canonical ensembles, heat capacities are restricted to positive values, whereas microcanonical ensembles can exist at a negative heat capacity. The laws of black hole mechanics The four laws of black hole mechanics are physical properties that black holes are believed to satisfy. The laws, analogous to the laws of thermodynamics, were discovered by Jacob Bekenstein, Brandon Carter, and James Bardeen. Further considerations were made by Stephen Hawking. Statement of the laws The laws of black hole mechanics are expressed in geometrized units. The zeroth law The horizon has constant surface gravity $\kappa$ for a stationary black hole. The first law For perturbations of stationary black holes, the change of energy is related to change of area, angular momentum, and electric charge by $dE = \frac{\kappa}{8\pi}\,dA + \Omega\,dJ + \Phi\,dQ$, where $E$ is the energy, $\kappa$ is the surface gravity, $A$ is the horizon area, $\Omega$ is the angular velocity, $J$ is the angular momentum, $\Phi$ is the electrostatic potential and $Q$ is the electric charge. The second law The horizon area is, assuming the weak energy condition, a non-decreasing function of time: $\frac{dA}{dt} \geq 0$. This "law" was superseded by Hawking's discovery that black holes radiate, which causes both the black hole's mass and the area of its horizon to decrease over time. The third law It is not possible to form a black hole with vanishing surface gravity. That is, $\kappa = 0$ cannot be achieved. Discussion of the laws The zeroth law The zeroth law is analogous to the zeroth law of thermodynamics, which states that the temperature is constant throughout a body in thermal equilibrium. It suggests that the surface gravity is analogous to temperature: $T$ constant for thermal equilibrium in a normal system is analogous to $\kappa$ constant over the horizon of a stationary black hole. The first law The left side, $dE$, is the change in energy (proportional to mass). Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right side the term $T\,dS$. The second law The second law is the statement of Hawking's area theorem. Analogously, the second law of thermodynamics states that the change in entropy of an isolated system will be greater than or equal to 0 for a spontaneous process, suggesting a link between entropy and the area of a black hole horizon. However, this version violates the second law of thermodynamics by matter losing (its) entropy as it falls in, giving a decrease in entropy. Generalizing the second law as the sum of black hole entropy and outside entropy, however, shows that the second law of thermodynamics is not violated in a system including the universe beyond the horizon.
The generalized second law of thermodynamics (GSL) was needed to present the second law of thermodynamics as valid. This is because the second law of thermodynamics, as a result of the disappearance of entropy near the exterior of black holes, is not useful on its own. The GSL allows for the application of the law because now the measurement of interior, common entropy is possible. The validity of the GSL can be established by studying an example, such as looking at a system having entropy that falls into a bigger, non-moving black hole, and establishing upper and lower entropy bounds for the increase in the black hole entropy and entropy of the system, respectively. One should also note that the GSL will hold for theories of gravity such as Einstein gravity, Lovelock gravity, or Braneworld gravity, because the conditions to use the GSL for these can be met. However, on the topic of black hole formation, the question becomes whether or not the generalized second law of thermodynamics will be valid, and if it is, it will have to be proved valid for all situations. Because a black hole formation is not stationary, but instead moving, proving that the GSL holds is difficult. Proving the GSL is generally valid would require using quantum-statistical mechanics, because the GSL is both a quantum and a statistical law. This discipline does not yet exist, so the GSL can only be assumed to be useful in general, as well as for prediction. For example, one can use the GSL to predict that, for a cold, non-rotating assembly of nucleons, $S_{\mathrm{BH}} > S$, where $S_{\mathrm{BH}}$ is the entropy of a black hole and $S$ is the sum of the ordinary entropy. The third law The third law of black hole thermodynamics is controversial. Specific counterexamples called extremal black holes fail to obey the rule. The classical third law of thermodynamics, known as the Nernst theorem, which says the entropy of a system must go to zero as the temperature goes to absolute zero, is also not a universal law. However, the systems that fail the classical third law have not been realized in practice, leading to the suggestion that extremal black holes may not represent the physics of black holes generally. A weaker form of the classical third law known as the "unattainability principle" states that an infinite number of steps are required to put a system into its ground state. This form of the third law does have an analog in black hole physics. Interpretation of the laws The four laws of black hole mechanics suggest that one should identify the surface gravity of a black hole with temperature and the area of the event horizon with entropy, at least up to some multiplicative constants. If one only considers black holes classically, then they have zero temperature and, by the no-hair theorem, zero entropy, and the laws of black hole mechanics remain an analogy. However, when quantum-mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at a temperature $T_{\mathrm H} = \frac{\kappa}{2\pi}$ (in geometrized units). From the first law of black hole mechanics, this determines the multiplicative constant of the Bekenstein–Hawking entropy, which is (in geometrized units) $S_{\mathrm{BH}} = \frac{A}{4}$, which is the entropy of the black hole in Einstein's general relativity. Quantum field theory in curved spacetime can be utilized to calculate the entropy for a black hole in any covariant theory of gravity, known as the Wald entropy.
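In SI units the Hawking temperature of a Schwarzschild black hole of mass $M$ is $T_{\mathrm H} = \hbar c^{3} / (8\pi G M k_{\mathrm B})$, and the Bekenstein–Hawking entropy follows from the horizon area $A = 16\pi G^{2} M^{2} / c^{4}$. The following is a small Python sketch, with constants rounded and the solar-mass example chosen purely for illustration:

```python
import math

# Physical constants (SI, rounded)
hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m / s
G     = 6.674e-11    # m^3 / (kg s^2)
k_B   = 1.381e-23    # J / K
M_sun = 1.989e30     # kg

def hawking_temperature(M: float) -> float:
    """Hawking temperature of a Schwarzschild black hole: T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bekenstein_hawking_entropy(M: float) -> float:
    """S = k_B * A / (4 l_P^2), with A = 16 pi (G M / c^2)^2 and l_P^2 = hbar G / c^3."""
    A = 16 * math.pi * (G * M / c**2) ** 2
    l_P2 = hbar * G / c**3
    return k_B * A / (4 * l_P2)

print(f"T_H(solar mass)  = {hawking_temperature(M_sun):.2e} K")    # ~6e-8 K
print(f"S_BH(solar mass) = {bekenstein_hawking_entropy(M_sun):.2e} J/K")  # ~1e54 J/K
```

The tiny temperature and enormous entropy of a stellar-mass black hole illustrate why Hawking radiation is unobservable for astrophysical black holes while the entropy dwarfs that of the star that collapsed.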
Critique While black hole thermodynamics (BHT) has been regarded as one of the deepest clues to a quantum theory of gravity, there remains a philosophical criticism that "the analogy is not nearly as good as is commonly supposed", that it "is often based on a kind of caricature of thermodynamics" and that "it's unclear what the systems in BHT are supposed to be". These criticisms were reexamined in detail, ending with the opposite conclusion: "stationary black holes are not analogous to thermodynamic systems: they are thermodynamic systems, in the fullest sense." Beyond black holes Gary Gibbons and Hawking have shown that black hole thermodynamics is more general than black holes: cosmological event horizons also have an entropy and temperature. More fundamentally, Gerard 't Hooft and Leonard Susskind used the laws of black hole thermodynamics to argue for a general holographic principle of nature, which asserts that consistent theories of gravity and quantum mechanics must be lower-dimensional. Though not yet fully understood in general, the holographic principle is central to theories like the AdS/CFT correspondence. There are also connections between black hole entropy and fluid surface tension. See also Joseph Polchinski Robert Wald Notes Citations Bibliography External links Bekenstein-Hawking entropy on Scholarpedia Black Hole Thermodynamics Black hole entropy on arxiv.org Black holes Branches of thermodynamics
Black hole thermodynamics
[ "Physics", "Chemistry", "Astronomy" ]
2,464
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Thermodynamics", "Branches of thermodynamics", "Stellar phenomena", "Astronomical objects" ]
339,542
https://en.wikipedia.org/wiki/Semiprime
In mathematics, a semiprime is a natural number that is the product of exactly two prime numbers. The two primes in the product may equal each other, so the semiprimes include the squares of prime numbers. Because there are infinitely many prime numbers, there are also infinitely many semiprimes. Semiprimes are also called biprimes, since they comprise two primes, or second numbers, by analogy with how "prime" means "first". Examples and variations The semiprimes less than 100 are: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34, 35, 38, 39, 46, 49, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95. Semiprimes that are not square numbers are called discrete, distinct, or squarefree semiprimes: 6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39, 46, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95, ... The semiprimes are the case $k = 2$ of the $k$-almost primes, numbers with exactly $k$ prime factors. However, some sources use "semiprime" to refer to a larger set of numbers, the numbers with at most two prime factors (including unit (1), primes, and semiprimes). These are: 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 25, 26, ... Formula for number of semiprimes A semiprime counting formula was discovered by E. Noel and G. Panos in 2005. Let $\pi_2(n)$ denote the number of semiprimes less than or equal to $n$. Then $\pi_2(n) = \sum_{k=1}^{\pi(\sqrt{n})} \left[ \pi\!\left(\frac{n}{p_k}\right) - k + 1 \right]$, where $\pi(x)$ is the prime-counting function and $p_k$ denotes the $k$th prime. Properties Semiprime numbers have no composite numbers as factors other than themselves. For example, the number 26 is semiprime and its only factors are 1, 2, 13, and 26, of which only 26 is composite. For a squarefree semiprime $n = pq$ (with $p \neq q$) the value of Euler's totient function $\varphi(n)$ (the number of positive integers less than or equal to $n$ that are relatively prime to $n$) takes the simple form $\varphi(n) = (p - 1)(q - 1)$. This calculation is an important part of the application of semiprimes in the RSA cryptosystem. For a square semiprime $n = p^2$, the formula is again simple: $\varphi(n) = p(p - 1) = p^2 - p$. Applications Semiprimes are highly useful in the area of cryptography and number theory, most notably in public key cryptography, where they are used by RSA and pseudorandom number generators such as Blum Blum Shub. These methods rely on the fact that finding two large primes and multiplying them together (resulting in a semiprime) is computationally simple, whereas finding the original factors appears to be difficult. In the RSA Factoring Challenge, RSA Security offered prizes for the factoring of specific large semiprimes and several prizes were awarded. The original RSA Factoring Challenge was issued in 1991, and was replaced in 2001 by the New RSA Factoring Challenge, which was later withdrawn in 2007. In 1974 the Arecibo message was sent with a radio signal aimed at a star cluster. It consisted of 1679 binary digits intended to be interpreted as a bitmap image. The number 1679 = 23 × 73 was chosen because it is a semiprime and therefore can be arranged into a rectangular image in only two distinct ways (23 rows and 73 columns, or 73 rows and 23 columns). See also Chen's theorem Sphenic number, a product of three distinct primes Parity problem (sieve theory) References External links Integer sequences Prime numbers Theory of cryptography
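The counting formula above is easy to check with a short script. Here is a Python sketch (the function names are mine, not a standard API, and sympy is assumed to be available) that tests semiprimality by trial division and verifies $\pi_2(n)$ against a brute-force count:

```python
from sympy import primepi, prime  # prime-counting function and k-th prime

def is_semiprime(n: int) -> bool:
    """True if n is a product of exactly two primes (counted with multiplicity)."""
    factors = 0
    d = 2
    while d * d <= n and factors < 3:
        while n % d == 0:
            n //= d
            factors += 1
        d += 1
    if n > 1:
        factors += 1
    return factors == 2

def semiprime_count(n: int) -> int:
    """Noel-Panos formula: pi_2(n) = sum_{k=1}^{pi(sqrt n)} [pi(n / p_k) - k + 1]."""
    total, k = 0, 1
    while prime(k) ** 2 <= n:           # k runs up to pi(sqrt(n))
        total += primepi(n // prime(k)) - k + 1
        k += 1
    return total

n = 100
assert semiprime_count(n) == sum(is_semiprime(m) for m in range(2, n + 1))
print(semiprime_count(n))   # 34 semiprimes below 100
```

Trial division is of course only viable for small numbers; the computational hardness of recovering the two factors of a large semiprime is exactly what RSA relies on.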
Semiprime
[ "Mathematics" ]
651
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
340,094
https://en.wikipedia.org/wiki/Estimated%20time%20of%20arrival
The estimated time of arrival (ETA) is the time when a ship, vehicle, aircraft, cargo, person, or emergency service is expected to arrive at a certain place. Overview One of the more common uses of the phrase is in public transportation, where the movements of trains, buses, airplanes and the like can be used to generate estimated times of arrival based either on a static timetable or on measurements of traffic intensity. In this respect, the phrase or its abbreviation is often paired with its complement, estimated time of departure (ETD), to indicate the expected start time of a particular journey. This information is often conveyed to a passenger information system as part of the core functionality of intelligent transportation systems. For example, a certain flight may have a calculated ETA based on the average speed at which it has covered the distance traveled so far. The remaining distance is divided by the speed previously measured to roughly estimate the arrival time. This particular method does not take into account any unexpected events (such as new wind directions) which may occur on the way to the flight's destination. ETA is also used metaphorically in situations where nothing actually moves physically, as in describing the time estimated for a certain task to complete (e.g. work undertaken by an individual; a computation undertaken by a computer program; or a process undertaken by an organization). The associated term is "estimated time of accomplishment", which may be a backronym. Applications Accurate and timely estimations of times of arrival are important in several application areas: In air traffic control arrival sequencing and scheduling, where scheduling aircraft arrival according to the first-come-first-served order of ETA at the runway minimizes delays. In airport gate assignment methods, to optimize gate utilization. In elevator control, to minimize the average waiting time or journey time of passengers (destination dispatch). References Time Airline tickets Passenger rail transport
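The naive speed-so-far estimate described above is simple to express in code. This is a minimal Python sketch (function and variable names are illustrative, not from any standard API); as noted above, it ignores unexpected events such as wind changes:

```python
from datetime import datetime, timedelta

def estimate_eta(distance_covered_km: float,
                 elapsed: timedelta,
                 total_distance_km: float,
                 now: datetime) -> datetime:
    """ETA = now + remaining distance / average speed so far."""
    avg_speed = distance_covered_km / elapsed.total_seconds()  # km per second
    remaining_km = total_distance_km - distance_covered_km
    return now + timedelta(seconds=remaining_km / avg_speed)

# A flight that has covered 600 km of a 1500 km route in 45 minutes:
now = datetime(2024, 1, 1, 12, 0)
eta = estimate_eta(600.0, timedelta(minutes=45), 1500.0, now)
print(eta)  # 2024-01-01 13:07:30
```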
Estimated time of arrival
[ "Physics", "Mathematics" ]
379
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
340,136
https://en.wikipedia.org/wiki/Bernstein%20polynomial
In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial expressed as a linear combination of Bernstein basis polynomials. The idea is named after mathematician Sergei Natanovich Bernstein. Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1], became important in the form of Bézier curves. A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm. Definition Bernstein basis polynomials The n+1 Bernstein basis polynomials of degree n are defined as $b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu}$ for $\nu = 0, \ldots, n$, where $\binom{n}{\nu}$ is a binomial coefficient. So, for example, $b_{2,5}(x) = \binom{5}{2} x^{2} (1-x)^{3} = 10\,x^{2}(1-x)^{3}$. The first few Bernstein basis polynomials for blending 1, 2, 3 or 4 values together are: $b_{0,0}(x) = 1$; $b_{0,1}(x) = 1 - x$, $b_{1,1}(x) = x$; $b_{0,2}(x) = (1-x)^{2}$, $b_{1,2}(x) = 2x(1-x)$, $b_{2,2}(x) = x^{2}$; $b_{0,3}(x) = (1-x)^{3}$, $b_{1,3}(x) = 3x(1-x)^{2}$, $b_{2,3}(x) = 3x^{2}(1-x)$, $b_{3,3}(x) = x^{3}$. The Bernstein basis polynomials of degree n form a basis for the vector space of polynomials of degree at most n with real coefficients. Bernstein polynomials A linear combination of Bernstein basis polynomials $B_n(x) = \sum_{\nu=0}^{n} \beta_{\nu}\, b_{\nu,n}(x)$ is called a Bernstein polynomial or polynomial in Bernstein form of degree n. The coefficients $\beta_{\nu}$ are called Bernstein coefficients or Bézier coefficients. The first few Bernstein basis polynomials from above in monomial form are: $b_{0,1}(x) = 1 - x$, $b_{1,1}(x) = x$; $b_{0,2}(x) = 1 - 2x + x^{2}$, $b_{1,2}(x) = 2x - 2x^{2}$, $b_{2,2}(x) = x^{2}$. Properties The Bernstein basis polynomials have the following properties: $b_{\nu,n}(x) = 0$ if $\nu < 0$ or $\nu > n$; $b_{\nu,n}(x) \geq 0$ for $x \in [0, 1]$; $b_{\nu,n}(1 - x) = b_{n-\nu,n}(x)$; and $b_{\nu,n}(0) = \delta_{\nu,0}$, $b_{\nu,n}(1) = \delta_{\nu,n}$, where $\delta$ is the Kronecker delta function. $b_{\nu,n}$ has a root with multiplicity $\nu$ at point $x = 0$ (note: if $\nu = 0$, there is no root at 0). $b_{\nu,n}$ has a root with multiplicity $n - \nu$ at point $x = 1$ (note: if $\nu = n$, there is no root at 1). The derivative can be written as a combination of two polynomials of lower degree: $b_{\nu,n}'(x) = n\left(b_{\nu-1,n-1}(x) - b_{\nu,n-1}(x)\right)$. The k-th derivative at 0: $b_{\nu,n}^{(k)}(0) = \frac{n!}{(n-k)!} \binom{k}{\nu} (-1)^{\nu+k}$. The k-th derivative at 1: $b_{\nu,n}^{(k)}(1) = (-1)^{n-\nu} \frac{n!}{(n-k)!} \binom{k}{n-\nu}$. The transformation of the Bernstein polynomial to monomials is $b_{\nu,n}(x) = \sum_{k=\nu}^{n} \binom{n}{k} \binom{k}{\nu} (-1)^{k-\nu} x^{k}$, and by the inverse binomial transformation, the reverse transformation is $x^{k} = \sum_{\nu=k}^{n} \frac{\binom{\nu}{k}}{\binom{n}{k}}\, b_{\nu,n}(x)$. The indefinite integral is given by $\int b_{\nu,n}(x)\,dx = \frac{1}{n+1} \sum_{j=\nu+1}^{n+1} b_{j,n+1}(x)$. The definite integral is constant for a given n: $\int_{0}^{1} b_{\nu,n}(x)\,dx = \frac{1}{n+1}$ for all $\nu = 0, \ldots, n$. If $n \neq 0$, then $b_{\nu,n}$ has a unique local maximum on the interval $[0, 1]$ at $x = \nu/n$. This maximum takes the value $\binom{n}{\nu} \nu^{\nu} (n-\nu)^{n-\nu} n^{-n}$. The Bernstein basis polynomials of degree $n$ form a partition of unity: $\sum_{\nu=0}^{n} b_{\nu,n}(x) = 1$. By taking the first $x$-derivative of $(x+y)^{n}$, treating $y$ as constant, then substituting the value $y = 1 - x$, it can be shown that $\sum_{\nu=0}^{n} \nu\, b_{\nu,n}(x) = nx$. Similarly the second $x$-derivative of $(x+y)^{n}$, with $y$ again then substituted as $y = 1 - x$, shows that $\sum_{\nu=1}^{n} \nu(\nu - 1)\, b_{\nu,n}(x) = n(n-1)x^{2}$. A Bernstein polynomial can always be written as a linear combination of polynomials of higher degree: $b_{\nu,n-1}(x) = \frac{n-\nu}{n}\, b_{\nu,n}(x) + \frac{\nu+1}{n}\, b_{\nu+1,n}(x)$. The expansion of the Chebyshev Polynomials of the First Kind into the Bernstein basis is Approximating continuous functions Let ƒ be a continuous function on the interval [0, 1]. Consider the Bernstein polynomial $B_n(f)(x) = \sum_{\nu=0}^{n} f\!\left(\frac{\nu}{n}\right) b_{\nu,n}(x)$. It can be shown that $\lim_{n\to\infty} B_n(f)(x) = f(x)$ uniformly on the interval [0, 1]. Bernstein polynomials thus provide one way to prove the Weierstrass approximation theorem that every real-valued continuous function on a real interval [a, b] can be uniformly approximated by polynomial functions over $\mathbb{R}$. A more general statement for a function with continuous kth derivative is $\left\| (B_n f)^{(k)} \right\|_{\infty} \le \frac{(n)_k}{n^{k}} \left\| f^{(k)} \right\|_{\infty}$ and $\left\| f^{(k)} - (B_n f)^{(k)} \right\|_{\infty} \to 0$, where additionally $\frac{(n)_k}{n^{k}} = \left(1 - \frac{1}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right)$ is an eigenvalue of $B_n$; the corresponding eigenfunction is a polynomial of degree k. Probabilistic proof This proof follows Bernstein's original proof of 1912. See also Feller (1966) or Koralov & Sinai (2007). Motivation We will first give intuition for Bernstein's original proof. A continuous function on a compact interval must be uniformly continuous. Thus, the value of any continuous function can be uniformly approximated by its value on some finite net of points in the interval.
This consideration renders the approximation theorem intuitive, given that polynomials should be flexible enough to match (or nearly match) a finite number of pairs $(x, f(x))$. To do so, we might (1) construct a function close to $f$ on a lattice, and then (2) smooth out the function outside the lattice to make a polynomial. The probabilistic proof below simply provides a constructive method to create a polynomial which is approximately equal to $f$ on such a point lattice, given that "smoothing out" a function is not always trivial. Taking the expectation of a random variable with a simple distribution is a common way to smooth. Here, we take advantage of the fact that Bernstein polynomials look like binomial expectations. We split the interval into a lattice of n discrete values. Then, to evaluate any f(x), we evaluate f at one of the n lattice points close to x, randomly chosen by the binomial distribution. The expectation of this approximation technique is polynomial, as it is the expectation of a function of a binomial RV. The proof below illustrates that this achieves a uniform approximation of f. The crux of the proof is to (1) justify replacing an arbitrary point with a binomially chosen lattice point by concentration properties of a binomial distribution, and (2) justify the inference from closeness of points to closeness of function values by uniform continuity. Bernstein's proof Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. Then we have the expected value $\operatorname{E}\left[\tfrac{K}{n}\right] = x$ and $\operatorname{Var}\left[\tfrac{K}{n}\right] = \tfrac{x(1-x)}{n}$. By the weak law of large numbers of probability theory, $\lim_{n\to\infty} P\!\left(\left|\tfrac{K}{n} - x\right| > \delta\right) = 0$ for every δ > 0. Moreover, this relation holds uniformly in x, which can be seen from its proof via Chebyshev's inequality, taking into account that the variance of $\tfrac{1}{n}K$, equal to $\tfrac{1}{n}x(1-x)$, is bounded from above by $\tfrac{1}{4n}$ irrespective of x. Because ƒ, being continuous on a closed bounded interval, must be uniformly continuous on that interval, one infers a statement of the form $\lim_{n\to\infty} P\!\left(\left|f\!\left(\tfrac{K}{n}\right) - f(x)\right| > \varepsilon\right) = 0$ uniformly in x for each $\varepsilon > 0$. Taking into account that ƒ is bounded (on the given interval) one finds that $\operatorname{E}\left[f\!\left(\tfrac{K}{n}\right)\right] \to f(x)$ uniformly in x. To justify this statement, we use a common method in probability theory to convert from closeness in probability to closeness in expectation. One splits the expectation of $\left|f\!\left(\tfrac{K}{n}\right) - f(x)\right|$ into two parts based on whether or not $\left|\tfrac{K}{n} - x\right| < \delta$. In the interval where the difference does not exceed ε, the expectation clearly cannot exceed ε. In the other interval, the difference still cannot exceed 2M, where M is an upper bound for |ƒ(x)| (since uniformly continuous functions are bounded). However, by our 'closeness in probability' statement, this interval cannot have probability greater than ε. Thus, this part of the expectation contributes no more than 2M times ε. Then the total expectation is no more than $\varepsilon + 2M\varepsilon$, which can be made arbitrarily small by choosing a small ε. Finally, one observes that the absolute value of the difference between expectations never exceeds the expectation of the absolute value of the difference, a consequence of Hölder's inequality. Thus, using the above expectation, we see that (uniformly in x) $\left|\operatorname{E}\left[f\!\left(\tfrac{K}{n}\right)\right] - \operatorname{E}\left[f(x)\right]\right| \le \operatorname{E}\left[\left|f\!\left(\tfrac{K}{n}\right) - f(x)\right|\right] \to 0$. Noting that our randomness was over K while x is constant, the expectation of f(x) is just equal to f(x). But then we have shown that $\operatorname{E}_x\!\left[f\!\left(\tfrac{K}{n}\right)\right]$ converges to f(x). Then we will be done if $\operatorname{E}_x\!\left[f\!\left(\tfrac{K}{n}\right)\right]$ is a polynomial in x (the subscript reminding us that x controls the distribution of K).
Indeed it is: $\operatorname{E}_x\!\left[f\!\left(\tfrac{K}{n}\right)\right] = \sum_{\nu=0}^{n} f\!\left(\tfrac{\nu}{n}\right) \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = B_n(f)(x)$. Uniform convergence rates between functions In the above proof, recall that convergence in each limit involving f depends on the uniform continuity of f, which implies a rate of convergence dependent on f's modulus of continuity. It also depends on 'M', the absolute bound of the function, although this can be bypassed if one bounds the modulus of continuity and the interval size. Thus, the approximation only holds uniformly across x for a fixed f, but one can readily extend the proof to uniformly approximate a set of functions with a set of Bernstein polynomials in the context of equicontinuity. Elementary proof The probabilistic proof can also be rephrased in an elementary way, using the underlying probabilistic ideas but proceeding by direct verification: The following identities can be verified: (1) $\sum_{\nu=0}^{n} \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = 1$ ("probability"), (2) $\sum_{\nu=0}^{n} \tfrac{\nu}{n} \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = x$ ("mean"), (3) $\sum_{\nu=0}^{n} \left(x - \tfrac{\nu}{n}\right)^{2} \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu} = \tfrac{x(1-x)}{n}$ ("variance"). In fact, by the binomial theorem $(1+t)^{n} = \sum_{k} \binom{n}{k} t^{k}$, and this equation can be applied twice to $t\,\tfrac{d}{dt}$. The identities (1), (2), and (3) follow easily using the substitution $t = x/(1-x)$. Within these three identities, use the above basis polynomial notation $b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu}$ and let $f_n(x) = \sum_{\nu=0}^{n} f\!\left(\tfrac{\nu}{n}\right) b_{\nu,n}(x)$. Thus, by identity (1), $f_n(x) - f(x) = \sum_{\nu=0}^{n} \left[f\!\left(\tfrac{\nu}{n}\right) - f(x)\right] b_{\nu,n}(x)$, so that $|f_n(x) - f(x)| \le \sum_{\nu=0}^{n} \left|f\!\left(\tfrac{\nu}{n}\right) - f(x)\right| b_{\nu,n}(x)$. Since f is uniformly continuous, given $\varepsilon > 0$, there is a $\delta > 0$ such that $|f(a) - f(b)| < \varepsilon$ whenever $|a - b| < \delta$. Moreover, by continuity, $M = \sup |f| < \infty$. But then $|f_n(x) - f(x)| \le \sum_{|\nu/n - x| < \delta} \left|f\!\left(\tfrac{\nu}{n}\right) - f(x)\right| b_{\nu,n}(x) + \sum_{|\nu/n - x| \ge \delta} \left|f\!\left(\tfrac{\nu}{n}\right) - f(x)\right| b_{\nu,n}(x)$. The first sum is less than ε. On the other hand, by identity (3) above, and since $|\nu/n - x| \ge \delta$, the second sum is bounded by $2M$ times $\sum_{|\nu/n - x| \ge \delta} b_{\nu,n}(x) \le \frac{x(1-x)}{n\delta^{2}} \le \frac{1}{4 n \delta^{2}}$ (Chebyshev's inequality). It follows that the polynomials fn tend to f uniformly. Generalizations to higher dimension Bernstein polynomials can be generalized to $k$ dimensions: the resulting polynomials have the form $B_{n_1,\ldots,n_k}(f)(x_1, \ldots, x_k) = \sum_{\nu_1=0}^{n_1} \cdots \sum_{\nu_k=0}^{n_k} f\!\left(\tfrac{\nu_1}{n_1}, \ldots, \tfrac{\nu_k}{n_k}\right) \prod_{i=1}^{k} b_{\nu_i, n_i}(x_i)$. In the simplest case only products of the unit interval $[0,1]^{k}$ are considered; but, using affine transformations of the line, Bernstein polynomials can also be defined for products $[a_1, b_1] \times \cdots \times [a_k, b_k]$. For a continuous function $f$ on the $k$-fold product of the unit interval, the proof that $f$ can be uniformly approximated by $B_{n_1,\ldots,n_k}(f)$ is a straightforward extension of Bernstein's proof in one dimension. See also Polynomial interpolation Newton form Lagrange form Binomial QMF (also known as Daubechies wavelet) Notes References , English translation , Russian edition first published in 1940 External links from University of California, Davis. Note the error in the summation limits in the first formula on page 9. Feature Column from American Mathematical Society Numerical analysis Polynomials Articles containing proofs
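The uniform convergence above is easy to check numerically. This is a minimal Python sketch (the function names are mine) that evaluates $B_n(f)$ directly from the lattice definition and reports the maximum error against $f(x) = |x - 1/2|$, a continuous but non-smooth test function:

```python
from math import comb

def bernstein_approx(f, n: int, x: float) -> float:
    """Evaluate B_n(f)(x) = sum_{nu=0}^{n} f(nu/n) * C(n, nu) * x^nu * (1-x)^(n-nu)."""
    return sum(f(nu / n) * comb(n, nu) * x**nu * (1 - x)**(n - nu)
               for nu in range(n + 1))

f = lambda x: abs(x - 0.5)  # continuous on [0, 1], not differentiable at 1/2

grid = [i / 100 for i in range(101)]  # sample points of [0, 1]
for n in (10, 50, 200):
    max_err = max(abs(bernstein_approx(f, n, x) - f(x)) for x in grid)
    print(f"n = {n:4d}  max |B_n(f) - f| ~ {max_err:.4f}")
# The maximum error shrinks as n grows, illustrating uniform convergence,
# but slowly, at a rate governed by the modulus of continuity of f.
```

For a production setting one would evaluate via de Casteljau's algorithm instead, as the article notes, since the direct monomial powers above lose numerical stability for large n.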
Bernstein polynomial
[ "Mathematics" ]
1,865
[ "Polynomials", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Articles containing proofs", "Approximations", "Algebra" ]
340,198
https://en.wikipedia.org/wiki/Pointwise%20convergence
In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function. It is weaker than uniform convergence, to which it is often compared. Definition Suppose that $X$ is a set and $Y$ is a topological space, such as the real or complex numbers or a metric space, for example. A sequence of functions $(f_n)$, all having the same domain $X$ and codomain $Y$, is said to converge pointwise to a given function $f$, often written as $\lim_{n\to\infty} f_n = f\ \text{pointwise}$, if (and only if) the limit of the sequence evaluated at each point $x$ in the domain of $f$ is equal to $f(x)$, written as $\lim_{n\to\infty} f_n(x) = f(x)$. The function $f$ is said to be the pointwise limit function of the $f_n$. The definition easily generalizes from sequences to nets $(f_\alpha)$. We say $(f_\alpha)$ converges pointwise to $f$, written as $\lim_\alpha f_\alpha = f\ \text{pointwise}$, if (and only if) $f(x)$ is the unique accumulation point of the net evaluated at each point $x$ in the domain of $f$, written as $\lim_\alpha f_\alpha(x) = f(x)$. Sometimes, authors use the term bounded pointwise convergence when there is a constant $C$ such that $|f_n(x)| < C$ for all $n$ and $x$. Properties This concept is often contrasted with uniform convergence. To say that $f_n \to f$ uniformly means that $\lim_{n\to\infty} \sup\{\,|f_n(x) - f(x)| : x \in A\,\} = 0$, where $A$ is the common domain of $f$ and the $f_n$, and $\sup$ stands for the supremum. That is a stronger statement than the assertion of pointwise convergence: every uniformly convergent sequence is pointwise convergent, to the same limiting function, but some pointwise convergent sequences are not uniformly convergent. For example, if $f_n$ is a sequence of functions defined by $f_n(x) = x^n$, then $f_n \to 0$ pointwise on the interval $[0, 1)$, but not uniformly. The pointwise limit of a sequence of continuous functions may be a discontinuous function, but only if the convergence is not uniform. For example, $f(x) = \lim_{n\to\infty} \cos^{2n}(\pi x)$ takes the value $1$ when $x$ is an integer and $0$ when $x$ is not an integer, and so is discontinuous at every integer. The values of the functions need not be real numbers, but may be in any topological space, in order that the concept of pointwise convergence make sense. Uniform convergence, on the other hand, does not make sense for functions taking values in topological spaces generally, but makes sense for functions taking values in metric spaces, and, more generally, in uniform spaces. Topology Let $Y^X$ denote the set of all functions from some given set $X$ into some topological space $Y$. As described in the article on characterizations of the category of topological spaces, if certain conditions are met then it is possible to define a unique topology on a set in terms of which nets do and do not converge. The definition of pointwise convergence meets these conditions and so it induces a topology, called the topology of pointwise convergence, on the set of all functions of the form $f : X \to Y$. A net in $Y^X$ converges in this topology if and only if it converges pointwise. The topology of pointwise convergence is the same as convergence in the product topology on the space $Y^X$, where $X$ is the domain and $Y$ is the codomain. Explicitly, if $F$ is a set of functions from some set $X$ into some topological space $Y$, then the topology of pointwise convergence on $F$ is equal to the subspace topology that it inherits from the product space $\prod_{x \in X} Y$ when $F$ is identified as a subset of this Cartesian product via the canonical inclusion map defined by $f \mapsto (f(x))_{x \in X}$. If the codomain $Y$ is compact, then by Tychonoff's theorem, the space $Y^X$ is also compact. Almost everywhere convergence In measure theory, one talks about almost everywhere convergence of a sequence of measurable functions defined on a measurable space. That means pointwise convergence almost everywhere, that is, on a subset of the domain whose complement has measure zero.
Egorov's theorem states that pointwise convergence almost everywhere on a set of finite measure implies uniform convergence on a slightly smaller set. Almost everywhere pointwise convergence on the space of functions on a measure space does not define the structure of a topology on the space of measurable functions on a measure space (although it is a convergence structure). For, in a topological space, when every subsequence of a sequence has itself a subsequence with the same subsequential limit, the sequence itself must converge to that limit. But consider the sequence of so-called "galloping rectangles" functions, which are defined using the floor function: let $b_n = \lfloor \log_2 n \rfloor$ and $k_n = n \bmod 2^{b_n}$, and let $f_n$ be the indicator function of the interval $\left[\tfrac{k_n}{2^{b_n}}, \tfrac{k_n + 1}{2^{b_n}}\right]$. Then any subsequence of the sequence $(f_n)$ has a sub-subsequence which itself converges almost everywhere to zero, for example, the subsequence of functions which do not vanish at $x = 0$. But at no point does the original sequence converge pointwise to zero. Hence, unlike convergence in measure and $L^p$ convergence, pointwise convergence almost everywhere is not the convergence of any topology on the space of functions. See also References Convergence (mathematics) Measure theory Topological spaces Topology of function spaces
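To see the pointwise-but-not-uniform behaviour of the $f_n(x) = x^n$ example from the Properties section numerically, here is a minimal Python sketch (all names are illustrative) comparing the pointwise limit with the supremum of $|f_n - 0|$ over $[0, 1)$:

```python
def f_n(n: int, x: float) -> float:
    """The n-th function of the sequence f_n(x) = x^n."""
    return x ** n

# Pointwise: at each fixed x in [0, 1), x^n -> 0 as n -> infinity.
for x in (0.1, 0.5, 0.9, 0.99):
    print(f"x = {x}: f_1000(x) = {f_n(1000, x):.3e}")

# Not uniform: the sup over [0, 1) stays near 1 for every n,
# because points close to 1 converge arbitrarily slowly.
grid = [i / 10_000 for i in range(10_000)]  # samples of [0, 1)
for n in (10, 100, 1000):
    sup_err = max(f_n(n, x) for x in grid)
    print(f"n = {n:4d}: sup |f_n - 0| ~ {sup_err:.4f}")
```

Every fixed point reports a value near zero for large n, while the supremum over the interval never drops toward zero, which is exactly the gap between the two modes of convergence.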
Pointwise convergence
[ "Mathematics" ]
940
[ "Sequences and series", "Functions and mappings", "Mathematical structures", "Convergence (mathematics)", "Mathematical objects", "Space (mathematics)", "Topological spaces", "Topology", "Mathematical relations" ]
340,201
https://en.wikipedia.org/wiki/Soundproofing
Soundproofing is any means of impeding sound propagation. There are several methods employed including increasing the distance between the source and receiver, decoupling, using noise barriers to reflect or absorb the energy of the sound waves, using damping structures such as sound baffles for absorption, or using active antinoise sound generators. Acoustic quieting and noise control can be used to limit unwanted noise. Soundproofing can reduce the transmission of unwanted direct sound waves from the source to an involuntary listener through the use of distance and intervening objects in the sound path (see sound transmission class and sound reduction index). Soundproofing can suppress unwanted indirect sound waves such as reflections that cause echoes and resonances that cause reverberation. Techniques Absorption Sound-absorbing material controls reverberant sound pressure levels within a cavity, enclosure or room. Synthetic absorption materials are porous, referring to open cell foam (acoustic foam, soundproof foam). Fibrous absorption materials such as cellulose, mineral wool, fiberglass, and sheep's wool are more commonly used to deaden resonant frequencies within a cavity (wall, floor, or ceiling insulation), serving a dual purpose along with their thermal insulation properties. Both fibrous and porous absorption materials are used to create acoustic panels, which absorb sound reflections in a room, improving speech intelligibility. Porous absorbers Porous absorbers, typically open cell rubber foams or melamine sponges, absorb noise by friction within the cell structure. Porous open cell foams are highly effective noise absorbers across a broad range of medium-high frequencies. Performance can be less impressive at lower frequencies. The exact absorption profile of a porous open-cell foam will be determined by a number of factors including cell size, tortuosity, porosity, thickness, and density. The absorption aspect in soundproofing should not be confused with sound-absorbing panels used in acoustic treatments. Absorption in this sense refers to reducing a resonating frequency in a cavity by installing insulation between walls, ceilings or floors. Acoustic panels can play a role in treatment by reducing reflections that make the overall sound in the source room louder, after walls, ceilings, and floors have been soundproofed. Resonant absorbers Resonant panels, Helmholtz resonators and other resonant absorbers work by damping a sound wave as they reflect it. Unlike porous absorbers, resonant absorbers are most effective at low-medium frequencies and the absorption of resonant absorbers is matched to a narrow frequency range. Damping Damping serves to reduce resonance in the room, by absorption or redirection through reflection or diffusion. Absorption reduces the overall sound level, whereas redirection makes unwanted sound harmless or even beneficial by reducing coherence. Damping can be separately applied to reduce the acoustic resonance in the air or to reduce mechanical resonance in the structure of the room itself or things in the room. Decoupling Creating separation between a sound source and any form of adjoining mass, hindering the direct pathway for sound transfer. Distance The energy density of sound waves decreases as they spread out, so increasing the distance between the receiver and source results in a progressively lesser intensity of sound at the receiver.
In a normal three-dimensional setting, with a point source and point receptor, the intensity of sound waves will be attenuated according to the inverse square of the distance from the source. Mass Adding dense material to treatment helps stop sound waves from exiting a source wall, ceiling or floor. Materials include mass-loaded vinyl, soundproof sheetrock or drywall, plywood, fibreboard, concrete or rubber. Different widths and densities in soundproofing material reduce sound within a variable frequency range. Reflection When sound waves hit a medium, the reflection of that sound is dependent on the dissimilarity of the material it comes in contact with. Sound hitting a concrete surface will result in a much different reflection than if the sound were to hit a softer medium such as fiberglass. In an outdoor environment such as highway engineering, embankments or paneling are often used to reflect sound upwards into the sky. Diffusion If a specular reflection from a hard flat surface is giving a problematic echo then an acoustic diffuser may be applied to the surface. It will scatter sound in all directions. Active noise control In active noise control, a microphone is used to pick up the sound that is then analyzed by a computer; then, sound waves with opposite polarity (180° phase at all frequencies) are output through a speaker, causing destructive interference and canceling much of the noise. Applications Residential Residential sound programs aim to decrease or eliminate the effects of exterior noise. The main focus of a residential sound program in existing structures is the windows and doors. Solid wood doors are a better sound barrier than hollow doors. Curtains can be used to dampen sound, either through use of heavy materials or through the use of air chambers known as honeycombs. Single-, double- and triple-honeycomb designs achieve relatively greater degrees of sound damping. The primary soundproofing limit of curtains is the lack of a seal at the edge of the curtain, although this may be alleviated with the use of sealing features, such as hook and loop fastener, adhesive, magnets, or other materials. The thickness of glass will play a role when diagnosing sound leakage. Double-pane windows achieve somewhat greater sound damping than single-pane windows when well-sealed into the opening of the window frame and wall. Significant noise reduction can also be achieved by installing a second interior window. In this case, the exterior window remains in place while a slider or hung window is installed within the same wall openings. In the US, the FAA offers sound-reducing improvements for homes that fall within a noise contour where the average sound level meets or exceeds a specified threshold, as part of its Residential Sound Insulation Program. The program provides solid-core wood entry doors plus windows and storm doors. Ceilings Sealing gaps and cracks around electrical wiring, water pipes and ductwork using acoustical caulk or spray foam will significantly reduce unwanted noise as a preliminary step for ceiling soundproofing. Acoustical caulk should be used along the perimeter of the wall and around all fixtures and duct registers to further seal the treatment. Mineral wool insulation is most commonly used in soundproofing for its density and low cost compared to other soundproofing materials. Spray foam insulation should only be used to fill gaps and cracks or as a 1-2 inch layer before installing mineral wool. Cured spray foam and other closed-cell foam can be a sound conductor.
Spray foam is not porous enough to absorb sound and is also not dense enough to stop sound. An effective method to reduce impact noise is the "resilient isolation channel". The channels decouple the drywall from the joists, reducing the transfer of vibration. Walls Mass is the only way to stop sound. Mass refers to drywall, plywood or concrete. Mass-loaded vinyl (MLV) is used to dampen or weaken sound waves between layers of mass. Use of a viscoelastic damping compound or MLV converts sound waves into heat, weakening the waves before they reach the next layer of mass. It is important to use multiple layers of mass, in different widths and densities, to optimize any given soundproofing treatment. Installing soundproof drywall is recommended for its higher sound transmission class (STC) value. Soundproof drywall in combination with a viscoelastic compound may achieve a noise reduction of STC 60+. Walls are filled with mineral wool insulation. Depending on the desired level of treatment, two layers of insulation may be required. Outlets, light switches, and electrical boxes are weak points in any given soundproofing treatment. Electrical boxes should be wrapped in clay or putty and backed with MLV. After switch plates, outlet covers and lights are installed, acoustical caulking should be applied around the perimeter of the plates or fixtures. Floors Decoupling between the joist and subfloor plywood using neoprene joist tape or u-shaped rubber spacers helps create soundproof flooring. An additional layer of plywood can be installed with a viscoelastic compound. Mass loaded vinyl, in combination with open-cell rubber or a closed-cell foam floor underlayment, will further reduce sound transmission. After applying these techniques, hardwood flooring or carpeting can be installed. Additional area rugs and furniture will help reduce unwanted reflection within the room. Room within a room A room within a room (RWAR) is one method of isolating sound and preventing it from transmitting to the outside world where it may be undesirable. Most sound transfer from a room to the outside occurs through mechanical means. The vibration passes directly through the brick, woodwork and other solid structural elements. When it meets with an element such as a wall, ceiling, floor or window, which acts as a sounding board, the vibration is amplified and heard in the second space. A mechanical transmission is much faster, more efficient and more readily amplified than an airborne transmission of the same initial strength. The use of acoustic foam and other absorbent means is less effective against this transmitted vibration. The transmission can be stopped by breaking the connection between the room that contains the noise source and the outside world. This is called acoustic decoupling. Commercial Restaurants, schools, office businesses, and healthcare facilities use architectural acoustics to reduce noise for their customers. In the United States, OSHA has requirements regulating the length of exposure of workers to certain levels of noise. For educators and students, improving the sound quality of an environment will subsequently improve student learning, concentration, and teacher-student inter-communications. In 2014, a research study conducted by Applied Science revealed 86% of students perceived their instructors more intelligibly, while 66% of students reported experiencing higher concentration levels after sound-absorbing materials were incorporated into the classroom. 
Automotive Automotive soundproofing aims to decrease or eliminate the effects of exterior noise, primarily engine, exhaust and tire noise across a wide frequency range. A panel damping material is fitted which reduces the vibration of the vehicle's body panels when they are excited by one of the many high-energy sound sources in play when the vehicle is in use. There are many complex noises created within vehicles which change with the driving environment and speed at which the vehicle travels. Significant noise reductions of up to 8 dB can be achieved by installing a combination of different types of materials. The automotive environment limits the thickness of materials that can be used, but combinations of dampers, barriers, and absorbers are common. Common materials include felt, foam, polyester, and polypropylene blend materials. Waterproofing may be necessary depending on the materials used. Acoustic foam can be applied in different areas of a vehicle during manufacture to reduce cabin noise. Foams also have cost and performance advantages in installation since foam material can expand and fill cavities after application and also prevent leaks and some gases from entering the vehicle. Vehicle soundproofing can reduce wind, engine, road, and tire noise. Vehicle soundproofing can reduce sound inside a vehicle from five to 20 decibels. Surface-damping materials are very effective at reducing structure-borne noise. Passive damping materials have been used since the early 1960s in the aerospace industry. Over the years, advances in material manufacturing and the development of more efficient analytical and experimental tools to characterize complex dynamic behaviors enabled the expansion of the usage of these materials to the automotive industry. Nowadays, multiple viscoelastic damping pads are usually attached to the body in order to attenuate higher-order structural panel modes that significantly contribute to the overall noise level inside the cabin. Traditionally, experimental techniques are used to optimize the size and location of damping treatments. In particular, laser vibrometer-type tests are often conducted on the body in white structures enabling the fast acquisition of a large number of measurement points with a good spatial resolution. However, testing a complete vehicle is mostly infeasible, requiring evaluation of every subsystem individually, hence limiting the usability of this technology in a fast and efficient way. Alternatively, structural vibrations can also be acoustically measured using particle velocity sensors located near a vibrating structure. Several studies have revealed the potential of particle velocity sensors for characterizing structural vibrations, which accelerates the entire testing process when combined with scanning techniques. Noise barriers Since the early 1970s, it has become common practice in the United States and other industrialized countries to engineer noise barriers along major highways to protect adjacent residents from intruding roadway noise. The Federal Highway Administration (FHWA) in conjunction with State Highway Administration (SHA) adopted Federal Regulation (23 CFR 772) requiring each state to adopt their own policy in regards to abatement of highway traffic noise. Engineering techniques have been developed to predict an effective geometry for the noise barrier design in a particular real-world situation. Noise barriers may be constructed of wood, masonry, earth or a combination thereof. 
See also Acoustic transmission Acoustiblok Hearing test Noise pollution Noise regulation Recording studio References Acoustics Fluid dynamics Noise reduction Sound Noise control
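The inverse-square behaviour described under the Distance technique translates into a simple rule for sound pressure level: each doubling of distance from a point source drops the level by about 6 dB. A minimal Python sketch (the reference level and distances are illustrative values, not from the article):

```python
import math

def spl_at_distance(spl_ref_db: float, r_ref_m: float, r_m: float) -> float:
    """Sound pressure level of a point source: L2 = L1 - 20*log10(r2/r1),
    the decibel form of inverse-square attenuation of intensity."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# A source measured at 90 dB at 1 m (illustrative numbers):
for r in (1, 2, 4, 8, 16):
    print(f"{r:2d} m: {spl_at_distance(90.0, 1.0, r):.1f} dB")
# Each doubling of distance loses ~6 dB, which is why distance alone
# counts as a soundproofing technique in the Distance section above.
```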
Soundproofing
[ "Physics", "Chemistry", "Engineering" ]
2,705
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Piping", "Fluid dynamics" ]
340,284
https://en.wikipedia.org/wiki/Electron%20ionization
Electron ionization (EI, formerly known as electron impact ionization and electron bombardment ionization) is an ionization method in which energetic electrons interact with solid or gas phase atoms or molecules to produce ions. EI was one of the first ionization techniques developed for mass spectrometry, and it remains a popular ionization technique. It is considered a hard (high-fragmentation) ionization method, since it uses highly energetic electrons to produce ions. This leads to extensive fragmentation, which can be helpful for structure determination of unknown compounds. EI is most useful for organic compounds which have a molecular weight below 600 amu. Several other thermally stable and volatile compounds in solid, liquid and gas states can also be detected with the use of this technique when coupled with various separation methods. History Electron ionization was first described in 1918 by the Canadian-American physicist Arthur J. Dempster in the article "A new method of positive ray analysis". It was the first modern mass spectrometer and used positive rays to determine the ratio of the mass to charge of various constituents. In this method, the ion source used an electron beam directed at a solid surface. The anode was made cylindrical in shape using the metal which was to be studied. Subsequently, it was heated by a concentric coil and then was bombarded with electrons. Using this method, the two isotopes of lithium and three isotopes of magnesium, with their atomic weights and relative proportions, were determined. Since then this technique has been used with further modifications and developments. The use of a focused monoenergetic beam of electrons for ionization of gas phase atoms and molecules was developed by Bleakney in 1929. Principle of operation In this process, an electron from the analyte molecule (M) is expelled during the collision process to convert the molecule to a positive ion with an odd number of electrons. The following gas phase reaction describes the electron ionization process: M + e⁻ → M⁺• + 2e⁻, where M is the analyte molecule being ionized, e⁻ is the electron and M⁺• is the resulting molecular ion. In an EI ion source, electrons are produced through thermionic emission by heating a wire filament that has electric current running through it. The kinetic energy of the bombarding electrons should be higher than the ionization energy of the sample molecule. The electrons are accelerated to 70 eV in the region between the filament and the entrance to the ion source block. The sample under investigation, which contains the neutral molecules, is introduced to the ion source in a perpendicular orientation to the electron beam. Close passage of highly energetic electrons at low pressure (ca. 10⁻⁵ to 10⁻⁶ Torr) causes large fluctuations in the electric field around the neutral molecules and induces ionization and fragmentation. The fragmentation in electron ionization can be described using Born–Oppenheimer potential curves as in the diagram. The red arrow shows the electron impact energy, which is enough to remove an electron from the analyte and form a molecular ion as a non-dissociative result. Because the energy supplied by 70 eV electrons is higher than needed to form the molecular ion, several other bond dissociation reactions can be seen as dissociative results, shown by the blue arrow in the diagram. These ions are known as second-generation product ions.
The radical cation products are then directed towards the mass analyzer by a repeller electrode. The ionization process often follows predictable cleavage reactions that give rise to fragment ions which, following detection and signal processing, convey structural information about the analyte. The efficiency of EI The yield of the electron ionization process is raised by increasing the ionization efficiency. In order to achieve higher ionization efficiency there should be an optimized filament current, emission current, and ionizing current. The current supplied to the filament to heat it to incandescence is called the filament current. The emission current is the current measured between the filament and the electron entry slit. The ionizing current is the rate of electron arrival at the trap. It is a direct measure of the number of electrons in the chamber that are available for ionization. The sample ion current (I+) is the measure of the ionization rate. This can be enhanced by manipulation of the ion extraction efficiency (β), the total ionizing cross section (Qi), the effective ionizing path length (L), the concentration of the sample molecules ([N]) and the ionizing current (Ie). The equation can be shown as follows: $I^{+} = \beta\, Q_{i}\, L\, [N]\, I_{e}$. The ion extraction efficiency (β) can be optimized by increasing the voltage of both the repeller and the acceleration region. Since the ionization cross section depends on the chemical nature of the sample and the energy of the ionizing electrons, a standard value of 70 eV is used. At low energies (around 20 eV), the interactions between the electrons and the analyte molecules do not transfer enough energy to cause ionization. At around 70 eV, the de Broglie wavelength of the electrons matches the length of typical bonds in organic molecules (about 0.14 nm) and energy transfer to organic analyte molecules is maximized, leading to the strongest possible ionization and fragmentation. Under these conditions, about 1 in 1000 analyte molecules in the source are ionized. At higher energies, the de Broglie wavelength of the electrons becomes smaller than the bond lengths in typical analytes; the molecules then become "transparent" to the electrons and ionization efficiency decreases. The effective ionizing path length (L) can be increased by using a weak magnetic field. But the most practical way to increase the sample current is to operate the ion source at a higher ionizing current (Ie). Instrumentation A schematic diagram of instrumentation which can be used for electron ionization is shown to the right. The ion source block is made out of metal. As the electron source, the cathode, which can be a thin filament of tungsten or rhenium wire, is inserted through a slit into the source block. Then it is heated up to an incandescent temperature to emit electrons. A potential of 70 V is applied between the cathode and the source block to accelerate the electrons to 70 eV kinetic energy to produce positive ions. The potential of the anode (electron trap) is slightly positive and it is placed on the outside of the ionization chamber, directly opposite the cathode. The unused electrons are collected by this electron trap. The sample is introduced through the sample hole. To increase the ionization process, a weak magnetic field is applied parallel to the direction of the electrons' travel. Because of this, electrons travel in a narrow helical path, which increases their path length.
The positive ions that are generated are accelerated by the repeller electrode into the accelerating region through the slit in the source block. By applying a potential to the ion source and maintaining the exit slit at ground potential, ions enter the mass analyzer with a fixed kinetic energy. To avoid condensation of the sample, the source block is heated to approximately 300 °C. Applications Since the early 20th century electron ionization has been one of the most popular ionization techniques because of its large number of applications. These applications can be broadly categorized by the method of sample insertion used. Gaseous and highly volatile liquid samples use a vacuum manifold, solids and less volatile liquids use a direct insertion probe, and complex mixtures use gas chromatography or liquid chromatography. Vacuum manifold In this method the sample is first inserted into a heated sample reservoir in the vacuum manifold. It then escapes into the ionization chamber through a pinhole. This method is useful with highly volatile samples that may not be compatible with other sample introduction methods. Direct insertion EI-MS In this method, the probe is manufactured from a long metal channel which ends in a well for holding a sample capillary. The probe is inserted into the source block through a vacuum lock. The sample is introduced to the well using a glass capillary. Next the probe is quickly heated to the desired temperature to vaporize the sample. Using this probe the sample can be positioned very close to the ionization region. Analysis of archaeological materials Direct insertion electron ionization mass spectrometry (direct insertion EI-MS) has been used for the identification of archaeological adhesives such as tars, resins and waxes found during excavations on archaeological sites. These samples are typically investigated using gas chromatography–MS with extraction, purification, and derivatization of the samples. Because these samples were deposited in prehistoric periods, they are often preserved only in small amounts. Using direct insertion EI-MS, ancient organic remains such as pine and pistacia resins, birch bark tar, beeswax, and plant oils dating as far back as the Bronze and Iron Age periods have been analyzed directly. The advantages of this technique are that less sample is required and sample preparation is minimized. A study of the organic material present as coatings in Roman and Egyptian amphorae, in which both direct insertion-MS and gas chromatography-MS were used and compared, can be taken as an example for archaeological resinous materials. This study revealed that the direct insertion procedure is a fast, straightforward and unique tool suitable for the screening of organic archaeological materials, which can reveal information about the major constituents within a sample. The method provides information on the degree of oxidation and the class of materials present. As a drawback, less abundant components of the sample may not be identified. Characterization of synthetic carbon clusters Another application of direct insertion EI-MS is the characterization of novel synthetic carbon clusters isolated in the solid phase. These crystalline materials consist of C60 and C70 in the ratio of 37:1. In one investigation it was shown that the synthetic C60 molecule is remarkably stable and retains its aromatic character.
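Because the ions leave the source with a fixed kinetic energy, their speed follows directly from that energy. A minimal sketch; the 8 kV accelerating potential is an assumed example value, not one given in the text:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def ion_speed_m_per_s(mass_amu: float, charge: int, potential_v: float) -> float:
    """Speed of an ion accelerated through a potential drop, assuming all
    electrical potential energy is converted to kinetic energy."""
    kinetic_j = charge * E_CHARGE * potential_v
    return math.sqrt(2.0 * kinetic_j / (mass_amu * AMU))

for mz in (100, 300, 600):  # hypothetical singly charged ions
    print(f"m/z {mz}: v = {ion_speed_m_per_s(mz, 1, 8000.0):.3e} m/s")
```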
Gas chromatography mass spectrometry Gas chromatography (GC) is the most widely used method in EI-MS for sample insertion. GC can be incorporated for the separation of mixtures of thermally stable and volatile compounds, which are well matched to the electron ionization conditions. Analysis of archaeological materials GC-EI-MS has been used for the study and characterization of organic material present in coatings on Roman and Egyptian amphorae. From this analysis scientists found that the material used to waterproof the amphorae was a particular type of resin not native to the archaeological site but imported from another region. One disadvantage of this method was the long analysis time and the requirement of wet chemical pre-treatment. Environmental analysis GC-EI-MS has been successfully used for the determination of pesticide residues in fresh food by single-injection analysis. In this analysis 81 multi-class pesticide residues were identified in vegetables. For this study the pesticides were extracted with dichloromethane and further analyzed using gas chromatography–tandem mass spectrometry (GC–MS–MS). The optimum ionization method, EI or chemical ionization (CI), can be identified for this single injection of the extract. This method is fast, simple and cost-effective, since high numbers of pesticides can be determined by GC with a single injection, considerably reducing the total time for the analysis. Analysis of biological fluids GC-EI-MS can be incorporated for the analysis of biological fluids in several applications. One example is the determination of thirteen synthetic pyrethroid insecticide molecules and their stereoisomers in whole blood. This investigation used a new rapid and sensitive electron ionization gas chromatography–mass spectrometry method in selective ion monitoring (SIM) mode with a single injection of the sample. All the pyrethroid residues were separated using a GC-MS operated in electron ionization mode and quantified in selective ion monitoring mode. The detection of specific residues in blood is a difficult task because of their very low concentrations: as soon as they enter the body, most of these chemicals are excreted. Nevertheless, this method detected residues of different pyrethroids down to levels of 0.05–2 ng/ml. The detection of these insecticides in blood is very important, since even an ultra-small quantity in the body is enough to be harmful to human health, especially in children. The method is very simple and rapid and can therefore be adopted without any matrix interferences. The selective ion monitoring mode provides detection sensitivity down to 0.05 ng/ml. Another application is in protein turnover studies using GC-EI-MS. These studies measure very low levels of d-phenylalanine, which can indicate the enrichment of the amino acid incorporated into tissue protein during studies of human protein synthesis. This method is very efficient since both free and protein-bound d-phenylalanine can be measured using the same mass spectrometer and only a small amount of protein is needed (about 1 mg). Forensic applications GC-EI-MS is also used in forensic science. One example is the analysis of five local anesthetics in blood using headspace solid-phase microextraction (HS-SPME) and gas chromatography–mass spectrometry with electron impact ionization and selected ion monitoring (GC–MS–EI-SIM). Local anesthetics are widely used, but these drugs can sometimes cause medical accidents.
In such cases an accurate, simple, and rapid method for the analysis of local anesthetics is required. GC-EI-MS was used in one case with an analysis time of 65 minutes and a sample size of approximately 0.2 g, a relatively small amount. Another application in forensic practice is the determination of date rape drugs (DRDs) in urine. These drugs are used to incapacitate victims, who are then raped or robbed. The analysis of these drugs is difficult because of their low concentrations in body fluids and the often long time delay between the event and the clinical examination. However, GC-EI-MS provides a simple, sensitive and robust method for the identification, detection and quantification of 128 DRD compounds in urine. Liquid chromatography EI-MS Two recent approaches for coupling capillary-scale liquid chromatography with electron ionization mass spectrometry (LC-EI-MS) can be incorporated for the analysis of various samples: the capillary-scale EI-based LC/MS interface and the direct-EI interface. In the capillary EI interface the nebulizer has been optimized for linearity and sensitivity. The direct-EI interface is a miniaturized interface for nano- and micro-HPLC in which the interfacing process takes place in a suitably modified ion source. Higher sensitivity, linearity, and reproducibility can be obtained because the eluent from the column is completely transferred into the ion source. Using these two interfaces, electron ionization can be successfully incorporated for the analysis of small and medium-sized molecules with various polarities. The most common applications of these interfaces in LC-MS are environmental, such as gradient separations of the pesticides carbaryl, propanil, and chlorpropham using a reversed phase, and pharmaceutical, such as the separation of four drugs: diphenhydramine, amitriptyline, naproxen, and ibuprofen. Another way to categorize the applications of electron ionization is based on the mass analysis technique used. Most applications are found in time-of-flight (TOF) or orthogonal-acceleration TOF mass spectrometry (oa-TOF MS), Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), and quadrupole or ion trap mass spectrometry. Use with time-of-flight mass spectrometry Electron ionization time-of-flight mass spectrometry (EI-TOF MS) is well suited for analytical and basic chemical physics studies. EI-TOF MS is used to find ionization potentials of molecules and radicals, as well as bond dissociation energies for ions and neutral molecules. Another use of this method is to study negative ion chemistry and physics. Autodetachment lifetimes, metastable dissociation, Rydberg electron transfer reactions and field detachment, the SF6 scavenger method for detecting temporary negative ion states, and many other phenomena have been investigated using this technique. In this method the field-free ionization region allows high precision in the electron energy and also high electron energy resolution. Applying electric fields along the ion flight tube allows the measurement of autodetachment and metastable decomposition, as well as field detachment of weakly bound negative ions. The first description of an electron ionization orthogonal-acceleration TOF MS (EI oa-TOFMS) was in 1989. By using "orthogonal acceleration" with the EI ion source, the resolving power and sensitivity were increased.
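The time-of-flight principle mentioned above can be sketched numerically: in an ideal drift tube the flight time scales as the square root of m/z, so neighbouring masses arrive at measurably different times. The tube length and accelerating potential below are assumed example values:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def flight_time_us(mass_amu, charge, potential_v, tube_length_m):
    """Ideal TOF drift time: t = L * sqrt(m / (2*z*e*U))."""
    mass_kg = mass_amu * AMU
    t_s = tube_length_m * math.sqrt(mass_kg / (2.0 * charge * E_CHARGE * potential_v))
    return t_s * 1e6  # seconds -> microseconds

# Assumed example geometry: 1.0 m drift tube, 8 kV acceleration.
t500 = flight_time_us(500, 1, 8000.0, 1.0)
t501 = flight_time_us(501, 1, 8000.0, 1.0)
print(f"m/z 500 arrives at {t500:.3f} us, m/z 501 at {t501:.3f} us "
      f"(separation ~{(t501 - t500) * 1e3:.0f} ns)")
```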
One of the key advantages of oa-TOFMS with EI sources is deployment with gas chromatographic (GC) inlet systems, which allows chromatographic separation of volatile organic compounds to proceed at high speed. Fourier transform ion cyclotron resonance mass spectrometry FT-ICR EI-MS can be used for the analysis of three vacuum gas oil (VGO) distillation fractions boiling at 295–319 °C, 319–456 °C and 456–543 °C. In this method, EI at 10 eV allows soft ionization of aromatic compounds in the vacuum gas oil range. The compositional variations at the molecular level were determined from the elemental composition assignment. Ultra-high resolving power, small sample size, high reproducibility and mass accuracy (< 0.4 ppm) are the special features of this method. The major products were aromatic hydrocarbons in all three samples. In addition, many sulfur-, nitrogen-, and oxygen-containing compounds were directly observed, and the concentration of these heteroatomic species increased with the boiling point. Data analysis gave information about compound types (rings plus double bonds) and their carbon number distributions for hydrocarbon and heteroatomic compounds in the distillation fractions, showing increasing average molecular weight (or carbon number distribution) and aromaticity with increasing boiling temperature of the petroleum fractions. Ion trap mass spectrometry Ion trap EI-MS can be incorporated for the identification and quantitation of nonylphenol polyethoxylate (NPEO) residues and their degradation products, such as nonylphenol polyethoxy carboxylates and carboxyalkylphenol ethoxy carboxylates, in samples of river water and sewage effluent. From this research it was found that ion trap GC-MS is a reliable and convenient analytical approach, with a variety of ionization methods including EI, for the determination of target compounds in environmental samples. Advantages and disadvantages Using EI as the ionization method in mass spectrometry has both advantages and disadvantages. See also Ion source Penning ionization Chemical ionization Spark ionization Thermal ionization References Notes External links NIST Chemistry WebBook Mass Spectrometry. Michigan State University. Ion source Mass spectrometry Scientific techniques
Electron ionization
[ "Physics", "Chemistry" ]
4,051
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
340,291
https://en.wikipedia.org/wiki/Stick%20shaker
A stick shaker is a mechanical device designed to rapidly and noisily vibrate the control yoke (the "stick") of an aircraft, warning the flight crew that an imminent aerodynamic stall has been detected. It is typically present on the majority of large civil jet aircraft, as well as most large military planes. The stick shaker comprises a key component of an aircraft's stall protection system. Accidents, such as the 1963 BAC One-Eleven test crash, were attributable to aerodynamic stalls and motivated aviation regulatory bodies to establish requirements for certain aircraft to be outfitted with stall protection measures, such as the stick shaker and stick pusher, to reduce such occurrences. While the stick shaker has become relatively prevalent amongst airliners and large transport aircraft, such devices are not infallible and require flight crews to be appropriately trained on their functionality and how to respond to their activation. Several instances of aircraft entering stalls have occurred even with properly functioning stick shakers, largely due to pilots reacting improperly. History When many small aircraft approach the critical angle of attack that will result in an aerodynamic stall, the smooth flow of air over the wings is interrupted, causing turbulent airflow at the trailing edge of the wings. Depending on the aircraft size or design, that turbulent air, known as buffet, typically impacts the elevator at the rear end of the aircraft, and that in turn causes vibrations that are transmitted through control cables and can be felt by the pilot on the yoke as violent shaking. This natural shaking of the control yoke serves as an early warning to pilots that a stall is developing. For very large aircraft, fly-by-wire aircraft and some aircraft with complex tail designs, there is no buffet effect on the control yoke, because the turbulent air does not reach the elevator, or because any movement in the elevator from buffet is not transmitted back to the control yoke. This deprives pilots of these aircraft of one of the important early warnings that they are about to enter a stall. Boeing aircraft designers were the first to solve this problem by creating a mechanical device, which they named a stick shaker, that shakes the control yoke in a similar way to how a yoke is shaken naturally in smaller aircraft as the aircraft approaches its critical angle of attack. Stick shakers were being developed as early as 1949. During 1963, a BAC One-Eleven airliner was lost after having crashed during a stall test. The pilots pushed the T-tailed plane past the limits of stall recovery and entered a deep stall state, in which the disturbed air from the stalled wing had rendered the elevator ineffective, directly leading to a loss of control and crash. As a consequence of the crash, a combined stick shaker/pusher system was installed in all production BAC One-Eleven airliners. A wider consequence of the incident was the instatement of a new requirement related to the pilot's ability to identify and overcome stall conditions; a design of transport category aircraft that fails to comply with the specifics of this requirement may be acceptable if the aircraft is equipped with a stick pusher. Following the crash of American Airlines Flight 191 on 25 May 1979, the Federal Aviation Administration (FAA) issued an airworthiness directive, which mandated the installation and operation of stick shakers on both sets of flight controls on most models of the McDonnell Douglas DC-10, a trijet airliner. 
(Previously, only the captain's controls were equipped with a stick shaker on the DC-10; in the case of Flight 191, this single stick shaker had been disabled by a partial electrical power failure early in the accident sequence.) In addition to regulatory pressure, various aircraft manufacturers have endeavoured to devise their own improved stall protection systems, many of which have included the stick shaker. The American aerospace company Boeing had designed and integrated stall warning systems into numerous aircraft that it has produced. A wide range of aircraft have incorporated stick shakers into their cockpits. Textron Aviation's Citation Longitude business jet is one such example, as is the Pilatus PC-24 light business jet, and Bombardier Aviation's Challenger 600 family of business jets. Commercial airliners such as the newer models of the Boeing 737, the Boeing 767, and the Embraer E-Jet E2 family have also included stick shakers in the aircraft's stall protection systems. Function in stall protection systems The stick shaker is a major element of an aircraft's stall protection system. The system is composed of fuselage or wing-mounted angle of attack (AOA) sensors that are connected to an avionics computer, which receives inputs from the AOA sensors along with a variety of other flight systems. When this data indicates an imminent stall condition, the computer actuates both the stick shaker and an auditory alert. The shaker itself is composed of an electric motor connected to a deliberately unbalanced flywheel. When actuated, the shaker induces a forceful, noisy, and entirely unmistakable shaking of the control yoke. This shaking of the control yoke matches the frequency and amplitude of the stick shaking that occurs due to airflow separation in low-speed aircraft as they approach the stall. The stick shaking is intended to act as a backup to the auditory stall alert, in cases where the flight crew may be distracted. Stick pusher Other stall protection systems include the stick pusher, a device that automatically pushes forward on the control yoke, commanding a reduction in the aircraft's angle of attack and thus preventing the aircraft from entering a full stall. In the majority of circumstances, the stick pusher will not activate until shortly after the stick shaker has given its warning of near-stall conditions being detected, and will not activate if the flight crew have performed appropriate actions to reduce the likelihood of stalling by lowering the angle of attack. Under most regulatory regimes, an aircraft's stall protection systems must be tested and armed prior to takeoff, as well as remain armed throughout the flight; for this reason, startup checklists normally include performing such tests as a matter of routine. Audio The vibration of the stick shaker is loud enough that it can be commonly heard on cockpit voice recorder (CVR) recordings of aircraft that have encountered stall conditions. This level of vigorous movement is intentional, the stick shaker having been designed to be impossible to ignore. To unfamiliar flight crews, the stall warning system can come across as aggressive and impatient, which is why it has become commonplace for the system to be introduced to trainee pilots via a flight simulator rather than a live aircraft. To fly without such warning devices would increase the likelihood of the aircraft encountering, and improperly responding to, a stall event.
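The activation sequence described above (shaker and aural alert first, pusher only if the angle of attack keeps increasing) can be sketched as simple threshold logic. This is a toy illustration; the threshold values and structure are hypothetical and do not represent any certified system:

```python
# Hypothetical angle-of-attack (AOA) thresholds in degrees, for
# illustration only; real thresholds vary with configuration and type.
SHAKER_AOA = 12.0  # assumed warning threshold
PUSHER_AOA = 14.0  # assumed intervention threshold

def stall_protection_actions(aoa_deg: float) -> list[str]:
    """Actions a simplified avionics computer would command for a
    given measured angle of attack."""
    actions = []
    if aoa_deg >= SHAKER_AOA:
        actions.append("activate stick shaker and aural alert")
    if aoa_deg >= PUSHER_AOA:
        actions.append("activate stick pusher (nose-down command)")
    return actions or ["no action: normal flight"]

for aoa in (8.0, 12.5, 15.0):
    print(f"AOA {aoa} deg -> {stall_protection_actions(aoa)}")
```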
Flight crew factor During the 2000s, there was a series of accidents that were attributed, at least in part, to flight crews having made improper responses to the activation of the stall warning systems. During the early 2010s, in response to this wave of accidents, the FAA issued guidance urging operators to ensure that flight crews are properly trained on the correct use of these aids. See also 1963 BAC One-Eleven test crash Dual control (aviation) References External links FAA Advisory Circular 120-109, Stall and Stick Pusher Training Manual on Aeroplane Upset Prevention and Recovery Training via icao.int Aircraft controls Mechanical vibrations
Stick shaker
[ "Physics", "Engineering" ]
1,483
[ "Structural engineering", "Mechanics", "Mechanical vibrations" ]
340,440
https://en.wikipedia.org/wiki/Fluid%20mosaic%20model
The fluid mosaic model explains various characteristics regarding the structure of functional cell membranes. According to this biological model, there is a lipid bilayer (a layer two molecules thick, consisting primarily of amphipathic phospholipids) in which protein molecules are embedded. The phospholipid bilayer gives fluidity and elasticity to the membrane. Small amounts of carbohydrates are also found in the cell membrane. The biological model, which was devised by Seymour Jonathan Singer and Garth L. Nicolson in 1972, describes the cell membrane as a two-dimensional liquid where embedded proteins are generally randomly distributed. For example, it is stated that "A prediction of the fluid mosaic model is that the two-dimensional long-range distribution of any integral protein in the plane of the membrane is essentially random." Chemical makeup Experimental evidence The fluid property of functional biological membranes had been determined through labeling experiments, X-ray diffraction, and calorimetry. These studies showed that integral membrane proteins diffuse at rates affected by the viscosity of the lipid bilayer in which they were embedded, and demonstrated that the molecules within the cell membrane are dynamic rather than static. Previous models of biological membranes included the Robertson Unit Membrane Model and the Davson-Danielli Tri-Layer model. These models had proteins present as sheets neighboring a lipid layer, rather than incorporated into the phospholipid bilayer. Other models described repeating, regular units of protein and lipid. These models were not well supported by microscopy and thermodynamic data, and did not accommodate evidence for dynamic membrane properties. An important experiment that provided evidence supporting fluid and dynamic biological membranes was performed by Frye and Edidin. They used Sendai virus to force human and mouse cells to fuse and form a heterokaryon. Using antibody staining, they were able to show that the mouse and human proteins remained segregated in separate halves of the heterokaryon a short time after cell fusion. However, the proteins eventually diffused, and over time the border between the two halves was lost. Lowering the temperature slowed the rate of this diffusion by causing the membrane phospholipids to transition from a fluid to a gel phase. Singer and Nicolson rationalized the results of these experiments using their fluid mosaic model. The fluid mosaic model explains changes in structure and behavior of cell membranes under different temperatures, as well as the association of membrane proteins with the membranes. While Singer and Nicolson had substantial evidence drawn from multiple subfields to support their model, recent advances in fluorescence microscopy and structural biology have validated the fluid mosaic nature of cell membranes. Subsequent developments Membrane asymmetry Additionally, the two leaflets of biological membranes are asymmetric and divided into subdomains composed of specific proteins or lipids, allowing spatial segregation of biological processes associated with membranes. Cholesterol and cholesterol-interacting proteins can concentrate into lipid rafts and constrain cell signaling processes to only these rafts. Another form of asymmetry was shown by the work of Mouritsen and Bloom in 1984, in which they proposed a Mattress Model of lipid-protein interactions to address the biophysical evidence that membranes can vary in thickness and in the hydrophobic matching between lipids and proteins.
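The diffusion established by such experiments is quick on cellular scales. A minimal numerical sketch, assuming a lateral diffusion coefficient of about 1 μm²/s (an order-of-magnitude value; measured coefficients vary with lipid, temperature and phase) and the two-dimensional random-walk relation ⟨r²⟩ = 4Dt:

```python
import math

D_UM2_PER_S = 1.0  # assumed lateral diffusion coefficient, um^2/s

def rms_displacement_um(t_seconds: float) -> float:
    """Root-mean-square displacement for 2D diffusion: <r^2> = 4*D*t."""
    return math.sqrt(4.0 * D_UM2_PER_S * t_seconds)

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:4.1f} s -> rms displacement ~ {rms_displacement_um(t):.1f} um")
# t = 1 s gives ~2 um, consistent with the figure quoted for lipid
# lateral diffusion in the section on lipid movement below.
```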
Non-bilayer membranes The existence of non-bilayer lipid formations with important biological functions was confirmed subsequent to publication of the fluid mosaic model. These membrane structures may be useful when the cell needs to propagate a non-bilayer form, which occurs during cell division and the formation of a gap junction. Membrane curvature The membrane bilayer is not always flat. Local curvature of the membrane can be caused by the asymmetry and non-bilayer organization of lipids, as discussed above. More dramatic and functional curvature is achieved through BAR domains, which bind to phosphatidylinositol on the membrane surface, assisting in vesicle formation, organelle formation and cell division. Curvature development is in constant flux and contributes to the dynamic nature of biological membranes. Lipid movement within the membrane During the 1970s, it was acknowledged that individual lipid molecules undergo free lateral diffusion within each of the layers of the lipid membrane. Diffusion occurs at a high speed, with an average lipid molecule diffusing ~2 μm, approximately the length of a large bacterial cell, in about 1 second. It has also been observed that individual lipid molecules rotate rapidly around their own axes. Moreover, phospholipid molecules can, although they seldom do, migrate from one side of the lipid bilayer to the other (a process known as flip-flop). However, flip-flop movement is enhanced by flippase enzymes. The processes described above influence the disordered nature of lipid molecules and interacting proteins in the lipid membranes, with consequences for membrane fluidity, signaling, trafficking and function. Restrictions to lateral diffusion There are restrictions to the lateral mobility of the lipid and protein components in the fluid membrane imposed by zonation. Early attempts to explain the assembly of membrane zones include the formation of lipid rafts and “cytoskeletal fences”, corrals wherein lipids and membrane proteins can diffuse freely but which they can seldom leave. These ideas remain controversial, and alternative explanations are available, such as the proteolipid code. Lipid rafts Lipid rafts are nanometric membrane platforms with a particular lipid and protein composition that diffuse laterally, navigating on the liquid bilipid layer. Sphingolipids and cholesterol are important building blocks of the lipid rafts. Protein complexes Cell membrane proteins and glycoproteins do not exist as single elements of the lipid membrane, as first proposed by Singer and Nicolson in 1972. Rather, they occur as diffusing complexes within the membrane. The assembly of single molecules into these macromolecular complexes has important functional consequences for the cell, such as ion and metabolite transport, signaling, cell adhesion, and migration. Cytoskeletal fences (corrals) and binding to the extracellular matrix Some proteins embedded in the bilipid layer interact with the extracellular matrix outside the cell, cytoskeleton filaments inside the cell, and septin ring-like structures. These interactions have a strong influence on shape and structure, as well as on compartmentalization. Moreover, they impose physical constraints that restrict the free lateral diffusion of proteins and at least some lipids within the bilipid layer. When integral proteins of the lipid bilayer are tethered to the extracellular matrix, they are unable to diffuse freely. Proteins with a long intracellular domain may collide with a fence formed by cytoskeleton filaments.
Both processes restrict the diffusion of the proteins and lipids directly involved, as well as of other interacting components of the cell membranes. Septins are a family of GTP-binding proteins highly conserved among eukaryotes. Prokaryotes have similar proteins called paraseptins. They form compartmentalizing ring-like structures strongly associated with the cell membranes. Septins are involved in the formation of structures such as cilia and flagella, dendritic spines, and yeast buds. Historical timeline 1895 – Ernest Overton hypothesized that cell membranes are made out of lipids. 1925 – Evert Gorter and François Grendel found that red blood cell membranes are formed by a fatty layer two molecules thick, i.e. they described the bilipid nature of the cell membrane. 1935 – Hugh Davson and James Danielli proposed that lipid membranes are layers composed of proteins and lipids with pore-like structures that allow specific permeability for certain molecules. They then suggested a model for the cell membrane, consisting of a lipid layer surrounded by protein layers on both sides of it. 1957 – J. David Robertson, based on electron microscopy studies, established the "Unit Membrane Hypothesis". This states that all membranes in the cell, i.e. plasma and organelle membranes, have the same structure: a bilayer of phospholipids with monolayers of proteins on both sides of it. 1972 – SJ Singer and GL Nicolson proposed the fluid mosaic model as an explanation for the data and latest evidence regarding the structure and thermodynamics of cell membranes. 1997 – K Simons and E Ikonen proposed the lipid raft theory as an initial explanation of membrane zonation. 2024 – TA Kervin and M Overduin proposed the proteolipid code to fully explain membrane zonation as the lipid raft theory became increasingly controversial. Notes and references Membrane biology Organelles Cell anatomy
Fluid mosaic model
[ "Chemistry" ]
1,788
[ "Membrane biology", "Molecular biology" ]
340,678
https://en.wikipedia.org/wiki/Quantum%20group
In mathematics and theoretical physics, the term quantum group denotes one of a few different kinds of noncommutative algebras with additional structure. These include Drinfeld–Jimbo type quantum groups (which are quasitriangular Hopf algebras), compact matrix quantum groups (which are structures on unital separable C*-algebras), and bicrossproduct quantum groups. Despite their name, they do not themselves have a natural group structure, though they are in some sense 'close' to a group. The term "quantum group" first appeared in the theory of quantum integrable systems, which was then formalized by Vladimir Drinfeld and Michio Jimbo as a particular class of Hopf algebra. The same term is also used for other Hopf algebras that deform or are close to classical Lie groups or Lie algebras, such as a "bicrossproduct" class of quantum groups introduced by Shahn Majid a little after the work of Drinfeld and Jimbo. In Drinfeld's approach, quantum groups arise as Hopf algebras depending on an auxiliary parameter q or h, which become universal enveloping algebras of a certain Lie algebra, frequently semisimple or affine, when q = 1 or h = 0. Closely related are certain dual objects, also Hopf algebras and also called quantum groups, deforming the algebra of functions on the corresponding semisimple algebraic group or a compact Lie group. Intuitive meaning The discovery of quantum groups was quite unexpected since it was known for a long time that compact groups and semisimple Lie algebras are "rigid" objects, in other words, they cannot be "deformed". One of the ideas behind quantum groups is that if we consider a structure that is in a sense equivalent but larger, namely a group algebra or a universal enveloping algebra, then a group algebra or enveloping algebra can be "deformed", although the deformation will no longer remain a group algebra or enveloping algebra. More precisely, deformation can be accomplished within the category of Hopf algebras that are not required to be either commutative or cocommutative. One can think of the deformed object as an algebra of functions on a "noncommutative space", in the spirit of the noncommutative geometry of Alain Connes. This intuition, however, came after particular classes of quantum groups had already proved their usefulness in the study of the quantum Yang–Baxter equation and quantum inverse scattering method developed by the Leningrad School (Ludwig Faddeev, Leon Takhtajan, Evgeny Sklyanin, Nicolai Reshetikhin and Vladimir Korepin) and related work by the Japanese School. The intuition behind the second, bicrossproduct, class of quantum groups was different and came from the search for self-dual objects as an approach to quantum gravity. Drinfeld–Jimbo type quantum groups One type of object commonly called a "quantum group" appeared in the work of Vladimir Drinfeld and Michio Jimbo as a deformation of the universal enveloping algebra of a semisimple Lie algebra or, more generally, a Kac–Moody algebra, in the category of Hopf algebras. The resulting algebra has additional structure, making it into a quasitriangular Hopf algebra. Let A = (aij) be the Cartan matrix of the Kac–Moody algebra, and let q ≠ 0, 1 be a complex number; then the quantum group, Uq(G), where G is the Lie algebra whose Cartan matrix is A, is defined as the unital associative algebra with generators kλ (where λ is an element of the weight lattice, i.e.
2(λ, αi)/(αi, αi) is an integer for all i), and ei and fi (for simple roots, αi), subject to the following relations: kλkμ = kλ+μ, k0 = 1, kλeikλ−1 = q^(λ,αi) ei, kλfikλ−1 = q^−(λ,αi) fi, and eifj − fjei = δij (kαi − k−αi)/(qi − qi−1), where qi = q^((αi,αi)/2). And for i ≠ j we have the q-Serre relations, which are deformations of the Serre relations: Σn=0…1−aij (−1)^n ([1 − aij]qi!/([n]qi! [1 − aij − n]qi!)) ei^(1−aij−n) ej ei^n = 0, and likewise with the ei replaced by the fi, where the q-factorial, the q-analog of the ordinary factorial, is defined recursively using the q-number [n]q = (q^n − q^−n)/(q − q^−1): [n]q! = [n]q [n − 1]q!, with [0]q! = 1. In the limit as q → 1, these relations approach the relations for the universal enveloping algebra U(G), where kλ → 1 and (kλ − k−λ)/(q − q−1) → tλ, and tλ is the element of the Cartan subalgebra satisfying (tλ, h) = λ(h) for all h in the Cartan subalgebra. There are various coassociative coproducts under which these algebras are Hopf algebras; in some versions the set of generators is extended, if required, to include kλ for λ which is expressible as the sum of an element of the weight lattice and half an element of the root lattice. In addition, any Hopf algebra leads to another with reversed coproduct T o Δ, where T is given by T(x ⊗ y) = y ⊗ x, giving three more possible versions. The counit on Uq(A) is the same for all these coproducts: ε(kλ) = 1, ε(ei) = ε(fi) = 0, and the respective antipodes for the above coproducts are determined by the Hopf algebra axioms (the antipode of a bialgebra, when it exists, is unique). Alternatively, the quantum group Uq(G) can be regarded as an algebra over the field C(q), the field of all rational functions of an indeterminate q over C. Similarly, the quantum group Uq(G) can be regarded as an algebra over the field Q(q), the field of all rational functions of an indeterminate q over Q (see below in the section on quantum groups at q = 0). The center of the quantum group can be described by the quantum determinant. Representation theory Just as there are many different types of representations for Kac–Moody algebras and their universal enveloping algebras, so there are many different types of representation for quantum groups. As is the case for all Hopf algebras, Uq(G) has an adjoint representation on itself as a module, with the action being given by Adx(y) = Σ x(1) y S(x(2)), where Δ(x) = Σ x(1) ⊗ x(2) in Sweedler notation. Case 1: q is not a root of unity One important type of representation is a weight representation, and the corresponding module is called a weight module. A weight module is a module with a basis of weight vectors. A weight vector is a nonzero vector v such that kλ · v = dλv for all λ, where the dλ are complex numbers for all weights λ such that dλdμ = dλ+μ for all weights λ and μ. A weight module is called integrable if the actions of ei and fi are locally nilpotent (i.e. for any vector v in the module, there exists a positive integer k, possibly dependent on v, such that ei^k · v = fi^k · v = 0 for all i). In the case of integrable modules, the complex numbers dλ associated with a weight vector satisfy dλ = cλ q^(λ,ν), where ν is an element of the weight lattice, and cλ are complex numbers such that cλcμ = cλ+μ for all weights λ and μ, and cαi = 1 for all i. Of special interest are highest-weight representations, and the corresponding highest weight modules. A highest weight module is a module generated by a weight vector v, subject to kλ · v = dλv for all weights λ, and ei · v = 0 for all i. Similarly, a quantum group can have a lowest weight representation and lowest weight module, i.e. a module generated by a weight vector v, subject to kλ · v = dλv for all weights λ, and fi · v = 0 for all i. Define a vector v to have weight ν if kλ · v = q^(λ,ν)v for all λ in the weight lattice. If G is a Kac–Moody algebra, then in any irreducible highest weight representation of Uq(G), with highest weight ν, the multiplicities of the weights are equal to their multiplicities in an irreducible representation of U(G) with equal highest weight.
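The q-analogs appearing in the relations above are easy to evaluate numerically. A minimal sketch using the symmetric q-number convention given above; as q approaches 1 the ordinary integers and factorial are recovered, mirroring the classical limit Uq(G) → U(G):

```python
def q_number(n: int, q: float) -> float:
    """Symmetric q-integer [n]_q = (q**n - q**-n) / (q - q**-1)."""
    return (q**n - q**(-n)) / (q - q**(-1))

def q_factorial(n: int, q: float) -> float:
    """[n]_q! = [n]_q * [n-1]_q * ... * [1]_q, with [0]_q! = 1."""
    result = 1.0
    for k in range(1, n + 1):
        result *= q_number(k, q)
    return result

for q in (0.5, 0.99, 1.0001):
    print(f"q = {q}: [5]_q = {q_number(5, q):.4f}, [4]_q! = {q_factorial(4, q):.4f}")
# As q -> 1, [5]_q -> 5 and [4]_q! -> 24, the classical values.
```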
If the highest weight is dominant and integral (a weight μ is dominant and integral if μ satisfies the condition that 2(μ, αi)/(αi, αi) is a non-negative integer for all i), then the weight spectrum of the irreducible representation is invariant under the Weyl group for G, and the representation is integrable. Conversely, if a highest weight module is integrable, then its highest weight vector v satisfies kλ · v = cλ q^(λ,ν)v, where the cλ are complex numbers such that cλcμ = cλ+μ for all weights λ and μ, cαi = 1 for all i, and ν is dominant and integral. As is the case for all Hopf algebras, the tensor product of two modules is another module. For an element x of Uq(G), and for vectors v and w in the respective modules, x ⋅ (v ⊗ w) = Δ(x) ⋅ (v ⊗ w), so that the action on a tensor product is determined by the coproduct; in the case of coproduct Δ1, the actions of kλ, ei and fi on v ⊗ w follow accordingly. The integrable highest weight module described above is a tensor product of a one-dimensional module (on which kλ = cλ for all λ, and ei = fi = 0 for all i) and a highest weight module generated by a nonzero vector v0, subject to kλ · v0 = q^(λ,ν)v0 for all weights λ, and ei · v0 = 0 for all i. In the specific case where G is a finite-dimensional Lie algebra (as a special case of a Kac–Moody algebra), then the irreducible representations with dominant integral highest weights are also finite-dimensional. In the case of a tensor product of highest weight modules, its decomposition into submodules is the same as for the tensor product of the corresponding modules of the Kac–Moody algebra (the highest weights are the same, as are their multiplicities). Case 2: q is a root of unity Quasitriangularity Case 1: q is not a root of unity Strictly, the quantum group Uq(G) is not quasitriangular, but it can be thought of as being "nearly quasitriangular" in that there exists an infinite formal sum which plays the role of an R-matrix. This infinite formal sum is expressible in terms of generators ei and fi, and Cartan generators tλ, where kλ is formally identified with q^tλ. The infinite formal sum is the product of two factors, a formal exponential in the Cartan generators and an infinite formal sum in the ei and fi, where λj is a basis for the dual space to the Cartan subalgebra, and μj is the dual basis, and η = ±1. The formal infinite sum which plays the part of the R-matrix has a well-defined action on the tensor product of two irreducible highest weight modules, and also on the tensor product of two lowest weight modules. Specifically, if v has weight α and w has weight β, then the exponential factor acts on v ⊗ w as multiplication by a power of q, and the fact that the modules are both highest weight modules or both lowest weight modules reduces the action of the other factor on v ⊗ w to a finite sum. Specifically, if V is a highest weight module, then the formal infinite sum, R, has a well-defined, and invertible, action on V ⊗ V, and this value of R (as an element of End(V ⊗ V)) satisfies the Yang–Baxter equation, and therefore allows us to determine a representation of the braid group, and to define quasi-invariants for knots, links and braids. Case 2: q is a root of unity Quantum groups at q = 0 Masaki Kashiwara has researched the limiting behaviour of quantum groups as q → 0, and found a particularly well behaved base called a crystal base. Description and classification by root-systems and Dynkin diagrams There has been considerable progress in describing finite quotients of quantum groups such as the above Uq(g) for q^n = 1; one usually considers the class of pointed Hopf algebras, meaning that all simple left or right comodules are 1-dimensional and thus the sum of all its simple subcoalgebras forms a group algebra called the coradical. In 2002 H.-J. Schneider and N.
Andruskiewitsch finished their classification of pointed Hopf algebras with an abelian coradical group (excluding primes 2, 3, 5, 7), especially as the above finite quotients of Uq(g) decompose into E′s (Borel part), dual F′s and K′s (Cartan algebra) just like ordinary semisimple Lie algebras. Here, as in the classical theory, V is a braided vector space of dimension n spanned by the E′s, and σ (a so-called cocycle twist) creates the nontrivial linking between E′s and F′s. Note that in contrast to the classical theory, more than two linked components may appear. The role of the quantum Borel algebra is taken by a Nichols algebra of the braided vector space. A crucial ingredient was I. Heckenberger's classification of finite Nichols algebras for abelian groups in terms of generalized Dynkin diagrams. When small primes are present, some exotic examples, such as a triangle, occur (see also the figure of a rank 3 Dynkin diagram). Meanwhile, Schneider and Heckenberger have generally proven the existence of an arithmetic root system also in the nonabelian case, generating a PBW basis as proven by Kharchenko in the abelian case (without the assumption of finite dimension). This can be used in specific cases Uq(g) and explains e.g. the numerical coincidence between certain coideal subalgebras of these quantum groups and the order of the Weyl group of the Lie algebra g. Compact matrix quantum groups S. L. Woronowicz introduced compact matrix quantum groups. Compact matrix quantum groups are abstract structures on which the "continuous functions" on the structure are given by elements of a C*-algebra. The geometry of a compact matrix quantum group is a special case of a noncommutative geometry. The continuous complex-valued functions on a compact Hausdorff topological space form a commutative C*-algebra. By the Gelfand theorem, a commutative C*-algebra is isomorphic to the C*-algebra of continuous complex-valued functions on a compact Hausdorff topological space, and the topological space is uniquely determined by the C*-algebra up to homeomorphism. For a compact topological group, G, there exists a C*-algebra homomorphism Δ: C(G) → C(G) ⊗ C(G) (where C(G) ⊗ C(G) is the C*-algebra tensor product - the completion of the algebraic tensor product of C(G) and C(G)), such that Δ(f)(x, y) = f(xy) for all f ∈ C(G), and for all x, y ∈ G (where (f ⊗ g)(x, y) = f(x)g(y) for all f, g ∈ C(G) and all x, y ∈ G). There also exists a linear multiplicative mapping κ: C(G) → C(G), such that κ(f)(x) = f(x−1) for all f ∈ C(G) and all x ∈ G. Strictly, this does not make C(G) a Hopf algebra, unless G is finite. On the other hand, a finite-dimensional representation of G can be used to generate a *-subalgebra of C(G) which is also a Hopf *-algebra.
Specifically, if u = (uij) is an n-dimensional representation of G, then uij ∈ C(G) for all i, j and Δ(uij) = Σk uik ⊗ ukj for all i, j. It follows that the *-algebra generated by the uij for all i, j and the κ(uij) for all i, j is a Hopf *-algebra: the counit is determined by ε(uij) = δij for all i, j (where δij is the Kronecker delta), the antipode is κ, and the unit is the identity of C(G). General definition As a generalization, a compact matrix quantum group is defined as a pair (C, u), where C is a C*-algebra and u = (uij) is a matrix with entries in C such that The *-subalgebra, C0, of C, which is generated by the matrix elements of u, is dense in C; There exists a C*-algebra homomorphism called the comultiplication Δ: C → C ⊗ C (where C ⊗ C is the C*-algebra tensor product - the completion of the algebraic tensor product of C and C) such that for all i, j we have Δ(uij) = Σk uik ⊗ ukj; There exists a linear antimultiplicative map κ: C0 → C0 (the coinverse) such that κ(κ(v*)*) = v for all v ∈ C0 and Σk κ(uik)ukj = Σk uik κ(ukj) = δij I, where I is the identity element of C. Since κ is antimultiplicative, κ(vw) = κ(w)κ(v) for all v, w in C0. As a consequence of continuity, the comultiplication on C is coassociative. In general, C is not a bialgebra, and C0 is a Hopf *-algebra. Informally, C can be regarded as the *-algebra of continuous complex-valued functions over the compact matrix quantum group, and u can be regarded as a finite-dimensional representation of the compact matrix quantum group. Representations A representation of the compact matrix quantum group is given by a corepresentation of the Hopf *-algebra (a corepresentation of a counital coassociative coalgebra A is a square matrix v = (vij) with entries in A (so v belongs to M(n, A)) such that Δ(vij) = Σk vik ⊗ vkj for all i, j and ε(vij) = δij for all i, j). Furthermore, a representation v is called unitary if the matrix v = (vij) is unitary (or equivalently, if κ(vij) = v*ij for all i, j). Example An example of a compact matrix quantum group is SUμ(2), where the parameter μ is a positive real number. So SUμ(2) = (C(SUμ(2)), u), where C(SUμ(2)) is the C*-algebra generated by α and γ, subject to suitable commutation relations, such that the comultiplication is determined by ∆(α) = α ⊗ α − γ ⊗ γ*, ∆(γ) = α ⊗ γ + γ ⊗ α*, and the coinverse is determined by κ(α) = α*, κ(γ) = −μ−1γ, κ(γ*) = −μγ*, κ(α*) = α. Note that u is a representation, but not a unitary representation. u is equivalent to the unitary representation w described next. Equivalently, SUμ(2) = (C(SUμ(2)), w), where C(SUμ(2)) is the C*-algebra generated by α and β, subject to suitable commutation relations, such that the comultiplication is determined by ∆(α) = α ⊗ α − μβ ⊗ β*, Δ(β) = α ⊗ β + β ⊗ α*, and the coinverse is determined by κ(α) = α*, κ(β) = −μ−1β, κ(β*) = −μβ*, κ(α*) = α. Note that w is a unitary representation. The realizations can be identified by equating the corresponding generators. When μ = 1, then SUμ(2) is equal to the algebra C(SU(2)) of functions on the concrete compact group SU(2).
The very simplest nontrivial example corresponds to two copies of R locally acting on each other and results in a quantum group (given here in an algebraic form) with generators p, K, K−1, say, and a coproduct depending on a deformation parameter h. This quantum group was linked to a toy model of Planck scale physics implementing Born reciprocity when viewed as a deformation of the Heisenberg algebra of quantum mechanics. Also, starting with any compact real form of a semisimple Lie algebra g, its complexification as a real Lie algebra of twice the dimension splits into g and a certain solvable Lie algebra (the Iwasawa decomposition), and this provides a canonical bicrossproduct quantum group associated to g. For su(2) one obtains a quantum group deformation of the Euclidean group E(3) of motions in 3 dimensions. See also Hopf algebra Lie bialgebra Poisson–Lie group Quantum affine algebra Notes References Mathematical quantization
Quantum group
[ "Physics" ]
4,476
[ "Mathematical quantization", "Quantum mechanics" ]
340,757
https://en.wikipedia.org/wiki/Internal%20energy
The internal energy of a thermodynamic system is the energy of the system as a state function, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics. The notion was introduced by Rudolf Clausius to describe systems characterized by temperature variations, temperature being added to the set of state parameters alongside the position variables known in mechanics (and their conjugate generalized force parameters), in a way similar to the potential energy of the conservative fields of force, gravitational and electrostatic. Internal energy changes equal the algebraic sum of the heat transferred and the work done. In systems without temperature changes, potential energy changes equal the work done by/on the system. The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of substance, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property. Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds. The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy. Cardinal functions The internal energy of a system depends on its entropy S, its volume V and its number of massive particles N: U(S, V, N). It expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function S(U, V, N) of the same list of extensive variables of state, except that the entropy, S, is replaced in the list by the internal energy, U. It expresses the entropy representation. Each cardinal function is a monotonic function of each of its natural or canonical variables.
Each provides its characteristic or fundamental equation, for example U = U(S, V, N), that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, U = U(S, V, N) for S, to get S = S(U, V, N). In contrast, Legendre transformations are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy. For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics. Description and definition The internal energy of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state: ΔU = Σi Ei, where ΔU denotes the difference between the internal energy of the given state and that of the reference state, and the Ei are the various energies transferred to the system in the steps from the reference state to the given state. It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, Upot, and microscopic kinetic energy, Ukin, components: U = Upot + Ukin. The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The algebraic summative components of the microscopic potential energy are those of the chemical and nuclear particle bonds, and the physical force fields within the system, such as due to internal induced electric or magnetic dipole moment, as well as the energy of deformation of solids (stress-strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics. Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the system with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter. For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible, to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy. Therefore, a convenient null reference point may be chosen for the internal energy.
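A minimal sketch of this bookkeeping, with invented step values purely for illustration:

```python
# Signed energy transfers (in joules) taking the system from the
# reference state to the given state; the values are invented.
transfers_j = [
    +500.0,  # heat added to the system
    -200.0,  # work done by the system on the surroundings
    +50.0,   # energy carried in by added matter
]

delta_u = sum(transfers_j)
print(f"Internal energy relative to the reference state: {delta_u} J")  # 350.0 J
```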
The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains. At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy. The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy. The scaling property between temperature and thermal energy is the entropy change of the system. Statistical mechanics considers any system to be statistically distributed across an ensemble of microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate has an energy Ei and is associated with a probability pi. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence: U = Σi pi Ei. This is the statistical expression of the law of conservation of energy. Internal energy changes Thermodynamics is chiefly concerned with the changes in internal energy ΔU. For a closed system, with mass transfer excluded, the changes in internal energy are due to heat transfer Q and due to thermodynamic work W done by the system on its surroundings. Accordingly, the internal energy change for a process may be written ΔU = Q − W. When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible. A second mechanism of change in the internal energy of a closed system is the work it does on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings. If the system is not closed, the third mechanism that can increase the internal energy is transfer of substance into the system. This increase cannot be split into heat and work components.
If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy: ΔU = Q − W + ΔU_matter. If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change. Internal energy of the ideal gas Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases. For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures. Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): U = U(N, T). It is not dependent on other thermodynamic quantities such as pressure or density. The internal energy of an ideal gas is proportional to its amount of substance (number of moles) n and to its temperature T: U = n c_V T, where c_V is the isochoric (at constant volume) molar heat capacity of the gas; c_V is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties S, V, n (entropy, volume, number of moles). In the case of the ideal gas it is as follows: U(S, V, n) = const · e^(S/(n c_V)) · V^(−R/c_V) · n^(R/c_V + 1), where const is an arbitrary positive constant and where R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = ∂U/∂S and P = −∂U/∂V, the ideal gas law PV = nRT immediately follows. Internal energy of a closed thermodynamic system The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings. This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential. For a closed system, with transfers only as heat and work, the change in the internal energy is dU = δQ − δW, expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement). For example, the mechanical work done by the system may be related to the pressure P and volume change dV. The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement: δW = P dV. This defines the direction of work, W, to be energy transfer from the working system to the surroundings, indicated by a positive term.
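A short worked check of these relations, as a hedged Python sketch with invented conditions (the heat capacity c_V = (3/2)R for a monatomic ideal gas is standard):

# Heating helium in a rigid container: ΔU = n * (3/2) * R * ΔT, and the
# first law ΔU = Q - W (W is work done BY the gas) then fixes the heat.
R = 8.314              # J/(mol*K)
n = 1.0                # mol
T1, T2 = 300.0, 400.0  # K

delta_U = n * 1.5 * R * (T2 - T1)  # c_V = (3/2) R for a monatomic ideal gas
W = 0.0                            # rigid container: no expansion work
Q = delta_U + W                    # ΔU = Q - W
print(f"ΔU = {delta_U:.1f} J, Q = {Q:.1f} J")  # both about 1247.1 J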
Taking the direction of heat transfer to be into the working fluid and assuming a reversible process, the heat is δQ = T dS, where T denotes the temperature and S denotes the entropy. The change in internal energy becomes dU = T dS − P dV. Changes due to temperature and volume The expression relating changes in internal energy to changes in temperature and volume is dU = C_V dT + [T (∂P/∂T)_V − P] dV. This is useful if the equation of state is known. In the case of an ideal gas, we can derive that dU = C_V dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature. The expression relating changes in internal energy to changes in temperature and volume is dU = C_V dT + [T (∂P/∂T)_V − P] dV. The equation of state is the ideal gas law PV = nRT. Solve for pressure: P = nRT/V. Substitute into the internal energy expression: dU = C_V dT + [T (∂P/∂T)_V − nRT/V] dV. Take the derivative of pressure with respect to temperature: (∂P/∂T)_V = nR/V. Replace: dU = C_V dT + [nRT/V − nRT/V] dV. And simplify: dU = C_V dT. To express dU in terms of dT and dV, the term dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV is substituted in the fundamental thermodynamic relation dU = T dS − P dV. This gives dU = T (∂S/∂T)_V dT + [T (∂S/∂V)_T − P] dV. The term T (∂S/∂T)_V is the heat capacity at constant volume, C_V. The partial derivative of P with respect to T can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A = U − TS is given by dA = −S dT − P dV. The symmetry of second derivatives of A with respect to T and V yields the Maxwell relation (∂S/∂V)_T = (∂P/∂T)_V. This gives the expression above. Changes due to temperature and pressure When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful: dU = (C_P − α P V) dT + (β_T P − α T) V dP, where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to C_P = C_V + V T α²/β_T. The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion α = (1/V)(∂V/∂T)_P and the isothermal compressibility β_T = −(1/V)(∂V/∂P)_T by writing dV = α V dT − β_T V dP and equating dV to zero and solving for the ratio dP/dT. This gives (∂P/∂T)_V = α/β_T. Substituting these relations into the fundamental thermodynamic relation gives the above expression. Changes due to volume at constant temperature The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature: π_T = (∂U/∂V)_T. Internal energy of multi-component systems In addition to including the entropy and volume terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains: U = U(S, V, N_1, N_2, …), where N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j; the internal energy may be written as a linearly homogeneous function of first degree: U(αS, αV, αN_1, αN_2, …) = α U(S, V, N_1, N_2, …), where α is a factor describing the growth of the system. The differential internal energy may be written as dU = T dS − P dV + Σ_i μ_i dN_i, which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S and pressure P to be the negative of the similar derivative with respect to volume V, T = (∂U/∂S)_{V,N}, P = −(∂U/∂V)_{S,N}, and where the coefficients μ_i are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition: μ_i = (∂U/∂N_i)_{S,V,N_(j≠i)}. As conjugate variables to the composition {N_i}, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent.
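The internal-pressure formula π_T = T(∂P/∂T)_V − P can be checked symbolically. A hedged sketch assuming Python with the SymPy library; the van der Waals gas is used as a stock non-ideal example, which the text above does not itself discuss:

import sympy as sp

# Internal pressure pi_T = T*(dP/dT)_V - P for (a) the ideal gas and
# (b) the van der Waals gas P = n*R*T/(V - n*b) - a*n**2/V**2.
T, V, n, R, a, b = sp.symbols('T V n R a b', positive=True)

P_ideal = n * R * T / V
P_vdw = n * R * T / (V - n * b) - a * n**2 / V**2

pi_ideal = sp.simplify(T * sp.diff(P_ideal, T) - P_ideal)
pi_vdw = sp.simplify(T * sp.diff(P_vdw, T) - P_vdw)

print(pi_ideal)  # 0: an ideal gas's internal energy is independent of V at fixed T
print(pi_vdw)    # a*n**2/V**2: the attraction term raises U on expansion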
Under conditions of constant temperature T and pressure P, because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential dU may be integrated and yields an expression for the internal energy: U = T S − P V + Σ_i μ_i N_i. The sum over the composition of the system is the Gibbs free energy, G = Σ_i μ_i N_i, that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for μ. Internal energy in an elastic medium For an elastic medium the potential energy component of the internal energy has an elastic nature expressed in terms of the stress σ_ij and strain ε_ij involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is dU = T dS + σ_ij dε_ij. Euler's theorem yields for the internal energy: U = T S + (1/2) σ_ij ε_ij. For a linearly elastic material, the stress is related to the strain by σ_ij = C_ijkl ε_kl, where the C_ijkl are the components of the 4th-rank elastic constant tensor of the medium. Elastic deformations, such as sound, passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states when the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased. History James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius. Notes See also Calorimetry Enthalpy Exergy Thermodynamic equations Thermodynamic potentials Gibbs free energy Helmholtz free energy References Bibliography of cited references Adkins, C. J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London. Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York. Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London. Callen, H. B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York. Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Münster, A. (1970). Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London. Planck, M. (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London. Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam. Bibliography Physical quantities Thermodynamic properties State functions Statistical mechanics Energy (physics)
Internal energy
[ "Physics", "Chemistry", "Mathematics" ]
4,022
[ "State functions", "Physical phenomena", "Thermodynamic properties", "Physical quantities", "Quantity", "Statistical mechanics", "Energy (physics)", "Thermodynamics", "Wikipedia categories named after physical quantities", "Physical properties" ]
340,782
https://en.wikipedia.org/wiki/Pykrete
Pykrete is a frozen ice composite, originally made of approximately 14% sawdust or some other form of wood pulp (such as paper) and 86% ice by weight (6 to 1 by weight). During World War II, Geoffrey Pyke proposed it as a candidate material for a supersized aircraft carrier for the British Royal Navy. Pykrete features unusual properties, including a relatively slow melting rate due to its low thermal conductivity, as well as a vastly improved strength and toughness compared to ordinary ice. These physical properties can make the material comparable to concrete, as long as the material is kept frozen. Pykrete is slightly more difficult to form than concrete, as it expands during the freezing process. However, it can be repaired and maintained using seawater as a raw material. The mixture can be moulded into any shape and frozen, and it will be tough and durable, as long as it is kept at or below freezing temperature. Resistance to gradual creep or sagging is improved by lowering the temperature further, to −15 °C (5 °F). History During World War II Geoffrey Pyke managed to convince Lord Mountbatten of the potential of his proposal (actually prior to the invention of pykrete) sometime around 1942, and trials were made at two locations in Alberta, Canada. The idea for a ship made of ice impressed the United States and Canada enough that a 1,000-ton ship was built in one month on Patricia Lake in the Canadian Rockies. However, it was constructed using plain ice (from the lake), before pykrete was proposed. It took slightly more than an entire summer to melt, but plain ice proved to be too weak. Pyke learned from a report by Herman Mark and his assistant that ice made from water mixed with wood fibres formed a strong solid mass—much stronger than pure water ice. Max Perutz later recalled: Perutz would later learn that Project Habakkuk was the plan to build an enormous aircraft carrier, actually more of a floating island than a ship in the traditional sense. The experiments of Perutz and his collaborators in Smithfield Meat Market in the City of London took place in great secrecy behind a screen of animal carcasses. The tests confirmed that pykrete is much stronger than pure ice and does not shatter, but also that it sags under its own weight at temperatures higher than −15 °C (5 °F). Mountbatten's reaction to the breakthrough is recorded by Pyke's biographer David Lampe: Another tale is that at the Quebec Conference of 1943, Mountbatten brought a block of pykrete along to demonstrate its potential to the entourage of admirals and generals who had come along with Winston Churchill and Franklin D. Roosevelt. Mountbatten entered the project meeting with two blocks and placed them on the ground. One was a normal ice block and the other was pykrete. He then drew his service pistol and shot at the first block. It shattered and splintered. Next, he fired at the pykrete to give an idea of the resistance of that kind of ice to projectiles. The bullet ricocheted off the block, grazing the trouser leg of Admiral Ernest King and ending up in the wall. According to Perutz's own account, however, the incident of a ricocheting bullet hitting an Admiral actually happened much earlier in London and the gun was fired by someone on the project—not Mountbatten. Despite these tests, the main Project Habakkuk was never put into action because of limitations in funds and the belief that the tides of the war were beginning to turn in favour of the Allies using more conventional methods.
According to the memoirs of British General Ismay: After World War II Since World War II, pykrete has remained a scientific curiosity, unexploited by research or construction of any significance. However, new concepts for pykrete crop up occasionally among architects, engineers and futurists, usually regarding its potential for mammoth offshore construction or its improvement by applying super-strong materials such as synthetic composites or Kevlar. In 1985, pykrete was considered for a quay in Oslo harbour. However, the idea was later shelved, considering pykrete's unreliability in the real-world environment. Since pykrete needs to be preserved at or below freezing point, and tends to sag under its own weight at temperatures above −15 °C (5 °F), an alternative was considered that would guarantee effectiveness and public safety. In 2011, the Vienna University of Technology successfully built a pykrete ice dome in the Austrian village of Obergurgl. They improved on an original Japanese technique of spraying ice on a balloon by using the natural properties of ice and its strength. This structure managed to stand for three months before sunlight started melting the ice, rendering the structure unreliable. Researcher Johann Kollegger of Vienna University of Technology thinks his team's alternative new method is easier, avoiding icy sprayback onto the workers. To build their freestanding structure, Kollegger and his colleagues first cut a plate of ice into 16 segments. To sculpt the segments to have a dome-like curve, the researchers relied on ice's creep behavior. If pressure is applied to ice, it slowly changes its shape without breaking. One of the mechanisms by which glaciers move, called glacial creep, functions similarly, the researchers say. In 2014, the Eindhoven University of Technology worked on a pykrete architecture project in Juuka, Finland, which included an ice dome and a pykrete scale model of the Sagrada Familia. They attempted to build the largest ice dome in the world. Due to human error, the plug to a compressor that kept the balloon inflated was pulled, leading to the balloon deflating. The team of Dutch students quickly re-inflated the balloon, and resprayed the part of the dome that had collapsed. They continued with their construction, and eventually opened the dome to the public. However, within a matter of days the roof caved in; there were no visitors on the site at the time. Mechanical properties The durability of pykrete is still debated. Perutz estimated a crushing strength value of around . A September 1943 proposal for making smaller pykrete vessels included a table of characteristics. In the media In 2009, the Discovery Channel program MythBusters episode 115 tested the properties of pykrete and the myths behind it. First, the program's primary hosts, Adam Savage and Jamie Hyneman, compared the mechanical properties of common ice, pykrete, and a new material specially created for the show, dubbed "super pykrete", which used newspapers instead of woodpulp. Both versions of pykrete indeed proved to be much stronger than the chunk of ice, withstanding hundreds of pounds of weight. The super pykrete was much stronger than the original version. The MythBusters then built a full-size boat out of the super pykrete, naming it Yesterday's News, and subjected it to real-world conditions.
The MythBusters vessel did not contain refrigeration units to keep the pykrete frozen as the original plans called for, and the boat had a much thinner construction than the massive ships proposed in World War II. Though the boat managed to float and stay intact at speeds of up to , it quickly began to spring leaks as the boat slowly melted. After 20 minutes the boat was deteriorating, and the experiment was ended. The boat lasted another 10 minutes while being piloted back to shore. Though the boat worked, it was noted that it would be highly impractical for the original proposal, which claimed that an entire aircraft carrier could be built out of pykrete. Their conclusion was "Plausible, but ludicrous", since it would involve building vessels out of tens of thousands of tons of the material that would sink without being kept cool. In the same year, the story of Pyke and pykrete in the Second World War also played an important role in Giles Foden's book Turbulence, about a (fictional) British meteorologist and his contributions to D-Day weather forecasting. The main character is also involved in the post-war pykrete effort. In 2010, the BBC programme Bang Goes the Theory episode 26 tested a 5-tonne pykrete boat made with hemp rather than wood pulp. All four presenters, Jem Stansfield, Dallas Campbell, Liz Bonnin, and Yan Wong, had to be rescued from Portsmouth Harbour after the boat took on water through the engine mounts. It eventually capsized after melting much faster than anticipated in the warmer-than-expected September waters. In 2013, the German TV station WDR's programme experimented with pykrete but replaced the wood pulp with hemp fibres. A 5 cm (2 in) thick plate withstood more than 80 kg without breaking; it only started to bend. Neal Stephenson's 2015 novel Seveneves describes the fictional use of pykrete to construct low Earth orbit habitats and spaceship hulls. The third volume of 99% Invisible's mini-stories podcasts includes a segment about Project Habakkuk and the creation, proposal, and eventual scrapping of pykrete as a useful building material during WWII. The YouTube series Science & Futurism with Isaac Arthur, in its episode "Colonizing Ceres", describes the fictional use of pykrete to construct a dome habitat on an asteroid to be mined. See also Cement-bonded wood fiber Ground freezing, a construction technique using similar properties of frozen soil Footnotes References External links Pykrete - Ice Ships in the Rockies Proposed WW2 aircraft carrier Pykrete ... or, The Myth that Wouldn't Die... Composite materials Recycled building materials Concrete Water ice
Pykrete
[ "Physics", "Engineering" ]
2,043
[ "Structural engineering", "Composite materials", "Materials", "Concrete", "Matter" ]
341,038
https://en.wikipedia.org/wiki/Reporter%20gene
In molecular biology, a reporter gene (often simply reporter) is a gene that researchers attach to a regulatory sequence of another gene of interest in bacteria, cell culture, animals or plants. Such genes are called reporters because the characteristics they confer on organisms expressing them are easily identified and measured, or because they are selectable markers. Reporter genes are often used as an indication of whether a certain gene has been taken up by or expressed in the cell or organism population. Common reporter genes To introduce a reporter gene into an organism, scientists place the reporter gene and the gene of interest in the same DNA construct to be inserted into the cell or organism. For bacteria or prokaryotic cells in culture, this is usually in the form of a circular DNA molecule called a plasmid. For viruses, this is known as a viral vector. It is important to use a reporter gene that is not natively expressed in the cell or organism under study, since the expression of the reporter is being used as a marker for successful uptake of the gene of interest. Commonly used reporter genes that induce visually identifiable characteristics usually involve fluorescent and luminescent proteins. Examples include the gene that encodes jellyfish green fluorescent protein (GFP), which causes cells that express it to glow green under blue or ultraviolet light, the enzyme luciferase, which catalyzes a reaction with luciferin to produce light, and the red fluorescent protein from the gene dsRed. The GUS gene has been commonly used in plants, but luciferase and GFP are becoming more common. A common reporter in bacteria is the E. coli lacZ gene, which encodes the protein beta-galactosidase. This enzyme causes bacteria expressing the gene to appear blue when grown on a medium that contains the substrate analog X-gal. An example of a selectable marker which is also a reporter in bacteria is the chloramphenicol acetyltransferase (CAT) gene, which confers resistance to the antibiotic chloramphenicol. Transformation and transfection assays Many methods of transfection and transformation – two ways of expressing a foreign or modified gene in an organism – are effective in only a small percentage of a population subjected to the techniques. Thus, a method for identifying those few successful gene uptake events is necessary. Reporter genes used in this way are normally expressed under their own promoter (DNA regions that initiate gene transcription) independent from that of the introduced gene of interest; the reporter gene can be expressed constitutively (that is, it is "always on") or inducibly with an external intervention such as the introduction of Isopropyl β-D-1-thiogalactopyranoside (IPTG) in the β-galactosidase system. As a result, the reporter gene's expression is independent of the gene of interest's expression, which is an advantage when the gene of interest is only expressed under certain specific conditions or in tissues that are difficult to access. In the case of selectable-marker reporters such as CAT, the transfected population of bacteria can be grown on a substrate that contains chloramphenicol. Only those cells that have successfully taken up the construct containing the CAT gene will survive and multiply under these conditions. Gene expression assays Reporter genes can be used to assay for the expression of a gene of interest that is normally difficult to assay quantitatively.
Reporter genes can produce a protein that has little obvious or immediate effect on the cell culture or organism. They are ideally not present in the native genome, to be able to isolate reporter gene expression as a result of the gene of interest's expression. To activate reporter genes, they can be expressed constitutively, directly attached to the gene of interest to create a gene fusion. This method is an example of using cis-acting elements, where the two genes are under the same promoter elements and are transcribed into a single messenger RNA molecule. The mRNA is then translated into protein. It is important that both proteins be able to properly fold into their active conformations and interact with their substrates despite being fused. In building the DNA construct, a segment of DNA coding for a flexible polypeptide linker region is usually included so that the reporter and the gene product will only minimally interfere with one another. Reporter genes can also be expressed by induction during growth. In these cases, trans-acting elements, such as transcription factors, are used to express the reporter gene. Reporter gene assays have been increasingly used in high-throughput screening (HTS) to identify small-molecule inhibitors and activators of protein targets and pathways for drug discovery and chemical biology. Because the reporter enzymes themselves (e.g. firefly luciferase) can be direct targets of small molecules and confound the interpretation of HTS data, novel coincidence reporter designs incorporating artifact suppression have been developed. Promoter assays Reporter genes can be used to assay for the activity of a particular promoter in a cell or organism. In this case there is no separate "gene of interest"; the reporter gene is simply placed under the control of the target promoter and the reporter gene product's activity is quantitatively measured. The results are normally reported relative to the activity under a "consensus" promoter known to induce strong gene expression. Further uses A more complex use of reporter genes on a large scale is in two-hybrid screening, which aims to identify proteins that natively interact with one another in vivo. See also GUS reporter system References External links Research highlights and updated information on reporter genes. Staining Whole Mouse Embryos for β-Galactosidase (lacZ) Activity Biochemistry detection methods Genetics techniques Molecular biology
Reporter gene
[ "Chemistry", "Engineering", "Biology" ]
1,172
[ "Biochemistry methods", "Genetics techniques", "Genetic engineering", "Chemical tests", "Biochemistry detection methods", "Molecular biology", "Biochemistry" ]
341,420
https://en.wikipedia.org/wiki/Standard%20enthalpy%20of%20reaction
The standard enthalpy of reaction (denoted ΔH°rxn) for a chemical reaction is the difference between total product and total reactant molar enthalpies, calculated for substances in their standard states. The value can be approximately interpreted in terms of the total of the chemical bond energies for bonds broken and bonds formed. For a generic chemical reaction ν_A A + ν_B B + … → ν_X X + ν_Y Y + …, the standard enthalpy of reaction is related to the standard enthalpy of formation (ΔH°f) values of the reactants and products by the following equation: ΔH°rxn = Σ_products ν ΔH°f(product) − Σ_reactants ν ΔH°f(reactant). In this equation, the ν are the stoichiometric coefficients of each product and reactant. The standard enthalpy of formation, which has been determined for a vast number of substances, is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements, with all substances in their standard states. Standard states can be defined at any temperature and pressure, so both the standard temperature and pressure must always be specified. Most values of standard thermochemical data are tabulated at either (25 °C, 1 bar) or (25 °C, 1 atm). For ions in aqueous solution, the standard state is often chosen such that the aqueous H+ ion at a concentration of exactly 1 mole/liter has a standard enthalpy of formation equal to zero, which makes possible the tabulation of standard enthalpies for cations and anions at the same standard concentration. This convention is consistent with the use of the standard hydrogen electrode in the field of electrochemistry. However, there are other common choices in certain fields, including a standard concentration for H+ of exactly 1 mole/(kg solvent) (widely used in chemical engineering) and 10^−7 mole/L (used in the field of biochemistry). Introduction Two initial thermodynamic systems, each isolated in their separate states of internal thermodynamic equilibrium, can, by a thermodynamic operation, be coalesced into a single new final isolated thermodynamic system. If the initial systems differ in chemical constitution, then the eventual thermodynamic equilibrium of the final system can be the result of chemical reaction. Alternatively, an isolated thermodynamic system, in the absence of some catalyst, can be in a metastable equilibrium; introduction of a catalyst, or some other thermodynamic operation, such as release of a spark, can trigger a chemical reaction. The chemical reaction will, in general, transform some chemical potential energy into thermal energy. If the joint system is kept isolated, then its internal energy remains unchanged. Such thermal energy manifests itself, however, in changes in the non-chemical state variables (such as temperature, pressure, volume) of the joint systems, as well as the changes in the mole numbers of the chemical constituents that describe the chemical reaction. Internal energy is defined with respect to some standard state. Subject to suitable thermodynamic operations, the chemical constituents of the final system can be brought to their respective standard states, along with transfer of energy as heat or through thermodynamic work, which can be measured or calculated from measurements of non-chemical state variables. Accordingly, the calculation of standard enthalpy of reaction is the most established way of quantifying the conversion of chemical potential energy into thermal energy.
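As a hedged worked example of the formation-enthalpy relation above (Python; the ΔH°f values are commonly tabulated figures for methane combustion, quoted from memory and given here only for illustration):

# ΔH°rxn = Σ ν ΔH°f(products) - Σ ν ΔH°f(reactants) for
# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l), with ΔH°f in kJ/mol at 25 °C.
dHf = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

products = {"CO2(g)": 1, "H2O(l)": 2}
reactants = {"CH4(g)": 1, "O2(g)": 2}

dH_rxn = (sum(nu * dHf[s] for s, nu in products.items())
          - sum(nu * dHf[s] for s, nu in reactants.items()))
print(f"ΔH°rxn ≈ {dH_rxn:.1f} kJ/mol")  # about -890 kJ/mol, strongly exothermic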
Enthalpy of reaction for standard conditions defined and measured The standard enthalpy of a reaction is defined so as to depend simply upon the standard conditions that are specified for it, not simply on the conditions under which the reactions actually occur. There are two general conditions under which thermochemical measurements are actually made. (a) Constant volume and temperature: heat q_V = ΔU, where U (sometimes written as E) is the internal energy of the system. (b) Constant pressure and temperature: heat q_P = ΔH, where H = U + PV is the enthalpy of the system. The magnitudes of the heat effects in these two conditions are different. In the first case the volume of the system is kept constant during the course of the measurement by carrying out the reaction in a closed and rigid container, and as there is no change in the volume no work is involved. From the first law of thermodynamics, ΔU = q − W, where W is the work done by the system. When only expansion work is possible for a process, at constant volume we have W = 0; this implies that the heat of reaction at constant volume is equal to the change in the internal energy of the reacting system, q_V = ΔU. The thermal change that occurs in a chemical reaction is only due to the difference between the sum of internal energy of the products and the sum of the internal energy of reactants. We have ΔU = ΣU(products) − ΣU(reactants). This also signifies that the amount of heat absorbed at constant volume could be identified with the change in the thermodynamic quantity internal energy. At constant pressure on the other hand, the system is either kept open to the atmosphere or confined within a container on which a constant external pressure is exerted and under these conditions the volume of the system changes. The thermal change at a constant pressure not only involves the change in the internal energy of the system but also the work performed either in expansion or contraction of the system. In general the first law requires that ΔU = q − W (work). If W is only pressure–volume work, then at constant pressure W = P ΔV. Assuming that the change in state variables is due solely to a chemical reaction, we have q_P = ΔU + P ΔV. As enthalpy or heat content is defined by H = U + PV, we have q_P = ΔH. By convention, the enthalpy of each element in its standard state is assigned a value of zero. If pure preparations of compounds or ions are not possible, then special further conventions are defined. Regardless, if each reactant and product can be prepared in its respective standard state, then the contribution of each species is equal to its molar enthalpy of formation multiplied by its stoichiometric coefficient in the reaction, and the enthalpy of reaction at constant (standard) pressure and constant temperature (usually 298 K) may be written as ΔH°rxn = Σ_products ν ΔH°f(product) − Σ_reactants ν ΔH°f(reactant). As shown above, at constant pressure the heat of the reaction is exactly equal to the enthalpy change, ΔH°rxn, of the reacting system. Variation with temperature or pressure The variation of the enthalpy of reaction with temperature is given by Kirchhoff's Law of Thermochemistry, which states that the temperature derivative of ΔH for a chemical reaction is given by the difference in heat capacity (at constant pressure) between products and reactants: d(ΔH)/dT = ΔC_P. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature. Pressure variation effects and corrections due to mixing are generally minimal unless a reaction involves non-ideal gases and/or solutes, or is carried out at extremely high pressures.
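The constant-volume versus constant-pressure distinction drawn earlier in this section amounts, for ideal-gas species, to ΔH = ΔU + Δn_gas·R·T, where Δn_gas is the change in moles of gas. A hedged Python sketch; the ΔU figure is invented but of a realistic size for ammonia synthesis:

# Relating the two measured heats: ΔH = ΔU + Δn_gas * R * T for ideal-gas
# species. Example: N2(g) + 3 H2(g) -> 2 NH3(g), so Δn_gas = 2 - 4 = -2.
R = 8.314e-3        # kJ/(mol*K)
T = 298.15          # K
delta_n_gas = 2 - (1 + 3)

delta_U = -87.0     # illustrative constant-volume heat of reaction, kJ
delta_H = delta_U + delta_n_gas * R * T
print(f"ΔH ≈ {delta_H:.1f} kJ")  # about -92.0 kJ: the constant-pressure heat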
The enthalpy of mixing for a solution of ideal gases is exactly zero; the same is true for a reaction where the reactants and products are pure, unmixed components. Contributions to reaction enthalpies due to concentration variations for solutes in solution generally must be experimentally determined on a case by case basis, but would be exactly zero for ideal solutions since no change in the solution's average intermolecular forces as a function of concentration is possible in an ideal solution. Subcategories In each case the word standard implies that all reactants and products are in their standard states. Standard enthalpy of combustion is the enthalpy change when one mole of an organic compound reacts with molecular oxygen (O2) to form carbon dioxide and liquid water. For example, the standard enthalpy of combustion of ethane gas refers to the reaction C2H6 (g) + (7/2) O2 (g) → 2 CO2 (g) + 3 H2O (l). Standard enthalpy of formation is the enthalpy change when one mole of any compound is formed from its constituent elements in their standard states. The enthalpy of formation of one mole of ethane gas refers to the reaction 2 C (graphite) + 3 H2 (g) → C2H6 (g). Standard enthalpy of hydrogenation is defined as the enthalpy change observed when one mole of an unsaturated compound reacts with an excess of hydrogen to become fully saturated. The hydrogenation of one mole of acetylene yields ethane as a product and is described by the equation C2H2 (g) + 2 H2 (g) → C2H6 (g). Standard enthalpy of neutralization is the change in enthalpy that occurs when an acid and base undergo a neutralization reaction to form one mole of water. For example in aqueous solution, the standard enthalpy of neutralization of hydrochloric acid and the base magnesium hydroxide refers to the reaction HCl (aq) + 1/2 Mg(OH)2 → 1/2 MgCl2 (aq) + H2O (l). Evaluation of reaction enthalpies There are several methods of determining the values of reaction enthalpies, involving either measurements on the reaction of interest or calculations from data for related reactions. For reactions which go rapidly to completion, it is often possible to measure the heat of reaction directly using a calorimeter. One large class of reactions for which such measurements are common is the combustion of organic compounds by reaction with molecular oxygen (O2) to form carbon dioxide and water (H2O). The heat of combustion can be measured with a so-called bomb calorimeter, in which the heat released by combustion at high temperature is lost to the surroundings as the system returns to its initial temperature. Since enthalpy is a state function, its value is the same for any path between given initial and final states, so that the measured ΔH is the same as if the temperature stayed constant during the combustion. For reactions which are incomplete, the equilibrium constant K can be determined as a function of temperature. The enthalpy of reaction is then found from the van 't Hoff equation as ΔH° = R T² · d(ln K)/dT. A closely related technique is the use of an electroanalytical voltaic cell, which can be used to measure the Gibbs energy for certain reactions as a function of temperature, yielding ΔG°(T) and thereby ΔH° = ΔG° − T · d(ΔG°)/dT. It is also possible to evaluate the enthalpy of one reaction from the enthalpies of a number of other reactions whose sum is the reaction of interest, and these need not be formation reactions. This method is based on Hess's law, which states that the enthalpy change is the same for a chemical reaction which occurs as a single reaction or in several steps.
If the enthalpies for each step can be measured, then their sum gives the enthalpy of the overall single reaction. Finally, the reaction enthalpy may be estimated using bond energies for the bonds which are broken and formed in the reaction of interest. This method is only approximate, however, because a reported bond energy is only an average value for different molecules with bonds between the same elements. References Enthalpy Thermochemistry Thermodynamics
Standard enthalpy of reaction
[ "Physics", "Chemistry", "Mathematics" ]
2,244
[ "Thermodynamic properties", "Thermochemistry", "Physical quantities", "Quantity", "Enthalpy", "Thermodynamics", "Dynamical systems" ]
341,436
https://en.wikipedia.org/wiki/State%20function
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function, so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables, so it is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state. Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis. History It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body." Overview A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
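A hedged Python sketch of the point just made: for a fixed amount of ideal gas, choosing any two of (P, V, T) determines the third through PV = nRT (the numbers are arbitrary):

# Two-dimensional state space of a fixed amount of ideal gas: fixing any
# two state variables determines the third via the equation of state.
R = 8.314   # J/(mol*K)
n = 1.0     # mol

def volume(P, T): return n * R * T / P
def pressure(V, T): return n * R * T / V
def temperature(P, V): return P * V / (n * R)

P, T = 101325.0, 300.0               # choose pressure and temperature ...
V = volume(P, T)                     # ... and the volume is no longer free
print(f"V = {V:.5f} m^3")            # about 0.02462 m^3
print(f"T recovered: {temperature(P, V):.1f} K")  # consistency check: 300.0 K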
Generally, a state space is defined by an equation of the form F(P, V, T, …) = 0, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state. When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate W(t0, t1) = ∫ P dV = ∫(t0→t1) P(t) (dV(t)/dt) dt. In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path: ∫(t0→t1) P dV + ∫(t0→t1) V dP = ∫(t0→t1) d(PV) = P(t1)V(t1) − P(t0)V(t0). In the equation, P dV + V dP can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of PV at the end points of the integration. The product PV is therefore a state function of the system. The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = P dV will be used to denote an infinitesimal increment of work. State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location. List of state functions The following are considered to be state functions in thermodynamics: Mass Energy (E) Enthalpy (H) Internal energy (U) Gibbs free energy (G) Helmholtz free energy (A) Exergy (B) Entropy (S) Pressure (P) Temperature (T) Volume (V) Chemical composition Pressure altitude Specific volume (v) or its reciprocal density (ρ) Particle number (N) See also Markov property Conservative vector field Nonholonomic system Equation of state State variable Notes References External links Thermodynamic properties Continuum mechanics
State function
[ "Physics", "Chemistry", "Mathematics" ]
1,333
[ "State functions", "Thermodynamic properties", "Physical quantities", "Continuum mechanics", "Quantity", "Classical mechanics", "Thermodynamics" ]
602,032
https://en.wikipedia.org/wiki/Pierre%20Janssen
Pierre Jules César Janssen (22 February 1824 – 23 December 1907), usually known as Jules Janssen, was a French astronomer who, along with English scientist Joseph Norman Lockyer, is credited with discovering the gaseous nature of the solar chromosphere; there is, however, no justification for the conclusion that he deserves credit for the co-discovery of the element helium. Life, work, and interests Janssen was born in Paris (during the Bourbon Restoration in France) into a cultivated family. His father, César Antoine Janssen (born in Paris, 1780 – 1860), was a well-known clarinettist of Dutch/Belgian descent (his father, Christianus Janssen, emigrated from Walloon Brabant to Paris). His mother, Pauline Marie Le Moyne (1789 – 1871), was a daughter of the architect Paul Guillaume Le Moyne. Pierre Janssen studied mathematics and physics at the faculty of sciences. He taught at the Lycée Charlemagne in 1853, and in the school of architecture from 1865 to 1871, but his energies were mainly devoted to various scientific missions entrusted to him. Thus in 1857 he went to Peru in order to determine the magnetic equator; in 1861–1862 and 1864, he studied telluric absorption in the solar spectrum in Italy and Switzerland; in 1867 he carried out optical and magnetic experiments at the Azores; he successfully observed both transits of Venus, that of 1874 in Japan and that of 1882 at Oran in Algeria; and he took part in a long series of solar eclipse expeditions, e.g. to Trani, Italy (1867), Guntur, India (1868), Algiers (1870), Siam (1875), the Caroline Islands (1883), and to Alcossebre in Spain (1905). To see the eclipse of 1870, he escaped from the Siege of Paris in a balloon. Unfortunately, the eclipse was obscured from him by cloud. In 1874, Janssen invented the Janssen revolver, or photographic revolver, an instrument that originated chronophotography. This invention was later of great use to researchers such as Etienne Jules Marey for their demonstrations and inventions. Solar spectroscopy In 1868 Janssen discovered how to observe solar prominences without an eclipse. While observing the solar eclipse of 18 August 1868, at Guntur, Madras State (now in Andhra Pradesh), British India, he noticed bright lines in the spectrum of the chromosphere, showing that the chromosphere is gaseous. From the brightness of the spectral lines, Janssen realized that the chromospheric spectrum could be observed even without an eclipse, and he proceeded to do so. But he never mentioned the emission line seen by Joseph Norman Lockyer, which later was shown to be due to the element helium. On 20 October, Lockyer in England set up a new, relatively powerful spectroscope. He also observed the emission spectrum of the chromosphere, including a new yellow line near the sodium D line, which he called "D3". Lockyer and the English chemist Edward Frankland speculated that the new line could be due to a new element, which they named helium after the Greek word for the Sun, ἥλιος (helios). Observatories At the great Indian eclipse of 1868 that occurred in Guntur, Janssen also demonstrated the gaseous nature of the red prominences, and devised a method of observing them under ordinary daylight conditions. One main purpose of his spectroscopic inquiries was to answer the question whether the Sun contains oxygen or not.
An indispensable preliminary was the virtual elimination of oxygen absorption in the Earth's atmosphere, and his bold project of establishing an observatory on the top of Mont Blanc was prompted by a perception of the advantages to be gained by reducing the thickness of air through which observations have to be made. This observatory, the foundations of which were fixed in the hard ice that appeared to cover the summit to a depth of over ten metres, was built in September 1893, and Janssen, in spite of his sixty-nine years, made the ascent and spent four days making observations. In 1875, Janssen was appointed director of the new astrophysical observatory established by the French government at Meudon, and set on foot there in 1876 the remarkable series of solar photographs collected in his great Atlas de photographies solaires (1904). The first volume of the Annales de l'observatoire de Meudon was published by him in 1896. (see also Meudon Great Refractor) Janssen was the President of the Société Astronomique de France (SAF), the French astronomical society, from 1895 to 1897. International Meridian Conference In 1884 he took part in the International Meridian Conference. Death, honors, and legacy Janssen died at Meudon on 23 December 1907 and was buried at Père Lachaise Cemetery in Paris, with the name "J. Janssen" inscribed on his tomb. During his life he was made a Knight of the Legion of Honor and a Foreign Member of the Royal Society of London. Craters on both Mars and the Moon are named in his honor. The public square in front of Meudon Observatory is named Place Jules Janssen after him. Two major prizes carry his name: the Prix Jules Janssen of the French Astronomical Society, and the Janssen Medal of the French Academy of Sciences. Janssen named minor planet 225 Henrietta, discovered by Johann Palisa, after his wife, Henrietta. Notes and references Further reading Obituary, from Popular Astronomy, 1908, vol. 16, pp. 72–74 Obituary, from Astronomische Nachrichten, 1908, vol. 177, p. 63 (in French) Obituary, from The Astrophysical Journal, 1908, vol. 28, pp. 89–99 (in French) Janssen statue, description and black-and-white picture from The Observatory, 1922, vol. 45, pp. 175–176 Brief biography, from the High Altitude Observatory at Boulder, Colorado 1824 births 1907 deaths Burials at Père Lachaise Cemetery Discoverers of chemical elements 19th-century French astronomers Members of the French Academy of Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Knights of the Legion of Honour Honorary Fellows of the Royal Society of Edinburgh Scientists from Paris Helium Spectroscopists Recipients of the Lalande Prize Articles containing video clips French people of Belgian descent
Pierre Janssen
[ "Physics", "Chemistry" ]
1,303
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
602,650
https://en.wikipedia.org/wiki/Type%20safety
In computer science, type safety and type soundness are the extent to which a programming language discourages or prevents type errors. Type safety is sometimes alternatively considered to be a property of facilities of a computer language; that is, some facilities are type-safe and their usage will not result in type errors, while other facilities in the same language may be type-unsafe and a program using them may encounter type errors. The behaviors classified as type errors by a given programming language are usually those that result from attempts to perform operations on values that are not of the appropriate data type, e.g., adding a string to an integer when there is no definition of how to handle this case. This classification is partly based on opinion. Type enforcement can be static, catching potential errors at compile time, or dynamic, associating type information with values at run-time and consulting them as needed to detect imminent errors, or a combination of both. Dynamic type enforcement often allows programs to run that would be invalid under static enforcement. In the context of static (compile-time) type systems, type safety usually involves (among other things) a guarantee that the eventual value of any expression will be a legitimate member of that expression's static type. The precise requirement is more subtle than this — see, for example, subtyping and polymorphism for complications. Definitions Intuitively, type soundness is captured by Robin Milner's pithy statement: "Well-typed programs cannot 'go wrong'." In other words, if a type system is sound, then expressions accepted by that type system must evaluate to a value of the appropriate type (rather than produce a value of some other, unrelated type or crash with a type error). Vijay Saraswat provides the following, related definition: A language is type-safe if the only operations that can be performed on data in the language are those sanctioned by the type of the data. However, what precisely it means for a program to be "well typed" or to "go wrong" are properties of its static and dynamic semantics, which are specific to each programming language. Consequently, a precise, formal definition of type soundness depends upon the style of formal semantics used to specify a language. In 1994, Andrew Wright and Matthias Felleisen formulated what has become the standard definition and proof technique for type safety in languages defined by operational semantics, which is closest to the notion of type safety as understood by most programmers. Under this approach, the semantics of a language must have the following two properties to be considered type-sound: Progress A well-typed program never gets "stuck": every expression is either already a value or can be reduced towards a value in some well-defined way. In other words, the program never gets into an undefined state where no further transitions are possible. Preservation (or subject reduction) After each evaluation step, the type of each expression remains the same (that is, its type is preserved). A number of other formal treatments of type soundness have also been published in terms of denotational semantics and structural operational semantics. Relation to other forms of safety In isolation, type soundness is a relatively weak property, as it essentially just states that the rules of a type system are internally consistent and cannot be subverted.
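A minimal hedged illustration of these definitions, using Python purely as an example of dynamic enforcement: an operation not sanctioned by the operands' types is trapped as a TypeError, rather than evaluation "going wrong" with an undefined result:

# Dynamic type enforcement: values carry type tags at run time, and an
# ill-typed operation raises TypeError instead of silently misbehaving.
def safe_add(a, b):
    try:
        return a + b
    except TypeError as err:   # the unsanctioned operation is trapped
        return f"rejected: {err}"

print(safe_add(2, 3))     # 5: sanctioned by the int type
print(safe_add("2", 3))   # rejected: can only concatenate str (not "int") to str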
However, in practice, programming languages are designed so that well-typedness also entails other, stronger properties, some of which include: Prevention of illegal operations. For example, a type system can reject the expression 3 / "Hello, World" as invalid, because the division operator is not defined for a string divisor. Memory safety Type systems can prevent wild pointers that could otherwise arise from a pointer to one type of object being treated as a pointer to another type. More sophisticated type systems, such as those supporting dependent types, can detect and reject out-of-bound accesses, preventing potential buffer overflows. Logic errors originating in the semantics of different types. For instance, inches and millimeters may both be stored as integers, but should not be substituted for each other or added. A type system can enforce two different types of integer for them. Type-safe and type-unsafe languages Type safety is usually a requirement for any toy language (i.e. esoteric language) proposed in academic programming language research. Many languages, on the other hand, are too big for human-generated type safety proofs, as they often require checking thousands of cases. Nevertheless, some languages such as Standard ML, which has rigorously defined semantics, have been proved to meet one definition of type safety. Some other languages such as Haskell are believed to meet some definition of type safety, provided certain "escape" features are not used (for example Haskell's unsafePerformIO, used to "escape" from the usual restricted environment in which I/O is possible, circumvents the type system and so can be used to break type safety). Type punning is another example of such an "escape" feature. Regardless of the properties of the language definition, certain errors may occur at run-time due to bugs in the implementation, or in linked libraries written in other languages; such errors could render a given implementation type-unsafe in certain circumstances. An early version of Sun's Java virtual machine was vulnerable to this sort of problem. Strong and weak typing Programming languages are often colloquially classified as strongly typed or weakly typed (also loosely typed) to refer to certain aspects of type safety. In 1974, Liskov and Zilles defined a strongly-typed language as one in which "whenever an object is passed from a calling function to a called function, its type must be compatible with the type declared in the called function." In 1977, Jackson wrote, "In a strongly typed language each data area will have a distinct type and each process will state its communication requirements in terms of these types." In contrast, a weakly typed language may produce unpredictable results or may perform implicit type conversion. Memory management and type safety Type safety is closely linked to memory safety. For instance, in an implementation of a language that has some type T which allows some bit patterns but not others, a dangling pointer memory error allows writing a bit pattern that does not represent a legitimate member of T into a dead variable of type T, causing a type error when the variable is read. Conversely, if the language is memory-safe, it cannot allow an arbitrary integer to be used as a pointer, hence there must be a separate pointer or reference type. As a minimal condition, a type-safe language must not allow dangling pointers across allocations of different types.
But most languages enforce the proper use of abstract data types defined by programmers even when this is not strictly necessary for memory safety or for the prevention of any kind of catastrophic failure. Allocations are given a type describing their contents, and this type is fixed for the duration of the allocation. This allows type-based alias analysis to infer that allocations of different types are distinct. Most type-safe languages use garbage collection. Pierce says, "it is extremely difficult to achieve type safety in the presence of an explicit deallocation operation", due to the dangling pointer problem. However, Rust is generally considered type-safe and uses a borrow checker to achieve memory safety instead of garbage collection. Type safety in object oriented languages In object oriented languages type safety is usually intrinsic in the fact that a type system is in place. This is expressed in terms of class definitions. A class essentially defines the structure of the objects derived from it and an API as a contract for handling these objects. Each time a new object is created it will comply with that contract. Each function that exchanges objects derived from a specific class, or implementing a specific interface, will adhere to that contract: hence in that function the operations permitted on that object will be only those defined by the methods of the class the object implements. This will guarantee that the object's integrity will be preserved. Exceptions to this are object oriented languages that allow dynamic modification of the object structure, or the use of reflection to modify the content of an object to overcome the constraints imposed by the class method definitions. Type safety issues in specific languages Ada Ada was designed to be suitable for embedded systems, device drivers and other forms of system programming, but also to encourage type-safe programming. To resolve these conflicting goals, Ada confines type-unsafety to a certain set of special constructs whose names usually begin with the string Unchecked_. Unchecked_Deallocation can be effectively banned from a unit of Ada text by applying pragma Pure to this unit. It is expected that programmers will use Unchecked_ constructs very carefully and only when necessary; programs that do not use them are type-safe. The SPARK programming language is a subset of Ada that eliminates all its potential ambiguities and insecurities while at the same time adding statically checked contracts to the language features available. SPARK avoids the issues with dangling pointers by disallowing allocation at run time entirely. Ada 2012 adds statically checked contracts to the language itself (in the form of pre- and post-conditions, as well as type invariants). C The C programming language is type-safe in limited contexts; for example, a compile-time error is generated when an attempt is made to convert a pointer to one type of structure to a pointer to another type of structure, unless an explicit cast is used. However, a number of very common operations are non-type-safe; for example, the usual way to print an integer is something like printf("%d", 12), where the %d tells printf at run-time to expect an integer argument. (Something like printf("%s", 12), which tells the function to expect a pointer to a character string and yet supplies an integer argument, may be accepted by compilers, but will produce undefined results.) This is partially mitigated by some compilers (such as gcc) checking type correspondences between printf arguments and format strings. 
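As a concrete illustration of the format-string problem just described, here is a minimal sketch (written in C++, the language of this article's own examples section below; printf comes from <cstdio>). The ill-typed call is left commented out so that the program itself remains well-defined; compilers such as gcc and clang flag it when format checking (e.g., -Wformat, which -Wall enables) is turned on:

#include <cstdio>

int main() {
    std::printf("%d\n", 12);    // well-typed: %d expects an int and receives one
    // std::printf("%s\n", 12); // ill-typed: %s promises a pointer to a character
    //                          // string, but an int is supplied; if compiled and
    //                          // run, the behaviour is undefined
    return 0;
}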
In addition, C, like Ada, provides unspecified or undefined explicit conversions; and unlike in Ada, idioms that use these conversions are very common, and have helped to give C a type-unsafe reputation. For example, the standard way to allocate memory on the heap is to invoke a memory allocation function, such as malloc, with an argument indicating how many bytes are required. The function returns an untyped pointer (type void *), which the calling code must explicitly or implicitly cast to the appropriate pointer type. Pre-standardized implementations of C required an explicit cast to do so; therefore, the code (struct foo *) malloc(sizeof(struct foo)) became the accepted practice. C++ Some features of C++ that promote more type-safe code: The new operator returns a pointer whose type is based on its operand, whereas malloc returns a void pointer. C++ code can use virtual functions and templates to achieve polymorphism without void pointers. Safer casting operators, such as dynamic_cast, which performs run-time type checking. C++11 strongly-typed enumerations cannot be implicitly converted to or from integers or other enumeration types. C++ explicit constructors and C++11 explicit conversion operators prevent implicit type conversions. C# C# is type-safe. It has support for untyped pointers, but these must be accessed using the "unsafe" keyword, the use of which can be prohibited at the compiler level. It has inherent support for run-time cast validation. Casts can be validated by using the "as" keyword, which will return a null reference if the cast is invalid, or by using a C-style cast, which will throw an exception if the cast is invalid. See C Sharp conversion operators. Undue reliance on the object type (from which all other types are derived) runs the risk of defeating the purpose of the C# type system. It is usually better practice to abandon object references in favour of generics, similar to templates in C++ and generics in Java. Java The Java language is designed to enforce type safety. Anything in Java happens inside an object and each object is an instance of a class. To implement the type safety enforcement, each object, before usage, needs to be allocated. Java allows usage of primitive types but only inside properly allocated objects. Sometimes a part of the type safety is implemented indirectly: e.g. the class BigDecimal represents a floating point number of arbitrary precision, but handles only numbers that can be expressed with a finite representation. The operation BigDecimal.divide() calculates a new object as the division of two numbers expressed as BigDecimal. In this case if the division has no finite representation, as when one computes e.g. 1/3=0.33333..., the divide() method can raise an exception if no rounding mode is defined for the operation. Hence the library, rather than the language, guarantees that the object respects the contract implicit in the class definition. Standard ML Standard ML has rigorously defined semantics and is known to be type-safe. However, some implementations, including Standard ML of New Jersey (SML/NJ), its syntactic variant Mythryl, and MLton, provide libraries that offer unsafe operations. These facilities are often used in conjunction with those implementations' foreign function interfaces to interact with non-ML code (such as C libraries) that may require data laid out in specific ways. Another example is the SML/NJ interactive toplevel itself, which must use unsafe operations to execute ML code entered by the user. 
Modula-2 Modula-2 is a strongly-typed language with a design philosophy that requires any unsafe facilities to be explicitly marked as unsafe. This is achieved by "moving" such facilities into a built-in pseudo-library called SYSTEM, from where they must be imported before they can be used. The import thus makes it visible when such facilities are used. Unfortunately, this was not consistently implemented in the original language report and its implementation. There still remained unsafe facilities, such as the type cast syntax and variant records (inherited from Pascal), that could be used without prior import. The difficulty in moving these facilities into the SYSTEM pseudo-module was the lack of any identifier for the facility that could then be imported, since only identifiers can be imported, but not syntax.

IMPORT SYSTEM; (* allows the use of certain unsafe facilities: *)
VAR word : SYSTEM.WORD;
    addr : SYSTEM.ADDRESS;
addr := SYSTEM.ADR(word);

(* but type cast syntax can be used without such import *)
VAR i : INTEGER;
    n : CARDINAL;
n := CARDINAL(i); (* or *) i := INTEGER(n);

The ISO Modula-2 standard corrected this for the type cast facility by changing the type cast syntax into a function called CAST, which has to be imported from pseudo-module SYSTEM. However, other unsafe facilities, such as variant records, remained available without any import from pseudo-module SYSTEM.

IMPORT SYSTEM;
VAR i : INTEGER;
    n : CARDINAL;
i := SYSTEM.CAST(INTEGER, n); (* Type cast in ISO Modula-2 *)

A recent revision of the language applied the original design philosophy rigorously. First, pseudo-module SYSTEM was renamed to UNSAFE to make the unsafe nature of facilities imported from there more explicit. Then all remaining unsafe facilities were either removed altogether (for example variant records) or moved to pseudo-module UNSAFE. For facilities where there is no identifier that could be imported, enabling identifiers were introduced. In order to enable such a facility, its corresponding enabling identifier must be imported from pseudo-module UNSAFE. No unsafe facilities remain in the language that do not require import from UNSAFE.

IMPORT UNSAFE;
VAR i : INTEGER;
    n : CARDINAL;
i := UNSAFE.CAST(INTEGER, n); (* Type cast in Modula-2 Revision 2010 *)

FROM UNSAFE IMPORT FFI; (* enabling identifier for foreign function interface facility *)
<*FFI="C"*> (* pragma for foreign function interface to C *)

Pascal Pascal has had a number of type safety requirements, some of which are kept in some compilers. Where a Pascal compiler dictates "strict typing", two variables cannot be assigned to each other unless they are either compatible (such as conversion of integer to real) or assigned to the identical subtype. For example, if you have the following code fragment:

type
  TwoTypes = record
    I: Integer;
    Q: Real;
  end;

  DualTypes = record
    I: Integer;
    Q: Real;
  end;

var
  T1, T2: TwoTypes;
  D1, D2: DualTypes;

Under strict typing, a variable defined as TwoTypes is not compatible with DualTypes (because the types are not identical, even though the components of these user-defined types are identical), and an assignment of T1 := D2; is illegal. An assignment of T1 := T2; would be legal because the subtypes they are defined to are identical. However, an assignment such as T1.Q := D1.Q; would be legal. Common Lisp In general, Common Lisp is a type-safe language. A Common Lisp compiler is responsible for inserting dynamic checks for operations whose type safety cannot be proven statically. 
However, a programmer may indicate that a program should be compiled with a lower level of dynamic type-checking. A program compiled in such a mode cannot be considered type-safe. C++ examples The following examples illustrate how C++ cast operators can break type safety when used incorrectly. The first example shows how basic data types can be incorrectly cast:

#include <iostream>
using namespace std;

int main() {
    int ival = 5;                                // integer value
    float fval = reinterpret_cast<float&>(ival); // reinterpret bit pattern
    cout << fval << endl;                        // output integer as float
    return 0;
}

In this example, reinterpret_cast explicitly prevents the compiler from performing a safe conversion from integer to floating-point value. When the program runs it will output a garbage floating-point value. The problem could have been avoided by instead writing float fval = ival; The next example shows how object references can be incorrectly downcast:

#include <iostream>
using namespace std;

class Parent {
public:
    virtual ~Parent() {} // virtual destructor for RTTI
};

class Child1 : public Parent {
public:
    int a;
};

class Child2 : public Parent {
public:
    float b;
};

int main() {
    Child1 c1;
    c1.a = 5;
    Parent & p = c1;                       // upcast always safe
    Child2 & c2 = static_cast<Child2&>(p); // invalid downcast
    cout << c2.b << endl;                  // will output garbage data
    return 0;
}

The two child classes have members of different types. When downcasting a parent class pointer to a child class pointer, the resulting pointer may not point to a valid object of the correct type. In the example, this leads to a garbage value being printed. The problem could have been avoided by replacing static_cast with dynamic_cast, which throws an exception on invalid casts. See also Type theory Notes References Programming language topics Type theory Articles with example Pascal code
Type safety
[ "Mathematics", "Engineering" ]
4,026
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Type theory", "Software engineering", "Programming language topics" ]
16,153,022
https://en.wikipedia.org/wiki/Expanded%20genetic%20code
An expanded genetic code is an artificially modified genetic code in which one or more specific codons have been re-allocated to encode an amino acid that is not among the 22 common naturally-encoded proteinogenic amino acids. The key prerequisites to expand the genetic code are: the non-standard amino acid to encode, an unused codon to adopt, a tRNA that recognizes this codon, and a tRNA synthetase that recognizes only that tRNA and only the non-standard amino acid. Expanding the genetic code is an area of research of synthetic biology, an applied biological discipline whose goal is to engineer living systems for useful purposes. The genetic code expansion enriches the repertoire of useful tools available to science. In May 2019, researchers, in a milestone effort, reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 61 codons (eliminating two of the six codons coding for serine and one of the three stop codons), of which 59 are used to encode the 20 amino acids. Introduction It is noteworthy that the genetic code for all organisms is basically the same, so that all living beings use the same 'genetic language'. In general, the introduction of new functional unnatural amino acids into proteins of living cells breaks the universality of the genetic language, which ideally leads to alternative life forms. Proteins are produced thanks to the molecules of the translational system, which decode the RNA messages into a string of amino acids. The translation of genetic information contained in messenger RNA (mRNA) into a protein is catalysed by ribosomes. Transfer RNAs (tRNA) are used as keys to decode the mRNA into its encoded polypeptide. The tRNA recognizes a specific three-nucleotide codon in the mRNA with a complementary sequence called the anticodon on one of its loops. Each three-nucleotide codon is translated into one of twenty naturally occurring amino acids. There is at least one tRNA for any codon, and sometimes multiple codons code for the same amino acid. Many tRNAs are compatible with several codons. An enzyme called an aminoacyl tRNA synthetase covalently attaches the amino acid to the appropriate tRNA. Most cells have a different synthetase for each amino acid (20 or more synthetases). On the other hand, some bacteria have fewer than 20 aminoacyl tRNA synthetases, and introduce the "missing" amino acid(s) by modification of a structurally related amino acid by an aminotransferase enzyme. A feature exploited in the expansion of the genetic code is the fact that the aminoacyl tRNA synthetase often does not recognize the anticodon, but another part of the tRNA, meaning that if the anticodon were to be mutated the encoding of that amino acid would change to a new codon. In the ribosome, the information in mRNA is translated into a specific amino acid when the mRNA codon matches with the complementary anticodon of a tRNA, and the attached amino acid is added onto a growing polypeptide chain. When it is released from the ribosome, the polypeptide chain folds into a functioning protein. In order to incorporate a novel amino acid into the genetic code several changes are required. First, for successful translation of a novel amino acid, the codon to which the novel amino acid is assigned cannot already code for one of the 20 natural amino acids. Usually a nonsense codon (stop codon) or a four-base codon is used. 
Second, a novel pair of tRNA and aminoacyl tRNA synthetase is required; these are called the orthogonal set. The orthogonal set must not crosstalk with the endogenous tRNA and synthetase sets, while still being functionally compatible with the ribosome and other components of the translation apparatus. The active site of the synthetase is modified to accept only the novel amino acid. Most often, a library of mutant synthetases is screened for one which charges the tRNA with the desired amino acid. The synthetase is also modified to recognize only the orthogonal tRNA. The tRNA/synthetase pair is often engineered in other bacteria or eukaryotic cells. In this area of research, the 20 encoded proteinogenic amino acids are referred to as standard amino acids, or alternatively as natural or canonical amino acids, while the added amino acids are called non-standard amino acids (NSAAs), or unnatural amino acids (uAAs; term not used in papers dealing with natural non-proteinogenic amino acids, such as phosphoserine), or non-canonical amino acids. Non-standard amino acids The first element of the system is the amino acid that is added to the genetic code of a certain strain of organism. Over 71 different NSAAs have been added to different strains of E. coli, yeast or mammalian cells. Due to technical details (easier chemical synthesis of NSAAs, less crosstalk and easier evolution of the aminoacyl-tRNA synthetase), the NSAAs are generally larger than standard amino acids and most often have a phenylalanine core but with a large variety of different substituents. These allow a large repertoire of new functions, such as labeling, use as a fluorescent reporter (e.g. dansylalanine), or the production of proteins in E. coli with eukaryotic post-translational modifications (e.g. phosphoserine, phosphothreonine, and phosphotyrosine). The founding work was reported by Rolf Furter, who singlehandedly used a yeast tRNAPhe/PheRS pair to incorporate p-iodophenylalanine in E. coli. Unnatural amino acids incorporated into proteins include heavy atom-containing amino acids to facilitate certain x-ray crystallographic studies; amino acids with novel steric/packing and electronic properties; photocrosslinking amino acids which can be used to probe protein-protein interactions in vitro or in vivo; keto, acetylene, azide, and boronate-containing amino acids which can be used to selectively introduce a large number of biophysical probes, tags, and novel chemical functional groups into proteins in vitro or in vivo; redox active amino acids to probe and modulate electron transfer; photocaged and photoisomerizable amino acids to photoregulate biological processes; metal binding amino acids for catalysis and metal ion sensing; amino acids that contain fluorescent or infra-red active side chains to probe protein structure and dynamics; α-hydroxy acids and D-amino acids as probes of backbone conformation and hydrogen bonding interactions; and sulfated amino acids and mimetics of phosphorylated amino acids as probes of post-translational modifications. Availability of the non-standard amino acid requires that the organism either import it from the medium or biosynthesize it. In the first case, the unnatural amino acid is first synthesized chemically in its optically pure L-form. It is then added to the growth medium of the cell. 
A library of compounds is usually tested for use in incorporation of the new amino acid, but this is not always necessary; for example, various transport systems can handle unnatural amino acids with apolar side-chains. In the second case, a biosynthetic pathway needs to be engineered, for example, an E. coli strain that biosynthesizes a novel amino acid (p-aminophenylalanine) from basic carbon sources and includes it in its genetic code. Another example is the production of phosphoserine, a natural metabolite, which consequently required alteration of its pathway flux to increase its production. Codon assignment Another element of the system is a codon to allocate to the new amino acid. A major problem for the genetic code expansion is that there are no free codons. The genetic code has a non-random layout that shows tell-tale signs of various phases of primordial evolution; however, it has since frozen into place and is near-universally conserved. Nevertheless, some codons are rarer than others. In fact, in E. coli (and all organisms) the codon usage is not equal, but presents several rare codons, the rarest being the amber stop codon (UAG). Amber codon suppression The possibility of reassigning codons was realized by Normanly et al. in 1990, when a viable mutant strain of E. coli read through the UAG ("amber") stop codon. This was possible thanks to the rarity of this codon and the fact that release factor 1 alone makes the amber codon terminate translation. Later, in the Schultz lab, the tRNATyr/tyrosyl-tRNA synthetase (TyrRS) from Methanococcus jannaschii, an archaeon, was used to introduce a tyrosine instead of STOP, the default value of the amber codon. This was possible because of the differences between the endogenous bacterial synthetases and the orthologous archaeal synthetase, which do not recognize each other. Subsequently, the group evolved the orthogonal tRNA/synthetase pair to utilize the non-standard amino acid O-methyltyrosine. This was followed by the larger naphthylalanine and the photocrosslinking benzoylphenylalanine, which proved the potential utility of the system. The amber codon is the least used codon in Escherichia coli, but hijacking it results in a substantial loss of fitness. One study, in fact, found that there were at least 83 peptides majorly affected by the readthrough. Additionally, the labelling was incomplete. As a consequence, several strains have been made to reduce the fitness cost, including the removal of all amber codons from the genome. In most E. coli K-12 strains (see Escherichia coli (molecular biology) for strain pedigrees) there are 314 UAG stop codons. Consequently, a gargantuan amount of work has gone into the replacement of these. One approach pioneered by the group of Prof. George Church from Harvard was based on the methods dubbed MAGE and CAGE (multiplex automated genome engineering and conjugative assembly genome engineering, respectively): this relied on multiplex transformation and subsequent strain recombination to remove all UAG codons—the latter part presented a halting point in a first paper, but was overcome. This resulted in the E. coli strain C321.ΔA, which lacks all UAG codons and RF1. This allowed an experiment to be done with this strain to make it "addicted" to the amino acid biphenylalanine by evolving several key enzymes to require it structurally, therefore putting its expanded genetic code under positive selection. Rare sense codon reassignment In addition to the amber codon, rare sense codons have also been considered for use. 
The AGG codon codes for arginine, but a strain has been successfully modified to make it code for 6-N-allyloxycarbonyl-lysine. Another candidate is the AUA codon, which is unusual in that its respective tRNA has to differentiate against AUG, which codes for methionine (primordially, isoleucine, hence its location). In order to do this, the AUA tRNA has a special base, lysidine. The deletion of the lysidine synthetase gene (tilS) was possible thanks to the replacement of the native tRNA with that of Mycoplasma mobile (no lysidine). The reduced fitness is a first step towards pressuring the strain to lose all instances of AUA, allowing it to be used for genetic code expansion. E. coli strain Syn61 is a variant where all uses of the TCG (Ser), TCA (Ser) and TAG (stop) codons are eliminated using a synthetic genome (see below). By removing the unneeded tRNA genes and RF1, strain Syn61Δ3 was produced. The three freed codons then become available for adding three special residues, as demonstrated in strain "Syn61Δ3(ev4)". Four base (quadruplet) codons While triplet codons are the basis of the genetic code in nature, programmed +1 frameshift is a natural process that allows the use of a four-nucleotide sequence (quadruplet codon) to encode an amino acid. Recent developments in genetic code engineering also showed that quadruplet codons could be used to encode non-standard amino acids under experimental conditions. This allowed the simultaneous usage of two unnatural amino acids, p-azidophenylalanine (pAzF) and N6-[(2-propynyloxy)carbonyl]lysine (CAK), which cross-link with each other by Huisgen cycloaddition. Quadruplet decoding in wild-type, non-recoded strains is very inefficient. This stems from the fact that the interaction between engineered tRNAs and ternary complexes or other translation components is not as favorable and strong as with cell endogenous translation elements. This problem can be overcome by specifically engineering and evolving tRNAs that can decode quadruplet codons in non-recoded strains. Up to 4 different quadruplet orthogonal tRNA/tRNA synthetase pairs can be generated in this manner. The quadruplet codon decoding approach has also been applied to the construction of an HIV-1 vaccine. tRNA/synthetase pair Another key element is the tRNA/synthetase pair. The orthologous set of synthetase and tRNA can be mutated and screened through directed evolution to charge the tRNA with a different, even novel, amino acid. Mutations to the plasmid containing the pair can be introduced by error-prone PCR or through degenerate primers for the synthetase's active site. Selection involves multiple rounds of a two-step process, where the plasmid is transferred into cells expressing chloramphenicol acetyl transferase with a premature amber codon. In the presence of toxic chloramphenicol and the non-natural amino acid, the surviving cells will have overridden the amber codon using the orthogonal tRNA aminoacylated with either the standard amino acids or the non-natural one. To remove the former, the plasmid is inserted into cells with a barnase gene (toxic) with a premature amber codon but without the non-natural amino acid, removing all the orthogonal synthetases that do not specifically recognize the non-natural amino acid. In addition to the recoding of the tRNA to a different codon, it can be mutated to recognize a four-base codon, allowing additional free coding options. 
The incorporated non-natural amino acid, as a result, introduces diverse physicochemical and biological properties, and can be used as a tool to explore protein structure and function or to create novel or enhanced proteins for practical purposes. Orthogonal sets in model organisms The orthogonal pairs of synthetase and tRNA that work for one organism may not work for another, as the synthetase may mis-aminoacylate endogenous tRNAs or the tRNA may itself be mis-aminoacylated by an endogenous synthetase. As a result, the sets created to date differ between organisms. In 2017, a mouse engineered with an extended genetic code that can produce proteins with unnatural amino acids was reported. Orthogonal ribosomes Similarly to orthogonal tRNAs and aminoacyl tRNA synthetases (aaRSs), orthogonal ribosomes have been engineered to work in parallel to the natural ribosomes. Orthogonal ribosomes ideally use different mRNA transcripts than their natural counterparts and ultimately should draw on a separate pool of tRNA as well. This should alleviate some of the loss of fitness which currently still arises from techniques such as amber codon suppression. Additionally, orthogonal ribosomes can be mutated and optimized for particular tasks, like the recognition of quadruplet codons. Such an optimization is not possible, or would be highly disadvantageous, for natural ribosomes. o-Ribosome In 2005, three sets of ribosomes were published which did not recognize natural mRNA, but instead translated a separate pool of orthogonal mRNA (o-mRNA). This was achieved by changing the recognition sequence of the mRNA, the Shine-Dalgarno sequence, and the corresponding recognition sequence in the 16S rRNA of ribosomes, the so-called anti-Shine-Dalgarno sequence. In this way the base pairing, which would otherwise be lost when only one of the sequences is mutated, is retained. However, the mutations in the 16S rRNA were not limited to the obviously base-pairing nucleotides of the classical anti-Shine-Dalgarno sequence. Ribo-X In 2007, the group of Jason W. Chin presented an orthogonal ribosome which was optimized for amber codon suppression. The 16S rRNA was mutated in such a way that it bound the release factor RF1 less strongly than the natural ribosome does. This ribosome did not eliminate the problem of lowered cell fitness caused by suppressed stop codons in natural proteins. However, through the improved specificity it raised the yields of correctly synthesized target protein significantly (from ~20% to >60% for one suppressed amber codon and from <1% to >20% for two amber codons). Ribo-Q In 2010, the group of Jason W. Chin presented a further optimized version of the orthogonal ribosome. The Ribo-Q is a 16S rRNA optimized to recognize tRNAs that have quadruplet anticodons to recognize quadruplet codons, instead of the natural triplet codons. With this approach the number of possible codons rises from 64 to 256. Even accounting for a variety of stop codons, more than 200 different amino acids could potentially be encoded this way. Ribosome stapling The orthogonal ribosomes described above all focus on optimizing the 16S rRNA. Thus far, this optimized 16S rRNA was combined with natural large subunits to form orthogonal ribosomes. If the 23S rRNA, the main RNA component of the large ribosomal subunit, was to be optimized as well, it had to be ensured that there was no crosstalk in the assembly of orthogonal and natural ribosomes. 
To ensure that optimized 23S rRNA would only form into ribosomes with the optimized 16S rRNA, the two rRNAs were combined into one transcript. By inserting the sequence for the 23S rRNA into a loop region of the 16S rRNA sequence, both subunits still adopt functioning folds. Since the two rRNAs are linked and thus in constant proximity, they preferably bind each other, not other free-floating ribosomal subunits. Engineered peptidyl transferase center In 2014, it was shown that by altering the peptidyl transferase center of the 23S rRNA, ribosomes could be created which draw on orthogonal pools of tRNA. The 3' end of tRNAs is universally conserved to be CCA. The two cytidines base pair with two guanines in the 23S rRNA to bind the tRNA to the ribosome. This interaction is required for translational fidelity. However, by co-mutating the binding nucleotides in such a way that they can still base pair, the translational fidelity can be conserved. The 3'-end of the tRNA is mutated from CCA to CGA, while two cytidine nucleotides in the ribosome's A and P sites are mutated to guanosine. This leads to ribosomes which do not accept naturally occurring tRNAs as substrates and to tRNAs which cannot be used as substrates by natural ribosomes. To use such tRNAs effectively, they would have to be aminoacylated by specific, orthogonal aaRSs. Most naturally occurring aaRSs recognize the 3'-end of their corresponding tRNA. aaRSs for these 3'-mutated tRNAs are not available yet. Thus far, this system has only been shown to work in an in-vitro translation setting where the aminoacylation of the orthogonal tRNA was achieved using so-called "flexizymes". Flexizymes are ribozymes with tRNA-aminoacylation activity. Applications With an expanded genetic code, the unnatural amino acid can be genetically directed to any chosen site in the protein of interest. The high efficiency and fidelity of this process allows a better control of the placement of the modification compared to modifying the protein post-translationally, which, in general, will target all amino acids of the same type, such as the thiol group of cysteine and the amino group of lysine. Also, an expanded genetic code allows modifications to be carried out in vivo. The ability to site-specifically direct lab-synthesized chemical moieties into proteins allows many types of studies that would otherwise be extremely difficult, such as: Probing protein structure and function: By using amino acids with slightly different sizes such as O-methyltyrosine or dansylalanine instead of tyrosine, and by inserting genetically coded reporter moieties (color-changing and/or spin-active) into selected protein sites, chemical information about the protein's structure and function can be measured. Probing the role of post-translational modifications in protein structure and function: By using amino acids that mimic post-translational modifications such as phosphoserine, biologically active protein can be obtained, and the site-specific nature of the amino acid incorporation can lead to information on how the position, density, and distribution of protein phosphorylation affect protein function. Identifying and regulating protein activity: By using photocaged amino acids, protein function can be "switched" on or off by illuminating the organism. 
Changing the mode of action of a protein: One can start with the gene for a protein that binds a certain sequence of DNA and, by inserting a chemically active amino acid into the binding site, convert it to a protein that cuts the DNA rather than binding it. Improving immunogenicity and overcoming self-tolerance: By replacing strategically chosen tyrosines with p-nitrophenylalanine, a tolerated self-protein can be made immunogenic. Selective destruction of selected cellular components: Using an expanded genetic code, unnatural, destructive chemical moieties (sometimes called "chemical warheads") can be incorporated into proteins that target specific cellular components. Producing better proteins: The evolution of T7 bacteriophages on a non-evolving E. coli strain that encoded 3-iodotyrosine on the amber codon resulted in a population fitter than the wild type, thanks to the presence of iodotyrosine in its proteome. Probing protein localization and protein-protein interaction in bacteria. Future The expansion of the genetic code is still in its infancy. Current methodology uses only one non-standard amino acid at a time, whereas ideally multiple could be used. In fact, the group of Jason Chin has recently broken the record for a genetically recoded E. coli strain that can simultaneously incorporate up to 4 unnatural amino acids. Moreover, there have been developments in software that allow the combination of orthogonal ribosomes and unnatural tRNA/RS pairs in order to improve protein yield and fidelity. Recoded synthetic genome One way to achieve the encoding of multiple unnatural amino acids is by synthesising a rewritten genome. In 2010, at a cost of $40 million, an organism, Mycoplasma laboratorium, was constructed that was controlled by a synthetic, but not recoded, genome. The first genetically recoded organism was created by a collaboration between George Church's and Farren Isaacs' labs, when a wild-type strain was recoded in such a way that all 321 known UAG stop codons were substituted with synonymous UAA codons and release factor 1 was knocked out in order to eliminate its interaction with the reassigned stop codon and improve unnatural protein synthesis. In 2019, Escherichia coli Syn61 was created, with a 4 megabase recoded genome consisting of only 61 codons instead of the natural 64. In addition to the elimination of the usage of rare codons, the specificity of the system needs to be increased, as many tRNAs recognise several codons. Expanded genetic alphabet Another approach is to expand the number of nucleobases to increase the coding capacity. An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. A demonstration of UBPs was achieved in vitro by Ichiro Hirao's group at the RIKEN institute in Japan. In 2002, they developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification. 
In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides making up the unnatural base pair were named d5SICS and dNaM. More technically, these artificial nucleotides, bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. In 2014 the same team from the Scripps Research Institute reported that they synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli that successfully replicated the unnatural base pairs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate the plasmid containing d5SICS–dNaM. The successful incorporation of a third base pair into a living micro-organism is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, and by including individual artificial nucleotides in the culture media, were able to induce amplification of the plasmids containing the artificial nucleotides by a factor of $2\times10^{7}$ (24 doublings); they did not create mRNA or proteins able to use the artificial nucleotides. Related methods Selective pressure incorporation (SPI) method for production of alloproteins There have been many studies that have produced proteins with non-standard amino acids, but they do not alter the genetic code. These proteins, called alloproteins, are made by incubating cells with an unnatural amino acid in the absence of a similar coded amino acid in order for the former to be incorporated into proteins in place of the latter, for example L-2-aminohexanoic acid (Ahx) for methionine (Met). These studies rely on the natural promiscuous activity of the aminoacyl tRNA synthetase to add to its target tRNA an unnatural amino acid (i.e. analog) similar to the natural substrate, for example methionyl-tRNA synthetase's mistaking isoleucine for methionine. In protein crystallography, for example, the addition of selenomethionine to the media of a culture of a methionine-auxotrophic strain results in proteins containing selenomethionine as opposed to methionine (see multi-wavelength anomalous dispersion for the reason). Another example is that photoleucine and photomethionine are added instead of leucine and methionine to cross-label proteins. 
Similarly, some tellurium-tolerant fungi can incorporate tellurocysteine and telluromethionine into their proteins instead of cysteine and methionine. The objective of expanding the genetic code is more radical, as it does not replace an amino acid but adds one or more to the code. On the other hand, proteome-wide replacements are most efficiently performed by global amino acid substitutions. For example, global proteome-wide substitutions of natural amino acids with fluorinated analogs have been attempted in E. coli and B. subtilis. A complete tryptophan substitution with thienopyrrole-alanine in response to 20899 UGG codons in E. coli was reported in 2015 by Budisa and Söll. Moreover, many biological phenomena, such as protein folding and stability, are based on synergistic effects at many positions in the protein sequence. In this context, the SPI method generates recombinant protein variants or alloproteins directly by substitution of natural amino acids with unnatural counterparts. An amino acid auxotrophic expression host is supplemented with an amino acid analog during target protein expression. This approach avoids the pitfalls of suppression-based methods and is superior to them in terms of efficiency, reproducibility and an extremely simple experimental setup. Numerous studies have demonstrated that global substitution of canonical amino acids with various isosteric analogs caused minimal structural perturbations but dramatic changes in thermodynamic stability, folding, aggregation, spectral properties and enzymatic activity. In vitro synthesis The genetic code expansion described above occurs in vivo. An alternative is to change the coding in in vitro translation experiments. This requires the depletion of all tRNAs and the selective reintroduction of certain aminoacylated tRNAs, some of them chemically aminoacylated. Chemical synthesis There are several techniques to produce peptides chemically; generally this is done by solid-phase synthesis using protecting-group chemistry. This means that any (protected) amino acid can be added into the nascent sequence. In November 2017, a team from the Scripps Research Institute reported having constructed a semi-synthetic E. coli bacterium whose genome uses six different nucleotides (versus the four found in nature). The two extra 'letters' form a third, unnatural base pair. The resulting organisms were able to thrive and synthesize proteins using "unnatural amino acids". The unnatural base pair used is dNaM–dTPT3. This unnatural base pair had been demonstrated previously, but this is the first report of transcription and translation of proteins using an unnatural base pair. See also Bioengineering Directed evolution Hachimoji DNA List of genetic codes Nucleic acid analogue Non-proteinogenic amino acids Protein labelling Protein methods Synthetic biology Xenobiology References Molecular genetics Nucleic acids Synthetic biology
Expanded genetic code
[ "Chemistry", "Engineering", "Biology" ]
6,813
[ "Synthetic biology", "Biomolecules by chemical classification", "Biological engineering", "Bioinformatics", "Molecular genetics", "Molecular biology", "Nucleic acids" ]
16,159,670
https://en.wikipedia.org/wiki/Coxeter%27s%20loxodromic%20sequence%20of%20tangent%20circles
In geometry, Coxeter's loxodromic sequence of tangent circles is an infinite sequence of circles arranged so that any four consecutive circles in the sequence are pairwise mutually tangent. This means that each circle in the sequence is tangent to the three circles that precede it and also to the three circles that follow it. Properties The radii of the circles in the sequence form a geometric progression with ratio $k = \varphi + \sqrt{\varphi} \approx 2.89$, where $\varphi = \tfrac{1+\sqrt{5}}{2}$ is the golden ratio. This ratio and its reciprocal satisfy the equation $(1 + x + x^2 + x^3)^2 = 2\,(1 + x^2 + x^4 + x^6)$, and so any four consecutive circles in the sequence meet the conditions of Descartes' theorem. The centres of the circles in the sequence lie on a logarithmic spiral. Viewed from the centre of the spiral, the angle between the centres of successive circles is the same for every consecutive pair. The angle between consecutive triples of centers is the same as one of the angles of the Kepler triangle, a right triangle whose construction also involves the square root of the golden ratio. History and related constructions The construction is named after geometer H. S. M. Coxeter, who generalised the two-dimensional case to sequences of spheres and hyperspheres in higher dimensions. It can be interpreted as a degenerate special case of the Doyle spiral. See also Apollonian gasket References External links Circle packing Golden ratio Eponyms in geometry
Coxeter's loxodromic sequence of tangent circles
[ "Mathematics" ]
263
[ "Geometry problems", "Eponyms in geometry", "Packing problems", "Golden ratio", "Circle packing", "Geometry", "Mathematical problems" ]
16,161,178
https://en.wikipedia.org/wiki/Effective%20number%20of%20codons
Effective number of codons (abbreviated as ENC or Nc) is a measure used to study the state of codon usage bias in genes and genomes. The way that ENC is computed has obvious similarities to the computation of the effective population size in population genetics. Although ENC values are easy to compute, this measure has been shown to be one of the best indicators of codon usage bias. Since the original proposal of the ENC, several investigators have tried to improve the method, but it seems that there is still much room to improve this measure. References Molecular biology
Effective number of codons
[ "Chemistry", "Biology" ]
119
[ "Biochemistry", "Molecular biology" ]
16,161,577
https://en.wikipedia.org/wiki/Achmatowicz%20reaction
The Achmatowicz reaction, also known as the Achmatowicz rearrangement, is an organic synthesis in which a furan is converted to a dihydropyran. In the original publication by the Polish chemist Osman Achmatowicz Jr. (b. 20 December 1931 in Vilnius) in 1971, furfuryl alcohol is reacted with bromine in methanol to give 2,5-dimethoxy-2,5-dihydrofuran, which rearranges to the dihydropyran with dilute sulfuric acid. Additional reaction steps, alcohol protection (with methyl orthoformate and boron trifluoride) and then ketone reduction with sodium borohydride, produce an intermediate from which many monosaccharides can be synthesised. The Achmatowicz protocol has been used in total synthesis, including those of desoxoprosophylline and pyrenophorin. Recently it has been used in diversity-oriented synthesis and in enantiomeric scaffolding. References Organic reactions Name reactions
Achmatowicz reaction
[ "Chemistry" ]
218
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
13,459,016
https://en.wikipedia.org/wiki/Cognitive%20description
Cognitive description is a term used in psychology to describe the cognitive workings of the human mind. A cognitive description specifies what information is utilized during a cognitive action, how this information is processed and transformed, what data structures are used, and what behaviour is generated. A fundamental concept in cognitive science, cognitive description thus elucidates the processes and mechanisms underlying cognitive actions. This domain is interdisciplinary, intertwining psychology, neuroscience, linguistics, and computer science. Definition and Core Aspects Cognitive description concerns itself with detailing how cognitive actions are executed from start to finish. It addresses several key aspects: Information Utilization: This involves identifying what specific information is required and accessed during a cognitive action, such as sensory data or memories. Information Processing and Transformation: Here, the focus is on how information is processed: the mental algorithms and operations applied to transform the input information. Data Structures: This relates to the internal cognitive structures, such as schemas and mental models, that organize and store information. Generated Behaviour: Finally, cognitive description explains the behaviour that results from these processes, including decision-making, problem-solving, and physical actions. Significance in Cognitive Science The significance of cognitive descriptions lies in their ability to offer a structured, detailed analysis of mental operations. This analysis is instrumental in formulating theories about the human mind and its functioning. Additionally, it provides a framework for designing and interpreting cognitive research experiments. Applications and Real-World Relevance Cognitive descriptions have practical applications across various fields: Education: They aid in developing teaching methods that align with how information is processed and understood. Artificial Intelligence: Insights from cognitive descriptions inform the development of AI algorithms that mimic human cognitive processes. Clinical Psychology: They are crucial in diagnosing and treating cognitive impairments and understanding mental health disorders. Future Directions Future advancements in cognitive description are expected to integrate more deeply with neuroscience, linking cognitive processes with brain activities and structures. There is also a growing emphasis on understanding these processes in diverse cultural and developmental contexts. See also Cognitive module Cognition Cognition disorder References Behavioural sciences Cognitive psychology Evolutionary psychology Ethology Semantics
Cognitive description
[ "Biology" ]
446
[ "Behavioural sciences", "Ethology", "Behavior", "Cognitive psychology" ]
13,459,707
https://en.wikipedia.org/wiki/Aminocoumarin
Aminocoumarin is a class of antibiotics that act by inhibiting the DNA gyrase enzyme involved in cell division in bacteria. They are derived from Streptomyces species, whose best-known representative – Streptomyces coelicolor – was completely sequenced in 2002. The aminocoumarin antibiotics include: Novobiocin, Albamycin (Pharmacia and Upjohn), Coumermycin, Clorobiocin. Structure The core of aminocoumarin antibiotics is made up of a 3-amino-4,7-dihydroxycoumarin ring, which is linked, for example, to a sugar at the 7-position and a benzoic acid derivative at the 3-position. Clorobiocin is a natural antibiotic isolated from several Streptomyces strains and differs from novobiocin in that the methyl group at the 8-position in the coumarin ring of novobiocin is replaced by a chlorine atom, and the carbamoyl at the 3' position of the noviose sugar is substituted by a 5-methyl-2-pyrrolylcarbonyl group. Mechanism of action The aminocoumarin antibiotics are known inhibitors of DNA gyrase. Antibiotics of the aminocoumarin family exert their therapeutic activity by binding tightly to the B subunit of bacterial DNA gyrase, thereby inhibiting this essential enzyme. They compete with ATP for binding to the B subunit of this enzyme and inhibit the ATP-dependent DNA supercoiling catalysed by gyrase. X-ray crystallography studies have confirmed binding at the ATP-binding site located on the GyrB subunit of DNA gyrase. Their affinity for gyrase is considerably higher than that of modern fluoroquinolones, which also target DNA gyrase but at the GyrA subunit. Resistance Resistance to this class of antibiotics usually results from genetic mutation in the gyrB gene. Other mechanisms include de novo synthesis of a coumarin-resistant gyrase B subunit by the novobiocin producer S. sphaeroides. Clinical use The clinical use of this antibiotic class has been restricted due to its low water solubility, low activity against gram-negative bacteria, and in vivo toxicity. References Antibiotics Coumarin drugs
Aminocoumarin
[ "Biology" ]
484
[ "Antibiotics", "Biocides", "Biotechnology products" ]
13,460,646
https://en.wikipedia.org/wiki/Dimaprit
Dimaprit is a histamine analog that acts as a selective histamine H2 receptor agonist. References Biogenic amines Amidines Thioethers
Dimaprit
[ "Chemistry" ]
35
[ "Biomolecules by chemical classification", "Amidines", "Biogenic amines", "Functional groups", "Bases (chemistry)" ]
13,461,936
https://en.wikipedia.org/wiki/Timoshenko%E2%80%93Ehrenfest%20beam%20theory
The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of fourth order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam; the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter (in principle comparable to the height of the beam or shorter), and thus the distance between opposing shear forces decreases. The rotary inertia effect was introduced by Bresse and Rayleigh. If the shear modulus of the beam material approaches infinity—and thus the beam becomes rigid in shear—and if rotational inertia effects are neglected, Timoshenko beam theory converges towards Euler–Bernoulli beam theory. Quasistatic Timoshenko beam In static Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by $u_x(x,y,z) = -z\,\varphi(x)$, $u_y(x,y,z) = 0$, $u_z(x,y,z) = w(x)$, where $(x,y,z)$ are the coordinates of a point in the beam, $u_x, u_y, u_z$ are the components of the displacement vector in the three coordinate directions, $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam, and $w$ is the displacement of the mid-surface in the $z$-direction. The governing equations are the following coupled system of ordinary differential equations: $\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right) = q(x)$ and $\frac{\mathrm{d}w}{\mathrm{d}x} = \varphi - \frac{1}{\kappa AG}\,\frac{\mathrm{d}}{\mathrm{d}x}\left(EI\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right)$. The Timoshenko beam theory for the static case is equivalent to the Euler–Bernoulli theory when the last term above is neglected, an approximation that is valid when $\frac{EI}{\kappa A G L^2} \ll 1$, where $L$ is the length of the beam, $A$ is the cross section area, $E$ is the elastic modulus, $G$ is the shear modulus, $I$ is the second moment of area, $\kappa$, called the Timoshenko shear coefficient, depends on the geometry (normally, $\kappa = 5/6$ for a rectangular section), $q(x)$ is a distributed load (force per length), $w$ is the displacement of the mid-surface in the $z$-direction, and $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam. Combining the two equations gives, for a homogeneous beam of constant cross-section, $EI\,\frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x) - \frac{EI}{\kappa AG}\,\frac{\mathrm{d}^2 q}{\mathrm{d}x^2}$. The bending moment $M_{xx}$ and the shear force $Q_x$ in the beam are related to the displacement $w$ and the rotation $\varphi$. 
These relations, for a linear elastic Timoshenko beam, are: $M_{xx} = -EI\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}$ and $Q_x = \kappa\,AG\left(-\varphi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)$. Derivation of quasistatic Timoshenko beam equations From the kinematic assumptions for a Timoshenko beam, the displacements of the beam are given by $u_x = -z\,\varphi(x)$, $u_y = 0$, $u_z = w(x)$. Then, from the strain-displacement relations for small strains, the non-zero strains based on the Timoshenko assumptions are $\varepsilon_{xx} = -z\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}$ and $\gamma_{xz} = \frac{\mathrm{d}w}{\mathrm{d}x} - \varphi$. Since the actual shear strain in the beam is not constant over the cross section, we introduce a correction factor $\kappa$ such that the shear stress resultant is $Q_x = \kappa AG\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right)$. Defining the stress resultants $M_{xx} = \int_A z\,\sigma_{xx}\,\mathrm{d}A$ and $Q_x = \int_A \sigma_{xz}\,\mathrm{d}A$, writing the variation of the internal energy of the beam, integrating by parts, and noting that because of the boundary conditions the variations are zero at the ends of the beam, the principle of virtual work for a quasistatic beam under a transverse load $q(x)$ per unit length yields, by the fundamental theorem of variational calculus, the equilibrium equations $\frac{\mathrm{d}M_{xx}}{\mathrm{d}x} - Q_x = 0$ and $\frac{\mathrm{d}Q_x}{\mathrm{d}x} + q = 0$. For a linear elastic beam, $M_{xx} = -EI\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}$ and $Q_x = \kappa AG\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right)$; therefore the governing equations for the beam may be expressed as the coupled system quoted above, and combining the two equations together gives the fourth-order combined equation. Boundary conditions The two equations that describe the deformation of a Timoshenko beam have to be augmented with boundary conditions if they are to be solved. Four boundary conditions are needed for the problem to be well-posed. Typical boundary conditions are: Simply supported beams: the displacement $w$ is zero at the locations of the two supports. The bending moment $M_{xx}$ applied to the beam also has to be specified. The rotation $\varphi$ and the transverse shear force $Q_x$ are not specified. Clamped beams: the displacement $w$ and the rotation $\varphi$ are specified to be zero at the clamped end. If one end is free, shear force and bending moment have to be specified at that end. Strain energy of a Timoshenko beam The strain energy of a Timoshenko beam is expressed as a sum of strain energy due to bending and shear. Both these components are quadratic in their variables. The strain energy function of a Timoshenko beam can be written as $U = \frac{1}{2}\int_0^L \left[ EI\left(\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right)^2 + \kappa AG\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right)^2 \right]\mathrm{d}x$. Example: Cantilever beam For a cantilever beam, one boundary is clamped while the other is free. Let us use a right-handed coordinate system where the $x$ direction is positive towards the right and the $z$ direction is positive upward. Following normal convention, we assume that positive forces act in the positive directions of the $x$ and $z$ axes and positive moments act in the clockwise direction. We also assume that the sign convention of the stress resultants ($M_{xx}$ and $Q_x$) is such that positive bending moments compress the material at the bottom of the beam (lower $z$ coordinates) and positive shear forces rotate the beam in a counterclockwise direction. Let us assume that the clamped end is at $x = 0$ and the free end is at $x = L$. If a point load $P$ is applied to the free end in the positive $z$ direction, a free body diagram of the beam gives us $M_{xx} = -P(L-x)$ and $Q_x = P$. Therefore, from the expressions for the bending moment and shear force, we have $EI\,\frac{\mathrm{d}\varphi}{\mathrm{d}x} = P(L-x)$ and $\kappa AG\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right) = P$. Integration of the first equation, and application of the boundary condition $\varphi = 0$ at $x = 0$, leads to $\varphi(x) = \frac{P}{EI}\left(Lx - \frac{x^2}{2}\right)$. The second equation can then be written as $\frac{\mathrm{d}w}{\mathrm{d}x} = \frac{P}{\kappa AG} + \frac{P}{EI}\left(Lx - \frac{x^2}{2}\right)$. Integration and application of the boundary condition $w = 0$ at $x = 0$ gives $w(x) = \frac{Px}{\kappa AG} + \frac{P}{EI}\left(\frac{Lx^2}{2} - \frac{x^3}{6}\right)$. The axial stress is given by $\sigma_{xx}(x,z) = E\,\varepsilon_{xx} = -Ez\,\frac{\mathrm{d}\varphi}{\mathrm{d}x} = -\frac{P(L-x)\,z}{I} = \frac{M_{xx}\,z}{I}$. 
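As a quick numerical check of the closed-form cantilever solution above, the following self-contained C++ sketch evaluates the tip deflection $w(L) = \frac{PL^3}{3EI} + \frac{PL}{\kappa AG}$ and reports how much of it comes from the Timoshenko shear term; all numerical inputs are assumed sample values for a steel beam of rectangular cross-section, not data taken from the article:

#include <cstdio>

int main() {
    // Assumed sample values: steel, rectangular cross-section (not from the article).
    const double E = 200e9;                   // elastic modulus [Pa]
    const double G = E / (2.0 * (1.0 + 0.3)); // shear modulus from E and Poisson's ratio 0.3 [Pa]
    const double b = 0.05, h = 0.1;           // section width and height [m]
    const double A = b * h;                   // cross-section area [m^2]
    const double I = b * h * h * h / 12.0;    // second moment of area [m^4]
    const double kappa = 5.0 / 6.0;           // Timoshenko shear coefficient for a rectangle
    const double P = 1.0e3;                   // tip load [N]

    for (double L : {0.2, 0.5, 1.0, 2.0}) {   // beam lengths [m]
        const double w_bending = P * L * L * L / (3.0 * E * I); // Euler-Bernoulli term
        const double w_shear   = P * L / (kappa * A * G);       // Timoshenko shear term
        std::printf("L = %.1f m: w(L) = %.4e m, shear share = %.1f%%\n",
                    L, w_bending + w_shear, 100.0 * w_shear / (w_bending + w_shear));
    }
    return 0;
}

Consistent with the discussion above, the shear contribution is significant for short, thick beams and becomes negligible as $\kappa AGL^2/(EI)$ grows.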
Example: Cantilever beam

For a cantilever beam, one boundary is clamped while the other is free. Let us use a right handed coordinate system where the $x$ direction is positive towards the right and the $z$ direction is positive upward. Following normal convention, we assume that positive forces act in the positive directions of the $x$ and $z$ axes and positive moments act in the clockwise direction. We also assume that the sign convention of the stress resultants ($M_{xx}$ and $Q_x$) is such that positive bending moments compress the material at the bottom of the beam (lower $z$ coordinates) and positive shear forces rotate the beam in a counterclockwise direction. Let us assume that the clamped end is at $x = L$ and the free end is at $x = 0$. If a point load $P$ is applied to the free end in the positive $z$ direction, a free body diagram of the beam gives us

$M_{xx} = -Px \qquad \text{and} \qquad Q_x = -P$

Therefore, from the expressions for the bending moment and shear force, we have

$Px = EI\,\frac{d\varphi}{dx} \qquad \text{and} \qquad -P = \kappa AG\left(-\varphi + \frac{dw}{dx}\right)$

Integration of the first equation, and application of the boundary condition $\varphi = 0$ at $x = L$, leads to

$\varphi(x) = -\frac{P}{2EI}\left(L^2 - x^2\right)$

The second equation can then be written as

$\frac{dw}{dx} = -\frac{P}{\kappa AG} - \frac{P}{2EI}\left(L^2 - x^2\right)$

Integration and application of the boundary condition $w = 0$ at $x = L$ gives

$w(x) = \frac{P(L - x)}{\kappa AG} + \frac{PL^3}{3EI} - \frac{Px}{2EI}\left(L^2 - \frac{x^2}{3}\right)$

The axial stress is given by

$\sigma_{xx}(x,z) = E\,\varepsilon_{xx} = -zE\,\frac{d\varphi}{dx} = -\frac{Pxz}{I} = \frac{M_{xx}\,z}{I}$

Dynamic Timoshenko beam

In Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by

$u_x(x,y,z,t) = -z\,\varphi(x,t), \quad u_y(x,y,z,t) = 0, \quad u_z(x,y,z,t) = w(x,t)$

where $(x,y,z)$ are the coordinates of a point in the beam, $u_x, u_y, u_z$ are the components of the displacement vector in the three coordinate directions, $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam, and $w$ is the displacement of the mid-surface in the $z$-direction. Starting from the above assumption, the Timoshenko beam theory, allowing for vibrations, may be described with the coupled linear partial differential equations:

$\rho A\,\frac{\partial^2 w}{\partial t^2} - q(x,t) = \frac{\partial}{\partial x}\left[\kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)\right]$

$\rho I\,\frac{\partial^2 \varphi}{\partial t^2} = \frac{\partial}{\partial x}\left(EI\,\frac{\partial \varphi}{\partial x}\right) + \kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)$

where the dependent variables are $w(x,t)$, the translational displacement of the beam, and $\varphi(x,t)$, the angular displacement. Note that unlike the Euler–Bernoulli theory, the angular deflection is another variable and not approximated by the slope of the deflection. Also,

$\rho$ is the density of the beam material (but not the linear density).
$A$ is the cross section area.
$E$ is the elastic modulus.
$G$ is the shear modulus.
$I$ is the second moment of area.
$\kappa$, called the Timoshenko shear coefficient, depends on the geometry. Normally, $\kappa = 5/6$ for a rectangular section.
$q(x,t)$ is a distributed load (force per length).
$w$ is the displacement of the mid-surface in the $z$-direction.
$\varphi$ is the angle of rotation of the normal to the mid-surface of the beam.

These parameters are not necessarily constants. For a linear elastic, isotropic, homogeneous beam of constant cross-section these two equations can be combined to give

$EI\,\frac{\partial^4 w}{\partial x^4} + \rho A\,\frac{\partial^2 w}{\partial t^2} - \rho I\left(1 + \frac{E}{\kappa G}\right)\frac{\partial^4 w}{\partial x^2\,\partial t^2} + \frac{\rho^2 I}{\kappa G}\,\frac{\partial^4 w}{\partial t^4} = q + \frac{\rho I}{\kappa AG}\,\frac{\partial^2 q}{\partial t^2} - \frac{EI}{\kappa AG}\,\frac{\partial^2 q}{\partial x^2}$

{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of combined Timoshenko beam equation
|-
|The equations governing the bending of a homogeneous Timoshenko beam of constant cross-section are

(1) $\rho A\,\frac{\partial^2 w}{\partial t^2} - q = \kappa AG\left(\frac{\partial^2 w}{\partial x^2} - \frac{\partial \varphi}{\partial x}\right)$

(2) $\rho I\,\frac{\partial^2 \varphi}{\partial t^2} = EI\,\frac{\partial^2 \varphi}{\partial x^2} + \kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)$

From equation (1), assuming appropriate smoothness, we have

(3) $\frac{\partial \varphi}{\partial x} = \frac{\partial^2 w}{\partial x^2} - \frac{1}{\kappa AG}\left(\rho A\,\frac{\partial^2 w}{\partial t^2} - q\right)$

Differentiating equation (2) with respect to $x$ gives

(4) $\rho I\,\frac{\partial^2}{\partial t^2}\left(\frac{\partial \varphi}{\partial x}\right) = EI\,\frac{\partial^2}{\partial x^2}\left(\frac{\partial \varphi}{\partial x}\right) + \kappa AG\left(\frac{\partial^2 w}{\partial x^2} - \frac{\partial \varphi}{\partial x}\right)$

Substituting equation (3) into equation (4), using equation (1) to replace the last term, and rearranging, we get

$EI\,\frac{\partial^4 w}{\partial x^4} + \rho A\,\frac{\partial^2 w}{\partial t^2} - \rho I\left(1 + \frac{E}{\kappa G}\right)\frac{\partial^4 w}{\partial x^2\,\partial t^2} + \frac{\rho^2 I}{\kappa G}\,\frac{\partial^4 w}{\partial t^4} = q + \frac{\rho I}{\kappa AG}\,\frac{\partial^2 q}{\partial t^2} - \frac{EI}{\kappa AG}\,\frac{\partial^2 q}{\partial x^2}$
|}

However, it can easily be shown that this equation is incorrect. Consider the case where $q$ is constant and does not depend on $x$ or $t$; combined with the presence of a small damping, all time derivatives will go to zero when $t$ goes to infinity. The shear terms are then not present, resulting in the Euler–Bernoulli beam theory, where shear deformation is neglected.

The Timoshenko equation predicts a critical frequency

$\omega_C = 2\pi f_c = \sqrt{\frac{\kappa G A}{\rho I}}$

For normal modes the Timoshenko equation can be solved. Being a fourth order equation, there are four independent solutions, two oscillatory and two evanescent for frequencies below $f_c$. For frequencies larger than $f_c$ all solutions are oscillatory and, as consequence, a second spectrum appears.
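The critical frequency above is straightforward to evaluate for a concrete section. A minimal sketch, assuming a steel rectangular beam (material values and dimensions are illustrative, not from the text):

```python
# Sketch: evaluate the Timoshenko critical frequency omega_c = sqrt(kappa G A / (rho I))
# for a steel rectangular beam. All numerical values are assumptions.
import math

E, nu, rho = 210e9, 0.3, 7850.0     # steel (assumed)
G = E / (2 * (1 + nu))
kappa = 5.0 / 6.0                   # rectangular section (assumed)
b, h = 0.05, 0.1
A, I = b * h, b * h**3 / 12

omega_c = math.sqrt(kappa * G * A / (rho * I))
print(f"omega_c = {omega_c:.3e} rad/s  ->  f_c = {omega_c / (2 * math.pi) / 1e3:.1f} kHz")
# Below f_c a Timoshenko beam has two oscillatory and two evanescent solutions;
# above f_c all four are oscillatory and the second spectrum appears.
```

For a rectangle $A/I = 12/h^2$, so $\omega_C = (1/h)\sqrt{12\kappa G/\rho}$: the cutoff rises as the beam gets thinner, pushing the second spectrum to higher frequencies.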
Axial effects

If the displacements of the beam are given by

$u_x(x,y,z,t) = u_0(x,t) - z\,\varphi(x,t), \quad u_y(x,y,z,t) = 0, \quad u_z(x,y,z,t) = w(x,t)$

where $u_0$ is an additional displacement in the $x$-direction, then the governing equations of a Timoshenko beam take the form

$\rho A\,\frac{\partial^2 w}{\partial t^2} = \frac{\partial}{\partial x}\left[\kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)\right] + \frac{\partial}{\partial x}\left(N\,\frac{\partial w}{\partial x}\right) + q$

$\rho I\,\frac{\partial^2 \varphi}{\partial t^2} = \frac{\partial}{\partial x}\left(EI\,\frac{\partial \varphi}{\partial x}\right) + \kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)$

where $N(x,t)$ is an externally applied axial force. Any external axial force is balanced by the stress resultant

$N(x,t) = \int_{-h}^{h} \sigma_{xx}\,dz$

where $\sigma_{xx}$ is the axial stress and the thickness of the beam has been assumed to be $2h$. The combined beam equation with axial force effects included follows from the same elimination of $\varphi$ as in the unforced case, and acquires additional terms proportional to $N$.

Damping

If, in addition to axial forces, we assume a damping force that is proportional to the velocity with the form

$\eta(x)\,\frac{\partial w}{\partial t}$

the coupled governing equations for a Timoshenko beam take the form

$\rho A\,\frac{\partial^2 w}{\partial t^2} + \eta(x)\,\frac{\partial w}{\partial t} = \frac{\partial}{\partial x}\left[\kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)\right] + \frac{\partial}{\partial x}\left(N\,\frac{\partial w}{\partial x}\right) + q$

$\rho I\,\frac{\partial^2 \varphi}{\partial t^2} = \frac{\partial}{\partial x}\left(EI\,\frac{\partial \varphi}{\partial x}\right) + \kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)$

and the two equations can again be combined into a single equation for $w$, which now also contains terms in $\eta\,\partial w/\partial t$. A caveat to this Ansatz damping force (resembling viscosity) is that, whereas viscosity leads to a frequency-dependent and amplitude-independent damping rate of beam oscillations, the empirically measured damping rates are frequency-insensitive, but depend on the amplitude of beam deflection.

Shear coefficient

Determining the shear coefficient is not straightforward, nor are the determined values widely accepted; there is more than one answer. The shear coefficient depends on Poisson's ratio. Attempts to provide precise expressions were made by many scientists, including Stephen Timoshenko, Raymond D. Mindlin, G. R. Cowper, N. G. Stephen, and J. R. Hutchinson (see also the derivation of the Timoshenko beam theory as a refined beam theory based on the variational-asymptotic method in the book by Khanh C. Le, leading to different shear coefficients in the static and dynamic cases). In engineering practice, the expressions by Stephen Timoshenko are sufficient in most cases. In 1975 Kaneko published a review of studies of the shear coefficient. More recent experimental data show that the shear coefficient is underestimated.

Corrective shear coefficients for a homogeneous isotropic beam according to Cowper (selection):
Rectangular cross-section: $\kappa = \frac{10(1+\nu)}{12+11\nu}$
Circular cross-section: $\kappa = \frac{6(1+\nu)}{7+6\nu}$
Thin-walled circular tube: $\kappa = \frac{2(1+\nu)}{4+3\nu}$
where $\nu$ is Poisson's ratio.

See also
Plate theory
Sandwich theory

References

Beam theory
Continuum mechanics
Structural analysis
Timoshenko–Ehrenfest beam theory
[ "Physics", "Engineering" ]
2,125
[ "Structural engineering", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering" ]
13,463,844
https://en.wikipedia.org/wiki/Richardson%27s%20theorem
In mathematics, Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers, π, and exponential and sine functions. It was proved in 1968 by the mathematician and computer scientist Daniel Richardson of the University of Bath. Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number ln 2, the variable x, the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions. For some classes of expressions generated by primitives other than those in Richardson's theorem, there exist algorithms that can determine whether an expression is zero.

Statement of the theorem

Richardson's theorem can be stated as follows: Let E be a set of expressions that represent functions. Suppose that E includes these expressions:
x (representing the identity function)
e^x (representing the exponential function)
sin x (representing the sin function)
all rational numbers, ln 2, and π (representing constant functions that ignore their input and produce the given number as output)

Suppose E is also closed under a few standard operations. Specifically, suppose that if A and B are in E, then all of the following are also in E:
A + B (representing the pointwise addition of the functions that A and B represent)
A − B (representing pointwise subtraction)
AB (representing pointwise multiplication)
A∘B (representing the composition of the functions represented by A and B)

Then the following decision problems are unsolvable:
Deciding whether an expression A in E represents a function that is nonnegative everywhere
If E includes also the expression |x| (representing the absolute value function), deciding whether an expression A in E represents a function that is zero everywhere
If E includes an expression B representing a function whose antiderivative has no representative in E, deciding whether an expression A in E represents a function whose antiderivative can be represented in E. (Example: $e^{ax^2}$ has an antiderivative in the elementary functions if and only if $a = 0$.)

Extensions

After Hilbert's tenth problem was solved in 1970, B. F. Caviness observed that the use of e^x and ln 2 could be removed. Wang later noted that under the same assumptions under which the question of whether there was x with A(x) < 0 was insolvable, the question of whether there was x with A(x) = 0 was also insolvable. Miklós Laczkovich removed also the need for π and reduced the use of composition. In particular, given an expression A(x) in the ring generated by the integers, x, sin(x^n), and sin(x sin(x^n)) (for n ranging over positive integers), both the question of whether A(x) > 0 for some x and whether A(x) = 0 for some x are unsolvable. By contrast, the Tarski–Seidenberg theorem says that the first-order theory of the real field is decidable, so it is not possible to remove the sine function entirely.

See also

References

Further reading

External links

Undecidable problems
Functions and mappings
Theorems in the foundations of mathematics
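In practice, computer algebra systems attack the zero-equivalence problem heuristically rather than exactly. The sketch below uses SymPy's simplify as such a heuristic; the example identities are illustrative choices, and — consistent with the theorem — a failure of simplify to return 0 proves nothing either way.

```python
# Sketch: heuristic zero-testing with SymPy. Richardson's theorem shows that
# no algorithm decides zero-equivalence for the full expression class (with
# abs); simplify() is a best-effort heuristic that works on easy examples.
import sympy as sp

x = sp.symbols('x', real=True)

easy = sp.sin(x)**2 + sp.cos(x)**2 - 1          # identically zero
print(sp.simplify(easy))                         # -> 0

harder = sp.sin(2*x) - 2*sp.sin(x)*sp.cos(x)     # also identically zero
print(sp.simplify(harder))                       # -> 0

# For expressions mixing pi, abs, sin and exp as in Richardson's class, an
# unevaluated result from simplify() does not tell us whether the
# expression is zero.
```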
Richardson's theorem
[ "Mathematics" ]
656
[ "Functions and mappings", "Mathematical analysis", "Foundations of mathematics", "Mathematical logic", "Mathematical objects", "Computational problems", "Mathematical relations", "Undecidable problems", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics...
13,464,118
https://en.wikipedia.org/wiki/Gary%20Taubes
Gary Taubes (born April 30, 1956) is an American journalist, writer, and low-carbohydrate / high-fat (LCHF) diet advocate. His central claim is that carbohydrates, especially sugar and high-fructose corn syrup, overstimulate the secretion of insulin, causing the body to store fat in fat cells and the liver, and that it is primarily a high level of dietary carbohydrate consumption that accounts for obesity and other metabolic syndrome conditions. He is the author of Nobel Dreams (1987); Bad Science: The Short Life and Weird Times of Cold Fusion (1993); Good Calories, Bad Calories (2007), titled The Diet Delusion (2008) in the UK and Australia; Why We Get Fat: And What to Do About It (2010); The Case Against Sugar (2016); and The Case for Keto: Rethinking Weight Control and the Science and Practice of Low-Carb/High-Fat Eating (2020). Taubes's work often goes against accepted scientific, governmental, and popular tenets such as that obesity is caused by eating too much and exercising too little and that excessive consumption of fat, especially saturated fat in animal products, leads to cardiovascular disease. Biography Born in Rochester, New York, Taubes studied physics at Harvard University (BS, 1977) and aerospace engineering at Stanford University (MS, 1978). After receiving a master's degree in journalism at Columbia University in 1981, Taubes joined Discover magazine as a staff reporter in 1982. Since then he has written numerous articles for Discover, Science and other magazines. Originally focusing on physics issues, his interests have more recently turned to medicine and nutrition. His brother, Clifford Henry Taubes, is the William Petschek Professor of Mathematics at Harvard University. Scientific controversies Taubes' books have all dealt with scientific controversies. Nobel Dreams takes a critical look at the politics and experimental techniques behind the Nobel Prize-winning work of physicist Carlo Rubbia. In Bad Science: The Short Life and Weird Times of Cold Fusion, he chronicles the short-lived media frenzy surrounding the Pons–Fleischmann cold fusion experiments of 1989. He opines in the book that heat generation in the experiments of Drs. Martin Fleischmann and Stanley Pons was due entirely to a difference in the ionic conductivity of deuterated salt solutions compared to normal aqueous solutions. He also formulated an allegation of fraud regarding the results from John Bockris's research group. Diet advocacy Taubes gained prominence in the low-carb diet debate following the publication of his 2002 New York Times Magazine piece "What if It's All Been a Big Fat Lie?". The article, which questioned the efficacy and health benefits of low-fat diets, was seen as defending the Atkins diet against the medical establishment, and it became extremely controversial. Some scholars interviewed for the article complained that Taubes misinterpreted their words or treated them out of context. Taubes himself stated: "[E]ven though I knew the article would be the most controversial article the Times Magazine ran all year, [the reaction] still shocked me." The Center for Science in the Public Interest published a rebuttal to the Times article in its November 2002 newsletter. Cardiologist John W. Farquhar commented that "Gary Taubes tricked us all into coming across as supporters of the Atkins diet." Taubes is an advocate of eating beef. Beef industry leader Amanda Radke has written in Beef Daily that "Today's best beef advocates wear a variety of hats [...]
like Nina Teicholz or Gary Taubes who turn against conventional health advice to promote diets rich in animal fats and proteins". Good Calories, Bad Calories In 2007, Taubes published his book Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease (published as The Diet Delusion in the UK). This book proposes that a hypothesis — that dietary fat is the cause of obesity and heart disease — became dogma, and claims to show how the scientific method was circumvented so a contestable hypothesis could remain unchallenged. The book uses data and studies compiled from more than a century of dietary research to support what Taubes calls "the alternative hypothesis." Taubes' argument is that the medical community and the U.S. federal government have relied upon misinterpreted scientific data on nutrition to build the prevailing paradigm about what constitutes healthful eating. Taubes argues that — contrary to conventional nutritional science — it is a carbohydrate-laced diet, augmented with sugar, that leads to heart disease, type 2 diabetes, obesity, cancer, and other "maladies of civilization." In the Epilogue to Good Calories, Bad Calories on page 454, Taubes sets out ten "inescapable" conclusions, the first of which is, "Dietary fat, whether saturated or not, is not a cause of obesity, heart disease, or any other chronic disease of civilization." Reviewing Good Calories, Bad Calories, obesity researcher George A. Bray wrote that the book "...has much useful information and is well worth reading" but that "obese people clearly eat more than do lean ones" and that "some of the conclusions that the author reaches are not consistent with current concepts about obesity." In 2007, New York Times science writer John Tierney cited Taubes's book Good Calories, Bad Calories and discussed information cascades and the role of physiologist Ancel Keys in widely held beliefs related to diet and fat. Tierney follows Taubes in noting that a 2001 Cochrane meta-analysis of low-fat diets found that they had "no significant effect on mortality". Harriet A. Hall, however, has criticized Taubes for selectively quoting the meta-analysis, and, writing for Science-Based Medicine, states that although it is possible some of Taubes' hypotheses may be borne out by subsequent evidence, his idea that carbohydrate restriction can lead to weight loss independently of calorie restriction is "simply wrong". The Case Against Sugar Taubes authored The Case Against Sugar in 2016. The book argues that sugar is an addictive drug and is the cause of obesity and many health-related problems. It was positively reviewed by chef and food-writer Dan Barber, who described Taubes's writing as "inflammatory and copiously researched". Food journalist Joanna Blythman also praised the book, noting "his clear and persuasive argument that obesity is a hormonal disorder, switched on by sugar, is one that urgently needs wider airing." Harriet Hall, who is known as a skeptic in the medical community, wrote that Taubes made a compelling case against sugar but the evidence was inconclusive. C. Albert Yeung in the Journal of Public Health described the book as very informative but insufficient to draw any conclusion and a "polemic, not a balanced scientific review." NuSI In September 2012, Taubes and Peter Attia launched the Nutrition Science Initiative (NuSI), a nonprofit organization they described as "a Manhattan Project-like effort to solve" the problem of obesity.
The project set out to validate the "carbohydrate-insulin hypothesis", a model by which carbohydrate is proposed to be uniquely fattening because of its influence on insulin levels. A pilot study funded by NuSI was conducted in 2014 by a team led by NIH researcher Kevin Hall, and produced evidence which did not support the hypothesis. In 2017, Kevin Hall wrote that the hypothesis had been falsified by experiment. Not long after the completion of that study, NuSI was confronted with a number of issues. It lost a significant source of funding, and co-founder Peter Attia left the organization. In 2018, NuSI was described as having "two part-time employees and an unpaid volunteer hanging around". Awards Taubes has won the Science in Society Journalism Award of the National Association of Science Writers three times and was awarded an MIT Knight Science Journalism Fellowship for 1996–97. He is a Robert Wood Johnson Foundation independent investigator in health policy. Selected bibliography (Also published as The Diet Delusion) References External links 1956 births Living people American nutritionists American science writers Cold fusion Columbia University Graduate School of Journalism alumni Harvard John A. Paulson School of Engineering and Applied Sciences alumni Low-carbohydrate diet advocates Stanford University alumni Writers from Rochester, New York 20th-century American Jews 21st-century American Jews Discover (magazine) people
Gary Taubes
[ "Physics", "Chemistry" ]
1,789
[ "Nuclear fusion", "Cold fusion", "Nuclear physics" ]
13,464,520
https://en.wikipedia.org/wiki/Husky%20VMMD
The Husky VMMD (Vehicle-Mounted Mine Detection) is a configurable counter-IED MRAP (Mine-Resistant Ambush Protected) vehicle, developed by South African-based DCD Protected Mobility and American C-IED company Critical Solutions International. Designed for use in route clearance and de-mining operations, the Husky is equipped with technologies to help detect explosives and minimise blast damage. The Husky VMMD helps operators detect land mines and improvised explosive devices (IEDs) using basic sensor equipment and imaging systems. The Husky is equipped with countermeasures such as jamming systems to help disrupt the effect of IEDs. The Husky's armour is also able to withstand damage from basic explosives. Development The Husky traces its lineage to the Pookie, a Rhodesian mine clearance vehicle. Originally used as the lead element of a mine removal convoy, the Husky was employed as part of the Chubby mine detection system. The early Chubby system comprised a lead detection vehicle (the Meerkat), a second proofing vehicle (the Husky) towing a mine detonation trailer, and a third vehicle carrying spare parts for expedient blast repair. The Husky was initially deployed in the 1970s. During the South African Border War, the South African Defence Force used the Husky extensively to clear mines from military convoy routes in Namibia and Angola. In the mid-1990s, DCD Group and Critical Solutions International planned to bring the technology to the U.S. and underwent a two-year foreign comparative test program with the United States Department of Defense and follow-on modifications and testing. In 1997, CSI was directed to produce and deliver production systems under the U.S. Army Interim Vehicle Mounted Mine Detection Program. Over the next twenty years, the Husky underwent several iterations and upgrades. U.S. military clearance units currently train on and employ Husky vehicles as detection assets and clearance vehicles. Design The Husky is part of a class of MRAP vehicles developed from South African blast protection designs. The sharp V-hull of the Husky helps reduce blast effects by increasing ground clearance and standoff from the blast, increasing structural hull rigidity, and diverting blast energy and fragmentation away from the platform and its occupants. The Husky is designed to break apart in a blast event, allowing energy to transfer to the detachable front and rear modules rather than the critical components of the vehicle or the occupants located in the cab. Its three main components (a center cab with front and rear wheel modules) are connected by shear pins. Critical components are engineered to break apart predictably, helping to prevent catastrophic damage and enabling users to quickly replace modules on site. This approach increases the lifespan of the vehicle and limits the need for recovery teams to evacuate the vehicle to maintenance facilities. The cabin of the Husky is fitted with bulletproof glass windows. There is an entry hatch on the roof. The Husky Mk III and 2G are powered by a Mercedes-Benz OM906LA turbo diesel engine coupled with an Allison Transmission 2500 SP 5-speed automatic transmission. It can reach a maximum speed of 72 km/h, and has a range of 350 km. Variants Husky Mk I First Husky production model. Replaced by Husky Mk II. Husky Mk II Second Husky model. Replaced by Husky Mk III. Husky Mk III Modern single-occupant Husky model.
The platform is integrated with pulse induction metal detector panels and overpass tires that enable operators to regulate tire air pressure in order to reduce the risk of initiating land mines without causing detonation. The Mk III, like other Husky models, is engineered in a modular, frangible configuration.

Husky 2G
Type: Mine clearance vehicle
Manufacturer: DCD Protected Mobility
Crew: Two
Operating weight: 9,200 kg

Husky 2G is a two-seat variant of the Husky Mk III vehicle mounted mine detector (VMMD) designed and manufactured by South African firm DCD Protected Mobility (DCD PM). Equipped with a number of sensors, the vehicle is ideally suited for mine-clearing operations including detection, identification and destruction of improvised explosive devices (IED), landmines and other explosive materials. Development of the Husky 2G was prompted by the need to conduct longer missions and employ multiple detection systems. The Husky 2G was designed with added high sensitivity detectors, ground-penetrating radar, video optics suites, and remote weapon stations. These additional components required a second operator to manage the additional workload, hence the required two occupants. Equipment The Husky is capable of carrying the following equipment and payloads: Autonomous vehicle upgrades Rocket-propelled grenade armor and netting Smoke grenade launchers Electronic countermeasures Remote weapon station Metal detectors Ground-penetrating radar Nonlinear junction detectors Gunfire detectors Robotic arms Blowers Water diggers Thermal cameras Optics suite Mine-clearing line charges Mine rollers Rhino Passive Infrared Defeat System Mine plows Proofing rollers Electrostatic discharge Red Pack repair kit Operators Husky Mk III United States Army United States Marine Corps Canadian Army Australian Army South African Defence Force Kenyan Army Husky 2G Islamic Republic of Iran Army Iraqi Army Turkish Army Spanish Army Royal Saudi Land Forces Egyptian Army Jordanian Army Latvian Army United States Army (limited fielding in support of Operation Enduring Freedom) Recognitions The Husky was listed on the U.S. Army’s Top Ten inventions of 2010. References External links Critical Solutions International (CSI) Soldier Armed magazine article Military engineering vehicles Cold War military equipment of South Africa Mine warfare countermeasures Military vehicles of the United States Military vehicles of South Africa Bomb disposal Military vehicles introduced in the 1970s
Husky VMMD
[ "Chemistry", "Engineering" ]
1,127
[ "Explosion protection", "Military engineering", "Military engineering vehicles", "Bomb disposal", "Engineering vehicles" ]
13,467,020
https://en.wikipedia.org/wiki/WiMAX%20MIMO
WiMAX MIMO refers to the use of Multiple-input multiple-output communications (MIMO) technology on WiMAX, which is the technology brand name for the implementation of the standard IEEE 802.16. Background WiMAX WiMAX is the technology brand name for the implementation of the standard IEEE 802.16, which specifies the air interface at the PHY (Physical layer) and at the MAC (Medium Access Control layer). Aside from specifying the support of various channel bandwidths and adaptive modulation and coding, it also specifies the support for MIMO antennas to provide good Non-line-of-sight (NLOS) characteristics. See also: WiMAX Forum MIMO MIMO stands for Multiple Input and Multiple Output, and refers to the technology where there are multiple antennas at the base station and multiple antennas at the mobile device. Typical usage of multiple antenna technology includes cellular phones with two antennas, laptops with two antennas (e.g. built in the left and right side of the screen), as well as CPE devices with multiple sprouting antennas. The predominant cellular network implementation is to have multiple antennas at the base station and a single antenna on the mobile device. This minimizes the cost of the mobile radio. As the costs for radio frequency (RF) components in mobile devices go down, second antennas in mobile devices may become more common. Multiple mobile device antennas are currently used in Wi-Fi technology (e.g. IEEE 802.11n), where WiFi-enabled cellular phones, laptops and other devices often have two or more antennas. MIMO Technology in WiMAX WiMAX implementations that use MIMO technology have become important. The use of MIMO technology improves the reception and allows for a better reach and rate of transmission. The implementation of MIMO also gives WiMAX a significant increase in spectral efficiency. MIMO auto-negotiation The 802.16-defined MIMO configuration is negotiated dynamically between each individual base station and mobile station. The 802.16 specification supports the ability to support a mix of mobile stations with different MIMO capabilities. This helps to maximize the sector throughput by leveraging the different capabilities of a diverse set of vendor mobile stations. Space Time Code The 802.16 specification supports the Multiple-input and single-output (MISO) technique of Transmit Diversity, which is commonly referred to as Space Time Code (STC). With this method, two or more antennas are employed at the transmitter and one antenna at the receiver. The use of multiple receive antennas (thus MIMO) can further improve the reception of STC transmitted signals. With a Transmit Diversity rate = 1 (a.k.a. "Matrix A" in the 802.16 standard), different data bit constellations are transferred on two different antennas during the same symbol. The conjugate and/or inverse of the same two constellations are transferred again on the same antennas during the next symbol. The data transfer rate with STC remains the same as the baseline case. The received signal is more robust with this method due to the transmission redundancy. This configuration delivers similar performance to the case of two receive antennas and one transmit antenna.
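The rate-1 scheme just described follows the classic Alamouti space-time block code. Below is a minimal simulation sketch of a 2×1 Alamouti link — an illustration of the principle, not of the 802.16 frame structure; the QPSK constellation, Rayleigh channel model, and SNR are assumptions.

```python
# Sketch: Alamouti space-time block code (the principle behind "Matrix A"),
# 2 TX antennas, 1 RX antenna, flat Rayleigh fading. Assumed toy parameters.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000
snr_db = 10.0
noise_std = 10 ** (-snr_db / 20)

# QPSK symbols, unit energy
bits = rng.integers(0, 2, size=(n_pairs, 4))
s1 = ((2*bits[:, 0]-1) + 1j*(2*bits[:, 1]-1)) / np.sqrt(2)
s2 = ((2*bits[:, 2]-1) + 1j*(2*bits[:, 3]-1)) / np.sqrt(2)

# One complex channel gain per TX antenna, constant over two symbol times
h1 = (rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs)) / np.sqrt(2)
h2 = (rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs)) / np.sqrt(2)

n1 = noise_std * (rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs)) / np.sqrt(2)
n2 = noise_std * (rng.standard_normal(n_pairs) + 1j*rng.standard_normal(n_pairs)) / np.sqrt(2)

# Time 1: antennas send (s1, s2); time 2: (-conj(s2), conj(s1))
r1 = h1*s1 + h2*s2 + n1
r2 = -h1*np.conj(s2) + h2*np.conj(s1) + n2

# Linear combining turns the channel into (|h1|^2 + |h2|^2) * s_k
s1_hat = np.conj(h1)*r1 + h2*np.conj(r2)
s2_hat = np.conj(h2)*r1 - h1*np.conj(r2)

def detect(s):
    # hard QPSK decisions back to bits
    return np.stack([(s.real > 0), (s.imag > 0)], axis=1).astype(int)

rx_bits = np.concatenate([detect(s1_hat), detect(s2_hat)], axis=1)
ber = np.mean(rx_bits != bits)
print(f"BER at {snr_db} dB SNR with 2x1 Alamouti: {ber:.4f}")
```

The combining step is what delivers the diversity gain described above: the two-antenna redundancy collapses into a single effective channel of gain |h1|² + |h2|², with no loss of data rate.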
Spatial Multiplexing The 802.16 specification also supports the MIMO technique of Spatial Multiplexing (SMX), also known as Transmit Diversity rate = 2 (a.k.a. "Matrix B" in the 802.16 standard). Instead of transmitting the same bit over two antennas, this method transmits one data bit from the first antenna, and another bit from the second antenna simultaneously, per symbol. As long as the receiver has more than one antenna and the signal is of sufficient quality, the receiver can separate the signals. This method involves added complexity and expense at both the transmitter and receiver. However, with two transmit antennas and two receive antennas, data can be transmitted twice as fast as compared with systems using Space Time Codes with only one receive antenna. WiMAX Network use of Spatial Multiplexing One specific use of Spatial Multiplexing is to apply it to users who have the best signal quality, so that less time is spent transmitting to them. Users whose signal quality is too low to allow the spatially multiplexed signals to be resolved stay with conventional transmission. This allows an operator to offer higher data rates to some users and/or to serve more users. The WiMAX specification's dynamic negotiation mechanism helps enable this use. WiMAX MISO/MIMO with four antennas The 802.16 specification also supports the use of four antennas. Three configurations are supported. WiMAX four antenna mode 1 With rate = 1 using four antennas, data is transmitted four times per symbol, where each time the data is conjugated and/or inverted. This does not change the data rate, but does give the signal more robustness and avoids sudden increases in error rates. WiMAX four antenna mode 2 With rate = 2 using four antennas, the data rate is only doubled, but robustness increases since the same data is transmitted twice, as compared to only once when using two antennas. WiMAX four antenna Matrix C mode The third configuration that is only available using four antennas is Matrix C, where a different data bit is transmitted from each of the four antennas per symbol, which gives it four times the baseline data rate. Note: MRC (Maximum Ratio Combining) is vendor discretionary and improves rate and range. In WiMAX, MRC at the Base Station is sometimes also referred to as Receive Beamforming. See also: Space Time Coding and Spatial Multiplexing Other advanced MIMO techniques applied to WiMAX Uplink Collaborative MIMO A related technique is called Uplink Collaborative MIMO, where users transmit at the same time in the same frequency. This type of spatial multiplexing improves the sector throughput without requiring multiple transmit antennas at the mobile device. The common non-MIMO method for this in OFDMA is by scheduling different mobile stations at different points in an OFDMA time-frequency map. Collaborative Spatial Multiplexing (Collaborative MIMO) is comparable to regular spatial multiplexing, where multiple data streams are transmitted from multiple antennas on the same device. WiMAX Uplink Collaborative MIMO In the case of WiMAX, Uplink Collaborative MIMO is spatial multiplexing with two different devices, each with one antenna. These transmitting devices are collaborating in the sense that both devices must be synchronized in time and frequency so that the intentional overlapping occurs under controlled circumstances. The two streams of data will then interfere with each other. As long as the signal quality is sufficiently good and the receiver at the base station has at least two antennas, the two data streams can be separated again. This technique is sometimes also termed Virtual Spatial Multiplexing.
Other MIMO-related radio techniques applied to WiMAX Adaptive Antenna Steering (AAS), a.k.a. Beamforming A MIMO-related technique that can be used with WiMAX is called AAS or Beamforming. Multiple antennas and multiple signals are employed, which then shape the beam with the intent of improving transmission to the desired station. The result is reduced interference because the signal going to the desired user is increased and the signal going to other users is reduced. Cyclic Delay Diversity Another MIMO-related technique that can be used in WiMAX systems, but which is outside of the scope of the 802.16 specification, is known as Cyclic Delay Diversity. In this technique, one or more of the signals are delayed before transmission. Because the signals are coming out of two antennas, their received spectra differ, as each spectrum is characterized by humps and notches due to multi-path fading. At the receiver the signals combine, which improves reception because the joint reception results in shallower spectral humps and fewer spectral notches. The closer the signal can get towards a flat channel at a certain power level, the higher the throughput that can be obtained. Radio Conformance Test of WiMAX MIMO The WiMAX Forum has a set of standardized conformance test procedures for PHY and MAC specification compliance called the Radio Conformance Test (RCT). Any technology aspect of a particular implementation of a radio interface must first undergo the RCT. Generally, any aspect of the IEEE 802.16 standard that does not have a test procedure in the RCT may be assumed to not yet be widely implemented. Silicon implementations of WiMAX MIMO Companies that make RFICs that support WiMAX MIMO include Intel, Beceem, NXP Semiconductors and PMC-Sierra. See also Advanced MIMO communications IEEE 802.16 Integrated Circuit Design MIMO OFDM WiMAX Wi-Fi References Louay M.A. Jalloul and Sam. P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", Presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: http://chapters.comsoc.org/comsig/meet.html Alex, S.P.; Jalloul, L.M.A.; "Performance Evaluation of MIMO in IEEE802.16e/WiMAX," IEEE Journal of Selected Topics in Signal Processing, vol.2, no.2, pp. 181–190, April 2008 External links The WiMAX Forum IEEE website for 802.16 PMC-Sierra WiMAX Products WiMAX Evolution: Emerging Technologies and Applications, edited by M. Katz and F. Fitzek, 2009. Chapter 16, MIMO Technologies for WiMAX Systems: Present and Future, by C.-B. Chae, K. Huang, and T. Inoue GEDOMIS (GEneric hardware DemOnstrator for MIMO Systems): PHY-layer implementation of MIMO mobile WiMAX Network access WiMAX
WiMAX MIMO
[ "Technology", "Engineering" ]
1,986
[ "Electronic engineering", "WiMAX", "Wireless networking", "Network access" ]
3,057,518
https://en.wikipedia.org/wiki/Backward-wave%20oscillator
A backward wave oscillator (BWO), also called carcinotron or backward wave tube, is a vacuum tube that is used to generate microwaves up to the terahertz range. Belonging to the traveling-wave tube family, it is an oscillator with a wide electronic tuning range. An electron gun generates an electron beam that interacts with a slow-wave structure. It sustains the oscillations by propagating a traveling wave backwards against the beam. The generated electromagnetic wave power has its group velocity directed oppositely to the direction of motion of the electrons. The output power is coupled out near the electron gun. It has two main subtypes, the M-type (M-BWO), the most powerful, and the O-type (O-BWO). The output power of the O-type is typically in the range of 1 mW at 1000 GHz to 50 mW at 200 GHz. Carcinotrons are used as powerful and stable microwave sources. Due to the good quality wavefront they produce (see below), they find use as illuminators in terahertz imaging. Backward wave oscillators were first demonstrated in 1951, the M-type by Bernard Epsztein and the O-type by Rudolf Kompfner. The M-type BWO is a voltage-controlled non-resonant extrapolation of magnetron interaction. Both types are tunable over a wide range of frequencies by varying the accelerating voltage. They can be swept through the band fast enough to appear to radiate over the whole band at once, which makes them suitable for effective radar jamming, quickly tuning into the radar frequency. Carcinotrons allowed airborne radar jammers to be highly effective. However, frequency-agile radars can hop frequencies fast enough to force the jammer to use barrage jamming, diluting its output power over a wide band and significantly impairing its efficiency. Carcinotrons are used in research, civilian and military applications. For example, the Czechoslovak Kopac passive sensor and Ramona passive sensor air defense detection systems employed carcinotrons in their receiver systems. Basic concept All travelling-wave tubes operate in the same general fashion, and differ primarily in details of their construction. The concept is dependent on a steady stream of electrons from an electron gun that travel down the center of the tube. Surrounding the electron beam is some sort of radio frequency source signal; in the case of the traditional klystron this is a resonant cavity fed with an external signal, whereas in more modern devices there are a series of these cavities or a helical metal wire fed with the same signal. As the electrons travel down the tube, they interact with the RF signal. The electrons are attracted to areas with maximum positive bias and repelled from negative areas. This causes the electrons to bunch up as they are repelled or attracted along the length of the tube, a process known as velocity modulation. This process makes the electron beam take on the same general structure as the original signal; the density of the electrons in the beam matches the relative amplitude of the RF signal in the induction system. The electron current is a function of the details of the gun, and is generally orders of magnitude more powerful than the input RF signal. The result is a signal in the electron beam that is an amplified version of the original RF signal. As the electrons are moving, they induce a magnetic field in any nearby conductor. This allows the now-amplified signal to be extracted. In systems like the magnetron or klystron, this is accomplished with another resonant cavity.
In the helical designs, this process occurs along the entire length of the tube, reinforcing the original signal in the helical conductor. The "problem" with traditional designs is that they have relatively narrow bandwidths; designs based on resonators will work with signals within 10% or 20% of their design, as this is physically built into the resonator design, while the helix designs have a much wider bandwidth, perhaps 100% on either side of the design peak. BWO The BWO is built in a fashion similar to the helical TWT. However, instead of the RF signal propagating in the same (or similar) direction as the electron beam, the original signal travels at right angles to the beam. This is normally accomplished by drilling a hole through a rectangular waveguide and shooting the beam through the hole. The waveguide then goes through two right angle turns, forming a C-shape and crossing the beam again. This basic pattern is repeated along the length of the tube so the waveguide passes across the beam several times, forming a series of S-shapes. The original RF signal enters from what would be the far end of the TWT, where the energy would be extracted. The effect of the signal on the passing beam causes the same velocity modulation effect, but because of the direction of the RF signal and specifics of the waveguide, this modulation travels backward along the beam, instead of forward. This propagation, the slow-wave, reaches the next hole in the folded waveguide just as the same phase of the RF signal does. This causes amplification just like the traditional TWT. In a traditional TWT, the speed of propagation of the signal in the induction system has to be similar to that of the electrons in the beam. This is required so that the phase of the signal lines up with the bunched electrons as they pass the inductors. This places limits on the selection of wavelengths the device can amplify, based on the physical construction of the wires or resonant chambers. This is not the case in the BWO, where the electrons pass the signal at right angles and their speed of propagation is independent of that of the input signal. The complex serpentine waveguide places strict limits on the bandwidth of the input signal, such that a standing wave is formed within the guide. But the velocity of the electrons is limited only by the allowable voltages applied to the electron gun, which can be easily and rapidly changed. Thus the BWO takes a single input frequency and produces a wide range of output frequencies. Carcinotron The device was originally given the name "carcinotron", after the Greek name for the crayfish, which swim backwards. By simply changing the supply voltage, the device could produce any required frequency across a band that was much larger than any existing microwave amplifier could match - the cavity magnetron worked at a single frequency defined by the physical dimensions of their resonators, and while the klystron amplified an external signal, it only did so efficiently within a small range of frequencies. Previously, jamming a radar was a complex and time-consuming operation. Operators had to listen for potential frequencies being used, set up one of a bank of amplifiers on that frequency, and then begin broadcasting. When the radar station realized what was happening, they would change their frequencies and the process would begin again. 
In contrast, the carcinotron could sweep through all the possible frequencies so rapidly that it appeared to be a constant signal on all of the frequencies at once. Typical designs could generate hundreds or low thousands of watts, so at any one frequency, there might be a few watts of power that is received by the radar station. However, at long range the amount of energy from the original radar broadcast that reaches the aircraft is only a few watts at most, so the carcinotron can overpower them. The system was so powerful that it was found that a carcinotron operating on an aircraft would begin to be effective even before it rose above the radar horizon. As it swept through the frequencies it would broadcast on the radar's operating frequency at what were effectively random times, filling the display with random dots any time the antenna was pointed near it, perhaps 3 degrees on either side of the target. There were so many dots that the display simply filled with white noise in that area. As it approached the station, the signal would also begin to appear in the antenna's sidelobes, creating further areas that were blanked out by noise. At close range, the entire radar display would be completely filled with noise, rendering it useless. The concept was so powerful as a jammer that there were serious concerns that ground-based radars were obsolete. Airborne radars had the advantage that they could approach the aircraft carrying the jammer, and, eventually, the huge output from their transmitter would "burn through" the jamming. However, interceptors of the era relied on ground direction to get into range, using ground-based radars. This represented an enormous threat to air defense operations. For ground radars, the threat was eventually solved in two ways. The first was that radars were upgraded to operate on many different frequencies and switch among them randomly from pulse to pulse, a concept now known as frequency agility. Some of these frequencies were never used in peacetime, and highly secret, with the hope that they would not be known to the jammer in wartime. The carcinotron could still sweep through the entire band, but then it would be broadcasting on the same frequency as the radar only at random times, reducing its effectiveness. The other solution was to add passive receivers that triangulated on the carcinotron broadcasts, allowing the ground stations to produce accurate tracking information on the location of the jammer and allowing them to be attacked. The slow-wave structure The needed slow-wave structures must support a radio frequency (RF) electric field with a longitudinal component; the structures are periodic in the direction of the beam and behave like microwave filters with passbands and stopbands. Due to the periodicity of the geometry, the fields are identical from cell to cell except for a constant phase shift Φ. This phase shift, a purely real number in a passband of a lossless structure, varies with frequency. According to Floquet's theorem (see Floquet theory), the RF electric field E(z,t) can be described at an angular frequency ω by a sum of an infinity of "spatial or space harmonics" En, where the wave number or propagation constant kn of each harmonic is expressed as

$k_n = \frac{\Phi + 2n\pi}{p} \qquad (-\pi < \Phi < +\pi)$

with $z$ being the direction of propagation, $p$ the pitch of the circuit and $n$ an integer.
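A minimal numeric illustration of this expansion: the sketch below tabulates the space-harmonic propagation constants and phase velocities for an assumed phase shift, pitch, and frequency (all illustrative values, not taken from any particular tube).

```python
# Sketch: space-harmonic propagation constants k_n = (Phi + 2*n*pi)/p and the
# corresponding phase velocities v_n = omega/k_n for a periodic slow-wave
# circuit. Phi, pitch and frequency are illustrative assumptions.
import math

omega = 2 * math.pi * 10e9   # angular frequency of a 10 GHz wave (assumed)
p = 1.0e-3                   # circuit pitch, m (assumed)
phi = 0.6 * math.pi          # cell-to-cell phase shift, -pi < Phi < pi (assumed)

for n in range(-2, 3):
    k_n = (phi + 2 * n * math.pi) / p
    v_n = omega / k_n
    sign = "forward" if v_n > 0 else "backward"
    print(f"n = {n:+d}: k_n = {k_n:+.3e} rad/m, v_phase = {v_n:+.3e} m/s ({sign})")

# All harmonics share one group velocity d(omega)/dk; a BWO synchronizes the
# beam with a harmonic whose phase velocity opposes it (e.g. n = -1 here).
```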
Two examples of slow-wave circuit characteristics are shown in the ω–k (Brillouin) diagram: in figure (a), the fundamental n = 0 is a forward space harmonic (the phase velocity vn = ω/kn has the same sign as the group velocity vg = dω/dkn); the synchronism condition for backward interaction is at point B, the intersection of the line of slope ve — the beam velocity — with the first backward (n = −1) space harmonic. In figure (b) the fundamental (n = 0) is backward. A periodic structure can support both forward and backward space harmonics, which are not modes of the field, and cannot exist independently, even if a beam can be coupled to only one of them. As the magnitude of the space harmonics decreases rapidly when the value of n is large, the interaction can be significant only with the fundamental or the first space harmonic. M-type BWO The M-type carcinotron, or M-type backward wave oscillator, uses crossed static electric field E and magnetic field B, similar to the magnetron, for focussing an electron sheet beam drifting perpendicularly to E and B, along a slow-wave circuit, with a velocity E/B. Strong interaction occurs when the phase velocity of one space harmonic of the wave is equal to the electron velocity. Both Ez and Ey components of the RF field are involved in the interaction (Ey parallel to the static E field). Electrons which are in a decelerating Ez electric field of the slow-wave lose the potential energy they have in the static electric field E and reach the circuit. The sole electrode is more negative than the cathode, in order to avoid collecting those electrons having gained energy while interacting with the slow-wave space harmonic. O-type BWO The O-type carcinotron, or O-type backward wave oscillator, uses an electron beam longitudinally focused by a magnetic field, and a slow-wave circuit interacting with the beam. A collector collects the beam at the end of the tube. O-BWO spectral purity and noise The BWO is a voltage tunable oscillator, whose voltage tuning rate is directly related to the propagation characteristics of the circuit. The oscillation starts at a frequency where the wave propagating on the circuit is synchronous with the slow space charge wave of the beam. Inherently the BWO is more sensitive than other oscillators to external fluctuations. Nevertheless, its ability to be phase- or frequency-locked has been demonstrated, leading to successful operation as a heterodyne local oscillator. Frequency stability The frequency–voltage sensitivity is given by the relation

$\frac{\Delta f}{f} = \frac{1}{2}\left[\frac{1}{1 + |v_\Phi/v_g|}\right]\frac{\Delta V_0}{V_0}$

The oscillation frequency is also sensitive to the beam current (called "frequency pushing"). The current fluctuations at low frequencies are mainly due to the anode voltage supply, and the sensitivity to the anode voltage is given by

$\frac{\Delta f}{f} = \frac{3}{4}\left[\frac{\omega_q/\omega}{1 + |v_\Phi/v_g|}\right]\frac{\Delta V_a}{V_a}$

This sensitivity, as compared to the cathode voltage sensitivity, is reduced by the ratio ωq/ω, where ωq is the angular plasma frequency; this ratio is of the order of a few times 10−2. Noise Measurements on submillimeter-wave BWOs (de Graauw et al., 1978) have shown that a signal-to-noise ratio of 120 dB per MHz could be expected in this wavelength range. In heterodyne detection using a BWO as a local oscillator, this figure corresponds to a noise temperature added by the oscillator of only 1000–3000 K. Notes References Johnson, H. R. (1955). Backward-wave oscillators. Proceedings of the IRE, 43(6), 684–697. Ramo S., Whinnery J. R., Van Duzer T.
- Fields and Waves in Communication Electronics (3rd ed., 1994) John Wiley & Sons. Kantorowicz G., Palluel P. - Backward Wave Oscillators, in Infrared and Millimeter Waves, Vol 1, Chap. 4, K. Button ed., Academic Press 1979. de Graauw Th., Anderegg M., Fitton B., Bonnefoy R., Gustincic J. J. - 3rd Int. Conf. Submm. Waves, Guilford University of Surrey (1978). Convert G., Yeou T., in Millimeter and Submillimeter Waves, Chap. 4, (1964) Iliffe Books, London External links Virtual Valve Museum Thomson CSF CV6124 (Wayback Machine) Microwave technology Terahertz technology Vacuum tubes
Backward-wave oscillator
[ "Physics" ]
3,173
[ "Spectrum (physical sciences)", "Terahertz technology", "Vacuum tubes", "Electromagnetic spectrum", "Vacuum", "Matter" ]
3,057,557
https://en.wikipedia.org/wiki/Rubble%20pile
In astronomy, a rubble pile is a celestial body that consists of numerous pieces of debris that have coalesced under the influence of gravity. Rubble piles have low density because there are large cavities between the various chunks that make them up. The asteroids Bennu and Ryugu have a measured bulk density which suggests that their internal structure is a rubble pile. Many comets and most smaller minor planets (<10 km in diameter) are thought to be composed of coalesced rubble. Minor planets Most smaller asteroids are thought to be rubble piles. Rubble piles form when an asteroid or moon (which may originally be monolithic) is smashed to pieces by an impact, and the shattered pieces subsequently fall back together, primarily due to self-gravitation. This coalescing usually takes from several hours to weeks. When a rubble-pile asteroid passes a much more massive object, tidal forces change its shape. Scientists first suspected that asteroids are often rubble piles when asteroid densities were first determined. Many of the calculated densities were significantly less than those of meteorites, which in some cases had been determined to be pieces of asteroids. Many asteroids with low densities are thought to be rubble piles, for example 253 Mathilde. The mass of Mathilde, as determined by the NEAR Shoemaker mission, is far too low for the volume observed, considering the surface is rock. Even ice with a thin crust of rock would not provide a suitable density. Also, the large impact craters on Mathilde would have shattered a rigid body. However, the first unambiguous rubble pile to be photographed is 25143 Itokawa, which has no obvious impact craters and is thus almost certainly a coalescence of shattered fragments. The asteroid 433 Eros, the primary destination of NEAR Shoemaker, was determined to be riven with cracks but otherwise solid. Other asteroids, possibly including Itokawa, have been found to be contact binaries, two major bodies touching, with or without rubble filling the boundary. Large interior voids are possible because of the very low gravity of most asteroids. Despite a fine regolith on the outside (at least to the resolution that has been seen with spacecraft), the asteroid's gravity is so weak that friction between fragments dominates and prevents small pieces from falling inwards and filling the voids. All the largest asteroids (1 Ceres, 2 Pallas, 4 Vesta, 10 Hygiea, 704 Interamnia) are solid objects without any macroscopic internal porosity. This may be because they have been large enough to withstand all impacts, and have never been shattered. Alternatively, Ceres and a few other of the largest asteroids may be massive enough that, even if they were shattered but not dispersed, their gravity would collapse most voids upon recoalescing. Vesta, at least, has withstood intact one major impact since its formation and shows signs of internal structure from differentiation in the resultant crater that assures that it is not a rubble pile. This serves as evidence for size as a protection from shattering into rubble. Comets Observational evidence suggests that the cometary nucleus may not be a well-consolidated single body, but may instead be a loosely bound agglomeration of smaller fragments, weakly bonded and subject to occasional or even frequent disruptive events, although the larger cometary fragments are expected to be primordial condensations rather than collisionally derived debris as in the asteroid case.
However, in situ observations by the Rosetta mission indicate that it may be more complex than that. Moons The moon Phobos, the larger of the two natural satellites of the planet Mars, is also thought to be a rubble pile bound together by a thin regolith crust. A rubble-pile morphology may point towards an in situ origin of the Martian moons. Based on this, it has been proposed that Phobos and Deimos may originate from a single destroyed moon. Alternatively, Phobos may have undergone repeated 'recycling,' having been torn apart into a ring before reaccreting and migrating outwards. See also Circumplanetary disk Comet nucleus List of slow rotators (minor planets) References External links Close-up images of Itokawa, a rubble pile asteroid Hyper-Velocity Impacts on Rubble Pile Asteroids pdf online @ kent.ac.uk Astrophysics Bodies of the Solar System
Rubble pile
[ "Physics", "Astronomy" ]
885
[ "Astronomical sub-disciplines", "Bodies of the Solar System", "Astrophysics", "Astronomical objects", "Solar System" ]
3,059,333
https://en.wikipedia.org/wiki/Valence%20bond%20programs
Valence bond (VB) computer programs for modern valence bond calculations: CRUNCH, by Gordon A. Gallup and his group. GAMESS (UK), includes calculation of VB wave functions by the TURTLE code, due to J.H. van Lenthe. GAMESS (US), has links to interface VB2000, and XMVB. MOLPRO and MOLCAS include code by David L. Cooper for generating Spin Coupled VB wave functions from CASSCF calculations. VB2000 version 3.0 (released 2022), by Jiabo Li, Brian Duke, David W. O. de Sousa, Rodrigo S. Bitzer and Roy McWeeny allows the use of Group Function theory, whereby different groups can be handled by different methods (VB or Hartree–Fock). Many types of VB, including spin-coupled VB, and CASVB calculations are possible. It is part of the GAMESS (US) release and can be compiled into the GAMESS (US) executable. There is a more limited stand-alone program. Earlier versions were interfaced to GAUSSIAN. XMVB (previously known as XIAMEN), by Lingchun Song, Yirong Mo, Qianer Zhang and Wei Wu. This allows several VB methods, including breathing orbital VB. The code now interfaces to GAMESS (US) in a similar manner to VB2000. Earlier versions interfaced to GAUSSIAN 98. Note that several other programs, as well as some of those above, can do Goddard's Generalized Valence Bond (GVB) methods. GAMESS (US) does this either without the VB2000 interface or with it. See also Quantum chemistry computer programs References Computational chemistry software Quantum chemistry
Valence bond programs
[ "Physics", "Chemistry" ]
374
[ "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry stubs", "Computational chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
3,061,815
https://en.wikipedia.org/wiki/Homogeneous%20broadening
Homogeneous broadening is a type of emission spectrum broadening in which all atoms radiating from a specific level under consideration radiate with equal opportunity. If an optical emitter (e.g. an atom) shows homogeneous broadening, its spectral linewidth is its natural linewidth, with a Lorentzian profile. Broadening in laser systems Broadening in laser physics is a physical phenomenon that affects the spectroscopic line shape of the laser emission profile. The laser emission is due to the (excitation and subsequent) relaxation of a quantum system (atom, molecule, ion, etc.) between an excited state (higher in energy) and a lower one. These states can be thought of as the eigenstates of the energy operator. The difference in energy between these states is proportional to the frequency/wavelength of the photon emitted. Because this energy difference fluctuates, the frequency/wavelength of the "macroscopic emission" (the beam) has a certain width (i.e. it is "broadened" with respect to the ideal, perfectly monochromatic emission). Depending on the nature of the fluctuation, there can be two types of broadening. If the fluctuation in the frequency/wavelength is due to a phenomenon that is the same for each quantum emitter, there is homogeneous broadening, while if each quantum emitter has a different type of fluctuation, the broadening is inhomogeneous. Examples of situations where the fluctuation is the same for each system (homogeneous broadening) are natural or lifetime broadening, and collisional or pressure broadening. In these cases each system is affected "on average" in the same way (e.g. by the collisions due to the pressure). The most frequent situation in solid state systems where the fluctuation is different for each system (inhomogeneous broadening) is when, because of the presence of dopants, the local electric field is different for each emitter, and so the Stark effect changes the energy levels in an inhomogeneous way. The homogeneously broadened emission line will have a Lorentzian profile (i.e. it will be best fitted by a Lorentzian function), while the inhomogeneously broadened emission will have a Gaussian profile. One or more phenomena may be present at the same time, but if one has a wider fluctuation, it will be the one responsible for the character of the broadening. These effects are not limited to laser systems, or even to optical spectroscopy. They are relevant in magnetic resonance as well, where the frequency range is in the radiofrequency region for NMR, and one can also refer to these effects in EPR, where the lineshape is observed at fixed (microwave) frequency and in a magnetic field range. Semiconductors In semiconductors, if all oscillations have the same eigenfrequency $\omega_0$ and the broadening in the imaginary part of the dielectric function results only from a finite damping $\gamma$, the system is said to be homogeneously broadened, and has a Lorentzian profile. If the system contains many oscillators with slightly different frequencies about $\omega_0$, however, then the system is inhomogeneously broadened. See also Homogeneity (physics) Voigt profile Spectral line shape References Laser science Atomic, molecular, and optical physics
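The contrast between the two profiles is easy to make quantitative. The sketch below compares a Lorentzian and a Gaussian line of equal full width at half maximum (FWHM); the centre frequency and width are arbitrary assumptions, chosen only to expose the much heavier wings of the Lorentzian.

```python
# Sketch: compare a Lorentzian (homogeneous) and a Gaussian (inhomogeneous)
# line of equal FWHM. Centre frequency and width are arbitrary assumptions.
import numpy as np

nu0, fwhm = 0.0, 1.0
nu = np.linspace(-5, 5, 2001)

gamma = fwhm / 2                                   # Lorentzian half-width
lorentz = (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)

sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))        # Gaussian std dev from FWHM
gauss = np.exp(-(nu - nu0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Both are normalised to unit area; the Lorentzian has much heavier wings.
for d in (0.5, 1.0, 2.0, 4.0):
    i = int(np.argmin(np.abs(nu - d)))
    print(f"detuning {d}: Lorentzian/Gaussian intensity ratio = {lorentz[i]/gauss[i]:.3g}")
```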
Homogeneous broadening
[ "Physics", "Chemistry" ]
707
[ "Atomic", " molecular", " and optical physics" ]
1,571,836
https://en.wikipedia.org/wiki/Azoth
Azoth is a universal remedy or potent solvent sought after in the realm of alchemy, akin to, though distinct from, the alkahest, another alchemical substance. The quest for Azoth was the crux of numerous alchemical endeavors, symbolized by the Caduceus. Initially coined to denote an esoteric formula pursued by alchemists, akin to the Philosopher's Stone, the term Azoth later evolved into a poetic expression for the element mercury. The etymology of 'Azoth' traces to Medieval Latin as a modification of 'azoc,' ultimately derived from the Arabic al-za'buq (الزئبق), meaning 'the mercury.' The scientific community does not recognize the existence of this substance. The myth of Azoth may stem from misinterpreted observations of solvents like mercury, capable of dissolving gold. Additionally, the myth might have been fueled by the occult inclinations of alchemists, who rooted and steered their chemical explorations in superstition and dogma. Description Azoth was believed to be the essential agent of transformation in alchemy. It is the name given by ancient alchemists to mercury, which they believed to be the animating spirit hidden in all matter that makes transmutation possible. The word comes from the Arabic al-zā'būq which means "mercury". The word occurs in the writings of many early alchemists, such as Zosimos, Olympiodorus, and Jābir ibn Hayyān (Geber). Mystical traditions and philosophy Azoth has also been linked to various mystical and spiritual practices beyond alchemy. In the context of Renaissance magic, it was often associated with the idea of spiritual enlightenment and the purification of the soul. Some mystical traditions regarded Azoth as a metaphor for the internal transformation required to achieve a higher state of consciousness. It was thought to embody the process of turning base human traits into divine virtues, akin to the transformation of base metals into gold. This spiritual interpretation of Azoth influenced numerous esoteric and hermetic schools of thought, contributing to its lasting legacy in Western mystical traditions. Additionally, Azoth's connection to mercury and its fluid, transformative properties also made it a symbol of adaptability and change in broader philosophical contexts. In the Kabbalah, Azoth is related to the Ein Soph or 'the Endless One'. See also Anima mundi Panacea (medicine) Prima materia Viriditas References External links Interpretation of Azoth of the Philosophers (by Dennis William Hauck) What is the Azoth? and The Azoth Ritual at Azothalchemy.org Alchemical substances Mythological medicines and drugs
Azoth
[ "Chemistry" ]
556
[ "Alchemical substances" ]
1,571,859
https://en.wikipedia.org/wiki/Hydrazoic%20acid
Hydrazoic acid, also known as hydrogen azide, azic acid or azoimide, is a compound with the chemical formula HN3. It is a colorless, volatile, and explosive liquid at room temperature and pressure. It is a compound of nitrogen and hydrogen, and is therefore a pnictogen hydride. It was first isolated in 1890 by Theodor Curtius. The acid has few applications, but its conjugate base, the azide ion, is useful in specialized processes. Hydrazoic acid, like its fellow mineral acids, is soluble in water. Undiluted hydrazoic acid is dangerously explosive, with a standard enthalpy of formation ΔfH° (l, 298 K) = +264 kJ/mol. When dilute, the gas and aqueous solutions (<10%) can be safely prepared but should be used immediately; because of its low boiling point, hydrazoic acid is enriched upon evaporation and condensation, such that dilute solutions incapable of explosion can form droplets in the headspace of the container or reactor that are capable of explosion. Production The acid is usually formed by acidification of an azide salt like sodium azide. Normally, solutions of sodium azide in water contain trace quantities of hydrazoic acid in equilibrium with the azide salt, but introduction of a stronger acid can convert the primary species in solution to hydrazoic acid. The pure acid may be subsequently obtained by fractional distillation as an extremely explosive colorless liquid with an unpleasant smell. Its aqueous solution can also be prepared by treatment of barium azide solution with dilute sulfuric acid, filtering off the insoluble barium sulfate. It was originally prepared by the reaction of aqueous hydrazine with nitrous acid: N2H4 + HNO2 → HN3 + 2 H2O. With the hydrazinium cation this reaction is written as: N2H5+ + HNO2 → HN3 + H+ + 2 H2O. Other oxidizing agents, such as hydrogen peroxide, nitrosyl chloride, trichloramine or nitric acid, can also be used to produce hydrazoic acid from hydrazine. Destruction prior to disposal Hydrazoic acid reacts with nitrous acid: HN3 + HNO2 → N2O + N2 + H2O. This reaction is unusual in that it involves compounds with nitrogen in four different oxidation states. Reactions In its properties hydrazoic acid shows some analogy to the halogen acids, since it forms poorly soluble (in water) lead, silver and mercury(I) salts. The metallic salts all crystallize in the anhydrous form and decompose on heating, leaving a residue of the pure metal. It is a weak acid (pKa = 4.75). Its heavy metal salts are explosive and readily interact with alkyl iodides. Azides of heavier alkali metals (excluding lithium) or alkaline earth metals are not explosive, but decompose in a more controlled way upon heating, releasing spectroscopically pure nitrogen gas. Solutions of hydrazoic acid dissolve many metals (e.g. zinc, iron) with liberation of hydrogen and formation of salts, which are called azides (formerly also called azoimides or hydrazoates). Hydrazoic acid may react with carbonyl derivatives, including aldehydes, ketones, and carboxylic acids, to give an amine or amide, with expulsion of nitrogen. This is called the Schmidt reaction or Schmidt rearrangement. Dissolution in the strongest acids produces explosive salts containing the aminodiazonium ion, H2N3+. The H2N3+ ion is isoelectronic with diazomethane, CH2N2. The decomposition of hydrazoic acid, triggered by shock, friction, spark, etc., produces nitrogen and hydrogen: 2 HN3 → 3 N2 + H2. Hydrazoic acid undergoes unimolecular decomposition at sufficient energy: HN3 → NH + N2. The lowest energy pathway produces NH in the triplet state, making it a spin-forbidden reaction. 
This is one of the few reactions whose rate has been determined for specific amounts of vibrational energy in the ground electronic state, by laser photodissociation studies. In addition, these unimolecular rates have been analyzed theoretically, and the experimental and calculated rates are in reasonable agreement. Toxicity Hydrazoic acid is volatile and highly toxic. It has a pungent smell and its vapor can cause violent headaches. The compound acts as a non-cumulative poison. Applications 2-Furonitrile, a pharmaceutical intermediate and potential artificial sweetening agent, has been prepared in good yield by treating furfural with a mixture of hydrazoic acid (HN3) and perchloric acid (HClO4) in the presence of magnesium perchlorate in benzene solution at 35 °C. The all gas-phase iodine laser (AGIL) mixes gaseous hydrazoic acid with chlorine to produce excited nitrogen chloride, which is then used to cause iodine to lase; this avoids the liquid-chemistry requirements of COIL lasers. References External links OSHA: Hydrazoic Acid Acids Azides Nitrogen hydrides Explosive chemicals Explosive gases Foul-smelling chemicals
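Because the hazard depends on how much of the azide is present as volatile HN3 rather than as the azide anion, the pKa figure above translates directly into a pH-dependent fraction via the Henderson-Hasselbalch relation. The short Python sketch below computes that fraction; it is an illustrative calculation only, and the pH values chosen are arbitrary examples, not figures from the article.

```python
# Fraction of total azide present as volatile hydrazoic acid (HN3)
# as a function of pH, from the Henderson-Hasselbalch relation.
# Illustrative sketch; pKa = 4.75 as stated above.

PKA_HN3 = 4.75

def fraction_hn3(ph: float) -> float:
    """Return the fraction [HN3] / ([HN3] + [N3-]) at a given pH."""
    ratio = 10 ** (ph - PKA_HN3)   # [N3-]/[HN3]
    return 1.0 / (1.0 + ratio)

for ph in (2.0, 4.75, 7.0, 9.0):
    print(f"pH {ph:4.2f}: {100 * fraction_hn3(ph):6.2f} % as HN3")
```

At pH 2 essentially all of the azide is protonated, which is why acidifying a sodium azide solution liberates HN3, while near-neutral solutions hold most of it as the far less volatile azide ion.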
Hydrazoic acid
[ "Chemistry" ]
1,014
[ "Explosive chemicals", "Azides", "Acids", "Explosive gases" ]
1,572,074
https://en.wikipedia.org/wiki/Hammer%20drill
A hammer drill, also known as a percussion drill or impact drill, is a power tool used chiefly for drilling in hard materials. It is a type of rotary drill with an impact mechanism that generates a hammering motion. The percussive mechanism provides a rapid succession of short hammer thrusts to pulverize the material to be bored, so as to provide quicker drilling with less effort. If a hammer drill's impact mechanism can be switched off, the tool can be used like a conventional drill to also perform tasks such as screwdriving. History Ancient China's principal drilling technique, percussive drilling, was invented during the Han dynasty. The process involved two to six men jumping on a lever at rhythmic intervals to raise a heavy iron bit attached to long bamboo cables from a bamboo derrick. Using cast iron bits and tools constructed of bamboo, the early Chinese were able to drill holes to a depth of . The construction of large wells took more than two to three generations of workers to complete. The cable tool drilling machines developed by the early Chinese involved raising and dropping a heavy string of drilling tools to crush rock into small fragments. In addition, the Chinese also used a cutting head secured to bamboo rods to drill to depths of . The raising and dropping of the bamboo drill strings allowed the drilling machine to penetrate less dense and unconsolidated rock formations. In 1848 J.J. Couch invented the first pneumatic percussion drill. The origin of the first hammer drill is a matter of contention. The German company Fein patented a ("drill with electro-pneumatic striking mechanism") in 1914. The German company Bosch put the first "Bosch-Hammer" into mass production around 1932. The US company Milwaukee Electric Tool Corporation states that in 1935, it was selling a lightweight electric hammer drill (cam-action). Hand-cranked percussion drills were made in the UK in the mid-twentieth century. Design Hammer drills have a cam-action or percussion hammering mechanism, in which two sets of toothed gears mechanically interact with each other to hammer while rotating the drill bit. With cam-action drills, the chuck has a mechanism whereby the entire chuck and bit move forward and backward on the axis of rotation. This type of drill is often used with or without the hammer action, but it is not possible to use the hammer action alone, as it is the rotation over the cams which causes the hammer motion. A hammer drill has a specially designed clutch that allows it to not only spin the drill bit, but also to punch it in and out (along the axis of the bit). The actual distance the bit travels in and out and the force of its blow are both very small, and the hammering action is very rapid—thousands of "BPM" (blows per minute) or "IPM" (impacts per minute). Although each blow is of relatively low force, these thousands of blows per minute are more than adequate to break up concrete or brick, using the masonry drill bit's carbide wedge to pulverize it for the spiral flutes to whisk away. For this reason, a hammer drill drills much faster than a regular drill through concrete, brick, and thick lumber. In standardized drilling speed tests, the most effective hammer drills improve drilling speeds by upwards of 30% compared to completing the same task with the hammer mode disabled. Hammer drills are increasingly powered by cordless technology. Uses Holes in hard materials are needed for anchor bolts, concrete screws, and wall plugs. 
Hammer drills are not typically used for production construction drilling, but rather for occasional drilling of holes into concrete, masonry or stone. They are also used to drill holes in concrete footings to pin concrete wall forms and to drill holes in concrete floors to pin wall framing. Slotted drive shaft or slotted drive system (SDS) rotary drills are more commonly used as dedicated masonry drilling tools in construction. The system was designed by Bosch in 1975; the name stands for "Stecken – Drehen – Sichern", which is German for "Insert – Twist – Secure". Hammer drills almost always have a lever or switch that locks off the special "hammer clutch," turning the tool into a conventional drill for wood or metal work. Hammer drills are more expensive and more bulky than regular drills, but are preferable for applications where the material to be drilled (concrete block or wood studs) is unknown. For example, an electrician mounting an electrical box to a wall would be able to use the same hammer drill to drill into either wood studs (hammer disabled) or masonry walls (hammer enabled). See also References External links NIOSH Sound Power and Vibrations Database New York City Quiet Vendor Guidelines Power tools Hand-held power tools
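As a rough illustration of the cam-action numbers quoted in the Design section, the blow rate of such a mechanism is simply the spindle speed multiplied by the number of cam ridges that engage per revolution. The sketch below is a back-of-the-envelope model, not a description of any particular product; the ridge count and spindle speeds are assumed values chosen only for illustration.

```python
# Back-of-the-envelope estimate of hammer-drill blow rate.
# Cam-action mechanism: each cam ridge passing per revolution
# produces one blow, so BPM = RPM * ridges. Assumed values only.

CAM_RIDGES = 12  # hypothetical number of ridges on the cam disc

def blows_per_minute(rpm: float, ridges: int = CAM_RIDGES) -> float:
    """Blows per minute for a cam-action hammer mechanism."""
    return rpm * ridges

for rpm in (800, 1500, 3000):
    print(f"{rpm:5d} rpm -> {blows_per_minute(rpm):8.0f} BPM")
```

With ridge counts in the tens and spindle speeds in the thousands of rpm, this toy model reproduces the "thousands of blows per minute" order of magnitude quoted above.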
Hammer drill
[ "Physics" ]
963
[ "Power (physics)", "Power tools", "Physical quantities" ]
1,572,605
https://en.wikipedia.org/wiki/Mehdi%20Golshani
Mehdi Golshani (Persian: مهدی گلشنی, born 1939 in Isfahan, Iran) is a contemporary Iranian theoretical physicist, academic, scholar, philosopher and distinguished professor at Sharif University of Technology. He is also a member of the Iranian Science and Culture Hall of Fame, a senior fellow of the Academy of Sciences of Iran and a founding fellow of the Institute for Studies in Theoretical Physics and Mathematics. He is a former member of the Supreme Council of the Cultural Revolution. History He received his B.Sc. in Physics from Tehran University in 1959 and his Ph.D. in Physics with a specialization in particle physics in 1969 from the University of California, Berkeley. The title of his doctoral dissertation is "Electron impact excitation of heavily ionized atoms". Life Career Mehdi Golshani is a distinguished lecturer. His main research areas include foundational physics, particle physics, physical cosmology and the philosophical implications of quantum mechanics. He is known as a thinker for his writings on science, religion and their interrelation. Golshani is the founder and chairman of the Faculty of Philosophy of Science at Sharif University of Technology. He is also the director of the Institute of Humanities and Cultural Studies, Tehran, Iran, and a professor in the Physics Department of Sharif University of Technology, as well as a Senior Fellow of the School of Physics at the Institute for Studies in Theoretical Physics and Mathematics (IPM). He is a member of the American Association of Physics Teachers and the Center for Theology and Natural Science, as well as a Senior Associate of the International Centre for Theoretical Physics, Trieste, Italy. He is also a member of the Philosophy of Science Association, Michigan, U.S., and the European Society for the Study of Science and Theology. He was among the winners of the first year of the Templeton Science & Religion course program and has also served as a judge for the Templeton Prize. Golshani is a fellow of the Islamic World Academy of Sciences (IAS). He has written numerous books and articles on physics, philosophy of physics, science and religion, as well as science and theology. In most of Golshani's works, there is a clear attempt to help revive the scientific spirit in the Muslim world. Views On the foundation of quantum mechanics he is mainly concerned with the orthodox interpretation of quantum mechanics and the possible more realistic alternatives, particularly Bohmian mechanics. On the interrelationship of science and religion He is a Muslim scientist and thinker who has deep roots in both science and religion. On Christianity and the development of modern science The biblical world view has had a significant impact on the development of science. Golshani suggests a connection between belief in the Biblical God and scientific breakthroughs by noting that Copernicus, Kepler, Galileo, Boyle, Newton and many other founders of science were all devout Christians. Western science was largely constructed within the framework of a Christian world view, and was influenced by the following Biblical concepts: Quotes "The conception of an omniscient and omnipotent personal God, [w]ho made everything in accordance with a rational plan and purpose, contributed to the notion of a rationally structured creation". "The notion of a transcendent God, [w]ho exists separate from His creation, served to counter the notion that the physical world, or any part of it, is sacred. 
Since the entire physical world is a mere creation, it was thus a fit object of study and transformation". "Since man was made in the image of God (Gen.1:26), which included rationality and creativity, it was deemed possible that man could discern the rational structure of the physical universe that God had made". "The cultural mandate, which appointed man to be God's steward over creation (Gen1:28), provided the motivation for studying nature and for applying that study towards practical ends, at the same time glorifying God for His wisdom and goodness". "In the popular mind, the two greatest historical conflicts between science and religion have been those involving Galileo and Darwin. "The Galileo affair, in the early 17th century, was a complex dispute, inflamed by politics and personalities. It was primarily a family squabble within Christianity. Two different scientific research programs clashed, each program supported by its own group of Christian scientists. The central issue was the epistemological question of how to determine absolute motion. Should the absolute frame of reference be set by Biblical standards, by Aristotelian philosophy, by mathematical simplicity [...] or by other considerations? The difficulty was that the observational data in themselves can yield information only about relative motion. The question of absolute motion must thus be settled by extra-scientific definitions and considerations. As is now widely recognized, the resolution of this issue depends largely on one's worldview assumptions". "The conflict precipitated by Darwin concerns primarily origins. How did life, in all its manifold forms, come to be? The dispute is not so much about observations of living things, fossils, geological formations, etc. but how to explain how they came to be. As such, the conflict involves questions concerning the ultimate nature of reality (e.g., can mind be explained entirely in terms of matter?), eschatology (e.g., does man have a non-material soul that survives physical death?)[...] and causation (e.g., does the origin of life require special divine acts?). Again, a central issue is one of epistemology: what role should divine revelation (e.g., the Bible) play in interpreting the results of observational science, in choosing the theories of science [...] and in informing our view of origins, etc? Here, too, it is clear that this conflict is rooted in a clash of opposing extra-scientific presuppositions". Works Books تحليلى بر ديدگاههاى فلسفى فيزيكدانان معاصر (a Probe into the Philosophical Viewpoints of Contemporary Physicists). in Persian. علم دینی و علم سکولار (Secular and religious science). in Persian. Golshani, Mehdi. Holy Quran and the Sciences of Nature. Paperback ed. Studies in Contemporary Philosophical Th., 1997. From Physics to Metaphysics, Institute for Humanities and Cultural Studies, Tehran, 1998 Golshani, Mehdi. Can Science Dispense with Religion? Hardcover ed. I.H.C.S., 1998. English Translation of the Holy Qur'an, Vol. 1, Islamic Propagation Organization, Tehran, 1991 As a contributor "The Sciences of Nature in an Islamic Perspective" in The Concept of Nature in Science & Theology (SSTh 4/1996), ed. by N. H. Gregersen et al. (Geneva: Labor et Fides, 1998), pp. 56–62. _ and Shojai, A. "Direct Particle Quantum Interaction" in Contemporary Fundamental Physics, 1, ed. by V.V. Dvoeglazor (Huntington, New York: Nova Publishers, Inc., 2000), p. 270. 
"Ways of Understanding Nature in the Qur’anic Perspective" in The Interplay between Scientific and Theological Worldviews (SSTh 6/1998), ed. by N. H. Gregersen et al. (Geneva: Labor et Fides, 1999), p. 183. "Philosophy of Science from the Qur’anic Perspective" in Towards Islamization of Disciplines (Hendon, Virginia: International Institute of Islamic Thought, 1989), p. 71. "Theistic Science" in God for the Twenty First Century (USA: John Templeton Foundation, 2000). "Have Physicists Been Able to Dispense with Philosophy?" in Recent Advances in Relativity Theory, ed. by M. C. Duffy & M. Wegener (Palm Harbor, Fl. : Hadronic Press, 2001) p. 90. "The Ladder of God" in Faith in Science: Scientists Search for Truth (London: Routledge, Fall 2001). "Causality in the Islamic Outlook and in Modern Physics" in Studies in Science and Theology, Vol. 8, ed. by N. H. Gregersen (ESSSAT, Fall 2001). Articles Golshani, Mehdi. "Does Science Offer Evidence of a Transcendent Reality and Purpose:." Islam and Science (Refereed) 1 (2003): 45-65. Golshani, Mehdi. "Some Important Questions Concerning the Relationship Between Science and Religion." Islam and Science 3.1 (2003): 63-83. Scientific Papers References External links Homepage - Sharif Univ. of Tech. Webpage - Islamic World Academy of Sciences Webpage - Institute for Studies in Theoretical Physics and Mathematics Webpage - Centre for Islam and Science Interview (audio) - Meta Library Quantum physicists 20th-century Iranian physicists Members of the International Society for Science and Religion Particle physicists Scientists from Isfahan Academic staff of Sharif University of Technology University of California, Berkeley alumni University of Tehran alumni 1939 births Living people Recipients of the Order of Knowledge Iranian Science and Culture Hall of Fame recipients in Mathematics and Physics Iran's Book of the Year Awards recipients Muslim evolutionists
Mehdi Golshani
[ "Physics" ]
1,933
[ "Quantum mechanics", "Quantum physicists", "Particle physicists", "Particle physics" ]
1,572,814
https://en.wikipedia.org/wiki/Di-tert-butyl%20ether
Di-tert-butyl ether is a tertiary ether, primarily of theoretical interest as the simplest member of the class of di-tertiary ethers. See also Ether Methyl tert-butyl ether Dimethyl ether Diethyl ether Diisopropyl ether References Dialkyl ethers Ether solvents Tert-butyl compounds Symmetrical ethers
Di-tert-butyl ether
[ "Chemistry" ]
74
[]
1,573,422
https://en.wikipedia.org/wiki/Leachate
A leachate is any liquid that, in the course of passing through matter, extracts soluble or suspended solids, or any other component of the material through which it has passed. Leachate is a widely used term in the environmental sciences where it has the specific meaning of a liquid that has dissolved or entrained environmentally harmful substances that may then enter the environment. It is most commonly used in the context of land-filling of putrescible or industrial waste. In the narrow environmental context leachate is therefore any liquid material that drains from land or stockpiled material and contains significantly elevated concentrations of undesirable material derived from the material that it has passed through. Landfill leachate Leachate from a landfill varies widely in composition depending on the age of the landfill and the type of waste that it contains. It usually contains both dissolved and suspended material. The generation of leachate is caused principally by precipitation percolating through waste deposited in a landfill. Once in contact with decomposing solid waste, the percolating water becomes contaminated, and if it then flows out of the waste material it is termed leachate. Additional leachate volume is produced during the decomposition of carbonaceous material, which also produces a wide range of other materials including methane, carbon dioxide and a complex mixture of organic acids, aldehydes, alcohols and simple sugars. The risks of leachate generation can be mitigated by properly designed and engineered landfill sites, such as those that are constructed on geologically impermeable materials or sites that use impermeable liners made of geomembranes or engineered clay. The use of linings is now mandatory within the United States, Australia and the European Union except where the waste is deemed inert. In addition, most toxic and difficult materials are now specifically excluded from landfilling. However, despite much stricter statutory controls, leachates from modern sites are often found to contain a range of contaminants stemming from illegal activity or legally discarded household and domestic products. In a 2012 survey performed in New York State, all surveyed double-lined landfill cells had leakage rates of less than 500 liters per hectare per day. Average leakage rates were much lower than for landfills built according to older standards before 1992. Composition of landfill leachate When water percolates through waste, it promotes and assists the process of decomposition by bacteria and fungi. These processes in turn release by-products of decomposition and rapidly use up any available oxygen, creating an anoxic environment. In actively decomposing waste, the temperature rises and the pH falls rapidly, with the result that many metal ions that are relatively insoluble at neutral pH become dissolved in the developing leachate. The decomposition processes themselves release more water, which adds to the volume of leachate. Leachate also reacts with materials that are not prone to decomposition themselves, such as fire ash, cement-based building materials and gypsum-based materials, changing the chemical composition of the leachate. In sites with large volumes of building waste, especially those containing gypsum plaster, the reaction of leachate with the gypsum can generate large volumes of hydrogen sulfide, which may be released in the leachate and may also form a large component of the landfill gas. 
The physical appearance of leachate when it emerges from a typical landfill site is a strongly odoured black-, yellow- or orange-coloured cloudy liquid. The smell is acidic and offensive and may be very pervasive because of hydrogen-, nitrogen- and sulfur-rich organic species such as mercaptans. In a landfill that receives a mixture of municipal, commercial, and mixed industrial waste but excludes significant amounts of concentrated chemical waste, landfill leachate may be characterized as a water-based solution of four groups of contaminants: dissolved organic matter (alcohols, acids, aldehydes, short chain sugars, etc.), inorganic macro components (common cations and anions including sulfate, chloride, iron, aluminium, zinc and ammonia), heavy metals (Pb, Ni, Cu, Hg), and xenobiotic organic compounds such as halogenated organics (PCBs, dioxins, etc.). A number of complex organic contaminants have also been detected in landfill leachates. Samples from raw and treated landfill leachate yielded 58 complex organic contaminants including 2-OH-benzothiazole in 84% of the samples and perfluorooctanoic acid in 68%. Bisphenol A, valsartan and 2-OH-benzothiazole had the highest average concentrations in raw leachates, after biological treatment and after reverse osmosis, respectively. Leachate management In older landfills and those with no membrane between the waste and the underlying geology, leachate is free to leave the waste and flow directly into the groundwater. In such cases, high concentrations of leachate are often found in nearby springs and flushes. As leachate first emerges it can be black in colour, anoxic, and possibly effervescent, with dissolved and entrained gases. As it becomes oxygenated it tends to turn brown or yellow because of the presence of iron salts in solution and in suspension. It also quickly develops a bacterial flora often comprising substantial growths of Sphaerotilus natans. History of landfill leachate collection In the UK, in the late 1960s, central Government policy was to ensure new landfill sites were being chosen with permeable underlying geological strata to avoid the build-up of leachate. This policy was dubbed "dilute and disperse". However, following a number of cases where this policy was seen to be failing, and an exposé in The Sunday Times of serious environmental damage being caused by inappropriate disposal of industrial wastes, both policy and the law were changed. The Deposit of Poisonous Wastes Act 1972, together with the 1974 Local Government Act, made local government responsible for waste disposal and for the enforcement of environmental standards regarding waste disposal. Proposed landfill locations also had to be justified not only by geography but also scientifically. Many European countries decided to select landfill sites in groundwater-free clay geological conditions or to require that the site have an engineered lining. In the wake of European advancements, the United States increased its development of leachate retaining and collection systems. This quickly led from lining in principle to the use of multiple lining layers in all landfills (excepting those truly inert). Goals of leachate collection systems The primary criterion for design of the leachate system is that all leachate be collected and removed from the landfill at a rate sufficient to prevent an unacceptable hydraulic head occurring at any point over the lining system. 
Components of leachate collection systems There are many components to a collection system including pumps, manholes, discharge lines and liquid level monitors. However, there are four main components which govern the overall efficiency of the system. These four elements are liners, filters, pumps and sumps. Liners Natural and synthetic liners may be utilized as both a collection device and as a means for isolating leachate within the fill to protect the soil and groundwater below. The chief concern is the ability of a liner to maintain integrity and impermeability over the life of the landfill. Subsurface water monitoring, leachate collection, and clay liners are commonly included in the design and construction of a waste landfill. To effectively serve the purpose of containing leachate in a landfill, a liner system must possess a number of physical properties. The liner must have high tensile strength, flexibility, and elongation without failure. It is also important that the liner resist abrasion, puncture, and chemical degradation by leachate. Lastly, the liner must withstand temperature variation, must resist UV light (which leads most liners to be black), must be easily installed, and must be economical. There are several types of liners used in leachate control and collection. These types include geomembranes, geosynthetic clay liners, geotextiles, geogrids, geonets, and geocomposites. Each style of liner has specific uses and abilities. Geomembranes are used to provide a barrier between mobile polluting substances released from wastes and the groundwater. In the closing of landfills, geomembranes are used to provide a low-permeability cover barrier to prevent the intrusion of rain water. Geosynthetic clay liners (GCLs) are fabricated by distributing sodium bentonite in a uniform thickness between woven and non-woven geotextiles. Sodium bentonite has a low permeability, which makes GCLs a suitable alternative to clay liners in a composite liner system. Geotextiles are used as separation between two different types of soils to prevent contamination of the lower layer by the upper layer. Geotextiles also act as a cushion to protect synthetic layers against puncture from underlying and overlaying rocks. Geogrids are structural synthetic materials used in slope veneer stability to create stability for cover soils over synthetic liners or as soil reinforcement in steep slopes. Geonets are synthetic drainage materials that are often used in lieu of sand and gravel. Geonets can take the place of drainage sand, thus increasing the landfill space for waste. Geocomposites are a combination of synthetic materials that are ordinarily used singly. A common type of geocomposite is a geonet that is heat-bonded to two layers of geotextile, one on each side. The geocomposite serves as a filter and drainage medium. Geosynthetic clay liners are a type of combination liner. One advantage to using a geosynthetic clay liner (GCL) is the ability to order exact amounts of the liner. Ordering precise amounts from the manufacturer prevents surplus and over-spending. Another advantage to GCLs is that the liner can be used in areas without an adequate clay source. On the other hand, GCLs are heavy and cumbersome, and their installation is very labor-intensive. In addition to being arduous and difficult under normal conditions, installation can be cancelled during damp conditions because the bentonite would absorb the moisture, making the job even more burdensome and tedious. 
Leachate drainage system The leachate drainage system is responsible for the collection and transport of the leachate collected inside the liner. The pipe dimensions, type, and layout must all be planned with the weight and pressure of waste, and transport vehicles in mind. The pipes are located on the floor of the cell. Above the network lies an enormous amount of weight and pressure. To support this, the pipes can either be flexible or rigid, but the joints to connect the pipes yield better results if the connections are flexible. An alternative to placing the collection system underneath the waste is to position the conduits in trenches or above grade. The collection pipe network of a leachate collection system drains, collects, and transports leachate through the drainage layer to a collection sump where it is removed for treatment or disposal. The pipes also serve as drains within the drainage layer to minimize the mounding of leachate in the layer. These pipes are designed with cuts that are inclined to 120 degrees, preventing entry of solid particles. Filters The filter layer is used above the drainage layer in leachate collection. Two types of filters are typically used in engineering practices: granular and geotextile. Granular filters consist of one or more soil layers or multiple layers having a coarser gradation in the direction of the seepage than the soil to be protected. Sumps or leachate well As liquid enters the landfill cell, it moves down the filter, passes through the pipe network, and rests in the sump. As collection systems are planned, the number, location, and size of the sumps are vital to an efficient operation. When designing sumps, the amount of leachate and liquid expected is the foremost concern. Areas in which rainfall is higher than average typically have larger sumps. A further criterion for sump planning is accounting for the pump capacity. The relationship of pump capacity and sump size is inverse. If the pump capacity is low, the volume of the sump should be larger than average. It is critical for the volume of the sump to be able to store the expected leachate between pumping cycles. This relationship helps maintain a healthy operation. Sump pumps can function with preset phase times. If the flow is not predictable, a predetermined leachate height level can automatically switch the system on. Other conditions for sump planning are maintenance and pump drawdown. Collection pipes typically convey the leachate by gravity to one or more sumps, depending upon the size of the area drained. Leachate collected in the sump is removed by pumping to a vehicle, to a holding facility for subsequent vehicle pickup, or to an on-site treatment facility. Sump dimensions are governed by the amount of leachate to be stored, pump capacity, and minimum pump drawdown. The volume of the sump must be sufficient to hold the maximum amount of leachate anticipated between pump cycles, plus an additional volume equal to the minimum pump drawdown volume. Sump size should also consider dimensional requirements for conducting maintenance and inspection activities. Sump pumps may operate with preset cycling times or, if leachate flow is less predictable, the pump may be automatically switched on when the leachate reaches a predetermined level. 
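The sizing rule in the paragraph above (sump volume = worst-case leachate accumulated between pump cycles plus the minimum pump drawdown volume) is simple enough to express as a formula. The sketch below does exactly that; the flow, cycle time, and drawdown figures are made-up inputs for illustration, not design values from any standard or from this article.

```python
# Minimal sump-sizing sketch following the rule stated above:
#   V_sump = Q_max * t_cycle + V_drawdown
# where Q_max is the peak leachate inflow, t_cycle the longest
# interval between pump cycles, and V_drawdown the minimum pump
# drawdown volume. All numbers below are invented for illustration.

def required_sump_volume(q_max_m3_per_h: float,
                         cycle_hours: float,
                         drawdown_m3: float) -> float:
    return q_max_m3_per_h * cycle_hours + drawdown_m3

v = required_sump_volume(q_max_m3_per_h=1.2,  # peak inflow
                         cycle_hours=8.0,     # between pump cycles
                         drawdown_m3=0.5)     # minimum drawdown
print(f"Required sump volume: {v:.1f} m^3")
```

This also makes the inverse pump/sump relationship in the text concrete: with a lower-capacity pump the interval between cycles grows, so the first term, and therefore the required sump volume, grows with it.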
Membrane and collection for treatment More modern landfills in the developed world have some form of membrane separating the waste from the surrounding ground, and in such sites there is often a series of leachate collection pipes laid on the membrane to convey the leachate to a collection or treatment location. An example of a treatment system with only minor membrane use is the Nantmel Landfill Site. All membranes are porous to a limited extent, so that over time low volumes of leachate will cross the membrane. Landfill membranes are designed so that these volumes remain low enough that they should never have a measurable adverse impact on the quality of the receiving groundwater. A more significant risk may be the failure or abandonment of the leachate collection system. Such systems are prone to internal failure as landfills suffer large internal movements as waste decomposes unevenly and thus buckles and distorts pipes. If a leachate collection system fails, leachate levels will slowly build in a site and may even over-top the containing membrane and flow out into the environment. Rising leachate levels can also wet waste masses that have previously been dry, triggering further active decomposition and leachate generation. Thus, what appears to be a stabilised and inactive site can become re-activated, restart significant gas production and exhibit significant changes in finished ground levels. Re-injection into landfill One method of leachate management that was more common in uncontained sites was leachate re-circulation, in which leachate was collected and re-injected into the waste mass. This process greatly accelerated decomposition and therefore gas production and had the impact of converting some leachate volume into landfill gas and reducing the overall volume of leachate for disposal. However, it also tended to increase substantially the concentrations of contaminant materials, making it a more difficult waste to treat. Treatment The most common method of handling collected leachate is on-site treatment. When treating leachate on-site, the leachate is pumped from the sump into the treatment tanks. The leachate may then be mixed with chemical reagents to modify the pH, to coagulate and settle solids, and to reduce the concentration of hazardous matter. Traditional treatment involved a modified form of activated sludge to substantially reduce the dissolved organic content. Nutrient imbalance can cause difficulties in maintaining an effective biological treatment stage. The treated liquid is rarely of sufficient quality to be released to the environment and may be tankered or piped to a local sewage treatment facility; the decision depends on the age of the landfill and on the limit of water quality that must be achieved after treatment. With high conductivity, leachate is hard to treat with biological treatment or chemical treatment. Treatment with reverse osmosis is also limited, resulting in low recoveries and fouling of the RO membranes. Reverse osmosis applicability is limited by conductivity, organics, and scaling inorganic elements such as CaSO4, Si, and Ba. 
In Europe, regulations and controls have improved in recent decades, and toxic wastes are now no longer permitted to be disposed of in the Municipal Solid Waste landfills, and in most developed countries the metals problem has diminished. Paradoxically, however, as sewage treatment plant discharges are being improved throughout Europe and many other countries, the plant operators are finding that leachates are difficult waste streams to treat. This is because leachates contain very high ammoniacal nitrogen concentrations, are usually very acidic, are often anoxic and, if received in large volumes relative to the incoming sewage flow, lack the phosphorus needed to prevent nutrient starvation for the biological communities that perform the sewage treatment processes. The result is that leachates are a difficult-to-treat waste stream. However, within ageing municipal solid waste landfills, this may not be a problem as the pH returns close to neutral after the initial stage of acidogenic leachate decomposition. Many sewer undertakers limit maximum ammoniacal nitrogen concentration in their sewers to 250 mg/L to protect sewer maintenance workers, as the WHO's maximum occupational safety limit would be exceeded at above pH 9 to 10, which is often the highest pH allowed in sewer discharges. Many older leachate streams also contained a variety of synthetic organic species and their decomposition products, some of which had the potential to be acutely damaging to the environment. Environmental impact The risks from waste leachate are due to its high organic contaminant concentrations and high concentration of ammonia. Pathogenic microorganisms that might be present in it are often cited as the most important, but pathogenic organism counts reduce rapidly with time in the landfill, so this only applies to the freshest leachate. Toxic substances may, however, be present in variable concentrations, and their presence is related to the nature of the waste deposited. Most landfills containing organic material will produce methane, some of which dissolves in the leachate. This could, in theory, be released in poorly ventilated areas in the treatment plant. All plants in Europe must now be assessed under the EU ATEX Directive and zoned where explosion risks are identified to prevent future accidents. The most important requirement is the prevention of the discharge of dissolved methane from untreated leachate into public sewers, and most sewage treatment authorities limit the permissible discharge concentration of dissolved methane to 0.14 mg/L, or 1/10 of the lower explosive limit. This entails methane stripping from the leachate. The greatest environmental risks occur in the discharges from older sites constructed before modern engineering standards became mandatory and also from sites in the developing world where modern standards have not been applied. There are also substantial risks from illegal sites and ad-hoc sites used by organizations outside the law to dispose of waste materials. Leachate streams running directly into the aquatic environment have both an acute and chronic impact on the environment, which may be very severe and can severely diminish bio-diversity and greatly reduce populations of sensitive species. Where toxic metals and organics are present this can lead to chronic toxin accumulation in both local and far distant populations. Rivers impacted by leachate are often yellow in appearance and often support severe overgrowths of sewage fungus. 
Contemporary research on assessment techniques and remedial technologies for environmental issues originating from landfill leachate has been reviewed in an article published in the journal Critical Reviews in Environmental Science and Technology. A possible ecological threat for the aquatic environment due to the occurrence of organic micropollutants in raw and treated landfill leachates has also been reported. Problems and failures with collection systems Leachate collection systems can experience many problems including clogging with mud or silt. Bioclogging can be exacerbated by the growth of micro-organisms in the conduit. The conditions in leachate collection systems are ideal for micro-organisms to multiply. Chemical reactions in the leachate may also cause clogging through generation of solid residues. The chemical composition of leachate can weaken pipe walls, which may then fail. Other types of leachate Leachate can also be produced from land that was contaminated by chemicals or toxic materials used in industrial activities such as factories, mines or storage sites. Composting sites in areas of high rainfall also produce leachate. Leachate is associated with stockpiled coal and with waste materials from metal ore mining and other rock extraction processes, especially those in which sulfide-containing materials are exposed to air, producing sulfuric acid, often with elevated metal concentrations. In the context of civil engineering (more specifically reinforced concrete design), leachate refers to the effluent of pavement wash-off (that may include melting snow and ice with salt) that permeates through the cement paste onto the surface of the steel reinforcement, thereby catalyzing its oxidation and degradation. Leachates can be genotoxic in nature. References Anaerobic digestion Biodegradable waste management Environmental soil science Liquid-solid separation
Leachate
[ "Chemistry", "Engineering", "Environmental_science" ]
4,454
[ "Separation processes by phases", "Biodegradable waste management", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Environmental soil science", "Liquid-solid separation" ]
1,575,447
https://en.wikipedia.org/wiki/Shear%20modulus
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to the shear strain: G = τxy/γxy = (F/A)/(Δx/l) = Fl/(AΔx), where τxy = F/A is the shear stress, F is the force which acts, A is the area on which the force acts, and γxy is the shear strain; in engineering γxy = Δx/l = tan θ, elsewhere γxy = θ, where Δx is the transverse displacement and l is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing force by mass times acceleration. Explanation The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law: Young's modulus E describes the material's strain response to uniaxial stress in the direction of this stress (like pulling on the ends of a wire or putting a weight on top of a column, with the wire getting longer and the column losing height), the Poisson's ratio ν describes the response in the directions orthogonal to this uniaxial stress (the wire getting thinner and the column thicker), the bulk modulus K describes the material's response to (uniform) hydrostatic pressure (like the pressure at the bottom of the ocean or a deep swimming pool), the shear modulus G describes the material's response to shear stress (like cutting it with dull scissors). These moduli are not independent, and for isotropic materials they are connected via the equations 2G(1 + ν) = E = 3K(1 − 2ν). The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or strain when tested in different directions. In this case, one may need to use the full tensor-expression of the elastic constants, rather than a single scalar value. One possible definition of a fluid would be a material with zero shear modulus. Shear waves In homogeneous and isotropic solids, there are two kinds of waves, pressure waves and shear waves. The velocity of a shear wave is controlled by the shear modulus: vs = √(G/ρ), where G is the shear modulus and ρ is the solid's density. Shear modulus of metals The shear modulus of metals is usually observed to decrease with increasing temperature. At high pressures, the shear modulus also appears to increase with the applied pressure. Correlations between the melting temperature, vacancy formation energy, and the shear modulus have been observed in many metals. Several models exist that attempt to predict the shear modulus of metals (and possibly that of alloys). Shear modulus models that have been used in plastic flow computations include: the Varshni-Chen-Gray model developed by and used in conjunction with the Mechanical Threshold Stress (MTS) plastic flow stress model. the Steinberg-Cochran-Guinan (SCG) shear modulus model developed by and used in conjunction with the Steinberg-Cochran-Guinan-Lund (SCGL) flow stress model. the Nadal and LePoac (NP) shear modulus model that uses Lindemann theory to determine the temperature dependence and the SCG model for pressure dependence of the shear modulus. 
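To make the definition concrete, the sketch below computes G for a block loaded in shear and then the shear-wave speed from it. It is a minimal illustration of the formulas just given; the numerical inputs are invented for the example (they happen to give steel-like values) and do not come from the article.

```python
import math

# Shear modulus from the defining ratio G = (F/A) / (dx/l),
# then the shear wave speed v_s = sqrt(G / rho).
# All input numbers below are invented for illustration.

def shear_modulus(force_n: float, area_m2: float,
                  dx_m: float, length_m: float) -> float:
    """G = shear stress / shear strain (Pa)."""
    stress = force_n / area_m2      # tau = F / A
    strain = dx_m / length_m        # gamma = dx / l
    return stress / strain

def shear_wave_speed(g_pa: float, density_kg_m3: float) -> float:
    """v_s = sqrt(G / rho) for a homogeneous isotropic solid."""
    return math.sqrt(g_pa / density_kg_m3)

G = shear_modulus(force_n=8.0e4, area_m2=1.0e-3,
                  dx_m=1.0e-4, length_m=0.1)
print(f"G   = {G / 1e9:.1f} GPa")                 # 80.0 GPa
print(f"v_s = {shear_wave_speed(G, 7850.0):.0f} m/s")  # ~3190 m/s
```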
Varshni-Chen-Gray model The Varshni-Chen-Gray model (sometimes referred to as the Varshni equation) has the form: μ(T) = μ0 − D/(exp(T0/T) − 1), where μ0 is the shear modulus at T = 0 K, and D and T0 are material constants. SCG model The Steinberg-Cochran-Guinan (SCG) shear modulus model is pressure dependent and has the form: μ(p,T) = μ0 + (∂μ/∂p)·p/η^(1/3) + (∂μ/∂T)·(T − 300 K), where μ0 is the shear modulus at the reference state (T = 300 K, p = 0, η = 1), p is the pressure, and T is the temperature. NP model The Nadal-Le Poac (NP) shear modulus model is a modified version of the SCG model. The empirical temperature dependence of the shear modulus in the SCG model is replaced with an equation based on Lindemann melting theory. In the NP model, μ0 is the shear modulus at absolute zero and ambient pressure, ζ is an area, m is the atomic mass, and f is the Lindemann constant. Shear relaxation modulus The shear relaxation modulus G(t) is the time-dependent generalization of the shear modulus G. See also Elasticity tensor Dynamic modulus Impulse excitation technique Shear strength Seismic moment References Materials science Shear strength Elasticity (physics) Mechanical quantities
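As a quick sanity check on the Varshni form reconstructed above, the snippet below evaluates it over a temperature sweep. The constants are placeholders chosen only to show the qualitative softening with temperature, not fitted values for any real metal.

```python
import math

# Varshni-Chen-Gray temperature dependence of the shear modulus:
#   mu(T) = mu0 - D / (exp(T0 / T) - 1)
# mu0, D, T0 below are placeholder values, not fitted constants.

MU0 = 90.0e9   # shear modulus at 0 K, Pa (assumed)
D   = 3.0e9    # material constant, Pa (assumed)
T0  = 200.0    # material constant, K (assumed)

def mu_varshni(temp_k: float) -> float:
    return MU0 - D / (math.exp(T0 / temp_k) - 1.0)

for t in (100.0, 300.0, 600.0, 900.0):
    print(f"T = {t:5.0f} K -> mu = {mu_varshni(t) / 1e9:6.2f} GPa")
```

Whatever constants are chosen, the model reproduces the behaviour noted earlier in the article: the shear modulus decreases monotonically as temperature increases.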
Shear modulus
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
999
[ "Structural engineering", "Physical phenomena", "Mechanical quantities", "Applied and interdisciplinary physics", "Physical quantities", "Elasticity (physics)", "Deformation (mechanics)", "Quantity", "Shear strength", "Materials science", "Mechanics", "nan", "Mechanical engineering", "Phys...
10,950,869
https://en.wikipedia.org/wiki/Orthonormal%20function%20system
An orthonormal function system (ONS) is an orthonormal basis in a vector space of functions. References Linear algebra Functional analysis
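Since the stub gives only the bare definition, it may help to spell it out: a family of functions is an ONS when each pair is orthonormal under the inner product of the function space. The LaTeX sketch below states the condition and the standard trigonometric example; the interval and normalization are the usual textbook choices, not anything fixed by the article.

```latex
% Orthonormality condition for a function system (\varphi_n)
% with respect to the L^2 inner product on [a, b]:
\langle \varphi_m, \varphi_n \rangle
  = \int_a^b \varphi_m(x)\,\overline{\varphi_n(x)}\,dx
  = \delta_{mn}

% Classical example: the trigonometric system on [0, 2\pi],
%   \varphi_n(x) = \tfrac{1}{\sqrt{2\pi}}\, e^{inx}, \quad n \in \mathbb{Z},
% which is an orthonormal basis of L^2[0, 2\pi].
```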
Orthonormal function system
[ "Mathematics" ]
34
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra" ]
10,952,377
https://en.wikipedia.org/wiki/Material-handling%20equipment
Material handling equipment (MHE) is mechanical equipment used for the movement, storage, control, and protection of materials, goods and products throughout the process of manufacturing, distribution, consumption, and disposal. The different types of equipment can be classified into four major categories: transport equipment, positioning equipment, unit load formation equipment, and storage equipment. Transport equipment Transport equipment is used to move material from one location to another (e.g., between workplaces, between a loading dock and a storage area, etc.), while positioning equipment is used to manipulate material at a single location. The major subcategories of transport equipment are conveyors, cranes, and industrial trucks. Material can also be transported manually using no equipment. Conveyors Conveyors are used when material is to be moved frequently between specific points over a fixed path and when there is a sufficient flow volume to justify the fixed conveyor investment. Different types of conveyors can be characterized by the type of product being handled (unit load or bulk load), the conveyor's location (in-floor, on-floor, or overhead), and whether or not loads can accumulate on the conveyor. Accumulation allows intermittent movement of each unit of material transported along the conveyor, while all units move simultaneously on conveyors without accumulation capability. For example, while both the roller and flat-belt are unit-load on-floor conveyors, the roller provides accumulation capability while the flat-belt does not; similarly, both the power-and-free and trolley are unit-load overhead conveyors, with the power-and-free designed to include an extra track in order to provide the accumulation capability lacking in the trolley conveyor. Examples of bulk-handling conveyors include the magnetic-belt, troughed-belt, bucket, and screw conveyors. A sortation conveyor system is used for merging, identifying, inducting, and separating products to be conveyed to specific destinations, and typically consists of flat-belt, roller, and chute conveyor segments together with various moveable arms and/or pop-up wheels and chains that deflect, push, or pull products to different destinations. Cranes Cranes are used to transport loads over variable (horizontal and vertical) paths within a restricted area and when there is insufficient (or intermittent) flow volume such that the use of a conveyor cannot be justified. Cranes provide more flexibility in movement than conveyors because the loads handled can be more varied with respect to their shape and weight. Cranes provide less flexibility in movement than industrial trucks because they can only operate within a restricted area, though some can operate on a portable base. Most cranes utilize trolley-and-tracks for horizontal movement and hoists for vertical movement, although manipulators can be used if precise positioning of the load is required. The most common cranes include the jib, bridge, gantry, and stacker cranes. Industrial trucks Industrial trucks are trucks that are not licensed to travel on public roads (commercial trucks are licensed to travel on public roads). Industrial trucks are used to move materials over variable paths and when there is insufficient (or intermittent) flow volume such that the use of a conveyor cannot be justified. 
They provide more flexibility in movement than conveyors and cranes because there are no restrictions on the area covered, and they provide vertical movement if the truck has lifting capabilities. Different types of industrial trucks can be characterized by whether or not they have forks for handling pallets, provide powered or require manual lifting and travel capabilities, allow the operator to ride on the truck or require that the operator walk with the truck during travel, provide load stacking capability, and whether or not they can operate in narrow aisles. Hand trucks (including carts and dollies), the simplest type of industrial truck, cannot transport or stack pallets, are non-powered, and require the operator to walk. A pallet jack, which cannot stack a pallet, uses front wheels mounted inside the end of forks that extend to the floor, as the pallet is only lifted enough to clear the floor for subsequent travel. A counterbalanced lift truck (sometimes referred to as a forklift truck, but other attachments besides forks can be used) can transport and stack pallets and allows the operator to ride on the truck. The weight of the vehicle (and operator) behind the front wheels of the truck counterbalances the weight of the load (and the weight of the vehicle beyond the front wheels); the front wheels act as a fulcrum or pivot point. Narrow-aisle trucks usually require that the operator stand up while riding in order to reduce the truck's turning radius. Reach mechanisms and outrigger arms that straddle and support a load can be used in addition to just the counterbalance of the truck. On a turret truck, the forks rotate during stacking, eliminating the need for the truck itself to turn in narrow aisles. An order picker allows the operator to be lifted with the load to allow for less-than-pallet-load picking. Automated guided vehicles (AGVs) are industrial trucks that can transport loads without requiring a human operator. Rail or wheel steered transfer carts are preferred for areas that do not have favourable conditions for the operation of forklifts. Rail transfer carts are carts that can move on the rail line. Wheel steered transfer carts can move independently of the route with battery powered energy systems. An electric tug is a small battery powered and pedestrian operated machine capable of either pushing or pulling a significantly heavier load than itself. Manual Handling Equipment Commonly used to assist in moving smaller loads where larger equipment would struggle, manual handling equipment such as pallet trucks, trolleys, and sack trucks can be an essential part of any material handling operation. Yard ramp A yard ramp, sometimes called a mobile yard ramp, is a movable metal ramp for loading and unloading of vehicles. A yard ramp is placed at the back of a vehicle to provide access for forklifts to ascend the ramp. Using a yard ramp for vehicle loading or unloading allows the work to be carried out by a forklift. Positioning equipment Positioning equipment is used to handle material at a single location. It can be used at a workplace to feed, orient, load/unload, or otherwise manipulate materials so that they are in the correct position for subsequent handling, machining, transport, or storage. 
As compared to manual handling, the use of positioning equipment can raise the productivity of each worker when the frequency of handling is high, improve product quality and limit damage to materials and equipment when the item handled is heavy or awkward to hold and damage is likely through human error or inattention, and can reduce fatigue and injuries when the environment is hazardous or inaccessible. In many cases, positioning equipment is required for, and can be justified by, the ergonomic requirements of a task. Examples of positioning equipment include lift/tilt/turn tables, hoists, balancers, manipulators, and industrial robots. Manipulators act as “muscle multipliers” by counterbalancing the weight of a load so that an operator lifts only a small portion (1%) of the load's weight, and they fill the gap between hoists and industrial robots: they can be used for a wider range of positioning tasks than hoists and are more flexible than industrial robots due to their use of manual control. They can be powered manually, electrically, or pneumatically, and a manipulator's end-effector can be equipped with mechanical grippers, vacuum grippers, electromechanical grippers, or other tooling. Unit load formation equipment Unit load formation equipment is used to restrict materials so that they maintain their integrity when handled as a single load during transport and for storage. If materials are self-restraining (e.g., a single part or interlocking parts), then they can be formed into a unit load with no equipment. Examples of unit load formation equipment include pallets, skids, slipsheets, tote pans, bins/baskets, cartons, bags, and crates. A pallet is a platform made of wood (the most common), paper, plastic, rubber, or metal with enough clearance beneath its top surface (or face) to enable the insertion of forks for subsequent lifting purposes. A slipsheet is a thick piece of paper, corrugated fiber, or plastic upon which a load is placed and has tabs that can be grabbed by special push/pull lift truck attachments. They are used in place of a pallet to reduce weight and volume, but loading/unloading is slower. Storage equipment Storage equipment is used for holding or buffering materials over a period of time. The design of each type of storage equipment, along with its use in warehouse design, represents a trade-off between minimizing handling costs, by making material easily accessible, and maximizing the utilization of space (or cube). If materials are stacked directly on the floor, then no storage equipment is required, but, on average, each different item in storage will have a stack only half full; to increase cube utilization, storage racks can be used to allow multiple stacks of different items to occupy the same floor space at different levels. The use of racks becomes preferable to floor storage as the number of units per item requiring storage decreases. Similarly, the depth at which units of an item are stored affects cube utilization in proportion to the number of units per item requiring storage. Pallets can be stored using single- and double-deep racks when the number of units per item is small, while pallet-flow and push-back racks are used when the units per item are mid-range, and floor storage or drive-in racks are used when the number of units per item is large, with drive-in racks providing support for pallet loads that cannot be stacked on top of each other. 
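As a rough illustration of the rack-selection heuristic just described, the sketch below encodes it as a simple lookup. The numeric thresholds are hypothetical placeholders, not industry standards; real selections also depend on throughput, building constraints, and cost.

```python
# Illustrative sketch only: the pallet-storage heuristic described above,
# with hypothetical unit thresholds.
def suggest_pallet_storage(units_per_item, stackable=True):
    if units_per_item <= 5:                  # few units per item
        return "single- or double-deep rack"
    if units_per_item <= 20:                 # mid-range units per item
        return "pallet-flow or push-back rack"
    # many units per item
    return "floor storage" if stackable else "drive-in rack"

for n in (3, 12, 60):
    print(n, "->", suggest_pallet_storage(n))
```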
Individual cartons can either be picked from pallet loads or can be stored in carton-flow racks, which are designed to allow first-in, first-out (FIFO) carton access. For individual piece storage, bin shelving, storage drawers, carousels, and A-frames can be used. Engineered systems Engineered systems are automated solutions designed to streamline and optimize material handling processes. An automatic storage/retrieval system (AS/RS) is an integrated computer-controlled storage system that combines storage medium, transport mechanism, and controls with various levels of automation for fast and accurate random storage of products and materials. Identification and control equipment Identification and control equipment is used to collect and communicate the information that is used to coordinate the flow of materials within a facility and between a facility and its suppliers and customers. The identification of materials and associated control can be performed manually with no specialized equipment. See also Automated guided vehicle Automated storage and retrieval system Bulk material handling Caster Drum handler Electric track vehicle system Electric tug Forklift truck Industrial robot Material handling Packaging machinery Pallet Pallet inverter Pallet racking Slip sheet Telescopic handler Warehouse Notes References External links College Industry Council on Material Handling Education (CICMHE) European Federation of Materials Handling Industrial Truck Association Material Handling Equipment Distributors Association Material Handling Equipment Taxonomy Material Handling Industry Equipment Industrial equipment
Material-handling equipment
[ "Physics", "Engineering" ]
2,464
[ "Materials", "Material handling", "nan", "Matter" ]
2,217,599
https://en.wikipedia.org/wiki/Circular%20symmetry
In geometry, circular symmetry is a type of continuous symmetry for a planar object that can be rotated by any arbitrary angle and map onto itself. Rotational circular symmetry is isomorphic with the circle group in the complex plane, or the special orthogonal group SO(2), and the unitary group U(1). Reflective circular symmetry is isomorphic with the orthogonal group O(2). Two dimensions A 2-dimensional object with circular symmetry would consist of concentric circles and annular domains. Rotational circular symmetry has all cyclic symmetry, Zn, as subgroup symmetries. Reflective circular symmetry has all dihedral symmetry, Dihn, as subgroup symmetries. Three dimensions In 3 dimensions, a surface or solid of revolution has circular symmetry around an axis, also called cylindrical symmetry or axial symmetry. An example is a right circular cone. Circular symmetry in 3 dimensions has all pyramidal symmetry, Cnv, as subgroups. A double-cone, bicone, cylinder, toroid and spheroid have circular symmetry, and in addition have a bilateral symmetry perpendicular to the axis of the system (or half cylindrical symmetry). These reflective circular symmetries have all discrete prismatic symmetries, Dnh, as subgroups. Four dimensions In four dimensions, an object can have circular symmetry on two orthogonal axis planes, or duocylindrical symmetry. For example, the duocylinder and Clifford torus have circular symmetry in two orthogonal axes. A spherinder has spherical symmetry in one 3-space, and circular symmetry in the orthogonal direction. Spherical symmetry An analogous 3-dimensional equivalent term is spherical symmetry. Rotational spherical symmetry is isomorphic with the rotation group SO(3), and can be parametrized by the Davenport chained rotations pitch, yaw, and roll. Rotational spherical symmetry has all the discrete chiral 3D point groups as subgroups. Reflectional spherical symmetry is isomorphic with the orthogonal group O(3) and has the 3-dimensional discrete point groups as subgroups. A scalar field has spherical symmetry if it depends only on the distance to the origin, such as the potential of a central force. A vector field has spherical symmetry if it points radially inward or outward, with a magnitude and orientation (inward/outward) that depend only on the distance to the origin, such as a central force. See also Isotropy Rotational symmetry Particle in a spherically symmetric potential Gauss's theorem References Symmetry Rotation
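A quick numerical check of the 2D definition above: a planar scalar field that depends only on r = √(x² + y²) should be invariant under every SO(2) rotation. The sketch below uses an arbitrarily chosen radial profile and verifies invariance at random sample points.

```python
import numpy as np

def field(x, y):
    # A scalar field with circular symmetry: depends only on r^2 = x^2 + y^2.
    return np.exp(-(x**2 + y**2))   # arbitrary radial profile

def rotate(x, y, theta):
    # Apply an SO(2) rotation by angle theta to the points (x, y).
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 1000))
for theta in (0.3, 1.7, 2.9):                       # arbitrary angles
    xr, yr = rotate(x, y, theta)
    assert np.allclose(field(x, y), field(xr, yr))  # invariance holds
print("radial field is invariant under all sampled rotations")
```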
Circular symmetry
[ "Physics", "Mathematics" ]
499
[ "Physical phenomena", "Classical mechanics", "Rotation", "Motion (physics)", "Geometry", "Symmetry" ]
2,218,040
https://en.wikipedia.org/wiki/Crystallographic%20restriction%20theorem
The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction pattern symmetries, such as 5-fold; these were not discovered until 1982 by Dan Shechtman. Crystals are modeled as discrete lattices, generated by a list of independent finite translations. Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, the point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups. Dimensions 2 and 3 The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together. Lattice proof A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts. Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon, and again and again until the distance between lattice points is as small as we like; thus no discrete lattice can have 8-fold symmetry. The same argument applies to any k-fold rotation, for k greater than 6. A shrinking argument also eliminates 5-fold symmetry. Consider a regular pentagon of lattice points. If it exists, then we can take every other edge displacement and (head-to-tail) assemble a 5-point star, with the last edge returning to the starting point. The vertices of such a star are again vertices of a regular pentagon with 5-fold symmetry, but about 60% smaller than the original. Thus the theorem is proved. The existence of quasicrystals and Penrose tilings shows that the assumption of a linear translation is necessary. Penrose tilings may have 5-fold rotational symmetry and a discrete lattice, and any local neighborhood of the tiling is repeated infinitely many times, but there is no linear translation for the tiling as a whole. And without the discrete lattice assumption, the above construction not only fails to reach a contradiction, but produces a (non-discrete) counterexample. Thus 5-fold rotational symmetry cannot be eliminated by an argument missing either of those assumptions. A Penrose tiling of the whole (infinite) plane can only have exact 5-fold rotational symmetry (of the whole tiling) about a single point, however, whereas the 4-fold and 6-fold lattices have infinitely many centres of rotational symmetry. Trigonometry proof Consider two lattice points A and B separated by a translation vector r. 
Consider an angle α such that a rotation of angle α about any lattice point is a symmetry of the lattice. Rotating about point B by α maps point A to a new point A'. Similarly, rotating about point A by −α maps B to a point B'. Since both rotations mentioned are symmetry operations, A' and B' must both be lattice points. Due to the periodicity of the crystal, the new vector r' which connects them must be equal to an integer multiple of r: r' = m r, with m an integer. The four translation vectors, three of length r and one, connecting A' and B', of length r', form a trapezium. Therefore, the length of r' is also given by: r' = 2r cos α − r. Combining the two equations gives: 2 cos α = m + 1 = M, where M is also an integer. Bearing in mind that |2 cos α| ≤ 2, we have allowed integers M ∈ {−2, −1, 0, 1, 2}. Solving for possible values of α reveals that the only values in the 0° to 180° range are 0°, 60°, 90°, 120°, and 180°. In radians, the only allowed rotations consistent with lattice periodicity are given by 2π/n, where n = 1, 2, 3, 4, 6. This corresponds to 1-, 2-, 3-, 4-, and 6-fold symmetry, respectively, and therefore excludes the possibility of 5-fold or greater than 6-fold symmetry. Short trigonometry proof Consider a line of atoms A-O-B, separated by distance a. Rotate the entire row by θ = +2π/n and θ = −2π/n, with point O kept fixed. After the rotation by +2π/n, A is moved to the lattice point C and after the rotation by −2π/n, B is moved to the lattice point D. Due to the assumed periodicity of the lattice, the two lattice points C and D will also be in a line directly below the initial row; moreover C and D will be separated by r = ma, with m an integer. But by trigonometry, the separation between these points is: 2a cos(2π/n). Equating the two relations gives: 2 cos(2π/n) = m, with m an integer and |m| ≤ 2. This is satisfied by only n = 1, 2, 3, 4, 6. Matrix proof For an alternative proof, consider matrix properties. The sum of the diagonal elements of a matrix is called the trace of the matrix. In 2D and 3D every rotation is a planar rotation, and the trace is a function of the angle alone. For a 2D rotation, the trace is 2 cos θ; for a 3D rotation, 1 + 2 cos θ. Examples Consider a 60° (6-fold) rotation matrix with respect to an orthonormal basis in 2D. The trace is precisely 1, an integer. Consider a 45° (8-fold) rotation matrix. The trace is 2 cos 45° = √2, not an integer. Selecting a basis formed from vectors that span the lattice, neither orthogonality nor unit length is guaranteed, only linear independence. However, the trace of the rotation matrix is the same with respect to any basis. The trace is a similarity invariant under linear transformations. In the lattice basis, the rotation operation must map every lattice point into an integer number of lattice vectors, so the entries of the rotation matrix in the lattice basis – and hence the trace – are necessarily integers. As in the other proofs, this implies that the only allowed rotational symmetries correspond to 1-, 2-, 3-, 4- or 6-fold invariance. For example, wallpapers and crystals cannot be rotated by 45° and remain invariant; the only possible angles are 360°, 180°, 120°, 90° and 60°. Example Consider a 60° (360°/6) rotation matrix with respect to the oblique lattice basis for a tiling by equilateral triangles; in that basis the rotation is represented by the integer matrix with rows (0, −1) and (1, 1). The trace is still 1. The determinant (always +1 for a rotation) is also preserved. The general crystallographic restriction on rotations does not guarantee that a rotation will be compatible with a specific lattice. For example, a 60° rotation will not work with a square lattice; nor will a 90° rotation work with a rectangular lattice. 
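The trace argument above is easy to check numerically: 2 cos(2π/n) must be an integer for an n-fold lattice rotation. A minimal sketch:

```python
import numpy as np

# The trace of a 2D rotation by 2*pi/n is 2*cos(2*pi/n); for a lattice
# symmetry it must equal the (integer) trace of an integer matrix.
for n in range(1, 13):
    tr = 2 * np.cos(2 * np.pi / n)
    is_integer = abs(tr - round(tr)) < 1e-12
    print(f"n = {n:2d}   trace = {tr:+.6f}   allowed: {is_integer}")
# Only n = 1, 2, 3, 4, 6 pass, reproducing the crystallographic restriction.
```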
Higher dimensions When the dimension of the lattice rises to four or more, rotations need no longer be planar; the 2D proof is inadequate. However, restrictions still apply, though more symmetries are permissible. For example, the hypercubic lattice has an eightfold rotational symmetry, corresponding to an eightfold rotational symmetry of the hypercube. This is of interest, not just for mathematics, but for the physics of quasicrystals under the cut-and-project theory. In this view, a 3D quasicrystal with 8-fold rotation symmetry might be described as the projection of a slab cut from a 4D lattice. The following 4D rotation matrix, with rows (0, 0, 0, −1), (1, 0, 0, 0), (0, 1, 0, 0) and (0, 0, 1, 0), is the aforementioned eightfold symmetry of the hypercube (and the cross-polytope): it is an integer matrix of order 8. Transforming this matrix to suitable new orthonormal coordinates block-diagonalizes it into two planar rotations. The transformed matrix then corresponds to a rotation both by 45° (in the first two dimensions) and by 135° (in the last two). Projecting a slab of hypercubes along the first two dimensions of the new coordinates produces an Ammann–Beenker tiling (another such tiling is produced by projecting along the last two dimensions), which therefore also has 8-fold rotational symmetry on average. The A4 lattice and F4 lattice have order 10 and order 12 rotational symmetries, respectively. To state the restriction for all dimensions, it is convenient to shift attention away from rotations alone and concentrate on the integer matrices. We say that a matrix A has order k when its k-th power (but no lower power), A^k, equals the identity. Thus a 6-fold rotation matrix in the equilateral triangle basis is an integer matrix with order 6. Let OrdN denote the set of integers that can be the order of an N×N integer matrix. For example, Ord2 = {1, 2, 3, 4, 6}. We wish to state an explicit formula for OrdN. Define a function ψ based on Euler's totient function φ; it will map positive integers to non-negative integers. For an odd prime p and a positive integer k, set ψ(p^k) equal to the totient function value φ(p^k), which in this case is p^k − p^(k−1). Do the same for ψ(2^k) when k > 1. Set ψ(2) and ψ(1) to 0. Using the fundamental theorem of arithmetic, we can write any other positive integer uniquely as a product of prime powers, m = Πα pα^kα; set ψ(m) = Σα ψ(pα^kα). This differs from the totient itself, because it is a sum instead of a product. The crystallographic restriction in general form states that OrdN consists of those positive integers m such that ψ(m) ≤ N. Smallest dimension for a given order:
m:    1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
ψ(m): 0  0  2  2  4  2  6  4  6  4 10  4 12  6  6  8 16  6 18  6  8 10 22  6 20 12 18  8 28  6 30
For a prime power m = p^k > 2, ψ(m) equals twice the algebraic degree of cos(2π/m); in general, ψ(m) is strictly less than m and reaches the value m − 1 if and only if m is a prime. These additional symmetries do not allow a planar slice to have, say, 8-fold rotation symmetry. In the plane, the 2D restrictions still apply. Thus the cuts used to model quasicrystals necessarily have thickness. Integer matrices are not limited to rotations; for example, a reflection is also a symmetry of order 2. But by insisting on determinant +1, we can restrict the matrices to proper rotations. 
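The function ψ and the sets OrdN are straightforward to compute; the sketch below reproduces the table above and the claim Ord2 = {1, 2, 3, 4, 6}.

```python
from sympy import factorint, totient

def psi(m):
    # psi(1) = psi(2) = 0; psi(p^k) = phi(p^k) otherwise; additive over
    # the prime-power factorization of m.
    return sum(0 if (p == 2 and k == 1) else int(totient(p**k))
               for p, k in factorint(m).items())

def ord_set(N, limit=40):
    # Orders m realizable by an N x N integer matrix: psi(m) <= N.
    return [m for m in range(1, limit) if psi(m) <= N]

print(ord_set(2))                        # [1, 2, 3, 4, 6]
print(ord_set(4))                        # adds 5, 8, 10, 12
print([psi(m) for m in range(1, 32)])    # matches the table above
```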
Formulation in terms of isometries The crystallographic restriction theorem can be formulated in terms of isometries of Euclidean space. A set of isometries can form a group. By a discrete isometry group we will mean an isometry group that maps each point to a discrete subset of RN, i.e. the orbit of any point is a set of isolated points. With this terminology, the crystallographic restriction theorem in two and three dimensions can be formulated as follows. For every discrete isometry group in two- and three-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4 or 6. Isometries of order n include, but are not restricted to, n-fold rotations. The theorem also excludes S8, S12, D4d, and D6d (see point groups in three dimensions), even though they have only 4- and 6-fold rotational symmetry. Rotational symmetry of any order about an axis is compatible with translational symmetry along that axis. The result in the table above implies that for every discrete isometry group in four- and five-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4, 5, 6, 8, 10, or 12. All isometries of finite order in six- and seven-dimensional space are of order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 24 or 30. See also Crystallographic point group Crystallography Notes References External links The crystallographic restriction The crystallographic restriction theorem by CSIC Crystallography Group theory Theorems in algebra Articles containing proofs
Crystallographic restriction theorem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,920
[ "Theorems in algebra", "Materials science", "Group theory", "Crystallography", "Fields of abstract algebra", "Condensed matter physics", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Algebra" ]
2,218,269
https://en.wikipedia.org/wiki/Polymersome
In biotechnology, polymersomes are a class of artificial vesicles, tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 μm or more. Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems. Synthosomes are polymersomes engineered to contain channels (transmembrane proteins) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances. The term "polymersome" for vesicles made from block copolymers was coined in 1999. Polymersomes are similar to liposomes, which are vesicles formed from naturally occurring lipids. While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome. Preparation Several different block copolymer morphologies have been used to create polymersomes. The most frequently used are linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic; the other block or blocks are hydrophilic. Other morphologies used include comb copolymers, where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers, where the dendrimer portion is hydrophilic. In the case of diblock, comb and dendronized copolymers the polymersome membrane has the same bilayer morphology as a liposome, with the hydrophobic blocks of the two layers facing each other in the interior of the membrane. In the case of triblock copolymers the membrane is a monolayer that mimics a bilayer, the central block filling the role of the two facing hydrophobic blocks of a bilayer. In general, polymersomes can be prepared by the methods used in the preparation of liposomes, such as film rehydration, the direct injection method, or the dissolution method. Uses Polymersomes that contain active enzymes and that provide a way to selectively transport substrates for conversion by those enzymes have been described as nanoreactors. Polymersomes have been used to create controlled release drug delivery systems. Similar to coating liposomes with polyethylene glycol, polymersomes can be made invisible to the immune system if the hydrophilic block consists of polyethylene glycol. Thus, polymersomes are useful carriers for targeted medication. For in vivo applications, polymersomes are de facto limited to the use of FDA-approved polymers, as most pharmaceutical firms are unlikely to develop novel polymers due to cost issues. 
Fortunately, there are a number of such polymers available, with varying properties, including hydrophilic blocks such as poly(ethylene glycol) (PEG/PEO) and poly(2-methyloxazoline), and hydrophobic blocks such as polydimethylsiloxane (PDMS), poly(caprolactone) (PCL), poly(lactide) (PLA), and poly(methyl methacrylate) (PMMA). If enough of the block copolymer molecules that make up a polymersome are cross-linked, the polymersome can be made into a transportable powder. Polymersomes can be used to make an artificial cell if hemoglobin and other components are added. The first artificial cell was made by Thomas Chang. See also Cell (biology) Liposome Polymer Copolymer Artificial cell References Biomolecules Polymers Immunology Pharmacokinetics
Polymersome
[ "Chemistry", "Materials_science", "Biology" ]
861
[ "Pharmacology", "Natural products", "Biochemistry", "Pharmacokinetics", "Organic compounds", "Immunology", "Polymer chemistry", "Biomolecules", "Structural biology", "Polymers", "Molecular biology" ]
2,218,753
https://en.wikipedia.org/wiki/Adatom
An adatom is an atom that lies on a crystal surface, and can be thought of as the opposite of a surface vacancy. This term is used in surface chemistry and epitaxy, when describing single atoms lying on surfaces and surface roughness. The word is a portmanteau of "adsorbed atom". A single atom, a cluster of atoms, or a molecule or cluster of molecules may all be referred to by the general term "adparticle". This is often a thermodynamically unfavorable state. However, cases such as graphene may provide counter-examples. Growth ″Adatom″ is a portmanteau word, short for adsorbed atom. When the atom arrives at a crystal surface, it is adsorbed by the periodic potential of the crystal, thus becoming an adatom. The minima of this potential form a network of adsorption sites on the surface. There are different types of adsorption sites. Each of these sites corresponds to a different structure of the surface. There are five different types of adsorption sites: on a terrace, where the adsorption site is on top of the surface layer that is growing; at the step edge, which is next to the growing layer; in the kink of a growing layer; in the step edge of a growing layer; and in the surface layer, where the adsorption site is inside the lower layer. Of these adsorption site types, kink sites play the most important role in crystal growth. Kink density is a major factor in growth kinetics. Attachment of an atom to the kink site, or removal of the atom from the kink, does not change the free surface energy of the crystal, since the number of broken bonds does not change. It follows that the chemical potential of an atom in the kink site is equal to that of the crystal, which means that the kink site is the one adsorption site type where an adatom becomes a part of the crystal. Depending on the crystallography, or at higher growth temperatures (an entropy effect), the crystal surface becomes rough, causing a greater number of kinks. This means that adatoms have a greater chance of arriving at a kink site to become part of the crystal. This is the normal mechanism of growth. Conversely, a lower growth temperature gives a smooth surface, which means that there is a higher number of terrace adsorption sites. There are still kink sites, but these are only found at the edges of steps. The crystal only grows through "lateral motion of the steps". This type of growth is called the layer mechanism of growth. How the adatoms grow on the surface depends on which interaction is the strongest and on what the surface looks like. If the adatom-adatom interaction is the strongest, adatoms are more likely to create pyramids of adatoms on the surface. If the adatom-surface interaction is the strongest, the adatoms are more likely to arrange themselves in such a way as to create layers on the surface. But it also depends on the origins of the steps on the surface. In total there are five different types of layer growth: normal growth, step-flow growth, layer-by-layer growth, multilayer (or three-dimensional island) growth, and spiral growth. Step-flow growth is observed on stair-like surfaces. These surfaces have a geometry with vicinal steps separated by "atomically flat low-index terraces". When adatoms attach to the edges of the steps, they move along the surface until they find a kink site to attach to and become part of the crystal. 
However, if the kink density is not high enough, and thus not all adatoms arrive at one of the kinks, additional steps are created on the terraces, as if the flat surface carried small two-dimensional islands, leading to a mixed growth mode and a change in layer growth type from step-flow to layer-by-layer growth. In layer-by-layer growth, the adatom-surface interaction is the strongest. A new layer is created through 2D islands, which are created on the surface. The islands grow until they spread out over the entire surface, and the next layer will start to grow. This growth is named Frank-Van der Merwe (FM) growth. In some cases the cycle of making new layers in layer-by-layer growth is broken by kinetic constraints. In these cases, growth in higher layers starts before lower layers are finished, which means three-dimensional islands are created. A new type of growth, called multilayer growth, starts instead of the layer-by-layer growth. Multilayer growth can be divided into Volmer-Weber growth and Stranski-Krastanov growth. If the crystal surface contains a screw dislocation, a different type of growth, called spiral growth, might take place. Around the screw dislocation, a spiral shape is seen during growth. As the screw dislocation causes a growth spiral that does not disappear, islands might not be needed to cause crystal growth. The adatoms are bound to the surface through epitaxy. In this process, new layers of a crystal are created through the attachment of new atoms. This can be through a chemical reaction, or through heating a new film or centrifuging it. Generally, what happens is that the particles that are used to form a new layer will not always be adsorbed. To create bonds with the surface, energy is needed, and not every particle has the needed amount of energy to attach at that part of the surface (for different parts, different energies are needed). If one has an incoming flux F of particles, part of it will be adsorbed, given by the adsorption flux F_ads = sF, where s is the sticking coefficient. Not only does this variable depend on the surface and on the energy of the incoming atom, but also on the chemical nature of both the particle and the surface. If both the particle and surface are made of a substance that easily reacts with other particles, it is easier for the atoms to stick to the surface. Surface thermodynamics Taking a look at the thermodynamics at the surface of the film, it is seen that bonds are broken, releasing energy, and bonds are formed, confining energy. The thermodynamics involved were modeled by Walther Kossel and Ivan Stranski in the 1920s. This model is called the terrace ledge kink model (TLK). The adatom can create more than one bond with the crystal, depending on the structure of the crystal. If it is a simple cubic lattice, the adatom can have up to 6 bonds, whereas in a face-centered cubic lattice, it can have up to 12 nearest neighbors. The more bonds created, the more energy is confined, making it harder to desorb the adatom. A special site for an adatom is a kink, where exactly half of the bonds with the surface can be created, also called the "half-crystal position". Magnetic adatoms Adatoms, due to having fewer bonds than the other atoms in the crystal, have unpaired electrons. These electrons have spin and therefore a magnetic moment. This magnetic moment has no preference for orientation until an external influence, like a magnetic field, is present. 
The structure of the adatoms on a surface can be adjusted by changing the external magnetic field. Through this method theoretical situations, such as the atomic chain, can be simulated. Quantum mechanics needs to be taken into account when using adatoms due to the small scale. The magnetic field created by an atom is caused mostly by the orbit and spin of the electrons. The proton's and neutron's magnetic moments are negligible when compared to that of the electron due to their larger masses. When an atom with free electrons is inside an external magnetic field, its magnetic moment aligns with the external field because this lowers its energy. This is why bound electrons do not display this magnetic moment: they already have a favorable energy state and it is unfavorable to change. The magnetization of a (magnetically aligned) atom in a field B is given by: M = N g_j μ_B j B_j(g_j μ_B j B / (k_B T)), where B_j is the Brillouin function, N is the number of electrons, g_j is the g-factor, μ_B is the Bohr magneton, k_B is the Boltzmann constant, T is the temperature and j is the total angular momentum quantum number. This formula holds under the assumption that the magnetic energy of an electron is given by E = −m·B and there is no exchange interaction. Movement across a surface The movement of adatoms across a surface can be described by the Burton, Cabrera and Frank (BCF) model. The model treats adatoms as a 2D gas on top of the surface. The adatoms diffuse with a diffusion constant D; they are desorbed back to the medium above with a rate of 1/τ per atom and adsorbed with flux F. The diffusion constant can be, when the concentration of particles is small, expressed as: D = a² ν₀ exp(−E_D / (k_B T)), where a is the hopping distance for the atom, E_D is the energy needed to pass the diffusion barrier, and ν₀ is the attempt frequency. The BCF model obeys the continuity equation ∂n/∂t = D∇²n + F − n/τ, where n is the adatom concentration. Combining the steady state (∂n/∂t = 0) with boundary conditions at the bounding step edges leads to an expression for the adatom concentration profile, and hence the velocity at each adsorption site. Applications In 2012, scientists at the University of New South Wales were able to use phosphine to precisely and deterministically place a single phosphorus atom on a surface of epitaxial silicon. The resulting adatom created what is described as a single-atom transistor. Thus, inasmuch as chemical empirical formulas pinpoint the locations of branching ions that are attached to a particular molecule, the dopant of silicon-based transistors and other such electronic components will have the location identified of each dopant atom or molecule, along with the associated characteristic of the device based on the named locations. Thus, the mapping of the dopant substances will give exact characteristics of any given semiconductor device, once all is known. With the technology available nowadays it is possible to create a linear chain of adatoms on top of an epitaxial film. With this, one can analyse theoretical situations. Furthermore, Usami et al. were able to create quantum wells by adding Si atoms to a SiGe bulk crystal. Within these wells they observed photoluminescence from excitons confined in the wells. References Surface science
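To make the Arrhenius form of the diffusion constant above concrete, here is a small numerical sketch; all parameter values are hypothetical, order-of-magnitude choices rather than data for any specific surface.

```python
import numpy as np

kB = 8.617e-5     # Boltzmann constant in eV/K
a = 2.5e-10       # hopping distance in m (hypothetical)
nu0 = 1e13        # attempt frequency in 1/s (hypothetical)
E_D = 0.45        # diffusion barrier in eV (hypothetical)

def diffusion_constant(T):
    # D = a^2 * nu0 * exp(-E_D / (kB * T)); lattice prefactors absorbed in nu0
    return a**2 * nu0 * np.exp(-E_D / (kB * T))

for T in (150, 300, 600):
    print(f"T = {T:3d} K   D = {diffusion_constant(T):.3e} m^2/s")
```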
Adatom
[ "Physics", "Chemistry", "Materials_science" ]
2,199
[ "Condensed matter physics", "Surface science" ]
2,219,538
https://en.wikipedia.org/wiki/Calcium%20signaling
Calcium signaling is the use of calcium ions (Ca2+) to communicate and drive intracellular processes, often as a step in signal transduction. Ca2+ is important for cellular signaling: once it enters the cytosol it exerts allosteric regulatory effects on many enzymes and proteins. Ca2+ can act in signal transduction resulting from activation of ion channels or as a second messenger caused by indirect signal transduction pathways such as G protein-coupled receptors. Concentration regulation The resting concentration of Ca2+ in the cytoplasm is normally maintained around 100 nM. This is 20,000- to 100,000-fold lower than typical extracellular concentration. To maintain this low concentration, Ca2+ is actively pumped from the cytosol to the extracellular space, the endoplasmic reticulum (ER), and sometimes into the mitochondria. Certain proteins of the cytoplasm and organelles act as buffers by binding Ca2+. Signaling occurs when the cell is stimulated to release Ca2+ ions from intracellular stores, and/or when Ca2+ enters the cell through plasma membrane ion channels. Under certain conditions, the intracellular Ca2+ concentration may begin to oscillate at a specific frequency. Phospholipase C pathway Specific signals can trigger a sudden increase in the cytoplasmic Ca2+ levels to 500–1,000 nM by opening channels in the ER or the plasma membrane. The most common signaling pathway that increases cytoplasmic calcium concentration is the phospholipase C (PLC) pathway. Many cell surface receptors, including G protein-coupled receptors and receptor tyrosine kinases, activate the PLC enzyme. PLC hydrolyzes the membrane phospholipid PIP2 to form IP3 and diacylglycerol (DAG), two classic second messengers. DAG attaches to the plasma membrane and recruits protein kinase C (PKC). IP3 diffuses to the ER and binds to the IP3 receptor. The IP3 receptor serves as a Ca2+ channel, and releases Ca2+ from the ER. The Ca2+ ions bind to PKC and other proteins and activate them. Depletion from the endoplasmic reticulum Depletion of Ca2+ from the ER will lead to Ca2+ entry from outside the cell by activation of "store-operated channels" (SOCs). This inflow of Ca2+ is referred to as Ca2+-release-activated Ca2+ current (ICRAC). The mechanisms through which ICRAC occurs are still under investigation, although Orai1 and STIM1 have been linked by several studies to a proposed model of store-operated calcium influx. Recent studies have cited phospholipase A2 beta, nicotinic acid adenine dinucleotide phosphate (NAADP), and the protein STIM1 as possible mediators of ICRAC. As a second messenger Calcium is a ubiquitous second messenger with wide-ranging physiological roles. These include muscle contraction, neuronal transmission (as in an excitatory synapse), cellular motility (including the movement of flagella and cilia), fertilization, cell growth (proliferation), neurogenesis, learning and memory as with synaptic plasticity, and secretion of saliva. High levels of cytoplasmic Ca2+ can also cause the cell to undergo apoptosis. Other biochemical roles of calcium include regulating enzyme activity, permeability of ion channels, activity of ion pumps, and components of the cytoskeleton. Many Ca2+-mediated events occur when the released Ca2+ binds to and activates the regulatory protein calmodulin. Calmodulin may activate the Ca2+-calmodulin-dependent protein kinases, or may act directly on other effector proteins. 
Besides calmodulin, there are many other Ca2+-binding proteins that mediate the biological effects of Ca2+. In muscle contractions Contractions of skeletal muscle fibers are caused by electrical stimulation. This process is caused by the depolarization of the transverse tubular junctions. Once depolarized, the sarcoplasmic reticulum (SR) releases Ca2+ into the myoplasm, where it binds to a number of calcium-sensitive buffers. The Ca2+ in the myoplasm will diffuse to Ca2+ regulator sites on the thin filaments. This leads to the actual contraction of the muscle. Contractions of smooth muscle fibers are dependent on how a Ca2+ influx occurs. When a Ca2+ influx occurs, cross-bridges form between myosin and actin, leading to the contraction of the muscle fibers. Influxes may occur from extracellular Ca2+ diffusion via ion channels. This can lead to three different results. The first is a uniform increase in the Ca2+ concentration throughout the cell. This is responsible for increases in vascular diameter. The second is a rapid time-dependent change in the membrane potential, which leads to a very quick and uniform increase of Ca2+. This can cause a spontaneous release of neurotransmitters via sympathetic or parasympathetic nerve channels. The last potential result is a specific and localized subplasmalemmal Ca2+ release. This type of release increases the activation of protein kinase, and is seen in cardiac muscle where it causes excitation-contraction coupling. Ca2+ may also result from internal stores found in the SR. This release may be caused by ryanodine receptors (RyRs) or IP3 receptors. RyR-mediated Ca2+ release is spontaneous and localized. This has been observed in a number of smooth muscle tissues including arteries, portal vein, urinary bladder, ureter tissues, airway tissues, and gastrointestinal tissues. IP3-mediated Ca2+ release is caused by activation of the IP3 receptor on the SR. These influxes are often spontaneous and localized, as seen in the colon and portal vein, but may lead to a global Ca2+ wave, as observed in many vascular tissues. In neurons In neurons, concomitant increases in cytosolic and mitochondrial Ca2+ are important for the synchronization of neuronal electrical activity with mitochondrial energy metabolism. Mitochondrial matrix Ca2+ levels can reach the tens of μM levels that are necessary for the activation of isocitrate dehydrogenase, which is one of the key regulatory enzymes of the Krebs cycle. The ER, in neurons, may serve in a network integrating numerous extracellular and intracellular signals in a binary membrane system with the plasma membrane. Such an association with the plasma membrane underlies the relatively new perception of the ER and the theme of "a neuron within a neuron." The ER's structural characteristics, ability to act as a Ca2+ sink, and specific Ca2+ releasing proteins serve to create a system that may produce regenerative waves of Ca2+ release. These may communicate both locally and globally in the cell. These Ca2+ signals integrate extracellular and intracellular fluxes, and have been implicated in synaptic plasticity, memory, neurotransmitter release, neuronal excitability, and long-term changes at the gene transcription level. ER stress is also related to Ca2+ signaling and, along with the unfolded protein response, can cause ER-associated degradation (ERAD) and autophagy. Astrocytes have a direct relationship with neurons through the release of gliotransmitters. 
These transmitters allow communication between neurons and are triggered by increases in calcium within astrocytes, released from internal stores. This increase in calcium can also be caused by other neurotransmitters. Some examples of gliotransmitters are ATP and glutamate. Activation of these neurons will lead to an increase in the concentration of calcium in the cytosol from 100 nanomolar to 1 micromolar. In fertilization Ca2+ influx during fertilization has been observed in many species as a trigger for development of the oocyte. These influxes may occur as a single increase in concentration, as seen in fish and echinoderms, or with oscillating concentrations, as observed in mammals. The triggers of these Ca2+ influxes may differ. The influxes have been observed to occur via membrane Ca2+ conduits and Ca2+ stores in the sperm. It has also been seen that sperm bind to membrane receptors, leading to a release of Ca2+ from the ER. The sperm has also been observed to release a soluble factor that is specific to that species. This prevents cross-species fertilization from occurring. These soluble factors lead to activation of IP3, which causes a Ca2+ release from the ER via IP3 receptors. Some model systems, such as mammals, have been seen to mix these methods. Once the Ca2+ is released from the ER, the egg starts the process of forming a fused pronucleus and restarting the mitotic cell cycle. Ca2+ release is also responsible for the activation of NAD+ kinase, which leads to membrane biosynthesis, and for the exocytosis of the oocyte's cortical granules, which leads to the formation of the hyaline layer, allowing for the slow block to polyspermy. See also Nanodomain European Calcium Society References Further reading Cell signaling Signal transduction Calcium signaling
Calcium signaling
[ "Chemistry", "Biology" ]
1,964
[ "Biochemistry", "Neurochemistry", "Calcium signaling", "Signal transduction" ]
2,219,841
https://en.wikipedia.org/wiki/Selective%20catalytic%20reduction
Selective catalytic reduction (SCR) means converting nitrogen oxides, also referred to as NOx, with the aid of a catalyst into diatomic nitrogen (N2) and water (H2O). A reductant, typically anhydrous ammonia (NH3), aqueous ammonia (NH4OH), or a urea (CO(NH2)2) solution, is added to a stream of flue or exhaust gas and is reacted onto a catalyst. As the reaction drives toward completion, nitrogen (N2), and carbon dioxide (CO2), in the case of urea use, are produced. Selective catalytic reduction of NOx using ammonia as the reducing agent was patented in the United States by the Engelhard Corporation in 1957. Development of SCR technology continued in Japan and the US in the early 1960s with research focusing on less expensive and more durable catalyst agents. The first large-scale SCR was installed by the IHI Corporation in 1978. Commercial selective catalytic reduction systems are typically found on large utility boilers, industrial boilers, and municipal solid waste boilers and have been shown to lower NOx emissions by 70-95%. Applications include diesel engines, such as those found on large ships, diesel locomotives, gas turbines, and automobiles. SCR systems are now the preferred method for meeting Tier 4 Final and EURO 6 diesel emissions standards for heavy trucks, cars and light commercial vehicles. As a result, emissions of NOx, particulates, and hydrocarbons have been lowered by as much as 95% when compared with pre-emissions engines. Chemistry The reduction reaction takes place as the gases pass through the catalyst chamber. Before entering the catalyst chamber, ammonia, or other reductant (such as urea), is injected and mixed with the gases. The intended equations for the reactions using ammonia for an SCR are: 4NO + 4NH3 + O2 → 4N2 + 6H2O, 2NO2 + 4NH3 + O2 → 3N2 + 6H2O, and NO + NO2 + 2NH3 → 2N2 + 3H2O. Several secondary reactions also occur: 2SO2 + O2 → 2SO3, 2NH3 + SO3 + H2O → (NH4)2SO4, and NH3 + SO3 + H2O → NH4HSO4. With urea, the overall reaction is: 4NO + 2(NH2)2CO + O2 → 4N2 + 4H2O + 2CO2. As with ammonia, several secondary reactions also occur in the presence of sulfur. The ideal reaction has an optimal temperature range between 630 and 720 K (357 and 447 °C) but can operate as low as 500 K (227 °C) with longer residence times. The minimum effective temperature depends on the fuels, gas constituents, and catalyst. Other possible reductants include cyanuric acid and ammonium sulfate. Catalysts SCR catalysts are made from various porous ceramic materials used as a support, such as titanium oxide, and active catalytic components, usually either oxides of base metals (vanadium, molybdenum and tungsten), zeolites, or cerium. Base metal catalysts, such as vanadium and tungsten, lack high thermal durability, but are less expensive and operate very well at the temperature ranges most commonly applied in industrial and utility boiler applications. Thermal durability is particularly important for automotive SCR applications that incorporate the use of a diesel particulate filter with forced regeneration. They also have a high catalyzing potential to oxidize SO2 into SO3, which can be extremely damaging due to its acidic properties. Zeolite catalysts have the potential to operate at substantially higher temperature than base metal catalysts; they can withstand prolonged operation at temperatures of 900 K (627 °C) and transient conditions of up to 1120 K (847 °C). Zeolites also have a lower potential for SO2 oxidation and thus decrease the related corrosion risks. Iron- and copper-exchanged zeolite urea SCRs have been developed with approximately equal performance to that of vanadium-urea SCRs if the fraction of NO2 is 20% to 50% of the total NOx. The two most common catalyst geometries used today are honeycomb catalysts and plate catalysts. 
The honeycomb form usually consists of an extruded ceramic applied homogeneously throughout the carrier or coated on the substrate. Like the various types of catalysts, their configuration also has advantages and disadvantages. Plate-type catalysts have lower pressure drops and are less susceptible to plugging and fouling than the honeycomb types, but are much larger and more expensive. Honeycomb configurations are smaller than plate types, but have higher pressure drops and plug much more easily. A third type is corrugated, comprising only about 10% of the market in power plant applications. Reductants Several nitrogen-bearing reductants are used in SCR applications, including anhydrous ammonia, aqueous ammonia and dissolved urea. All three reductants are widely available in large quantities. Anhydrous ammonia can be stored as a liquid at approximately 10 bar in steel tanks. It is classified as an inhalation hazard, but it can be safely stored and handled if well-developed codes and standards are followed. Its advantage is that it needs no further conversion to operate within an SCR and is typically favoured by large industrial SCR operators. Aqueous ammonia must first be vaporized in order to be used, but it is substantially safer to store and transport than anhydrous ammonia. Urea is the safest to store, but requires conversion to ammonia through thermal decomposition. At the end of the process, the purified exhaust gases are sent to the boiler or condenser or other equipment, or discharged into the atmosphere. Limitations Most catalysts have a finite service life, mainly due to the formation of ammonium sulfate and ammonium bisulfate from sulfur compounds when high-sulfur fuels are used, as well as the undesirable catalyst-induced oxidation of SO2 to SO3. In applications that use exhaust gas boilers, ammonium sulfate and ammonium bisulfate can accumulate on the boiler tubes, inhibiting steam output and increasing exhaust back-pressure. In marine applications, this can increase fresh water requirements as the boiler must be continuously washed to remove the deposits. Most catalysts on the market have porous structures and geometries optimized for increasing their specific surface area (a clay planting pot is a good example of what an SCR catalyst feels like). This porosity is what gives the catalyst the high surface area needed for reduction of NOx. However, soot, ammonium sulfate, ammonium bisulfate, silica compounds, and other fine particulates can easily clog the pores. Ultrasonic horns and soot blowers can remove most of these contaminants while the unit is online. The unit can also be cleaned by being washed with water or by raising the exhaust temperature. Of more concern to SCR performance are poisons, which will chemically degrade the catalyst itself or block the catalyst's active sites and render it ineffective at NOx reduction; in severe cases this can result in the ammonia or urea being oxidized and a subsequent increase in emissions. These poisons are alkali metals, alkaline earth metals, halogens, phosphorus, sulfur, arsenic, antimony, chromium, heavy metals (copper, cadmium, mercury, thallium, and lead), and many heavy metal compounds (e.g. oxides and halides). Most SCRs require tuning to properly perform. Part of tuning involves ensuring a proper distribution of ammonia in the gas stream and uniform gas velocity through the catalyst. 
Without tuning, SCRs can exhibit inefficient NOx reduction along with excessive ammonia slip due to not utilizing the catalyst surface area effectively. Another facet of tuning involves determining the proper ammonia flow for all process conditions. Ammonia flow is in general controlled based on NOx measurements taken from the gas stream or preexisting performance curves from an engine manufacturer (in the case of gas turbines and reciprocating engines). Typically, all future operating conditions must be known beforehand to properly design and tune an SCR system. Ammonia slip is an industry term for ammonia passing through the SCR unreacted. This occurs when ammonia is injected in excess, temperatures are too low for ammonia to react, or the catalyst has been poisoned. In applications using both SCR and an alkaline scrubber, the use of high-sulfur fuels also tends to significantly increase ammonia slip, since alkaline compounds such as NaOH will convert ammonium sulfate and ammonium bisulfate back into ammonia: (NH4)2SO4 + 2NaOH → Na2SO4 + 2NH3 + 2H2O. Temperature is SCR's largest limitation. Engines all have a period during start-up where exhaust temperatures are too low, and the catalyst must be pre-heated for the desired NOx reduction to occur when an engine is first started, especially in cold climates. Power plants In power stations, the same basic technology is employed for removal of NOx from the flue gas of boilers used in power generation and industry. In general, the SCR unit is located between the furnace economizer and the air heater, and the ammonia is injected into the catalyst chamber through an ammonia injection grid. As in other SCR applications, the temperature of operation is critical. Ammonia slip (unreacted ammonia) is also an issue with SCR technology used in power plants. A significant operational difficulty in coal-fired boilers is the binding of the catalyst by fly ash from the fuel combustion. This requires the usage of sootblowers, ultrasonic horns, and careful design of the ductwork and catalyst materials to avoid plugging by the fly ash. SCR catalysts have a typical operational lifetime of about 16,000 – 40,000 hours (1.8 – 4.5 years) in coal-fired power plants, depending on the flue gas composition, and up to 80,000 hours (9 years) in cleaner gas-fired power plants. Poisons, sulfur compounds, and fly ash can all be removed by installing scrubbers before the SCR system to increase the life of the catalyst, though in most power plants and marine engines, scrubbers are installed after the system to maximize the SCR system's effectiveness. Automobiles History SCR was applied to trucks by Nissan Diesel Corporation, and the first practical product, the "Nissan Diesel Quon", was introduced in 2004 in Japan. In 2007, the United States Environmental Protection Agency (EPA) enacted requirements to significantly lower harmful exhaust emissions. To achieve this standard, Cummins and other diesel engine manufacturers developed an aftertreatment system that includes the use of a diesel particulate filter (DPF). As the DPF does not function with high-sulfur diesel fuel, diesel engines that conform to 2007 EPA emissions standards require ultra-low sulfur diesel fuel (ULSD) to prevent damage to the DPF. After a brief transition period, ULSD fuel became common at fuel pumps in the United States and Canada. The 2007 EPA regulations were meant to be an interim solution to allow manufacturers time to prepare for the more stringent 2010 EPA regulations, which lower NOx levels even further. 
2010 EPA regulations Diesel engines manufactured after January 1, 2010 are required to meet lowered NOx standards for the US market. All of the heavy-duty engine (Class 7-8 truck) manufacturers continuing to manufacture engines after this date, except for Navistar International and Caterpillar, chose to use SCR. This includes Detroit Diesel (DD13, DD15, and DD16 models), Cummins (ISX, ISL9, and ISB6.7), Paccar, and Volvo/Mack. These engines require the periodic addition of diesel exhaust fluid (DEF, a urea solution) to enable the process. DEF is available in bottles and jugs from most truck stops, and a more recent development is bulk DEF dispensers near diesel fuel pumps. Caterpillar and Navistar had initially chosen to use enhanced exhaust gas recirculation (EEGR) to comply with the Environmental Protection Agency (EPA) standards, but in July 2012 Navistar announced it would be pursuing SCR technology for its engines, except on the MaxxForce 15, which was to be discontinued. Caterpillar ultimately withdrew from the on-highway engine market prior to implementation of these requirements. BMW, Daimler AG (as BlueTEC), and Volkswagen have used SCR technology in some of their passenger diesel cars. See also Acid rain Catalytic converter, which also catalyzes NOx conversion but does not use urea or ammonia Diesel exhaust fluid (DEF) or AdBlue Exhaust gas recirculation versus selective catalytic reduction Environmental engineering Selective non-catalytic reduction (SNCR) NOx adsorber (LNT) Vehicle emissions control References Pollution control technologies Chemical processes Air pollution control systems NOx control Catalysis
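As a back-of-the-envelope illustration of the standard SCR reaction given in the chemistry section (4NO + 4NH3 + O2 → 4N2 + 6H2O, i.e., one mole of NH3 per mole of NO), the sketch below estimates reductant demand. It deliberately ignores NO2, ammonia slip, and real dosing strategies.

```python
# Ammonia demand implied by 4NO + 4NH3 + O2 -> 4N2 + 6H2O (1:1 molar ratio).
M_NO, M_NH3 = 30.01, 17.03   # molar masses in g/mol

def nh3_required_kg(kg_no, molar_ratio=1.0):
    # kg of NH3 needed to reduce kg_no of NO at the given NH3:NO molar ratio
    return kg_no * molar_ratio * M_NH3 / M_NO

print(f"{nh3_required_kg(1.0):.2f} kg NH3 per kg NO")   # about 0.57
```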
Selective catalytic reduction
[ "Chemistry", "Engineering" ]
2,523
[ "Catalysis", "Pollution control technologies", "Chemical processes", "nan", "Environmental engineering", "Chemical process engineering", "Chemical kinetics" ]
2,219,887
https://en.wikipedia.org/wiki/Negative%20refraction
In optics, negative refraction is the electromagnetic phenomenon where light rays become refracted at an interface in the direction opposite to their more commonly observed positive refractive properties. Negative refraction can be obtained by using a metamaterial which has been designed to achieve a negative value for electric permittivity (ε) and magnetic permeability (μ); in such cases the material can be assigned a negative refractive index. Such materials are sometimes called "double negative" materials. Negative refraction occurs at interfaces between materials at which one has an ordinary positive phase velocity (i.e., a positive refractive index), and the other has the more exotic negative phase velocity (a negative refractive index). Negative phase velocity Negative phase velocity (NPV) is a property of light propagation in a medium. There are different definitions of NPV; the most common is Victor Veselago's original proposal of opposition of the wave vector and the (Abraham) Poynting vector. Other definitions include the opposition of wave vector to group velocity, and energy to velocity. "Phase velocity" is used conventionally, as phase velocity has the same sign as the wave vector. A typical criterion used to determine Veselago's NPV is that the dot product of the Poynting vector P and wave vector k is negative (i.e., that P · k < 0), but this definition is not covariant. While this restriction is not practically significant, the criterion has been generalized into a covariant form. Veselago NPV media are also called "left-handed (meta)materials", as the components of plane waves passing through (electric field, magnetic field, and wave vector) follow the left-hand rule instead of the right-hand rule. The terms "left-handed" and "right-handed" are generally avoided as they are also used to refer to chiral media. Negative refractive index One can choose to avoid directly considering the Poynting vector and wave vector of a propagating light field, and instead directly consider the response of the materials. Assuming the material is achiral, one can consider what values of permittivity (ε) and permeability (μ) result in negative phase velocity (NPV). Since both ε and μ are generally complex, their imaginary parts do not have to be negative for a passive (i.e. lossy) material to display negative refraction. In these materials, the criterion for negative phase velocity is derived by Depine and Lakhtakia to be εr|μ| + μr|ε| < 0, where εr and μr are the real valued parts of ε and μ, respectively. For active materials, the criterion is different. NPV occurrence does not necessarily imply negative refraction (negative refractive index). Typically, the refractive index n is determined using n² = εμ (in relative units), where by convention the positive square root is chosen for n. However, in NPV materials, the negative square root is chosen to mimic the fact that the wave vector and phase velocity are also reversed. The refractive index is a derived quantity that describes how the wavevector is related to the optical frequency and propagation direction of the light; thus, the sign of n must be chosen to match the physical situation. In chiral materials The refractive index also depends on the chirality parameter κ, resulting in distinct values for left and right circularly polarized waves, given by n± = √(εμ) ± κ. A negative refractive index occurs for one polarization if κ > √(εμ); in this case, ε and/or μ do not need to be negative. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al., and first observed simultaneously and independently by Plum et al. 
and Zhang et al. in 2009. Refraction The consequence of negative refraction is light rays are refracted on the same side of the normal on entering the material, as indicated in the diagram, and by a general form of Snell's law. See also Acoustic metamaterials Metamaterial Negative index metamaterials Metamaterial antennas Multiple-prism dispersion theory N-slit interferometric equation Perfect lens Photonic metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Tunable metamaterials Electromagnetic interactions Bloch's theorem Casimir effect Dielectric Electromagnetism EM radiation Electron mobility Permeability (electromagnetism)* Permittivity* Wavenumber Photo-Dember Impedance References Photonics Physical phenomena Metamaterials Articles containing video clips
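As a quick numeric illustration of the sign conventions discussed above, the following sketch (illustrative values only, not from the article) picks the square-root branch of n = ±√(εμ) so that a passive medium remains lossy (Im n ≥ 0), which forces Re n < 0 for a double-negative medium, and evaluates the Depine-Lakhtakia NPV criterion:

```python
import numpy as np

def refractive_index(eps, mu):
    """n = sqrt(eps*mu), picking the branch with Im(n) >= 0 so that a
    passive (lossy) medium stays lossy; for a double-negative medium
    this forces Re(n) < 0."""
    n = np.sqrt(complex(eps) * complex(mu))
    return n if n.imag >= 0 else -n

def veselago_npv(eps, mu):
    """Depine-Lakhtakia criterion for negative phase velocity:
    Re(eps)*|mu| + Re(mu)*|eps| < 0."""
    eps, mu = complex(eps), complex(mu)
    return eps.real * abs(mu) + mu.real * abs(eps) < 0

eps, mu = -1.0 + 0.01j, -1.0 + 0.01j   # slightly lossy double-negative medium
n = refractive_index(eps, mu)
print(f"n = {n:.4f}")                  # Re(n) < 0, Im(n) > 0
print("NPV:", veselago_npv(eps, mu))   # True
```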
Negative refraction
[ "Physics", "Materials_science", "Engineering" ]
915
[ "Physical phenomena", "Metamaterials", "Materials science" ]
2,220,039
https://en.wikipedia.org/wiki/Rotating-wave%20approximation
The rotating-wave approximation is an approximation used in atom optics and magnetic resonance. In this approximation, terms in a Hamiltonian that oscillate rapidly are neglected. This is a valid approximation when the applied electromagnetic radiation is near resonance with an atomic transition, and the intensity is low. Explicitly, terms in the Hamiltonians that oscillate with frequencies ω_L + ω_0 are neglected, while terms that oscillate with frequencies ω_L − ω_0 are kept, where ω_L is the light frequency, and ω_0 is a transition frequency. The name of the approximation stems from the form of the Hamiltonian in the interaction picture, as shown below. By switching to this picture the evolution of an atom due to the corresponding atomic Hamiltonian is absorbed into the system ket, leaving only the evolution due to the interaction of the atom with the light field to consider. It is in this picture that the rapidly oscillating terms mentioned previously can be neglected. Since in some sense the interaction picture can be thought of as rotating with the system ket, only that part of the electromagnetic wave that approximately co-rotates is kept; the counter-rotating component is discarded. The rotating-wave approximation is closely related to, but different from, the secular approximation. Mathematical formulation For simplicity consider a two-level atomic system with ground and excited states |g⟩ and |e⟩, respectively (using the Dirac bracket notation). Let the energy difference between the states be ħω_0 so that ω_0 is the transition frequency of the system. Then the unperturbed Hamiltonian of the atom can be written as H_0 = ħω_0 |e⟩⟨e|. Suppose the atom experiences an external classical electric field of frequency ω_L, given by E(t) = E_0 e^(−iω_L t) + E_0* e^(iω_L t); e.g., a plane wave propagating in space. Then under the dipole approximation the interaction Hamiltonian between the atom and the electric field can be expressed as H_1 = −d·E, where d is the dipole moment operator of the atom. The total Hamiltonian for the atom-light system is therefore H = H_0 + H_1. The atom does not have a dipole moment when it is in an energy eigenstate, so ⟨e|d|e⟩ = ⟨g|d|g⟩ = 0. This means that defining d_eg = ⟨e|d|g⟩ allows the dipole operator to be written as d = d_eg |e⟩⟨g| + d_eg* |g⟩⟨e| (with * denoting the complex conjugate). The interaction Hamiltonian can then be shown to be H_1 = −ħ(Ω e^(−iω_L t) + Ω̃ e^(iω_L t)) |e⟩⟨g| − ħ(Ω̃* e^(−iω_L t) + Ω* e^(iω_L t)) |g⟩⟨e|, where Ω = ħ^(−1) d_eg·E_0 is the Rabi frequency and Ω̃ = ħ^(−1) d_eg·E_0* is the counter-rotating frequency. To see why the Ω̃ terms are called counter-rotating consider a unitary transformation to the interaction or Dirac picture where the transformed Hamiltonian is given by H_(1,I) = −ħ(Ω e^(−iΔt) + Ω̃ e^(i(ω_L+ω_0)t)) |e⟩⟨g| − ħ(Ω̃* e^(−i(ω_L+ω_0)t) + Ω* e^(iΔt)) |g⟩⟨e|, where Δ = ω_L − ω_0 is the detuning between the light field and the atom. Making the approximation This is the point at which the rotating wave approximation is made. The dipole approximation has been assumed, and for this to remain valid the electric field must be near resonance with the atomic transition. This means that Δ ≪ ω_L + ω_0 and the complex exponentials multiplying Ω̃ and Ω̃* can be considered to be rapidly oscillating. Hence on any appreciable time scale, the oscillations will quickly average to 0. The rotating wave approximation is thus the claim that these terms may be neglected and thus the Hamiltonian can be written in the interaction picture as H_(1,I)^RWA = −ħΩ e^(−iΔt) |e⟩⟨g| − ħΩ* e^(iΔt) |g⟩⟨e|. Finally, transforming back into the Schrödinger picture, the Hamiltonian is given by H^RWA = ħω_0 |e⟩⟨e| − ħΩ e^(−iω_L t) |e⟩⟨g| − ħΩ* e^(iω_L t) |g⟩⟨e|. Another criterion for the rotating wave approximation is the weak coupling condition, that is, the Rabi frequency should be much less than the transition frequency: |Ω| ≪ ω_0. At this point the rotating wave approximation is complete. A common first step beyond this is to remove the remaining time dependence in the Hamiltonian via another unitary transformation.
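To make the size of the neglected terms concrete, here is a minimal numerical sketch (not part of the original article; the parameters are arbitrary, ħ = 1, and the Rabi frequency is taken real so that Ω̃ = Ω). It integrates the two-level Schrödinger equation with the full cosine drive, which keeps the counter-rotating terms, and with the RWA Hamiltonian derived above; for Ω ≪ ω_0 the two excited-state populations agree up to small, fast wiggles:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, wL, Om = 50.0, 50.0, 1.0        # transition, drive and Rabi frequencies (hbar = 1)
g = np.array([1, 0], complex)        # |g>
e = np.array([0, 1], complex)        # |e>
sp = np.outer(e, g)                  # |e><g|
sm = sp.conj().T                     # |g><e|
H0 = w0 * np.outer(e, e)

# Full coupling -2*Om*cos(wL*t) keeps the counter-rotating terms;
# the RWA form keeps only the co-rotating exponentials.
H_full = lambda t: H0 - 2 * Om * np.cos(wL * t) * (sp + sm)
H_rwa  = lambda t: H0 - Om * (np.exp(-1j * wL * t) * sp
                              + np.exp(+1j * wL * t) * sm)

ts = np.linspace(0.0, np.pi / Om, 600)      # one full Rabi flop
for name, H in (("full", H_full), ("RWA ", H_rwa)):
    rhs = lambda t, psi, H=H: -1j * (H(t) @ psi)
    sol = solve_ivp(rhs, (ts[0], ts[-1]), g, t_eval=ts, rtol=1e-9, atol=1e-11)
    Pe = np.abs(sol.y[1]) ** 2
    print(name, "max excited-state population:", Pe.max().round(4))
# Both print ~1.0; the residual difference is O(Om/w0), the size of
# the neglected counter-rotating terms (Bloch-Siegert shift).
```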
Derivation Given the above definitions the interaction Hamiltonian is H_1 = −ħ(Ω e^(−iω_L t) + Ω̃ e^(iω_L t)) |e⟩⟨g| − ħ(Ω̃* e^(−iω_L t) + Ω* e^(iω_L t)) |g⟩⟨e|, as stated. The next step is to find the Hamiltonian in the interaction picture, H_(1,I) = U† H_1 U. The required unitary transformation is U = e^(iH_0 t/ħ) = e^(iω_0 t |e⟩⟨e|) = |g⟩⟨g| + e^(iω_0 t) |e⟩⟨e|, where the 3rd step can be proved by using a Taylor series expansion, and using the orthogonality of the states |g⟩ and |e⟩. Note that a multiplication by an overall phase on a unitary operator does not affect the underlying physics, so in the further usages of U we will neglect it. Applying U gives: H_(1,I) = −ħ(Ω e^(−iΔt) + Ω̃ e^(i(ω_L+ω_0)t)) |e⟩⟨g| − ħ(Ω̃* e^(−i(ω_L+ω_0)t) + Ω* e^(iΔt)) |g⟩⟨e|. Now we apply the RWA by eliminating the counter-rotating terms as explained in the previous section: H_(1,I)^RWA = −ħΩ e^(−iΔt) |e⟩⟨g| − ħΩ* e^(iΔt) |g⟩⟨e|. Finally, we transform the approximate Hamiltonian H_(1,I)^RWA back to the Schrödinger picture: H_1^RWA = U† H_(1,I)^RWA U = −ħΩ e^(−iω_L t) |e⟩⟨g| − ħΩ* e^(iω_L t) |g⟩⟨e|. The atomic Hamiltonian was unaffected by the approximation, so the total Hamiltonian in the Schrödinger picture under the rotating wave approximation is H^RWA = ħω_0 |e⟩⟨e| − ħ(Ω e^(−iω_L t) |e⟩⟨g| + Ω* e^(iω_L t) |g⟩⟨e|). References Atomic, molecular, and optical physics Chemical physics
Rotating-wave approximation
[ "Physics", "Chemistry" ]
852
[ "Applied and interdisciplinary physics", " molecular", "nan", "Atomic", "Chemical physics", " and optical physics" ]
2,220,957
https://en.wikipedia.org/wiki/Electron%20optics
Electron optics is a mathematical framework for the calculation of electron trajectories in the presence of electromagnetic fields. The term optics is used because magnetic and electrostatic lenses act upon a charged particle beam similarly to optical lenses upon a light beam. Electron optics calculations are crucial for the design of electron microscopes and particle accelerators. In the paraxial approximation, trajectory calculations can be carried out using ray transfer matrix analysis. Electron properties Electrons are charged particles (point charges with rest mass) with spin 1/2 (hence they are fermions). Electrons can be accelerated by suitable electric fields, thereby acquiring kinetic energy. Given sufficient voltage, the electron can be accelerated sufficiently fast to exhibit measurable relativistic effects. According to wave-particle duality, electrons can also be considered as matter waves with properties such as wavelength, phase and amplitude. Geometric electron optics Hamilton's optico-mechanical analogy shows that electron beams can be modeled using the concepts and mathematical formulas of light beams. The electron particle trajectory formula matches the formula for geometrical optics with a suitable electron-optical index of refraction. This index of refraction functions like the material properties of glass in altering the direction of ray propagation. In light optics, the refractive index changes abruptly at a surface between regions of constant index: the rays are controlled by the shape of the interface. In electron optics, the index varies throughout space and is controlled by electromagnetic fields created outside the electron trajectories. Magnetic fields Electrons interact with magnetic fields according to the second term of the Lorentz force: a cross product between the magnetic field and the electron velocity. In an infinite uniform field this results in a circular motion of the electron around the field direction with a radius given by r = m v⊥ / (e B), where r is the orbit radius, m is the mass of an electron, v⊥ is the component of the electron velocity perpendicular to the field, e is the electron charge and B is the magnitude of the applied magnetic field. Electrons that have a velocity component parallel to the magnetic field will proceed along helical trajectories. Electric fields In the case of an applied electrostatic field, an electron will deflect towards the positive gradient of the field. Notably, this crossing of electrostatic field lines means that electrons, as they move through electrostatic fields, change the magnitude of their velocity, whereas in magnetic fields only the velocity direction is modified. Relativistic theory At relativistic electron velocity the geometrical electron optical equations rely on an index of refraction that includes both β = v/c, the ratio of electron velocity to the speed of light, and A_t, the component of the magnetic vector potential along the electron direction. The electrostatic term of this index is controlled by electrostatic lenses, while the term proportional to A_t is controlled by magnetic lenses. Although not very common, it is also possible to derive the effects of magnetic structures on charged particles starting from the Dirac equation. Diffractive electron optics As electrons can exhibit non-particle (wave-like) effects such as interference and diffraction, a full analysis of electron paths must go beyond geometrical optics.
Free electron propagation (in vacuum) can be accurately described as a de Broglie matter wave with a wavelength inversely proportional to its longitudinal (possibly relativistic) momentum. Fortunately, as long as the electromagnetic field traversed by the electron changes only slowly compared with this wavelength (see typical values under applications of matter waves), Kirchhoff's diffraction formula applies. The essential character of this approach is to use geometrical ray tracing but to keep track of the wave phase along each path to compute the intensity in the diffraction pattern. As a result of the charge carried by the electron, electric fields, magnetic fields, or the electrostatic mean inner potential of thin, weakly interacting materials can impart a phase shift to the wavefront of an electron. Thickness-modulated silicon nitride membranes and programmable phase shift devices have exploited these properties to apply spatially varying phase shifts to control the far-field spatial intensity and phase of the electron wave. Devices like these have been applied to arbitrarily shape the electron wavefront, correct the aberrations inherent to electron microscopes, resolve the orbital angular momentum of a free electron, and measure dichroism in the interaction between free electrons and magnetic materials or plasmonic nanostructures. Limitations of applying light optics techniques Electrons interact strongly with matter, as they are sensitive not only to the nucleus but also to the matter's electron charge cloud. Therefore, electrons require vacuum to propagate any reasonable distance, such as would be desirable in an electron-optical system. Penetration in vacuum is dictated by the mean free path, a measure of the probability of collision between electrons and matter, approximate values for which can be derived from Poisson statistics. See also Charged particle beam Strong focusing Electron beam technology Electron microscope Beam emittance Ernst Ruska Hemispherical electron energy analyzer Further reading P. Grivet, P. W. Hawkes, A. Septier (1972). Electron Optics, 2nd edition. Pergamon Press. A. Septier (ed.) (1980). Applied Charged Particle Optics, Part A. Academic Press. A. Septier (ed.) (1967). Focusing of Charged Particles, Volume 1. Academic Press. D. W. O. Heddle (2000). Electrostatic Lens Systems, 2nd edition. CRC Press. A. B. El-Kareh, J. C. J. El-Kareh (1970). Electron Beams, Lenses, and Optics, Vol. 1. Academic Press. Hawkes, P. W. & Kasper, E. (1994). Principles of Electron Optics. Academic Press. Pozzi, G. (2016). Particles and Waves in Electron Optics and Microscopy. Academic Press. Jon Orloff et al. (2008). Handbook of Charged Particle Optics, 2nd edition. CRC Press. Bohdan Paszkowski (1968). Electron Optics. Iliffe Books Ltd. Miklos Szilagyi (1988). Electron and Ion Optics. Springer New York, NY. Helmut Liebl (2008). Applied Charged Particle Optics. Springer Berlin. Erwin Kasper (2001). Advances in Imaging and Electron Physics, Vol. 116: Numerical Field Calculation for Charged Particle Optics. Academic Press. Harald Rose (2012). Geometrical Charged-Particle Optics. Springer Berlin, Heidelberg. Electron Optics Simulation Software Commercial programs SIMION (Ion and Electron Optics Simulator) EOD (Electron Optical Design) CPO (electronoptics.com) MEBS (Munro's Electron Beams Software) Field Precision LLC Free Software IBSIMU (by Taneli Kalvas) (ibsimu.SourceForge.net) References Electromagnetism Accelerator physics
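The two closed-form results quoted above, the orbit radius r = m v⊥/(eB) and the relativistically corrected de Broglie wavelength of an accelerated electron, are easy to evaluate numerically. The sketch below is illustrative only; the field strength and accelerating voltage are arbitrary example values:

```python
import math

M_E = 9.1093837e-31    # electron rest mass, kg
Q_E = 1.6021766e-19    # elementary charge, C
C   = 2.99792458e8     # speed of light, m/s
H   = 6.62607015e-34   # Planck constant, J*s

def cyclotron_radius(v_perp, B):
    """Orbit radius r = m*v_perp / (e*B) in a uniform magnetic field."""
    return M_E * v_perp / (Q_E * B)

def electron_wavelength(V):
    """de Broglie wavelength of an electron accelerated through V volts,
    with the relativistic correction to the momentum."""
    p = math.sqrt(2 * M_E * Q_E * V * (1 + Q_E * V / (2 * M_E * C**2)))
    return H / p

print(f"r = {cyclotron_radius(1e6, 0.01):.2e} m")       # ~5.7e-4 m at 10^6 m/s, 10 mT
print(f"lambda = {electron_wavelength(300e3):.2e} m")   # ~2.0e-12 m at 300 kV (TEM-like)
```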
Electron optics
[ "Physics" ]
1,415
[ "Electromagnetism", "Physical phenomena", "Applied and interdisciplinary physics", "Experimental physics", "Fundamental interactions", "Accelerator physics" ]
2,221,141
https://en.wikipedia.org/wiki/Electroceramics
Electroceramics are a class of ceramic materials used primarily for their electrical properties. While ceramics have traditionally been admired and used for their mechanical, thermal and chemical stability, their unique electrical, optical and magnetic properties have become of increasing importance in many key technologies including communications, energy conversion and storage, electronics and automation. Such materials are now classified under electroceramics, as distinguished from other functional ceramics such as advanced structural ceramics. Historically, developments in the various subclasses of electroceramics have paralleled the growth of new technologies. Examples include: ferroelectrics - high dielectric capacitors, non-volatile memories; ferrites - data and information storage; solid electrolytes - energy storage and conversion; piezoelectrics - sonar; semiconducting oxides - environmental monitoring. Recent advances in these areas are described in the Journal of Electroceramics. Dielectric ceramics Dielectric materials used for the construction of ceramic capacitors include: lead zirconate titanate (PZT), barium titanate (BT), strontium titanate (ST), calcium titanate (CT), magnesium titanate (MT), calcium magnesium titanate (CMT), zinc titanate (ZT), lanthanum titanate (LT), neodymium titanate (NT), barium zirconate (BZ), calcium zirconate (CZ), lead magnesium niobate (PMN), lead zinc niobate (PZN), lithium niobate (LN), barium stannate (BS), calcium stannate (CS), magnesium aluminium silicate, magnesium silicate, barium tantalate, titanium dioxide, niobium oxide, zirconia, silica, sapphire, beryllium oxide, and zirconium tin titanate. Some piezoelectric materials can be used as well; the EIA Class 2 dielectrics are based on mixtures rich in barium titanate. In turn, EIA Class 1 dielectrics contain little or no barium titanate. Electronically conductive ceramics Indium tin oxide (ITO), lanthanum-doped strontium titanate (SLT), yttrium-doped strontium titanate (SYT) Fast ion conductor ceramics Yttria-stabilized zirconia (YSZ), gadolinium-doped ceria (GDC), strontium- and magnesium-doped lanthanum gallate (LSGM), beta alumina Piezoelectric and ferroelectric ceramics The commercially used piezoceramic is primarily lead zirconate titanate (PZT); barium titanate (BT), strontium titanate (ST), quartz, and others are also used. Magnetic ceramics Ferrites, including those produced from iron(III) oxide and strontium carbonate, display magnetic properties. Lanthanum strontium manganite exhibits colossal magnetoresistance. See also Ceramic Genoa Joint Laboratories Strontium titanate Barium titanate Lead zirconate titanate References The Electroceramics and Crystal Physics Group at MIT Materials science Ceramic materials Condensed matter physics
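As a rough illustration of why the high-permittivity Class 2 (barium titanate based) dielectrics matter for capacitor construction, the ideal parallel-plate formula C = ε0·εr·A/d can be evaluated for a single dielectric layer. The geometry and the εr values below are order-of-magnitude assumptions for illustration, not data from the article:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r, area, thickness):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d (SI units)."""
    return EPS0 * eps_r * area / thickness

# One 1 mm x 1 mm electrode over a 10 um dielectric layer (hypothetical geometry):
for label, eps_r in (("Class 1 (low-permittivity titanate)", 40),
                     ("Class 2 (barium titanate based)", 3000)):
    C = plate_capacitance(eps_r, 1e-6, 10e-6)
    print(f"{label}: {C * 1e12:.0f} pF per layer")
```

The roughly two-orders-of-magnitude difference in εr is what lets Class 2 multilayer capacitors pack far more capacitance into the same volume, at the cost of the stability that Class 1 dielectrics provide.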
Electroceramics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
690
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Ceramic materials", "Condensed matter physics", "nan", "Ceramic engineering", "Matter" ]
2,221,187
https://en.wikipedia.org/wiki/Solid%20solution
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in the solid state sharing a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes – depending on the relative abundance of the atomic species. In general, if two compounds are isostructural then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite. Because minerals are natural materials they are prone to large variations in composition. In many cases specimens are members of a solid solution family and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series, forsterite (Mg endmember: Mg2SiO4) and fayalite (Fe endmember: Fe2SiO4), but the ratio in olivine is not normally specified. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation. Nomenclature The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase". The definition "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal", given in some references, is not general and, thus, is not recommended. The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes. One or several of the components can be macromolecules. Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states. In pharmaceutical preparations, the concept of solid solution is often applied to the case of mixtures of drug and polymer. The number of drug molecules that do behave as solvent (plasticizer) of polymers is small. Phase diagrams On a phase diagram a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural there are likely to be two solid solution ranges with different structures dictated by the parents. In this case the ranges may overlap and the materials in this region can have either structure, or there may be a miscibility gap in the solid state, indicating that attempts to generate materials with this composition will result in mixtures.
In areas on a phase diagram which are not covered by a solid solution there may be line phases: these are compounds with a known crystal structure and set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal. In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals) involved are close together on the periodic table; an intermetallic compound generally results when the two metals involved are not near each other on the periodic table. Details The solute may incorporate into the solvent crystal lattice substitutionally, by replacing a solvent particle in the lattice, or interstitially, by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure (unit cell) often expands to accommodate it; this means that the composition of a material in a solid solution can be calculated from the unit cell volume, a relationship known as Vegard's law. Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical, crystallographic, and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules, may form if the solute and solvent have: similar atomic radii (15% or less difference), the same crystal structure, similar electronegativities, and similar valency. The phase diagram in the above diagram displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element is of the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. Solid solutions of pseudo-binary systems in complex systems with three or more components may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions. Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent. The binary phase diagram in the above diagram shows the phases of a mixture of two substances in varying concentrations, A and B. The region labeled "α" is a solid solution, with B acting as the solute in a matrix of A. On the other end of the concentration scale, the region labeled "β" is also a solid solution, with A acting as the solute in a matrix of B. The large solid region in between the α and β solid solutions, labeled "α + β", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases—solid solution B-in-A and solid solution A-in-B would form separate phases, perhaps as lamellae or grains.
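A minimal sketch of how Vegard's law, mentioned above, is used in practice: assuming the lattice parameter varies linearly between the end members, a measured value can be inverted to estimate composition. The NaCl/KCl lattice parameters below are approximate literature values, and the measured value is hypothetical:

```python
def vegard_x(a_measured, a_A, a_B):
    """Invert Vegard's law a(x) = (1 - x)*a_A + x*a_B for the mole
    fraction x of end member B, given a measured lattice parameter."""
    return (a_measured - a_A) / (a_B - a_A)

a_NaCl, a_KCl = 5.64, 6.29          # approximate lattice parameters, angstroms
x = vegard_x(6.07, a_NaCl, a_KCl)   # hypothetical measured value
print(f"estimated K fraction: x = {x:.2f}")   # ~0.66, close to (Na0.33K0.66)Cl
```

In real systems the variation is only approximately linear, so this gives an estimate rather than an exact composition.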
Application In the phase diagram, at three different concentrations, the material will be solid until heated to its melting point, and then (after adding the heat of fusion) become liquid at that same temperature: the unalloyed extreme left; the unalloyed extreme right; and the dip in the center (the eutectic composition). At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted. The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (a 37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is quickly entered as the solder cools. In contrast, when lead-tin mixtures were used to solder seams in automobile bodies, a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70-30 lead-to-tin ratio was used. (Lead is being removed from such applications owing to its toxicity and the consequent difficulty in recycling devices and components that include lead.) Exsolution When a solid solution becomes unstable—due to a lower temperature, for example—exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by differences in cation size. Cations which have a large difference in radii are not likely to readily substitute for one another. Alkali feldspar minerals, for example, have end members of albite, NaAlSi3O8, and microcline, KAlSi3O8. At high temperatures Na+ and K+ readily substitute for each other and so the minerals will form a solid solution, yet at low temperatures albite can only substitute a small amount of K+ and the same applies for Na+ in the microcline. This leads to exsolution, where they separate into two distinct phases. In the case of the alkali feldspar minerals, thin white albite layers will alternate with typically pink microcline, resulting in a perthite texture. See also Solid solution strengthening Notes References External links DoITPoMS Teaching and Learning Package—"Solid Solutions" Materials science Mineralogy
Solid solution
[ "Physics", "Materials_science", "Engineering" ]
1,870
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
2,221,365
https://en.wikipedia.org/wiki/Queen%20Saovabha%20Memorial%20Institute
The Queen Saovabha Memorial Institute (QSMI) in Bangkok, Thailand, is an institute that specialises in the husbandry of venomous snakes, the extraction of and research into snake venom, and vaccines, especially rabies vaccine. It houses the snake farm, a popular tourist attraction. The origins of the institute can be traced back to 1912, when King Rama VI granted permission for a government institute to manufacture and distribute rabies vaccine at the suggestion of Prince Damrong, whose daughter had died from a rabies infection. It was officially opened on 26 October 1913 in the Luang Building on Bamrung Muang Road as the Pastura Institute, after Louis Pasteur, who discovered the first vaccine against rabies. In 1917 it was renamed the Pasteur Institute and placed under the supervision of the Thai Red Cross Society. The institute also produced vaccine against smallpox. The Travel and Immunization Clinic is now located here; it offers vaccines and pre-travel consultations. In the early 1920s the king offered his private property for the construction of a new home for the institute on Rama IV Road. The new buildings were officially opened on 7 December 1922, now named for the king's mother, Queen Saovabha Phongsri. At the same time, the institute's first director, Dr. Leopold Robert, requested contributions from foreigners living in Thailand for the establishment of a snake farm, which would enable the institute to manufacture antivenom for snake bites. Reportedly the second snake farm in the world after the Instituto Butantan in São Paulo, Brazil, it was opened on 22 November 1923 by Queen Savang Vadhana, then President of the Thai Red Cross, on the institute's premises. Research into snake venom is highly important, since many people fall victim to venomous snake bites, and normally only an antivenom derived from the same snake's venom can save the victim's life. The snake farm houses thousands of some of the most venomous snakes in the world, such as the king cobra and all sorts of vipers. Visitors can see handlers interact with pythons, and venom extractions can be seen. There is also a museum, and lectures are given. The QSMI and the snake farm are near Chulalongkorn Hospital, on the corner of Henri Dunant Road and Rama IV Road. References External links FactZoo.com | Queen Saovabha Memorial Institute's Visitors Brochure Thai Red Cross | Queen Saovabha Memorial Institute Bangkok Metropolitan Administration | Queen Saovabha Memorial Institute Out About Bangkok | Queen Saovapha Memorial Institute (includes details of institute's work) Blurrytravel.com | Queen Saovabha Memorial Institute (includes photos) Thailandguidebook.com | Queen Saovabha Memorial Institute (includes photos) Virtualtourist.com | Bangkok Travel Guide (includes reviews and photos) Herpetology organizations Toxicology organizations Research institutes in Thailand Tourist attractions in Bangkok Biological research institutes Museums in Bangkok Thai Red Cross Society Unregistered ancient monuments in Bangkok Pathum Wan district
Queen Saovabha Memorial Institute
[ "Environmental_science" ]
623
[ "Toxicology organizations", "Toxicology" ]
2,221,377
https://en.wikipedia.org/wiki/Irma%20Goldberg
Irma Goldberg (born 1871) was a Russian-born chemist. She was one of the first female organic chemists to have and sustain a successful career, her work even being cited under her own name in standard textbooks. Life Education Born in Moscow to a Russian-Jewish family, she traveled to Geneva in the 1890s to study chemistry at Geneva University. Early research, Ullmann reaction Her early research included the development of a process to remove sulfur and phosphorus from acetylene. Her first article, on the derivatives of benzophenone, coauthored with the German chemist Fritz Ullmann, was published in 1897. She also researched and wrote a paper (published in 1904) on using copper as a catalyst for the preparation of a phenyl derivative of thiosalicylic acid, a process known as the Ullmann reaction; Goldberg is the only woman scientist unambiguously recognized for her own named reaction: the amidation (Goldberg) reaction. This modification of previous forms of the method was a great improvement, and was extremely helpful for laboratory-scale preparations. She collaborated on other chemistry research with her future husband, Fritz Ullmann, in what they called the Ullmann-Goldberg collaborative. Move to Berlin, synthetic dye research In 1905, both Goldberg and Ullmann moved to the Technische Hochschule in Berlin. Goldberg's research, along with that of the Ullmann-Goldberg collaborative, was also a part of Germany's synthetic dye industry. Their research helped with the creation of the synthetic alizarin industry, that is, the process of replacing natural dye obtained from madder. In 1909, Goldberg also collaborated with Hermann Friedman to review German patents under BASF (Badische Anilin und Soda Fabrik) and Bayer & Co. Farbenfabriken, providing notes on the preparation of 114 dyes. Marriage and later life In 1910, Goldberg married Ullmann. In 1923, they moved back to Geneva when Ullmann accepted a faculty position at Geneva University. Her exact death date is not known, but her name appears at the top of a list of people signing a memorial notice in a Geneva newspaper for her husband, Fritz Ullmann, who died in 1939. See also Timeline of women in science References External links 19th-century scientists from the Russian Empire 19th-century women scientists from the Russian Empire 20th-century Russian women scientists German women chemists Organic chemists 1871 births Year of death missing Emigrants from the Russian Empire to the German Empire 19th-century German women scientists 19th-century Swiss women scientists Chemists from the Russian Empire
Irma Goldberg
[ "Chemistry" ]
525
[ "Organic chemists" ]
2,221,532
https://en.wikipedia.org/wiki/American%20Coalition%20for%20Clean%20Coal%20Electricity
The American Coalition for Clean Coal Electricity (ACCCE, formerly ABEC or Americans for Balanced Energy Choices) is a U.S. non-profit advocacy group representing major American coal producers, utility companies and railroads. The organization seeks to influence public opinion and legislation in favor of coal-generated electricity in the United States, placing emphasis on the development and deployment of clean coal technologies. Since carbon capture and sequestration—which ACCCE and its member companies advocate to reduce greenhouse gas emissions from coal burning—has yet to be tested on a large scale, some have questioned whether this approach is feasible or realistic. In 2009, ACCCE faced a Congressional investigation when it was discovered that a lobbying firm hired by ACCCE had sent lawmakers forged letters purporting to come from a variety of minority-focused non-profit groups. History The ACCCE began operations in 2008, the result of a combination of two organizations: the Center for Energy and Economic Development (CEED) and Americans for Balanced Energy Choices (ABEC). CEED had been founded in 1992 and since then had been involved in a wide range of climate and energy policies related to coal-based electricity. ABEC, formed in 2000, had focused on consumer-based advocacy programs concerning the use of coal-based electricity. In 2008 these two groups were combined to form ACCCE, with the goal of focusing on both legislative and public advocacy efforts. The main programs include the America's Power campaign, launched in 2007 by ABEC, which had a significant presence during the 2008 and 2012 elections, as well as legislative efforts during the United States House of Representatives debate over the Waxman-Markey cap and trade legislation. Mike Duncan became President and CEO of ACCCE in 2012. By 2017, Duncan had been succeeded in that position by Paul Bailey, who had previously been named one of the top lobbyists by The Hill, where he was described as ACCCE's "point man for policy... essential in crafting the ACCCE's response" to the positions taken by the Obama administration. Another notable ACCCE lobbyist, Jaime Harrison, was a Democratic political operative who worked on behalf of ACCCE from 2009 to 2012. Harrison thereafter chaired the South Carolina Democratic Party, and in January 2017 made a bid for DNC chair, which he ended on February 23 with his endorsement of eventual winner Tom Perez. Harrison later accepted an appointment from Perez as Associate Chairman and Counselor of the Democratic National Committee. In June 2017, Paul Bailey joined Republican leaders including Paul Ryan and Mitch McConnell in welcoming the announcement by President Donald Trump of the United States' withdrawal from the Paris Agreement. Bailey stated that "[t]he previous administration volunteered to meet one of the most stringent goals of any country in the world, while many other countries do far less to reduce their emissions", and contended that "[m]eeting President Obama's goal would have led to more regulations, higher energy prices, and dependence on less reliable energy sources". The organization maintains headquarters in Washington, D.C.
Working methods Legislative In addressing comprehensive climate change legislation that would place a cap on greenhouse gas emissions and allow for trading of emission allowances, the position of ACCCE has primarily involved advocating for the development and use of clean coal technologies, while also including provisions concerning the allocation of carbon emission allowances. ACCCE has also expressed support for a ceiling on emission allowance prices. At the time in 2008 when the U.S. Senate was considering the Lieberman-Warner bill – which would create a cap and trade system – ACCCE changed its prior stance towards climate-change legislation, noting that it "would support mandatory limits on carbon dioxide as long as legislation met a set of principles that encouraged 'robust utilization of coal.'" The group also employed legislative efforts surrounding the 2009 debate over the Waxman-Markey cap and trade legislation, arguing that regulations relating to carbon emissions in the proposed legislation would have led to increased energy costs and a reduction in employment – potentially placing additional strain on the economy during the late 2000s recession. ACCCE provided proposals to Members of Congress for changes in this legislation, and approved of some changes that were adopted, though the group did not support the final version of the bill that passed the U.S. House of Representatives on account of concerns that there were not enough measures taken to control energy rates. Advocacy-based In addition to legislative methods employed by ACCCE, the organization has engaged in consumer-focused advocacy efforts in response to perceived environmental effects surrounding clean coal, consisting of direct-to-consumer advertising, as well as a group of approximately 225,000 volunteers (referred to as "America's Power Army," according to their website) involved in "visiting town hall meetings, fairs and other functions attended by members of Congress (to) ask questions about energy policy." Initiatives of this form became the subject of news coverage surrounding the 2008 United States presidential election, as the organization's presence at the Democratic National Convention, Republican National Convention, presidential debates and other events has been described as having impacted both Senators John McCain's and Barack Obama's positions with regard to investment in clean coal. In the last debate held prior to the election in 2008, Senator Obama noted his support of clean coal technology when prompted by Senator McCain to explain a time in which he had backed a position not favored by the leaders of the Democratic Party. The organization actively countered President Obama's climate change agenda, arguing in 2013 that the industry had "made strides toward making coal more environmentally friendly", with ten new clean-coal technology plants having been built between 2011 and mid-2013, and five more in development or scheduled to begin operations at that time. Duncan asserted that regulations propounded by the Environmental Protection Agency had contributed to nearly 290 coal plant closures that year, with more likely to come if additional regulations were enacted, and that absent the additional burdens imposed on the industry by such interference, the coal industry would continue developing cleaner technologies. ACCCE supported the FutureGen carbon capture and sequestration project, first announced by President George W. Bush in 2003.
The project was funded in the American Recovery and Reinvestment Act of 2009, but the Department of Energy suspended the project in February 2015. ACCCE's legislative positions and advocacy-based actions have been met with opposing viewpoints from advocacy groups such as the Sierra Club and Greenpeace, which have questioned the viability of developing environmentally sustainable clean coal within an adequate time frame and budget – representing their perspective that funding of such projects should be sourced exclusively from within the coal industry. Climate change denial Since 2009 the Coalition has – according to The Atlantic – "pushed outright denial of climate science". For example, in a 2014 report it called human-caused climate change a "hypothesis" and a "debate", claimed that carbon pollution would be beneficial rather than harmful and that its benefits could be up to 400 times as high as its costs, and asserted that higher atmospheric carbon dioxide levels would be a benefit and that more carbon dioxide had no "discernable influence" on how much sea level would rise. A 2009 article by Josh Harkinson of Mother Jones magazine said ACCCE was among the most prominent organizations in promoting climate disinformation, grouping it with entities including ExxonMobil, the American Petroleum Institute, The Heartland Institute, and the Institute for Energy Research, as "members of the chorus claiming that global warming is a joke and that CO2 emissions are actually good for you". Forgery controversy During the 2009 debate over the Waxman/Markey bill, Bonner & Associates, a Washington, D.C. lobbying firm subcontracted by ACCCE through the Hawthorne Group to drum up "grassroots support" for this effort, sent a number of fraudulent letters to lawmakers on behalf of ACCCE. The letters were forged to appear to come from various minority-focused non-profit groups, including the National Association for the Advancement of Colored People and the American Association of University Women. When the forgery was exposed, and faced with a proposed Congressional investigation, ACCCE apologized to the community groups and to the members of Congress involved. ACCCE disavowed the tactic and blamed the forgeries on their subcontractor, who in turn blamed a temporary worker, acting alone. The Washington Post described the situation as a "saga of modern Washington, in which an 'American coalition' [the ACCCE] claiming 200,000 supporters still relies on a subcontractor to gin up favorable letters." An investigation of ACCCE by U.S. Representative Edward Markey, launched in response to the forgeries, disclosed an additional set of fraudulent letters sent to lawmakers to lobby against the environmental legislation. In response to the investigation, the ACCCE pledged to take "all possible steps" to verify the authenticity of letters sent by Bonner & Associates on its behalf, and stated that it was cooperating with Markey's investigation. The investigation concluded in October 2009 with Jack Bonner, chairman of Bonner & Associates, taking “full responsibility” for the forged letters. Bonner and Associates was never paid by ACCCE for their work on the legislation. Members ACCCE is supported by 31 member organizations: Alliance Coal, LLC American Electric Power Associated Electric Cooperative Inc. Berwind Natural Resource Corp Big Rivers Electric Corporation BNSF Railway Buckeye Power Inc. Carbon Utilization Research Council (CURC) Caterpillar Incorporated Charah Crounse Corporation CSX Corporation Drummond Company, Inc.
Jackson Walker LLP John T. Boyd Company Kentucky River Coal Corporation Kentucky Coal Association Komatsu Mining Corporation Murray Energy Corporation Natural Resource Partners Norfolk Southern Corporation Oglethorpe Power Cooperative Ohio CAT Peabody Energy Corporation PowerSouth Energy Cooperative Prairie State Generating Company, LLC Southern Company Trapper Mining Union Pacific Railroad Western Fuels Association White Stallion Energy Center, LLC See also Coal power in the United States Clean coal technology References External links American Coalition for Clean Coal Electricity website American Coalition for Clean Coal Electricity at SourceWatch Climate change in the United States Coal in the United States Coal technology Political advocacy groups in the United States Energy organizations
American Coalition for Clean Coal Electricity
[ "Engineering" ]
2,077
[ "Energy organizations" ]
2,221,642
https://en.wikipedia.org/wiki/Assay%20sensitivity
Assay sensitivity is a property of a clinical trial defined as the ability of a trial to distinguish an effective treatment from a less effective or ineffective intervention. Without assay sensitivity, a trial is not internally valid and is not capable of comparing the efficacy of two interventions. Importance Lack of assay sensitivity has different implications for trials intended to show a difference greater than zero between interventions (superiority trials) and trials intended to show non-inferiority. Non-inferiority trials attempt to rule out some margin of inferiority between a test and control intervention, i.e. to show that the test intervention is not worse than the control intervention by more than a chosen amount. If a trial intended to demonstrate efficacy by showing superiority of a test intervention to control lacks assay sensitivity, it will fail to show that the test intervention is superior and will fail to lead to a conclusion of efficacy. In contrast, if a trial intended to demonstrate efficacy by showing a test intervention is non-inferior to an active control lacks assay sensitivity, the trial may find an ineffective intervention to be non-inferior and could lead to an erroneous conclusion of efficacy. When two interventions within a trial are shown to have different efficacy (i.e., when one intervention is superior), that finding itself directly demonstrates that the trial had assay sensitivity (assuming the finding is not related to random or systematic error). In contrast, a trial that demonstrates non-inferiority between two interventions, or an unsuccessful superiority trial, generally does not contain such direct evidence of assay sensitivity. However, the idea that non-inferiority trials lack assay sensitivity has been disputed. Differences in sensitivity Assay sensitivity for a non-inferiority trial may depend upon the chosen margin of inferiority ruled out by the trial, and the design of the planned non-inferiority trial. The chosen margin of inferiority in a non-inferiority trial cannot be larger than the largest effect size which the control intervention reliably and reproducibly demonstrates compared to placebo or no treatment in past superiority trials. For instance, if there is reliable and reproducible evidence from previous superiority trials of an effect size of 10% for a control intervention compared to placebo, an appropriately designed non-inferiority trial designed to rule out that the test intervention may be as much as 5% less effective than the control would have assay sensitivity. On the other hand, with this same data, a non-inferiority trial designed to rule out that the test intervention may be as much as 15% less effective than the control may not have assay sensitivity, since this trial would not ensure that the test intervention is any more effective than a placebo, given that the effect ruled out is larger than the effect of the control compared to placebo. The choice of the margin is sometimes problematic in non-inferiority trials. Given investigators' desire to choose larger margins to decrease the sample size needed to perform a trial, the chosen margin is sometimes larger than the effect size of the control compared to placebo. In addition, a valid non-inferiority trial is not possible in situations in which there is a lack of data demonstrating a reliable and reproducible effect of the control compared to placebo.
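The margin logic in the examples above can be written down directly. The sketch below encodes the common (but not universal) convention that a non-inferiority margin should preserve at least half of the control's reliably demonstrated effect over placebo; the 50% preservation fraction is an assumption beyond the article's text, which only requires the margin to be smaller than the control effect:

```python
def margin_is_acceptable(control_effect, margin, preserved_fraction=0.5):
    """True if the proposed non-inferiority margin (a) does not exceed
    the control's reliably demonstrated effect over placebo and (b)
    leaves at least `preserved_fraction` of that effect intact.  The
    50% preservation default is a common convention, not a rule from
    the article."""
    return 0 < margin <= (1 - preserved_fraction) * control_effect

print(margin_is_acceptable(0.10, 0.05))   # True:  the article's 5% example
print(margin_is_acceptable(0.10, 0.15))   # False: the article's 15% example
```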
In addition to choosing a margin based upon credible past evidence, to have assay sensitivity, the planned non-inferiority trial must be designed in a way similar to the past trials which demonstrated the effectiveness of the control compared to placebo, the so-called "constancy assumption". In this way, non-inferiority trials have a feature in common with external (historically) controlled trials. This also means that non-inferiority trials are subject to some of the same biases as historically controlled trials; that is, the effect of a drug in a past trial may not be the same in a current trial given changes in medical practice, differences in disease definitions or changes in the natural history of a disease, differences in outcome timing and definitions, usage of concomitant medications, etc. The finding of "difference" or "no difference" between two interventions is not a direct demonstration of the internal validity of the trial unless another internal control confirms that the study methods have the ability to show a difference, if one exists, over the range of interest (i.e. the trial contains a third group receiving placebo). Since most clinical trials do not contain an internal "negative" control (i.e. a placebo group) to internally validate the trial, the data to evaluate the validity of the trial comes from past trials external to the current trial. See also Specificity (tests) Spectrum bias References External links ClinicalTrials.gov from US National Library of Medicine FDA Website Clinical research Drug discovery Clinical trials
Assay sensitivity
[ "Chemistry", "Biology" ]
963
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
8,790,877
https://en.wikipedia.org/wiki/Pressure%20regulator
A pressure regulator is a valve that controls the pressure of a fluid to a desired value, using negative feedback from the controlled pressure. Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in the one body, or consist of a separate pressure sensor, controller and flow valve. Two types are found: The pressure reduction regulator and the back-pressure regulator. A pressure reducing regulator is a control valve that reduces the input pressure of a fluid to a desired value at its output. It is a normally-open valve and is installed upstream of pressure sensitive equipment. A back-pressure regulator, back-pressure valve, pressure sustaining valve or pressure sustaining regulator is a control valve that maintains the set pressure at its inlet side by opening to allow flow when the inlet pressure exceeds the set value. It differs from an over-pressure relief valve in that the over-pressure valve is only intended to open when the contained pressure is excessive, and it is not required to keep upstream pressure constant. They differ from pressure reducing regulators in that the pressure reducing regulator controls downstream pressure and is insensitive to upstream pressure. It is a normally-closed valve which may be installed in parallel with sensitive equipment or after the sensitive equipment to provide an obstruction to flow and thereby maintain upstream pressure. Both types of regulator use feedback of the regulated pressure as input to the control mechanism, and are commonly actuated by a spring loaded diaphragm or piston reacting to changes in the feedback pressure to control the valve opening, and in both cases the valve should be opened only enough to maintain the set regulated pressure. The actual mechanism may be very similar in all respects except the placing of the feedback pressure tap. As in other feedback control mechanisms, the level of damping is important to achieve a balance between fast response to a change in the measured pressure, and stability of output. Insufficient damping may lead to hunting oscillation of the controlled pressure, while excessive friction of moving parts may cause hysteresis. Pressure reducing regulator Operation A pressure reducing regulator's primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, whilst maintaining a sufficiently constant output pressure. If the load flow decreases, then the regulator flow must decrease as well. If the load flow increases, then the regulator flow must increase in order to keep the controlled pressure from decreasing because of a shortage of fluid in the pressure system. It is desirable that the controlled pressure does not vary greatly from the set point for a wide range of flow rates, but it is also desirable that flow through the regulator is stable and the regulated pressure is not subject to excessive oscillation. A pressure regulator includes a restricting element, a loading element, and a measuring element: The restricting element is a valve that can provide a variable restriction to the flow, such as a globe valve, butterfly valve, poppet valve, etc. The loading element is a part that can apply the needed force to the restricting element. This loading can be provided by a weight, a spring, a piston actuator, or the diaphragm actuator in combination with a spring. The measuring element functions to determine when the inlet flow is equal to the outlet flow. 
The diaphragm itself is often used as a measuring element; it can serve as a combined element. In the pictured single-stage regulator, a force balance is used on the diaphragm to control a poppet valve in order to regulate pressure. With no inlet pressure, the spring above the diaphragm pushes it down on the poppet valve, holding it open. Once inlet pressure is introduced, the open poppet allows flow to the diaphragm and pressure in the upper chamber increases, until the diaphragm is pushed upward against the spring, causing the poppet to reduce flow, finally stopping further increase of pressure. By adjusting the top screw, the downward pressure on the diaphragm can be increased, requiring more pressure in the upper chamber to maintain equilibrium. In this way, the outlet pressure of the regulator is controlled. Single stage regulator High pressure gas from the supply enters the regulator through the inlet port. The inlet pressure gauge will indicate this pressure. The gas then passes through the normally open pressure control valve orifice and the downstream pressure rises until the valve actuating diaphragm is deflected sufficiently to close the valve, preventing any more gas from entering the low pressure side until the pressure drops again. The outlet pressure gauge will indicate this pressure. The outlet pressure on the diaphragm and the inlet pressure and poppet spring force on the upstream part of the valve hold the diaphragm/poppet assembly in the closed position against the force of the diaphragm loading spring. If the supply pressure falls, the closing force due to supply pressure is reduced, and downstream pressure will rise slightly to compensate. Thus, if the supply pressure falls, the outlet pressure will increase, provided the outlet pressure remains below the falling supply pressure. This is the cause of end-of-tank dump where the supply is provided by a pressurized gas tank. The operator can compensate for this effect by adjusting the spring load by turning the knob to restore outlet pressure to the desired level. With a single stage regulator, when the supply pressure gets low, the lower inlet pressure causes the outlet pressure to climb. If the diaphragm loading spring compression is not adjusted to compensate, the poppet can remain open and allow the tank to rapidly dump its remaining contents. Double stage regulator Two stage regulators are two regulators in series in the same housing that operate to reduce the pressure progressively in two steps instead of one. The first stage, which is preset, reduces the pressure of the supply gas to an intermediate stage; gas at that pressure passes into the second stage. The gas emerges from the second stage at a pressure (working pressure) set by user by adjusting the pressure control knob at the diaphragm loading spring. Two stage regulators may have two safety valves, so that if there is any excess pressure between stages due to a leak at the first stage valve seat the rising pressure will not overload the structure and cause an explosion. An unbalanced single stage regulator may need frequent adjustment. As the supply pressure falls, the outlet pressure may change, necessitating adjustment. In the two stage regulator, there is improved compensation for any drop in the supply pressure. 
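A toy static model of the unbalanced single-stage regulator described above makes the supply-pressure effect visible: balancing the main spring force against the outlet pressure acting on the diaphragm and the supply pressure acting on the small valve seat gives the regulated pressure, which creeps upward as the cylinder empties. All dimensions and forces below are invented for illustration:

```python
def outlet_pressure(P_in, F_spring=200.0, A_diaphragm=1.0e-3,
                    a_seat=2.0e-6, F_seat=1.0):
    """Static force balance on the diaphragm/poppet assembly (SI units):
       P_out * A_diaphragm + P_in * a_seat + F_seat = F_spring
    Solved for the regulated outlet pressure P_out in pascals."""
    return (F_spring - F_seat - P_in * a_seat) / A_diaphragm

for P_in_bar in (200, 100, 20):                 # cylinder emptying
    P_out = outlet_pressure(P_in_bar * 1e5)
    print(f"supply {P_in_bar:3d} bar -> outlet {P_out / 1e5:.2f} bar")
# Outlet creeps from ~1.59 bar up to ~1.95 bar as the supply falls:
# the supply-pressure effect behind "end-of-tank dump".
```

A two-stage regulator suppresses this effect because its second stage sees only the nearly constant intermediate pressure, not the falling cylinder pressure.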
Applications Pressure reducing regulators Air compressors Air compressors are used in industrial, commercial, and home workshop environments to perform an assortment of jobs including blowing things clean; running air powered tools; and inflating things like tires, balls, etc. Regulators are often used to adjust the pressure coming out of an air receiver (tank) to match what is needed for the task. Often, when one large compressor is used to supply compressed air for multiple uses (often referred to as "shop air" if built as a permanent installation of pipes throughout a building), additional regulators will be used to ensure that each separate tool or function receives the pressure it needs. This is important because some air tools, or uses for compressed air, require pressures that may cause damage to other tools or materials. Aircraft Pressure regulators are found in aircraft cabin pressurization, canopy seal pressure control, potable water systems, and waveguide pressurization. Aerospace Aerospace pressure regulators have applications in propulsion pressurant control for reaction control systems (RCS) and attitude control systems (ACS), where high vibration, large temperature extremes and corrosive fluids are present. Cooking Pressurized vessels can be used to cook food much more rapidly than at atmospheric pressure, as the higher pressure raises the boiling point of the contents. All modern pressure cookers will have a pressure regulator valve and a pressure relief valve as a safety mechanism to prevent explosion in the event that the pressure regulator valve fails to adequately release pressure. Some older models lack a safety release valve. Most home cooking models are built to maintain a low and a high pressure setting. Almost all home cooking units will employ a very simple single-stage pressure regulator. Older models will simply use a small weight on top of an opening that will be lifted by excessive pressure to allow excess steam to escape. Newer models usually incorporate a spring-loaded valve that lifts and allows pressure to escape as pressure in the vessel rises. Some pressure cookers will have a quick-release setting on the pressure regulator valve that will, essentially, lower the spring tension to allow the pressure to escape at a quick, but still safe, rate. Commercial kitchens also use pressure cookers, in some cases using oil-based pressure cookers to quickly deep fry fast food. Pressure vessels of this sort can also be used as autoclaves to sterilize small batches of equipment and in home canning operations. Water pressure reduction A water pressure regulating valve limits inflow by dynamically changing the valve opening so that when less pressure is on the outside, the valve opens up fully, and too much pressure on the outside causes the valve to shut. In a no-pressure situation, where water could flow backwards, it won't be impeded; a water pressure regulating valve does not function as a check valve. They are used in applications where the water pressure would otherwise be too high at the end of the line, to avoid damage to appliances or pipes. Welding and cutting Oxy-fuel welding and cutting processes require gases at specific pressures, and regulators will generally be used to reduce the high pressures of storage cylinders to those usable for cutting and welding.
Oxygen and fuel gas regulators usually have two stages: the first stage releases the gas at a constant pressure from the cylinder despite the pressure in the cylinder falling as the gas is released; the second stage controls the pressure reduction from the intermediate pressure to low pressure. The final flow rate may be adjusted at the torch. The regulator assembly usually has two pressure gauges, one indicating cylinder pressure, the other indicating delivery pressure. Inert gas shielded arc welding also uses gas stored at high pressure provided through a regulator, and there may be a flow gauge calibrated to the specific gas.

Propane/LP gas
All propane and LP gas applications require the use of a regulator. Because pressures in propane tanks can fluctuate significantly with temperature, regulators must be present to deliver a steady pressure to downstream appliances. These regulators compensate for a wide range of tank pressures and commonly deliver 11 inches water column for residential applications and 35 inches of water column for industrial applications. Propane regulators differ in size and shape, delivery pressure and adjustability, but are uniform in their purpose to deliver a constant outlet pressure for downstream requirements. Common international settings for domestic LP gas regulators are 28 mbar for butane and 37 mbar for propane.

Gas powered vehicles
All vehicular motors that run on compressed gas as a fuel (internal combustion engine or fuel cell electric power train) require a pressure regulator to reduce the stored gas (CNG or hydrogen) pressure from 700, 500, 350 or 200 bar (70, 50, 35 or 20 MPa) to operating pressure.

Recreational vehicles
For recreational vehicles with plumbing, a pressure regulator is required to reduce the pressure of an external water supply connected to the vehicle plumbing, as the supply may be at a much higher elevation than the campground, and water pressure depends on the height of the water column. Without a pressure regulator, the intense pressure encountered at some campgrounds in mountainous areas may be enough to burst the camper's water pipes or unseat the plumbing joints, causing flooding. Pressure regulators for this purpose are typically sold as small screw-on accessories that fit inline with the hoses used to connect an RV to the water supply, which are almost always screw-thread-compatible with the common garden hose.

Breathing gas supply
Pressure regulators are used with diving cylinders for scuba diving. The tank may contain pressures in excess of 200 bar (about 3,000 psi), which could cause a fatal barotrauma injury to a person breathing it directly. A demand controlled regulator provides a flow of breathing gas at the ambient pressure (which varies by depth in the water). Pressure reducing regulators are also used to supply breathing gas to surface-supplied divers, and to people who use self-contained breathing apparatus (SCBA) for rescue and hazmat work on land. The interstage pressure for SCBA at normal atmospheric pressure can generally be left constant at a factory setting, but for surface supplied divers it is controlled by the gas panel operator, depending on the diver's depth and flow rate requirements. Supplementary oxygen for high altitude flight in unpressurised aircraft and medical gases are also commonly dispensed through pressure reducing regulators from high-pressure storage.

Supplementary oxygen may also be dispensed through a regulator which both reduces the pressure and supplies the gas at a metered flow rate, to be mixed with ambient air. One way of producing a constant mass flow at variable ambient pressure is to use choked flow, where the flow through the metering orifice is sonic. For a given gas in choked flow, the mass flow rate may be controlled by setting the orifice size or the upstream pressure. To produce choked flow in oxygen, the absolute pressure ratio of upstream to downstream gas must exceed 1.893 at 20 °C; at normal atmospheric pressure this requires an upstream pressure of more than 1.013 × 1.893 = 1.918 bar. A typical nominal regulated gauge pressure from a medical oxygen regulator is about 3.4 bar, for an absolute pressure of approximately 4.4 bar and a pressure ratio of about 4.4 without back pressure, so the metering orifices remain choked for downstream (outlet) pressures of up to about 2.3 bar absolute. This type of regulator commonly uses a rotor plate with calibrated orifices, and detents to hold it in place when the orifice corresponding to the desired flow rate is selected. It may also have one or two uncalibrated takeoff connections from the intermediate pressure chamber, with diameter index safety system (DISS) or similar connectors, to supply gas to other equipment; the high pressure connection is commonly a pin index safety system (PISS) yoke clamp. Similar mechanisms can be used for flow rate control in aviation and mountaineering regulators.
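The choked-flow condition and metered mass flow described above can be checked numerically. The sketch below assumes oxygen behaves as an ideal gas with a heat capacity ratio of about 1.4; the orifice diameter and discharge coefficient are illustrative assumptions, while the 4.4 bar upstream pressure comes from the example in the text.

```python
import math

gamma = 1.40           # heat capacity ratio for O2 (diatomic, ~20 degC)
R = 8.314 / 0.032      # specific gas constant for O2, J/(kg*K)
T = 293.15             # upstream temperature, K

# Critical (choking) pressure ratio: upstream/downstream must exceed this
ratio_crit = ((gamma + 1) / 2) ** (gamma / (gamma - 1))
print(f"critical pressure ratio = {ratio_crit:.3f}")   # ~1.893

# Choked mass flow through an orifice:
#   mdot = Cd * A * P0 * sqrt(gamma/(R*T)) * (2/(gamma+1))**((gamma+1)/(2*(gamma-1)))
Cd = 0.9               # discharge coefficient (assumed)
d = 0.3e-3             # orifice diameter, m (assumed)
A = math.pi * d ** 2 / 4
P0 = 4.4e5             # upstream absolute pressure, Pa (example in the text)
mdot = (Cd * A * P0 * math.sqrt(gamma / (R * T))
        * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1))))
print(f"mass flow ~ {mdot * 1000:.3f} g/s")   # ~0.07 g/s, a few L/min of O2
```

The critical pressure ratio computed from gamma = 1.4 reproduces the 1.893 figure quoted above, and as long as the orifice stays choked the mass flow depends only on the upstream pressure and orifice area, which is exactly why this arrangement gives a constant metered flow at variable ambient pressure.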
Mining industry
As the pressure in water pipes builds rapidly with depth, underground mining operations require a fairly complex water system with pressure reducing valves. These devices must be installed at regular vertical intervals down the mine; without such valves, pipes could burst and the pressure would be too great for equipment operation.

Natural gas industry
Pressure regulators are used extensively within the natural gas industry. Natural gas is compressed to high pressures in order to be distributed throughout the country through large transmission pipelines, and the transmission pressure must be reduced through various stages to a usable pressure for industrial, commercial, and residential applications. There are three main pressure reduction locations in this distribution system. The first reduction is located at the city gate, where the transmission pressure is dropped to a distribution pressure to feed throughout the city; this is also the location where the odorless natural gas is odorized with mercaptan. The distribution pressure is further reduced at district regulator stations, located at various points in the city, to below 60 psig. The final cut occurs at the end user's location, generally to low pressures ranging from 0.25 psig to 5 psig, although some industrial applications can require a higher pressure.

Back-pressure regulators
Back-pressure regulators are used to:
Maintain upstream pressure control in analytical or process systems
Protect sensitive equipment from overpressure damage
Reduce the pressure difference over a component which is not tolerant of large pressure differences
They are found in applications such as:
Gas sales lines
Production vessels (e.g., separators, heater treaters or free water knockouts)
Vent or flare lines
Hyperbaric chambers
Where the pressure drop in a built-in breathing system exhaust is too great, typically in saturation systems, a back-pressure regulator may be used to reduce the exhaust pressure drop to a safer and more manageable level.

Reclaim diving helmets
The ambient pressure at the depths where most heliox breathing mixtures are used in surface-supplied diving is generally at least 5 bar above surface atmospheric pressure, and the exhaust gas from the diver must pass through a reclaim valve, a back-pressure valve activated by the rise of pressure in the diver's helmet above ambient caused by the diver's exhalation. The reclaim gas hose, which carries the exhaled gas back to the surface for recycling, must not be at too great a pressure difference from the ambient pressure at the diver. An additional back-pressure regulator in this line allows finer setting of the reclaim valve for lower work of breathing at variable depths.

See also References External links Plumbing valves Hydraulics Pneumatics
Pressure regulator
[ "Physics", "Chemistry" ]
3,412
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
8,791,730
https://en.wikipedia.org/wiki/Forming%20gas
Forming gas is a mixture of hydrogen (mole fraction varies) and nitrogen. It is sometimes called a "dissociated ammonia atmosphere" after the reaction which generates it: 2 NH3 → 3 H2 + N2. It can be manufactured by thermal cracking of ammonia, in an ammonia cracker or forming gas generator. Forming gas is used as an atmosphere for processes that need the properties of hydrogen gas. Typical forming gas formulations (5% H2 in N2) are not explosive. It is used in chambers for gas hypersensitization, a process in which photographic film is heated in forming gas to drive out moisture and oxygen and to increase the base fog of the film. Hypersensitization is used particularly in deep-sky astrophotography, which deals with low-intensity incoming light, requires long exposure times, and is thus particularly sensitive to contaminants in the film. Forming gas is also used to regenerate catalysts in glove boxes and as an atmosphere for annealing processes; it can be purchased at welding supply stores. It is sometimes used as a reducing agent for high-temperature soldering and brazing, to remove oxidation from the joint without the use of flux. It also finds application in microchip production, where a high-temperature anneal in forming gas assists in silicon–silicon dioxide interface passivation. Forming gas is often used in furnaces during annealing or sintering for the thermal treatment of metals, because it reduces oxides on the metal surface. See also Endothermic gas References Gases Welding Brazing and soldering Metal heat treatments Industrial gases
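A quick sanity check on the stoichiometry above: cracked ammonia is 75% hydrogen by mole, so reaching a non-explosive 5% formulation requires substantial dilution with additional nitrogen. A minimal sketch follows; the blending calculation is an illustration, not a description of any particular generator.

```python
# Mole fractions from ammonia cracking: 2 NH3 -> 3 H2 + N2
h2, n2 = 3, 1
x_h2 = h2 / (h2 + n2)
print(f"cracked ammonia: {x_h2:.0%} H2")   # 75% H2

# Blending cracked gas with extra N2 down to a 5% H2 mix (illustrative):
#   x_h2 * v_crack / (v_crack + v_n2) = target  ->  v_n2 = v_crack*(x_h2/target - 1)
target = 0.05
v_crack = 1.0
v_n2 = v_crack * (x_h2 / target - 1)
print(f"add {v_n2:.0f} volumes of N2 per volume of cracked gas")   # 14
```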
Forming gas
[ "Physics", "Chemistry", "Engineering" ]
331
[ "Matter", "Welding", "Metallurgical processes", "Phases of matter", "Industrial gases", "Metal heat treatments", "Mechanical engineering", "Chemical process engineering", "Statistical mechanics", "Gases" ]
11,932,146
https://en.wikipedia.org/wiki/Geopolymer
A geopolymer is a vague pseudo-chemical term used to describe inorganic, typically bulk ceramic-like materials that form covalently bonded, non-crystalline (amorphous) networks, often intermingled with other phases. Many geopolymers may also be classified as alkali-activated cements or acid-activated binders. They are mainly produced by a chemical reaction between a chemically reactive aluminosilicate powder, e.g. metakaolin or other clay-derived powders, natural pozzolan, or suitable glasses, and an aqueous solution (alkaline or acidic) that causes this powder to react and re-form into a solid monolith. The most common pathway to produce geopolymers is the reaction of metakaolin with sodium silicate, which is an alkaline solution, but other processes are also possible.

Commercially produced geopolymers may be used for fire- and heat-resistant coatings and adhesives, medicinal applications, high-temperature ceramics, new binders for fire-resistant fiber composites, toxic and radioactive waste encapsulation, and as cementing components in making or repairing concretes. The properties and uses of geopolymers are being explored in many scientific and industrial disciplines, such as modern inorganic chemistry, physical chemistry, colloid chemistry, mineralogy, geology, and other types of engineering process technologies. The term geopolymer was coined by Joseph Davidovits in 1978 because rock-forming minerals of geological origin are used in the synthesis process. These materials and the associated terminology were popularized over the following decades via his work with the Institut Géopolymère (Geopolymer Institute).

Geopolymers are synthesized under one of two conditions:
in alkaline medium (Na+, K+, Li+, Cs+, Ca2+…)
in acidic medium (phosphoric acid, H3PO4)
The alkaline route is the most important in terms of research and development and commercial applications. Details on the acidic route have also been published.

Composition
In the 1950s, Viktor Glukhovsky developed concrete materials originally known as "soil silicate concretes" and "soil cements", but since the introduction of the geopolymer concept by Joseph Davidovits, the terminology and definitions of the word geopolymer have become more diverse and often conflicting. The word geopolymer is sometimes used to refer to naturally occurring organic macromolecules; that sense of the word differs from the now-more-common use of this terminology to discuss inorganic materials which can have either cement-like or ceramic-like character.

A geopolymer is essentially a mineral chemical compound or mixture of compounds consisting of repeating units, for example silico-oxide (-Si-O-Si-O-), silico-aluminate (-Si-O-Al-O-), ferro-silico-aluminate (-Fe-O-Si-O-Al-O-) or alumino-phosphate (-Al-O-P-O-), created through a process of geopolymerization. This method of describing mineral synthesis (geosynthesis) was first presented by Davidovits at an IUPAC symposium in 1976.

Even within the context of inorganic materials, there exist various definitions of the word geopolymer, which can include a relatively wide variety of low-temperature synthesized solid materials. The most typical geopolymer is generally described as resulting from the reaction between metakaolin (calcined kaolinitic clay) and a solution of sodium or potassium silicate (waterglass). Geopolymerization tends to result in a highly connected, disordered network of negatively charged tetrahedral oxide units balanced by the sodium or potassium ions.
In the simplest form, an example chemical formula for a geopolymer can be written as Na2O·Al2O3·nSiO2·wH2O, where n is usually between 2 and 4 and w is around 11–15. Geopolymers can be formulated with a wide variety of substituents in both the framework (silicon, aluminium) and non-framework (sodium) sites; most commonly potassium or calcium takes on the non-framework sites, but iron or phosphorus can in principle replace some of the aluminium or silicon. Geopolymerization usually occurs at ambient or slightly elevated temperature; the solid aluminosilicate raw materials (e.g. metakaolin) dissolve into the alkaline solution, then cross-link and polymerize into a growing gel phase, which then continues to set, harden, and gain strength.

Geopolymer synthesis

Covalent bonding
The fundamental unit within a geopolymer structure is a tetrahedral complex consisting of silicon or aluminium coordinated through covalent bonds to four oxygens. The geopolymer framework results from the cross-linking between these tetrahedra, which leads to a 3-dimensional aluminosilicate network where the negative charge associated with tetrahedral aluminium is balanced by a small cationic species, most commonly an alkali metal cation (Na+, K+ etc.). These alkali metal cations are often ion-exchangeable, as they are associated with, but only loosely bonded to, the main covalent network, similarly to the non-framework cations present in zeolites.

Oligomer formation
Geopolymerization is the process of combining many small molecules known as oligomers into a covalently bonded network. This reaction process takes place via the formation of oligomers (dimer, trimer, tetramer, pentamer), which are believed to contribute to the formation of the actual structure of the three-dimensional macromolecular framework, either through direct incorporation or through rearrangement via monomeric species. These oligomers are named by some geopolymer chemists as sialates, following the scheme developed by Davidovits, although this terminology is not universally accepted within the research community, due in part to confusion with the earlier (1952) use of the same word to refer to the salts of the important biomolecule sialic acid. Small oligomeric potassium aluminosilicate species, named according to the poly(sialate)/poly(sialate-siloxo) nomenclature, are key intermediates in potassium-based alumino-silicate geopolymerization. The aqueous chemistry of aluminosilicate oligomers is complex, and plays an important role in the discussion of zeolite synthesis, a process which has many details in common with geopolymerization.

Example of geopolymerization of a metakaolin precursor in an alkaline medium
The reaction process broadly involves four main stages:
Alkaline hydrolysis of the layered structure of the calcined kaolinite
Formation of monomeric and oligomeric species
In the presence of waterglass (soluble potassium or sodium silicate), formation of cyclic Al-Si structures, whereby the hydroxide is liberated by condensation reactions and can react again
Geopolymerization (polycondensation) into polymeric 3D-networks
The reaction processes involving other aluminosilicate precursors (e.g. low-calcium fly ash, crushed or synthetic glasses, natural pozzolans) are broadly similar to the steps described above.
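As a small numerical illustration of the example formula Na2O·Al2O3·nSiO2·wH2O quoted above, the sketch below computes the Si:Al ratio and nominal formula mass for sample values of n and w within the quoted ranges; the specific (n, w) pairs are arbitrary choices.

```python
# Nominal composition for the example geopolymer formula Na2O.Al2O3.nSiO2.wH2O
M = {"Na2O": 61.98, "Al2O3": 101.96, "SiO2": 60.08, "H2O": 18.02}  # g/mol

def composition(n, w):
    si_al = n / 2   # n Si per 2 Al (one Al2O3 unit)
    mass = M["Na2O"] + M["Al2O3"] + n * M["SiO2"] + w * M["H2O"]
    return si_al, mass

for n, w in [(2, 11), (4, 15)]:   # sample points in the quoted ranges
    si_al, mass = composition(n, w)
    print(f"n={n}, w={w}: Si:Al = {si_al:.1f}, formula mass ~ {mass:.0f} g/mol")
```

For n between 2 and 4 this gives Si:Al ratios between 1 and 2, consistent with the gel compositions quoted later for fly ash-based materials.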
Geopolymer 3D-frameworks and water
Geopolymerization forms aluminosilicate frameworks that are similar to those of some rock-forming minerals, but lacking long-range crystalline order, and generally containing water both in chemically bound sites (hydroxyl groups) and in molecular form as pore water. This water can be removed at temperatures above 100–200 °C. Cation hydration and the locations and mobility of water molecules in pores are important for lower-temperature applications, such as the use of geopolymers as cements. A geopolymer thus contains both bound water (Si-OH groups) and free water; some water is associated with the framework similarly to zeolitic water, and some is in larger pores and can be readily released and removed. After dehydroxylation (and dehydration), generally above 250 °C, geopolymers can then crystallise above 800–1000 °C (depending on the nature of the alkali cation present).

Commercial applications
There exists a wide variety of potential and existing applications. Some geopolymer applications are still in development, whereas others are already industrialized and commercialized. They are listed in three major categories:

Geopolymer cements and concretes
Building materials (for example, clay bricks)
Low-CO2 cements and concretes
Radioactive and toxic waste containment

Geopolymer resins and binders
Fire-resistant materials, thermal insulation, foams
Low-energy ceramic tiles, refractory items, thermal shock refractories
High-tech resin systems, paints, binders and grouts
Bio-technologies (materials for medicinal applications)
Foundry industry (resins), tooling for the manufacture of organic fiber composites
Composites for infrastructure repair and strengthening
Fire-resistant and heat-resistant high-tech carbon-fiber composites for aircraft interiors and automobiles

Arts and archaeology
Decorative stone artifacts, arts and decoration
Cultural heritage, archaeology and history of sciences

Geopolymer cements
From a terminological point of view, geopolymer cement is a binding system that hardens at room temperature, like regular Portland cement. Geopolymer cement is being developed and utilised as an alternative to conventional Portland cement for use in transportation, infrastructure, construction and offshore applications. Production of geopolymer cement requires an aluminosilicate precursor material such as metakaolin or fly ash, a user-friendly alkaline reagent (for example, sodium or potassium soluble silicates with a molar ratio (MR) SiO2:M2O ≥ 1.65, M being sodium or potassium) and water (see the definition of a "user-friendly" reagent below). Room temperature hardening is more readily achieved with the addition of a source of calcium cations, often blast furnace slag.

Geopolymer cements can be formulated to cure more rapidly than Portland-based cements; some mixes gain most of their ultimate strength within 24 hours. However, they must also set slowly enough that they can be mixed at a batch plant, either for pre-casting or delivery in a concrete mixer. Geopolymer cement also has the ability to form a strong chemical bond with silicate rock-based aggregates. There is often confusion between the meanings of the terms 'geopolymer cement' and 'geopolymer concrete'. A cement is a binder, whereas concrete is the composite material resulting from the mixing and hardening of cement with water (or an alkaline solution in the case of geopolymer cement) and stone aggregates.
Materials of both types (geopolymer cements and geopolymer concretes) are commercially available in various markets internationally.

Alkali-activated materials vs. geopolymer cements
There exists some confusion in the terminology applied to geopolymers, alkali-activated cements and concretes, and related materials, which have been described by a variety of names, including "soil silicate concretes" and "soil cements". Terminology related to alkali-activated materials or alkali-activated geopolymers is also in wide (but debated) use. These cements, sometimes abbreviated AAM, encompass the specific fields of alkali-activated slags, alkali-activated coal fly ashes, and various blended cementing systems.

User-friendly alkaline reagents
Geopolymerization uses chemical ingredients that may be dangerous and therefore requires some safety procedures. Materials safety rules classify alkaline products in two categories: corrosive products (termed here hostile) and irritant products (termed here friendly). Alkaline reagents belonging to the second (less elevated pH) class may be termed user-friendly, although the irritant nature of the alkaline component and the potential inhalation risk of powders still require the selection and use of appropriate personal protective equipment, as in any situation where chemicals or powders are handled.

The development of some alkali-activated cements, as shown in numerous published recipes (especially those based on fly ashes), relies on alkali silicates with molar ratios SiO2:M2O below 1.20, or on concentrated NaOH. These conditions are not considered as user-friendly as more moderate pH values, and require careful attention to chemical safety handling laws, regulations, and state directives. Conversely, geopolymer cement recipes employed in the field generally involve alkaline soluble silicates with starting molar ratios ranging from 1.45 to 1.95, particularly 1.60 to 1.85, i.e. user-friendly conditions. For research purposes, some laboratory recipes have molar ratios in the 1.20 to 1.45 range.

Examples of materials that are sometimes called geopolymer cements
Commercial geopolymer cements were developed in the 1980s, of the type (K,Na,Ca)-aluminosilicate (or "slag-based geopolymer cement"), and resulted from research carried out by Joseph Davidovits and J.L. Sawyer at Lone Star Industries, USA, marketed as Pyrament® cement. The US patent 4,509,985 was granted on April 9, 1985 with the title 'Early high-strength mineral polymer'. In the 1990s, using knowledge of the synthesis of zeolites from fly ashes, Wastiels et al., Silverstrim et al. and van Jaarsveld and van Deventer developed geopolymeric fly ash-based cements. Materials based on siliceous (EN 197), also called class F (ASTM C618), fly ashes are known:
alkali-activated fly ash geopolymer: requires heat curing at 60–80 °C in many (but not all) cases; not manufactured separately as a cement, but rather produced directly as a fly-ash based concrete. NaOH + fly ash: partially-reacted fly ash particles embedded in an alumino-silicate gel with Si:Al = 1 to 2, with zeolitic type (chabazite-Na and sodalite) structures.
slag/fly ash-based geopolymer cement: room-temperature cement hardening. Alkali metal silicate solution + blast furnace slag + fly ash: fly ash particles embedded in a geopolymeric matrix with Si:Al ~ 2.
Such cements can be produced with "user-friendly" (not extremely high pH) activating solutions. The properties of iron-containing "ferri-sialate"-based geopolymer cements are similar to those of rock-based geopolymer cements, but involve geological elements or metallurgical slags with high iron oxide content; the hypothesised binder chemistry is (Ca,K)-(Fe-O)-(Si-O-Al-O). Rock-based geopolymer cements can be formed by the reaction of natural pozzolanic materials under alkaline conditions, and geopolymers derived from calcined clays (e.g. metakaolin) can also be produced in the form of cements.

CO2 emissions during manufacturing
Geopolymer cements can be designed to have lower attributed carbon dioxide emissions than some other widely used materials such as Portland cement. Geopolymers use industrial byproducts and wastes containing aluminosilicate phases in manufacturing, which minimizes CO2 emissions and lowers environmental impact.

The need for standards
In June 2012, the institution ASTM International organized a symposium on Geopolymer Binder Systems. The introduction to the symposium states: When performance specifications for Portland cement were written, non-portland binders were uncommon...New binders such as geopolymers are being increasingly researched, marketed as specialty products, and explored for use in structural concrete. This symposium is intended to provide an opportunity for ASTM to consider whether the existing cement standards provide, on the one hand, an effective framework for further exploration of geopolymer binders and, on the other hand, reliable protection for users of these materials.

The existing Portland cement standards are not adapted to geopolymer cements; new standards must be elaborated by an ad hoc committee. Yet to do so requires the presence of standard geopolymer cements; at present, every expert presents their own recipe based on local raw materials (wastes, by-products or extracted minerals). There is a need for selecting the right geopolymer cement category. The 2012 State of the Geopolymer R&D suggested selecting two categories, namely:
type 2 slag/fly ash-based geopolymer cement: fly ashes are available in the major emerging countries;
ferro-sialate-based geopolymer cement: this geological iron-rich raw material is present in all countries throughout the globe;
along with the appropriate user-friendly geopolymeric reagent.

Geopolymers as ceramics
Geopolymers can be used as a low-cost and/or chemically flexible route to ceramic production, both to produce monolithic specimens and as the continuous (binder) phase in composites with particulate or fibrous dispersed phases.

Room-temperature processed materials
Geopolymers produced at room temperature are typically hard, brittle, castable, and mechanically strong. This combination of characteristics offers the opportunity for their usage in a variety of applications in which other ceramics (e.g. porcelain) are conventionally used. Some of the first patented applications of geopolymer-type materials (actually predating the coining of the term geopolymer by multiple decades) relate to use in automobile spark plugs.
Thermal processing of geopolymers to produce ceramics
It is also possible to use geopolymers as a versatile pathway to produce crystalline ceramics or glass-ceramics, by forming a geopolymer through room-temperature setting and then heating (calcining) it at the necessary temperature to convert it from the crystallographically disordered geopolymer form to the desired crystalline phases (e.g. leucite, pollucite and others).

Geopolymer applications in arts and archaeology
Because geopolymer artifacts can look like natural stone, several artists have cast replicas of their sculptures in silicone rubber molds. For example, in the 1980s, the French artist Georges Grimal worked on several geopolymer castable stone formulations.

Egyptian pyramid stones
In the mid-1980s, Joseph Davidovits presented his first analytical results carried out on samples sourced from Egyptian pyramids. He claimed that the ancient Egyptians used a geopolymeric reaction to make re-agglomerated limestone blocks. Later, several materials scientists and physicists took up these archaeological studies and published results on pyramid stones, claiming synthetic origins. However, the theories of a synthetic origin of the pyramid stones have also been stridently disputed by other geologists, materials scientists, and archaeologists.

Roman cements
It has also been claimed that the Roman lime-pozzolan cements used in the building of some important structures, especially works related to water storage (cisterns, aqueducts), have chemical parallels to geopolymeric materials.

See also
Zeolite
References
External links
Geopolymer science. Science Direct. Elsevier. 2024
Inorganic chemistry Geochemistry Polymers Inorganic polymers Silicates Aluminosilicates Ceramic materials Cement Resins Geopolymers Building materials
Geopolymer
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,215
[ "Ceramic engineering", "Geopolymers", "Resins", "Inorganic compounds", "Building engineering", "Inorganic polymers", "Unsolved problems in physics", "Architecture", "Construction", "Materials", "Ceramic materials", "nan", "Polymer chemistry", "Polymers", "Amorphous solids", "Matter", ...
11,933,545
https://en.wikipedia.org/wiki/Corepressor
In genetics and molecular biology, a corepressor is a molecule that represses the expression of genes. In prokaryotes, corepressors are small molecules, whereas in eukaryotes, corepressors are proteins. A corepressor does not directly bind to DNA, but instead indirectly regulates gene expression by binding to repressors. A corepressor downregulates (or represses) the expression of genes by binding to and activating a repressor transcription factor. The repressor in turn binds to a gene's operator sequence (the segment of DNA to which a transcription factor binds to regulate gene expression), thereby blocking transcription of that gene.

Function

Prokaryotes
In prokaryotes, the term corepressor is used to denote the activating ligand of a repressor protein. For example, the E. coli tryptophan repressor (TrpR) is only able to bind to DNA and repress transcription of the trp operon when its corepressor tryptophan is bound to it. TrpR in the absence of tryptophan is known as an aporepressor and is inactive in repressing gene transcription. The trp operon encodes enzymes responsible for the synthesis of tryptophan; hence TrpR provides a negative feedback mechanism that regulates the biosynthesis of tryptophan. In short, tryptophan acts as a corepressor of its own biosynthesis.

Eukaryotes
In eukaryotes, a corepressor is a protein that binds to transcription factors. In the absence of corepressors and in the presence of coactivators, transcription factors upregulate gene expression; coactivators and corepressors compete for the same binding sites on transcription factors. A second mechanism by which corepressors may repress transcriptional initiation when bound to transcription factor/DNA complexes is by recruiting histone deacetylases, which catalyze the removal of acetyl groups from lysine residues. This increases the positive charge on histones, which strengthens the electrostatic attraction between the positively charged histones and the negatively charged DNA, making the DNA less accessible for transcription. In humans, several dozen to several hundred corepressors are known, depending on the level of confidence with which a protein can be characterised as a corepressor.

Examples of corepressors

NCoR
NCoR (nuclear receptor co-repressor) directly binds to the D and E domains of nuclear receptors and represses their transcriptional activity. Class I histone deacetylases are recruited by NCoR through SIN3, and NCoR directly binds to class II histone deacetylases.

Silencing mediator for retinoid and thyroid-hormone receptor
SMRT (silencing mediator of retinoic acid and thyroid hormone receptor), also known as NCoR2, is an alternatively spliced SRC-1 (steroid receptor coactivator-1). It is negatively and positively affected by MAPKKK (mitogen activated protein kinase kinase kinase) and casein kinase 2 phosphorylation, respectively. SMRT acts through two major mechanisms: first, like NCoR, it recruits class I histone deacetylases through SIN3 and directly binds to class II histone deacetylases; second, it binds and sequesters components of the general transcriptional machinery, such as transcription factor II B.

Role in biological processes
Corepressors are known to regulate transcription through different activation and inactivation states. NCoR and SMRT act as a corepressor complex to regulate transcription by becoming activated once the ligand is bound. Knockouts of NCoR resulted in embryonic death, indicating its importance in erythrocytic, thymic, and neural system development.
Mutations in certain corepressors can result in deregulation of signals. SMRT contributes to cardiac muscle development, with knockouts of the complex resulting in underdeveloped cardiac muscle. NCoR has also been found to be an important checkpoint in processes such as inflammation and macrophage activation. Recent evidence also suggests a role for the corepressor RIP140 in the metabolic regulation of energy homeostasis.

Clinical significance

Diseases
Since corepressors participate in and regulate a vast range of gene expression, it is not surprising that aberrant corepressor activities can cause diseases. Acute myeloid leukemia (AML) is a highly lethal blood cancer characterized by uncontrolled myeloid cell growth. Two homologous corepressor genes, BCOR (BCL6 corepressor) and BCORL1, are recurrently mutated in AML patients. BCOR works with multiple transcription factors and is known to play vital regulatory roles in embryonic development. Clinical results detected BCOR somatic mutations in ~4% of an unselected group of AML patients, and in ~17% of a subset of patients who lack known AML-causing mutations. Similarly, BCORL1 is a corepressor that regulates cellular processes, and was found to be mutated in ~6% of tested AML patients. These studies point to a strong association between corepressor mutations and AML. Further corepressor research may reveal potential therapeutic targets for AML and other diseases.

Therapeutic potential
Corepressors present many potential avenues for drugs to target a vast range of diseases. BCL6 upregulation is observed in cancers such as diffuse large B-cell lymphomas (DLBCLs), colorectal cancer, and lung cancer. The BCL6 corepressor, SMRT, NCoR, and other corepressors are able to interact with and transcriptionally repress BCL6. Small-molecule compounds, such as synthetic peptides that target BCL6-corepressor interactions, as well as other protein-protein interaction inhibitors, have been shown to effectively kill cancer cells. Activated liver X receptor (LXR) forms a complex with corepressors to suppress the inflammatory response in rheumatoid arthritis, making LXR agonists like GW3965 a potential therapeutic strategy. Ursodeoxycholic acid (UDCA), by upregulating the corepressor small heterodimer partner interacting leucine zipper protein (SMILE), inhibits the expression of IL-17, an inflammatory cytokine, and suppresses Th17 cells, both implicated in rheumatoid arthritis. This effect is dose-dependent in humans, and UDCA is thought to be another prospective agent for rheumatoid arthritis therapy.

See also
Transcription coregulator
TcoF-DB
References
External links
Gene expression Transcription coregulators
Corepressor
[ "Chemistry", "Biology" ]
1,394
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
5,587,151
https://en.wikipedia.org/wiki/Germanium%20dioxide
Germanium dioxide, also called germanium(IV) oxide, germania, and salt of germanium, is an inorganic compound with the chemical formula GeO2. It is the main commercial source of germanium. It also forms as a passivation layer on pure germanium in contact with atmospheric oxygen.

Structure
The two predominant polymorphs of GeO2 are hexagonal and tetragonal. Hexagonal GeO2 has the same structure as α-quartz, with germanium having coordination number 4. Tetragonal GeO2 (the mineral argutite) has the rutile-like structure seen in stishovite; in this motif, germanium has coordination number 6. An amorphous (glassy) form of GeO2 is similar to fused silica. Germanium dioxide can be prepared in both crystalline and amorphous forms. At ambient pressure the amorphous structure is formed by a network of GeO4 tetrahedra. At elevated pressure, up to approximately 9 GPa, the average germanium coordination number steadily increases from 4 to around 5, with a corresponding increase in the Ge–O bond distance. At higher pressures, up to approximately 15 GPa, the germanium coordination number increases to 6, and the dense network structure is composed of GeO6 octahedra. When the pressure is subsequently reduced, the structure reverts to the tetrahedral form. At high pressure, the rutile form converts to an orthorhombic CaCl2 form.

Reactions
Heating germanium dioxide with powdered germanium at 1000 °C forms germanium monoxide (GeO). The hexagonal (d = 4.29 g/cm3) form of germanium dioxide is more soluble than the rutile (d = 6.27 g/cm3) form, and dissolves to form germanic acid, H4GeO4, or Ge(OH)4. GeO2 is only slightly soluble in acid but dissolves more readily in alkali to give germanates. Germanic acid forms stable complexes with di- and polyfunctional carboxylic acids, poly-alcohols, and o-diphenols. In contact with hydrochloric acid, it releases the volatile and corrosive germanium tetrachloride.

Uses
The refractive index (1.7) and optical dispersion properties of germanium dioxide make it useful as an optical material for wide-angle lenses, in optical microscope objective lenses, and for the core of fiber-optic lines. See Optical fiber for specifics on the manufacturing process. Both germanium and its glassy oxide, GeO2, are transparent in the infrared (IR). The glass can be manufactured into IR windows and lenses, used for night-vision technology in the military, luxury vehicles, and thermographic cameras. GeO2 is favored over other IR-transparent glasses because of its mechanical strength, which suits it to rugged military use.

A mixture of silicon dioxide and germanium dioxide ("silica-germania") is used as an optical material for optical fibers and optical waveguides. Controlling the ratio of the elements allows precise control of the refractive index. Silica-germania glasses have lower viscosity and higher refractive index than pure silica. Germania replaced titania as the silica dopant for silica fiber, eliminating the need for the subsequent heat treatment that made the fibers brittle.

Germanium dioxide is used as a colorant in borosilicate glass used in lampworking. When combined with copper oxide, it provides a more stable red. Combined with silver oxide, it gives the glass a very reactive, changeable color ("a wonderful rainbow effect") that can shift from light amber to a somewhat reddish and even deep purple appearance.
The color can vary with the chemistry of the flame used to melt the glass (whether it is oxygen-rich or fuel-rich) and with the temperature of the kiln used to anneal the glass. Germanium dioxide is also used as a catalyst in the production of polyethylene terephthalate resin, and in the production of other germanium compounds. It is used as a feedstock for the production of some phosphors and semiconductor materials.

Germanium dioxide is used in algaculture as an inhibitor of unwanted diatom growth in algal cultures, since contamination with the comparatively fast-growing diatoms often inhibits the growth of, or outcompetes, the original algae strains. GeO2 is readily taken up by diatoms and leads to silicon being substituted by germanium in biochemical processes within the diatoms, causing a significant reduction of the diatoms' growth rate or even their complete elimination, with little effect on non-diatom algal species. For this application, the concentration of germanium dioxide typically used in the culture medium is between 1 and 10 mg/L, depending on the stage of the contamination and the species.

Toxicity and medical
Germanium dioxide has low toxicity, but it is nephrotoxic in higher doses. Germanium dioxide is used as a germanium supplement in some questionable dietary supplements and "miracle cures". High doses of these have resulted in several cases of germanium poisoning.

References
Germanium(IV) compounds Oxides Optical materials Ceramic materials Glass compositions Transparent materials
Germanium dioxide
[ "Physics", "Chemistry", "Engineering" ]
1,090
[ "Physical phenomena", "Glass chemistry", "Glass compositions", "Oxides", "Salts", "Optical phenomena", "Materials", "Optical materials", "Ceramic materials", "Transparent materials", "Ceramic engineering", "Matter" ]
5,589,335
https://en.wikipedia.org/wiki/Penetration%20depth
Penetration depth is a measure of how deep light or any electromagnetic radiation can penetrate into a material. It is defined as the depth at which the intensity of the radiation inside the material falls to 1/e (about 37%) of its original value at (or more properly, just beneath) the surface.

When electromagnetic radiation is incident on the surface of a material, it may be (partly) reflected from that surface and there will be a field containing energy transmitted into the material. This electromagnetic field interacts with the atoms and electrons inside the material. Depending on the nature of the material, the electromagnetic field might travel very far into the material, or may die out very quickly. For a given material, penetration depth will generally be a function of wavelength.

Beer–Lambert law
According to the Beer–Lambert law, the intensity of an electromagnetic wave inside a material falls off exponentially from the surface as

I(z) = I0 e^(−α z)

If δp denotes the penetration depth, we have

δp = 1/α

Penetration depth is one term that describes the decay of electromagnetic waves inside a material. The above definition refers to the depth at which the intensity or power of the field decays to 1/e of its surface value. In many contexts one is concentrating on the field quantities themselves: the electric and magnetic fields in the case of electromagnetic waves. Since the power of a wave in a particular medium is proportional to the square of a field quantity, one may speak of a penetration depth

δe = 2/α = 2 δp

at which the magnitude of the electric (or magnetic) field has decayed to 1/e of its surface value, at which point the power of the wave has thereby decreased to 1/e^2, or about 13%, of its surface value.

Note that δe is identical to the skin depth, the latter term usually applying to metals in reference to the decay of electrical currents (which follow the decay in the electric or magnetic field due to a plane wave incident on a bulk conductor). The field attenuation constant α/2 is also identical to the (negative) real part of the propagation constant, and it may itself be referred to as α, a notation inconsistent with the above use. When referencing a source one must always be careful to note whether a number such as α or δ refers to the decay of the field itself, or of the intensity (power) associated with that field. It can also be ambiguous as to whether a positive number describes attenuation (reduction of the field) or gain; this is usually obvious from the context.

Attenuation constant
The attenuation constant for an electromagnetic wave at normal incidence on a material is proportional to the imaginary part κ of the material's refractive index. Using the above definition of α (based on intensity), the following relationship holds:

α = 2ωκ/c = 4πκ/λ

where n = n′ + iκ denotes the complex index of refraction, ω is the radian frequency of the radiation, c is the speed of light in vacuum, and λ is the vacuum wavelength. Note that n is very much a function of frequency, as is its imaginary part κ, which is often not mentioned (it is essentially zero for transparent dielectrics). The complex refractive index of metals is also infrequently quoted but has the same significance, leading to a penetration depth (or skin depth) δe = √(2/(μσω)) for a conductor of permeability μ and conductivity σ, a formula which is valid up to microwave frequencies. Relationships between these and other ways of specifying the decay of an electromagnetic field can be expressed by mathematical descriptions of opacity.
This is only specifying the decay of the field, which may be due to absorption of the electromagnetic energy in a lossy medium, or may simply describe the penetration of the field into a medium where no loss occurs (or a combination of the two). For instance, a hypothetical substance may have a complex index of refraction n = 1 + 0.01i. A wave will enter that medium without significant reflection and will be totally absorbed in the medium, with a penetration depth (in field strength) of λ/(2π × 0.01) ≈ 16λ, where λ is the vacuum wavelength. A different hypothetical material with a purely imaginary index of refraction n = 0.01i will also have a penetration depth of 16 wavelengths; however, in this case the wave will be perfectly reflected from the material! No actual absorption of the radiation takes place, yet the electric and magnetic fields extend well into the substance. In either case the penetration depth is found directly from the imaginary part of the material's refractive index, as detailed above.

See also
Skin effect
Absorbance
Attenuation coefficient
Transmittance
References
Electromagnetic radiation Scattering, absorption and radiative transfer (optics) Spectroscopy
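The relations above reduce to a one-line computation. Here is a minimal sketch, using the intensity-based attenuation constant α = 4πκ/λ; the wavelength and κ values are illustrative, with κ = 0.01 chosen to match the hypothetical materials discussed above.

```python
import math

def depths(kappa, lam):
    """Penetration depths from the imaginary part kappa of n = n' + i*kappa."""
    alpha = 4 * math.pi * kappa / lam   # intensity attenuation constant
    delta_p = 1 / alpha                 # intensity falls to 1/e here
    delta_e = 2 * delta_p               # field falls to 1/e here (skin depth)
    return delta_p, delta_e

lam = 500e-9                            # 500 nm vacuum wavelength (green light)
for kappa in (0.01, 1.0):
    dp, de = depths(kappa, lam)
    print(f"kappa={kappa}: delta_p = {dp*1e9:.0f} nm, delta_e = {de*1e9:.0f} nm")
# kappa = 0.01 gives a field penetration depth of lam/(2*pi*0.01) ~ 16*lam,
# matching the 16-wavelength figure for the hypothetical materials above.
```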
Penetration depth
[ "Physics", "Chemistry" ]
876
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Molecular physics", "Spectrum (physical sciences)", "Electromagnetic radiation", "Instrumental analysis", "Scattering", "Radiation", "Spectroscopy" ]
14,617,622
https://en.wikipedia.org/wiki/Pugh%27s%20closing%20lemma
In mathematics, Pugh's closing lemma is a result that links periodic orbit solutions of differential equations to chaotic behaviour. It can be formally stated as follows:

Let f be a C1 diffeomorphism of a compact smooth manifold M. Given a nonwandering point x of f, there exists a diffeomorphism g arbitrarily close to f in the C1 topology such that x is a periodic point of g.

Interpretation
Pugh's closing lemma means, for example, that any chaotic set in a bounded continuous dynamical system corresponds to a periodic orbit in a different but closely related dynamical system. As such, an open set of conditions on a bounded continuous dynamical system that rules out periodic behaviour also implies that the system cannot behave chaotically; this is the basis of some autonomous convergence theorems.

See also
Smale's problems
References
Further reading
Dynamical systems Lemmas in analysis Limit sets
Pugh's closing lemma
[ "Physics", "Mathematics" ]
184
[ "Limit sets", "Theorems in mathematical analysis", "Topology", "Mechanics", "Lemmas in mathematical analysis", "Lemmas", "Dynamical systems" ]
14,619,769
https://en.wikipedia.org/wiki/Shear%20legs
Shear legs, also known as sheers, shears, or sheer legs, are a form of two-legged lifting device. Shear legs may be permanent, formed of a solid A-frame and supports, as commonly seen on land and on floating sheerlegs, or temporary, as rigged aboard a vessel lacking a fixed crane or derrick. When fixed, they are often used for very heavy lifting, as in tank recovery, shipbuilding, and offshore salvage operations. At dockyards they hoist masts and other substantial rigging parts on board. They are sometimes temporarily rigged on sailboats for similar tasks.

Uses

On land
Shear legs are a lifting device related to the gin pole, derrick and tripod (lifting device). Shears are an A-frame of timber, metal, or other material, with the feet resting on or in the ground, or on a solid surface that prevents them from moving, and the top held in place with guy-wires or guy ropes, simply called "guys". Shear legs need only two guys, whereas a gin pole needs at least three. The U.S. Army Field Manual FM 5-125 gives detailed instructions on how to rig shears.

On water
Fixed shear legs are most commonly found on floating cranes known as floating sheerlegs. These have heavy A-frame booms, vary in lifting capacity between 50 and 4,000 tons, and are used principally in shipbuilding, other large-scale fabrication, cargo management, and salvage operations.

Temporary sheers comprise two upright spars, lashed together at their heads with their feet splayed apart. Unlike a gyn, which has three legs and is thus stable without support, sheers (like derricks and single-legged gin poles) depend on a guy for stability. The heels of the spars are secured by splay and heel tackles. The point at the top of the sheers where the spars cross and are lashed together is the "crutch", to which a block and tackle is attached. Unlike derricks, sheers need no lateral support, and require only either a foreguy and an aftguy, or a martingale and a topping lift. Being made of two spars rather than one, sheers are stronger than a derrick of the same size made of equivalent materials. Unlike the apex of a gyn, which is fixed, the crutch of a sheers can be topped up or lowered through a limited angle via the topping lift.

In the era of sailing vessels, it was common for dockyards to employ a sheer hulk, an old floating ship's hull fitted with sheer legs, used to install masts in other ships.

See also
Crane (machine)
Masting sheer
Sheerleg
References
Further reading
Sailing rigs and rigging Vertical transport devices Lifting equipment Cranes (machines)
Shear legs
[ "Physics", "Technology", "Engineering" ]
576
[ "Machines", "Transport systems", "Lifting equipment", "Physical systems", "Vertical transport devices", "Cranes (machines)", "Engineering vehicles" ]
14,621,035
https://en.wikipedia.org/wiki/Similarities%20between%20Wiener%20and%20LMS
The least mean squares (LMS) filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to reduce the current sample error instead of minimizing the total error over all of n, the LMS algorithm can be derived from the Wiener filter.

Derivation of the Wiener filter for system identification
Given a known input signal x[n], the output d[n] of an unknown LTI system can be expressed as:

d[n] = Σ_k h_k x[n−k] + v[n], k = 0, 1, ..., N−1

where h_k are the unknown filter tap coefficients and v[n] is noise. The model system output y[n], using a Wiener filter solution with an order N, can be expressed as:

y[n] = Σ_k ŵ_k x[n−k]

where ŵ_k are the filter tap coefficients to be determined. The error between the model and the unknown system can be expressed as:

e[n] = d[n] − y[n]

The total squared error E can be expressed as:

E = Σ_n e[n]² = Σ_n (d[n] − y[n])²

Use the minimum mean-square error criterion over all of n by setting its gradient to zero:

∂E/∂ŵ_i = 0 for all i = 0, 1, ..., N−1

Substitute the definition of y[n]:

∂E/∂ŵ_i = ∂/∂ŵ_i Σ_n (d[n] − Σ_k ŵ_k x[n−k])²

Distribute the partial derivative:

∂E/∂ŵ_i = −2 Σ_n (d[n] − Σ_k ŵ_k x[n−k]) x[n−i]

Using the definition of discrete cross-correlation, R_ab(m) = Σ_n a[n] b[n−m]:

R_dx(i) − Σ_k ŵ_k R_xx(i−k) = 0

Rearrange the terms:

Σ_k ŵ_k R_xx(i−k) = R_dx(i) for all i = 0, 1, ..., N−1

This system of N equations with N unknowns can be determined. The resulting coefficients of the Wiener filter can be determined, in matrix form, by:

Ŵ = R_xx⁻¹ P_dx

where R_xx is the N×N autocorrelation matrix of x and P_dx is the cross-correlation vector between d and x.

Derivation of the LMS algorithm
By relaxing the infinite sum of the Wiener filter to just the error at time n, the LMS algorithm can be derived. The squared error can be expressed as:

E = e[n]² = (d[n] − y[n])²

Using the minimum mean-square error criterion, take the gradient:

∂E/∂ŵ_i = ∂(d[n] − y[n])²/∂ŵ_i

Apply the chain rule and substitute the definition of y[n]:

∂E/∂ŵ_i = −2 e[n] x[n−i]

Using gradient descent and a step size μ:

ŵ_i[n+1] = ŵ_i[n] − (μ/2) ∂E/∂ŵ_i

which becomes, for i = 0, 1, ..., N−1,

ŵ_i[n+1] = ŵ_i[n] + μ e[n] x[n−i]

This is the LMS update equation.

See also
Wiener filter
Least mean squares filter
References
J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 4th ed., 2007.
Digital signal processing Filter theory
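The two derivations can be compared numerically. The following NumPy sketch identifies an illustrative three-tap FIR system both ways: by solving the Wiener normal equations from estimated correlations, and by running the LMS update; the system coefficients, noise level, and step size are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown 3-tap FIR system to identify (coefficients are arbitrary).
h_true = np.array([0.8, -0.4, 0.2])
N = len(h_true)

M = 5000
x = rng.standard_normal(M)                    # known input x[n]
d = np.convolve(x, h_true)[:M]                # unknown-system output d[n]
d += 0.01 * rng.standard_normal(M)            # measurement noise v[n]

# --- Wiener solution: solve sum_k w_k Rxx(i-k) = Rdx(i) from estimated correlations
r = np.array([x[k:] @ x[:M - k] / M for k in range(N)])
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
p = np.array([d[k:] @ x[:M - k] / M for k in range(N)])
w_wiener = np.linalg.solve(R, p)

# --- LMS: w_i[n+1] = w_i[n] + mu * e[n] * x[n-i]
mu = 0.01
w_lms = np.zeros(N)
for n in range(N - 1, M):
    x_vec = x[n - N + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-N+1]]
    e = d[n] - w_lms @ x_vec           # e[n] = d[n] - y[n]
    w_lms += mu * e * x_vec            # LMS update

print("true  :", h_true)
print("Wiener:", w_wiener.round(3))
print("LMS   :", w_lms.round(3))       # both should approach h_true
```

With a stationary input and enough samples, both estimates settle near the true taps, illustrating the convergence of the LMS solution to the Wiener solution stated at the top of this section.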
Similarities between Wiener and LMS
[ "Engineering" ]
416
[ "Telecommunications engineering", "Filter theory" ]
14,621,793
https://en.wikipedia.org/wiki/Lindemann%20mechanism
In chemical kinetics, the Lindemann mechanism (also called the Lindemann–Christiansen mechanism or the Lindemann–Hinshelwood mechanism) is a schematic reaction mechanism for unimolecular reactions. Frederick Lindemann and J.A. Christiansen proposed the concept almost simultaneously in 1921, and Cyril Hinshelwood developed it to take into account the energy distributed among vibrational degrees of freedom for some reaction steps. It breaks down an apparently unimolecular reaction into two elementary steps, with a rate constant for each elementary step. The rate law and rate equation for the entire reaction can be derived from the rate equations and rate constants for the two steps.

The Lindemann mechanism is used to model gas phase decomposition or isomerization reactions. Although the net formula for decomposition or isomerization appears to be unimolecular and suggests first-order kinetics in the reactant, the Lindemann mechanism shows that the unimolecular reaction step is preceded by a bimolecular activation step, so that the kinetics may actually be second-order in certain cases.

Activated reaction intermediates
The overall equation for a unimolecular reaction may be written A → P, where A is the initial reactant molecule and P is one or more products (one for isomerization, more for decomposition). A Lindemann mechanism typically includes an activated reaction intermediate, labeled A*. The activated intermediate is produced from the reactant only after a sufficient activation energy is acquired by collision with a second molecule M, which may or may not be similar to A. It then either deactivates from A* back to A by another collision, or reacts in a unimolecular step to produce the product(s) P. The two-step mechanism is then

A + M → A* + M (activation)
A* + M → A + M (deactivation)
A* → P (unimolecular reaction)

Rate equation in steady-state approximation
The rate equation for the rate of formation of product P may be obtained by using the steady-state approximation, in which the concentration of intermediate A* is assumed constant because its rates of production and consumption are (almost) equal. This assumption simplifies the calculation of the rate equation. For the schematic mechanism of two elementary steps above, rate constants are defined as k1 for the forward reaction rate of the first step, k−1 for the reverse reaction rate of the first step, and k2 for the forward reaction rate of the second step. For each elementary step, the order of reaction is equal to the molecularity.

The rate of production of the intermediate A* in the first elementary step is simply:

k1[A][M] (forward first step)

A* is consumed both in the reverse first step and in the forward second step. The respective rates of consumption of A* are:

k−1[A*][M] (reverse first step)
k2[A*] (forward second step)

According to the steady-state approximation, the rate of production of A* equals the rate of consumption. Therefore:

k1[A][M] = k−1[A*][M] + k2[A*]

Solving for [A*], it is found that

[A*] = k1[A][M] / (k−1[M] + k2)

The overall reaction rate is

d[P]/dt = k2[A*]

Now, by substituting the calculated value for [A*], the overall reaction rate can be expressed in terms of the original reactants A and M:

d[P]/dt = k1 k2 [A][M] / (k−1[M] + k2)

Reaction order and rate-determining step
The steady-state rate equation is of mixed order and predicts that a unimolecular reaction can be of either first or second order, depending on which of the two terms in the denominator is larger. At sufficiently low pressures, k−1[M] ≪ k2, so that d[P]/dt = k1[A][M], which is second order. That is, the rate-determining step is the first, bimolecular activation step. At higher pressures, however, k−1[M] ≫ k2, so that d[P]/dt = (k1 k2 / k−1)[A], which is first order, and the rate-determining step is the second step, i.e.
the unimolecular reaction of the activated molecule. The theory can be tested by defining an effective rate constant (or coefficient) k, which would be constant if the reaction were first order at all pressures:

k = k1 k2 [M] / (k−1[M] + k2), so that d[P]/dt = k[A]

The Lindemann mechanism predicts that k decreases with decreasing pressure, and that its reciprocal

1/k = k−1/(k1 k2) + 1/(k1[M])

is a linear function of 1/[M], or equivalently of 1/P. Experimentally, for many reactions, k does decrease at low pressure, but the graph of 1/k as a function of 1/[M] is quite curved. To account accurately for the pressure-dependence of rate constants for unimolecular reactions, more elaborate theories such as the RRKM theory are required.

Decomposition of dinitrogen pentoxide
In the Lindemann mechanism for a true unimolecular reaction, the activation step is followed by a single step corresponding to the formation of products. Whether this is actually true for any given reaction must be established from the evidence. Much early experimental investigation of the Lindemann mechanism involved study of the gas-phase decomposition of dinitrogen pentoxide: 2 N2O5 → 2 N2O4 + O2. This reaction was studied by Farrington Daniels and coworkers, and was initially assumed to be a true unimolecular reaction. However, it is now known to be a multistep reaction whose mechanism was established by Ogg as:

N2O5 ⇌ NO2 + NO3
NO2 + NO3 → NO2 + O2 + NO
NO + N2O5 → 3 NO2

An analysis using the steady-state approximation shows that this mechanism can also explain the observed first-order kinetics and the fall-off of the rate constant at very low pressures.

Mechanism of the isomerization of cyclopropane
The Lindemann–Hinshelwood mechanism explains unimolecular reactions that take place in the gas phase, and is usually applied to gas phase decomposition and isomerization reactions. An example of isomerization by a Lindemann mechanism is the isomerization of cyclopropane:

cyclo−C3H6 → CH3−CH=CH2

Although it seems like a simple reaction, it is actually a multistep reaction:

cyclo−C3H6 + M → cyclo−C3H6* + M (k1)
cyclo−C3H6* + M → cyclo−C3H6 + M (k−1)
cyclo−C3H6* → CH3−CH=CH2 (k2)

This isomerization can be explained by the Lindemann mechanism: once the cyclopropane reactant is excited by collision it becomes an energized cyclopropane molecule, which can then either be deactivated back to the reactant or go on to form propene, the product.

References
Reaction mechanisms
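The fall-off behaviour of the effective rate constant is easy to check numerically. In the sketch below the rate constants and bath-gas concentrations are arbitrary illustrative values, not data for any real reaction.

```python
# Effective first-order rate constant from the Lindemann mechanism:
#   k_uni = k1*k2*[M] / (k_1*[M] + k2)
k1, k_1, k2 = 1.0e6, 1.0e7, 1.0e2   # activation, deactivation, reaction (arbitrary units)

def k_uni(M):
    return k1 * k2 * M / (k_1 * M + k2)

for M in (1e-8, 1e-6, 1e-4, 1e-2):   # bath-gas concentration, e.g. mol/L
    print(f"[M] = {M:.0e}   k_uni = {k_uni(M):.3e}")
# At low [M]:  k_uni ~ k1*[M]        (second order overall)
# At high [M]: k_uni -> k1*k2/k_1    (first-order, high-pressure limit = 10 here)
```

Printing k_uni across several decades of [M] reproduces the fall-off described above: the value rises proportionally to [M] at low pressure and saturates at k1k2/k−1 at high pressure.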
Lindemann mechanism
[ "Chemistry" ]
1,288
[ "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry" ]
14,623,985
https://en.wikipedia.org/wiki/Kinetic%20chain%20length
In polymer chemistry, the kinetic chain length ($\nu$) of a polymer is the average number of units called monomers added to a growing chain during chain-growth polymerization. During this process, a polymer chain is formed when monomers are bonded together to form long chains known as polymers. Kinetic chain length is defined as the average number of monomers that react with an active center, such as a radical, from initiation to termination. This definition is a special case of the concept of chain length in chemical kinetics. For any chemical chain reaction, the chain length is defined as the average number of times that the closed cycle of chain propagation steps is repeated. It is equal to the rate of the overall reaction divided by the rate of the initiation step in which the chain carriers are formed. For example, the decomposition of ozone in water is a chain reaction which has been described in terms of its chain length. In chain-growth polymerization the propagation step is the addition of a monomer to the growing chain. The word kinetic is added to chain length in order to distinguish the number of reaction steps in the kinetic chain from the number of monomers in the final macromolecule, a quantity named the degree of polymerization. In fact the kinetic chain length is one factor which influences the average degree of polymerization, but there are other factors as described below. The kinetic chain length and therefore the degree of polymerization can influence certain physical properties of the polymer, including chain mobility, glass-transition temperature, and modulus of elasticity. Calculating chain length For most chain-growth polymerizations, the propagation steps are much faster than the initiation steps, so that each growing chain is formed in a short time compared to the overall polymerization reaction. During the formation of a single chain, the reactant concentrations and therefore the propagation rate remain effectively constant. Under these conditions, the ratio of the number of propagation steps to the number of initiation steps is just the ratio of reaction rates: $\nu = \frac{R_p}{R_i} = \frac{R_p}{R_t}$, where $R_p$ is the rate of propagation, $R_i$ is the rate of initiation of polymerization, and $R_t$ is the rate of termination of the polymer chain. The second form of the equation is valid at steady-state polymerization, as the chains are being initiated at the same rate they are being terminated ($R_i = R_t$). An exception is the class of living polymerizations, in which propagation is much slower than initiation, and chain termination does not occur until a quenching agent is added. In such reactions the reactant monomer is slowly consumed and the propagation rate varies, so it is not used to obtain the kinetic chain length. Instead, the length at a given time is usually written as $\nu = \frac{\Delta[\mathrm{M}]}{[\mathrm{I}]}$, where $\Delta[\mathrm{M}]$ represents the number of monomer units consumed and $[\mathrm{I}]$ the number of radicals that initiate polymerization. When the reaction goes to completion, $\Delta[\mathrm{M}] = [\mathrm{M}]_0$, and then the kinetic chain length is equal to the number average degree of polymerization of the polymer. In both cases kinetic chain length is an average quantity, as not all polymer chains in a given reaction are identical in length. The value of $\nu$ depends on the nature and concentration of both the monomer and initiator involved. Kinetic chain length and degree of polymerization In chain-growth polymerization, the degree of polymerization depends not only on the kinetic chain length but also on the type of termination step and the possibility of chain transfer.
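The steady-state calculation above can be illustrated with a short sketch. The code below is not from the source article; all rate constants and concentrations are placeholder values for a hypothetical monomer/initiator pair, and note that the factor-of-2 conventions for initiation and termination rates vary between textbooks.

```python
# Minimal sketch: kinetic chain length for a steady-state free-radical
# polymerization. All numbers are illustrative placeholders.
import math

kp, kt = 2.0e2, 1.0e7    # propagation / termination rate constants, L mol^-1 s^-1
kd, f  = 1.0e-5, 0.5     # initiator decomposition constant (s^-1) and efficiency
M, I   = 1.0, 1.0e-3     # monomer and initiator concentrations, mol L^-1

radicals = math.sqrt(f * kd * I / kt)  # steady-state radical concentration [M*]
Rp = kp * M * radicals                 # propagation rate
Rt = 2 * kt * radicals**2              # termination rate (radicals consumed)

nu = Rp / Rt                           # kinetic chain length, nu = Rp/Rt
print(f"[M*] = {radicals:.2e} mol/L, nu = {nu:.0f}")
```

With these placeholder values the steady-state radical concentration is of order 1e-8 mol/L and the kinetic chain length comes out in the hundreds, a typical order of magnitude for radical polymerizations.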
Termination by disproportionation Termination by disproportionation occurs when an atom is transferred from one polymer free radical to another. The atom is usually hydrogen, and this results in two polymer chains. With this type of termination and no chain transfer, the number average degree of polymerization ($\overline{DP}_n$) is then equal to the average kinetic chain length: $\overline{DP}_n = \nu$. Termination by combination Combination simply means that two radicals are joined together, destroying the radical character of each and forming one polymeric chain. With no chain transfer, the average degree of polymerization is then twice the average kinetic chain length: $\overline{DP}_n = 2\nu$. Chain transfer Some chain-growth polymerizations include chain transfer steps, in which another atom (often hydrogen) is transferred from a molecule in the system to the polymer radical. The original polymer chain is terminated and a new one is initiated. The kinetic chain is not terminated if the new radical can add monomer. However, the degree of polymerization is reduced without affecting the rate of polymerization (which depends on kinetic chain length), since two (or more) macromolecules are formed instead of one. For the case of termination by disproportionation, the degree of polymerization becomes $\overline{DP}_n = \frac{R_p}{R_t + R_{tr}}$, where $R_{tr}$ is the rate of transfer. The greater $R_{tr}$ is, the shorter the final macromolecule. Significance The kinetic chain length is important in determining the degree of polymerization, which in turn influences many physical properties of the polymer. Viscosity - Chain entanglements are very important in the viscous flow behavior (viscosity) of polymers. As the chain becomes longer, chain mobility decreases; that is, the chains become more entangled with each other. Glass-transition temperature - An increase in chain length often leads to an increase in the glass-transition temperature, Tg. The increased chain length causes the chains to become more entangled at a given temperature. Therefore, the temperature does not need to be as low for the material to act as a solid. Modulus of Elasticity - A longer chain length is also associated with a tougher material and a higher modulus of elasticity, E, also known as Young's modulus. The interaction of the chains causes the polymer to become stiffer. References Polymer chemistry Chemical kinetics
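A minimal sketch, not from the source article, of how the termination mode and chain transfer map the kinetic chain length onto the number-average degree of polymerization as stated above; the rate values are illustrative placeholders.

```python
# Placeholder rates in mol L^-1 s^-1; the formulas are exactly those
# stated in the article text above.
Rp, Rt, Rtr = 4.5e-6, 1.0e-8, 5.0e-9

nu = Rp / Rt
dp_disproportionation = nu               # DPn = nu
dp_combination        = 2 * nu           # DPn = 2*nu
dp_with_transfer      = Rp / (Rt + Rtr)  # disproportionation + chain transfer

print(f"nu = {nu:.0f}")
print(f"DPn (disproportionation) = {dp_disproportionation:.0f}")
print(f"DPn (combination)        = {dp_combination:.0f}")
print(f"DPn (with transfer)      = {dp_with_transfer:.0f}")
```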
Kinetic chain length
[ "Chemistry", "Materials_science", "Engineering" ]
1,131
[ "Chemical kinetics", "Chemical reaction engineering", "Materials science", "Polymer chemistry" ]
9,461,236
https://en.wikipedia.org/wiki/Riemann%20problem
A Riemann problem, named after Bernhard Riemann, is a specific initial value problem composed of a conservation equation together with piecewise constant initial data which has a single discontinuity in the domain of interest. The Riemann problem is very useful for the understanding of equations like the Euler conservation equations because all properties, such as shocks and rarefaction waves, appear as characteristics in the solution. It also gives an exact solution to some complex nonlinear equations, such as the Euler equations. In numerical analysis, Riemann problems appear in a natural way in finite volume methods for the solution of conservation law equations due to the discreteness of the grid. For that reason it is widely used in computational fluid dynamics and in computational magnetohydrodynamics simulations. In these fields, Riemann problems are calculated using Riemann solvers. The Riemann problem in linearized gas dynamics As a simple example, we investigate the properties of the one-dimensional Riemann problem in gas dynamics (Toro, Eleuterio F. (1999). Riemann Solvers and Numerical Methods for Fluid Dynamics, p. 44, Example 2.5). The initial conditions are given by $\mathbf{U}(x, 0) = \mathbf{U}_L$ for $x \le 0$ and $\mathbf{U}(x, 0) = \mathbf{U}_R$ for $x > 0$, where x = 0 separates two different states, together with the linearised gas dynamic equations (see gas dynamics for derivation) $\partial_t \rho + \rho_0\, \partial_x u = 0$ and $\partial_t u + \frac{a^2}{\rho_0}\, \partial_x \rho = 0$, where $\rho_0$ is the background density and we can assume without loss of generality $a \ge 0$ for the speed of sound. We can now rewrite the above equations in a conservative form $\mathbf{U}_t + \mathbf{A}\,\mathbf{U}_x = 0$, where $\mathbf{U} = \begin{pmatrix} \rho \\ u \end{pmatrix}$, $\mathbf{A} = \begin{pmatrix} 0 & \rho_0 \\ a^2/\rho_0 & 0 \end{pmatrix}$, and the index denotes the partial derivative with respect to the corresponding variable (i.e. x or t). The eigenvalues of the system are the characteristics of the system, $\lambda_1 = -a$ and $\lambda_2 = a$. They give the propagation speed of the medium, including that of any discontinuity, which is the speed of sound here. The corresponding eigenvectors are $\mathbf{e}^{(1)} = \begin{pmatrix} \rho_0 \\ -a \end{pmatrix}$ and $\mathbf{e}^{(2)} = \begin{pmatrix} \rho_0 \\ a \end{pmatrix}$. By decomposing the left state in terms of the eigenvectors, we get $\mathbf{U}_L = \alpha_1 \mathbf{e}^{(1)} + \alpha_2 \mathbf{e}^{(2)}$ for some coefficients $\alpha_1, \alpha_2$. Now we can solve for $\alpha_1$ and $\alpha_2$: $\alpha_1 = \frac{a \rho_L - \rho_0 u_L}{2 a \rho_0}$, $\alpha_2 = \frac{a \rho_L + \rho_0 u_L}{2 a \rho_0}$. Analogously, for the right state, $\mathbf{U}_R = \beta_1 \mathbf{e}^{(1)} + \beta_2 \mathbf{e}^{(2)}$. Using this, in the domain in between the two characteristics $x/t = \pm a$, we get the final constant solution $\mathbf{U}^* = \beta_1 \mathbf{e}^{(1)} + \alpha_2 \mathbf{e}^{(2)}$, and the (piecewise constant) solution in the entire domain: $\mathbf{U}(x, t) = \mathbf{U}_L$ for $x/t < -a$, $\mathbf{U}(x, t) = \mathbf{U}^*$ for $-a < x/t < a$, and $\mathbf{U}(x, t) = \mathbf{U}_R$ for $x/t > a$. Although this is a simple example, it still shows the basic properties. Most notably, the characteristics decompose the solution into three domains. The propagation speed of these two equations is equivalent to the propagation speed of sound. The fastest characteristic defines the Courant–Friedrichs–Lewy (CFL) condition, which sets the restriction for the maximum time step for which an explicit numerical method is stable. Generally as more conservation equations are used, more characteristics are involved. References See also Computational fluid dynamics Computational magnetohydrodynamics Riemann solver Conservation equations Fluid dynamics Computational fluid dynamics Bernhard Riemann
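The exact solution derived above is short enough to evaluate directly. The sketch below is not from the source article; the left and right states and the constants rho0 and a are illustrative placeholders.

```python
# Exact solution of the linearised gas-dynamics Riemann problem.
import numpy as np

rho0, a = 1.0, 1.0                 # background density, speed of sound
e1 = np.array([rho0, -a])          # eigenvector for lambda1 = -a
e2 = np.array([rho0,  a])          # eigenvector for lambda2 = +a

def coeffs(rho, u):
    """Expansion coefficients of (rho, u) in the eigenvector basis."""
    return ((a * rho - rho0 * u) / (2 * a * rho0),
            (a * rho + rho0 * u) / (2 * a * rho0))

UL, UR = (1.0, 0.0), (0.5, 0.0)    # piecewise-constant initial data
a1, a2 = coeffs(*UL)
b1, b2 = coeffs(*UR)
U_star = b1 * e1 + a2 * e2         # constant state between the two waves

def solution(x, t):
    """Sample the self-similar solution U(x, t)."""
    if x < -a * t:
        return np.array(UL)
    if x > a * t:
        return np.array(UR)
    return U_star

print(U_star)             # -> [0.75 0.25] for the data above
print(solution(0.0, 1.0)) # the star state sits between the characteristics
```

For these placeholder states the star region has density 0.75 and velocity 0.25, i.e. the initial density jump is split evenly between the two acoustic waves, as the eigenvector decomposition predicts.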
Riemann problem
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
540
[ "Computational fluid dynamics", "Chemical engineering", "Conservation laws", "Mathematical objects", "Equations", "Computational physics", "Piping", "Fluid dynamics", "Conservation equations", "Symmetry", "Physics theorems" ]
9,461,390
https://en.wikipedia.org/wiki/Riemann%20solver
A Riemann solver is a numerical method used to solve a Riemann problem. They are heavily used in computational fluid dynamics and computational magnetohydrodynamics. Definition Generally speaking, Riemann solvers are specific methods for computing the numerical flux across a discontinuity in the Riemann problem. They form an important part of high-resolution schemes; typically the right and left states for the Riemann problem are calculated using some form of nonlinear reconstruction, such as a flux limiter or a WENO method, and then used as the input for the Riemann solver. Exact solvers Sergei K. Godunov is credited with introducing the first exact Riemann solver for the Euler equations, by extending the previous CIR (Courant–Isaacson–Rees) method to non-linear systems of hyperbolic conservation laws. Modern solvers are able to simulate relativistic effects and magnetic fields. More recent research shows that an exact series solution to the Riemann problem exists, which may converge fast enough in some cases to avoid the iterative methods required in Godunov's scheme. Approximate solvers As iterative solutions are too costly, especially in magnetohydrodynamics, some approximations have to be made. Some popular solvers are: Roe solver Philip L. Roe used the linearisation of the Jacobian, which he then solves exactly. HLLE solver The HLLE solver (developed by Ami Harten, Peter Lax, Bram van Leer and Einfeldt) is an approximate solution to the Riemann problem, which is only based on the integral form of the conservation laws and the largest and smallest signal velocities at the interface. The stability and robustness of the HLLE solver are closely related to the signal velocities and a single central average state, as proposed by Einfeldt in the original paper. HLLC solver The HLLC (Harten–Lax–van Leer–Contact) solver was introduced by Toro. It restores the missing rarefaction wave by using an estimation technique, such as linearisation. More advanced techniques exist, like using the Roe average velocity for the middle wave speed. These schemes are quite robust and efficient but somewhat more diffusive. Rotated-hybrid Riemann solvers These solvers were introduced by Hiroaki Nishikawa and Kitamura in order to overcome the carbuncle problems of the Roe solver and the excessive diffusion of the HLLE solver at the same time. They developed robust and accurate Riemann solvers by combining the Roe solver and the HLLE/Rusanov solvers: they show that, when applied in two orthogonal directions, the two Riemann solvers can be combined into a single Roe-type solver (the Roe solver with modified wave speeds). In particular, the one derived from the Roe and HLLE solvers, called the Rotated-RHLL solver, is extremely robust (carbuncle-free for all possible test cases on both structured and unstructured grids) and accurate (as accurate as the Roe solver for boundary layer calculations). Other solvers There are a variety of other solvers available, including more variants of the HLL scheme and solvers based on flux-splitting via characteristic decomposition. Notes See also Godunov's scheme Computational fluid dynamics Computational magnetohydrodynamics References External links Numerical analysis Computational fluid dynamics Conservation equations Bernhard Riemann
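To make the HLL family of solvers concrete, here is a minimal sketch, not from the source article, of the single-interface HLL flux formula that underlies the HLLE solver, written for a generic one-dimensional conservation law. The flux function and wave-speed estimates are supplied by the caller; the advection example at the end is purely illustrative.

```python
# Schematic HLL flux for u_t + f(u)_x = 0, given wave-speed bounds sL <= sR.
import numpy as np

def hll_flux(UL, UR, flux, sL, sR):
    """Single-interface HLL flux from left/right states and wave-speed bounds."""
    FL, FR = flux(UL), flux(UR)
    if sL >= 0.0:
        return FL                 # all waves move to the right
    if sR <= 0.0:
        return FR                 # all waves move to the left
    # subsonic case: integral average over the Riemann fan
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Example with linear advection u_t + c*u_x = 0 and crude bounds -c, +c:
c = 1.0
f = lambda u: c * u
print(hll_flux(np.array([1.0]), np.array([0.0]), f, -c, c))
```

In a finite volume scheme this flux would be evaluated at every cell interface after reconstruction; better wave-speed estimates (e.g. Einfeldt's) reduce the numerical diffusion the article mentions.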
Riemann solver
[ "Physics", "Chemistry", "Mathematics" ]
715
[ "Computational fluid dynamics", "Conservation laws", "Mathematical objects", "Computational mathematics", "Equations", "Computational physics", "Mathematical relations", "Numerical analysis", "Fluid dynamics", "Conservation equations", "Approximations", "Symmetry", "Physics theorems" ]
9,462,323
https://en.wikipedia.org/wiki/Wilhelmy%20plate
A Wilhelmy plate is a thin plate that is used to measure equilibrium surface or interfacial tension at an air–liquid or liquid–liquid interface. In this method, the plate is oriented perpendicular to the interface, and the force exerted on it is measured. Based on the work of Ludwig Wilhelmy, this method finds wide use in the preparation and monitoring of Langmuir films. Detailed description The Wilhelmy plate consists of a thin plate usually on the order of a few square centimeters in area. The plate is often made from filter paper, glass or platinum, which may be roughened to ensure complete wetting. In fact, the results of the experiment do not depend on the material used, as long as the material is wetted by the liquid. The plate is cleaned thoroughly and attached to a balance with a thin metal wire. The force on the plate due to wetting is measured using a tensiometer or microbalance and used to calculate the surface tension ($\gamma$) using the Wilhelmy equation: $\gamma = \frac{F}{l \cos \theta}$, where $l$ is the wetted perimeter ($l = 2w + 2d$), $w$ is the plate width, $d$ is the plate thickness, and $\theta$ is the contact angle between the liquid phase and the plate. In practice the contact angle is rarely measured; instead, either literature values are used or complete wetting ($\theta = 0$) is assumed. In general, surface tension may be measured with high sensitivity using very thin plates ranging in thickness from 0.1 to 0.002 mm. The device is calibrated with pure liquids like water and ethanol. The buoyancy correction is minimized by using a thin plate and dipping it as little as feasible. Complete wetting by water is achieved by using commercially available platinum plates that have been roughened to improve wettability. Advantages If complete wetting is assumed ($\theta = 0$), no correction factors are required to calculate surface tensions when using the Wilhelmy plate, unlike for a du Noüy ring. In addition, because the plate is not moved during measurements, the Wilhelmy plate allows accurate determination of surface kinetics on a wide range of timescales, and it displays low operator variance. In a typical plate experiment, the plate is lowered to the surface being analyzed until a meniscus is formed, and then raised so that the bottom edge of the plate lies on the plane of the undisturbed surface. If measuring a buried interface, the second (less dense) phase is then added on top of the undisturbed primary (denser) phase in such a way as to not disturb the meniscus. The force at equilibrium can then be used to determine the absolute surface or interfacial tension. Due to the large wetted area of the plate, the measurement is less susceptible to measurement errors than measurements made with a smaller probe. The method has also been described in several international measurement standards. See also Tensiometer (surface tension) du Noüy ring method Sessile drop technique Further reading Holmberg, K (ed.) Handbook of Applied Surface and Colloid Chemistry New York, Wiley and Sons: 2002. Vol. 2, p. 219 References Laboratory equipment Materials science
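A short sketch, not from the source article, of the Wilhelmy equation in code; the force reading and plate dimensions are illustrative placeholders chosen so the result lands near the well-known surface tension of clean water (about 72 mN/m) at room temperature.

```python
# gamma = F / (l * cos(theta)), with wetted perimeter l = 2*(w + d).
import math

def surface_tension(force_N, width_m, thickness_m, contact_angle_deg=0.0):
    wetted_perimeter = 2.0 * (width_m + thickness_m)
    return force_N / (wetted_perimeter * math.cos(math.radians(contact_angle_deg)))

# A 19.9 mm x 0.1 mm plate pulling 2.9 mN, assuming complete wetting:
gamma = surface_tension(2.9e-3, 19.9e-3, 0.1e-3)
print(f"gamma = {gamma * 1e3:.1f} mN/m")   # -> 72.5 mN/m
```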
Wilhelmy plate
[ "Physics", "Materials_science", "Engineering" ]
635
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
9,463,516
https://en.wikipedia.org/wiki/Delbr%C3%BCck%20scattering
Delbrück scattering, the deflection of high-energy photons in the Coulomb field of nuclei as a consequence of vacuum polarization, was observed in 1975. The related process of the scattering of light by light, also a consequence of vacuum polarization, was not observed until 1998. In both cases, it is a process described by quantum electrodynamics. Discovery From 1932 to 1937, Max Delbrück worked in Berlin as an assistant to Lise Meitner, who was collaborating with Otto Hahn on the results of irradiating uranium with neutrons. During this period he wrote a few papers, one of which turned out to be an important contribution on the scattering of gamma rays by a Coulomb field due to polarization of the vacuum produced by that field (1933). His conclusion proved to be theoretically sound but inapplicable to the case in point; 20 years later, Hans Bethe confirmed the phenomenon and named it "Delbrück scattering". In 1953, Robert Wilson observed Delbrück scattering of 1.33 MeV gamma rays by the electric fields of lead nuclei. Description Delbrück scattering is the coherent elastic scattering of photons in the Coulomb field of heavy nuclei. It is one of the two nonlinear effects of quantum electrodynamics (QED) in the Coulomb field investigated experimentally. The other is the splitting of a photon into two photons. Delbrück scattering was introduced by Max Delbrück in order to explain discrepancies between experimental and predicted data in a Compton scattering experiment on heavy atoms carried out by Meitner and Kösters. Delbrück's arguments were based on the relativistic quantum mechanics of Dirac, according to which the QED vacuum is filled with electrons of negative energy or – in modern terms – with electron-positron pairs. These electrons of negative energy should be capable of producing coherent-elastic photon scattering because the recoil momentum during absorption and emission of the photon is transferred to the total atom while the electrons remain in their state of negative energy. This process is the analog of atomic Rayleigh scattering, with the only difference that in the latter case the electrons are bound in the electron cloud of the atom. The experiment of Meitner and Kösters was the first in a series of experiments where the discrepancy between experimental and predicted differential cross sections for elastic scattering by heavy atoms was interpreted in terms of Delbrück scattering. From the present point of view these early results are not trustworthy. Reliable investigations were possible only after modern QED techniques based on Feynman diagrams were available for quantitative predictions, and, on the experimental side, photon detectors with high energy resolution and high detection efficiency had been developed. This was the case at the beginning of the 1970s, when computers with high computing capacity were also in operation, which delivered numerical results for Delbrück scattering amplitudes with sufficient precision. A first observation of Delbrück scattering was achieved in a high-energy, small-angle photon scattering experiment carried out at DESY (Germany) in 1973, where only the imaginary part of the scattering amplitude is of importance. Agreement was obtained with the predictions of Cheng and Wu, which were later verified by Milstein and Strakhovenko. These latter authors make use of a quasi-classical approximation that is very different from the one of Cheng and Wu. It could however be shown that both approximations are equivalent and lead to the same numerical results.
The essential breakthrough came with the Göttingen (Germany) experiment in 1975, carried out at an energy of 2.754 MeV. In the Göttingen experiment Delbrück scattering was observed as the dominant contribution to the coherent-elastic scattering process, in addition to minor contributions stemming from atomic Rayleigh scattering and nuclear Rayleigh scattering. This experiment was the first where exact predictions based on Feynman diagrams were confirmed with high precision and, therefore, has to be considered the first definite observation of Delbrück scattering. For a comprehensive description of the present status of Delbrück scattering, see the references. Nowadays, the most accurate measurements of high-energy Delbrück scattering are performed at the Budker Institute of Nuclear Physics (BINP) in Novosibirsk (Russia). The experiment where photon splitting was observed for the first time was also performed at the BINP. There are a number of experimental works published previously to the 1975 Göttingen experiment (or even to the DESY 1973 one). Most notable are those of Jackson and Wetzel in 1969 and of Moreh and Kahane in 1973. Both of these works used higher-energy gamma rays than the Göttingen experiment, conferring a higher contribution of the Delbrück scattering to the overall measured cross section. In general, in the low-energy nuclear physics region, i.e. below 10–20 MeV, a Delbrück experiment measures a number of competing coherent processes, including Rayleigh scattering from electrons, Thomson scattering from the point nucleus and nuclear excitation via the giant dipole resonance. Apart from the Thomson scattering, which is well known, the other two (namely Rayleigh and GDR) have considerable uncertainties. The interference of these effects with Delbrück scattering is by no means "minor" at these classical nuclear physics energies. Even at very forward scattering angles, where Delbrück scattering is very strong, there is a substantial interference with the Rayleigh scattering, the amplitudes of both effects being of the same order of magnitude. References Quantum electrodynamics Scattering
Delbrück scattering
[ "Physics", "Chemistry", "Materials_science" ]
1,095
[ "Condensed matter physics", "Scattering", "Particle physics", "Nuclear physics" ]
13,471,652
https://en.wikipedia.org/wiki/Generalized%20forces
In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces $\mathbf{F}_i$, $i = 1, \ldots, n$, acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate. Virtual work Generalized forces can be obtained from the computation of the virtual work, $\delta W$, of the applied forces. The virtual work of the forces, $\mathbf{F}_i$, acting on the particles $P_i$, $i = 1, \ldots, n$, is given by $\delta W = \sum_{i=1}^{n} \mathbf{F}_i \cdot \delta \mathbf{r}_i$, where $\delta \mathbf{r}_i$ is the virtual displacement of the particle $P_i$. Generalized coordinates Let the position vectors of each of the particles, $\mathbf{r}_i$, be a function of the generalized coordinates, $q_j$, $j = 1, \ldots, m$. Then the virtual displacements are given by $\delta \mathbf{r}_i = \sum_{j=1}^{m} \frac{\partial \mathbf{r}_i}{\partial q_j} \delta q_j$, where $\delta q_j$ is the virtual displacement of the generalized coordinate $q_j$. The virtual work for the system of particles becomes $\delta W = \sum_{i=1}^{n} \mathbf{F}_i \cdot \sum_{j=1}^{m} \frac{\partial \mathbf{r}_i}{\partial q_j} \delta q_j$. Collect the coefficients of $\delta q_j$ so that $\delta W = \sum_{j=1}^{m} \left( \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial \mathbf{r}_i}{\partial q_j} \right) \delta q_j$. Generalized forces The virtual work of a system of particles can be written in the form $\delta W = Q_1 \delta q_1 + \cdots + Q_m \delta q_m$, where $Q_j = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial \mathbf{r}_i}{\partial q_j}$, $j = 1, \ldots, m$, are called the generalized forces associated with the generalized coordinates $q_j$. Velocity formulation In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle $P_i$ be $\mathbf{v}_i$; then the virtual displacement can also be written in the form $\delta \mathbf{r}_i = \sum_{j=1}^{m} \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j} \delta q_j$. This means that the generalized force, $Q_j$, can also be determined as $Q_j = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j}$. D'Alembert's principle D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, $P_i$, of mass $m_i$ is $\mathbf{F}_i^* = -m_i \mathbf{a}_i$, $i = 1, \ldots, n$, where $\mathbf{a}_i$ is the acceleration of the particle. If the configuration of the particle system depends on the generalized coordinates $q_j$, $j = 1, \ldots, m$, then the generalized inertia force is given by $Q_j^* = \sum_{i=1}^{n} \mathbf{F}_i^* \cdot \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j}$. D'Alembert's form of the principle of virtual work yields $\delta W = \sum_{j=1}^{m} \left( Q_j + Q_j^* \right) \delta q_j = 0$. See also Lagrangian mechanics Generalized coordinates Degrees of freedom (physics and chemistry) Virtual work References Mechanical quantities Classical mechanics Lagrangian mechanics
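As a worked illustration, the sketch below (not from the source article) computes the generalized force $Q = \mathbf{F} \cdot \partial \mathbf{r} / \partial q$ for a planar pendulum under gravity with SymPy; the pendulum setup and symbol names are assumptions made for the example.

```python
# Generalized force for a planar pendulum, q = theta.
import sympy as sp

m, g, L = sp.symbols('m g L', positive=True)
theta = sp.symbols('theta')

# Position of the bob (y measured upward) and the applied gravity force.
r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])
F = sp.Matrix([0, -m * g])

Q = F.dot(r.diff(theta))      # Q = F . (dr/dq)
print(sp.simplify(Q))         # -> -L*g*m*sin(theta)
```

The result, $Q = -mgL\sin\theta$, is the familiar restoring torque conjugate to the angle coordinate.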
Generalized forces
[ "Physics", "Mathematics" ]
401
[ "Mechanical quantities", "Physical quantities", "Quantity", "Lagrangian mechanics", "Classical mechanics", "Mechanics", "Dynamical systems" ]
13,473,033
https://en.wikipedia.org/wiki/Bending%20stiffness
The bending stiffness ($K$) is the resistance of a member against bending deflection/deformation. It is a function of the Young's modulus $E$, the second moment of area $I$ of the beam cross-section about the axis of interest, the length of the beam, and the beam boundary condition. The bending stiffness of a beam can analytically be derived from the equation of beam deflection when a force is applied to it: $K = \frac{F}{w}$, where $F$ is the applied force and $w$ is the deflection. According to elementary beam theory, the relationship between the applied bending moment $M$ and the resulting curvature of the beam is $M = EI \frac{\mathrm{d}^2 w}{\mathrm{d}x^2}$, where $w$ is the deflection of the beam and $x$ is the distance along the beam. Double integration of the above equation leads to computing the deflection of the beam, and in turn, the bending stiffness of the beam. Bending stiffness in beams is also known as flexural rigidity. See also Applied mechanics Beam theory Bending Stiffness References External links Efunda's beam calculator Beam theory Continuum mechanics Structural analysis
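A minimal sketch, not from the source article, showing how the boundary condition enters: for a cantilever with a load at its tip, double integration of the moment-curvature relation gives the textbook tip deflection $w = FL^3/(3EI)$, hence $K = 3EI/L^3$. All numbers are illustrative, and other boundary conditions give different coefficients.

```python
# Bending stiffness K = F/w for an end-loaded cantilever: K = 3*E*I/L^3.
def cantilever_bending_stiffness(E, I, L):
    return 3.0 * E * I / L**3

E = 200e9            # Young's modulus of steel, Pa (typical value)
b = h = 0.02         # square cross-section, m
I = b * h**3 / 12.0  # second moment of area of a rectangle, m^4
print(f"K = {cantilever_bending_stiffness(E, I, 1.0):.0f} N/m")  # -> 8000 N/m
```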
Bending stiffness
[ "Physics", "Engineering" ]
208
[ "Structural engineering", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering" ]
13,473,221
https://en.wikipedia.org/wiki/Quantum%20bus
A quantum bus is a device which can be used to store or transfer information between independent qubits in a quantum computer, or combine two qubits into a superposition. It is the quantum analog of a classical bus. There are several physical systems that can be used to realize a quantum bus, including trapped ions, photons, and superconducting qubits. Trapped ions, for example, can use the quantized motion of ions (phonons) as a quantum bus, while photons can act as a carrier of quantum information by utilizing the increased interaction strength provided by cavity quantum electrodynamics. Circuit quantum electrodynamics, which uses superconducting qubits coupled to a microwave cavity on a chip, is another example of a quantum bus that has been successfully demonstrated in experiments. History The concept was first demonstrated by researchers at Yale University and the National Institute of Standards and Technology (NIST) in 2007. Prior to this experimental demonstration, the quantum bus had been described by scientists at NIST as one of the possible cornerstone building blocks in quantum computing architectures. Mathematical description A quantum bus for superconducting qubits can be built with a resonance cavity. The Hamiltonian for a system with qubit A, qubit B, and the resonance cavity or quantum bus connecting the two is $H = H_r + \sum_{j=A,B} H_j + \sum_{j=A,B} \hbar g_j \left( a^{\dagger} \sigma_j^{-} + a \sigma_j^{+} \right)$, where $H_r$ is the Hamiltonian of the resonance cavity, $H_j$ is the single qubit Hamiltonian, $\sigma_j^{\pm}$ is the raising or lowering operator for creating or destroying excitations in the $j$th qubit, and $g_j$ is controlled by the amplitude of the D.C. and radio frequency flux bias. References Quantum information science Quantum electronics
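A minimal sketch, not from the source article, of how a Hamiltonian of this Jaynes–Cummings-like form can be assembled as an explicit matrix with NumPy, in units where hbar = 1. The frequencies, couplings, and the Fock-space cutoff are illustrative placeholders, the cavity zero-point energy is omitted, and the qubit terms are assumed to take the common form (omega/2)*sigma_z.

```python
# Two qubits coupled through a cavity bus, built by tensor products.
import numpy as np

N = 5                                      # cavity Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
sm = np.array([[0, 0], [1, 0]])            # sigma^- in the basis (|e>, |g>)
sz = np.diag([1, -1])
I2, Ic = np.eye(2), np.eye(N)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

wr, wA, wB, gA, gB = 5.0, 4.8, 5.2, 0.1, 0.1     # GHz-scale placeholders
H  = wr * kron(a.conj().T @ a, I2, I2)           # cavity term
H += 0.5 * wA * kron(Ic, sz, I2)                 # qubit A
H += 0.5 * wB * kron(Ic, I2, sz)                 # qubit B
H += gA * (kron(a.conj().T, sm, I2) + kron(a, sm.conj().T, I2))  # bus, qubit A
H += gB * (kron(a.conj().T, Ic, sm) + kron(a, Ic, sm.conj().T))  # bus, qubit B

print(H.shape, np.allclose(H, H.conj().T))       # -> (20, 20) True
```

Diagonalizing such a matrix exposes the cavity-mediated (virtual-photon) coupling between the two qubits when both are detuned from the resonator.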
Quantum bus
[ "Physics", "Materials_science" ]
325
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Quantum physics stubs" ]
13,474,567
https://en.wikipedia.org/wiki/Block%20Error%20Rate
Block Error Rate (BLER) is the ratio of the number of erroneous blocks to the total number of blocks transmitted on a digital circuit. It is used in measuring the error rate when extracting data frames from a Compact Disc (CD). The BLER measurement is often used as a quality control measure with regard to how well audio is retained on a compact disc over time. BLER is also used for W-CDMA performance requirements tests (demodulation tests in multipath conditions, etc.). BLER is measured after channel de-interleaving and decoding by evaluating the Cyclic Redundancy Check (CRC) on each transport block. Block Error Rate (BLER) is used in LTE/4G technology to determine the in-sync or out-of-sync indication during radio link monitoring (RLM). Normal BLER is 2% for an in-sync condition and 10% for an out-of-sync condition. References Compact disc Audio software
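A minimal sketch, not from the source article, of computing BLER from per-block CRC outcomes, as in the transport-block evaluation described above; the CRC results are placeholder data.

```python
# BLER = erroneous blocks / total blocks, from per-block CRC pass/fail flags.
def block_error_rate(crc_ok):
    errors = sum(1 for ok in crc_ok if not ok)
    return errors / len(crc_ok)

crc_ok = [True] * 980 + [False] * 20   # 20 bad blocks out of 1000
print(f"BLER = {block_error_rate(crc_ok):.1%}")  # -> 2.0%, in-sync threshold
```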
Block Error Rate
[ "Engineering" ]
211
[ "Audio engineering", "Audio software" ]
13,474,685
https://en.wikipedia.org/wiki/Ground-effect%20vehicle
A ground-effect vehicle (GEV), also called a wing-in-ground-effect (WIGE or WIG), ground-effect craft/machine (GEM), wingship, flarecraft, surface effect vehicle or ekranoplan (Russian: экраноплан), is a vehicle that is able to move over the surface by gaining support from the reactions of the air against the surface of the earth or water. Typically, it is designed to glide over a level surface (usually over the sea) by making use of ground effect, the aerodynamic interaction between the moving wing and the surface below. Some models can operate over any flat area such as frozen lakes or flat plains, similar to a hovercraft. The term ground-effect vehicle originally referred to any craft utilizing ground effect, including what was later known as the hovercraft, in descriptions of patents during the 1950s. However, the term is nowadays regarded as distinct from air-cushion vehicles or hovercraft. The definition of GEVs does not include race cars utilizing ground effect for increased downforce. Design A ground-effect vehicle needs some forward velocity to produce lift dynamically, and the principal benefit of operating a wing in ground effect is to reduce its lift-dependent drag. The basic design principle is that the closer the wing operates to an external surface such as the ground, when it is said to be in ground effect, the less drag it experiences. An airfoil passing through air increases air pressure on the underside, while decreasing pressure across the top. The high and low pressures are maintained until they flow off the ends of the wings, where they form vortices which in turn are the major cause of lift-induced drag—normally a significant portion of the drag affecting an aircraft. The greater the span of a wing, the less induced drag created for each unit of lift and the greater the efficiency of the particular wing. This is the primary reason gliders have long wings. Placing the same wing near a surface such as the water or the ground has the same effect as increasing the aspect ratio, because the ground prevents wingtip vortices from expanding, but without having the complications associated with a long and slender wing, so that the short stubs on a GEV can produce just as much lift as the much larger wing on a transport aircraft, though it can do this only when close to the earth's surface. Once sufficient speed has built up, some GEVs may be capable of leaving ground effect and functioning as normal aircraft until they approach their destination. The distinguishing characteristic is that they are unable to land or take off without a significant amount of help from the ground effect cushion, and cannot climb until they have reached a much higher speed. A GEV is sometimes characterized as a transition between a hovercraft and an aircraft, although this is not correct, as a hovercraft is statically supported upon a cushion of pressurized air from an onboard downward-directed fan. Some GEV designs, such as the Russian Lun and Dingo, have used forced blowing under the wing by auxiliary engines to increase the high pressure area under the wing to assist the takeoff; however they differ from hovercraft in still requiring forward motion to generate sufficient lift to fly. Although the GEV may look similar to the seaplane and share many technical characteristics, it is generally not designed to fly out of ground effect. It differs from the hovercraft in lacking low-speed hover capability in much the same way that a fixed-wing airplane differs from the helicopter.
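The induced-drag reduction described in the Design section above can be illustrated numerically. The sketch below is not from the source article; it uses the McCormick approximation for the ground-effect influence factor, one of several published empirical fits, and all coefficients are illustrative placeholders.

```python
# Induced drag in ground effect, using McCormick's empirical factor
# phi = (16*h/b)^2 / (1 + (16*h/b)^2), where h is the height above the
# surface and b the wingspan. Values are illustrative only.
import math

def induced_drag_coefficient(CL, AR, h_over_b, e=0.8):
    phi = (16.0 * h_over_b) ** 2 / (1.0 + (16.0 * h_over_b) ** 2)
    return phi * CL ** 2 / (math.pi * e * AR)

CL, AR = 1.0, 4.0                        # lift coefficient, stubby GEV wing
for h_over_b in (0.05, 0.1, 0.5, 2.0):   # height above surface / wingspan
    cdi = induced_drag_coefficient(CL, AR, h_over_b)
    print(f"h/b = {h_over_b:4.2f}  ->  CDi = {cdi:.4f}")

# Very close to the surface (h/b = 0.05) the induced drag falls to roughly
# 40% of its out-of-ground-effect value, which is the effect the text
# describes; by h/b = 2 the factor is essentially 1 and the benefit is gone.
```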
Unlike the hydrofoil, it does not have any contact with the surface of the water when in "flight". The ground-effect vehicle constitutes a unique class of transportation. The Boston-based (United States) company REGENT proposed an electric-powered high-wing design with a standard hull for water operations that also incorporates fore- and aft-mounted hydrofoil units designed to lift the craft out of the water during the takeoff run, to facilitate lower liftoff speeds. Wing configurations Straight wing Used by the Russian Rostislav Alexeyev for his ekranoplan. The wings are significantly shorter than those of comparable aircraft, and this configuration requires a high aft-placed horizontal tail to maintain stability. The pitch and altitude stability comes from the lift slope difference between a front low wing in ground effect (commonly the main wing) and an aft, higher-located second wing nearly out of ground effect (generally named a stabilizer). Reverse-delta wing Developed by Alexander Lippisch, this wing allows stable flight in ground effect through self-stabilization. This is the main Class B form of GEV. Hanno Fischer later developed WIG craft based on the configuration, which were then transferred to multiple companies in Asia, thus becoming one of the "standards" in GEV design. Tandem wings Tandem wings can have three configurations: A biplane-style type-1 utilising a shoulder-mounted main lift wing and belly-mounted sponsons similar to those on combat and transport helicopters. A canard-style type-2 with a mid-size horizontal wing near the nose of the craft directing airflow under the main lift airfoil. This type-2 tandem design is a major improvement during takeoff, as it creates an air cushion to lift the craft above the water at a lower speed, thereby reducing water drag, which is the biggest obstacle to successful seaplane launches. Two stubby wings as in the tandem-airfoil flairboat produced by Günther Jörg in Germany. His particular design is self-stabilizing longitudinally. Advantages and disadvantages Given similar hull size and power, and depending on its specific design, the lower lift-induced drag of a GEV, as compared to an aircraft of similar capacity, will improve its fuel efficiency and, up to a point, its speed. GEVs are also much faster than surface vessels of similar power, because they avoid drag from the water. On the water the aircraft-like construction of GEVs increases the risk of damage in collisions with surface objects. Furthermore, the limited number of egress points makes it more difficult to evacuate the vehicle in an emergency. According to WST, the builders of the WIG craft WSH-500, GEVs furthermore have the advantage of avoiding conflict with ocean currents by flying over them. Since most GEVs are designed to operate from water, accidents and engine failure typically are less hazardous than in a land-based aircraft, but the lack of altitude control leaves the pilot with fewer options for avoiding collision, and to some extent that negates such benefits. Low altitude brings high-speed craft into conflict with ships, buildings and rising land, which may not be sufficiently visible in poor conditions to avoid. GEVs may be unable to climb over or turn sharply enough to avoid collisions, while drastic, low-level maneuvers risk contact with solid or water hazards beneath. Aircraft can climb over most obstacles, but GEVs are more limited.
In high winds, take-off must be into the wind, which takes the craft across successive lines of waves, causing heavy pounding, which stresses the craft and creates an uncomfortable ride. In light winds, waves may be in any direction, which can make control difficult as each wave causes the vehicle to both pitch and roll. The lighter construction of GEVs makes their ability to operate in higher sea states less than that of conventional ships, but greater than the ability of hovercraft or hydrofoils, which are closer to the water surface. As with conventional aircraft, greater power is needed for takeoff, and, like seaplanes, ground-effect vehicles must get on the step before they can accelerate to flight speed. Careful design, usually with multiple redesigns of hullforms, is required to get this right, which increases engineering costs. This obstacle is more difficult for GEVs with short production runs to overcome. For the vehicle to work, its hull needs to be stable enough longitudinally to be controllable, yet not so stable that it cannot lift off the water. The bottom of the vehicle must be formed to avoid excessive pressures on landing and taking off without sacrificing too much lateral stability, and it must not create too much spray, which damages the airframe and the engines. The Russian ekranoplans show evidence of fixes for these problems in the form of multiple chines on the forward part of the hull undersides and in the forward location of the jet engines. Finally, limited utility has kept production levels low enough that it has been impossible to amortize development costs sufficiently to make GEVs competitive with conventional aircraft. A 2014 study by students at NASA's Ames Research Center claims that use of GEVs for passenger travel could lead to cheaper flights, increased accessibility and less pollution. Classification One obstacle to GEV development is the classification and legislation to be applied. The International Maritime Organization has studied the application of rules based on the International Code of Safety for High-Speed Craft (HSC code), which was developed for fast ships such as hydrofoils, hovercraft, catamarans and the like. The Russian Rules for classification and construction of small type A ekranoplans is a document upon which most GEV design is based. However, in 2005, the IMO classified the WISE or GEV under the category of ships. The International Maritime Organization recognizes three types of GEVs: type A, craft certified for operation only in ground effect; type B, craft certified to temporarily increase altitude to a limited height outside the influence of ground effect; and type C, craft certified for operation outside of ground effect. At the time of writing, those classes only applied to craft carrying 12 passengers or more, and (as of 2019) there was disagreement between national regulatory agencies about whether these vehicles should be classified, and regulated, as aircraft or as boats. History By the 1920s, the ground effect phenomenon was well known, as pilots found that their airplanes appeared to become more efficient as they neared the runway surface during landing. In 1934 the US National Advisory Committee for Aeronautics issued Technical Memorandum 771, Ground Effect on the Takeoff and Landing of Airplanes, which was a translation into English of a summary of French research on the subject. The French author Maurice Le Sueur had added a suggestion based on this phenomenon: "Here the imagination of inventors is offered a vast field. The ground interference reduces the power required for level flight in large proportions, so here is a means of rapid and at the same time economic locomotion: Design an airplane which is always within the ground-interference zone.
At first glance this apparatus is dangerous because the ground is uneven and the altitude called skimming permits no freedom of maneuver. But on large-sized aircraft, over water, the question may be attempted ..." By the 1960s, the technology started maturing, in large part due to the independent contributions of Rostislav Alexeyev in the Soviet Union and the German Alexander Lippisch, working in the United States. Alexeyev worked from his background as a ship designer, whereas Lippisch worked as an aeronautical engineer. The influence of Alexeyev and Lippisch remains noticeable in most GEVs seen today. Canada It is said that the research hydrofoil HD-4 by Alexander Graham Bell had part of its dynamic lift contributed by its pair of wings operating in ground effect. However, it is dubious whether the designer was aware of the effect, given the relative infancy of aerodynamics. Avro Canada investigated aircraft with a Coanda-effect propulsion system. Such jets were supposed to create an air cushion below the airframe that would allow them to hover on the ground. In fact, the only test aircraft built could operate in no other mode, due to stability issues when taking off. The designs were later further developed by the United States, while Convair may have been inspired by them to create a preliminary design of a large ocean-going ground-effect ship called the Hydroskimmer. Soviet Union Led by Alexeyev, the Soviet Central Hydrofoil Design Bureau was the center of ground-effect craft development in the USSR. The vehicle came to be known as an ekranoplan (экраноплан, from экран "screen" + план "plane"; literally "screen effect", or ground effect in English). The military potential for such a craft was soon recognized, and Alexeyev received support and financial resources from Soviet leader Nikita Khrushchev. Some manned and unmanned prototypes were built, ranging up to eight tonnes in displacement. This led to the development of a 550-tonne military ekranoplan of 92 m length. The craft was dubbed the Caspian Sea Monster by U.S. intelligence experts, after a huge, unknown craft was spotted on satellite reconnaissance photos of the Caspian Sea area in the 1960s. With its short wings, it looked airplane-like in planform, but would probably be incapable of flight. Although it was designed to travel a maximum of a few metres above the sea, it was found to be most efficient at a somewhat greater height, reaching high speeds in research flights. The Soviet ekranoplan program continued with the support of Minister of Defence Dmitriy Ustinov. It produced the most successful ekranoplan so far, the 125-tonne A-90 Orlyonok. These craft were originally developed as high-speed military transports and were usually based on the shores of the Caspian Sea and Black Sea. The Soviet Navy ordered 120 Orlyonok-class ekranoplans, but this figure was later reduced to fewer than 30 vessels, with planned deployment mainly in the Black Sea and Baltic Sea fleets. A few Orlyonoks served with the Soviet Navy from 1979 to 1992. In 1987, the 400-tonne Lun-class ekranoplan was built as an anti-ship missile launch platform. A second Lun, renamed Spasatel, was laid down as a rescue vessel, but was never finished. The two major problems that the Soviet ekranoplans faced were poor longitudinal stability and a need for reliable navigation. Minister Ustinov died in 1984, and the new Minister of Defence, Marshal Sokolov, cancelled funding for the program.
Only three operational Orlyonok-class ekranoplans (with revised hull design) and one Lun-class ekranoplan remained at a naval base near Kaspiysk. Since the dissolution of the Soviet Union, ekranoplans have been produced by the Volga Shipyard in Nizhniy Novgorod. Smaller ekranoplans for non-military use have been under development. The CHDB had already developed the eight-seat Volga-2 in 1985, and Technologies and Transport is developing a smaller version called the Amphistar. Beriev proposed a large craft of the type, the Be-2500, as a "flying ship" cargo carrier, but nothing came of the project. United States of America During the 1950s, the US Navy investigated anti-submarine vessels operating on the ram effect, a product of ground effect. Such vessels were to use this effect to create an air cushion below the hulls that would allow hovering. If this was not possible, additional engines were to be used to blow air underneath the craft artificially. The project was designated RAM-2. Several other projects were proposed throughout the early Cold War, some using a similar mix of wings and lift engines while others were more akin to Russian types. More than a decade later, General Dynamics designed catamaran vessels exploiting ground effect and filed patents for them. Germany Lippisch Type and Hanno Fischer In Germany, Lippisch was asked to build a very fast boat for American businessman Arthur A. Collins. In 1963 Lippisch developed the X-112, a revolutionary design with reversed delta wing and T-tail. This design proved to be stable and efficient in ground effect, and even though it was successfully tested, Collins decided to stop the project and sold the patents to the German company Rhein Flugzeugbau (RFB), which further developed the inverse delta concept into the X-113 and the six-seat X-114. These craft could be flown out of ground effect so that, for example, peninsulas could be overflown. Hanno Fischer took over the works from RFB and created his own company, Fischer Flugmechanik, which eventually completed two models. The Airfisch 3 carried two persons, and the FS-8 carried six persons. The FS-8 was to be developed by Fischer Flugmechanik for a Singapore-Australian joint venture called Flightship. Powered by a V8 Chevrolet automobile engine rated at 337 kW, the prototype made its first flight in February 2001 in the Netherlands. The company no longer exists, but the prototype craft was bought by Wigetworks, a company based in Singapore, and renamed AirFish 8. In 2010, that vehicle was registered as a ship in the Singapore Registry of Ships. The University of Duisburg-Essen is supporting an ongoing research project to develop the Hoverwing. Günther Jörg-type tandem-airfoil flairboat German engineer Günther Jörg, who had worked on Alexeyev's first designs and was familiar with the challenges of GEV design, developed a GEV with two wings in a tandem arrangement, the Jörg-II. It was the third, manned, tandem-airfoil boat, named "Skimmerfoil", which was developed during his consultancy period in South Africa. It was a simple and low-cost design of a first 4-seater tandem-airfoil flairboat completely constructed of aluminium. The prototype was in the SAAF Port Elizabeth Museum from 4 July 2007 until 2013, and is now in private use. Pictures of the museum show the boat after some years outside the museum and without protection against the sun.
The consultancy of Günther Jörg, a specialist and insider of the German airplane industry from 1963 and a colleague of Alexander Lippisch and Hanno Fischer, was founded on a fundamental knowledge of wing-in-ground-effect physics, as well as on the results of fundamental tests under different conditions and designs that had begun in 1960. Over more than 30 years, Jörg built and tested 15 different tandem-airfoil flairboats in different sizes and made of different materials. The following tandem-airfoil flairboat (TAF) types had been built after a previous period of nearly 10 years of research and development: TAB VII-3: First manned tandem WIG type Jörg, built at the Technical University of Darmstadt, Akaflieg TAF VII-5: Second manned tandem-airfoil flairboat, 2-seater made of wood TAF VIII-1: 2-seater tandem-airfoil flairboat built of glass-reinforced plastic (GRP) and aluminium. A small series of 6 flairboats had been produced by the former Botec Company TAF VIII-2: 4-seater tandem-airfoil flairboat built of full aluminium (2 units) and built of GRP (3 units) TAF VIII-3: 8-seater tandem-airfoil flairboat built of aluminium combined with GRP parts TAF VIII-4: 12-seater tandem-airfoil flairboat built of aluminium combined with GRP parts TAF VIII-3B: 6-seater tandem-airfoil flairboat of carbon fibre composite construction Bigger concepts are: 25-seater, 32-seater, 60-seater, 80-seater and bigger, up to the size of a passenger airplane. 1980-1999 Since the 1980s GEVs have been primarily smaller craft designed for the recreational and civilian ferry markets. Germany, Russia and the United States have provided most of the activity, with some development in Australia, China, Japan, Korea and Taiwan. In these countries and regions, small craft with up to ten seats have been built. Other larger designs such as ferries and heavy transports have been proposed but have not been carried to completion. Besides the development of appropriate design and structural configuration, automatic control and navigation systems have been developed. These include altimeters with high accuracy for low-altitude flight and lesser dependence on weather conditions. "Phase radio altimeters" have become the choice for such applications, beating laser, isotropic or ultrasonic altimeters. With Russian consultation, the United States Defense Advanced Research Projects Agency (DARPA) studied the Aerocon Dash 1.6 wingship. Universal Hovercraft developed a flying hovercraft, first flying a prototype in 1996. Since 1999, the company has offered plans, parts, kits and manufactured ground-effect hovercraft called the Hoverwing. 2000-2019 Iran deployed three squadrons of Bavar 2 two-seat GEVs in September 2010. This GEV carries one machine gun and surveillance gear, and incorporates features to reduce its radar signature. In October 2014, satellite images showed the GEV in a shipyard in southern Iran. The GEV has two engines and no armament. In Singapore, Wigetworks obtained certification from Lloyd's Register for entry into class. On 31 March 2011, AirFish 8-001 became one of the first GEVs to be flagged with the Singapore Registry of Ships, one of the largest ship registries. Wigetworks partnered with the National University of Singapore's Engineering Department to develop higher capacity GEVs. Burt Rutan in 2011 and Korolev in 2015 showed GEV projects. In Korea, Wing Ship Technology Corporation developed and tested a 50-seat passenger GEV named the WSH-500
in 2013. Estonian transport company Sea Wolf Express planned to launch passenger service in 2019 between Helsinki and Tallinn, a distance of 87 km taking only half an hour, using a Russian-built ekranoplan. The company ordered 15 ekranoplans with a maximum speed of 185 km/h and a capacity of 12 passengers, built by the Russian RDC Aqualines. 2020- In 2021 Brittany Ferries announced that they were looking into using REGENT (Regional Electric Ground Effect Naval Transport) ground-effect craft "seagliders" for cross-English Channel services. Southern Airways Express also placed firm orders for seagliders with intent to operate them along Florida's east coast. Around mid-2022, the US Defense Advanced Research Projects Agency (DARPA) launched its Liberty Lifter project, with the goal of creating a low-cost seaplane that would use the ground effect to extend its range. The program aims to carry 90 tons over long distances and operate at sea without ground-based maintenance, all using low-cost materials. In May 2024, Ocean Glider announced a deal with UK-based investor MONTE to finance $145m of a $700m order to begin operating 25 REGENT seagliders between destinations in New Zealand. The order includes 15 12-seater Viceroys and 10 100-seater Monarchs. See also Aerodynamically alleviated marine vehicle Flying Platform Ground effect (aerodynamics) Ground-effect train Hovercraft List of ground-effect vehicles Surface effect ship Caspian Sea Monster Footnotes Notes Citations Bibliography External links Amphibious vehicles Aircraft configurations Ekranoplan Soviet inventions
Ground-effect vehicle
[ "Engineering" ]
4,633
[ "Aircraft configurations", "Aerospace engineering" ]
1,014,518
https://en.wikipedia.org/wiki/Volatile%20organic%20compound
Volatile organic compounds (VOCs) are organic compounds that have a high vapor pressure at room temperature. They are common and exist in a variety of settings and products, including but not limited to house mold, upholstered furniture, arts and crafts supplies, dry-cleaned clothing, and cleaning supplies. VOCs are responsible for the odor of scents and perfumes as well as pollutants. They play an important role in communication between animals and plants, such as attractants for pollinators, protection from predation, and even inter-plant interactions. Some VOCs are dangerous to human health or cause harm to the environment, often despite the odor being perceived as pleasant, such as "new car smell". Anthropogenic VOCs are regulated by law, especially indoors, where concentrations are the highest. Most VOCs are not acutely toxic, but may have long-term chronic health effects. Some VOCs have been used in pharmaceutical settings, while others are the target of administrative controls because of their recreational use. The high vapor pressure of VOCs correlates with a low boiling point, which relates to the number of the sample's molecules in the surrounding air, a trait known as volatility. Definitions Diverse definitions of the term VOC are in use. Some examples are presented below. Canada Health Canada classifies VOCs as organic compounds that have boiling points roughly in the range of 50 to 250 °C. The emphasis is placed on commonly encountered VOCs that would have an effect on air quality. European Union The European Union defines a VOC as "any organic compound as well as the fraction of creosote, having at 293.15 K a vapour pressure of 0.01 kPa or more, or having a corresponding volatility under the particular conditions of use". The VOC Solvents Emissions Directive was the main policy instrument for the reduction of industrial emissions of volatile organic compounds (VOCs) in the European Union. It covers a wide range of solvent-using activities, e.g. printing, surface cleaning, vehicle coating, dry cleaning and manufacture of footwear and pharmaceutical products. The VOC Solvents Emissions Directive requires installations in which such activities are applied to comply either with the emission limit values set out in the Directive or with the requirements of the so-called reduction scheme. Article 13 of the Paints Directive, approved in 2004, amended the original VOC Solvents Emissions Directive and limits the use of organic solvents in decorative paints and varnishes and in vehicle finishing products. The Paints Directive sets out maximum VOC content limit values for paints and varnishes in certain applications. The Solvents Emissions Directive was replaced by the Industrial Emissions Directive from 2013. China The People's Republic of China defines VOCs as those compounds that "originated from automobiles, industrial production and civilian use, burning of all types of fuels, storage and transportation of oils, fitment finish, coating for furniture and machines, cooking oil fume and fine particles (PM 2.5)", and similar sources. The Three-Year Action Plan for Winning the Blue Sky Defence War released by the State Council in July 2018 creates an action plan to reduce 2015 VOC emissions by 10% by 2020. India The Central Pollution Control Board of India released the Air (Prevention and Control of Pollution) Act in 1981, amended in 1987, to address concerns about air pollution in India.
While the document does not differentiate between VOCs and other air pollutants, the CPCB monitors "oxides of nitrogen (NOx), sulphur dioxide (SO2), fine particulate matter (PM10) and suspended particulate matter (SPM)". United States The definitions of VOCs used for control of precursors of photochemical smog by the U.S. Environmental Protection Agency (EPA) and state agencies in the US with independent outdoor air pollution regulations include exemptions for VOCs that are determined to be non-reactive, or of low reactivity, in the smog formation process. Prominent is the VOC regulation issued by the South Coast Air Quality Management District in California and by the California Air Resources Board (CARB). However, this specific use of the term VOCs can be misleading, especially when applied to indoor air quality, because many chemicals that are not regulated as outdoor air pollution can still be important for indoor air pollution. Following a public hearing in September 1995, California's ARB uses the term "reactive organic gases" (ROG) to measure organic gases. The CARB revised the definition of "volatile organic compounds" used in their consumer products regulations, based on the committee's findings. In addition to drinking water, VOCs are regulated in pollutant discharges to surface waters (both directly and via sewage treatment plants) as hazardous waste, but not in non-industrial indoor air. The Occupational Safety and Health Administration (OSHA) regulates VOC exposure in the workplace. Volatile organic compounds that are classified as hazardous materials are regulated by the Pipeline and Hazardous Materials Safety Administration while being transported. Biologically generated VOCs Most VOCs in Earth's atmosphere are biogenic, largely emitted by plants. Biogenic volatile organic compounds (BVOCs) encompass VOCs emitted by plants, animals, or microorganisms, and while extremely diverse, are most commonly terpenoids, alcohols, and carbonyls (methane and carbon monoxide are generally not considered). Not counting methane, biological sources emit an estimated 760 teragrams of carbon per year in the form of VOCs. The majority of VOCs are produced by plants, the main compound being isoprene. Small amounts of VOCs are produced by animals and microbes. Many VOCs are considered secondary metabolites, which often help organisms in defense, such as plant defense against herbivory. The strong odor emitted by many plants consists of green leaf volatiles, a subset of VOCs. Although intended for nearby organisms to detect and respond to, these volatiles can be detected and communicated through wireless electronic transmission, by embedding nanosensors and infrared transmitters into the plant materials themselves. Emissions are affected by a variety of factors, such as temperature, which determines rates of volatilization and growth, and sunlight, which determines rates of biosynthesis. Emission occurs almost exclusively from the leaves, the stomata in particular. VOCs emitted by terrestrial forests are often oxidized by hydroxyl radicals in the atmosphere; in the absence of NOx pollutants, VOC photochemistry recycles hydroxyl radicals to create a sustainable biosphere–atmosphere balance. Due to recent climate change developments, such as warming and greater UV radiation, BVOC emissions from plants are generally predicted to increase, thus upsetting the biosphere–atmosphere interaction and damaging major ecosystems. A major class of VOCs is the terpene class of compounds, such as myrcene.
Providing a sense of scale, a forest the size of the U.S. state of Pennsylvania is estimated to emit large quantities of terpenes on a typical August day during the growing season. Maize produces the VOC (Z)-3-hexen-1-ol and other plant hormones. Anthropogenic sources Anthropogenic sources emit about 142 teragrams (1.42 × 10¹¹ kg, or 142 billion kg) of carbon per year in the form of VOCs. The major sources of man-made VOCs are: Fossil fuel use and production, e.g. incompletely combusted fossil fuels or unintended evaporation of fuels. The most prevalent VOC is ethane, a relatively inert compound. Solvents used in coatings, paints, and inks. Approximately 12 billion litres of paint are produced annually. Typical solvents include aliphatic hydrocarbons, ethyl acetate, glycol ethers and acetone. Motivated by cost, environmental concerns, and regulation, the paint and coating industries are increasingly shifting toward aqueous solvents. Compressed aerosol products, mainly butane and propane, estimated to contribute 1.3 million tonnes of VOC emissions per year globally. Biofuel use, e.g., cooking oils in Asia and bioethanol in Brazil. Biomass combustion, especially from rain forests. Although combustion principally releases carbon dioxide and water, incomplete combustion affords a variety of VOCs. Indoor VOCs Because of their numerous indoor sources, concentrations of VOCs are consistently higher indoors (up to ten times higher) than outdoors. VOCs are emitted by thousands of indoor products. Examples include: paints, varnishes, waxes and lacquers, paint strippers, cleaning and personal care products, pesticides, building materials and furnishings, office equipment such as copiers and printers, correction fluids and carbonless copy paper, graphics and craft materials including glues and adhesives, permanent markers, and photographic solutions. Human activities such as cooking and cleaning can also emit VOCs. Cooking can release long-chain aldehydes and alkanes when oil is heated, and terpenes can be released when spices are prepared and/or cooked. Cleaning products contain a range of VOCs, including monoterpenes, sesquiterpenes, alcohols and esters. Once released into the air, VOCs can undergo reactions with ozone and hydroxyl radicals to produce other VOCs, such as formaldehyde. Some VOCs are emitted directly indoors, and some are formed through these subsequent chemical reactions. The total concentration of all VOCs (TVOC) indoors can be up to five times higher than that of outdoor levels. New buildings experience particularly high levels of VOC off-gassing indoors because of the abundant new materials (building materials, fittings, surface coverings and treatments such as glues, paints and sealants) exposed to the indoor air, emitting multiple VOC gases. This off-gassing has a multi-exponential decay trend that is discernible over at least two years, with the most volatile compounds decaying with a time-constant of a few days, and the least volatile compounds decaying with a time-constant of a few years. New buildings may require intensive ventilation for the first few months, or a bake-out treatment. Existing buildings may be replenished with new VOC sources, such as new furniture, consumer products, and redecoration of indoor surfaces, all of which lead to a continuous background emission of TVOCs, requiring improved ventilation. There are strong seasonal variations in indoor VOC emissions, with emission rates increasing in summer.
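A minimal sketch of the multi-exponential off-gassing decay described above; the two emission pools and their time constants are illustrative assumptions, not measured values:

```python
import math

def tvoc_offgassing(t_days, pools):
    """Sum of exponentially decaying emission pools.

    pools: list of (initial_emission_rate, time_constant_days) tuples.
    Returns the combined emission rate at time t_days.
    """
    return sum(e0 * math.exp(-t_days / tau) for e0, tau in pools)

# Hypothetical pools: a fast pool (volatile compounds, tau ~ 5 days) and a
# slow pool (semi-volatile compounds, tau ~ 2 years); units are arbitrary.
pools = [(100.0, 5.0), (20.0, 730.0)]
for t in (0, 7, 30, 365, 730):
    print(f"day {t:4d}: emission rate = {tvoc_offgassing(t, pools):7.2f}")
```

The fast pool dominates the first weeks, after which the slow pool sets the long background emission, matching the two-year decay trend described above.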
The seasonal variation is largely due to the rate of diffusion of VOC species through materials to the surface increasing with temperature. This leads to generally higher concentrations of TVOCs indoors in summer. Indoor air-quality measurements Measurement of VOCs from the indoor air is done with sorption tubes, e.g. Tenax (for VOCs and SVOCs) or DNPH cartridges (for carbonyl compounds), or with air detectors. The VOCs adsorb on these materials and are afterwards desorbed either thermally (Tenax) or by elution (DNPH) and then analyzed by GC–MS/FID or HPLC. Reference gas mixtures are required for quality control of these VOC measurements. Furthermore, VOC-emitting products used indoors, e.g. building products and furniture, are investigated in emission test chambers under controlled climatic conditions. For quality control of these measurements, round robin tests are carried out; these ideally require reproducibly emitting reference materials. Other methods have used proprietary Silcosteel-coated canisters with constant flow inlets to collect samples over several days. These methods are not limited by the adsorbing properties of materials like Tenax. Regulation of indoor VOC emissions In most countries, a separate definition of VOCs is used with regard to indoor air quality that comprises each organic chemical compound that can be measured as follows: adsorption from air on Tenax TA, thermal desorption, gas chromatographic separation over a 100% nonpolar column (dimethylpolysiloxane). VOCs (volatile organic compounds) are all compounds that appear in the gas chromatogram between and including n-hexane and n-hexadecane. Compounds appearing earlier are called VVOCs (very volatile organic compounds); compounds appearing later are called SVOCs (semi-volatile organic compounds). France, Germany (AgBB/DIBt), Belgium, Norway (TEK regulation) and Italy (CAM Edilizia) have enacted regulations to limit VOC emissions from commercial products. European industry has developed numerous voluntary ecolabels and rating systems, such as EMICODE, M1, Blue Angel, GuT (textile floor coverings), Nordic Swan Ecolabel, EU Ecolabel, and Indoor Air Comfort. In the United States, several standards exist; California Standard CDPH Section 01350 is the most common one. These regulations and standards changed the marketplace, leading to an increasing number of low-emitting products. Health risks Respiratory, allergic, or immune effects in infants or children are associated with man-made VOCs and other indoor or outdoor air pollutants. Some VOCs, such as styrene and limonene, can react with nitrogen oxides or with ozone to produce new oxidation products and secondary aerosols, which can cause sensory irritation symptoms. VOCs contribute to the formation of tropospheric ozone and smog. Health effects include eye, nose, and throat irritation; headaches, loss of coordination, nausea, hearing disorders and damage to the liver, kidney, and central nervous system. Some VOCs are suspected or known to cause cancer in humans. Key signs or symptoms associated with exposure to VOCs include conjunctival irritation, nose and throat discomfort, headache, allergic skin reaction, dyspnea, declines in serum cholinesterase levels, nausea, vomiting, nose bleeding, fatigue, and dizziness. The ability of organic chemicals to cause health effects varies greatly, from those that are highly toxic to those with no known health effects.
As with other pollutants, the extent and nature of the health effect will depend on many factors, including level of exposure and length of time exposed. Eye and respiratory tract irritation, headaches, dizziness, visual disorders, and memory impairment are among the immediate symptoms that some people have experienced soon after exposure to some organics. At present, not much is known about what health effects occur from the levels of organics usually found in homes. Ingestion While negligible in comparison to the concentrations found in indoor air, benzene, toluene, and methyl tert-butyl ether (MTBE) have been found in samples of human milk, adding to the concentrations of VOCs that people are exposed to throughout the day. A study notes the difference between VOCs in alveolar breath and inspired air, suggesting that VOCs are ingested, metabolized, and excreted via the extra-pulmonary pathway. VOCs are also ingested by drinking water in varying concentrations. Some VOC concentrations exceed the limits in the EPA's National Primary Drinking Water Regulations and China's National Drinking Water Standards set by the Ministry of Ecology and Environment. Dermal absorption The presence of VOCs in the air and in groundwater has prompted further study. Several studies have been performed to measure the effects of dermal absorption of specific VOCs. Dermal exposure to VOCs like formaldehyde and toluene downregulates antimicrobial peptides on the skin such as cathelicidin LL-37 and human β-defensins 2 and 3. Xylene and formaldehyde worsen allergic inflammation in animal models. Toluene also increases the dysregulation of filaggrin, a key protein in dermal regulation; this was shown in human skin samples by immunofluorescence confirming protein loss and by western blotting confirming mRNA loss. Toluene exposure also decreased water retention in the trans-epidermal layer, leaving the skin's layers more vulnerable. Limit values for VOC emissions Limit values for VOC emissions into indoor air are published by AgBB, AFSSET, California Department of Public Health, and others. These regulations have prompted several companies in the paint and adhesive industries to adapt their products by reducing VOC levels. VOC labels and certification programs may not properly assess all of the VOCs emitted from the product, including some chemical compounds that may be relevant for indoor air quality. Each ounce of colorant added to tint paint may contain between 5 and 20 grams of VOCs. A dark color, however, could require 5–15 ounces of colorant, adding up to 300 or more grams of VOCs per gallon of paint. VOCs in healthcare settings VOCs are also found in hospital and health care environments. In these settings, these chemicals are widely used for cleaning, disinfection, and hygiene of the different areas. Thus, health professionals such as nurses, doctors, and sanitation staff may present with adverse health effects such as asthma; however, further evaluation is required to determine the exact levels and determinants that influence the exposure to these compounds. Concentration levels of individual VOCs such as halogenated and aromatic hydrocarbons vary substantially between areas of the same hospital. Generally, ethanol, isopropanol, ether, and acetone are the main compounds in the interior of the site.
Following the same line, in a study conducted in the United States, it was established that nursing assistants are the most exposed to compounds such as ethanol, while medical equipment preparers are most exposed to 2-propanol. In relation to exposure to VOCs by cleaning and hygiene personnel, a study conducted in 4 hospitals in the United States established that sterilization and disinfection workers are linked to exposures to d-limonene and 2-propanol, while those responsible for cleaning with chlorine-containing products are more likely to have higher levels of exposure to α-pinene and chloroform. Those who perform floor and other surface cleaning tasks (e.g., floor waxing) and who use quaternary ammonium, alcohol, and chlorine-based products are associated with a higher VOC exposure than the two previous groups; that is, they are particularly linked to exposure to acetone, chloroform, α-pinene, 2-propanol or d-limonene. Other healthcare environments such as nursing and aged care homes have rarely been a subject of study, even though the elderly and vulnerable populations may spend considerable time in these indoor settings, where they might be exposed to VOCs derived from the common use of cleaning agents, sprays and fresheners. In one study, more than 200 chemicals were identified, of which 41 have adverse health effects, 37 of them being VOCs. The health effects include skin sensitization, reproductive and organ-specific toxicity, carcinogenicity, mutagenicity, and endocrine-disrupting properties. Furthermore, another study found a significant association between breathlessness in the elderly population and elevated exposure to VOCs such as toluene and o-xylene, unlike in the remainder of the population. VOCs in hospitality and retail Workers in hospitality are also exposed to VOCs from a variety of sources including cleaning products (air fresheners, floor cleaners, disinfectants, etc.), building materials and furnishings, as well as fragrances. Among the most common VOCs found in hospitality settings are alkanes, which are a major ingredient in cleaning products (35%). Other products present in hospitality that contain alkanes are laundry detergents, paints, and lubricants. Housekeepers in particular may also be exposed to formaldehyde, which is present in some fabrics used to make towels and bedding; however, exposure decreases after several washes. Some hotels still use bleach to clean, and this bleach can form chloroform and carbon tetrachloride. Fragrances are often used in hotels and are composed of many different chemicals. There are many negative health outcomes associated with VOC exposure in hospitality. VOCs present in cleaning supplies can cause skin, eye, nose, and throat irritation, which can develop into dermatitis. VOCs in cleaning supplies can also cause more serious conditions, such as respiratory diseases and cancer. One study found that n-nonane and formaldehyde were the main drivers of eye and upper respiratory tract irritation, while cancer risks were driven by chloroform and formaldehyde. Some solvent-based products have also been shown to cause damage to the kidneys and reproductive organs. One study showed that the star rating of the hotel may influence VOC exposure, as hotels with lower star ratings tend to have lower quality materials for the furnishings.
Additionally, due to a movement among higher-end hotels to be more environmentally friendly, there has been a shift to using less harsh cleaning agents. Another similar environment that exposes workers to VOCs are retail spaces. Studies have shown that retail spaces have the highest VOC concentrations compared to all other indoor spaces such as residences, offices, and vehicles. The concentration of VOCs present as well as the types depend on the type of store, but common sources of VOCs in retail spaces include motor vehicle exhaust, building materials, cleaning products, the products sold, and fragrances. One study found that VOC concentrations were higher in retail storage spaces compared to the sales areas, particularly formaldehyde. In retail spaces, formaldehyde concentrations ranged from 8.0 to 19.4 µg m⁻³, compared to 14.2 to 45.0 µg m⁻³ in storage spaces. Occupational exposure to VOCs also depends on the task. One study found that workers were exposed to peak total VOC concentrations when removing the plastic film from new products. This peak was 7 times higher than the total VOC concentration peaks of all other tasks, contributing greatly to retail workers' exposure to VOCs despite being a relatively short task. One way that VOC concentrations can be kept minimal within retail and hospitality is by ensuring there is proper air ventilation. Employers can ensure proper ventilation by placing furniture in a way that enhances air circulation, as well as checking that the HVAC (heating, ventilation, and air conditioning) system is working properly to remove pollutants from the air. Workers can make sure that air vents are not blocked. Analytical methods Sampling Obtaining samples for analysis is challenging. VOCs, even when at dangerous levels, are dilute, so preconcentration is typically required. Many components of the atmosphere are mutually incompatible, e.g. ozone and organic compounds, peroxyacyl nitrates and many organic compounds. Furthermore, collection of VOCs by condensation in cold traps also accumulates a large amount of water, which generally must be removed selectively, depending on the analytical techniques to be employed. Solid-phase microextraction (SPME) techniques are used to collect VOCs at low concentrations for analysis. As applied to breath analysis, the following modalities are employed for sampling: gas sampling bags, syringes, evacuated steel and glass containers. Principle and measurement methods In the U.S., standard methods have been established by the National Institute for Occupational Safety and Health (NIOSH) and another by U.S. OSHA. Each method uses a single-component solvent; butanol and hexane cannot, however, be sampled on the same sample matrix using the NIOSH or OSHA method. VOCs are quantified and identified by two broad techniques. The major technique is gas chromatography (GC). GC instruments allow the separation of gaseous components. When coupled to a flame ionization detector (FID), GCs can detect hydrocarbons at parts-per-trillion levels. Using electron capture detectors, GCs are also effective for organohalides such as chlorocarbons. The second major technique associated with VOC analysis is mass spectrometry, which is usually coupled with GC, giving the hyphenated technique of GC-MS. Direct-injection mass spectrometry techniques are frequently utilized for the rapid detection and accurate quantification of VOCs. PTR-MS is among the methods that have been used most extensively for the on-line analysis of biogenic and anthropogenic VOCs.
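Instruments such as PTR-MS report abundances as volume mixing ratios (ppbv, pptv); converting these to the mass concentrations (µg m⁻³) quoted earlier uses the ideal-gas molar volume. A minimal sketch, assuming 25 °C and 1 atm (molar volume ≈ 24.45 L/mol):

```python
def ppb_to_ugm3(ppb, molar_mass_g_mol, molar_volume_l=24.45):
    """Convert a volume mixing ratio in ppbv to µg/m3 (ideal gas).

    At 25 °C and 1 atm, one mole of gas occupies ~24.45 L, so
    1 ppbv corresponds to M/24.45 µg/m3 for molar mass M in g/mol.
    """
    return ppb * molar_mass_g_mol / molar_volume_l

# Formaldehyde (HCHO, M = 30.03 g/mol): 10 ppbv is roughly 12.3 µg/m3
print(round(ppb_to_ugm3(10.0, 30.03), 1))
```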
PTR-MS instruments based on time-of-flight mass spectrometry have been reported to reach detection limits of 20 pptv after 100 ms and 750 ppqv after 1 min of measurement (signal integration) time. The mass resolution of these devices is between 7000 and 10,500 m/Δm, thus it is possible to separate most common isobaric VOCs and quantify them independently. Chemical fingerprinting and breath analysis The exhaled human breath contains a few thousand volatile organic compounds and is used in breath biopsy, with VOCs serving as biomarkers to test for diseases such as lung cancer. One study has shown that "volatile organic compounds ... are mainly blood borne and therefore enable monitoring of different processes in the body." It appears that VOC compounds in the body "may be either produced by metabolic processes or inhaled/absorbed from exogenous sources" such as environmental tobacco smoke. Chemical fingerprinting and breath analysis of volatile organic compounds has also been demonstrated with chemical sensor arrays, which utilize pattern recognition for detection of component volatile organics in complex mixtures such as breath gas. Metrology for VOC measurements To achieve comparability of VOC measurements, reference standards traceable to SI units are required. For a number of VOCs, gaseous reference standards are available from specialty gas suppliers or national metrology institutes, either in the form of cylinders or dynamic generation methods. However, for many VOCs, such as oxygenated VOCs, monoterpenes, or formaldehyde, no standards are available at the appropriate amount-of-substance fraction due to the chemical reactivity or adsorption of these molecules. Currently, several national metrology institutes are working on the missing standard gas mixtures at trace-level concentration, minimising adsorption processes and improving the zero gas. The ultimate aims are for the traceability and the long-term stability of the standard gases to be in accordance with the data quality objectives (DQO, maximum uncertainty of 20% in this case) required by the WMO/GAW program. See also Aroma compound Criteria air contaminants Fugitive emission Non-methane volatile organic compound Organic compound Trichloroethylene Vapor intrusion VOC contamination of groundwater Volatile Organic Compounds Protocol References External links Volatile Organic Compounds (VOCs) web site of the Chemicals Control Branch of Environment Canada EPA New England: Ground-level Ozone (Smog) Information VOC emissions and calculations Examples of product labels with low VOC emission criteria KEY-VOCS: Metrology for VOC indicators in air pollution and climate change, a European Metrology Research Project. VOCs in Paints Chemical Safety in the Workplace, by the US National Institute for Occupational Safety and Health Building biology Organic compounds Pollutants Smog Flavors Perfumes Pollution Chemical hazards Indoor air pollution
Volatile organic compound
[ "Physics", "Chemistry", "Engineering" ]
5,649
[ "Visibility", "Physical quantities", "Building engineering", "Smog", "Chemical hazards", "Organic compounds", "Building biology" ]
1,014,694
https://en.wikipedia.org/wiki/Real%20projective%20space
In mathematics, real projective space, denoted RPn or Pn(R), is the topological space of lines passing through the origin 0 in the real space Rn+1. It is a compact, smooth manifold of dimension n, and is a special case Gr(1, Rn+1) of a Grassmannian space. Basic properties Construction As with all projective spaces, RPn is formed by taking the quotient of Rn+1 ∖ {0} under the equivalence relation x ∼ λx for all real numbers λ ≠ 0. For all x in Rn+1 ∖ {0} one can always find a λ such that λx has norm 1. There are precisely two such λ, differing by sign. Thus RPn can also be formed by identifying antipodal points of the unit n-sphere, Sn, in Rn+1. One can further restrict to the upper hemisphere of Sn and merely identify antipodal points on the bounding equator. This shows that RPn is also equivalent to the closed n-dimensional disk, Dn, with antipodal points on the boundary, ∂Dn = Sn−1, identified. Low-dimensional examples RP1 is called the real projective line, which is topologically equivalent to a circle. RP2 is called the real projective plane. This space cannot be embedded in R3. It can however be embedded in R4 and can be immersed in R3. The questions of embeddability and immersibility for projective n-space have been well-studied. RP3 is diffeomorphic to SO(3), hence admits a group structure; the covering map S3 → RP3 is a map of groups Spin(3) → SO(3), where Spin(3) is a Lie group that is the universal cover of SO(3). Topology The antipodal map on the n-sphere (the map sending x to −x) generates a Z2 group action on Sn. As mentioned above, the orbit space for this action is RPn. This action is actually a covering space action, giving Sn as a double cover of RPn. Since Sn is simply connected for n ≥ 2, it also serves as the universal cover in these cases. It follows that the fundamental group of RPn is Z2 when n > 1. (When n = 1 the fundamental group is Z due to the homeomorphism with S1). A generator for the fundamental group is the closed curve obtained by projecting any curve connecting antipodal points in Sn down to RPn. The projective n-space is compact, connected, and has a fundamental group isomorphic to the cyclic group of order 2: its universal covering space is given by the antipody quotient map from the n-sphere, a simply connected space. It is a double cover. The antipode map on Rp has sign (−1)p, so it is orientation-preserving if and only if p is even. The orientation character is thus: the non-trivial loop in π1(RPn) acts as (−1)n+1 on orientation, so RPn is orientable if and only if n + 1 is even, i.e., n is odd. The projective n-space is in fact diffeomorphic to the submanifold of R(n+1)² consisting of all symmetric (n + 1) × (n + 1) matrices of trace 1 that are also idempotent linear transformations. Geometry of real projective spaces Real projective space admits a constant positive scalar curvature metric, coming from the double cover by the standard round sphere (the antipodal map is locally an isometry). For the standard round metric, this has sectional curvature identically 1. In the standard round metric, the measure of projective space is exactly half the measure of the sphere. Smooth structure Real projective spaces are smooth manifolds. On Sn, in homogeneous coordinates, (x1, ..., xn+1), consider the subset Ui with xi ≠ 0. Each Ui is homeomorphic to the disjoint union of two open unit balls in Rn that map to the same subset of RPn, and the coordinate transition functions are smooth. This gives RPn a smooth structure. Structure as a CW complex Real projective space RPn admits the structure of a CW complex with 1 cell in every dimension. In homogeneous coordinates (x1 : ... : xn+1) on Sn, the coordinate neighborhood U1 = {(x1 : ... : xn+1) | x1 ≠ 0} can be identified with the interior of the n-disk Dn.
When x1 = 0, one has RPn−1. Therefore the (n−1)-skeleton of RPn is RPn−1, and the attaching map f : Sn−1 → RPn−1 is the 2-to-1 covering map. One can put RPn = RPn−1 ∪f Dn. Induction shows that RPn is a CW complex with 1 cell in every dimension up to n. The cells are Schubert cells, as on the flag manifold. That is, take a complete flag (say the standard flag) 0 = V0 < V1 < ... < Vn; then the closed k-cell is lines that lie in Vk. Also the open k-cell (the interior of the k-cell) is lines in Vk ∖ Vk−1 (lines in Vk but not Vk−1). In homogeneous coordinates (with respect to the flag), the cells are [∗ : 0 : 0 : ... : 0], [∗ : ∗ : 0 : ... : 0], ..., [∗ : ∗ : ∗ : ... : ∗]. This is not a regular CW structure, as the attaching maps are 2-to-1. However, its cover is a regular CW structure on the sphere, with 2 cells in every dimension; indeed, the minimal regular CW structure on the sphere. In light of the smooth structure, the existence of a Morse function would show RPn is a CW complex. One such function is given by, in homogeneous coordinates (taking a representative with unit norm), g(x1 : ... : xn+1) = Σi i·xi². On each neighborhood Ui, g has nondegenerate critical point (0, ..., 1, ..., 0), where the 1 occurs in the i-th position, with Morse index i − 1. This shows RPn is a CW complex with 1 cell in every dimension. Tautological bundles Real projective space has a natural line bundle over it, called the tautological bundle. More precisely, this is called the tautological subbundle, and there is also a dual n-dimensional bundle called the tautological quotient bundle. Algebraic topology of real projective spaces Homotopy groups The higher homotopy groups of RPn are exactly the higher homotopy groups of Sn, via the long exact sequence on homotopy associated to a fibration. Explicitly, the fiber bundle is Z2 → Sn → RPn. One might also write this as S0 → Sn → RPn or O(1) → Sn → RPn, by analogy with complex projective space. The homotopy groups are: πk(RPn) trivial for k = 0; Z for k = 1, n = 1; Z2 for k = 1, n > 1; and πk(Sn) for k > 1. Homology The cellular chain complex associated to the above CW structure has 1 cell in each dimension 0, ..., n. For each dimension k, the boundary map dk : ∂Dk → RPk−1/RPk−2 is the map that collapses the equator on Sk−1 and then identifies antipodal points. In odd (resp. even) dimensions, this has degree 0 (resp. 2). Thus the integral homology is Hk(RPn) = Z for k = 0 and for k = n odd; Z2 for k odd with 0 < k < n; and 0 otherwise. RPn is orientable if and only if n is odd, as the above homology calculation shows. Infinite real projective space The infinite real projective space is constructed as the direct limit or union of the finite projective spaces: RP∞ = ∪n RPn. This space is the classifying space of O(1), the first orthogonal group. The double cover of this space is the infinite sphere S∞, which is contractible. The infinite projective space is therefore the Eilenberg–MacLane space K(Z2, 1). For each nonnegative integer q, the modulo 2 homology group Hq(RP∞; Z2) = Z2. Its cohomology ring modulo 2 is H∗(RP∞; Z2) = Z2[w1], where w1 is the first Stiefel–Whitney class: it is the free Z2-algebra on w1, which has degree 1. See also Complex projective space Quaternionic projective space Lens space Real projective plane Notes References Bredon, Glen. Topology and geometry, Graduate Texts in Mathematics, Springer Verlag 1993, 1996 Algebraic topology Differential geometry Projective geometry
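The cellular homology computation above can be checked mechanically, since the boundary map is multiplication by 0 in odd degrees and by 2 in even degrees. A minimal sketch, which is just a numerical restatement of the degrees given above (not part of the original article):

```python
def rp_homology(n):
    """Integral homology groups H_k(RP^n) from the cellular chain complex.

    The chain complex is Z in each degree 0..n, with boundary map
    d_k = multiplication by 0 (k odd) or by 2 (k even).
    H_k = ker(d_k) / im(d_{k+1}).
    """
    def d(k):  # boundary scalar C_k -> C_{k-1}, defined for 1 <= k <= n
        return 0 if k % 2 == 1 else 2

    groups = {}
    for k in range(n + 1):
        has_kernel = (k == 0 or d(k) == 0)     # kernel is Z iff d_k = 0
        im = d(k + 1) if k + 1 <= n else 0     # image is d_{k+1} * Z
        if not has_kernel:
            groups[k] = "0"                    # d_k injective: no cycles
        elif im == 0:
            groups[k] = "Z"                    # cycles Z, no boundaries
        else:
            groups[k] = "Z/2"                  # Z / 2Z
    return groups

print(rp_homology(3))  # {0: 'Z', 1: 'Z/2', 2: '0', 3: 'Z'}
print(rp_homology(4))  # {0: 'Z', 1: 'Z/2', 2: '0', 3: 'Z/2', 4: '0'}
```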
Real projective space
[ "Mathematics" ]
1,530
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
1,014,906
https://en.wikipedia.org/wiki/Cyclomatic%20complexity
Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976. Cyclomatic complexity is computed using the control-flow graph of the program. The nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program. One testing strategy, called basis path testing by McCabe who first proposed it, is to test each linearly independent path through the program. In this case, the number of test cases will equal the cyclomatic complexity of the program. Description Definition There are multiple ways to define cyclomatic complexity of a section of source code. One common way is the number of linearly independent paths within it. A set S of paths is linearly independent if the edge set of no path in S is the union of the edge sets of paths in some subset of the rest of S. If the source code contained no control flow statements (conditionals or decision points) the complexity would be 1, since there would be only a single path through the code. If the code had one single-condition IF statement, there would be two paths through the code: one where the IF statement is TRUE and another one where it is FALSE. Here, the complexity would be 2. Two nested single-condition IFs, or one IF with two conditions, would produce a complexity of 3. Another way to define the cyclomatic complexity of a program is to look at its control-flow graph, a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second. The complexity M is then defined as M = E − N + 2P, where E = the number of edges of the graph, N = the number of nodes of the graph, and P = the number of connected components. An alternative formulation of this, as originally proposed, is to use a graph in which each exit point is connected back to the entry point. In this case, the graph is strongly connected. Here, the cyclomatic complexity of the program is equal to the cyclomatic number of its graph (also known as the first Betti number), which is defined as M = E − N + P. This may be seen as calculating the number of linearly independent cycles that exist in the graph: those cycles that do not contain other cycles within themselves. Because each exit point loops back to the entry point, there is at least one such cycle for each exit point. For a single program (or subroutine or method), P always equals 1; a simpler formula for a single subroutine is M = E − N + 2. Cyclomatic complexity may be applied to several such programs or subprograms at the same time (to all of the methods in a class, for example). In these cases, P will equal the number of programs in question, and each subprogram will appear as a disconnected subset of the graph. McCabe showed that the cyclomatic complexity of a structured program with only one entry point and one exit point is equal to the number of decision points ("if" statements or conditional loops) contained in that program plus one. This is true only for decision points counted at the lowest, machine-level instructions. Decisions involving compound predicates like those found in high-level languages like IF cond1 AND cond2 THEN ... should be counted in terms of the predicate variables involved.
In this example, one should count two decision points because at machine level it is equivalent to IF cond1 THEN IF cond2 THEN .... Cyclomatic complexity may be extended to a program with multiple exit points. In this case, it is equal to π − s + 2, where π is the number of decision points in the program and s is the number of exit points. Algebraic topology An even subgraph of a graph (also known as an Eulerian subgraph) is one in which every vertex is incident with an even number of edges. Such subgraphs are unions of cycles and isolated vertices. Subgraphs will be identified with their edge sets, which is equivalent to only considering those even subgraphs which contain all vertices of the full graph. The set of all even subgraphs of a graph is closed under symmetric difference, and may thus be viewed as a vector space over GF(2). This vector space is called the cycle space of the graph. The cyclomatic number of the graph is defined as the dimension of this space. Since GF(2) has two elements and the cycle space is necessarily finite, the cyclomatic number is also equal to the 2-logarithm of the number of elements in the cycle space. A basis for the cycle space is easily constructed by first fixing a spanning forest of the graph, and then considering the cycles formed by one edge not in the forest and the path in the forest connecting the endpoints of that edge. These cycles form a basis for the cycle space. The cyclomatic number also equals the number of edges not in a maximal spanning forest of a graph. Since the number of edges in a maximal spanning forest of a graph is equal to the number of vertices minus the number of components, the formula E − N + P for the cyclomatic number follows. Cyclomatic complexity can also be defined as a relative Betti number, the size of a relative homology group: M := b1(G, t) := rank H1(G, t), which is read as "the rank of the first homology group of the graph G relative to the terminal nodes t". This is a technical way of saying "the number of linearly independent paths through the flow graph from an entry to an exit", where: "linearly independent" corresponds to homology, and backtracking is not double-counted; "paths" corresponds to first homology (a path is a one-dimensional object); and "relative" means the path must begin and end at an entry (or exit) point. This cyclomatic complexity can be calculated. It may also be computed via the absolute Betti number by identifying the terminal nodes on a given component, or drawing paths connecting the exits to the entrance. The new, augmented graph G̃ obtains M = b1(G̃) = rank H1(G̃). It can also be computed via homotopy. If a (connected) control-flow graph is considered a one-dimensional CW complex called X, the fundamental group of X will be a free group on n = E − N + 1 generators. The value of n + 1 is the cyclomatic complexity. The fundamental group counts how many loops there are through the graph up to homotopy, aligning as expected. Interpretation In his presentation "Software Quality Metrics to Identify Risk" for the Department of Homeland Security, Tom McCabe introduced the following categorization of cyclomatic complexity: 1–10: Simple procedure, little risk 11–20: More complex, moderate risk 21–50: Complex, high risk > 50: Untestable code, very high risk Applications Limiting complexity during development One of McCabe's original applications was to limit the complexity of routines during program development. He recommended that programmers should count the complexity of the modules they are developing, and split them into smaller modules whenever the cyclomatic complexity of the module exceeded 10.
This practice was adopted by the NIST Structured Testing methodology, which observed that since McCabe's original publication, the figure of 10 had received substantial corroborating evidence. However, it also noted that in some circumstances it may be appropriate to relax the restriction and permit modules with a complexity as high as 15. As the methodology acknowledged that there were occasional reasons for going beyond the agreed-upon limit, it phrased its recommendation as "For each module, either limit cyclomatic complexity to [the agreed-upon limit] or provide a written explanation of why the limit was exceeded." Measuring the "structuredness" of a program Section VI of McCabe's 1976 paper is concerned with determining what the control-flow graphs (CFGs) of non-structured programs look like in terms of their subgraphs, which McCabe identified. (For details, see structured program theorem.) McCabe concluded that section by proposing a numerical measure of how close to the structured programming ideal a given program is, i.e. its "structuredness". McCabe called the measure he devised for this purpose essential complexity. To calculate this measure, the original CFG is iteratively reduced by identifying subgraphs that have a single-entry and a single-exit point, which are then replaced by a single node. This reduction corresponds to what a human would do if they extracted a subroutine from the larger piece of code. (Nowadays such a process would fall under the umbrella term of refactoring.) McCabe's reduction method was later called condensation in some textbooks, because it was seen as a generalization of the condensation to components used in graph theory. If a program is structured, then McCabe's reduction/condensation process reduces it to a single CFG node. In contrast, if the program is not structured, the iterative process will identify the irreducible part. The essential complexity measure defined by McCabe is simply the cyclomatic complexity of this irreducible graph, so it will be precisely 1 for all structured programs, but greater than one for non-structured programs. Implications for software testing Another application of cyclomatic complexity is in determining the number of test cases that are necessary to achieve thorough test coverage of a particular module. It is useful because of two properties of the cyclomatic complexity, M, for a specific module: M is an upper bound for the number of test cases that are necessary to achieve a complete branch coverage. M is a lower bound for the number of paths through the control-flow graph (CFG). Assuming each test case takes one path, the number of cases needed to achieve path coverage is equal to the number of paths that can actually be taken. But some paths may be impossible, so although the number of paths through the CFG is clearly an upper bound on the number of test cases needed for path coverage, this latter number (of possible paths) is sometimes less than M. All three of the above numbers may be equal: branch coverage ≤ cyclomatic complexity ≤ number of paths. For example, consider a program that consists of two sequential if-then-else statements. if (c1()) f1(); else f2(); if (c2()) f3(); else f4(); In this example, two test cases are sufficient to achieve a complete branch coverage, while four are necessary for complete path coverage. The cyclomatic complexity of the program is 3 (as the strongly connected graph for the program contains 9 edges, 7 nodes, and 1 connected component, giving M = 9 − 7 + 1 = 3).
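A minimal sketch of the graph formula M = E − N + 2P applied to the example above; the node names and edge-list representation are illustrative, not from any particular tool:

```python
def cyclomatic_complexity(edges):
    """M = E - N + 2P for a control-flow graph given as directed edges.

    E = number of edges, N = number of nodes, P = number of weakly
    connected components (found with a small union-find).
    """
    nodes = {v for e in edges for v in e}
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)

    p = len({find(v) for v in nodes})
    return len(edges) - len(nodes) + 2 * p

# The two sequential if-then-else statements above: 8 edges and 7 nodes in
# one component, so M = 8 - 7 + 2 = 3 (equivalently 9 - 7 + 1 = 3 on the
# strongly connected variant with the exit wired back to the entry).
edges = [("entry", "f1"), ("entry", "f2"), ("f1", "mid"), ("f2", "mid"),
         ("mid", "f3"), ("mid", "f4"), ("f3", "exit"), ("f4", "exit")]
print(cyclomatic_complexity(edges))  # 3
```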
In general, in order to fully test a module, all execution paths through the module should be exercised. This implies a module with a high complexity number requires more testing effort than a module with a lower value since the higher complexity number indicates more pathways through the code. This also implies that a module with higher complexity is more difficult to understand since the programmer must understand the different pathways and the results of those pathways. Unfortunately, it is not always practical to test all possible paths through a program. Considering the example above, each time an additional if-then-else statement is added, the number of possible paths grows by a factor of 2. As the program grows in this fashion, it quickly reaches the point where testing all of the paths becomes impractical. One common testing strategy, espoused for example by the NIST Structured Testing methodology, is to use the cyclomatic complexity of a module to determine the number of white-box tests that are required to obtain sufficient coverage of the module. In almost all cases, according to such a methodology, a module should have at least as many tests as its cyclomatic complexity. In most cases, this number of tests is adequate to exercise all the relevant paths of the function. As an example of a function that requires more than mere branch coverage to test accurately, reconsider the above function. However, assume that to avoid a bug occurring, any code that calls either f1() or f3() must also call the other. Assuming that the results of c1() and c2() are independent, the function as presented above contains a bug. Branch coverage allows the method to be tested with just two tests, such as the following test cases: c1() returns true and c2() returns true c1() returns false and c2() returns false Neither of these cases exposes the bug. If, however, we use cyclomatic complexity to indicate the number of tests we require, the number increases to 3. We must therefore test one of the following paths: c1() returns true and c2() returns false c1() returns false and c2() returns true Either of these tests will expose the bug. Correlation to number of defects Multiple studies have investigated the correlation between McCabe's cyclomatic complexity number with the frequency of defects occurring in a function or method. Some studies find a positive correlation between cyclomatic complexity and defects; functions and methods that have the highest complexity tend to also contain the most defects. However, the correlation between cyclomatic complexity and program size (typically measured in lines of code) has been demonstrated many times. Les Hatton has claimed that complexity has the same predictive ability as lines of code. Studies that controlled for program size (i.e., comparing modules that have different complexities but similar size) are generally less conclusive, with many finding no significant correlation, while others do find correlation. Some researchers question the validity of the methods used by the studies finding no correlation. Although this relation likely exists, it is not easily used in practice. Since program size is not a controllable feature of commercial software, the usefulness of McCabe's number has been questioned. The essence of this observation is that larger programs tend to be more complex and to have more defects. Reducing the cyclomatic complexity of code is not proven to reduce the number of errors or bugs in that code. 
International safety standards like ISO 26262, however, mandate coding guidelines that enforce low code complexity. See also Programming complexity Complexity trap Computer program Computer programming Control flow Decision-to-decision path Design predicates Essential complexity (numerical measure of "structuredness") Halstead complexity measures Software engineering Software testing Static program analysis Maintainability Notes References External links Generating cyclomatic complexity metrics with Polyspace The role of empiricism in improving the reliability of future software McCabe's Cyclomatic Complexity and Why We Don't Use It Software metrics
Cyclomatic complexity
[ "Mathematics", "Engineering" ]
3,060
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
1,015,016
https://en.wikipedia.org/wiki/Degenerative%20disease
Degenerative disease is the result of a continuous process based on degenerative cell changes, affecting tissues or organs, which will increasingly deteriorate over time. In neurodegenerative diseases, cells of the central nervous system stop working or die via neurodegeneration. An example of this is Alzheimer's disease. The other two common groups of degenerative diseases are those that affect circulatory system (e.g. coronary artery disease) and neoplastic diseases (e.g. cancers). Many degenerative diseases exist and some are related to aging. Normal bodily wear or lifestyle choices (such as exercise or eating habits) may worsen degenerative diseases, depending on the specific condition. Sometimes the main or partial cause behind such diseases is genetic. Thus some are clearly hereditary like Huntington's disease. Other causes include viruses, poisons or chemical exposures, while sometimes, the underlying cause remains unknown. Some degenerative diseases can be cured. In those that can not, it may be possible to alleviate the symptoms. Examples Alzheimer's disease (AD) Amyotrophic lateral sclerosis (ALS, Lou Gehrig's disease) Cancers Charcot–Marie–Tooth disease (CMT) Chronic traumatic encephalopathy Cystic fibrosis Some cytochrome c oxidase deficiencies (often the cause of degenerative Leigh syndrome) Ehlers–Danlos syndrome Fibrodysplasia ossificans progressiva Friedreich's ataxia Frontotemporal dementia (FTD) Some cardiovascular diseases (e.g. atherosclerotic ones like coronary artery disease, aortic stenosis, congenital defects etc.) Huntington's disease Infantile neuroaxonal dystrophy Keratoconus (KC) Keratoglobus Leukodystrophies Macular degeneration (AMD) Marfan's syndrome (MFS) Some mitochondrial myopathies Mitochondrial DNA depletion syndrome Mueller–Weiss syndrome Multiple sclerosis (MS) Multiple system atrophy Muscular dystrophies (MD) Neuronal ceroid lipofuscinosis Niemann–Pick diseases Osteoarthritis Osteoporosis Parkinson's disease Pulmonary arterial hypertension All prion diseases (Creutzfeldt-Jakob disease, fatal familial insomnia etc.) Progressive supranuclear palsy Retinitis pigmentosa (RP) Rheumatoid arthritis Sandhoff Disease Spinal muscular atrophy (SMA, motor neuron disease) Subacute sclerosing panencephalitis Substance Use Disorder Tay–Sachs disease Vascular dementia (might not itself be neurodegenerative, but often appears alongside other forms of degenerative dementia) See also Life extension Senescence Progressive disease List of genetic disorders References Diseases and disorders Senescence Ageing processes
Degenerative disease
[ "Chemistry", "Biology" ]
611
[ "Senescence", "Ageing processes", "Metabolism", "Cellular processes" ]
1,015,240
https://en.wikipedia.org/wiki/Rollback%20%28data%20management%29
In database technologies, a rollback is an operation which returns the database to some previous state. Rollbacks are important for database integrity, because they mean that the database can be restored to a clean copy even after erroneous operations are performed. They are crucial for recovering from database server crashes; by rolling back any transaction which was active at the time of the crash, the database is restored to a consistent state. The rollback feature is usually implemented with a transaction log, but can also be implemented via multiversion concurrency control. Cascading rollback A cascading rollback occurs in database systems when a transaction (T1) causes a failure and a rollback must be performed. Other transactions dependent on T1's actions must also be rolled back due to T1's failure, thus causing a cascading effect. That is, one transaction's failure causes many to fail. Practical database recovery techniques guarantee cascadeless rollback, therefore a cascading rollback is not a desirable result. Avoiding cascading rollback is the responsibility of the transaction scheduler, which can enforce cascadeless schedules. SQL SQL refers to Structured Query Language, a kind of language used to access, update and manipulate databases. In SQL, ROLLBACK is a command that causes all data changes since the last START TRANSACTION or BEGIN to be discarded by the relational database management system (RDBMS), so that the state of the data is "rolled back" to the way it was before those changes were made. A ROLLBACK statement will also release any existing savepoints that may be in use. In most SQL dialects, ROLLBACKs are connection specific. This means that if two connections are made to the same database, a ROLLBACK made in one connection will not affect any other connections. This is vital for proper concurrency. Usage outside databases Rollbacks are not exclusive to databases: any stateful distributed system may use rollback operations to maintain consistency. Examples of distributed systems that can support rollbacks include message queues and workflow management systems. More generally, any operation that resets a system to its previous state before another operation or series of operations can be viewed as a rollback. See also Savepoint Commit Undo Schema migration Notes References "ROLLBACK Transaction", Microsoft SQL Server. "Sql Commands", MySQL. Database theory Transaction processing Reversible computing Database management systems
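A minimal sketch of connection-level rollback behavior using Python's built-in sqlite3 module; the table and values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()  # establish a clean committed state

try:
    conn.execute("UPDATE accounts SET balance = balance - 500 "
                 "WHERE name = 'alice'")
    # A business rule is violated: abort the whole transaction.
    raise ValueError("insufficient funds")
except ValueError:
    conn.rollback()  # discard all changes since the last commit

# The balance is back to its committed value of 100.
print(conn.execute("SELECT balance FROM accounts "
                   "WHERE name = 'alice'").fetchone())
```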
Rollback (data management)
[ "Physics" ]
475
[ "Spacetime", "Reversible computing", "Physical quantities", "Time" ]
1,016,017
https://en.wikipedia.org/wiki/Thiocyanate
Thiocyanates are salts containing the thiocyanate anion, [SCN]− (also known as rhodanide or rhodanate). [SCN]− is the conjugate base of thiocyanic acid. Common salts include the colourless salts potassium thiocyanate and sodium thiocyanate. Mercury(II) thiocyanate was formerly used in pyrotechnics. Thiocyanate is analogous to the cyanate ion, [OCN]−, wherein oxygen is replaced by sulfur. [SCN]− is one of the pseudohalides, due to the similarity of its reactions to that of halide ions. Thiocyanate used to be known as rhodanide (from a Greek word for rose) because of the red colour of its complexes with iron. Thiocyanate is produced by the reaction of elemental sulfur or thiosulfate with cyanide: 8 CN− + S8 → 8 SCN− and CN− + S2O3²− → SCN− + SO3²−. The second reaction is catalyzed by thiosulfate sulfurtransferase, a hepatic mitochondrial enzyme, and by other sulfur transferases, which together are responsible for around 80% of cyanide metabolism in the body. Oxidation of thiocyanate inevitably produces hydrogen sulfate. The other product depends on pH: in acid, it is hydrogen cyanide, presumably via HOSCN and with a sulfur dicyanide side-product; but in base and neutral solutions, it is cyanate. Biology Occurrences Thiocyanate occurs widely in nature, albeit often in low concentrations. It is a component of some sulfur cycles. Biochemistry Thiocyanate hydrolases catalyze the hydrolysis of thiocyanate to carbonyl sulfide (SCN− + 2 H2O → COS + NH3 + OH−) or to cyanate (SCN− + H2O → OCN− + H2S). Medicine Thiocyanate is known to be an important part in the biosynthesis of hypothiocyanite by a lactoperoxidase. Thus the complete absence of thiocyanate or reduced thiocyanate in the human body (e.g., in cystic fibrosis) is damaging to the human host defense system. Thiocyanate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Iodine is an essential component of thyroxine. Since thiocyanates will decrease iodide transport into the thyroid follicular cell, they will decrease the amount of thyroxine produced by the thyroid gland. As such, foodstuffs containing thiocyanate are best avoided by iodide-deficient hypothyroid patients. In the early 20th century, thiocyanate was used in the treatment of hypertension, but it is no longer used because of associated toxicity. Sodium nitroprusside, a metabolite of which is thiocyanate, is however still used for the treatment of a hypertensive emergency. Rhodanese catalyzes the reaction of sodium nitroprusside (like other cyanides) with thiosulfate to form the metabolite thiocyanate. Coordination chemistry Thiocyanate shares its negative charge approximately equally between sulfur and nitrogen. As a consequence, thiocyanate can act as a nucleophile at either sulfur or nitrogen—it is an ambidentate ligand. [SCN]− can also bridge two (M−SCN−M) or even three metals (>SCN− or −SCN<). Experimental evidence leads to the general conclusion that class A metals (hard acids) tend to form N-bonded thiocyanate complexes, whereas class B metals (soft acids) tend to form S-bonded thiocyanate complexes. Other factors, e.g. kinetics and solubility, are sometimes involved, and linkage isomerism can occur, for example [Co(NH3)5(NCS)]Cl2 and [Co(NH3)5(SCN)]Cl2. S-bonded thiocyanate, [SCN], is considered a weak-field ligand, whereas N-bonded [NCS] is a stronger-field ligand. Test for iron(III) and cobalt(II) If [SCN]− is added to a solution with iron(III) ions, a blood-red solution forms mainly due to the formation of [Fe(NCS)(H2O)5]2+, i.e. pentaaqua(thiocyanato-N)iron(III). Lesser amounts of other hydrated compounds also form: e.g. Fe(SCN)3 and [Fe(SCN)4]−.
Similarly, Co2+ gives a blue complex with thiocyanate. Both the iron and cobalt complexes can be extracted into organic solvents like diethyl ether or amyl alcohol. This allows the determination of these ions even in strongly coloured solutions. The determination of Co(II) in the presence of Fe(III) is possible by adding KF to the solution, which forms uncoloured, very stable complexes with Fe(III) that no longer react with SCN−. Phospholipids or some detergents aid the transfer of thiocyanatoiron into chlorinated solvents like chloroform, allowing it to be determined in this fashion. See also Sulphobes References Citations Anions Sulfur ions Concrete admixtures
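The blood-red colour in the iron(III) test above arises from the complexation equilibrium Fe3+ + SCN− ⇌ [FeSCN]2+. A minimal sketch of the equilibrium arithmetic; the formation constant here is an assumed, illustrative value (real values depend strongly on temperature and ionic strength):

```python
import math

def fescn_equilibrium(fe0, scn0, k_f):
    """Equilibrium [FeSCN2+] for Fe3+ + SCN- <=> FeSCN2+ (1:1 model).

    Solves K = x / ((fe0 - x)(scn0 - x)) as a quadratic in x.
    Concentrations in mol/L; k_f is the formation constant.
    """
    # K*x^2 - (K*(fe0 + scn0) + 1)*x + K*fe0*scn0 = 0
    a = k_f
    b = -(k_f * (fe0 + scn0) + 1.0)
    c = k_f * fe0 * scn0
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # physical root
    return x

# Illustrative: 1 mM Fe3+, 1 mM SCN-, assumed K_f = 1e2 L/mol
print(f"{fescn_equilibrium(1e-3, 1e-3, 1e2):.2e} M FeSCN2+")  # ~8.4e-05
```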
Thiocyanate
[ "Physics", "Chemistry" ]
1,154
[ "Matter", "Anions", "Functional groups", "Thiocyanates", "Sulfur ions", "Ions" ]
1,016,422
https://en.wikipedia.org/wiki/Curved%20spacetime
In physics, curved spacetime is the mathematical model in which, with Einstein's theory of general relativity, gravity naturally arises, as opposed to being described as a fundamental force in Newton's static Euclidean reference frame. Objects move along geodesics—curved paths determined by the local geometry of spacetime—rather than being influenced directly by distant bodies. This framework led to two fundamental principles: coordinate independence, which asserts that the laws of physics are the same regardless of the coordinate system used, and the equivalence principle, which states that the effects of gravity are indistinguishable from those of acceleration in sufficiently small regions of space. These principles laid the groundwork for a deeper understanding of gravity through the geometry of spacetime, as formalized in Einstein's field equations. Introduction Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space. In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself. In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows along the path of a geodesic. No evidence of gravitation can be discovered by following along with the motion of a single particle. In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that result in the appearance of a gravitational force acting at a long range from Earth. Different observers viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first observer, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second observer, aware of the large mass 1, smiles at the first observer's naiveté. This second observer knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1.
(iii) A third observer, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime. Two central propositions underlie general relativity. The first crucial concept is coordinate independence: The laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion." This leads to an immediate issue: In accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence. The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration. In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in. An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, the force on a body near the Earth's surface is F = mgg, while in Newton's second law it is F = mia; there is no a priori reason why the gravitational mass mg should be equal to the inertial mass mi. The equivalence principle states that these two masses are identical. To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations. Curvature of time In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime. Years before publication of the general theory in 1916, Einstein used the equivalence principle to predict the existence of gravitational redshift in the following thought experiment: (i) Assume that a tower of height h (Fig. 5-3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity v = √(2gh), so that its total energy E, as measured by an observer on the ground, is E = mc² + ½mv² = mc² + mgh. (iii) A mass-energy converter transforms the total energy of the particle into a single high energy photon, which it directs upward.
(iv) At the top of the tower, an energy-mass converter transforms the energy of the photon E′ back into a particle of rest mass m′. It must be that m′ = m, since otherwise one would be able to construct a perpetual motion device. We therefore predict that E′ = m′c² = mc², so that E′/E = mc²/(mc² + mgh) ≈ 1 − gh/c². A photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964). Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: Gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare side by side with the ground clock. For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation. Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence. This includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity. Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable: Δs² = (1 − 2GM/c²r)(cΔt)² − (Δx² + Δy² + Δz²). Curvature of space The coefficient (1 − 2GM/c²r) in front of (cΔt)² describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, this correction factor is directly proportional to G and M, and because of the r in the denominator, the correction factor increases as one approaches the gravitating body, meaning that time is curved. But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, should not their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms? The answer is that they are seen, but the effects are tiny. 
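Before turning to why the spatial effects are so small, a quick numeric check of the time-curvature result above may be useful. The sketch below (a minimal Python calculation using standard constants) evaluates the first-order redshift formula Δf/f ≈ gh/c² derived in the tower thought experiment, and reproduces the 9.4 ns/day figure quoted for a 1 km tower:

```python
# Sketch: verify the weak-field clock discrepancy quoted above for a 1 km tower.
# Assumes the first-order redshift formula delta_f/f ~ g*h/c^2; constants are
# standard values.

g = 9.81                  # m/s^2, surface gravitational acceleration
h = 1000.0                # m, tower height
c = 2.998e8               # m/s, speed of light
seconds_per_day = 86400.0

fractional_shift = g * h / c**2                      # ~1.09e-13
drift_ns_per_day = fractional_shift * seconds_per_day * 1e9

print(f"fractional frequency shift: {fractional_shift:.3e}")
print(f"clock drift: {drift_ns_per_day:.1f} ns/day")  # ~9.4 ns/day, as quoted
```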
The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the (cΔt)² term dwarfs the spatial terms. Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury, unless there possibly existed a planet or asteroid belt within the orbit of Mercury. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets. The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry. Coming from the astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing irregularities in the orbit of Uranus, Le Verrier's announcement triggered a two-decades long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed. In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct. The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram. Its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components: Δs² = (1 − 2GM/c²r)(cΔt)² − (1 + 2GM/c²r)(Δx² + Δy² + Δz²). In Newton's gravitation, the coefficient in front of (cΔt)² predicts bending of light around a star. In general relativity, the coefficient in front of (Δx² + Δy² + Δz²) predicts a doubling of the total bending. The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere. Sources of spacetime curvature In Newton's theory of gravitation, the only source of gravitational force is mass. In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, the sources of gravity are presented on the right-hand side in the stress–energy tensor. Fig. 5-5 classifies the various sources of gravity in the stress–energy tensor: T^00 (red): The total mass–energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions. T^0i and T^i0 (orange): These are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum. T^ij are the rates of flow of the i-component of momentum per unit area in the j-direction. 
Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the diagonal terms T^ii (green) represent isotropic pressure, and the off-diagonal terms T^ij, i ≠ j (blue), represent shear stresses. One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity. Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak field cases. Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong field regime. Energy-momentum In special relativity, mass-energy is closely connected to momentum. Just as space and time are different aspects of a more comprehensive entity called spacetime, mass–energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass–energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism. It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II of his Lectures on Physics, available online.) Analogous logic can be used to demonstrate the origin of gravitomagnetism. In Fig. 5-7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume v ≪ c so that velocities are simply additive. Fig. 5-7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream. It is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream. The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism. Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction. 
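The bookkeeping in points (1) and (2) can be made concrete with a small sketch. It is illustrative only: units are chosen with c = 1, the stream speed is an arbitrary choice, and neither the pressure term (3) nor the compensating gravitomagnetic force is modeled.

```python
# Sketch of the two-stream bookkeeping in Fig. 5-7 (illustrative only).
# Shows that, viewed from the upper stream's frame, the lower stream's
# particles carry more energy and are packed more densely.
import math

c = 1.0   # units with c = 1
v = 0.3   # stream speed in the symmetric frame (arbitrary choice)

# Boost to the frame of the upper stream: relativistic velocity addition,
# not the naive Galilean value 2v.
w = 2 * v / (1 + v * v / c**2)

def gamma(u: float) -> float:
    return 1.0 / math.sqrt(1 - u * u / c**2)

# Point (1): each lower-stream particle has larger mass-energy (gamma * m).
energy_ratio = gamma(w) / gamma(0.0)    # upper stream is at rest in this frame
# Point (2): Lorentz contraction packs more particles per unit length.
density_ratio = gamma(w) / gamma(0.0)

print(f"lower-stream speed: {w:.3f} c (naive 2v would be {2*v:.1f} c)")
print(f"per-particle energy ratio (lower/upper): {energy_ratio:.3f}")
print(f"linear density ratio (lower/upper):      {density_ratio:.3f}")
```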
It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5-8) ejected by some rotating supermassive black holes. Pressure and stress Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass–energy, momentum, pressure and stress all serve as sources of gravity: collectively, they are what tells spacetime how to curve. General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass–energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity versus those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star. The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process becomes runaway and the neutron star collapses to a black hole. The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae. These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regards to pressure, the early universe was radiation dominated, and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass–energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity. Experimental test of the sources of spacetime curvature Definitions: Active, passive, and inertial mass Bondi distinguishes between different possible types of mass: (1) active gravitational mass ma is the mass which acts as the source of a gravitational field; (2) passive gravitational mass mp is the mass which reacts to a gravitational field; (3) inertial mass mi is the mass which reacts to acceleration. mp is the same as the gravitational mass mg in the discussion of the equivalence principle. In Newtonian theory, the third law of action and reaction dictates that ma and mp must be the same. On the other hand, whether mp and mi are equal is an empirical result. In general relativity, the equality of mp and mi is dictated by the equivalence principle. There is no "action and reaction" principle dictating any necessary relationship between ma and mp. Pressure as a gravitational source The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5-9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined. To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass–energy density of a metal ball. However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10²⁸ atm ≈ 10³³ Pa ≈ 10³³ kg·s⁻²·m⁻¹. 
This amounts to about 1% of the nuclear mass density of approximately 10¹⁸ kg/m³ (after factoring in c² ≈ 9×10¹⁶ m²·s⁻²). If pressure does not act as a gravitational source, then the ratio ma/mp should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5-9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10⁻⁵. Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields. In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2 km offset between the moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute equally to spacetime curvature as does mass–energy, the moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancies between active and passive mass to about 10⁻¹². With decades of additional lunar laser ranging data, Singh et al. (2023) reported improvement on these limits by a factor of about 100. Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, while the geodetic effect was confirmed to better than 0.5%. Subsequent measurements of frame dragging by laser-ranging observations of the LARES, LAGEOS, and LAGEOS-2 satellites have improved on the measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value, although there has been some disagreement on the accuracy of this result. Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect. The first ten years of experience with a prototype ring laser gyroscope array, GINGERINO, established that the full scale experiment should be able to measure gravitomagnetism due to the Earth's rotation to within a 0.1% level or even better. See also Spacetime topology Notes References Concepts in physics Theoretical physics Theory of relativity Time Time in physics Conceptual models
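As a worked complement to the Mercury discussion earlier in this article, the sketch below evaluates the standard Schwarzschild perihelion-shift formula Δφ = 6πGM/[c²a(1−e²)] per orbit; the constants and orbital elements used are standard published values.

```python
# Sketch: the anomalous perihelion precession of Mercury from the standard
# Schwarzschild result dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
M_sun = 1.989e30       # kg, solar mass
c = 2.998e8            # m/s, speed of light
a = 5.791e10           # m, Mercury's semi-major axis
e = 0.2056             # Mercury's orbital eccentricity
T_orbit_days = 87.969  # Mercury's orbital period

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit

orbits_per_century = 36525.0 / T_orbit_days
arcsec_per_century = dphi * (180 / math.pi) * 3600 * orbits_per_century
print(f"predicted anomalous precession: {arcsec_per_century:.1f} arcsec/century")
# ~43 arcsec/century, matching the excess reported by Le Verrier
```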
Curved spacetime
[ "Physics", "Mathematics" ]
4,902
[ "Physical phenomena", "Time in physics", "Physical quantities", "Time", "Vector spaces", "Quantity", "Theoretical physics", "Space (mathematics)", "nan", "Theory of relativity", "Spacetime", "Wikipedia categories named after physical quantities" ]
1,017,492
https://en.wikipedia.org/wiki/Wolf%20tone
A wolf tone, or simply a "wolf", is an undesirable phenomenon that occurs in some bowed-string instruments, most famously in the cello. It happens when the pitch of the played note is close to a particularly strong natural resonant frequency of the body of the musical instrument. A wolf tone is hard for the player to control: instead of a solid tone it tends to produce a thin "surface" sound, sometimes jumping to the octave of the intended note. In extreme cases, a "stuttering" or "warbling" sound is produced, as in the sound example. This sound may be likened to the howling of a wolf. A somewhat similar sound is the beating produced by a wolf interval, which is usually the interval between E♭ and G♯ of the various non-circulating temperaments. Stringed instruments The physics behind the warbling wolf was first explained by C. V. Raman. He used simultaneous measurements of the vibrating string and the vibrating body of the cello to show that the warbling sound is caused by an alternation of two different types of string vibration. All bowed string vibration is "stick-slip oscillation". One of the vibration types involves a single slip in every cycle of the note, but the other type involves two slips per cycle. Frequently, the wolf is present on or in between the pitches E and F♯ on the cello, and around G on the double bass. A wolf can be reduced or eliminated with a piece of equipment called a wolf tone eliminator. There are several types. The one illustrated is a metal tube and mounting screw with an interior rubber sleeve, that fits around one of the lengths of string below the bridge. The position of the tube must be adjusted so that the short section of string resonates exactly at the frequency at which the wolf occurs. It works in the same way as a tuned-mass damper, often used to reduce vibration of bridges or tall buildings. An older device on cellos was a fifth string that could be tuned to the wolf frequency; fingering an octave above or below also attenuates the effect somewhat, as does the trick of squeezing with the knees. While it has been said that Lou Harrison wrote a piece (evidently reworked as the second movement of the Suite for Cello and Harp) that exploited the wolf specific to Seymour Barab's new cello, there is no clear evidence that this occurred. "Naldjorlak I", composed by Éliane Radigue for realisation exclusively by the cellist Charles Curtis, is in fact composed solely around the manipulation of the wolf tone of Curtis's cello. See also Mechanical resonance String resonance Violin acoustics References Wilkins, R.A.; Pan, J.; Sun, H. (Fall 2013). "An Empirical Investigation into the Mechanism of Cello Wolf-Tone Beats". Journal of the Violin Society of America. 24 (2). Wilkins, R.A.; Pan, J.; Sun, H. (Fall 2013). "An Investigation into the Techniques for Controlling Cello Wolf-Tones". Journal of the Violin Society of America. 24 (2). Sounds by type Resonance String performance techniques
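As a rough numerical illustration of where such an eliminator might sit, the sketch below inverts the ideal-string relation f = (1/2L)·√(T/μ) to find the afterlength that resonates at the wolf frequency; the wolf frequency, tension, and linear density used are hypothetical placeholders, not measured cello values.

```python
# Sketch: where to place a wolf-tone eliminator on the string afterlength.
# Uses the ideal-string relation f = (1/(2L)) * sqrt(T/mu). The tension and
# linear density below are hypothetical placeholders, not measured values.
import math

f_wolf = 175.0   # Hz, assumed wolf frequency (near F3 on a cello)
T = 120.0        # N, assumed string tension behind the bridge
mu = 0.035       # kg/m, assumed linear mass density of the string

# Solve f = (1/(2L)) * sqrt(T/mu) for the resonating afterlength L.
L = math.sqrt(T / mu) / (2 * f_wolf)
print(f"afterlength tuned to {f_wolf} Hz: {L*100:.1f} cm")
```

With these placeholder values the tube would sit so that roughly 17 cm of string behind the bridge resonates at the wolf frequency; in practice the position is found by ear, exactly as the tuned-mass-damper analogy suggests.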
Wolf tone
[ "Physics", "Chemistry" ]
652
[ "Resonance", "Waves", "Physical phenomena", "Scattering" ]
1,018,020
https://en.wikipedia.org/wiki/Evaporative%20cooling%20%28atomic%20physics%29
Evaporative cooling is an atomic physics technique to achieve high phase space densities which optical cooling techniques alone typically cannot reach. Atoms trapped in optical or magnetic traps can be evaporatively cooled via two primary mechanisms, usually specific to the type of trap in question: in magnetic traps, radiofrequency (RF) fields are used to selectively drive warm atoms from the trap by inducing transitions between trapping and non-trapping spin states; or, in optical traps, the depth of the trap itself is gradually decreased, allowing the most energetic atoms in the trap to escape over the edges of the optical barrier. In the case of a Maxwell-Boltzmann distribution for the velocities of the atoms in the trap, these atoms which escape or are driven out of the trap lie in the highest velocity tail of the distribution, meaning that their kinetic energy (and therefore temperature) is much higher than the average for the trap. The net result is that while the total trap population decreases, so does the mean energy of the remaining population. This decrease in the mean kinetic energy of the atom cloud translates into a progressive decrease in the trap temperature, cooling the trap. The process is analogous to blowing on a cup of coffee to cool it: those molecules at the highest end of the energy distribution for the coffee form a vapor above the surface and are then removed from the system by blowing them away, decreasing the average energy, and therefore temperature, of the remaining coffee molecules. Evaporation is a change of state from liquid to gas. Radiofrequency induced evaporation Radiofrequency (RF) induced evaporative cooling is the most common method for evaporatively cooling atoms in a magneto-optical trap (MOT). Consider trapped atoms laser cooled on a |F=0⟩ → |F=1⟩ transition. The magnetic sublevels of the |F=1⟩ state (|m=−1⟩, |m=0⟩, |m=+1⟩) are degenerate for zero external field. The confining magnetic quadrupole field, which is zero at the center of the trap and nonzero everywhere else, causes a Zeeman shift in atoms which stray from the trap center, lifting the degeneracy of the three magnetic sublevels. The interaction energy between the total spin angular momentum of the trapped atom and the external magnetic field depends on the projection of the spin angular momentum onto the z-axis, and is proportional to −m·|B|. From this relation it can be seen that only the |m=−1⟩ magnetic sublevel will have a positive interaction energy with the field, that is to say, the energy of atoms in this state increases as they migrate from the trap center, making the trap center a point of minimum energy, the definition of a trap. Conversely, the energy of the |m=0⟩ state is unchanged by the field (no trapping), and the |m=+1⟩ state actually decreases in energy as it strays from the trap center, making the center a point of maximum energy. For this reason |m=−1⟩ is referred to as the trapping state, and |m=0⟩ and |m=+1⟩ the non-trapping states. From the equation for the magnetic field interaction energy, it can also be seen that the energies of the |m=+1⟩ and |m=−1⟩ states shift in opposite directions, changing the total energy difference between these two states. The |m=−1⟩ → |m=+1⟩ transition frequency therefore experiences a Zeeman shift. With this in mind, the RF evaporative cooling scheme works as follows: the size of the Zeeman shift of the |m=−1⟩ → |m=+1⟩ transition depends on the strength of the magnetic field, which increases radially outward from the trap center. 
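To make the geometry concrete, here is a minimal numeric sketch of the RF-knife condition. It assumes a linear quadrupole field magnitude B(r) = B′·r and a Zeeman shift of gF·μB·B per unit change in m; the gradient, g-factor, and RF frequency below are hypothetical placeholders, not values from the text:

```python
# Sketch: the "RF knife" in a quadrupole magnetic trap. Assumes a linear
# field magnitude B(r) = B_grad * r and a Zeeman shift of g_F * mu_B * B
# per unit m; all trap numbers below are hypothetical placeholders.
h = 6.626e-34        # J*s, Planck constant
mu_B = 9.274e-24     # J/T, Bohr magneton
k_B = 1.381e-23      # J/K, Boltzmann constant
g_F = 0.5            # assumed magnitude of the Lande g-factor

B_grad = 1.0         # T/m, assumed field gradient
nu_rf = 5e6          # Hz, chosen RF "knife" frequency

# Resonance condition for adjacent sublevels: g_F * mu_B * B(r) = h * nu_rf
r_cut = h * nu_rf / (g_F * mu_B * B_grad)   # radius where atoms are ejected
depth = h * nu_rf                           # effective depth for m = -1 atoms

print(f"ejection radius: {r_cut*1e3:.2f} mm")
print(f"effective trap depth: {depth/k_B*1e6:.1f} uK (in temperature units)")
```

Lowering nu_rf in this sketch pulls the ejection radius inward and reduces the effective depth, which is exactly the "RF knife" action described next.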
Those atoms which are coldest move within a small region around the trap center, where they experience only a small Zeeman shift in the |m=−1⟩ → |m=+1⟩ transition frequency. Warm atoms, however, spend time in regions of the trap much further from the center, where the magnetic field is stronger and the Zeeman shift therefore larger. The shift induced by magnetic fields on the scale used in typical MOTs is on the order of MHz, so that a radiofrequency source can be used to drive the |m=−1⟩ → |m=+1⟩ transition. The choice of frequency for the RF source corresponds to a point on the trapping potential curve at which atoms experience a Zeeman shift equal to the frequency of the RF source, which then drives the atoms to the anti-trapping |m=+1⟩ magnetic sublevel, from which they immediately exit the trap. Lowering the RF frequency is therefore equivalent to lowering the dashed line in the figure, effectively reducing the depth of the potential well. For this reason the RF source used to remove these energetic atoms is often referred to as an "RF knife," as it effectively lowers the height of the trapping potential to remove the most energetic atoms from the trap, "cutting" away the high energy tail of the trap's energy distribution. This method was famously used to cool a cloud of rubidium atoms below the condensation critical temperature to form the first experimentally observed Bose-Einstein condensate (BEC). Optical evaporation While the first observation of Bose-Einstein condensation was made in a magnetic atom trap using RF driven evaporative cooling, optical dipole traps are now much more common platforms for achieving condensation. Beginning in a MOT, cold, trapped atoms are transferred to the focal point of a high power, tightly focused, off-resonant laser beam. The electric field of the laser at its focus is sufficiently strong to induce dipole moments in the atoms, which are then attracted to the electric field maximum at the laser focus, effectively creating a trapping potential to hold them at the beam focus. The depth of the optical trapping potential in an optical dipole trap (ODT) is proportional to the intensity of the trapping laser light. Decreasing the power in the trapping laser beam therefore decreases the depth of the trapping potential. In the case of RF-driven evaporation, the actual height of the potential barrier confining the atoms is fixed during the evaporation sequence, but the RF knife effectively decreases the depth of this barrier, as previously discussed. For an optical trap, however, evaporation is facilitated by decreasing the laser power and thus lowering the depth of the trapping potential. As a result, the warmest atoms in the trap will have sufficient kinetic energy to be able to make it over the barrier walls and escape the trap, reducing the average energy of the remaining atoms as previously described. While trap depths for ODTs can be shallow (on the order of mK, in terms of temperature), the simplicity of this optical evaporation procedure has helped to make it increasingly popular for BEC experiments since its first demonstrations shortly after magnetic BEC production. See also Magneto-optical trap Bose-Einstein condensation Optical tweezers Laser cooling Sisyphus cooling Raman cooling References M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman and E. A. Cornell, Observations of Bose-Einstein Condensation in a Dilute Atomic Vapor, Science, 269:198–201, July 14, 1995. J. J. Tollett, C. C. Bradley, C. A. Sackett, and R. G. Hulet, Permanent magnet trap for cold atoms, Phys. 
Rev. A 51, R22, 1995. Bouyer et al., RF-induced evaporative cooling and BEC in a high magnetic field, physics/0003050, 2000. Thermodynamics Atomic physics Cooling technology
Evaporative cooling (atomic physics)
[ "Physics", "Chemistry", "Mathematics" ]
1,562
[ "Dynamical systems", "Quantum mechanics", "Atomic physics", "Thermodynamics", " molecular", "Atomic", " and optical physics" ]
1,018,336
https://en.wikipedia.org/wiki/Lawson%20criterion
The Lawson criterion is a figure of merit used in nuclear fusion research. It compares the rate of energy being generated by fusion reactions within the fusion fuel to the rate of energy losses to the environment. When the rate of production is higher than the rate of loss, the system will produce net energy. If enough of that energy is captured by the fuel, the system will become self-sustaining and is said to be ignited. The concept was first developed by John D. Lawson in a classified 1955 paper that was declassified and published in 1957. As originally formulated, the Lawson criterion gives a minimum required value for the product of the plasma (electron) density ne and the "energy confinement time" that leads to net energy output. Later analysis suggested that a more useful figure of merit is the triple product of density, confinement time, and plasma temperature T. The triple product also has a minimum required value, and the name "Lawson criterion" may refer to this value. On August 8, 2021, researchers at Lawrence Livermore National Laboratory's National Ignition Facility in California were confirmed to have produced the first-ever successful ignition of a nuclear fusion reaction, surpassing the Lawson criterion in the experiment. Energy balance The central concept of the Lawson criterion is an examination of the energy balance for any fusion power plant using a hot plasma. This is shown below: Net power = Efficiency × (Fusion − Radiation loss − Conduction loss) Net power is the excess power beyond that needed internally for the process to proceed in any fusion power plant. Efficiency accounts for how much energy is needed to drive the device and how well it collects energy from the reactions. Fusion is the rate of energy generated by the fusion reactions. Radiation loss is the energy lost as light (including X-rays) leaving the plasma. Conduction loss is the energy lost as particles leave the plasma, carrying away energy. Lawson calculated the fusion rate by assuming that the fusion reactor contains a hot plasma cloud which has a Gaussian curve of individual particle energies, a Maxwell–Boltzmann distribution characterized by the plasma's temperature. Based on that assumption, he estimated the first term, the fusion energy being produced, using the volumetric fusion equation. Fusion = Number density of fuel A × Number density of fuel B × Cross section(Temperature) × Energy per reaction Fusion is the rate of fusion energy produced by the plasma Number density is the density in particles per unit volume of the respective fuels (or just one fuel, in some cases) Cross section is a measure of the probability of a fusion event, which is based on the plasma temperature Energy per reaction is the energy released in each fusion reaction This equation is typically averaged over a population of ions which has a normal distribution. The result is the amount of energy being created by the plasma at any instant in time. Lawson then estimated the radiation losses using the following equation: PB = 1.4×10⁻³⁴·N²·T^(1/2) W/cm³, where N is the number density of the cloud and T is the temperature (in keV). For his analysis, Lawson ignores conduction losses. In reality this is nearly impossible; practically all systems lose energy through mass leaving the plasma and carrying away its energy. 
By equating radiation losses and the volumetric fusion rates, Lawson estimated the minimum temperature for fusion for the deuterium–tritium (D-T) reaction to be 30 million degrees (2.6 keV), and for the deuterium–deuterium (D-D) reaction to be 150 million degrees (12.9 keV). Extensions into nτE The confinement time τE measures the rate at which a system loses energy to its environment. The faster the rate of loss of energy, Ploss, the shorter the energy confinement time. It is the energy density W (energy content per unit volume) divided by the power loss density Ploss (rate of energy loss per unit volume): τE = W/Ploss. For a fusion reactor to operate in steady state, the fusion plasma must be maintained at a constant temperature. Thermal energy must therefore be added at the same rate the plasma loses energy in order to maintain the fusion conditions. This energy can be supplied by the fusion reactions themselves, depending on the reaction type, or by supplying additional heating through a variety of methods. For illustration, the Lawson criterion for the D-T reaction will be derived here, but the same principle can be applied to other fusion fuels. It will also be assumed that all species have the same temperature, that there are no ions present other than fuel ions (no impurities and no helium ash), and that D and T are present in the optimal 50-50 mixture. Ion density then equals electron density and the energy density of both electrons and ions together is given, according to the ideal gas law, by W = 3nT, where T is the temperature in electronvolts (eV) and n is the particle density. The volume rate (reactions per volume per time) of fusion reactions is f = nD·nT·⟨σv⟩ = (1/4)n²⟨σv⟩, where σ is the fusion cross section, v is the relative velocity, and ⟨ ⟩ denotes an average over the Maxwellian velocity distribution at the temperature T. The volume rate of heating by fusion is f times Ech, the energy of the charged fusion products (the neutrons cannot help to heat the plasma). In the case of the D-T reaction, Ech = 3.5 MeV. The Lawson criterion requires that fusion heating exceeds the losses: f·Ech ≥ Ploss. Substituting in known quantities yields: (1/4)n²⟨σv⟩·Ech ≥ 3nT/τE. Rearranging the equation produces: nτE ≥ 12T/(Ech·⟨σv⟩). The quantity T/⟨σv⟩ is a function of temperature with an absolute minimum. Replacing the function with its minimum value provides an absolute lower limit for the product nτE. This is the Lawson criterion. For the deuterium–tritium reaction, the physical value is at least nτE ≥ 1.5×10²⁰ s/m³. The minimum of the product occurs near T = 26 keV. Extension into the "triple product" A still more useful figure of merit is the "triple product" of density, temperature, and confinement time, nTτE. For most confinement concepts, whether inertial, mirror, or toroidal confinement, the density and temperature can be varied over a fairly wide range, but the maximum attainable pressure p is a constant. When such is the case, the fusion power density is proportional to p²⟨σv⟩/T². The maximum fusion power available from a given machine is therefore reached at the temperature T where ⟨σv⟩/T² is a maximum. By continuation of the above derivation, the following inequality is readily obtained: nTτE ≥ 12T²/(Ech·⟨σv⟩). The quantity T²/⟨σv⟩ is also a function of temperature with an absolute minimum, at a slightly lower temperature than the minimum of T/⟨σv⟩. For the D-T reaction, the minimum occurs at T = 14 keV. The average ⟨σv⟩ in this temperature region can be approximated as ⟨σv⟩ ≈ 1.1×10⁻²⁴·T² m³/s (T in keV), so the minimum value of the triple product at T = 14 keV is about nTτE ≥ 3×10²¹ keV·s/m³. This number has not yet been achieved in any reactor, although the latest generations of machines have come close. JT-60 reported 1.53×10²¹ keV·s·m⁻³. 
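As a quick numeric check of these figures, the sketch below uses only the quadratic ⟨σv⟩ approximation quoted above; within that approximation the bound comes out independent of temperature, which is why it should only be trusted near the 14 keV minimum:

```python
# Sketch: minimum D-T triple product using the quoted approximation
# <sigma v> ~= 1.1e-24 * T^2 m^3/s (T in keV), valid near T = 10-20 keV.
E_ch_keV = 3500.0   # charged-product energy: 3.5 MeV expressed in keV

def triple_product_bound(T_keV: float) -> float:
    """Lower bound on n*T*tau_E in keV*s/m^3 at temperature T (keV)."""
    sigma_v = 1.1e-24 * T_keV**2        # m^3/s, quadratic approximation
    return 12.0 * T_keV**2 / (E_ch_keV * sigma_v)

# Note: the T^2 factors cancel, so the bound is flat in T here; it evaluates
# to ~3.1e21 keV*s/m^3, matching the quoted minimum at 14 keV.
print(f"{triple_product_bound(14.0):.2e} keV*s/m^3")
```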
For instance, the TFTR has achieved the densities and energy lifetimes needed to achieve Lawson at the temperatures it can create, but it cannot create those temperatures at the same time. ITER aims to do both. As for tokamaks, there is a special motivation for using the triple product. Empirically, the energy confinement time τE is found to be nearly proportional to n^(1/3)/P^(2/3). In an ignited plasma near the optimum temperature, the heating power P equals fusion power and therefore is proportional to n²T². The triple product scales as nTτE ∝ nT·n^(1/3)/(n²T²)^(2/3) ∝ T^(−1/3). The triple product is only weakly dependent on temperature, as T^(−1/3). This makes the triple product an adequate measure of the efficiency of the confinement scheme. Inertial confinement The Lawson criterion applies to inertial confinement fusion (ICF) as well as to magnetic confinement fusion (MCF), but in the inertial case it is more usefully expressed in a different form. A good approximation for the inertial confinement time is the time that it takes an ion to travel over a distance R at its thermal speed vth = √(kBT/mi), where mi denotes mean ionic mass. The inertial confinement time can thus be approximated as τE ≈ R/vth = R·√(mi/(kBT)). By substitution of the above expression into the Lawson relationship derived above, we obtain nτE ≈ nR·√(mi/(kBT)). This product must be greater than a value related to the minimum of T^(3/2)/⟨σv⟩. The same requirement is traditionally expressed in terms of mass density ρ = ⟨n·mi⟩: ρR ≥ 1 g/cm². Satisfaction of this criterion at the density of solid D-T (0.2 g/cm³) would require a laser pulse of implausibly large energy. Assuming the energy required scales with the mass of the fusion plasma (Elaser ~ ρR³ ~ ρ⁻²), compressing the fuel to 10³ or 10⁴ times solid density would reduce the energy required by a factor of 10⁶ or 10⁸, bringing it into a realistic range. With a compression by 10³, the compressed density will be 200 g/cm³, and the compressed radius can be as small as 0.05 mm. The radius of the fuel before compression would be 0.5 mm. The initial pellet will be perhaps twice as large since most of the mass will be ablated during the compression. The fusion power density is a good figure of merit to determine the optimum temperature for magnetic confinement, but for inertial confinement the fractional burn-up of the fuel is probably more useful. The burn-up should be proportional to the specific reaction rate (n²⟨σv⟩) times the confinement time (which scales as T^(−1/2)) divided by the particle density n: burn-up ∝ n²⟨σv⟩·T^(−1/2)/n = (nT)·⟨σv⟩/T^(3/2). Thus the optimum temperature for inertial confinement fusion maximises ⟨σv⟩/T^(3/2), which is slightly higher than the optimum temperature for magnetic confinement. Non-thermal systems Lawson's analysis is based on the rate of fusion and loss of energy in a thermalized plasma. There is a class of fusion machines that do not use thermalized plasmas but instead directly accelerate individual ions to the required energies. The best-known examples are the migma, fusor and polywell. When applied to the fusor, Lawson's analysis is used as an argument that conduction and radiation losses are the key impediments to reaching net power. Fusors use a voltage drop to accelerate and collide ions, resulting in fusion. The voltage drop is generated by wire cages, and these cages conduct away particles. Polywells are improvements on this design, designed to reduce conduction losses by removing the wire cages which cause them. Regardless, it is argued that radiation is still a major impediment. See also Fusion energy gain factor (Q) Notes It is straightforward to relax these assumptions. 
The most difficult question is how to define n when the ion and electron densities and temperatures differ. Considering that this is a calculation of energy production and loss by ions, and that any plasma confinement concept must contain the pressure forces of the plasma, it seems appropriate to define the effective (electron) density n through the (total) pressure p as n ≡ p/2T. The factor of 2 is included because n usually refers to the density of the electrons alone, but p here refers to the total pressure. Given two species with ion densities n1 and n2, atomic numbers Z1 and Z2, ion temperature Ti, and electron temperature Te, it is easy to show that the fusion power is maximized by a fuel mix given by n1/n2 = (1 + Z2·Te/Ti)/(1 + Z1·Te/Ti). The values for nτ, nTτ, and the power density must be multiplied by the factor (1 + Z1·Te/Ti)·(1 + Z2·Te/Ti)/4. For example, with protons and boron (Z = 5) as fuel, another factor of 3 must be included in the formulas. On the other hand, for cold electrons, the formulas must all be divided by 4 (with no additional factor for Z > 1). References External links Mathematical derivation, archived 2019 from the original Fusion power
Lawson criterion
[ "Physics", "Chemistry" ]
2,342
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
17,419,853
https://en.wikipedia.org/wiki/Ch%C3%A9zy%20formula
The Chézy Formula is a semi-empirical resistance equation which estimates mean flow velocity in open channel conduits. The relationship was conceptualized and developed in 1768 by French physicist and engineer Antoine de Chézy (1718–1798) while designing Paris's water canal system. Chézy discovered a similarity parameter that could be used for estimating flow characteristics in one channel based on the measurements of another. The Chézy formula is a pioneering formula in the field of fluid mechanics that relates the flow of water through an open channel with the channel's dimensions and slope. It was expanded and modified by Irish engineer Robert Manning in 1889. Manning's modifications to the Chézy formula allowed the entire similarity parameter to be calculated by channel characteristics rather than by experimental measurements. Today, the Chézy and Manning equations continue to accurately estimate open channel fluid flow and are standard formulas in various fields related to fluid mechanics and hydraulics, including physics, mechanical engineering, and civil engineering. The Chézy formula The Chézy formula describes mean flow velocity in turbulent open channel flow and is used broadly in fields related to fluid mechanics and fluid dynamics. Open channels refer to any open conduit, such as rivers, ditches, canals, or partially full pipes. The Chézy formula is defined for uniform equilibrium and non-uniform, gradually varied flows. The formula is written as: v = C·√(Rh·S), where v is the average velocity [length/time]; Rh is the hydraulic radius [length], which is the cross-sectional area of flow divided by the wetted perimeter (for a wide channel this is approximately equal to the water depth); S is the hydraulic gradient, which for uniform normal depth of flow is the slope of the channel bottom [unitless; length/length]; and C is Chézy's coefficient [length^(1/2)/time]. Values of this coefficient must be determined experimentally. Typically, these range from 30 m^(1/2)/s (small rough channel) to 90 m^(1/2)/s (large smooth channel). For many years following Antoine de Chézy's development of this formula, researchers assumed that C was a constant, independent of flow conditions. However, additional research proved the coefficient's dependence on the Reynolds number as well as a channel's roughness. Accordingly, although the Chézy formula does not appear to incorporate either of these terms, the Chézy coefficient empirically and indirectly represents them. Exploring Chézy's similarity parameter The relationship between linear momentum and deformable fluid bodies is well explored, as are the Navier–Stokes equations for incompressible flow. However, exploring the relationships foundational to the Chézy formula can be helpful towards understanding the formula in full. To understand the Chézy similarity parameter, a simple linear momentum equation can help summarize the conservation of momentum of a control volume uniformly flowing through an open channel: ΣF = d/dt ∫cv ρv dV + ∫cs ρv(v·n̂) dA, where the sum of forces on the contents of a control volume in the open channel is equal to the sum of the time rate of change of the linear momentum of the contents of the control volume, plus the net rate of flow of linear momentum through the control surface. The momentum principle may always be used for hydrodynamic force calculations. 
As long as uniform flow can be assumed, applying the linear momentum equation to a river channel flowing in one dimension means that momentum remains conserved and the forces are balanced in the direction of flow: F1 − F2 − τw·P·l + ω·sin θ = 0. Here, the hydrostatic pressure forces are F1 and F2, the component (τw·P·l) represents the shear force of friction acting on the control volume, and the component (ω·sin θ) represents the gravitational force of the fluid's weight acting on the sloped channel bottom; these are held in balance in the flow direction. The free-body diagram below illustrates this equilibrium of forces in open channel flow with uniform flow conditions. Most open-channel flows are turbulent and characterised by very large Reynolds numbers. Due to the large Reynolds numbers characteristic in open channel flow, the channel shear stress proves to be proportional to the density and velocity of the flow. This can be illustrated in a series of advanced formulas which identify a shear stress similarity parameter characteristic of all turbulent open channels. Combining this parameter with the Chézy formula, channel components and the conservation of momentum in an open channel flow results in the Chézy relationship v = C·√(Rh·S). Chézy's similarity parameter and formula explain how the velocity of water flowing through a channel has a relationship with the slope and shear stress of the channel bottom, the hydraulic radius of flow, and the Chézy coefficient, which empirically incorporates several other parameters of the flowing water. This relationship is driven by the conservation of momentum present during uniform flow conditions. Chézy's formula inspires the Manning formula Once this relationship was established by Chézy, many engineers and physicists (see the below section Authors of flow formulas) continued to search for ways to improve Chézy's equation. A slight oversight of Chézy's formula was determined by the research of these colleagues. They determined that the velocity's slope dependence in Chézy's formula (v ∝ S^(1/2)) was reasonable, but that the velocity's dependence on the hydraulic radius (v ∝ Rh^(1/2)) was not reasonable and that the relationship was closer to v ∝ Rh^(2/3). Many formulas based on Chézy's formula have been developed since its discovery by these contemporaries and others, and differing formulas are more suitable in differing conditions. The Chézy formula provided a substantial foundation for a new flow formula proposed in 1889 by Irish engineer Robert Manning. Manning's formula is a modified Chézy formula that combines many of his aforementioned contemporaries' work. Manning's modifications to the Chézy formula allowed the entire similarity parameter to be calculated by channel characteristics rather than by experimental measurements. The Manning equation improved Chézy's equation by better representing the relationship between Rh and velocity, while also replacing the empirical Chézy coefficient (C) with the Manning resistance coefficient (n), which is also referenced in places as the Manning roughness coefficient. Unlike the Chézy coefficient (C), which could only be determined by field measurements, the Manning coefficient (n) was determined to remain constant based on the material of the wetted perimeter, allowing for a standardized table of values to be developed that could reasonably estimate flow velocity. 
While field measurements remain the most precise way to obtain either Chézy or Manning coefficients, the standardized values that were developed with the use of the Manning formula provided a much-desired simplicity to open-channel flow estimates. Chézy formula vs Manning formula The Manning formula is described elsewhere but it is included below for comparison purposes. Below, the minor modifications used by the Manning formula to improve upon the Chézy formula are clear. Chézy formula: v = C·Rh^(1/2)·S^(1/2) Manning formula: v = (k/n)·Rh^(2/3)·S^(1/2) Using Chézy formula with Manning coefficient This similarity between the Chézy and Manning formulas shown above also means that the standardized Manning coefficients may be used to estimate open channel flow velocity with the Chézy formula, by using them to calculate the Chézy's coefficient as shown below. Manning derived the following relationship between the Manning coefficient (n) and the Chézy coefficient (C) based upon experiments: C = (k/n)·Rh^(1/6), where C is the Chézy coefficient [length^(1/2)/time], a function of relative roughness and Reynolds number; Rh is the hydraulic radius, which is the cross-sectional area of flow divided by the wetted perimeter (for a wide channel this is approximately equal to the water depth) [m]; n is Manning's coefficient [time/length^(1/3)]; and k is a constant; k = 1 when using SI units and k = 1.49 when using BG units. Modern use of Chézy and Manning formulas Since the Chézy formula and the Manning formula both reference a single control volume location along the channel, neither addresses friction factor nor head loss directly. However, the change in pressure head may be calculated by combining them with other formulas such as the Darcy–Weisbach equation. The empirical aspect to the coefficient indirectly addresses friction factor and Reynolds number and is the reason why the Chézy formula remains most accurate in certain conditions, such as river channels with non-uniform channel dimensions. Additionally, both equations are explicitly used with uniform or "steady-state" flow where the hydraulic depth is constant, due to their derivation from the conservation of momentum. In contrast, if the hydraulic conditions fluctuate in open channel flow, they are then described as gradually or rapidly varied flow, and will require further analyses beyond these two formula methods. Since partially full pipes aren't pressurized, they are considered open channels by definition. Therefore, the Manning and Chézy formulas can be applied to calculate partially full pipe flow. However, the intended use of these formulas is primarily for considering uniform and turbulent flow. Many other formulas that have been developed since may produce more accurate results, such as the Darcy–Weisbach equation or the Hazen–Williams equation, but lack the simplicity of the Manning or Chézy formulas. Both formulas continue to be broadly taught and are used in open channel and fluid dynamics research. Today, the Manning formula is likely the most globally used formula for open channel uniform flow analysis, due to its simplicity, proven efficacy, and the fact that most open channel studies are concerned with turbulent flow. Chézy's formula is one of the oldest in the field of fluid mechanics; it applies to a wider range of flows than the Manning equation, and its influence continues to this day. 
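As a minimal illustration of the workflow just described — Chézy's formula with a Manning-derived coefficient — the sketch below uses SI units (k = 1); the channel dimensions and the roughness value n = 0.012 (a typical published value for finished concrete) are illustrative placeholders rather than a worked design:

```python
# Sketch: mean velocity of a rectangular channel via the Chezy formula,
# with C obtained from Manning's n (SI units, k = 1).
import math

def chezy_velocity(R_h: float, S: float, n: float, k: float = 1.0) -> float:
    """Mean flow velocity [m/s] from Chezy's formula with C = (k/n)*R_h^(1/6)."""
    C = (k / n) * R_h ** (1.0 / 6.0)   # Chezy coefficient [m^(1/2)/s]
    return C * math.sqrt(R_h * S)

# Illustrative case: 10 m wide channel, 2 m deep, slope 0.001, finished concrete.
b, y = 10.0, 2.0
R_h = (b * y) / (b + 2 * y)            # hydraulic radius = area / wetted perimeter
v = chezy_velocity(R_h, S=0.001, n=0.012)
print(f"R_h = {R_h:.2f} m, v = {v:.2f} m/s")   # ~1.43 m, ~3.34 m/s
```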
See also Hydrology Hydraulic engineering Authors of flow formulas Albert Brahms (1692–1758) Antoine de Chézy (1718–1798) Claude-Louis Navier (1785–1836) Adhémar Jean Claude Barré de Saint-Venant (1797–1886) Gotthilf Heinrich Ludwig Hagen (1797–1884) Jean Léonard Marie Poiseuille (1797–1869) Henri P. G. Darcy (1803–1858) Julius Ludwig Weisbach (1806–1871) Charles Storrow (1809–1904) Robert Manning (1816–1897) Wilhelm Rudolf Kutter (1818–1888) Emile Oscar Ganguillet (1818–1894) Sir George Stokes (1819–1903) Philippe Gaspard Gauckler (1826–1905) Henri-Émile Bazin (1829–1917) Alphonse Fteley (1837–1903) Frederic Stearns (1851–1919) Ludwig Prandtl (1875–1953) Paul Richard Heinrich Blasius (1883–1970) Albert Strickler (1887–1963) Cyril Frank Colebrook (1910–1997) References External links History of the Chézy Formula Eponymous equations of physics Fluid dynamics Piping Scientific laws
Chézy formula
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
2,134
[ "Equations of physics", "Building engineering", "Chemical engineering", "Mathematical objects", "Eponymous equations of physics", "Equations", "Scientific laws", "Mechanical engineering", "Piping", "Fluid dynamics" ]
4,182,509
https://en.wikipedia.org/wiki/Oxygen%20transmission%20rate
Oxygen transmission rate (OTR) is the measurement of the amount of oxygen gas that passes through a substance over a given period. It is mostly carried out on non-porous materials, where the mode of transport is diffusion, but there are a growing number of applications where the transmission rate also depends on flow through apertures of some description. It relates to the permeation of oxygen through packaging to sensitive foods and pharmaceuticals. Measurement Standard test methods are available for measuring the oxygen transmission rate of packaging materials. Completed packages, however, involve heat seals, creases, joints, and closures which often reduce the effective barrier of the package. For example, the glass of a glass bottle may have an effective total barrier but the screw cap closure and the closure liner might not. ASTM standard test methods include: F3136 Standard Test Method for Oxygen Gas Transmission Rate through Plastic Film and Sheeting using a Dynamic Accumulation Method D3985 Standard Test Method for Oxygen Gas Transmission Rate Through Plastic Film and Sheeting Using a Coulometric Sensor F1307 Standard Test Method for Oxygen Transmission Rate Through Dry Packages Using a Coulometric Sensor F1927 Standard Test Method for Determination of Oxygen Gas Transmission Rate, Permeability and Permeance at Controlled Relative Humidity Through Barrier Materials Using a Coulometric Detector F2622 Standard Test Method for Oxygen Gas Transmission Rate Through Plastic Film and Sheeting Using Various Sensors Other test methods include: The ambient oxygen ingress rate method (AOIR), an alternative method for measuring the oxygen transmission rates (OTR) of whole packages Wine OTR is also a factor of increasing prominence in the debate surrounding wine closures: natural corks show small variations in their oxygen transmission rates, which in turn translate into a degree of bottle-to-bottle variation. See also Moisture vapor transmission rate Permeation Shelf life Oxygen scavenger References Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, Massey, L K, "Permeability Properties of Plastics and Elastomers", 2003, Andrew Publishing, Sanghyun Lee "Mass Transfer" Konkuk University, 2017 Hanne Larsen, Achim Kohlr and Ellen Merethe Magnus, "Ambient oxygen ingress rate method", John Wiley & Sons, Packaging Technology and Science, Volume 13 Issue 6, Pages 233 - 241 Footnotes Packaging Temporal rates
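As a minimal sketch of how a film OTR measurement translates into package-level oxygen ingress (assuming permeation scales linearly with the oxygen partial-pressure difference, and using hypothetical package numbers rather than measured data):

```python
# Sketch: estimate total O2 ingress into a package from a film's OTR.
# Assumes the OTR is already normalized to the test film and conditions
# (cc/(m^2 * day) at 1 atm O2 partial-pressure difference) and that ingress
# scales linearly with the actual partial-pressure difference.
otr = 2.0          # cc/(m^2 * day), assumed film OTR from e.g. an ASTM D3985 test
area = 0.05        # m^2, assumed package surface area
p_O2 = 0.21        # atm, ambient O2 partial pressure (package interior flushed to ~0)
shelf_days = 180   # target shelf life in days

ingress_cc = otr * area * p_O2 * shelf_days
print(f"estimated O2 ingress over {shelf_days} days: {ingress_cc:.1f} cc")
```

Note that, as the article stresses, seals and closures can dominate the real package's ingress, so a whole-package method such as AOIR may give a very different number than this film-only estimate.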
Oxygen transmission rate
[ "Physics" ]
479
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
4,184,621
https://en.wikipedia.org/wiki/Chetaev%20instability%20theorem
The Chetaev instability theorem for dynamical systems states that if there exists, for the system dx/dt = X(x) with an equilibrium point at the origin, a continuously differentiable function V(x) such that the origin is a boundary point of the set G = {x : V(x) > 0}; there exists a neighborhood U of the origin such that dV/dt > 0 for all x in G ∩ U; then the origin is an unstable equilibrium point of the system. This theorem is somewhat less restrictive than the Lyapunov instability theorems, since a complete sphere (circle) around the origin for which V and dV/dt both are of the same sign does not have to be produced. It is named after Nicolai Gurevich Chetaev. Applications The Chetaev instability theorem has been used to analyze the unfolding dynamics of proteins under the effect of optical tweezers. See also Lyapunov function — a function whose existence guarantees stability References Further reading Theorems in dynamical systems Stability theory
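As a worked illustration (a standard textbook-style example, not drawn from the sources above): for the linear saddle system dx1/dt = x1, dx2/dt = −x2, the function V(x) = (x1² − x2²)/2 satisfies both hypotheses, since the origin lies on the boundary of {V > 0} and dV/dt = x1² + x2² > 0 there. A few explicit Euler steps show V growing along a trajectory started arbitrarily close to the origin:

```python
# Sketch: Chetaev's theorem on the saddle system x1' = x1, x2' = -x2.
# With V(x) = (x1^2 - x2^2)/2 we get Vdot = x1^2 + x2^2 > 0 on {V > 0},
# and the origin lies on the boundary of that set, so the origin is unstable.
import math

def step(x1: float, x2: float, dt: float):
    """One forward-Euler step of the linear saddle system."""
    return x1 + dt * x1, x2 - dt * x2

def V(x1: float, x2: float) -> float:
    return 0.5 * (x1**2 - x2**2)

# Start arbitrarily close to the origin, inside the region {V > 0}.
x1, x2, dt = 1e-3, 5e-4, 1e-3
for k in range(5001):
    if k % 1000 == 0:
        print(f"t = {k*dt:4.1f}  V = {V(x1, x2):.3e}  |x| = {math.hypot(x1, x2):.3e}")
    x1, x2 = step(x1, x2, dt)
# V and |x| grow without bound, confirming the instability of the origin.
```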
Chetaev instability theorem
[ "Mathematics" ]
179
[ "Theorems in dynamical systems", "Stability theory", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]