id: int64 (39 to 79M)
url: stringlengths (31 to 227)
text: stringlengths (6 to 334k)
source: stringlengths (1 to 150)
categories: listlengths (1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: listlengths (0 to 30)
14,609,741
https://en.wikipedia.org/wiki/Alexandrov%20space
In geometry, Alexandrov spaces with curvature ≥ k form a generalization of Riemannian manifolds with sectional curvature ≥ k, where k is some real number. By definition, these spaces are locally compact complete length spaces where the lower curvature bound is defined via comparison of geodesic triangles in the space to geodesic triangles in standard constant-curvature Riemannian surfaces. One can show that the Hausdorff dimension of an Alexandrov space with curvature ≥ k is either a non-negative integer or infinite. One can define a notion of "angle" (see Comparison triangle#Alexandrov angles) and "tangent cone" in these spaces. Alexandrov spaces with curvature ≥ k are important as they form the limits (in the Gromov–Hausdorff metric) of sequences of Riemannian manifolds with sectional curvature ≥ k, as described by Gromov's compactness theorem. Alexandrov spaces with curvature ≥ k were introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov in 1948 and should not be confused with Alexandrov-discrete spaces named after the Russian topologist Pavel Alexandrov. They were studied in detail by Burago, Gromov and Perelman in 1992 and were later used in Perelman's proof of the Poincaré conjecture. References Metric geometry Differential geometry Riemannian manifolds
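For illustration, one common way to state the triangle comparison condition mentioned above (this formulation is supplied here and is not part of the original text; M_k denotes the simply connected model surface of constant curvature k):

d_X(p, s) \;\ge\; d_{M_k}(\tilde{p}, \tilde{s}) \quad \text{for every geodesic triangle } \triangle pqr \subset X \text{ and every point } s \in [qr],

where \triangle \tilde{p}\tilde{q}\tilde{r} \subset M_k is a comparison triangle with the same side lengths as \triangle pqr and \tilde{s} \in [\tilde{q}\tilde{r}] is the point with d(\tilde{q},\tilde{s}) = d(q,s). In words: geodesic triangles in the space are at least as "fat" as their comparison triangles in the constant-curvature model surface.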
Alexandrov space
[ "Mathematics" ]
279
[ "Metric spaces", "Riemannian manifolds", "Space (mathematics)", "Geometry", "Geometry stubs" ]
14,609,763
https://en.wikipedia.org/wiki/Normal%20shock%20tables
In aerodynamics, the normal shock tables are a series of tabulated data listing the various properties before and after the occurrence of a normal shock wave. With a given upstream Mach number, the post-shock Mach number can be calculated along with the pressure, density, temperature, and stagnation pressure ratios. Such tables are useful since the equations used to calculate the properties after a normal shock are cumbersome. The tables below have been calculated using a heat capacity ratio, γ, equal to 1.4. The upstream Mach number, M₁, begins at 1 and ends at 5. Although the tables could be extended over any range of Mach numbers, stopping at Mach 5 is typical since assuming γ to be 1.4 over the entire Mach number range leads to errors over 10% beyond Mach 5. Normal shock table equations Given an upstream Mach number, M₁, and the ratio of specific heats, γ, the post normal shock Mach number, M₂, can be calculated using the equation below. The next equation shows the relationship between the post normal shock pressure, p₂, and the upstream ambient pressure, p₁. The relationship between the post normal shock density, ρ₂, and the upstream ambient density, ρ₁, is shown next in the tables. Next, the equation below shows the relationship between the post normal shock temperature, T₂, and the upstream ambient temperature, T₁. Finally, the ratio of stagnation pressures is shown below, where p₀₁ is the upstream stagnation pressure and p₀₂ is the stagnation pressure after the normal shock. The ratio of stagnation temperatures remains constant across a normal shock since the process is adiabatic. Note that before and after the shock the isentropic relations are valid and connect static and total quantities. The total pressure cannot, however, be obtained from Bernoulli's equation, which assumes incompressible flow, because the flow is always compressible for Mach numbers greater than unity. The normal shock tables (for γ = 1.4) See also Normal shock Mach number Compressible flow References External links University of Cincinnati shock relations calculator Parkin Research Normal shock calculator Aerospace engineering Aerodynamics
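The equations referred to in the text above do not survive in this extract; the following reconstruction gives the standard normal-shock relations for a calorically perfect gas with ratio of specific heats γ (supplied here for reference, not taken verbatim from the original article):

M_2^2 = \frac{(\gamma - 1)M_1^2 + 2}{2\gamma M_1^2 - (\gamma - 1)}, \qquad
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\left(M_1^2 - 1\right), \qquad
\frac{\rho_2}{\rho_1} = \frac{(\gamma + 1)M_1^2}{(\gamma - 1)M_1^2 + 2},

\frac{T_2}{T_1} = \frac{p_2}{p_1}\,\frac{\rho_1}{\rho_2}, \qquad
\frac{p_{02}}{p_{01}} = \left[\frac{(\gamma + 1)M_1^2}{(\gamma - 1)M_1^2 + 2}\right]^{\gamma/(\gamma - 1)} \left[\frac{\gamma + 1}{2\gamma M_1^2 - (\gamma - 1)}\right]^{1/(\gamma - 1)}.

As a check against standard tables, at M₁ = 2 with γ = 1.4 these give M₂ ≈ 0.577, p₂/p₁ = 4.50, ρ₂/ρ₁ ≈ 2.67, T₂/T₁ ≈ 1.69, and p₀₂/p₀₁ ≈ 0.721.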
Normal shock tables
[ "Chemistry", "Engineering" ]
409
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
14,609,794
https://en.wikipedia.org/wiki/Subcostal%20plane
The subcostal plane is a transverse plane which bisects the body at the level of the 10th costal margin and the vertebral body of L3. References External links http://www.liv.ac.uk/HumanAnatomy/phd/mbchb/travel/surface1.html http://www.qub.ac.uk/cskills/Abd%20exam.htm Anatomical planes
Subcostal plane
[ "Mathematics" ]
94
[ "Planes (geometry)", "Anatomical planes" ]
14,609,827
https://en.wikipedia.org/wiki/Interspinous%20plane
The interspinous plane (Planum interspinale) is an anatomical transverse plane that passes through the anterior superior iliac spines. It separates the lateral lumbar region from the inguinal region and the umbilical region from the pubic region. References Anatomical planes
Interspinous plane
[ "Mathematics" ]
60
[ "Planes (geometry)", "Anatomical planes" ]
14,610,483
https://en.wikipedia.org/wiki/Autotransporter%20family
In molecular biology, an autotransporter domain is a structural domain found in some bacterial outer membrane proteins. The domain is always located at the C-terminal end of the protein and forms a beta-barrel structure. The barrel is oriented in the membrane such that the N-terminal portion of the protein, termed the passenger domain, is presented on the cell surface. These proteins are typically virulence factors, associated with infection or virulence in pathogenic bacteria. The name autotransporter derives from an initial understanding that the protein was self-sufficient in transporting the passenger domain through the outer membrane. This view has since been challenged by Benz and Schmidt. Secretion of polypeptide chains through the outer membrane of Gram-negative bacteria can occur via a number of different pathways. The type V(a), or autotransporter, secretion pathway constitutes the largest number of secreted virulence factors of any one of the seven known types of secretion in Gram-negative bacteria. This secretion pathway is exemplified by the prototypical IgA1 protease of Neisseria gonorrhoeae. The protein is directed to the inner membrane by a signal peptide and is transported across the inner membrane via the Sec machinery. Once in the periplasm, the autotransporter domain inserts into the outer membrane. The passenger domain is passed through the center of the autotransporter domain to be presented on the outside of the cell; however, the mechanism by which this occurs remains unclear. The C-terminal translocator domain corresponds to an outer membrane beta-barrel domain. The N-terminal passenger domain is translocated across the membrane, and may or may not be cleaved from the translocator domain. In those proteins where the cleavage is auto-catalytic, the peptidase domains belong to MEROPS peptidase families S6 and S8. Passenger domains structurally characterized to date have been shown to be dominated by a protein fold known as a beta helix, typified by pertactin. The folding of this domain is thought to be intrinsically linked to its method of outer membrane translocation. See also Trimeric Autotransporter Adhesins (TAA) References Further reading Protein domains Outer membrane proteins
Autotransporter family
[ "Biology" ]
476
[ "Protein domains", "Protein classification" ]
14,610,533
https://en.wikipedia.org/wiki/Outer%20membrane%20protein%20W%20family
Outer membrane protein W (OmpW) family is a family of evolutionarily related proteins from the bacterial outer membrane. This family includes outer membrane protein W (OmpW) proteins from a variety of bacterial species. This protein may form the receptor for S4 colicins in Escherichia coli. References Protein domains Protein families Outer membrane proteins
Outer membrane protein W family
[ "Biology" ]
72
[ "Protein families", "Protein domains", "Protein classification" ]
14,610,619
https://en.wikipedia.org/wiki/Virulence-related%20outer%20membrane%20protein%20family
Virulence-related outer membrane proteins, or outer surface proteins (Osp) in some contexts, are expressed in the outer membrane of gram-negative bacteria and are essential to bacterial survival within macrophages and for eukaryotic cell invasion. This family consists of several bacterial and phage Ail/Lom-like proteins. The Yersinia enterocolitica Ail protein is a known virulence factor. Proteins in this family are predicted to consist of eight transmembrane beta-sheets and four cell surface-exposed loops. It is thought that Ail directly promotes invasion and that loop 2 contains an active site, perhaps a receptor-binding domain. The phage gene lom is expressed during lysogeny and encodes a host-cell envelope protein. Lom is found in the bacterial outer membrane, and is homologous to virulence proteins of two other enterobacterial genera. It has been suggested that lysogeny may generally have a role in bacterial survival in animal hosts, and perhaps in pathogenesis. Borrelia burgdorferi (responsible for Lyme disease) outer surface proteins play a role in persistence within ticks (OspA, OspB, OspD), mammalian host transmission (OspC, BBA64), host cell adhesion (OspF, BBK32, DbpA, DbpB), and in evasion of the host immune system (VlsE). OspC triggers the innate immune system via signaling through TLR1, TLR2 and TLR6 receptors. Examples Members of this group include: PagC, required by Salmonella typhimurium for survival in macrophages and for virulence in mice Rck outer membrane protein of the S. typhimurium and S. enteritidis virulence plasmid Ail, a product of the Yersinia enterocolitica chromosome capable of mediating bacterial adherence to and invasion of epithelial cell lines OmpX from Escherichia coli that promotes adhesion to and entry into mammalian cells. It also has a role in the resistance against attack by the human complement system a Bacteriophage lambda outer membrane protein, Lom OspA/B are lipoproteins from Borrelia burgdorferi. OspA and OspB share 53% amino acid identity and likely have a similar antiparallel “free-standing” β sheet protein structure associated with the outer membrane surface via a lipidated NH2-terminal cysteine residue. OspA OspC is a major surface lipoprotein produced by Borrelia burgdorferi when infected ticks feed. OspC is necessary for tick salivary gland invasion. OspC-deficient B. burgdorferi have a markedly reduced capacity (approximately 800-fold less than OspC-expressing control spirochetes) for successful transmission to mice. Its synthesis decreases after transmission to a mammalian host. This protein disappears from the bacterial surface around 2 weeks after infection. Structure The crystal structure of OmpX from E. coli reveals that OmpX consists of an eight-stranded antiparallel all-next-neighbour beta barrel. The structure shows two girdles of aromatic amino acid residues and a ribbon of nonpolar residues that attach to the membrane interior. The core of the barrel consists of an extended hydrogen bonding network of highly conserved residues. OmpX thus resembles an inverse micelle. The OmpX structure shows that the membrane-spanning part of the protein is much better conserved than the extracellular loops. Moreover, these loops form a protruding beta sheet, the edge of which presumably binds to external proteins. It is suggested that this type of binding promotes cell adhesion and invasion and helps defend against the complement system. 
Although OmpX has the same beta-sheet topology as the structurally related outer membrane protein A (OmpA), their barrels differ with respect to the shear numbers and internal hydrogen-bonding networks. OspA from Borrelia burgdorferi is an unusual outer surface protein: it has two globular domains connected by a single-layer β-sheet. This protein is highly soluble and contains a large number of Lys and Glu residues. These high entropy residues may disfavor crystal packing. References Further reading Protein domains Protein families Outer membrane proteins Virulence factors
Virulence-related outer membrane protein family
[ "Biology" ]
905
[ "Protein families", "Protein domains", "Protein classification" ]
14,610,686
https://en.wikipedia.org/wiki/Lipid%20A%20acylase
Antimicrobial peptide resistance and lipid A acylation protein (PagP) is a family of bacterial proteins involved in antimicrobial peptide resistance and lipid A acylation. The bacterial outer membrane enzyme PagP transfers a palmitate chain from a phospholipid to lipid A. In a number of pathogenic Gram-negative bacteria, PagP confers resistance to certain cationic antimicrobial peptides produced during the host innate immune response. References Protein domains Protein families Outer membrane proteins
Lipid A acylase
[ "Biology" ]
110
[ "Protein families", "Protein domains", "Protein classification" ]
14,610,959
https://en.wikipedia.org/wiki/Mycobacterial%20porin
Mycobacterial porins are a group of transmembrane beta-barrel proteins produced by mycobacteria, which allow hydrophilic nutrients to enter the bacterium. They are located in the impermeable mycobacterial outer membrane, or mycomembrane of fast-growing mycobacteria. The mycomembrane is unique and composed of very-long chain fatty acids, mycolic acids. These proteins are structurally different from the typical porins located in the outer membrane of Gram-negative bacteria. For example, the MspA protein forms a tightly interconnected octamer with eight-fold rotation symmetry that resembles a goblet and contains a central channel. Each protein subunit contains a beta-sandwich of immunoglobulin-like topology and a beta-ribbon arm that forms an oligomeric transmembrane beta-barrel. MspA has biotechnological applications, most notably in nanopore sequencing. References Protein domains Outer membrane proteins
Mycobacterial porin
[ "Biology" ]
210
[ "Protein domains", "Protein classification" ]
14,611,016
https://en.wikipedia.org/wiki/Imaging%20informatics
Imaging informatics, also known as radiology informatics or medical imaging informatics, is a subspecialty of biomedical informatics that aims to improve the efficiency, accuracy, usability and reliability of medical imaging services within the healthcare enterprise. It is devoted to the study of how information about and contained within medical images is retrieved, analyzed, enhanced, and exchanged throughout the medical enterprise. As radiology is an inherently data-intensive and technology-driven specialty, those in this branch of medicine have become leaders in Imaging Informatics. However, with the proliferation of digitized images across the practice of medicine to include fields such as cardiology, ophthalmology, dermatology, surgery, gastroenterology, obstetrics, gynecology and pathology, the advances in Imaging Informatics are also being tested and applied in other areas of medicine. Various industry players and vendors involved with medical imaging, along with IT experts and other biomedical informatics professionals, are contributing and getting involved in this expanding field. Imaging informatics exists at the intersection of several broad fields: biological science – includes bench sciences such as biochemistry, microbiology, physiology and genetics clinical services – includes the practice of medicine, bedside research, including outcomes and cost-effectiveness studies, and public health policy information science – deals with the acquisition, retrieval, cataloging, and archiving of information medical physics / biomedical engineering – entails the use of equipment and technology for a medical purpose cognitive science – studying human computer interactions, usability, and information visualization computer science – studying the use of computer algorithms for applications such as computer assisted diagnosis and computer vision Due to the diversity of the industry players and broad professional fields involved with Imaging Informatics, there grew a demand for new standards and protocols. These include DICOM (Digital Imaging and Communications in Medicine), Health Level 7 (HL7), International Organization for Standardization (ISO), and Artificial Intelligence protocols. Current research surrounding Imaging Informatics has a focus on Artificial Intelligence (AI) and Machine Learning (ML). These new technologies are being used to develop automation methods, disease classification, advanced visualization techniques, and improvements in diagnostic accuracy. However, AI and ML integration faces several challenges with data management and security. History Medical imaging to imaging informatics While the field of imaging informatics is based around the power of modern computing, its roots trace back to the dawn of the 20th century. On November 8, 1895, German physicist Wilhelm Conrad Röntgen observed a new imaging technique he coined “X-rays” during his experiments. This discovery led to the creation of the medical imaging field, and in turn launched a new wave of human innovation. X-rays stood as the only medical imaging technology for several decades following its discovery. However, the arrival of the mid 20th century meant the expansion of the medical imaging field. 
The new modalities included: computed tomography (CT) to visualize soft tissue with a high degree of resolution; Magnetic Resonance Imaging (MRI) which is a modern standard for soft tissue imaging; Ultrasound that uses sound waves to create less expensive visualizations; Nuclear Imaging and Hybrid Scanners for functional imaging and imaging with higher spatial resolution created by combining multiple modalities. As these imaging techniques became more sophisticated, the amount of information that medical imaging professionals were expected to process also increased. Additionally, the digital revolution of the mid to late 20th century further increased the data these techniques could gather. As a result, the main limiting factor for the medical imaging field became the human inability to accurately interpret large amounts of data. Thus, the need arose for computerized assistance with complex digital imaging analysis, storage and manipulation. Modern Imaging Informatics was developed to fulfill these needs. Imaging informatics development Imaging Informatics is a broad field with numerous areas of interest, making its development a culmination of the development of various individual technologies. Several of the key innovations for the field are as follows: Picture archiving and communication system (PACS) The development of PACS popularized the use of image storage and retrieval systems in medical practices. Moreover, this new technology demanded the development of others. The world quickly realized that digital imaging standards would need to be put in place given the impact PACS had on the medical community. The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) created the Digital Imaging and Communications Standards Committee (later becoming DICOM) in response to this concern. Information technology integration The digital age’s impact on radiology resulted in a large influx of data that needed to be managed. To combat this, the field of information technology was incorporated with technology such as Radiology Information System (RIS) and Hospital Information System (HIS). These systems would work in tandem with PACS and other imaging technology to streamline the patient data management, as shown in the figure to the right. Computer-aided detection and diagnosis The idea of computer-aided detection (CAD) and computer-aided diagnosis (CADx) is that the process of analysis and interpretation of medical image data could be automated, with a potentially higher degree of accuracy than human detection and diagnosis. Interest in this subject dates back to 1966, when radiology imaging first became digitized. The first successful implementation of a CAD system was in 1994 at the University of Chicago for use in mammography. This was followed by the first commercial CAD system in 1998 called ImageChecker M1000. With the arrival of the 21st century, machine learning techniques have been utilized to accomplish a version of the CAD and CADx systems. The future development of these technologies is advantageous as it gives a solution to human limitations in medical image processing. Although a highly accurate and fully automated CAD system has yet to be realized, recent advancements in Artificial Intelligence may allow for functioning implementations. Standards and protocols In the domain of imaging informatics, it is imperative to ascertain that the information pertaining to industry standards and data-sharing protocols is contemporaneous. 
The expeditious advancement in this field necessitates a vigilant approach to sustain uniformity, foster interoperability, and guarantee the efficacious dissemination of imaging data. To this end, several pivotal facets warrant rigorous consideration: Digital imaging and communications in medicine (DICOM) standards The Digital Imaging and Communications in Medicine (DICOM) standard delineates a sophisticated structural schema that integrates medical imaging data with pertinent patient identifiers into unified data sets, analogous to the embedded metadata in JPEG images. Such DICOM entities are constituted by a multitude of attributes, notably encapsulating pixel data, which in certain imaging modalities, corresponds to discrete images or, alternatively, an array of frames exemplifying kinetic or volumetric data, as observed in cine loops or multi-dimensional scans in nuclear medicine. This architecture accommodates the assimilation of intricate, multi-faceted data into a monolithic DICOM file. The standard accommodates a spectrum of pixel data compression algorithms, including but not limited to JPEG and JPEG 2000, and provisionally allows for holistic data set compression. DICOM specifies three encodings for data elements, with a predilection for explicit value representations, barring specific exceptions as elaborated in Part 5 of the DICOM compendium. Uniformly applied across diverse applications, the file manifestation customarily incorporates a header that houses essential attributes and data on the originating application. The proposed workflow integrates the use of DICOM Structured Reporting (SR), in which essential measurements are encoded as DICOM SR objects. These objects are then used to fill a predefined SR template, resulting in the creation of a standardized report composed of discrete data elements. This report is subsequently transmitted to the Electronic Medical Record (EMR) system. The discrete data extracted from these reports facilitate the longitudinal monitoring of individual patient metrics, are forwarded to data registries, or are leveraged for clinical research purposes. Health level 7 (HL7) standards DDInteract has been crafted to enhance cooperative engagement between healthcare practitioners and patients, aiming to ascertain the optimal therapeutic approach that minimizes the hazards posed by potential drug-drug interactions. The user interface of DDInteract is systematically organized into four distinct segments. Medication data can be represented across a variety of Fast Health Interoperability Resources (FHIR) resources, necessitating careful analysis by DDInteract. Specifically, MedicationRequest is utilized for medications prescribed to the patient; MedicationDispense covers medications that have been physically provided to the patient; and MedicationStatement pertains to medications that the patient reports having taken or is currently taking. It is possible for a single medication to be represented in multiple resource forms, with potential redundancies being amalgamated into a single record based on the most recent date and a defined hierarchy among the resource types. To optimize the efficiency of data retrieval from the FHIR server, not every instance of medication is considered. Only those resources that are currently active or were active within the past 100 days are included, adhering to the prevalent U.S. protocol that typically allows for medication dispensation for a duration not exceeding three months. 
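As a small illustration of the DICOM data model described above, the following is a minimal sketch using the pydicom library; the file path and the particular attributes shown are assumptions for the example and are not taken from the article, and which elements are present depends on the modality and the file.

# Minimal sketch: read a DICOM file and inspect its attributes with pydicom.
# "study_0001.dcm" is a hypothetical example path, not a file from the article.
import pydicom

ds = pydicom.dcmread("study_0001.dcm")

# A DICOM data set bundles patient identifiers and acquisition metadata
# with the pixel data in a single object (assuming these elements exist in the file).
print(ds.PatientID, ds.Modality, ds.StudyDate)

# Pixel data may hold a single image or a multi-frame stack (e.g. a cine loop);
# pixel_array returns a numpy array of shape (rows, cols) or (frames, rows, cols).
pixels = ds.pixel_array
print(pixels.shape, pixels.dtype)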
International organization for standardization (ISO) standards A Quality Management System (QMS) is an integrative construct that includes the organizational architecture, the allocation of resources, the expertise of personnel, and the repository of documents and procedures that collectively contribute to the assurance and enhancement of quality in an entity's offerings. It delineates a suite of systematically orchestrated actions essential for governing and optimizing quality parameters. The ISO 9000 suite emerges as the preeminent and universally endorsed schema for QMS implementations, whereas the ISO 15189 standard provides a specialized framework designed expressly for the exigencies of clinical laboratory settings. Artificial intelligence in imaging informatics A systematic review critically assessed the design, reporting standards, risk of bias, and validity of claims within studies that compare the efficacy of diagnostic deep learning algorithms in medical imaging against the expertise of clinicians. Conducted using data from prominent databases spanning from 2010 to June 2019, the review specifically targeted studies involving convolutional neural networks (CNNs)—notable for their capacity to autonomously discern crucial features for image classification within medical contexts. The investigation uncovered a notable deficiency in randomized clinical trials concerning this subject, identifying only ten such studies, of which merely two were published, exhibiting low risk of bias and commendable adherence to reporting protocols. Among the 81 non-randomized studies located, a minority were prospective or validated in practical clinical settings, with the majority presenting a high risk of bias, substandard compliance with reporting norms, and a pronounced lack of accessibility to data and code. This review underscores the imperative for an augmentation in the number of prospective studies and randomized trials, advocating for diminished bias, amplified clinical pertinence, enhanced transparency, and tempered conclusions in the burgeoning field of applying deep learning to medical imaging. The exponential growth in digital data alongside enhanced computing capabilities has markedly accelerated advancements in artificial intelligence (AI), which are now progressively being incorporated into healthcare. These AI applications aim to refine diagnosis, treatment, and prognosis through sophisticated classification and prediction models. Nevertheless, the evolution of these technologies is impeded by a lack of rigorous reporting standards relating to data sourcing, model architecture, and the methodologies employed in model evaluation and validation. In response, we propose MINIMAR (Minimum Information for Medical AI Reporting), an initiative designed to establish critical parameters for understanding AI-driven predictions, the demographics targeted, inherent biases, and the ability to generalize these technologies. We urge the adoption of standardized protocols to ensure that AI implementations in healthcare are reported with accuracy and responsibility, facilitating the development and deployment of associated clinical decision-support tools while simultaneously addressing critical concerns regarding precision and bias. 
As a foundational requisite, the proposed standard ought to fulfill several essential criteria: Firstly, it should encompass comprehensive details concerning the population from which the training data are derived, delineating the sources of data and the methods employed for cohort selection. Secondly, the demographics of the training data should be explicitly documented to facilitate a substantive comparison with the demographic characteristics of the population on which the model is intended to operate. Thirdly, there should be a thorough disclosure of the model’s architecture and its development process to allow for a clear interpretation of the model's intended purpose, comparison with analogous models, and to enable exact replication. Fourthly, the process of model evaluation, optimization, and validation must be transparently reported to elucidate the means by which local model optimization is attained and to support replication and the sharing of resources. Evaluation of artificial intelligence in imaging informatics Advantages Improved Diagnostic Accuracy: Artificial intelligence, particularly through the use of convolutional neural networks (CNNs), has transformed medical imaging by significantly enhancing the accuracy of diagnostics. These technologies excel at autonomously identifying pertinent features from imaging data, thereby augmenting diagnostic, prognostic, and therapeutic strategies. Operational Efficiency: AI's capability to swiftly analyze extensive imaging datasets exceeds human capacity and offers the potential to decrease the interval between imaging and diagnosis, ultimately benefiting patient care. Consistency and Replicability: Initiatives such as MINIMAR are crucial as they promote standardized reporting and deployment of AI in healthcare, thereby improving the consistency and replicability of AI-driven diagnostic tools across various clinical environments. Disadvantages Inadequate Clinical Validation: A significant gap in clinical validation for AI tools is highlighted by the limited number of randomized clinical trials that compare the performance of AI systems directly with human clinicians, where many studies show high risk of bias and poor adherence to established reporting standards. Accessibility of Resources: The prevalent issue of limited access to the datasets and algorithms used in AI research impedes the ability of the broader scientific community to validate, replicate, and innovate upon existing studies. Transparency and Ethical Concerns: AI development in medical imaging faces challenges in transparency regarding how models are built, trained, and validated. Additionally, there is a pressing concern about the potential for these models to propagate existing biases or introduce new biases if not properly checked. Recommendations for future development Expansion of Rigorous Trials: The field requires a substantial increase in prospective, well-designed randomized trials to thoroughly assess and validate AI applications in clinical settings. Standardization of Reporting: Implementing comprehensive reporting standards as proposed by initiatives like MINIMAR will address transparency issues, reduce biases, and enhance the generalizability of AI applications, ensuring they meet rigorous scientific and ethical standards. 
Promotion of Open Data Practices: Encouraging more open access to AI datasets and modeling code will foster a collaborative environment that enhances the scrutiny, replication, and advancement of AI technologies, thereby solidifying their role in healthcare. In summary, while AI offers significant opportunities for advancing imaging informatics, leveraging these opportunities to their fullest extent necessitates stringent validation, adherence to robust reporting frameworks, and an overarching commitment to addressing ethical considerations. These steps are pivotal in ensuring that AI-driven tools achieve their promise of enhancing efficiency and effectiveness in medical diagnostics. Areas of interest Key areas relevant to Imaging informatics include: Picture archiving and communication system (PACS) and component systems Imaging informatics for the enterprise Image-enabled electronic medical records Radiology Information Systems (RIS) and Hospital Information Systems (HIS) Digital image acquisition Image processing and enhancement Radiomics Image data compression 3D visualization and multimedia Speech recognition Computer-aided diagnosis (CAD) Imaging facilities design Imaging vocabularies and ontologies Data mining from medical images databases Transforming the Radiological Interpretation Process (TRIP) DICOM, HL7, FHIR and other standards Workflow and process modeling and process simulation Quality assurance Archive integrity and security Teleradiology Radiology informatics education Digital imaging Applications Imaging Informatics has quite a few applications within the medical field. Radiology Imaging Informatics is most prominent within the field of radiology. With AI, radiologists can use Imaging Informatics to ease their workload and save time while analyzing images. A study published in "Current Medical Imaging" discovered that in CT imaging assisted by AI, the reading time to detect lung nodules and pleural effusions was reduced by more than 44% for radiologists. Cardiology Imaging informatics within Cardiology aids in the molecular phenotyping of CV (cardiovascular) diseases and unification of CV knowledge. This means that data extraction, imaging, and machine-learning analysis of these data and images allow researchers to categorize diseases based on the characteristics or features discovered. With this classification, researchers are then able to unify this CV information into one platform for continued analysis and information retrieval. Pathology Imaging informatics in pathology as a whole allows for a wide range of disease detection and analysis. The most prominent use in pathology is with the detection and analysis of different forms of cancer. Diagnosing cancer manually is a painstaking and subjective process which includes examining what could be millions of cells. Through various clinical decision support systems (CDSS), professionals can ease the manual labor of tissue region selection, using Whole-Slide Imaging (WSI) tools to maximize the information analyzed. Several predictive models aim to identify regions of interest within WSI and require training before use. Unsupervised models are being introduced, but are currently less prominent. One example of an unsupervised model in use detects tissue folds by clustering the pixels of an image according to the difference between the saturation and intensity values of every pixel. Due to being an unsupervised model, this method has some limitations. 
These limitations are low sensitivity for different types of tissue folds within an image and low specificity for images without tissue folds. Training In the US and some other countries, radiologists who wish to pursue sub-specialty training in this field can undergo fellowship training in imaging informatics. Medical Imaging Informatics Fellowships are undertaken after completion of Board Certification in Diagnostic Radiology, and may be pursued concurrently with other sub-specialty radiology fellowships. The American Board of Imaging Informatics (ABII) also administers a certification examination for Imaging Informatics Professionals. PARCA (PACS Administrators Registry and Certification Association) certifications also exist for imaging informatics professionals. The American Board of Preventive Medicine (ABPM) offers a certification examination for Clinical Informatics for physicians who have primary board certification with the American Board of Medical Specialties, a medical license and a medical degree. There are two pathways to be eligible to sit for the examination: the Practice Pathway (open through 2022), for those who have not completed ACGME-accredited fellowship training in Clinical Informatics, and the ACGME-Accredited Fellowship Pathway of at least 24 months in duration. Recent innovations Integration of DICOM standards (late 1990s to early 2000s) The expansion of DICOM standards facilitated the widespread adoption of Picture Archiving and Communication Systems (PACS), marking a milestone in the digital transformation of imaging informatics. This standardization, which began to take hold in the late 1990s and was established by the early 2000s, has enhanced the ability to store, retrieve, and share medical images across different systems, improving the efficiency of medical imaging practices. Structured and automated reporting (early 2010s) The adoption of structured reporting aimed to standardize reports to be concise and uniform, influencing patient care. The introduction of BI-RADS (Breast Imaging–Reporting and Data System) is a notable example, which has led to improved consistency across mammography reports. This milestone spans several years as these systems were refined and more widely adopted throughout the early 2010s. Advancements in AI and deep learning (2012) The realization that graphics processing units (GPUs) could be used to accelerate neural networks occurred around 2012. This advancement led to the rapid development of deep learning techniques, speeding up tasks like image segmentation, feature recognition, and algorithm creation from large datasets of annotated images. This era of AI has enabled high-performance algorithms capable of assisting in hundreds of diagnostic tasks. Rise of radiomics (late 2010s) The field of radiomics, which involves extracting quantitative features from medical images that are invisible to the human eye, saw significant growth towards the late 2010s. This approach has enabled a deeper analysis of imaging data, which can be correlated with genomic patterns and other medical data to enhance diagnostic and predictive accuracy. Photon-counting CT detectors (2022) The development and FDA clearance of photon-counting detectors (PCD) for computed tomography (CT) scans in 2022 was an important innovation. These detectors offer a more efficient process for converting X-rays to electrical signals, allowing for better material differentiation and potentially reducing the radiation dose for patients. 
The image to the right shows two scans of the same brain using old and new CT technology respectively. Current research and future directions Current research in imaging informatics is primarily focused on the integration and advancement of artificial intelligence (AI) and machine learning (ML) within medical imaging technologies. Efforts are concentrated on enhancing diagnostic precision, improving predictive analytics, and automating image analysis processes. Deep learning, a subset of ML, is particularly pivotal in transforming radiological imaging, with algorithms increasingly being developed for tasks such as tumor detection, organ segmentation, and anomaly identification. These advancements not only aim to increase the efficiency and accuracy of diagnoses but also strive to reduce the workload on radiologists by automating routine tasks. Looking ahead, the future directions of imaging informatics are expected to further embrace interdisciplinary approaches, incorporating genetics, pathology, and data from wearable devices to offer more holistic views of patient health. The concept of "radiogenomics," linking imaging features with genomic data, is an area of growing interest, potentially leading to more personalized and precise medical treatments. Additionally, the ongoing development of interoperability standards and secure data exchange protocols will be crucial in enabling the seamless integration of imaging data across different healthcare platforms, enhancing collaborative research and clinical practice globally. Challenges in imaging informatics There are several challenges in the field of Imaging Informatics: Data Management: The sheer volume of data generated from large numbers of high-quality images poses storage and efficiency issues. Efficient management, storage, and retrieval of these images are critical. This is a challenge in terms of infrastructure and development of systems capable of handling and processing large datasets efficiently. Integration: Healthcare is a field that is very slow to adopt change. This is because all systems must be thoroughly tested and must work in tandem with existing systems without any issues. Security: Personal security and safe data management are always a concern. This concern is elevated in the field of healthcare since the standards and regulations for security are much higher. Medical imaging often involves sharing sensitive patient data across networks, so robust security measures are essential to protect against data breaches and ensure privacy compliance. This includes secure transmission, encryption of data at rest, and rigorous access controls. Integration of Artificial Intelligence: While AI offers significant potential to enhance diagnostic accuracy and efficiency in imaging, its integration into clinical workflows is fraught with challenges. These include the need for high-quality, annotated datasets for training AI models, the risk of algorithmic bias, and the black-box nature of some AI systems which can obscure how decisions are made. There is also skepticism among healthcare professionals regarding the reliability and accuracy of AI, which can hinder its adoption. Ethical and Legal Issues: The deployment of advanced imaging technologies raises ethical questions about the extent to which AI should be involved in patient diagnosis and the potential for AI to replace human radiologists. Legal implications, particularly concerning malpractice and liability when AI is used, are as yet unresolved. 
These issues necessitate clear guidelines and robust ethical frameworks to govern the use of AI in medical imaging. Addressing these challenges requires a coordinated effort among technology developers, healthcare providers, regulatory bodies, and other stakeholders. Advances in technology must be balanced with considerations of practicality, ethics, and equity to ensure that imaging informatics can fulfill its promise to enhance patient care and treatment outcomes. Technological advances Software innovations Recent years have seen significant advancements in software technologies relevant to imaging informatics. One notable development is the integration of machine learning algorithms into imaging software, enabling automated analysis and interpretation of medical images. For instance, Rajpurkar et al. (2017) demonstrated the effectiveness of deep learning algorithms in pneumonia detection on chest X-rays, showcasing the potential of machine learning in medical imaging analysis. These algorithms have shown promising results in tasks such as lesion detection, disease classification, and treatment response assessment. Moreover, the implementation of natural language processing (NLP) techniques has facilitated the extraction of valuable insights from unstructured radiology reports, enhancing the efficiency of data analysis and decision-making processes. Hardware developments Advances in hardware technology have also played a pivotal role in shaping the landscape of imaging informatics. The evolution of imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), has led to improvements in image resolution, acquisition speed, and diagnostic accuracy. Additionally, the miniaturization of imaging devices has enabled point-of-care imaging, allowing for real-time assessment of patients in various clinical settings. For example, the development of handheld ultrasound devices has revolutionized point-of-care imaging by providing clinicians with portable and easy-to-use tools for bedside examinations (Smith, 2018). The rise of wearable devices and mobile health applications has further expanded the scope of imaging informatics, facilitating remote imaging and patient monitoring using sensors and cameras. Methodological advancements Along with technological innovations, methodological advancements have expanded the capabilities of imaging informatics. One development is the integration of multimodal imaging techniques, which combine data from multiple imaging modalities to provide complementary information about anatomical and physiological structures. For instance, recent studies have demonstrated the effectiveness of combining MRI, CT, and ultrasound data for improved diagnosis and treatment planning in oncology patients (Gupta et al., 2020). By fusing data from these sources, clinicians can obtain a more comprehensive understanding of a patient's condition, leading to more accurate diagnoses and personalized treatment plans. References External links The Society for Imaging Informatics in Medicine American Board of Imaging Informatics Bioinformatics
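To make the machine learning discussion above concrete, the following is a minimal sketch of a convolutional image classifier in PyTorch; the architecture, single-channel 224x224 input, and two output classes are illustrative assumptions and do not reproduce any model cited in the article.

# Minimal CNN classifier sketch for single-channel medical images (illustrative only).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution + pooling stages extract spatial features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 224 -> 112 -> 56 after two pooling steps, so 32 * 56 * 56 features.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # extract spatial features
        x = torch.flatten(x, 1)    # flatten to (batch, features)
        return self.classifier(x)  # class scores (logits)

model = TinyCNN()
logits = model(torch.randn(4, 1, 224, 224))  # a batch of 4 dummy images
print(logits.shape)                           # torch.Size([4, 2])

In practice such a network would be trained on labeled studies (for example, with a cross-entropy loss) and validated prospectively, which is precisely the step the reporting and validation concerns above address.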
Imaging informatics
[ "Engineering", "Biology" ]
5,393
[ "Bioinformatics", "Biological engineering" ]
14,612,089
https://en.wikipedia.org/wiki/Desertec
DESERTEC is a non-profit foundation that focuses on the production of renewable energy in desert regions. The project aims to create a global renewable energy plan based on the concept of harnessing sustainable power from sites where renewable sources of energy are more abundant and transferring it through high-voltage direct current transmission to consumption centers. The foundation also works on concepts involving green hydrogen. Multiple types of renewable energy sources are envisioned, but their plan is centered around the natural climate of the deserts. The Desertec Industrial Initiative evolved in several steps. The Foundation's first idea was to focus on the transmission of renewable power from the MENA region to Europe, while the next one focused on meeting the domestic demand. The project failed twice due to problems with power transmission and cost-inefficiency. The initiative was revived in 2020 with a focus on green hydrogen, catering to both domestic demand and exports to foreign markets. Organizations, milestones, and activities DESERTEC was developed by the Trans-Mediterranean Renewable Energy Cooperation (TREC), a voluntary organisation founded in 2003 by the Club of Rome and the National Energy Research Center Jordan, made up of scientists and experts from across Europe, the Middle East and North Africa (EU-MENA). It is from this network that the DESERTEC Foundation later emerged as a non-profit organisation and started to promote its solutions around the world. Founding members of the foundation are the German Association of the Club of Rome, members of the network of scientists TREC as well as committed private supporters and long-time promoters of the DESERTEC idea. In 2009, the DESERTEC Foundation founded the Munich-based industrial initiative together with partners from the industrial and finance sectors. It aims to accelerate the implementation of the DESERTEC Concept in the focus region EU-MENA. Scientific studies done by the German Aerospace Center (DLR) between 2004 and 2007 demonstrated that the desert sun could meet rising power demand in the MENA region while also helping to power Europe, reduce carbon emissions across the EU-MENA region and power desalination plants to provide freshwater to the MENA region. Dii published a further study called Desert Power 2050 in June 2012. It found that the MENA region would be able to meet its needs for power with renewable energy, while exporting its excess power to create an export industry with an annual volume of more than €60 billion. Meanwhile, by importing desert power, Europe could save around €30 per megawatt-hour. By taking into account land and water use, DESERTEC intends to offer an integrated and comprehensive solution to food and water shortages. TREC The DESERTEC concept originated from Dr Gerhard Knies, a German particle physicist and founder of the Trans-Mediterranean Renewable Energy Cooperation (TREC) network of researchers. In 1986, in the wake of the Chernobyl nuclear accident, he was searching for a potential alternative source of clean energy and arrived at a conclusion: in six hours, the world's deserts receive more energy from the sun than humankind consumes in a year. The DESERTEC concept was developed further by TREC – an international network of scientists, experts and politicians from the field of renewable energies – founded in 2003 by the Club of Rome and the National Energy Research Center Jordan. One of the most famous members was Prince Hassan bin Talal of Jordan. 
In 2009, TREC evolved into the non-profit DESERTEC Foundation. DESERTEC Foundation The DESERTEC Foundation was founded on 20 January 2009 with the aim of promoting the implementation of the DESERTEC Concept for clean power from deserts all over the world. It is a non-profit organisation based in Hamburg. The founding members were the German Association of the Club of Rome, members of the TREC network of scientists as well as committed private supporters and long-time promoters of the DESERTEC idea. The foundation works to accelerate the implementation of the DESERTEC Concept by: Supporting knowledge transfer & scientific co-operation Fostering exchange & co-operation with the private sector Promoting the establishment of the necessary framework conditions: Cooperation with JREF in Asia: In March 2012, a year after the nuclear disaster in Fukushima, the DESERTEC Foundation and the Japan Renewable Energy Foundation (JREF) signed a memorandum of understanding. The aim is to accelerate the deployment of renewable energy in Asia to provide secure and sustainable alternatives to fossil and nuclear power by implementing the DESERTEC Concept in Greater East Asia (Asia Super Grid Initiative). Evaluating and initiating projects that could serve as models Informing about DESERTEC Dii GmbH To help accelerate the implementation of the DESERTEC idea in EU-MENA, the non-profit DESERTEC Foundation and a group of 12 European companies led by Munich Re founded an industrial initiative called Dii GmbH in Munich on 30 October 2009. The other companies included Deutsche Bank, E.ON, RWE and Abengoa. Like the DESERTEC Foundation, Dii GmbH did not intend to build power plants itself. Instead it focused on four core objectives in EU-MENA: Development of long term perspectives for the period up to 2050 providing investment and financing guidance Carrying out specific in-depth studies Development of a framework for feasible investments into renewable energy and interconnected grids in EU-MENA Origination of reference projects to prove feasibility Dii GmbH aimed to create a positive investment climate for renewable energies and an interconnected power grid in North Africa and the Middle East by encouraging the necessary technological, economic, political and market frameworks. This included the development of a long-term implementation perspective called Desert Power 2050 with guidance on investment and funding. Dii GmbH has initiated selected reference projects to demonstrate overall feasibility and reduce overall system costs. On 24 November 2011, a memorandum of understanding (MoU) was signed between the Medgrid consortium and Dii to study, design and promote an interconnected electrical grid linking DESERTEC and the Medgrid projects. The Medgrid together with DESERTEC would serve as the backbone of the European super grid and the benefits of investing in HVDC technology are being assessed to reach the final goal – the supersmart grid. The activities of Dii and Medgrid were covered by the Mediterranean Solar Plan (MSP), a political initiative within the framework of the Union for the Mediterranean (UfM). Consortium The company was formed by the DESERTEC foundation and a consortium of worldwide companies. As of March 2014, Dii consisted of 20 shareholders (listed below) and 17 associate partners. 
ABB Abengoa Solar ACWA Power Cevital Deutsche Bank Enel Green Power E.ON First Solar Flagsol HSH Nordbank Munich Re Nareva Red Eléctrica de España RWE Avancis Schott Solar Terna Terna Energy SA UniCredit State Grid Corporation of China The Managing Director of Dii GmbH has been Paul van Son, a senior international energy manager. At the end of 2014, most shareholders left Dii, which has been described both as a "failure" and as a reorientation in project objectives. RWE, State Grid Corporation of China, ACWA Power and a number of partner companies stayed on board to drive the new mission of Dii: "To facilitate the rapid deployment of utility-scale renewable energy projects in desert areas, and to integrate them in the interconnected power systems" Concept details Description DESERTEC is a global renewable energy solution based on harnessing sustainable power from the sites where renewable sources of energy are at their most abundant. These sites can be used thanks to low-loss High-Voltage Direct Current transmission. All kinds of renewables will be used in the DESERTEC Concept, but the sun-rich deserts of the world play a special role. The original and first region for the assessment and application of this concept is the EU-MENA region (European Union, Middle East and Northern Africa). The DESERTEC organisations promote the generation of electricity in North Africa, the Middle East and Europe using renewable sources such as solar power plants and wind parks, and the development of a Euro-Mediterranean electricity network, primarily made up of high voltage direct current (HVDC) transmission cables. Despite its name, DESERTEC's proposal would see most of the power plants located not in the Sahara Desert itself but rather in the surrounding areas, in the more accessible North and South steppes and woodlands, as well as the relatively moist Atlantic Coastal Desert. Under the DESERTEC proposal, concentrating solar power systems, photovoltaic systems and wind parks would be spread over the wide desert regions in North Africa like the Sahara Desert and all its subdivisions. The generated electricity would be transmitted to European and African countries by a super grid of high-voltage direct current cables. It would provide a considerable part of the electricity demand of the MENA countries and furthermore provide continental Europe with 15% of its electricity needs. Exported desert power would complement Europe's transition to renewables which would be based primarily on harnessing domestic sources of energy that would increase its energy independence. According to a scenario by the German Aerospace Center (DLR), by 2050, investments into solar plants and transmission lines would total €400 billion. An exact proposal on how to realise this scenario, including technical and financial requirements, was to be designed by 2012/2013 (see Desert Power 2050). In March 2012, the DESERTEC Foundation started working in a further focus region. A year after the nuclear disaster in Fukushima, the DESERTEC Foundation and the Japan Renewable Energy Foundation (JREF) signed an MoU. They will exchange knowledge and know-how, and coordinate their work together to develop suitable framework conditions for the deployment of renewables and to establish transnational cooperation in Greater East Asia. The aim is to accelerate the deployment of renewable energy in Asia to provide secure and sustainable alternatives to fossil and nuclear power. 
As a part of its mission, JREF promotes the Asia Super Grid Initiative to facilitate an electricity system based fully on renewable energy. The DESERTEC Foundation sees such a grid as an important step towards the implementation of DESERTEC in Greater East Asia and has already conducted a feasibility study on potential grid corridors to make best use of the region's desert sun. Studies about DESERTEC DLR studies The DESERTEC Concept was developed by an international network of politicians, academics and economists, called TREC. The research institutes for renewable sources of the governments of Morocco (CDER), Algeria (NEAL), Libya (CSES), Egypt (NREA), Jordan (NERC) and Yemen (Universities of Sana'a and Aden) as well as the German Aerospace Center (DLR) made significant contributions towards the development of the DESERTEC Concept. The basic studies relating to DESERTEC were led by DLR scientist Dr. Franz Trieb working for the Institute for Technical Thermodynamics at the DLR. The three studies were funded by the German Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety (BMU). The studies, conducted between 2004 and 2007, evaluated the following, as shown in the table below. The studies concluded that the extremely high solar radiation in the deserts of North Africa and the Middle East outweighs the 10–15% transmission losses between the desert regions and Europe. This means that solar thermal power plants in the desert regions are more economical than the same kinds of plants in southern Europe. The German Aerospace Center has calculated that if solar thermal power plants were to be constructed in large numbers in the coming years, the estimated cost of electricity would come down from 0.09–0.22 euro/kWh to about 0.04–0.05 euro/kWh. The Sahara Desert was chosen as an ideal location for solar farms as it is exposed to bright sunshine nearly all the time, roughly between 80% and 97% of the daylight hours in the best cases. This is the sunniest year-round area on the planet. In the world's largest hot desert, there is an extremely vast area, covering almost the whole desert, that receives more than 3,600 h of yearly sunshine. There is also a very large area receiving in excess of 4,000 h of sunshine annually. The highest solar radiation received on the planet is in the Sahara Desert, under the Tropic of Cancer. This results from a general, strong lack of cloud cover year-round and a geographical position under the tropics. The annual average insolation, which represents the total amount of solar radiation energy received over a given area and a given period, is about 2,500 kWh/(m2 year) over the region and this number can soar up to almost 3,000 kWh/(m2 year) in the best cases. The weather features of the Sahara Desert, especially the insolation, have a pronounced nature. The annual electricity production could reach a maximum of 1,300,000 TWh in this sun-drenched area if the whole desert were covered in solar panels. The desert is also extremely vast, covering some 9,000,000 km2 (3,474,920 sq mi), almost as large as China or the United States, and is sparsely populated, making it possible to set up large solar farms without a negative impact on the inhabitants of the region. Lastly, sand deserts can provide silicon, a raw material that is essential in the production of solar panels. 
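As a rough arithmetic check of the figures quoted above (an illustrative back-of-the-envelope calculation, not part of the original text; the conversion efficiency is inferred, not stated):

E_{\text{incident}} \approx 9\times 10^{12}\,\text{m}^2 \times 2{,}500\,\frac{\text{kWh}}{\text{m}^2\,\text{yr}} = 2.25\times 10^{16}\,\frac{\text{kWh}}{\text{yr}} \approx 2.25\times 10^{7}\,\frac{\text{TWh}}{\text{yr}}, \qquad \frac{1.3\times 10^{6}\,\text{TWh}}{2.25\times 10^{7}\,\text{TWh}} \approx 5.8\%,

so the quoted maximum of 1,300,000 TWh per year corresponds to converting roughly 6% of the incident solar energy over the full desert area into electricity.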
The great African desert is relatively cloud-free all year long, but the harsh desert climate also has some negative features, such as extreme heat and dust- or sand-laden winds, which frequently blow over the desert and can result in severe dust storms or sandstorms. Both phenomena reduce solar electricity output and the efficiency of the solar panels. Desert Power 2050 Dii announced it would introduce a roll-out plan in late 2012 that would include concrete recommendations on how to enable investments in renewable energy and interconnected power grids. Dii claims to work with all key stakeholders from the international scientific and business communities as well as policy-makers and civil society to enable two or three concrete reference projects to demonstrate the feasibility of the long-term vision. Dii developed a strategic framework for a fully integrated and decarbonized power system based on renewable energies for the entire North Africa, Middle East, and Europe (EUMENA) region in 2050. To this end, Dii investigated, from the viewpoint of technology and geography, the optimal mix of renewable energies to provide the EUMENA region with sustainable energy. In July 2012 Dii presented the first part of its study "Desert Power 2050 – Perspectives on a Sustainable Power System for EUMENA". Key Findings Desert Power 2050 demonstrates that the abundance of sun and wind in the EUMENA region will enable the creation of a joint power network drawing more than 90 percent of its electricity from renewables. According to the study, such a joint power network involving North Africa, the Middle East, and Europe (EUMENA) offers clear benefits to all involved. The nations of the Middle East and North Africa (MENA) could meet their expanding needs for power with renewable energy, while developing an export industry from their excess power, which could reach an annual volume worth more than 60 billion euros, according to the study results. By importing up to 20 percent of its power from the deserts, Europe could save up to 30 euros for each megawatt-hour of desert power. The north and south would become the powerhouses of this joint network, supported by wind and hydropower in Scandinavia, as well as wind and solar energy in the MENA region. Supply and demand would complement one another, both regionally and seasonally, according to the findings of Desert Power 2050. With its constant supply of wind and solar energy throughout the year, the MENA region can cover Europe's energy needs without the latter having to build costly excess capacities. A further benefit of the power network is the enhanced security of supply to all nations concerned. A renewables-based network would lead to mutual reliance among the countries involved, complemented by inexpensive imports from the south and the north. Methodology Desert Power 2050 presents the full perspective of the EUMENA region, which includes, for instance, the growing consumption of power in the MENA states. The power requirements of the MENA states are likely to more than quadruple by 2050, totalling more than 3000 terawatt hours. Unlike in Europe, the population will also grow considerably by the middle of the century, thus heightening the demand for new jobs. Analysing the design of a power system built to include more than 90% renewables 40 years into the future is necessarily subject to major uncertainties on a range of assumptions.
To address these uncertainties, Dii analysed so-called sensitivities, or perspectives, to show how the results react to changed parameters. Dii has analysed a total of 18 perspectives on the EUMENA power supply in 2050. They cover a wide range of major impact factors on the attractiveness of power system integration. The main message of the study: grid integration across the Mediterranean is valuable under all foreseeable circumstances. Second Phase Desert energy could be a stimulus for growth and make an important contribution when it comes to coping with the social and economic challenges in North Africa and the Middle East. Dii announced that a second phase of Desert Power 2050, Getting Started, will examine this topic in greater depth in the next few months, with discussions including political, scientific and industrial stakeholders. The objective is to formulate recommendations for the regulatory steps required in the years to come. Benefits More energy falls on the world's deserts in six hours than the world consumes in a year, and the Sahara Desert is virtually uninhabited and close to Europe. Supporters say that the project will keep Europe "at the forefront of the fight against climate change and help North African and European economies to grow within greenhouse gas emission limits". DESERTEC officials say the project could one day deliver 15 percent of Europe's electricity and a considerable part of MENA's electricity demand. According to the DESERTEC Foundation, the project has strong job creation potential and could improve stability in the region. According to a report by the Wuppertal Institute for Climate, Environment and Energy and the Club of Rome, the project could create 240,000 German jobs and generate €2 trillion worth of electricity by 2050. Technology Concentrated solar power Concentrated solar power (also called concentrating solar power and CSP) systems use mirrors or lenses to concentrate a large area of sunlight, or solar thermal energy, onto a small area. Electrical power is produced when the concentrated light is converted to heat, which drives a heat engine (usually a steam turbine) connected to an electrical power generator. Molten salt can be employed as a thermal energy storage method to retain thermal energy collected by a solar tower or solar trough so that it can be used to generate electricity in bad weather or at night. Since solar fields feed their heat into a conventional generation unit with a steam turbine, they can readily be combined with fossil-fuel plants in hybrid power stations. This hybridisation secures the energy supply in unfavourable weather and at night without the need to ramp up costly compensating plants. A technical challenge is the cooling that is necessary for every thermal power system. Dii is therefore reliant either on an adequate water supply, coastal facilities or improved cooling technology. Photovoltaics Dii also considers photovoltaics (PV) a technology suitable for desert power plants. Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors. Photovoltaic power generation employs solar panels composed of a number of solar cells containing a photovoltaic material. Materials presently used for photovoltaics include monocrystalline silicon, polycrystalline silicon, amorphous silicon, cadmium telluride, and copper indium gallium selenide/sulfide.
Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics has declined steadily since the first solar cells were manufactured. In 2010, First Solar, a producer of thin film solar panels, joined Dii as associated partner. The US-based company already has experience with very large PV installations, having constructed the 550 megawatt Desert Sunlight Solar Farm and the Topaz Solar Farm in California, which are the two biggest PV installations in the world. Wind energy As parts of the desert regions in the Middle East and North Africa (MENA) also have high wind potential, Dii is examining which geographic regions are suitable for the installation of wind farms. Wind turbines produce electricity when the wind turns the blades, which spin a shaft connected to a generator. The Sahara Desert is one of the windiest areas on the planet, especially on the western coast, where the Atlantic coastal desert lies along Western Sahara and Mauritania. The annual average wind speed at the ground greatly exceeds 5 m/s in most of the desert, and even approaches 8 m/s or 9 m/s along the western ocean coast. Wind speed also increases with height. The regularity and constancy of winds in arid regions are further major assets for wind energy. The winds blow nearly constantly over the desert, and there are generally no windless days throughout the year. Therefore, the desert of North Africa is also an ideal location to install large-scale wind parks and wind turbines with very good productivity. High-voltage direct current (HVDC) To export renewable energy produced in the MENA desert region, a high-voltage direct current (HVDC) electric power transmission system is needed. High-voltage DC (HVDC) technology is a proven and economical method of power transmission over very long distances and also a trusted method to connect asynchronous grids or grids of different frequencies. With HVDC, energy can also be transported in both directions. For long-distance transmission, HVDC suffers lower electrical losses than alternating current (AC) transmission. Because of the higher solar radiation in MENA, energy production there, even including transmission losses, remains advantageous compared with production in southern Europe. Very long distance projects have already been realised with technological cooperation from ABB and Siemens – both shareholders of Dii – namely the 800 kV HVDC Xiangjiaba–Shanghai transmission system, which was commissioned by State Grid Corporation of China (SGCC) in June 2010. The HVDC link is the most powerful and longest transmission of its kind implemented anywhere in the world and, at the time of commissioning, transmitted 6,400 MW of power over a distance of nearly 2,000 kilometres. This is longer than would be needed to link MENA and Europe. Siemens Energy has equipped the sending converter station Fulong for this link with ten DC converter transformers, including five rated at 800 kV. The second HVDC project, also for SGCC and with cooperation from ABB, is a 3,000 MW link over 920 kilometres from Hulunbeir, in Inner Mongolia, to Shenyang in the province of Liaoning in the north-eastern part of China, commissioned in 2010. Another project, scheduled for commissioning in 2014, is the construction of a ±800 kV North-East UHVDC link from the north-eastern and eastern regions of India to the city of Agra, across a distance of 1,728 kilometres.
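As a rough illustration of why such long HVDC links remain attractive, the sketch below applies the loss figure quoted later in this article (about 3% per 1,000 km for HVDC transmission) to links of the scale described above. The additional allowance for the two converter stations is an illustrative assumption, not a figure from the article.

```python
# Rough HVDC transmission-loss estimate.
# The 3% per 1,000 km line-loss figure is the one quoted in this article;
# the 0.75% loss per converter station is an illustrative assumption.
LINE_LOSS_PER_1000_KM = 0.03
CONVERTER_LOSS = 0.0075  # per converter station (assumed)

def delivered_fraction(distance_km: float) -> float:
    """Fraction of sent power delivered over an HVDC link of given length."""
    line = (1 - LINE_LOSS_PER_1000_KM) ** (distance_km / 1000)
    converters = (1 - CONVERTER_LOSS) ** 2   # sending and receiving stations
    return line * converters

sent_mw = 6400  # nameplate of the Xiangjiaba-Shanghai link, used for scale
for km in (920, 2000, 3000):
    frac = delivered_fraction(km)
    print(f"{km:>5} km: {frac:.1%} delivered "
          f"({sent_mw * frac:,.0f} MW of {sent_mw} MW)")
```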
Another project of this type is the Rio Madeira HVDC link in Brazil. Projects The Sahara Desert covers huge parts of Algeria, Chad, Egypt, Libya, Mali, Mauritania, Morocco, Niger, Western Sahara, Sudan and Tunisia. It is one of three distinct physiographic provinces of the African massive physiographic division. The first solar and wind power projects in North Africa have already begun. In 2011, Algeria initiated a unique hybrid power generation project, the Hassi R'Mel integrated solar combined cycle power station, which combines a 25 MW concentrating solar power array with a 130 MW combined cycle gas turbine plant. Other countries like Morocco have set up ambitious plans for the implementation of renewable energy. The Ouarzazate solar power station in Morocco, for example, with a capacity of 500 MW, will be one of the largest concentrated solar power plants in the world. In 2011, the DESERTEC Foundation started to evaluate projects that could serve as models for the implementation of DESERTEC according to its sustainability criteria. The first of these is the TuNur solar power plant in Tunisia, which is planned to have 2 GW of capacity. Expected to create up to 20,000 direct and indirect local jobs, its plants include dry-cooling systems that reduce water usage by up to 90%. Construction is planned to begin in 2014, with power exports to Italy by 2016. Talks with the Moroccan government were successful, and Dii confirmed that its first reference project would be in Morocco. Morocco is especially well suited as a partner in an emerging partnership between Europe and MENA, since a grid connection from Morocco via Gibraltar to Spain already exists. The Moroccan government has also enacted a program to support renewable energies. In June 2011, Dii signed a Memorandum of Understanding with the Moroccan Agency for Solar Energy (MASEN). MASEN will act as a project developer and will be responsible for all important project steps in Morocco. Dii will promote the project and its financing with the European Union in Brussels as well as with national governments. This reference project, with a total capacity of 500 MW, will be a combination of concentrated solar power plants (400 MW) and photovoltaics (100 MW). The first available power from the joint Dii/MASEN project could be fed into the Moroccan and Spanish grids between 2014 and 2016, depending on the selected technology and market conditions. Based on current estimates, the total costs are €2 billion. In April 2010, Dii emphasised that the power plant would not be installed in the region of Western Sahara, which is administered by Morocco. An official spokesperson of Dii made the following confirmation: "Our reference projects will not be located in the region. When looking for project sites, the DII will also take political, ecological or cultural issues into consideration. This procedure is in line with the funding policies of international development banks." In Tunisia, STEG Énergies Renouvelables, a subsidiary of the Tunisian state utility company STEG, and Dii are currently working on a pre-feasibility study. The study focuses on substantial solar and wind energy projects in Tunisia. Research will address the technical and regulatory conditions for the supply of energy in local networks and for the export of power to neighbouring countries as well as Europe. The financing of the project will also be analysed.
Algeria, which offers excellent conditions for renewable energy, is considered a potential location for a further reference project. In December 2011, the Algerian energy supplier Sonelgaz and Dii signed a Memorandum of Understanding on their future collaboration in the presence of EU Energy Commissioner Günther Oettinger and the Algerian Minister for Energy and Mining Youcef Yousfi. The focus of this cooperation will be the strengthening and exchange of technical expertise, joint efforts in market development and the progress of renewable energy in Algeria as well as in foreign countries. Since the Euro-Mediterranean projects Medgrid and DESERTEC both aim to generate solar energy from deserts and complement each other, an MoU was signed on 24 November 2011 between Medgrid and Dii to study, design and promote an interconnected electrical grid linking both projects. The plan is to build five interconnections at a cost of around 5 billion euros ($6.7 billion), including one between Tunisia and Italy. The activities of Dii and Medgrid are covered by the Mediterranean Solar Plan (MSP), a political initiative within the framework of the Union for the Mediterranean (UfM). In March 2012 Dii, Medgrid, Friends of the Supergrid and the Renewables Grid Initiative signed a joint declaration to support the effective and complete integration, in a single electricity market, of renewable energy from both large-scale and decentralised sources, which shall not be played out against each other in Europe and in its neighbouring regions. Obstacles Some experts – such as Professor Tony Day, director of the Centre for Efficient and Renewable Energy in Building at London South Bank University, Henry Wilkinson of Janusian Security Risk Management, and Wolfram Lacher of Control Risks consultancy – are concerned about political obstacles to the project. Generating so much of the electricity consumed in Europe and Africa would create a political dependency on North African countries, which suffered from corruption before the Arab Spring and from a lack of cross-border coordination. Moreover, DESERTEC would require extensive economic and political cooperation between Algeria and Morocco, which is at risk as the border between the two countries is closed due to a disagreement over Western Sahara. Cooperation between the states of Europe and the states of the Middle East and North Africa is also certain to be challenging. Given the large-scale cooperation necessary between the EU and the North African nations, the project may be delayed by bureaucratic red tape and other factors such as expropriation of assets. There are also concerns that the water required by the solar plants, both to clean dust off panels and as turbine coolant, may be detrimental to local populations in terms of the demand it would place on the local water supply. An EU-supported innovation project, however, resulted in the development of a silicone-based film with a nano-dendrite surface structure. The film is fused on top of the solar panels, and the nano-dendrite structure prevents sand, water, salt, bacteria, molds and the like from attaching to the photovoltaic panels. Against this, studies point to the generation of fresh water by the solar thermal plants. Furthermore, no significant amount of water is needed for cleaning and cooling, since alternative technologies can be used (dry cleaning, dry cooling).
However, dry cooling is more expensive, more technologically challenging and less efficient than the water cooling currently planned. Plans for water desalination for cooling purposes are not part of the DESERTEC business plan or cost estimates as proposed. The late Hermann Scheer (Eurosolar) pointed out that the doubled solar radiation in the Sahara cannot be the only criterion, especially given the continuous trade winds there. Transmitting energy over long distances has been criticized, with questions raised over the cost of cabling compared to energy generation, and over electricity losses. However, the study and current operating technology show that electricity losses using high-voltage direct current transmission amount to only 3% per 1,000 km (10% per 3,000 km). Investment may be required within Europe in a "supergrid". In response, one proposal is to cascade power between neighbouring states, so that each state draws on the power generation of its neighbours rather than on distant desert sites. One key question will be the cultural aspect, as Middle Eastern and African nations may need assurance that they will own the project rather than having it imposed from Europe. See also Medgrid European super grid Intermittent energy source List of HVDC projects North Sea Offshore Grid Renewable energy in Morocco Relative cost of electricity generated by different sources Solar energy in Israel Solel SuperSmart Grid Wind power in Morocco SunCable Australia - Singapore References External links DESERTEC Foundation Dii GmbH Proposed electric power transmission systems Renewable energy in the European Union Energy in Africa 2009 establishments in Germany Energy companies of Germany Macro-engineering Foundations based in Germany
Desertec
[ "Engineering" ]
6,439
[ "Macro-engineering" ]
14,612,095
https://en.wikipedia.org/wiki/Adziogol%20Lighthouse
The Adziogol Lighthouse, also known as the Stanislav–Adzhyhol Lighthouse or Stanislav Range Rear light, is one of two vertical lattice hyperboloid structures of steel bars serving as active lighthouses in the Dnieper Estuary, Ukraine. It is located west of the city of Kherson. It is the sixteenth-tallest "traditional lighthouse" in the world as well as the tallest in Ukraine. Location It is located on a concrete pier on a tiny islet in the combined Dnieper-Bug Estuary, which extends eastward into the Dnieper Estuary, a part of the Dnieper River delta, north of the village of Rybalche (Skadovsk Raion) and south of the Cape of Adzhyhol, for which it is named. Together with the Stanislav Range Front Light (Small Adzhyhol Lighthouse), it serves as a range light, guiding ships entering the Dnieper River or the Southern Buh River within the vast Dnieper-Bug Estuary. Details The lighthouse was designed in 1910 and built in 1911 by Vladimir Shukhov. The one-story keeper's house sits inside the base of the tower. The site of the tower is accessible only by boat. The site is open to the public but the tower is closed. On 22 July 2022, the lighthouse was damaged by a Russian missile attack in which three rockets hit it. See also List of lighthouses in Ukraine Thin-shell structure List of hyperboloid structures List of thin shell structures List of tallest lighthouses in the world References Further reading Rainer Graefe: "Vladimir G. Šuchov 1853–1939 – Die Kunst der sparsamen Konstruktion", S. 192, Stuttgart, DVA, 1990. Peter Gössel, Gabriele Leuthäuser, Eva Schickler: "Architecture in the 20th century", Taschen Verlag, 1990; and Kevin Matthews, "The Great Buildings Collection", CD-ROM, Artifice, 2001. Elizabeth Cooper English: "'Arkhitektura i mnimosti': The origins of Soviet avant-garde rationalist architecture in the Russian mystical-philosophical and mathematical intellectual tradition", a dissertation in architecture, 264 p., University of Pennsylvania, 2000. External links Adziogol Lighthouse – video, 2010 Photos of Adzhyhol Lighthouse. Adzhyhol lighthouses and the mouth of Dnieper. VeniVidi.ru Lighthouses completed in 1911 Lattice shell structures by Vladimir Shukhov Hyperboloid structures High-tech architecture Lighthouses in Ukraine Dnieper–Bug estuary Towers in Ukraine Buildings and structures in Kherson Oblast 1911 establishments in the Russian Empire
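To illustrate how a Shukhov-style lattice tower of the kind described above is generated entirely from straight members, the sketch below computes the endpoints of straight steel bars joining a bottom ring to a rotated top ring; every such bar lies on a hyperboloid of one sheet, which is why the lattice narrows to a waist. The radii, height, bar count and twist angle are illustrative assumptions, not the dimensions of the Adziogol tower.

```python
# Illustrative geometry of a hyperboloid lattice ("Shukhov") tower:
# straight bars between a bottom ring and a rotated top ring sweep a
# hyperboloid of one sheet. Dimensions are arbitrary examples, not those
# of the Adziogol Lighthouse.
import math

N_BARS = 24               # bars in one family (a real tower has two mirrored families)
R_BOTTOM = 8.0            # bottom ring radius, metres (assumed)
R_TOP = 4.0               # top ring radius, metres (assumed)
HEIGHT = 60.0             # tower height, metres (assumed)
TWIST = math.radians(80)  # rotation of the top ring relative to the bottom (assumed)

def bar_endpoints(i: int):
    """Endpoints (bottom point, top point) of the i-th straight bar."""
    a = 2 * math.pi * i / N_BARS
    bottom = (R_BOTTOM * math.cos(a), R_BOTTOM * math.sin(a), 0.0)
    top = (R_TOP * math.cos(a + TWIST), R_TOP * math.sin(a + TWIST), HEIGHT)
    return bottom, top

def radius_at(z: float) -> float:
    """Distance of a bar from the tower axis at height z (linear along the bar)."""
    t = z / HEIGHT
    (xb, yb, _), (xt, yt, _) = bar_endpoints(0)
    x = xb + t * (xt - xb)
    y = yb + t * (yt - yb)
    return math.hypot(x, y)

for z in (0, 15, 30, 45, 60):
    print(f"z = {z:4.0f} m  lattice radius ≈ {radius_at(z):.2f} m")
# The radius dips below the top-ring radius partway up: the straight bars
# outline the characteristic waisted hyperboloid profile.
```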
Adziogol Lighthouse
[ "Technology" ]
560
[ "Structural system", "Hyperboloid structures" ]
14,612,385
https://en.wikipedia.org/wiki/Distribution%20law
The distribution law, or Nernst's distribution law, gives a generalisation which governs the distribution of a solute between two immiscible solvents. The law was first stated by Nernst, who studied the distribution of several solutes between different appropriate pairs of solvents. If C1 denotes the concentration of solute X in solvent A and C2 denotes its concentration in solvent B, Nernst's distribution law can be expressed as C1/C2 = Kd, where Kd is called the distribution coefficient or the partition coefficient. The law is only valid if the solute is in the same molecular form in both solvents. If the solute dissociates or associates in one of the solvents, the law is modified as D (distribution factor) = concentration of solute in all forms in solvent 1 / concentration of solute in all forms in solvent 2. Further reading Martin's Physical Pharmacy & Pharmaceutical Sciences, fifth edition, Patrick J. Sinko, Lippincott Williams & Wilkins. References Equilibrium chemistry Walther Nernst
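A minimal numerical sketch of the law: given an assumed partition coefficient, it computes how much solute remains in one solvent after repeated extractions with fresh portions of the other. The Kd value, volumes and initial amount are illustrative assumptions, not data from any particular system.

```python
# Partition of a solute between two immiscible solvents (Nernst distribution law).
# Kd, volumes and the initial amount are illustrative assumptions.
KD = 4.0          # Kd = C_A / C_B at equilibrium (assumed)
V_A = 50.0        # volume of extracting solvent A, mL (assumed)
V_B = 100.0       # volume of solvent B initially holding the solute, mL (assumed)
AMOUNT = 1.0      # grams of solute initially dissolved in B (assumed)

def fraction_left_in_B(kd: float, v_a: float, v_b: float) -> float:
    """Fraction of solute remaining in solvent B after one extraction with A.

    At equilibrium C_A/C_B = kd, with mass balance m = C_A*v_a + C_B*v_b,
    which gives m_B/m = v_b / (v_b + kd*v_a).
    """
    return v_b / (v_b + kd * v_a)

remaining = AMOUNT
for n in range(1, 4):
    remaining *= fraction_left_in_B(KD, V_A, V_B)
    print(f"after extraction {n}: {remaining:.3f} g left in solvent B")
# Several small extractions remove more solute than a single large one using
# the same total volume of solvent A, a standard consequence of the law.
```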
Distribution law
[ "Chemistry" ]
310
[ "Equilibrium chemistry" ]
14,613,226
https://en.wikipedia.org/wiki/NINCDS-ADRDA%20Alzheimer%27s%20Criteria
The NINCDS-ADRDA Alzheimer's Criteria were proposed in 1984 by the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (now known as the Alzheimer's Association) and are among the most used in the diagnosis of Alzheimer's disease (AD). These criteria require that the presence of cognitive impairment and a suspected dementia syndrome be confirmed by neuropsychological testing for a clinical diagnosis of possible or probable AD; a definitive diagnosis additionally requires histopathologic confirmation (microscopic examination of brain tissue). They also specify eight cognitive domains that may be impaired in AD. These criteria have shown good reliability and validity. Criteria Definite Alzheimer's disease: The patient meets the criteria for probable Alzheimer's disease and has histopathologic evidence of AD via autopsy or biopsy. Probable Alzheimer's disease: Dementia has been established by clinical and neuropsychological examination. Cognitive impairments also have to be progressive and present in two or more areas of cognition. The onset of the deficits must have been between the ages of 40 and 90 years, and finally there must be an absence of other diseases capable of producing a dementia syndrome. Possible Alzheimer's disease: There is a dementia syndrome with an atypical onset, presentation or progression and without a known etiology, but no co-morbid diseases capable of producing dementia are believed to be its cause. Unlikely Alzheimer's disease: The patient presents a dementia syndrome with a sudden onset, focal neurologic signs, or gait disturbance early in the course of the illness. Cognitive domains The NINCDS-ADRDA Alzheimer's Criteria specify eight cognitive domains that may be impaired in AD: memory, language, perceptual skills, attention, constructive abilities, orientation, problem solving and functional abilities. Other criteria Similar to the NINCDS-ADRDA Alzheimer's Criteria are the DSM-IV-TR criteria published by the American Psychiatric Association. At the same time, advances in functional neuroimaging techniques such as PET and SPECT, which have already proven useful in differentiating Alzheimer's disease from other possible causes, have led to proposals to revise the NINCDS-ADRDA criteria to take these techniques into account. References Aging-associated diseases Alzheimer's disease
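Purely as an illustration of how the four diagnostic categories above relate to one another, the sketch below encodes them as a simple decision function. The boolean inputs are a coarse simplification of the clinical findings named in the criteria; this is an illustrative encoding, not a diagnostic tool.

```python
# Illustrative encoding of the NINCDS-ADRDA categories as a decision function.
# The boolean flags are a simplification of clinical findings; this is not a
# diagnostic instrument.
def nincds_adrda_category(dementia_confirmed_by_testing: bool,
                          progressive_deficits_in_two_domains: bool,
                          onset_age_40_to_90: bool,
                          other_causes_excluded: bool,
                          histopathologic_confirmation: bool,
                          atypical_onset_or_course: bool,
                          sudden_onset_or_focal_signs: bool) -> str:
    if sudden_onset_or_focal_signs:
        return "unlikely AD"
    probable = (dementia_confirmed_by_testing
                and progressive_deficits_in_two_domains
                and onset_age_40_to_90
                and other_causes_excluded)
    if probable and histopathologic_confirmation:
        return "definite AD"
    if probable:
        return "probable AD"
    if dementia_confirmed_by_testing and atypical_onset_or_course:
        return "possible AD"
    return "criteria not met"

print(nincds_adrda_category(True, True, True, True, False, False, False))
# -> "probable AD"
```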
NINCDS-ADRDA Alzheimer's Criteria
[ "Biology" ]
480
[ "Senescence", "Aging-associated diseases" ]
7,270,080
https://en.wikipedia.org/wiki/Chlorinated%20paraffins
Chlorinated paraffins (CPs) are complex mixtures of polychlorinated n-alkanes (paraffin wax). The chlorination degree of CPs can vary between 30 and 70 wt%. CPs are subdivided according to their carbon chain length into short-chain CPs (SCCPs, C10–13), medium-chain CPs (MCCPs, C14–17) and long-chain CPs (LCCPs, C>17). Depending on chain length and chlorine content, CPs are colorless or yellowish liquids or solids. Production Chlorinated paraffins are synthesized by reaction of chlorine gas with unbranched paraffin fractions (<2 % isoparaffins, <100 ppm aromatics) at a temperature of 80–100 °C. The radical substitution may be promoted by UV light. CxH(2x+2) + y Cl2 → CxH(2x−y+2)Cly + y HCl When the desired degree of chlorination is achieved, residues of hydrochloric acid and chlorine are blown off with nitrogen. Epoxidized vegetable oil, glycidyl ether or organophosphorus compounds may be added to the final product for improved stability at high temperatures. Commercial products have been classified as substances of unknown or variable composition. CPs are complex mixtures of chlorinated n-alkanes containing thousands of homologues and isomers which are not completely separated by standard analytical methods. CPs are produced in Europe, North America, Australia, Brazil, South Africa and Asia. In China, where most of the world production capacity is located, 600,000 tons of chlorinated paraffins were produced in 2007. Production and use volumes of CPs exceeded 1,000,000 tons in 2013. Industrial applications Production of CPs for industrial use started in the 1930s, with global production in 2000 being about 2 million tonnes. Currently, over 200 CP formulations are in use for a wide range of industrial applications, such as flame retardants and plasticisers, as additives in metalworking fluids, and in sealants, paints, adhesives, textiles, leather fat and coatings. Safety Short-chain CPs are classified as persistent, and their physical properties (octanol-water partition coefficient (logKOW) of 4.4–8, depending on the chlorination degree) imply a high potential for bioaccumulation. SCCPs are classified as toxic to aquatic organisms, and carcinogenic to rats and mice. Therefore, it was concluded that SCCPs have PBT and vPvB properties, and they were added to the Candidate List of substances of very high concern for authorisation under the REACH Regulation. SCCPs (average chain length of C12, chlorination degree 60 wt%) were categorised in Group 2B, as possibly carcinogenic to humans, by the International Agency for Research on Cancer (IARC). In 2017, it was agreed to globally ban SCCPs under the Stockholm Convention on Persistent Organic Pollutants, effective December 2018. However, MCCPs are also toxic to the aquatic environment and persistent; MCCP levels in soil, biota and most sediment cores show increasing trends over recent years to decades, and MCCP concentrations in sediment close to local sources exceed toxicity thresholds such as the PNEC. In July 2021, MCCPs were also added to the Candidate List of Substances of Very High Concern (SVHC) under the REACH Regulation. Chlorinated paraffins have been detected in marine life such as cetaceans (whales) and bivalves (molluscs). Of particular concern is fetal accumulation in whales, with the chemicals beginning to build up in the offspring before they are even born. References Sources Further reading European Chemicals Bureau (2000). European Union Risk assessment report Vol.
4: Alkanes, C10-13, chloro, Luxembourg: Office for Official Publications of the European Community. European Chemicals Bureau (2008). European Union Risk assessment report Vol. 81: Alkanes, C10-13, chloro (update), Luxembourg: Office for Official Publications of the European Community. European Chemicals Bureau (2005). European Union Risk assessment report Vol. 58: Alkanes, C14-17, chloro (MCCP), Part I-Environment, Luxembourg: Office for Official Publications of the European Community. European Commission (2011). European Union Risk assessment report: Alkanes, C14-17, chloro; Addendum to the final report (2007) of the risk assessment – Environment part. Luxembourg: Office for Official Publications of the European Community. European Commission (2011). European Union Risk assessment report: Alkanes, C14-17, chloro (MCCP), Part II-Human Health, Luxembourg: Office for Official Publications of the European Community. External links Short Chain Chlorinated Paraffins – Proposal for identification of a substance as a CMR, PBT, vPvB or a substance of an equivalent level of concern Chloroalkanes Flame retardants IARC Group 2B carcinogens Soil contamination
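The 30–70 wt% chlorination degree quoted in the article follows directly from the general formula CxH(2x−y+2)Cly given in the Production section. As a rough illustration, the sketch below computes the weight-percent chlorine for a few chain lengths and chlorine counts; the specific (x, y) combinations are examples chosen here, not commercial product specifications.

```python
# Weight-percent chlorine of a polychlorinated n-alkane CxH(2x-y+2)Cly.
# The (x, y) combinations below are illustrative examples only.
M_C, M_H, M_CL = 12.011, 1.008, 35.453  # atomic masses, g/mol

def chlorine_wt_percent(x: int, y: int) -> float:
    """wt% chlorine in CxH(2x-y+2)Cly."""
    mass = x * M_C + (2 * x - y + 2) * M_H + y * M_CL
    return 100 * y * M_CL / mass

for x, y in [(12, 4), (12, 7), (15, 8), (24, 12)]:
    print(f"C{x}H{2*x - y + 2}Cl{y}: {chlorine_wt_percent(x, y):.0f} wt% Cl")
# For example, a C12 chain carrying 7 chlorine atoms is roughly 60 wt% chlorine,
# in the middle of the 30-70 wt% range quoted for commercial CPs.
```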
Chlorinated paraffins
[ "Chemistry", "Environmental_science" ]
1,092
[ "Environmental chemistry", "Soil contamination" ]
7,270,721
https://en.wikipedia.org/wiki/Biological%20monitoring%20working%20party
The biological monitoring working party (BMWP) is a procedure for measuring water quality using families of macroinvertebrates as biological indicators. The method is based on the principle that different aquatic invertebrates have different tolerances to pollutants. In the case of BMWP, this is based on the sensitivity/tolerance to organic pollution (i.e. nutrient enrichment that can affect the availability of dissolved oxygen). It is important to recognise that the ranking of sensitivity/tolerance will vary for different kinds of pollution. In the BMWP/organic pollution ranking, the presence of mayflies or stoneflies, for instance, indicates the cleanest waterways, and these families are given a tolerance score of 10. The lowest scoring invertebrates are worms (Oligochaeta), which score 1. The number of different macroinvertebrate families found is also an important factor, because better quality water is assumed to contain fewer pollutants that would exclude "sensitive" species, resulting in a higher diversity. Kick sampling, where a net is placed downstream from the sampler and the river bed is agitated with the foot for a given period of time (the standard is 3 minutes), is employed. Any macroinvertebrates caught in the net are stored and preserved in an alcohol solution and identified to family level; identification can also be done on the live organisms. The BMWP score equals the sum of the tolerance scores of all macroinvertebrate families in the sample. A higher BMWP score is considered to reflect a better water quality. Alternatively, the Average Score Per Taxon (ASPT) is calculated. The ASPT equals the average of the tolerance scores of all macroinvertebrate families found, and ranges from 0 to 10. The main difference between the two indices is that ASPT does not depend on family richness. Once BMWP and ASPT are calculated, the Lincoln Quality Index (LQI) is used to assess the water quality in the Anglian Water Authority area. Other indices that can be used to assess water quality are the Chandler Score, the Trent Biotic Index and the Rapid Bioassessment Protocols. See also Biological integrity Biosurvey Biomonitoring References Aquatic ecology Environmental science Water quality indicators Water pollution
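The BMWP and ASPT calculations described above reduce to a sum and a mean over the tolerance scores of the families found in a kick sample. The sketch below shows this; apart from the end-points mentioned in the text (mayflies and stoneflies at 10, oligochaete worms at 1), the tolerance scores assigned to each family are placeholders for illustration, not the official BMWP score table.

```python
# BMWP score = sum of tolerance scores of the families present in the sample;
# ASPT = BMWP divided by the number of scoring families present.
# Scores below are illustrative placeholders, except the end-points quoted in
# the article (mayflies/stoneflies 10, oligochaete worms 1).
TOLERANCE = {
    "Heptageniidae (mayflies)": 10,
    "Perlidae (stoneflies)": 10,
    "Gammaridae (shrimps)": 6,      # assumed
    "Baetidae (mayflies)": 4,       # assumed
    "Chironomidae (midges)": 2,     # assumed
    "Oligochaeta (worms)": 1,
}

def bmwp_and_aspt(families_found):
    scores = [TOLERANCE[f] for f in families_found if f in TOLERANCE]
    bmwp = sum(scores)
    aspt = bmwp / len(scores) if scores else 0.0
    return bmwp, aspt

sample = ["Heptageniidae (mayflies)", "Gammaridae (shrimps)", "Baetidae (mayflies)"]
bmwp, aspt = bmwp_and_aspt(sample)
print(f"BMWP = {bmwp}, ASPT = {aspt:.2f}")   # BMWP = 20, ASPT = 6.67
```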
Biological monitoring working party
[ "Chemistry", "Biology", "Environmental_science" ]
457
[ "Water pollution", "Water quality indicators", "Ecosystems", "nan", "Aquatic ecology" ]
7,271,098
https://en.wikipedia.org/wiki/Edge%20recombination%20operator
The edge recombination operator (ERO) is an operator that creates a path that is similar to a set of existing paths (parents) by looking at the edges rather than the vertices. The main application of this is for crossover in genetic algorithms when a genotype with non-repeating gene sequences is needed, such as for the travelling salesman problem. It was described by Darrell Whitley and others in 1989. Algorithm ERO is based on an adjacency matrix, which lists the neighbors of each node in any parent. For example, in a travelling salesman problem such as the one depicted, the node map for the parents CABDEF and ABCEFD (see illustration) is generated by taking the first parent, say, 'ABCEFD' and recording its immediate neighbors, including those that roll around the end of the string. Therefore;
... -> [A] <-> [B] <-> [C] <-> [E] <-> [F] <-> [D] <- ...
...is converted into the following adjacency matrix by taking each node in turn, and listing its connected neighbors;
A: B D
B: A C
C: B E
D: F A
E: C F
F: E D
With the same operation performed on the second parent (CABDEF), the following is produced:
A: C B
B: A D
C: F A
D: B E
E: D F
F: E C
Followed by making a union of these two lists, and ignoring any duplicates. This is as simple as taking the elements of each list and appending them to generate a list of unique link end points. In our example, generating this;
A: B C D = {B,D} ∪ {C,B}
B: A C D = {A,C} ∪ {A,D}
C: A B E F = {B,E} ∪ {F,A}
D: A B E F = {F,A} ∪ {B,E}
E: C D F = {C,F} ∪ {D,F}
F: C D E = {E,D} ∪ {E,C}
The result is another adjacency matrix, which stores the links for a network described by all the links in the parents. Note that more than two parents can be employed here to give more diverse links. However, this approach may result in sub-optimal paths. Then, to create a path K, the following algorithm is employed:
algorithm ero is
    let K be the empty list
    let N be the first node of a random parent.
    while length(K) < length(Parent) do
        K := K, N (append N to K)
        Remove N from all neighbor lists
        if N's neighbor list is non-empty then
            let N* be the neighbor of N with the fewest neighbors in its list (or a random one, should there be multiple)
        else
            let N* be a randomly chosen node that is not in K
        N := N*
To step through the example, we randomly select a node from the parent starting points, {A, C}. () -> A. We remove A from all the neighbor sets, and find that the smallest of B, C and D is B={C,D}. AB. The smallest sets of C and D are C={E,F} and D={E,F}. We randomly select D. ABD. Smallest are E={C,F}, F={C,E}. We pick F. ABDF. C={E}, E={C}. We pick C. ABDFC. The smallest set is E={}. ABDFCE. The length of the child is now the same as the parent, so we are done. Note that the only edge introduced in ABDFCE is AE. Comparison with other operators Edge recombination is generally considered a good option for problems like the travelling salesman problem. In a 1999 study at the University of the Basque Country, edge recombination provided better results than all the other crossover operators including partially mapped crossover and cycle crossover. References Genetic algorithms
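A straightforward Python sketch of the operator described above is given below. It follows the pseudocode directly: build the union adjacency lists, then repeatedly move to the neighbor with the fewest remaining neighbors. Tours are treated as cyclic, as in the worked example.

```python
# Edge recombination crossover: builds a child tour from the union of the
# parents' adjacency (edge) lists, as in the pseudocode above.
import random

def edge_recombination(parent1, parent2):
    nodes = list(parent1)
    # Union adjacency lists, treating each parent as a cyclic tour.
    adj = {n: set() for n in nodes}
    for parent in (parent1, parent2):
        for i, n in enumerate(parent):
            adj[n].add(parent[i - 1])
            adj[n].add(parent[(i + 1) % len(parent)])

    child = []
    current = random.choice([parent1[0], parent2[0]])  # first node of a random parent
    while len(child) < len(nodes):
        child.append(current)
        for neighbors in adj.values():        # remove current from all lists
            neighbors.discard(current)
        if adj[current]:
            # neighbor of current with the fewest remaining neighbors
            fewest = min(len(adj[n]) for n in adj[current])
            candidates = [n for n in adj[current] if len(adj[n]) == fewest]
        else:
            candidates = [n for n in nodes if n not in child]
        if candidates:
            current = random.choice(candidates)
    return "".join(child)

print(edge_recombination("ABCEFD", "CABDEF"))  # e.g. 'ABDFCE'
```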
Edge recombination operator
[ "Biology" ]
879
[ "Genetics techniques", "Genetic algorithms" ]
7,271,261
https://en.wikipedia.org/wiki/Statistical%20semantics
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval. History The term statistical semantics was first used by Warren Weaver in his well-known paper on machine translation. He argued that word sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption that "a word is characterized by the company it keeps" was advocated by J.R. Firth. This assumption is known in linguistics as the distributional hypothesis. Emile Delavenay defined statistical semantics as the "statistical study of the meanings of words and their frequency and order of recurrence". "Furnas et al. 1983" is frequently cited as a foundational contribution to statistical semantics. An early success in the field was latent semantic analysis. Applications Research in statistical semantics has resulted in a wide variety of algorithms that use the distributional hypothesis to discover many aspects of semantics, by applying statistical techniques to large corpora: Measuring the similarity in word meanings Measuring the similarity in word relations Modeling similarity-based generalization Discovering words with a given relation Classifying relations between words Extracting keywords from documents Measuring the cohesiveness of text Discovering the different senses of words Distinguishing the different senses of words Subcognitive aspects of words Distinguishing praise from criticism Related fields Statistical semantics focuses on the meanings of common words and the relations between common words, unlike text mining, which tends to focus on whole documents, document collections, or named entities (names of people, places, and organizations). Statistical semantics is a subfield of computational semantics, which is in turn a subfield of computational linguistics and natural language processing. Many of the applications of statistical semantics (listed above) can also be addressed by lexicon-based algorithms, instead of the corpus-based algorithms of statistical semantics. One advantage of corpus-based algorithms is that they are typically not as labour-intensive as lexicon-based algorithms. Another advantage is that they are usually easier to adapt to new languages or noisier new text types from e.g. social media than lexicon-based algorithms are. However, the best performance on an application is often achieved by combining the two approaches. See also Co-occurrence Computational linguistics Information retrieval Latent semantic analysis Latent semantic indexing Semantic analytics Semantic similarity Statistical natural language processing Text corpus Text mining Web mining References Sources Reprinted in Applications of artificial intelligence Computational linguistics Information retrieval techniques Semantics Statistical natural language processing Applied statistics Computational fields of study
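As a minimal illustration of the distributional hypothesis described above, the sketch below builds word co-occurrence vectors from a tiny invented corpus and compares words by cosine similarity. The toy sentences are made up for illustration, and the approach is a bare-bones caricature of corpus-based methods such as latent semantic analysis.

```python
# Toy illustration of the distributional hypothesis: words are compared by
# the company they keep, via cosine similarity of co-occurrence vectors.
# The corpus below is invented for illustration.
from collections import Counter
from math import sqrt

corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "stocks rise as markets rally",
    "markets fall as stocks slide",
]

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of the words seen within +/- window of it."""
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window): i] + words[i + 1: i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vecs = cooccurrence_vectors(corpus)
print("cat ~ dog   :", round(cosine(vecs["cat"], vecs["dog"]), 2))
print("cat ~ stocks:", round(cosine(vecs["cat"], vecs["stocks"]), 2))
# "cat" and "dog" share contexts ("the", "drinks", "chases"), so their
# similarity comes out much higher than that of "cat" and "stocks".
```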
Statistical semantics
[ "Mathematics", "Technology" ]
534
[ "Computational fields of study", "Applied mathematics", "Computational linguistics", "Computing and society", "Natural language and computing", "Applied statistics" ]
7,272,050
https://en.wikipedia.org/wiki/Rangekeeper
Rangekeepers were electromechanical fire control computers used primarily during the early part of the 20th century. They were sophisticated analog computers whose development reached its zenith following World War II, specifically the Computer Mk 47 in the Mk 68 Gun Fire Control system. During World War II, rangekeepers directed gunfire on land, sea, and in the air. While rangekeepers were widely deployed, the most sophisticated rangekeepers were mounted on warships to direct the fire of long-range guns. These warship-based computing devices needed to be sophisticated because the problem of calculating gun angles in a naval engagement is very complex. In a naval engagement, both the ship firing the gun and the target are moving with respect to each other. In addition, the ship firing its gun is not a stable platform because it will roll, pitch, and yaw due to wave action, ship change of direction, and board firing. The rangekeeper also performed the required ballistics calculations associated with firing a gun. This article focuses on US Navy shipboard rangekeepers, but the basic principles of operation are applicable to all rangekeepers regardless of where they were deployed. Function A rangekeeper is defined as an analog fire control system that performed three functions: Target tracking The rangekeeper continuously computed the current target bearing. This is a difficult task because both the target and the ship firing (generally referred to as "own ship") are moving. This requires knowing the target's range, course, and speed accurately. It also requires accurately knowing the own ship's course and speed. Target position prediction When a gun is fired, it takes time for the projectile to arrive at the target. The rangekeeper must predict where the target will be at the time of projectile arrival. This is the point at which the guns are aimed. Gunfire correction Directing the fire of a long-range weapon to deliver a projectile to a specific location requires many calculations. The projectile point of impact is a function of many variables, including: gun azimuth, gun elevation, wind speed and direction, air resistance, gravity, latitude, gun/sight parallax, barrel wear, powder load, and projectile type. History Manual fire control The early history of naval fire control was dominated by the engagement of targets within visual range (also referred to as direct fire). In fact, most naval engagements before 1800 were conducted at ranges of . Even during the American Civil War, the famous engagement between the and the was often conducted at less than range. With time, naval guns became larger and had greater range. At first, the guns were aimed using the technique of artillery spotting. Artillery spotting involved firing a gun at the target, observing the projectile's point of impact (fall of shot), and correcting the aim based on where the shell was observed to land, which became more and more difficult as the range of the gun increased. Predecessor fire control tools and systems Between the American Civil War and 1905, numerous small improvements were made in fire control, such as telescopic sights and optical rangefinders. There were also procedural improvements, like the use of plotting boards to manually predict the position of a ship during an engagement. Around 1905, mechanical fire control aids began to become available, such as the Dreyer Table, Dumaresq (which was also part of the Dreyer Table), and Argo Clock, but these devices took a number of years to become widely deployed. 
These devices were early forms of rangekeepers. The issue of directing long-range gunfire came into sharp focus during World War I with the Battle of Jutland. While the British were thought by some to have the finest fire control system in the world at that time, during the Battle of Jutland only 3% of their shots actually struck their targets. At that time, the British primarily used a manual fire control system. The one British ship in the battle that had a mechanical fire control system turned in the best shooting results. This experience contributed to rangekeepers becoming standard issue. Power drives and Remote Power Control (RPC) The US Navy's first deployment of a rangekeeper was on the in 1916. Because of the limitations of the technology at that time, the initial rangekeepers were crude. During World War I, the rangekeepers could generate the necessary angles automatically, but sailors had to manually follow the directions of the rangekeepers (a task called "pointer following" or "follow the pointer"). Pointer following could be accurate, but the crews tended to make inadvertent errors when they became fatigued during extended battles. During World War II, servomechanisms (called "power drives" in the U.S. Navy and RPC in the Royal Navy) were developed that allowed the guns to automatically steer to the rangekeeper's commands with no manual intervention. The Mk. 1 and Mk. 1A computers contained approximately 20 servomechanisms, mostly position servos, to minimize torque load on the computing mechanisms. The Royal Navy first installed RPC, experimentally, aboard HMS Champion in 1928. In the 1930s RPC was used for naval searchlight control and during WW2 it was progressively installed on pom-pom mounts and directors, 4-inch, 4.5-inch and 5.25-inch gun mounts. During their long service life, rangekeepers were updated often as technology advanced, and by World War II they were a critical part of an integrated fire control system. The incorporation of radar into the fire control system early in World War II provided ships with the ability to conduct effective gunfire operations at long range in poor weather and at night. Service in World War II During World War II, rangekeeper capabilities were expanded to the extent that the name "rangekeeper" was deemed to be inadequate. The term "computer," which had been reserved for human calculators, came to be applied to the rangekeeper equipment. After World War II, digital computers began to replace rangekeepers. However, components of the analog rangekeeper system continued in service with the US Navy until the 1990s. The performance of these analog computers was impressive. The battleship during a 1945 test was able to maintain an accurate firing solution on a target during a series of high-speed turns. It is a major advantage for a warship to be able to maneuver while engaging a target. Night naval engagements at long range became feasible when radar data could be input to the rangekeeper. The effectiveness of this combination was demonstrated in November 1942 at the Third Battle of Savo Island when the engaged the Japanese battlecruiser at a range of at night. The Kirishima was set aflame, suffered a number of explosions, and was scuttled by her crew. She had been hit by nine rounds out of 75 fired (12% hit rate). The wreck of the Kirishima was discovered in 1992 and showed that the entire bow section of the ship was missing. 
The Japanese during World War II did not develop radar or automated fire control to the level of the US Navy and were at a significant disadvantage. The Royal Navy began to introduce gyroscopic stabilization of their director gunsights in World War One and by the start of World War Two all warships fitted with director control had gyroscopically controlled gunsights. The last combat action for the analog rangekeepers, at least for the US Navy, was in the 1991 Persian Gulf War when the rangekeepers on the s directed their last rounds in combat. Construction Rangekeepers were very large, and the ship designs needed to make provisions to accommodate them. For example, the Ford Mk 1A Computer weighed The Mk. 1/1A's mechanism support plates, some were up to thick, were made of aluminum alloy, but nevertheless, the computer is very heavy. On at least one refloated museum ship, the destroyer (now in Boston), the computer and Stable Element more than likely still are below decks, because they are so difficult to remove. The rangekeepers required a large number of electrical signal cables for synchro data transmission links over which they received information from the various sensors (e.g. gun director, pitometer, rangefinder, gyrocompass) and sent commands to the guns. These computers also had to be formidably rugged, partly to withstand the shocks created by firing the ship's own guns, and also to withstand the effects of hostile enemy hits to other parts of the ship. They not only needed to continue functioning, but also stay accurate. The Ford Mark 1/1A mechanism was mounted into a pair of approximately cubical large castings with very wide openings, the latter covered by gasketed castings. Individual mechanisms were mounted onto thick aluminum-alloy plates, and along with interconnecting shafts, were progressively installed into the housing. Progressive assembly meant that future access to much of the computer required progressive disassembly. The Mk 47 computer was a radical improvement in accessibility over the Mk 1/1A. It was more akin to a tall, wide storage cabinet in shape, with most or all dials on the front vertical surface. Its mechanism was built in six sections, each mounted on very heavy-duty pull-out slides. Behind the panel were typically a horizontal and a vertical mounting plate, arranged in a tee. Mechanisms The problem of rangekeeping Long-range gunnery is a complex combination of art, science, and mathematics. There are numerous factors that affect the ultimate placement of a projectile and many of these factors are difficult to model accurately. As such, the accuracy of battleship guns was ≈1% of range (sometimes better, sometimes worse). Shell-to-shell repeatability was ≈0.4% of range. Accurate long-range gunnery requires that a number of factors be taken into account: Target course and speed Own ship course and speed Gravity Coriolis effect: Because the Earth is rotating, there is an apparent force acting on the projectile. Internal ballistics: Guns do wear, and this aging must be taken into account by keeping an accurate count of the number of projectiles sent through the barrel (this count is reset to zero after the installation of a new liner). There are also shot-to-shot variations due to barrel temperature and interference between guns firing simultaneously. External ballistics: Different projectiles have different ballistic characteristics. Also, air conditions have an effect as well (temperature, wind, air pressure). 
Parallax correction: In general, the gun and the target-spotting equipment (radar mounted on the gun director, pelorus, etc.) are in different locations on a ship. This creates a parallax error for which corrections must be made. Projectile characteristics (e.g. ballistic coefficient) Powder charge weight and temperature The calculations to predict and compensate for all these factors are complicated, frequent and error-prone when done by hand. Part of the complexity came from the amount of information that must be integrated from many different sources. For example, information from the following sensors, calculators, and visual aids must be integrated to generate a solution: Gyrocompass: This device provides an accurate true-north own ship course. Rangefinders: Optical devices for determining the range to a target. Pitometer logs: These devices provided an accurate measurement of the own ship's speed. Range clocks: These devices provided a prediction of the target's range at the time of projectile impact if the gun were fired now. This function could be considered "range keeping". Angle clocks: These devices provided a prediction of the target's bearing at the time of projectile impact if the gun were fired now. Plotting board: A map of the gunnery platform and target that allowed predictions to be made as to the future position of a target. (The compartment ("room") where the Mk.1 and Mk.1A computers were located was called "Plot" for historical reasons.) Various slide rules: These devices performed the various calculations required to determine the required gun azimuth and elevation. Meteorological sensors: Temperature, wind speed, and humidity all have an effect on the ballistics of a projectile. U.S. Navy rangekeepers and analog computers did not consider different wind speeds at differing altitudes. To increase speed and reduce errors, the military felt a dire need to automate these calculations. To illustrate the complexity, Table 1 lists the types of manual input for the Ford Mk 1 Rangekeeper (ca 1931).
Table 1: Manual inputs into a pre-WWII rangekeeper
Range – phoned from range finder
Own ship course – gyrocompass repeater
Own ship speed – pitometer log
Target course – initial estimates for rate control
Target speed – initial estimates for rate control
Target bearing – automatically from director
Spotting data – spotter, by telephone
However, even with all this data, the rangekeeper's position predictions were not infallible. The rangekeeper's prediction characteristics could be used against it. For example, many captains under long-range gun attack would make violent maneuvers to "chase salvos" or "steer for the fall of shot," i.e., maneuver to the position of the last salvo splashes. Because the rangekeepers were constantly predicting new positions for the target, it was unlikely that subsequent salvos would strike the position of the previous salvo. Practical rangekeepers had to assume that targets were moving in a straight-line path at a constant speed, to keep complexity within acceptable limits. A sonar rangekeeper was built to track a target circling at a constant radius of turn, but that function was disabled. General technique The data were transmitted by rotating shafts. These were mounted in ball-bearing brackets fastened to the support plates. Most corners were at right angles, facilitated by miter gears in 1:1 ratio. The Mk.
47, which was modularized into six sections on heavy-duty slides, connected the sections together with shafts in the back of the cabinet. Shrewd design meant that the data carried by these shafts required no manual zeroing or alignment; only their movement mattered. The aided-tracking output from an integrator roller is one such example. When the section was slid back into normal position, the shaft couplings mated as soon as the shafts rotated. Common mechanisms in the Mk. 1/1A included many miter-gear differentials, a group of four 3-D cams, some disk-ball-roller integrators, and servo motors with their associated mechanism; all of these had bulky shapes. However, most of the computing mechanisms were thin stacks of wide plates of various shapes and functions. A given mechanism might be up to thick, possibly less, and more than a few were maybe across. Space was at a premium, but for precision calculations, more width permitted a greater total range of movement to compensate for slight inaccuracies, stemming from looseness in sliding parts. The Mk. 47 was a hybrid, doing some computing electrically, and the rest mechanically. It had gears and shafts, differentials, and totally enclosed disk-ball-roller integrators. However, it had no mechanical multipliers or resolvers ("component solvers"); these functions were performed electronically, with multiplication carried out using precision potentiometers. In the Mk. 1/1A, however, excepting the electrical drive servos, all computing was mechanical. Implementations of mathematical functions The implementation methods used in analog computers were many and varied. The fire control equations implemented during World War II on analog rangekeepers are the same equations implemented later on digital computers. The key difference is that the rangekeepers solved the equations mechanically. While mathematical functions are not often implemented mechanically today, mechanical methods exist to implement all the common mathematical operations. Some examples include: Addition and subtraction Differential gears, usually referred to by technicians simply as "differentials", were often used to perform addition and subtraction operations. The Mk. 1A contained approximately 160 of them. The history of this gearing for computing dates to antiquity (see Antikythera mechanism). Multiplication by a constant Gear ratios were very extensively used to multiply a value by a constant. Multiplication of two variables The Mk. 1 and Mk.1A computer multipliers were based on the geometry of similar triangles. Sine and cosine generation (polar-to-rectangular coordinate conversion) These mechanisms would be called resolvers, today; they were called "component solvers" in the mechanical era. In most instances, they resolved an angle and magnitude (radius) into sine and cosine components, with a mechanism consisting of two perpendicular Scotch yokes. A variable crankpin radius handled the magnitude of the vector in question. Integration Ball-and-disk integrators performed the integration operation. As well, four small Ventosa integrators in the Mk. 1 and Mk. 1A computers scaled rate-control corrections according to angles. The integrators had rotating discs and a full-width roller mounted in a hinged casting, pulled down toward the disc by two strong springs. Twin balls permitted free movement of the radius input with the disk stopped, something done at least daily for static tests. 
Integrators were made with discs of 3, 4 and 5 inch (7.6, 10 and 12.5 cm) diameters, the larger being more accurate. Ford Instrument Company integrators had a clever mechanism for minimizing wear when the ball-carrier carriage was in one position for extended periods. Component integrators Component integrators were essentially Ventosa integrators, all enclosed. Think of a traditional heavy-ball computer mouse and its pickoff rollers at right angles to each other. Underneath the ball is a roller that turns to rotate the mouse ball. However, the shaft of that roller can be set to any angle you want. In the Mk. 1/1A, a rate-control correction (keeping the sights on target) rotated the ball, and the two pickoff rollers at the sides distributed the movement appropriately according to angle. That angle depended upon the geometry of the moment, such as which way the target was heading. Differentiation Differentiation was performed by using an integrator in a feedback loop. Functions of one variable Rangekeepers used a number of cams to generate function values. Many face cams (flat discs with wide spiral grooves) were used in both rangekeepers. For surface fire control (the Mk. 8 Range Keeper), a single flat cam was sufficient to define ballistics. Functions of two variables In the Mk. 1 and Mk 1A computers, four three-dimensional cams were needed. These used cylindrical coordinates for their inputs, one being the rotation of the cam, and the other being the linear position of the ball follower. The radial displacement of the follower yielded the output. The four cams in the Mk. 1/1A computer provided mechanical time fuse setting, time of flight (this time is from firing to bursting at or near the target), time of flight divided by predicted range, and superelevation combined with vertical parallax correction. (Superelevation is essentially the amount the gun barrel needs to be raised to compensate for gravity drop.) Servo speed stabilization The Mk.1 and Mk.1A computers were electromechanical, and many of their mechanical calculations required drive movements of precise speeds. They used reversible two-phase capacitor-run induction motors with tungsten contacts. These were stabilized primarily by rotary magnetic drag (eddy-current) slip clutches, similar to classical rotating-magnet speedometers, but with a much higher torque. One part of the drag was geared to the motor, and the other was constrained by a fairly stiff spring. This spring offset the null position of the contacts by an amount proportional to motor speed, thus providing velocity feedback. Flywheels mounted on the motor shafts, but coupled by magnetic drags, prevented contact chatter when the motor was at rest. Unfortunately, the flywheels must also have slowed down the servos somewhat. A more elaborate scheme, which placed a rather large flywheel and differential between the motor and the magnetic drag, eliminated velocity error for critical data, such as gun orders. The Mk. 1 and Mk. 1A computer integrator discs required a particularly elaborate system to provide constant and precise drive speeds. They used a motor with its speed regulated by a clock escapement, cam-operated contacts, and a jeweled-bearing spur-gear differential. Although the speed oscillated slightly, the total inertia made it effectively a constant-speed motor. At each tick, contacts switched on motor power, then the motor opened the contacts again. It was in effect slow pulse-width modulation of motor power according to load. 
When running, the computer had a unique sound as motor power was switched on and off at each tick—dozens of gear meshes inside the cast-metal computer housing spread out the ticking into a "chunk-chunk" sound. Assembly A detailed description of how to dismantle and reassemble the system was contained in the two-volume Navy Ordnance Pamphlet OP 1140 with several hundred pages and several hundred photographs. When reassembling, shaft connections between mechanisms had to be loosened and the mechanisms mechanically moved so that an output of one mechanism was at the same numerical setting (such as zero) as the input to the other. Fortunately these computers were especially well-made, and very reliable. Related targeting systems During WWII, all the major warring powers developed rangekeepers to different levels. Rangekeepers were only one member of a class of electromechanical computers used for fire control during World War II. Related analog computing hardware used by the United States included: Norden bombsight US bombers used the Norden bombsight, which used similar technology to the rangekeeper for predicting bomb impact points. Torpedo Data Computer (TDC) US submarines used the TDC to compute torpedo launch angles. This device also had a rangekeeping function that was referred to as "position keeping." This was the only submarine-based fire control computer during World War II that performed target tracking. Because space within a submarine hull is limited, the TDC designers overcame significant packaging challenges in order to mount the TDC within the allocated volume. M-9/SCR-584 Anti-Aircraft System This equipment was used to direct air defense artillery. It made a particularly good account of itself against the V-1 flying bombs. See also Director (military) Gun data computer Fire-control system Kerrison Predictor Notes Bibliography External links Appendix one, Classification of Director Instruments USN Report on IJN Technology Excellent article on the performance of long-range gunnery between the World Wars. British fire control British fire control expert Ford Instrument Company museum site. Ford built rangekeepers for the US Navy during World Wars I and II OP1140, a superb Navy manual. Chapter 2 has many fine illustrations and clearly written text. Basic Mechanisms in Fire Control Computers. United States Navy Training Film. MN-6783a and MN-6783b. 1953. 2 parts of 4. Military computers Electro-mechanical computers Analog computers Artillery operation Artillery components Naval artillery Fire-control computers of World War II
Rangekeeper
[ "Technology" ]
4,775
[ "Artillery components", "Components" ]
7,272,109
https://en.wikipedia.org/wiki/Hydrography%20of%20Norte%20de%20Santander
The department of Norte de Santander, in northeastern Colombia, and its capital, Cúcuta, contain several rivers. The rivers are mostly part of the Maracaibo Lake basin, with the southwestern section located in the Magdalena River basin. Important fluvial elements are the Zulia, Catatumbo and Pamplonita Rivers. The entity in charge of managing the hydrology of Norte de Santander is Corponor. Topography The department of Norte de Santander is for the most part situated in the Eastern Ranges of the Colombian Andes. The northeastern majority of the department is part of the Maracaibo Lake drainage basin, while the southwestern tip of Norte de Santander forms part of the Magdalena River basin. The southeasternmost part of the department is located in the Orinoco River basin. The department, reaching to an altitude of in the Tamá Páramo, has a total surface area of . Hydrography Catatumbo River The Catatumbo River is a fast-flowing river, originating as the confluence of the Peralonso, Sardinata and Zulia Rivers in the central valley of Norte de Santander. The upper part of the river is sourced from the highlands near the Macho Rucio Peak ("gray mule peak"), located in the south of Ocaña province. Its mouth is at Lake Maracaibo in Venezuela, through a delta called La Empalizada ("the fence"). Early sections of the Catatumbo River are known as Chorro Oroque, Rio de la Cruz, and Algodonal. It is navigable only in its Venezuelan section. Left tributaries of the Catatumbo River include: Main affluents Río Frío, Río de Oro, Erbura, Tiradera and San Miguelito Minor affluents Sajada, El Molino, San Lucas, Los Indios, Zurita, Carbón, Naranjito, Sánchez, Joaquín Santos, Teja, San Carlos, Guaduas, Águila, Lejía, Honda, Capitán Largo, Manuel Díaz, Oropora, Huevo, La Vieja, Guayabal, Guamos and Roja Right tributaries include: Main affluents San Miguel, Tarra, Orú, Sardinata and Zulia Minor affluents La Urugmita, La Labranza, Seca, Cargamenta and San Calixto or Maravilla Peralonso River The Peralonso River originates in a small lake in the Guerrero highlands, at altitude. It crosses the municipalities of Salazar, Gramalote, Santiago, San Cayetano and Cúcuta, ending in the Zulia River, near the village of San Cayetano. It forms the upper course of the Catatumbo River. Sardinata River The Sardinata River originates in La Vuelta, in the Guerrero highlands, near the village of Caro at about above sea level. It has a length of almost . Near the river, many forestry and mining activities are present. Its affluents are, on the left, the Santa, and, on the right, the Riecito San Miguel, La Sapa, José, La Esmeralda, La Resaca and Pedro José. Its Colombian segment ends at Tres Bocas, and the river continues into Venezuela, terminating in the Catatumbo River. Zulia River The Zulia River is formed by several rivers originating in lakes in the highlands of Cachiri at about 4,220 meters above sea level, and located between 12°41'2" east longitude and 8'9" north latitude in the Santander Department, in the eastern range of the Andes mountains. The river flows through the province of Cúcuta, passing through the neighbouring nation of Venezuela, and ends in the waters of Lake Maracaibo. In Colombian territory, this river is navigable for about , starting from the old port of Los Canchos. The river flows for through Venezuela, the last being deep and calm, suitable for vessels of large size.
In the past, this river provided a basic means of transportation which was responsible for much of the prosperity of the neighbouring valleys, serving as the center of a thriving trade whose products fed many of the towns nearby. The Zulia river's tributaries include the Cucutilla, Arboleda, Salazar and the Peralonso Rivers, which flow from the left, and the Pamplonita River with its own tributaries the Táchira and La Grita Rivers, flowing from the right. Areas surrounding the Zulia River are fertile, with many forests decorating the landscape. However, the climate of this area could be seen as unhealthy, due to the density of trees and the many swamps. Salazar River The Salazar River originates near the city of Zulia and terminates near the namesake town of Salazar de las Palmas. It is an important river in the traditions of the local inhabitants. It is used for swimming, fishing and cooking the traditional sancocho soup on the beaches of the river. Some stretches of the river near the town of Salazar, where many minor streams fall into the river as waterfalls, are often visited by tourists. La Grita River The La Grita River originates in the Venezuelan Andes near the town of La Grita at about above sea level. It is a natural boundary between Colombia and Venezuela for about , until its mouth in the Zulia River. Its affluents are the Guaramito River, La China, Riecito, Río Lobatera, and Caño de La Miel. Pamplonita River The Pamplonita River was of crucial importance in the economy of the country in the 18th to 19th centuries as the main channel for the transportation of cacao. It originates in the Altogrande Mountains at above sea level, near the city of Pamplona. It flows downhill through the Cariongo Valley, and near Chinácota it receives the minor affluent Honda River. The Pamplonita River flows through the Cúcuta valley, where it has a slow flow, ending in the Zulia River, which flows towards Maracaibo Lake. Most of this river lies well above sea level. The total length of the river is about and its watershed covers . The confluence of the Pamplonita and Zulia Rivers is located near the urban area of Cúcuta, the capital of Norte de Santander, in particular the Rinconada neighborhood. This stretch carries a risk of flooding that can reach into the streets of the city. The river has periodically flooded the local hospital and the Colón Park, named after Cristóbal Colón. The river also produces significant erosion in the surrounding lands, in part because of the local dry climate and shortage of vegetation. This is seen most noticeably in the areas near Cúcuta: La Garita and Los Vados. The Pamplonita River crosses the municipalities of Cúcuta, Pamplona, Los Patios, Chinácota, Bochalema and Pamplonita, and the villages of El Diamante, La Donjuana, La Garita, San Faustino and Agua Clara. The river receives sewage water from Pamplona, Los Patios and Cúcuta, and residues from slaughterhouses, pesticides and fertilizers. Law 1541 limits the use of water from the rivers to concessions regulated by the local government, but there are many illegal, unregulated diversions of water.
Affluents of Pamplonita River are: Right Main affluents: Táchira River, Rio Viejo, Las Brujas, Caño Cachicana and Caño Guardo Minor affluents: Monteadentro, Los Negros, Los Cerezos, Zipachá, Tanauca, Ulagá, El Gabro, El Ganso, Santa Helena, Cucalina, La Teja, De Piedra, La Palmita, Matagira, La Chorrera, Iscalá, Honda, Cascarena, Villa Felisa, Ciénaga, Juana Paula, Don Pedra, Faustinera, Europea, Rodea, Aguasucia Left Minor affluents: Navarro, San Antonio, La Palma, Hojancha, La Laguna, Batagá, Galindo, Santa Lucía, Las Colonias, El Laurel, Chiracoca, Montuosa, El Masato, Quebraditas, Aguanegra, Zorzana, El Ojito, Jaguala, Viajaguala, Tío José, El Magro, Aguadas, La Rinconada, Periquera, Voladora, La Sarrera, La Cuguera, Guaimaraca, Aguaclarera, La Trigrera, Negro, El Oso, and Chipo Táchira River The Táchira River originates near Tamá, in the mountains of Las Banderas, at an altitude of above sea level. The river flows towards the north, as a natural boundary between Colombia and Venezuela. It crosses the municipalities of Herrán, Ragonvalia, Villa del Rosario and Cúcuta. Tachira River flows into the Pamplonita River near the village El Escobal. Its affluents are El Salado, La Margarita, El Naranjal, Palogordo, El Palito, Agua Sucia and la Horma. See also List of rivers of Colombia References Norte de Santander Geography of Norte de Santander Department Hydrography
Hydrography of Norte de Santander
[ "Environmental_science" ]
1,955
[ "Hydrography", "Hydrology" ]
7,272,384
https://en.wikipedia.org/wiki/Neocallimastigomycota
Neocallimastigomycota is a phylum containing anaerobic fungi, which are symbionts found in the digestive tracts of larger herbivores. Anaerobic fungi were originally placed within the phylum Chytridiomycota, in the order Neocallimastigales, but were later raised to phylum level, a decision upheld by later phylogenetic reconstructions. It encompasses only one family. Discovery The fungi in Neocallimastigomycota were first recognised as fungi by Orpin in 1975, based on motile cells present in the rumen of sheep. Their zoospores had been observed much earlier but were believed to be flagellate protists, until Orpin demonstrated that they possessed a chitin cell wall. It has since been shown that they are fungi related to the core chytrids. Prior to this, the microbial population of the rumen was believed to consist only of bacteria and protozoa. Since their discovery they have been isolated from the digestive tracts of over 50 herbivores, including ruminant and non-ruminant (hindgut-fermenting) mammals and herbivorous reptiles. Neocallimastigomycota have also been found in humans. Circumscription Reproduction and growth These fungi reproduce in the rumen of ruminants through the formation of zoospores which are released from sporangia. These zoospores bear a kinetosome but lack the nonflagellated centriole known in most chytrids, and have been known to utilize horizontal gene transfer in their development of xylanase (from bacteria) and other glucanases. The nuclear envelopes of their cells are notable for remaining intact throughout mitosis. Sexual reproduction has not been observed in anaerobic fungi. However, they are known to be able to survive for many months in aerobic environments, a factor which is important in the colonisation of new hosts. In Anaeromyces, the presence of putative resting spores has been observed, but the way in which these are formed and germinate remains unknown. Metabolism Neocallimastigomycota lack mitochondria and instead contain hydrogenosomes, in which NADH is oxidized to NAD+, leading to the formation of H2. Polysaccharide-degrading activity Neocallimastigomycota play an essential role in fibre digestion in their host species. They are present in large numbers in the digestive tracts of animals which are fed on high-fibre diets. The polysaccharide-degrading enzymes produced by anaerobic fungi can hydrolyse the most recalcitrant plant polymers and can degrade unlignified plant cell walls entirely. Orpinomyces sp. has been shown to produce xylanase, CMCase, lichenase, amylase, β-xylosidase, β-glucosidase, α-L-arabinofuranosidase and minor amounts of β-cellobiosidase when utilizing Avicel as the sole energy source. The polysaccharide-degrading enzymes are organised into a multiprotein complex, similar to the bacterial cellulosome. Spelling of name The Greek termination, "-mastix", referring to "whips", i.e. the many flagella on these fungi, is changed to "-mastig-" when combined with additional terminations in Latinized names. The family name Neocallimastigaceae was originally incorrectly published as "Neocallimasticaceae" by the publishing authors, which led to the coinage of the misspelled, hence incorrect, "Neocallimasticales", an easily forgiven error considering that other "-ix" stems behave the same way (Salix, for example, goes to Salicaceae). Correction of these names is mandated by the International Code of Botanical Nomenclature, Art. 60. The corrected spelling is used by Index Fungorum.
Both spellings occur in the literature and on the WWW as a result of the spelling in the original publication. References External links The Anaerobic Fungi Network Fungus phyla Fungi by classification
Neocallimastigomycota
[ "Biology" ]
860
[ "Eukaryotes by classification", "Fungi", "Fungi by classification" ]
7,273,059
https://en.wikipedia.org/wiki/Value%20of%20lost%20load
The Value of Lost Load (VoLL) is the estimated amount that customers receiving electricity with firm contracts would be willing to pay to avoid a disruption in their electricity service. The value of these losses can be expressed as a customer damage function (CDF). A CDF is defined as: Loss ($/kW) = ƒ (duration, season, time of day, notice) Based on the calculated outage cost, a CDF can be obtained for various customer groups. Typically, there are three distinct groups of customers: residential, small and medium commercial/industrial, and large commercial/industrial. The incremental CDFs of these three groups relate the magnitude of customer losses (per kW interrupted) to the duration of a power outage. While the general shapes of all three curves are similar, the magnitude of loss varies dramatically depending on the customer's size. Based on VoLL data from an EGAT survey in March and April 2000, the cost in the first hour for residential customers was estimated at Baht 11.45/kW. For large commercial/industrial (C/I) and small & medium C/I customers, the cost in the first hour was Baht 29.55/kW and Baht 89.50/kW respectively. Further research from EPRI indicates that residential customers' costs tend to peak at US$1.50/kW in the first hour and fall off to US$0.46/kW in subsequent hours. On the other hand, large C/I and small & medium C/I customers suffer much higher losses of US$10/kW and US$38/kW respectively in the first hour. This falls to US$4/kW and US$9/kW respectively in the subsequent hours. The CDF predicts the loss per interrupted kW based on factors that influence the outage. The CDF is usually calculated based on defined market segments such as residential, commercial, and industrial. This is done because there are large variations in costs across the utility market segments. However, over a large area it is normal practice to aggregate the interruption cost to represent a general picture of the load losses. The aggregated interruption cost is called the composite CDF. References External links MEng Dissertation by Daniel Andrew Sen entitled "Stratification & Sampling of Electricity Consumer Data for Outage Cost Determination" Power engineering Energy economics Electric power distribution
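As a rough illustration of how segment damage functions might be combined into a composite CDF as described above, the sketch below interpolates a simple per-segment cost-versus-duration relation and takes a load-weighted average. The first-hour and subsequent-hour costs echo the EPRI figures quoted above, while the segment load shares are purely hypothetical assumptions, not values from the text.

```python
# Illustrative aggregation of segment customer damage functions (CDFs) into a
# composite CDF.  Cost figures echo the EPRI values quoted above (US$/kW);
# the load shares are hypothetical assumptions.

SEGMENT_CDF = {
    # segment: (first-hour cost, cost per subsequent hour), US$/kW interrupted
    "residential": (1.50, 0.46),
    "small_medium_ci": (38.0, 9.0),
    "large_ci": (10.0, 4.0),
}

LOAD_SHARE = {  # hypothetical fraction of interrupted load in each segment
    "residential": 0.4,
    "small_medium_ci": 0.25,
    "large_ci": 0.35,
}

def segment_cost(segment, duration_hours):
    """Piecewise cost per interrupted kW for an outage of the given length."""
    first, later = SEGMENT_CDF[segment]
    if duration_hours <= 1.0:
        return first * duration_hours
    return first + later * (duration_hours - 1.0)

def composite_cdf(duration_hours):
    """Load-weighted average cost per interrupted kW across all segments."""
    return sum(LOAD_SHARE[s] * segment_cost(s, duration_hours) for s in SEGMENT_CDF)

if __name__ == "__main__":
    for hours in (0.5, 1, 2, 4):
        print(f"{hours} h outage: {composite_cdf(hours):.2f} $/kW interrupted")
```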
Value of lost load
[ "Engineering", "Environmental_science" ]
497
[ "Energy economics", "Energy engineering", "Power engineering", "Electrical engineering", "Environmental social science" ]
7,273,383
https://en.wikipedia.org/wiki/Gamma%20ray%20logging
Gamma ray logging is a method of measuring naturally occurring gamma radiation to characterize the rock or sediment in a borehole or drill hole. It is a wireline logging method used in mining, mineral exploration, water-well drilling, for formation evaluation in oil and gas well drilling and for other related purposes. Different types of rock emit different amounts and different spectra of natural gamma radiation. In particular, shales usually emit more gamma rays than other sedimentary rocks, such as sandstone, gypsum, salt, coal, dolomite, or limestone because radioactive potassium is a common component in their clay content, and because the cation-exchange capacity of clay causes them to absorb uranium and thorium. This difference in radioactivity between shales and sandstones/carbonate rocks allows the gamma ray tool to distinguish between shales and non-shales. But it cannot distinguish between carbonates and sandstone as they both have similar deflections on the gamma ray log. Thus gamma ray logs cannot be said to make good lithological logs by themselves, but in practice, gamma ray logs are compared side-by-side with stratigraphic logs. The gamma ray log, like other types of well logging, is done by lowering an instrument down the drill hole and recording gamma radiation variation with depth. In the United States, the device most commonly records measurements at 1/2-foot intervals. Gamma radiation is usually recorded in API units, a measurement originated by the petroleum industry. Gamma rays attenuate according to the diameter of the borehole mainly because of the properties of the fluid filling the borehole, but because gamma logs are generally used in a qualitative way, amplitude corrections are usually not necessary. Three elements and their decay chains are responsible for the radiation emitted by rock: potassium, thorium and uranium. Shales often contain potassium as part of their clay content and tend to absorb uranium and thorium as well. A common gamma-ray log records the total radiation and cannot distinguish between the radioactive elements, while a spectral gamma ray log (see below) can. For standard gamma-ray logs, the measured value of gamma-ray radiation is calculated from concentration of uranium in ppm, thorium in ppm, and potassium in weight percent: e.g., GR API = 8 × uranium concentration in ppm + 4 × thorium concentration in ppm + 16 × potassium concentration in weight percent. Due to the weighted nature of uranium concentration in the GR API calculation, anomalous concentrations of uranium can cause clean sand reservoirs to appear shaley. For this reason, spectral gamma ray is used to provide an individual reading for each element so that anomalous concentrations can be found and properly interpreted. An advantage of the gamma log over some other types of well logs is that it works through the steel and cement walls of cased boreholes. Although concrete and steel absorb some of the gamma radiation, enough travels through the steel and cement to allow for qualitative determinations. In some places, non-shales exhibit elevated levels of gamma radiation. For instance, sandstones can contain uranium minerals, potassium feldspar, clay filling, or lithic fragments that cause the rock to have higher than usual gamma readings. Coal and dolomite may contain absorbed uranium. Evaporite deposits may contain potassium minerals such as sylvite and carnallite. When this is the case, spectral gamma ray logging should be done to identify the source of these anomalies. 
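As a quick illustration of the total gamma ray relation quoted above, the sketch below computes GR API from potassium, thorium and uranium concentrations and shows how an anomalous uranium spike can push an otherwise clean sand toward a shale-like reading. The example concentrations are made-up values for illustration, not measured data.

```python
# Illustrative sketch of the total gamma ray relation quoted above:
#   GR (API) = 8 * U(ppm) + 4 * Th(ppm) + 16 * K(weight %)
# The example concentrations below are made-up values, not measured data.

def gr_api(uranium_ppm, thorium_ppm, potassium_wt_pct):
    """Total gamma ray response in API units from elemental concentrations."""
    return 8.0 * uranium_ppm + 4.0 * thorium_ppm + 16.0 * potassium_wt_pct

if __name__ == "__main__":
    clean_sand = gr_api(uranium_ppm=1.0, thorium_ppm=2.0, potassium_wt_pct=0.5)
    shale = gr_api(uranium_ppm=3.0, thorium_ppm=12.0, potassium_wt_pct=2.5)
    # A uranium-rich but otherwise clean sand: the uranium term alone inflates
    # the total reading, which is why spectral logs are used to separate terms.
    hot_sand = gr_api(uranium_ppm=10.0, thorium_ppm=2.0, potassium_wt_pct=0.5)
    print(f"clean sand ~{clean_sand:.0f} API, shale ~{shale:.0f} API, "
          f"uranium-rich sand ~{hot_sand:.0f} API")
```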
Spectral logging Spectral logging is the technique of measuring the spectrum, or number and energy, of gamma rays emitted via natural radioactivity of the rock formation. There are three main sources of natural radioactivity on Earth: potassium (40K), thorium (principally 232Th and 230Th), and uranium (principally 238U and 235U). These radioactive isotopes each emit gamma rays that have a characteristic energy level measured in MeV. The quantity and energy of these gamma rays can be measured by a scintillometer. A log of the spectroscopic response to natural gamma ray radiation is usually presented as a total gamma ray log that plots the weight fraction of potassium (%), thorium (ppm) and uranium (ppm). The primary standards for the weight fractions are geological formations with known quantities of the three isotopes. Natural gamma ray spectroscopy logs became routinely used in the early 1970s, although they had been studied from the 1950s. The characteristic gamma ray line that is associated with each radioactive component: Potassium : Gamma ray energy 1.46 MeV Thorium series: Gamma ray energy 2.61 MeV Uranium-Radium series: Gamma ray energy 1.76 MeV Another example of the use of spectral gamma ray logs is to identify specific clay types, like kaolinite or illite. This may be useful for interpreting the environment of deposition as kaolinite can form from feldspars in tropical soils by leaching of potassium; and low potassium readings may thus indicate the presence of one or more paleosols. The identification of specific clay minerals is also useful for calculating the effective porosity of reservoir rock. Use in mineral exploration Gamma ray logs are also used in mineral exploration, especially exploration for phosphates, uranium, and potassium salts. References Well logging Petroleum technology
Gamma ray logging
[ "Chemistry", "Engineering" ]
1,078
[ "Petroleum engineering", "Petroleum technology", "Well logging" ]
7,273,549
https://en.wikipedia.org/wiki/Spontaneous%20potential%20logging
The spontaneous potential log, commonly called the self-potential log or SP log, is a passive measurement taken by oil industry well loggers to characterise rock formation properties. The log works by measuring small electric potentials (in millivolts) between an electrode at depth within the borehole and a grounded electrode at the surface. Conductive bore hole fluids are necessary to create an SP response, so the SP log cannot be used in nonconductive drilling muds (e.g. oil-based mud) or air-filled holes. The change in voltage through the well bore is caused by a buildup of charge on the well bore walls. Clays and shales (which are composed predominantly of clays) will generate one charge, while permeable formations such as sandstone will generate an opposite one. Spontaneous potentials occur when two aqueous solutions with different ionic concentrations are placed in contact through a porous, semi-permeable membrane. In nature, ions tend to migrate from high to low ionic concentrations. In the case of SP logging, the two aqueous solutions are the well bore fluid (drilling mud) and the formation water (connate water). The potential opposite shales is called the baseline, and typically shifts only slowly over the depth of the borehole. The relative salinity of the mud and the formation water determines which way the SP curve will deflect opposite a permeable formation. Generally, if the ionic concentration of the well bore fluid is less than that of the formation fluid, the SP reading will be more negative (usually plotted as a deflection to the left). If the formation fluid has an ionic concentration less than that of the well bore fluid, the voltage deflection will be positive (usually plotted as an excursion to the right). The amplitude of the SP curve varies from formation to formation and does not give a definitive measure of the permeability or porosity of the formation being logged. The presence of hydrocarbons (e.g. oil, natural gas, condensate) will reduce the response on an SP log because the interstitial water's contact with the well bore fluid is reduced. This phenomenon is called hydrocarbon suppression and can be used to diagnose rocks for commercial potential. The SP curve is usually 'flat' opposite shale formations because there is no ion exchange, due to their low-permeability, low-porosity (tight) properties, thus creating a baseline. Tight rocks other than shale (e.g. tight sandstones, tight carbonates) will also result in poor or no response on the SP curve because no ion exchange occurs. The SP tool is one of the simplest tools and is generally run as standard when logging a hole, along with the gamma ray. SP data can be used to find: Depths of permeable formations The boundaries of these formations Correlation of formations when compared with data from other analogue wells Values for the formation-water resistivity The SP curve can be influenced by various factors, both in the formation and introduced into the wellbore by the drilling process. These factors can cause the SP curve to be muted or even inverted depending on the situation. Formation bed thickness Resistivities in the formation bed and the adjacent formations Resistivity and make-up of the drilling mud Wellbore diameter The depth of invasion by the drilling mud into the formation Mud invasion into the permeable formation can cause the deflections in the SP curve to be rounded off and to reduce the amplitude of thin beds.
A smaller wellbore will, like mud filtrate invasion, cause the deflections on the SP curve to be rounded off and decrease the amplitude opposite thin beds, while a larger-diameter wellbore has the opposite effect. If the salinity of the mud filtrate is greater than that of the formation water, the SP currents will flow in the opposite direction. In that case the SP deflection will be positive, towards the right. Positive deflections are observed for freshwater-bearing formations. References Petroleum production Well logging
Spontaneous potential logging
[ "Engineering" ]
821
[ "Petroleum engineering", "Well logging" ]
7,273,785
https://en.wikipedia.org/wiki/Europlanet
Europlanet is a network linking planetary scientists from across Europe. The aim of Europlanet is to promote collaboration and communication between partner institutions and to support missions to explore the Solar System. EuroPlaNet co-ordinates activities in Planetary Sciences in order to achieve a long-term integration of this discipline in Europe. In 2021, they produced a pocket atlas to Mars. Objectives The objectives are to: increase the productivity of planetary projects with European investment, with emphasis on major planetary exploration missions; initiate a long-term integration of the European planetary science community; improve European scientific competitiveness, develop and spread expertise in this research area, improve public understanding of planetary environments. These objectives will be achieved by: maximizing synergies between different fields contributing to planetary sciences: space observations, earth-based observations, laboratory studies, numerical simulations, data base development; co-ordinating the design and development of an Integrated and Distributed Information Service (IDIS) providing access to the full set of data sources produced by these complementary fields. EuroPlaNet integrates most of the European planetary exploration work, with initial focus on the Cassini–Huygens mission to Saturn and Titan, operative between 2004 and 2008. The considerable involvement of the European science community in this mission, the broad diversity of its research objectives and the urgent need to achieve a balanced share of data analysis and its results with American colleagues make Cassini–Huygens an ideal test-bed for the development of activities and tools which will contribute to the optimal exploitation of subsequent planetary missions. In addition to overall co-ordination, 6 further activities will be carried out over a 4-year period: discipline working groups; co-ordinate earth-based observations to support and complement space missions; develop an outreach strategy; exchange of personnel; EuroPlaNet-specific meetings and conferences; definition of the basic requirements for future implementation of IDIS for planetary sciences. See also List of astronomical societies References External links Europlanet Outreach Webpage Europlanet Project Webpage Astronomy organizations Planetary science
Europlanet
[ "Astronomy" ]
414
[ "Planetary science", "Astronomical sub-disciplines", "Astronomy organizations" ]
7,273,911
https://en.wikipedia.org/wiki/Interpersonal%20ties
In social network analysis and mathematical sociology, interpersonal ties are defined as information-carrying connections between people. Interpersonal ties, generally, come in three varieties: strong, weak or absent. Weak social ties, it is argued, are responsible for the majority of the embeddedness and structure of social networks in society as well as the transmission of information through these networks. Specifically, more novel information flows to individuals through weak rather than strong ties. Because our close friends tend to move in the same circles that we do, the information they receive overlaps considerably with what we already know. Acquaintances, by contrast, know people that we do not, and thus receive more novel information. Included in the definition of absent ties, according to the American sociologist Mark Granovetter, are those relationships (or ties) without substantial significance, such as "nodding" relationships between people living on the same street, or the "tie", for example, to a frequent vendor one would buy from. Such relations with familiar strangers have also been called invisible ties since they are hardly observable, and are often overlooked as a relevant type of ties. They nevertheless support people's sense of familiarity and belonging. Furthermore, the fact that two people may know each other by name does not necessarily qualify the existence of a weak tie. If their interaction is negligible the tie may be absent or invisible. The "strength" of an interpersonal tie is a linear combination of the amount of time, the emotional intensity, the intimacy (or mutual confiding), and the reciprocal services which characterize each tie. History One of the earliest writers to describe the nature of the ties between people was German scientist and philosopher, Johann Wolfgang von Goethe. In his classic 1809 novella, Elective Affinities, Goethe discussed the "marriage tie". The analogy shows how strong marriage unions are similar in character to particles of quicksilver, which find unity through the process of chemical affinity. In 1954, the Russian mathematical psychologist Anatol Rapoport commented on the "well-known fact that the likely contacts of two individuals who are closely acquainted tend to be more overlapping than those of two arbitrarily selected individuals". This argument became one of the cornerstones of social network theory. In 1973, stimulated by the work of Rapoport and Harvard theorist Harrison White, Mark Granovetter published The Strength of Weak Ties. This paper is now recognized as one of the most influential sociology papers ever written. To obtain data for his doctoral thesis, Granovetter interviewed dozens of people to find out how social networks are used to land new jobs. Granovetter found that most jobs were found through "weak" acquaintances. This pattern reminded Granovetter of his freshman chemistry lesson that demonstrated how "weak" hydrogen bonds hold together many water molecules, which are themselves composed of atoms held together by "strong" covalent bonds. In Granovetter's view, a similar combination of strong and weak bonds holds the members of society together. This model became the basis of his first manuscript on the importance of weak social ties in human life, published in May 1973. According to Current Contents, by 1986, the Weak Ties paper had become a citation classic, being one of the most cited papers in sociology. 
In a related line of research in 1969, anthropologist Bruce Kapferer published "Norms and the Manipulation of Relationships in a Work Context" after doing field work in Africa. In the document, he postulated the existence of multiplex ties, characterized by multiple contexts in a relationship. In telecommunications, a multiplexer is a device that allows a transmission medium to carry a number of separate signals. In social relations, by extrapolation, "multiplexity" is the overlap of roles, exchanges, or affiliations in a social relationship. Research data In 1970, Granovetter submitted his doctoral dissertation to Harvard University, entitled "Changing Jobs: Channels of Mobility Information in a Suburban Community". The thesis of his dissertation illustrated the conception of weak ties. For his research, Dr. Granovetter crossed the Charles River to Newton, Massachusetts, where he surveyed 282 professional, technical, and managerial workers in total; 100 were personally interviewed with regard to the type of ties between the job changer and the contact person who provided the necessary information. Tie strength was measured in terms of how often they saw the contact person during the period of the job transition, using the following assignment: often = at least twice a week occasionally = more than once a year but less than twice a week rarely = once a year or less Of those who found jobs through personal contacts (N=54), 16.7% reported seeing their contact often, 55.6% reported seeing their contact occasionally, and 27.8% rarely. When asked whether a friend had told them about their current job, the most frequent answer was "not a friend, an acquaintance". The conclusion from this study is that weak ties are an important resource in occupational mobility. When seen from a macro point of view, weak ties play a role in affecting social cohesion. Social networks In social network theory, social relationships are viewed in terms of nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. There can be many kinds of ties between the nodes. In its simplest form, a social network is a map of all of the relevant ties between the nodes being studied. Weak tie hypothesis The "weak tie hypothesis" argues, using a combination of probability and mathematics, as originally stated by Anatol Rapoport in 1957, that if A is linked to both B and C, then there is a greater-than-chance probability that B and C are linked to each other: That is, if we consider any two randomly selected individuals, such as A and B, from the set S = A, B, C, D, E, ..., of all persons with ties to either or both of them, then, for example, if A is strongly tied to both B and C, then according to probability arguments, the B–C tie is always present. The absence of the B–C tie, in this situation, would create, according to Granovetter, what is called the forbidden triad. In other words, the B–C tie, according to this logic, is always present, whether weak or strong, given the other two strong ties. In this direction, the "weak tie hypothesis" postulates that clumps or cliques of social structure will form, being bound predominantly by "strong ties", and that "weak ties" will function as the crucial bridge between any two densely knit clumps of close friends. It may follow that individuals with few bridging weak ties will be deprived of information from distant parts of the social system and will be confined to the provincial news and views of their close friends.
However, having a large number of weak ties can mean that novel information is effectively "swamped" among a high volume of information, even crowding out strong ties. The arrangement of links in a network may matter as well as the number of links. Further research is needed to examine the ways in which types of information, numbers of ties, quality of ties, and trust levels interact to affect the spreading of information. Strong ties hypothesis According to David Krackhardt, there are some problems in the Granovetter definition. The first one refers to the fact that the Granovetter definition of the strength of a tie is a curvilinear prediction, and his question is "how do we know where we are on this theoretical curve?". The second one refers to the affective character of strong ties. Krackhardt says that there are subjective criteria in the definition of the strength of a tie, such as emotional intensity and intimacy. He argued that strong ties are very important in times of severe change and uncertainty. He called this particular type of strong tie philos and defined a philos relationship as one that meets the following three necessary and sufficient conditions: Interaction: For A and B to be philos, A and B must interact with each other. Affection: For A and B to be philos, A must feel affection for B. Time: A and B, to be philos, must have a history of interactions with each other that have lasted over an extended period of time. The combination of these qualities predicts trust and predicts that strong ties will be the critical ones in generating trust and discouraging malfeasance. When it comes to major change, change that may threaten the status quo in terms of power and the standard routines of how decisions are made, trust is required. Thus, change is the product of philos. Positive ties and negative ties Starting in the late 1940s, Anatol Rapoport and others developed a probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During these years, formulas were derived that connected local parameters, such as closure of contacts and the supposed existence of the B–C tie, to the global network property of connectivity. Moreover, acquaintanceship (in most cases) is a positive tie. However, there are also negative ties such as animosity among persons. In considering relationships among three actors, Fritz Heider initiated a balance theory of relations. In a larger network represented by a graph, the totality of relations is represented by a signed graph. This effort led to an important and non-obvious Structure Theorem for signed graphs, which was published by Frank Harary in 1953. A signed graph is called balanced if the product of the signs of all relations in every cycle is positive. A signed graph is unbalanced if the product is ever negative. The theorem says that if a network of interrelated positive and negative ties is balanced, then it consists of two subnetworks such that each has positive ties among its nodes and negative ties between nodes in distinct subnetworks. In other words, "my friend's enemy is my enemy". The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two subnetworks may be empty, which might occur in very small networks. In these two developments, we have mathematical models bearing upon the analysis of the structure. Other early influential developments in mathematical sociology pertained to process.
For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory of social groups by constructing a model consisting of a deterministic system of differential equations. A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group. Absent or invisible ties In a footnote, Mark Granovetter defines what he considers to be absent ties. The concept of the invisible tie was proposed to overcome the contradiction between the adjective "absent" and his definition, which suggests that such ties exist and might "usefully be distinguished" from the absence of ties. From this perspective, the relationship between two familiar strangers, such as two people living on the same street, is not absent but invisible. Indeed, because such ties involve only limited interaction (as in the case of 'nodding relationships'), if any, they are hardly observable, and are often overlooked as a relevant type of ties. Absent or invisible ties nevertheless support people's sense of familiarity and belonging. Latent tie Adding any network-based means of communication, such as a new IRC channel, a social support group, or a Webboard, lays the groundwork for connectivity between formerly unconnected others. Similarly, laying an infrastructure, such as the Internet, intranets, wireless connectivity, grid computing, telephone lines, cellular service, or neighborhood networks, when combined with the devices that access them (phones, cellphones, computers, etc.), makes it possible for social networks to form. Such infrastructures make a connection available technically, even if not yet activated socially. These technical connections support latent social network ties, used here to indicate ties that are technically possible but not yet activated socially. They are only activated, i.e. converted from latent to weak, by some sort of social interaction between members, e.g. by telephoning someone, attending a group-wide meeting, reading and contributing to a Webboard, emailing others, etc. Given that such connectivity involves unrelated persons, the latent tie structure must be established by an authority beyond the persons concerned. Internet-based social support sites fit this profile. These are started by individuals with a particular interest in a subject, who may begin by posting information and providing the means for online discussion. The individualistic perspective Granovetter's 1973 work proved to be crucial to the individualistic approach to social network theory, as seen by the number of references to it in other papers. His argument asserts that weak ties, or "acquaintances", are less likely to be socially involved with one another than strong ties (close friends and family). By focusing on weak ties rather than strong ties, Granovetter highlights the importance of acquaintances in social networks. He argues that the only thing that can connect two social networks with strong ties is a weak tie: "… these clumps [strong-tie networks] would not, in fact, be connected to one another at all were it not for the existence of weak ties." It follows that in an all-covering social network individuals are at a disadvantage with only a few weak links, compared to individuals with multiple weak links, as they are disconnected from the other parts of the network.
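To make the bridging role of weak ties concrete, the short sketch below builds a toy network of two densely knit cliques joined by a single weak tie and checks which nodes are reachable with and without that tie. The graph itself is a made-up example, not data from any of the studies cited above.

```python
# Toy illustration of weak ties acting as bridges between densely knit cliques.
# The graph is a made-up example, not data from any study cited above.

from itertools import combinations

def build_network():
    strong, weak = set(), set()
    clique_a = ["A1", "A2", "A3", "A4"]
    clique_b = ["B1", "B2", "B3", "B4"]
    for clique in (clique_a, clique_b):
        for u, v in combinations(clique, 2):
            strong.add(frozenset((u, v)))      # everyone in a clique knows everyone
    weak.add(frozenset(("A1", "B1")))          # a single acquaintance tie bridges them
    return clique_a + clique_b, strong, weak

def reachable(edges, start):
    """Nodes reachable from `start` over the given set of undirected edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for edge in edges:
            if node in edge:
                (other,) = edge - {node}
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
    return seen

if __name__ == "__main__":
    nodes, strong, weak = build_network()
    print("strong ties only:", sorted(reachable(strong, "A1")))
    print("strong + weak   :", sorted(reachable(strong | weak, "A1")))
```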
Another interesting observation that Granovetter makes in his work is that the increasing specialization of individuals creates the necessity for weak ties, as all the other specialist information and knowledge is present in large social networks consisting predominantly of weak ties. Cross et al. (2001) confirm this by presenting six features which differentiate effective and ineffective knowledge-sharing relations: "1)knowing what other person knows and thus when to turn to them; 2) being able to gain timely access to that person; 3) willingness of the person sought out to engage in the problem solving rather than dump information; 4) a degree of safety in the relationship that promoted learning and creativity; 5) the factors put by Geert Hofstede; and 6) individual characteristics, such as openness" (pp 5). This fits in nicely with Granovetter's argument that "Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available." This weak/strong ties paradox is elaborated by myriad authors. The extent to which individuals are connected to others is called centrality. Sparrowe & Linden (1997) argue that the position of a person in a social network confers advantages such as organizational assimilation and job performance (Sparrowe et al., 2001); Burt (1992) expects it to result in promotions, Brass (1984) associates centrality with power, and Friedkin (1993) with influence in decision power. Other authors, such as Krackhardt and Porter (1986), contemplate the disadvantages of position in social networks, such as organizational exit (see also Sparrowe et al., 2001), and Wellman et al. (1988) introduce the use of social networks for emotional and material support. Blau and Fingerman, drawing from these and other studies, refer to weak ties as consequential strangers, positing that they provide some of the same benefits as intimates as well as many distinct and complementary functions. Labour market In the early 1990s, US social economist James D. Montgomery contributed to economic theories of network structures in the labour market. In 1991, Montgomery incorporated network structures in an adverse selection model to analyze the effects of social networks on labour market outcomes. In 1992, Montgomery explored the role of "weak ties", which he defined as non-frequent and transitory social relations in the labour market. He demonstrated that weak ties are positively correlated with higher wages and higher aggregate employment rates. See also Dependent origination Human bonding Six degrees of separation Bridge (interpersonal) Simmelian tie Social connection References External links Caves, Clusters, and Weak Ties: The Six Degrees World of Inventors – Harvard Business School, 28 November 2004 The Weakening of Strong Ties – Ross Mayfield, 15 September 2003 The Power of Weak Ties (in Recruiting) Interpersonal relationships
Interpersonal ties
[ "Biology" ]
3,362
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
7,274,002
https://en.wikipedia.org/wiki/2006%20Ivory%20Coast%20toxic%20waste%20dump
The 2006 Ivory Coast toxic waste dump was a health crisis in Ivory Coast in which a ship registered in Panama, the Probo Koala, chartered by the Singaporean-based oil and commodity shipping company Trafigura Beheer BV, offloaded toxic waste to an Ivorian waste handling company which disposed of it at the port of Abidjan. The local contractor, a company called Tommy, dumped the waste at 12 sites in and around the city in August 2006. The dumping, which took place against a backdrop of instability in Abidjan as a result of the country's first civil war, allegedly led to the death of 17 and 20 hospitalisations, with a further 26,000 people treated for symptoms of poisoning. In the days after the dumping, almost 100,000 Ivorians sought medical attention after Prime Minister Charles Konan Banny opened the hospitals and offered free healthcare to the capital's residents. Trafigura originally planned to dispose of the slops – which resulted from cleaning the vessel and contained 500 tonnes of a mixture of fuel, caustic soda, and hydrogen sulfide – at the port of Amsterdam in the Netherlands. The company refused to pay Dutch company Amsterdam Port Services (APS) for disposal after APS raised its charge from €27 to €1,000 per cubic meter. The Probo Koala was reportedly turned away by several countries before offloading the toxic waste at the Port of Abidjan. An inquiry in the Netherlands in late 2006 confirmed the composition of the waste substance. Trafigura denied any waste was transported from the Netherlands, saying that the substances contained only tiny amounts of hydrogen sulfide, and that the company did not know the substance was to be disposed of improperly. After two Trafigura officials who traveled to Ivory Coast to offer assistance were arrested and subsequently attacked in jail, the company paid for cleanup to the Ivorian government, without admitting wrongdoing in early 2007. A series of protests and resignations of Ivorian government officials followed this deal. In 2008, a civil lawsuit in London was launched by almost 30,000 Ivorians against Trafigura. In May 2009, Trafigura announced it would sue the BBC for libel after its Newsnight program alleged the company had knowingly sought to cover up its role in the incident. In September 2009 The Guardian obtained and published internal Trafigura emails showing that the traders responsible knew how dangerous the chemicals were. Trafigura agreed to a settlement of £30 million (US$42.4 million) to settle the class action suit against it. Law firm Leigh Day, which represented the Ivorian claimants, was later ruled to have been negligent in the way it paid out the settlement, after £6 million of the settlement funds was embezzled by officials of the Government of Ivory Coast. The incident Background In 2002, Mexican state-owned oil company Pemex began to accumulate significant quantities of coker gasoline, containing large amounts of sulfur and silica, at its Cadereyta refinery. By 2006 Pemex had run out of storage capacity and agreed to sell the coker gasoline to Trafigura. In early 2006, Pemex trucked the coker gasoline to Brownsville, Texas, where Trafigura loaded it aboard the Panamanian registered tanker Probo Koala, which was owned by Greek shipping company Prime Marine Management Inc. and chartered by Trafigura. Trafigura desired to strip the sulfurous products out of the coker gasoline to produce naphtha, which could then be sold. 
Instead of paying a refinery to do this work, Trafigura used an obsolete process on board the ship called "caustic washing", in which the coker gasoline was treated with caustic soda. The process worked, and the resulting naphtha was resold for a reported profit of . The waste resulting from the caustic washing would typically include hazardous substances such as sodium hydroxide, sodium sulfide and phenols. Dumping On 2 July 2006, the Probo Koala called at Amsterdam port in the Netherlands to discharge the slops contained in the vessel's dedicated slops tanks. During the transfer such a foul smell was released onto the city that the disposal company Amsterdam Port Services B.V. (APS) decided to consult with the city of Amsterdam. After half the waste was transferred APS increased the handling fee 30-fold. APS then informed the ship's master that authorities had given permission for the slops previously removed to be returned to the vessel. The Probo Koala departed Amsterdam on 5 July 2006 for Paldiski, Estonia, with the full knowledge and approval of the Dutch authorities. After taking on unleaded gasoline in Paldiski it left on 13 July 2006 on a previously-planned voyage to Lagos, Nigeria. In Lagos it unloaded the gasoline. Two offers to unload the slops in Lagos were refused by the captain and on 17 August 2006 the vessel sailed for Abidjan. On 19 August 2006, the Probo Koala offloaded more than 500 tons of toxic waste at the Port of Abidjan, Côte d'Ivoire. This material was then spread, allegedly by subcontractors, across the city and surrounding areas, dumped in waste grounds, public dumps, and along roads in populated areas. The substance gave off toxic gas and resulted in burns to lungs and skin, as well as severe headaches and vomiting. Seventeen people were confirmed to have died, and at least 30,000 injured. The company claimed that the waste was dirty water ("slops") used for cleaning the ship's gasoline tanks, but a Dutch government report, as well as an Ivorian investigation, disputed this, finding that it was toxic waste. During an ongoing civil lawsuit by over 30,000 Ivorian citizens against Trafigura, a Dutch government report concluded that in fact the liquid dumped contained two 'British tonnes' of hydrogen sulfide. Trafigura, following an investigative report by the BBC's Newsnight program, announced on 2009 that they would sue the BBC for libel. Immediate effects The scope of the dumping and the related illnesses were slow to emerge. While the first cargo was offloaded in August 2006, the dumping continued for almost three weeks before the population knew what was happening. But as early as 19 August, residents near the landfill at knew that trucks were dumping toxic liquid into the landfill and blocked the entrance of one of the trucks to the dump, which had been freshly painted with the logo of a newly created company. Residents near several landfills in the suburbs of Abidjan began complaining publicly of foul-smelling gas in the first week of September, and several people were reported to have died. Protests broke out in several areas against both the companies dumping liquid waste and the government. On , the government called for protesters to allow free circulation of traffic so the area's hospitals, which were complaining of a flood of injured, could operate. In the aftermath of the crisis, many top government figures resigned. This mass resignation has been called "unprecedented" in Côte d'Ivoire's history. 
In an effort to prevent the contamination of the food chain, large numbers of livestock (among them 450 pigs) affected by the dump were culled. Trafigura's description of events On 19 August 2006, the tanker ship Probo Koala, chartered by the company Trafigura and docked at the port of Abidjan, transferred a liquid into tankers owned by a firm called Compagnie Tommy. The company claims the ship had been chartered by Trafigura to transport oil to another West African port, and was returning to Europe, empty. The transfer at Abidjan, according to the company, was a routine maintenance stop, not a delivery of waste from European ports. Trafigura claims that this was done under an agreement that the waste would be treated and disposed of legally, and that the substance was waste ("slops") from the routine washing of the Probo Koala's tanks. Again according to Trafigura, it became apparent that the untreated slops had been dumped illegally at municipal refuse dumps. They contend that the slops were an alkaline mix of water, gasoline, and caustic soda, along with a very small amount of foul-smelling and toxic hydrogen sulfide. Further, the company says that their tests show that, while noxious, the slop from their ship could not have caused deaths, no matter how poorly it was handled by a third party. The company contends that the people of Abidjan, especially those living near dumps, suffered from a lifetime of exposure to toxic substances, not from their actions. Rejection in Europe The Probo Koala had its cargo rejected in Europe by Amsterdam Port Services BV, and was to be charged €500,000 in nearby Moerdijk. On it offloaded a liquid waste in Abidjan, paying around €18,500 for its disposal. According to the City of Amsterdam's report, before it dumped the waste in Abidjan, the Probo Koala was in port in the Netherlands from 2 to 2006. There the ship attempted to have the waste processed in Amsterdam, but Amsterdam Port Services BV, the company that had contracted to treat the waste, refused after its staff reported a noxious odor coming from the waste, which sickened several workers. A company specializing in the disposal of chemical waste, Afvalstoffen Terminal Moerdijk in nearby Moerdijk, tendered for the disposal of the waste (based on the samples it received) for €500,000. Instead, the material was pumped back into the Probo Koala, which then left port on , appearing on in Côte d'Ivoire, where Compagnie Tommy, which was registered only days before the arrival of the Probo Koala, was contracted for €18,500 to dispose of the waste. The company contends that no waste was transported from Europe, and the incident was an accident caused by the mishandling by an Ivorian company of waste water used to wash the ship's storage tanks. A Dutch newspaper reported on this possibility, saying the waste could have been generated as a result of attempted on-board desulfurization (removing mercaptans) of naphtha in a Merox-like process. In this way high mercaptan-laden gasoline is upgraded to meet certain country-specific specifications. This would explain the water/caustic soda/gasoline mix and also the presence in trace amounts of a certain catalyst called ARI-100 EXLz, generally used in this process. It would, on the other hand, not explain the presence of hydrogen sulfide, as the final product of the Merox process is an organic disulfide, unless the attempt at desulfurization had failed. The company has always contended that the amount of hydrogen sulfide in the waste was small. 
Press and government findings contend there was a substantial amount of hydrogen sulfide dumped, some 2 tonnes out of the 500 tonnes of dumped liquid. Aftermath Deaths and illnesses In the weeks following the incident the BBC reported that 17 people died, 23 were hospitalized, and a further 40,000 sought medical treatment (due to headaches, nosebleeds, and stomach pains). These numbers were revised upward over time, with the numbers reported by the Ivorian government in 2008 reaching 17 dead, dozens severely ill, and 30,000 receiving medical treatment for ailments connected to the chemical exposure, out of almost 100,000 seeking medical treatment at the time. While the company and the Ivorian government continue to disagree on the exact makeup of the chemicals, specialists from the United Nations, France, and the Dutch National Institute of Public Health and the Environment (RIVM) were sent to Abidjan to investigate the situation. Fall of government Following revelations by local press and government on the extent of the illnesses involved, the nine-month-old transitional government of Prime Minister Charles Konan Banny resigned. The government vowed to provide treatment and pay all medical costs associated with the waste dump. Lawsuit by victims and compensation On 11 November 2006, a £100 million lawsuit was filed in the High Court in London by the UK firm Leigh Day & Co. alleging that "Trafigura were negligent and that this, and the nuisance resulting from their actions, caused the injuries to the local citizens." Martyn Day, of Leigh Day & Co said, "This has been a disaster on a monumental scale. We hold Trafigura fully to account for all the deaths and injuries that have resulted from the dumping of their waste." In response, Trafigura announced on Monday 2006 that it had started libel proceedings against British lawyer Martyn Day, of Leigh Day & Co. On 20 September 2009, both cases were dropped in an out-of-court settlement. Trafigura announced it would pay more than to claimants, noting that 20 independent experts had examined the case but were "unable to identify a link". The package would be divided into individual payments of $1,546, which would then be paid to 31,000 people. The deal came soon after a report by the UN claimed there was "strong prima facie evidence" that the waste was responsible for injuries. The company responded by saying they were "appalled at the basic lack of balance and analytical rigour reflected in the report." The Ivorian National Federation of Victims of Toxic Waste said Trafigura was trying to avoid a legal case. Trafigura claimed that at least 75% of the recipients of the money agreed with the deal. In January 2010, The Guardian reported that solicitors Leigh Day, working for the victims of toxic poisoning, had been ordered by a Côte d'Ivoire court to transfer victims' compensation to a "shadowy local organisation", using the account of Claude Gouhourou, a "community representative". Martyn Day, a partner in the firm, feared that the cash would not reach the victims. Arrests Shortly after it became apparent that the toxic slops from the Probo Koala had led to the outbreak of sickness, two Trafigura executives, Claude Dauphin and Jean-Pierre Valentini, travelled to Abidjan. They were arrested on , four days after their arrival, and were held in Abidjan's Maca prison, charged with breaking Côte d'Ivoire's laws against poisoning. There were several reported attacks on the two executives during their imprisonment. 
Trafigura called for their immediate release, but this did not occur until a settlement for the cleanup was paid to the Ivorian government. Seven Ivorians were eventually brought to trial in Abidjan for their part in the dumping. The head of the Ivorian contractor who dumped more than 500 tonnes of toxic liquid was sentenced to 20 years in prison in November 2008. Ivorian government finding A November 2006 Ivorian government report into the incident said that Trafigura was to blame for the dumping of waste, and was aided by Ivorians. A government committee concluded that Trafigura knew that the nation had no facilities to store such waste and knowingly transported the waste from Europe to Abidjan. The report further claimed that the "Compagnie Tommy" which actually dumped the substance "shows all the signs of being a front company set up specifically to handle the Trafigura waste", and was "established in a period between Trafigura's decision not to pay for expensive waste disposal in Amsterdam and its ship's arrival in Abidjan." The government fact-finding committee had no prosecutorial powers, and its findings were rejected by the company. The committee also found that officials in the Port of Abidjan and a variety of local and national bodies either failed to plug holes in environmental laws or were guilty of ignoring laws through corruption. Company payment On 13 February 2007, Trafigura agreed to pay the Ivorian government (US$198m) for the clean-up of the waste; however the group denied any liability for the dumping, and as a part of the deal the government would not pursue further action against the group. The Trafigura employees Claude Dauphin, Jean-Pierre Valentini and Nzi Kablan, held by the Côte d'Ivoire authorities after the incident, were then released and charges were dropped against them ("Ivory Coast toxic clean-up offer", BBC News, 13 February 2007). Further prosecutions against Ivorian citizens not employed by Trafigura continued. Dutch inquiry On 6 December 2006, an independent inquiry launched by the City of Amsterdam concluded that the city was negligent when they allowed Trafigura to take waste back on board the Probo Koala in Amsterdam in July. Part of the Probo cargo was offloaded with the intent of having it processed by an Amsterdam waste processing company, but when this turned out to be too expensive Trafigura took it back. The responsible local civil servants were reportedly unaware of existing Dutch environmental laws that would not allow its export given these circumstances. On 19 December 2006, a majority of the Dutch House of Representatives expressed their desire for a new investigation into the Probo Koala. On 8 January 2007, The Guardian reported that the legal team for Leigh Day had arrived in Abidjan, and would begin taking statements from thousands of witnesses in the area. In late 2008, a criminal prosecution was begun in the Netherlands by the Dutch Public Prosecutor's office. While the trial was not scheduled to begin until late 2009, the head of Trafigura, Claude Dauphin, was specifically cited as not under indictment. Rather, the company itself, the captain of the Probo Koala, and Amsterdam port authorities would be charged with "illegally transporting toxic waste into and out of Amsterdam harbour" and falsification of the chemical composition of the ship's cargo on documents. 
The Dutch Supreme Court ruled on 6 July 2010 that the Court of Appeal should review again whether Claude Dauphin can be prosecuted for his part in the Probo Koala case, specifically for leading the export of dangerous waste materials. Earlier the Court of Appeal had ruled that this was not possible. On 23 July 2010, Trafigura were fined €1 million for the transit of the waste through Amsterdam before being taken to the Côte d'Ivoire to be dumped. The court ruled that the firm had concealed the problem when it was first unloaded from a ship in Amsterdam. While previous settlements had been made in the case, this was the first time Trafigura had been found guilty of criminal charges over the incident. On 16 November 2012, Trafigura and the Dutch authorities agreed to a settlement. The settlement obliged Trafigura to pay the existing 1 million euro fine, and in addition, the company must also pay Dutch authorities a further 300,000 euros in compensation - the money it saved by dumping the toxic waste in Abidjan rather than having it properly disposed of in the Netherlands. The Dutch also agreed to stop the personal court case against Trafigura's chairman, Claude Dauphin, in exchange for a 67,000 euro fine. Minton Report and legal controversy In September 2006, Trafigura commissioned the internal "Minton Report" to determine the toxicity of the waste dumped in Abidjan. The Minton Report was subsequently leaked to the WikiLeaks web site and remains available there. On 11 September 2009, Trafigura, via lawyers Carter-Ruck, obtained a secret "super-injunction" against The Guardian, banning that newspaper from publishing the contents of the Minton report. Trafigura also threatened a number of other media organizations with legal action if they published the report's contents, including the Norwegian Broadcasting Corporation and The Chemical Engineer magazine. On 12 October, Carter-Ruck warned The Guardian against mentioning the content of a parliamentary question that was due to be asked about the Minton Report. Instead the paper published an article stating that they were unable to report on an unspecified question and claiming that the situation appeared to "call into question privileges guaranteeing free speech established under the 1689 Bill of Rights". The suppressed details rapidly circulated via the Internet and Twitter (Higham, Nick (13 October 2009). "When is a secret not a secret?" BBC News) and, amid uproar, Carter-Ruck agreed the next day to the modification of the injunction before it was challenged in court, permitting The Guardian to reveal the existence of the question and the injunction. The 11 September 2009 injunction remained in force in the United Kingdom until it was lifted on the night of . The report contains discussion of various harmful chemicals "likely to be present" in the waste—sodium hydroxide, cobalt phthalocyanine sulfonate, coker naphtha, thiols, sodium alkanethiolate, sodium hydrosulfide, sodium sulfide, dialkyl disulfides, hydrogen sulfide—and notes that some of these "may cause harm at some distance". The report says potential health effects include "burns to the skin, eyes and lungs, vomiting, diarrhea, loss of consciousness and death", and suggests that the high number of reported casualties is "consistent with there having been a significant release of hydrogen sulphide gas". The version published on WikiLeaks, which has been republished by The Guardian, appears to be a preliminary draft, containing poor formatting and one comment in French. 
Trafigura has stated that the report was only preliminary and was inaccurate. Suppressed BBC report Faced with a libel case which, under British law, could drag on for years and cost millions of pounds, the BBC on 10 December 2009 removed the original story entitled "Dirty Tricks and Toxic Waste in the Ivory Coast", along with accompanying video, from its website. The story featured interviews with victims in Côte d'Ivoire, including relatives of two children who, it claimed, died from the effects of the waste. The story also claimed that Trafigura brought "ruin" on the country in order to make a "massive profit". The stories remain available on WikiLeaks. On 15 December 2009, the broadcaster agreed to apologise to Trafigura for the "Dirty Tricks" report, pay £25,000 to charity, and withdraw any allegation that Trafigura's toxic waste dumped in Africa had caused deaths. But at the same time, the BBC issued a combative statement, pointing out that the dumping of Trafigura's hazardous waste had led to the British-based oil trader being forced to pay out £30m in compensation to victims. "The BBC has played a leading role in bringing to the public's attention the actions of Trafigura in the illegal dumping of 500 tonnes of hazardous waste", the statement said. "The dumping caused a public health emergency with tens of thousands of people seeking treatment." The BBC did not agree to remove further allegations about the dumping, such as the 16 September article "Trafigura knew of waste dangers", which quoted from internal Trafigura emails showing that the company knew the waste was toxic before they dumped it. In one, a Trafigura employee says that "This operation is no longer allowed in the European Union, the United States and Singapore" and that it is "banned in most countries due to the 'hazardous nature of the waste'", and another says "environmental agencies do not allow disposal of the toxic caustic." Daniel Pearl Award On 24 April 2010, the International Consortium of Investigative Journalists presented the Daniel Pearl Award for Outstanding International Investigative Reporting to the team of journalists who had revealed the story of Trafigura and the Côte d'Ivoire toxic waste dump. The award went to the British journalists Meirion Jones and Liz MacKean from BBC Newsnight and David Leigh from The Guardian, Synnove Bakke and Kjersti Knudsson from Norwegian TV, and Jeroen Trommelen from the Dutch paper De Volkskrant. The citation says the award was for reports "which exposed how a powerful offshore oil trader tried to cover up the poisoning of 30,000 West Africans". Probo Koala The vessel (renamed Gulf Jash) was initially heading to Chittagong, Bangladesh for dismantling. However, the Government of Bangladesh banned the ship from entering its waters and therefore, as of June 2011, the ship was reportedly headed for Alang, India. In August 2011 it was again renamed the Hua Feng. In 2012 the ship, renamed Hua Wen, was operating between China and Indonesia, and in 2013 she entered a ship breaking yard in Taizhou, China where she was due for demolition. See also Health crisis References External links Amnesty International and Greenpeace Netherlands: The Toxic Truth - About a company called Trafigura, a ship called the Probo Koala, and the dumping of toxic waste in Côte d'Ivoire. Amnesty International Publications, London 2012. 
(PDF, 232 pages, 7.6 Mb ) The Trafigura files and how to read them – internal Trafigura emails published by The Guardian'' Minton Report 2006 health disasters 2006 industrial disasters 2006 in Ivory Coast Man-made disasters in Ivory Coast Environment of Ivory Coast Waste disposal incidents Environmental disasters in Africa 2006 in the environment Abidjan August 2006 events in Africa 2006 disasters in Africa Environmental controversies Ocean pollution Health in Ivory Coast 2006 in international relations Health disasters in Africa
2006 Ivory Coast toxic waste dump
[ "Chemistry", "Environmental_science" ]
5,229
[ "Ocean pollution", "Water pollution" ]
7,274,018
https://en.wikipedia.org/wiki/Debye%E2%80%93Falkenhagen%20effect
The increase in the conductivity of an electrolyte solution when the applied voltage has a very high frequency is known as the Debye–Falkenhagen effect. Impedance measurements on the water–p-dioxane and methanol–toluene systems have confirmed Falkenhagen's predictions, made in 1929. See also Peter Debye Debye length Hans Falkenhagen Wien effect References Electrochemical concepts Peter Debye
Debye–Falkenhagen effect
[ "Chemistry" ]
98
[ "Electrochemistry", "Electrochemical concepts", "Physical chemistry stubs", "Electrochemistry stubs" ]
7,274,114
https://en.wikipedia.org/wiki/STAR%20model
In statistics, Smooth Transition Autoregressive (STAR) models are typically applied to time series data as an extension of autoregressive models, in order to allow for a higher degree of flexibility in model parameters through a smooth transition. Given a time series of data xt, the STAR model is a tool for understanding and, perhaps, predicting future values in this series, assuming that the behaviour of the series changes depending on the value of the transition variable. The transition might depend on the past values of the x series (similar to the SETAR models), or on exogenous variables. The model consists of 2 autoregressive (AR) parts linked by the transition function. The model is usually referred to as a STAR(p) model, preceded by a letter describing the transition function (see below), where p is the order of the autoregressive part. The most popular transition functions include the exponential function and the first- and second-order logistic functions. They give rise to the Logistic STAR (LSTAR) and Exponential STAR (ESTAR) models. Definition AutoRegressive Models Consider a simple AR(p) model for a time series yt: y_t = γ_0 + γ_1 y_{t−1} + γ_2 y_{t−2} + ... + γ_p y_{t−p} + ε_t, where: γ_i for i = 1, 2, ..., p are the autoregressive coefficients, assumed to be constant over time; ε_t stands for a white-noise error term with constant variance. This can be written in the following vector form: y_t = γ x_t + ε_t, where: x_t = (1, y_{t−1}, y_{t−2}, ..., y_{t−p})′ is a column vector of variables; γ = (γ_0, γ_1, ..., γ_p) is the vector of parameters; ε_t stands for a white-noise error term with constant variance. STAR as an Extension of the AutoRegressive Model STAR models were introduced and comprehensively developed by Kung-sik Chan and Howell Tong in 1986 (esp. p. 187), in which the same acronym was used. It originally stood for Smooth Threshold AutoRegressive. For some background history, see Tong (2011, 2012). The models can be thought of as an extension of the autoregressive models discussed above, allowing for changes in the model parameters according to the value of a transition variable zt. Chan and Tong (1986) rigorously proved that the family of STAR models includes the SETAR model as a limiting case by showing the uniform boundedness and equicontinuity with respect to the switching parameter. Without this proof, to say that STAR models nest the SETAR model lacks justification. Unfortunately, whether one should use a SETAR model or a STAR model for one's data has been a matter of subjective judgement, taste and inclination in much of the literature. Fortunately, the test procedure, based on David Cox's test of separate families of hypotheses and developed by Gao, Ling and Tong (2018, Statistica Sinica, volume 28, 2857-2883), is now available to address this issue. Such a test is important before adopting a STAR model because, among other issues, the parameter controlling its rate of switching is notoriously data-hungry. Defined in this way, the STAR model can be presented as follows: y_t = γ^(1) x_t (1 − G(z_t, ζ, c)) + γ^(2) x_t G(z_t, ζ, c) + ε_t, where: x_t = (1, y_{t−1}, y_{t−2}, ..., y_{t−p})′ is a column vector of variables; G(z_t, ζ, c) is the transition function, bounded between 0 and 1. Basic Structure They can be understood as a two-regime SETAR model with a smooth transition between regimes, or as a continuum of regimes. In both cases the presence of the transition function is the defining feature of the model, as it allows for changes in the values of the parameters. 
Transition Function Three basic transition functions and the names of the resulting models are: the first-order logistic function G(z_t; ζ, c) = 1 / (1 + exp(−ζ(z_t − c))), which results in the Logistic STAR (LSTAR) model; the exponential function G(z_t; ζ, c) = 1 − exp(−ζ(z_t − c)^2), which results in the Exponential STAR (ESTAR) model; and the second-order logistic function G(z_t; ζ, c_1, c_2) = 1 / (1 + exp(−ζ(z_t − c_1)(z_t − c_2))). See also Characterizations of the exponential function Exponential growth Exponentiation Generalised logistic function Logistic distribution SETAR (model) References Nonlinear systems Time series models Nonlinear time series analysis
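A short simulation can make the two-regime structure concrete. The following Python sketch assumes an LSTAR(1) specification in which the transition variable is the lagged value of the series; the helper names and the parameter values (gamma1, gamma2, zeta, c) are illustrative choices made for this example and are not taken from the article.

import numpy as np

def logistic_transition(z, zeta, c):
    # First-order logistic transition function G(z; zeta, c), bounded between 0 and 1.
    return 1.0 / (1.0 + np.exp(-zeta * (z - c)))

def simulate_lstar1(n, gamma1, gamma2, zeta, c, sigma=1.0, seed=0):
    # Simulate an LSTAR(1) series; gamma1 and gamma2 are (intercept, slope) pairs
    # for the two autoregressive regimes blended by the transition function.
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        g = logistic_transition(y[t - 1], zeta, c)
        regime1 = gamma1[0] + gamma1[1] * y[t - 1]   # active when G is near 0
        regime2 = gamma2[0] + gamma2[1] * y[t - 1]   # active when G is near 1
        y[t] = (1.0 - g) * regime1 + g * regime2 + sigma * rng.standard_normal()
    return y

# Example: a persistent regime below the threshold, a mean-reverting regime above it.
series = simulate_lstar1(500, gamma1=(0.1, 0.9), gamma2=(0.0, -0.5), zeta=5.0, c=0.0)

Because the logistic function varies smoothly with y[t-1], the effective autoregressive coefficient drifts between 0.9 and -0.5 rather than switching abruptly, which is exactly the feature that distinguishes STAR from SETAR models.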
STAR model
[ "Mathematics" ]
776
[ "Nonlinear systems", "Dynamical systems" ]
7,274,119
https://en.wikipedia.org/wiki/Fabric%20Application%20Interface%20Standard
ANSI INCITS 432-2007: Information technology - Fabric Application Interface Standard or FAIS is an application programming interface framework for implementing storage applications in a storage area network. FAIS is defined by Technical Committee T11 of the International Committee for Information Technology Standards. It provides a high-speed, highly reliable device for performing fabric-based services throughout heterogeneous data center environments. Furthermore, it describes extensions to the Fibre Channel specification, specifically regarding Fibre Channel over 4-pair twisted pair cabling as described in ISO/IEC 11801. References Storage area networks Storage software
Fabric Application Interface Standard
[ "Technology" ]
118
[ "Computing stubs", "Computer network stubs" ]
7,275,487
https://en.wikipedia.org/wiki/Discrete%20manufacturing
Discrete manufacturing is the production of items that are distinct from one another. Examples of discrete manufacturing products are automobiles, furniture, smartphones, and airplanes. The resulting products are easily identifiable and differ greatly from process manufacturing where the products are undifferentiated, for example oil, natural gas and salt. Discrete manufacturing is often characterized by individual or separate unit production. Units can be produced in low volume with very high complexity or high volumes of low complexity. Low volume/high complexity production results in the need for a flexible manufacturing system that can improve quality and time-to-market speed while cutting costs. High volume/low complexity production puts high premiums on inventory controls, lead times and reducing or limiting materials costs and waste. Industry Profile - Discrete Manufacturing includes makers of consumer electronics, computers and accessories, appliances, and other household items, as well as "big ticket" consumer and commercial goods like cars and aeroplanes. Discrete Manufacturing companies make physical products that go directly to businesses and consumers, and assemblies that are used by other manufacturers. The processes deployed in discrete manufacturing are not continuous in nature. Each process can be individually started or stopped and can be run at varying production rates. The final product may be produced out of single or multiple inputs. Producing a steel structure will need only one type of raw material - steel. Producing a mobile phone requires many different inputs. The plastic case, LCD display, mainboard, PVC keypad, sockets and cables are made from different materials, at different places. This is different from process manufacturing like production of paper or petroleum refining, where the end product is obtained by a continuous process or a set of continuous processes. The production capacity of a discrete-manufacturing factory as a whole is difficult to calculate, because such a factory typically makes multiple different products whose production times and machine setups differ from one another. Some discrete manufacturing vertical markets Industrial Manufacturing High Tech Automotive Aerospace and Defense Machinery Electronics References Manufacturing
Discrete manufacturing
[ "Engineering" ]
409
[ "Manufacturing", "Mechanical engineering" ]
7,275,524
https://en.wikipedia.org/wiki/DSOS
DSOS (Deep Six Operating System) was a real-time operating system (sometimes termed an operating system kernel) developed by Texas Instruments' division Geophysical Services Incorporated (GSI) in the mid-1970s. Background The Geophysical Services division of Texas Instruments' main business was to search for petroleum (oil). They would collect data in likely spots around the world, process that data using high performance computers, and produce analyses that guided oil companies toward promising sites for drilling. Much of the oil being sought was to be found beneath the ocean, hence GSI maintained a fleet of ships to collect seismic data from remote regions of the world. To do this properly, it was essential that the ships be navigated precisely. If evidence of oil is found, one cannot just mark an X on a tree. The oil is thousands of feet below the ocean and typically hundreds of miles from land. But this was a decade or more before GPS existed, thus the processing load to keep an accurate picture of where a finding is, was considerable. The GEONAV systems, which used DSOS (Frailey, 1975) as their operating system, performed the required navigation, and collected, processed, and stored the seismic data being received in real-time. Naming The name Deep Six Operating System was the brainchild of Phil Ward (subsequently a world-renowned GPS expert) who, at the time, was manager of the project and slightly skeptical of the computer science professor, Dennis Frailey, who insisted that an operating system was the solution to the problem at hand. In a sense the system lived up to its name, according to legend. Supposedly one of the ships hit an old World War II naval mine off the coast of Egypt and sank while being navigated by GEONAV and DSOS. Why an operating system? In the 1970s, most real-time applications did not use operating systems because the latter were perceived as adding too much overhead. Typical computers of the time had barely enough computing power to handle the tasks at hand. Moreover, most software of this type was written in assembly language. As a consequence, real-time systems were classic examples of spaghetti code: complex masses of assembly language software using all sorts of machine-dependent tricks to achieve maximum performance. DSOS ran on a Texas Instruments 980 minicomputer being used for marine navigation on GSI's fleet. DSOS was created to bring some order to the chaos that was typical of real-time system design at that time. The 980 was, for its time, a relatively powerful small computer that offered memory protection and multiple-priority interrupt abilities. DSOS was designed to exploit these features. Significance DSOS (Frailey, 1975) was one of the pioneering efforts in real-time operating systems. Incorporating many of the principles being introduced at the time in mainframe computer systems, such as semaphores, memory management, task management, and software interrupts, it used a clever scheme to assure appropriate real-time performance while providing many services formerly uncommon in the real-time domain (such as an orderly way to communicate with external devices and computer operators, multitasking, maintaining records, a disciplined form of inter-task communication, a reliable real-time clock, memory protection, and debugging support). It remained in use for at least three decades and it demonstrated that, if well designed, an operating system can make a real-time system faster (and vastly more maintainable) than what had been typical before. 
Today, almost all real-time applications use operating systems of this type. References Real-time operating systems
DSOS
[ "Technology" ]
729
[ "Real-time computing", "Real-time operating systems" ]
7,275,615
https://en.wikipedia.org/wiki/ExtremeTech
ExtremeTech is a technology weblog, launched in June 2001, which focuses on hardware, computer software, science and other technologies. Between 2003 and 2005, ExtremeTech was also a print magazine and the publisher of a popular series of how-to and do-it-yourself books. Background ExtremeTech was launched as a website in June 2001, with co-founder Bill Machrone as Editor-in-Chief, and fellow co-founder Nick Stam as Senior Technical Director. Loyd Case, Dave Salvator, Mark Hachman, and Jim Lynch were other original core ET staff. In 2002 Jim Louderback became the Editor-in-Chief. When initially launched, ExtremeTech covered a broad range of technical topics with very indepth technical stories. Topic areas included core PC techniques (CPUs/GPUs), networking, operating systems, software development, display technology, printers, scanners etc. By 2003, Ziff Davis management wanted to reduce expenses and cut back content to core PC tech areas, focusing on how to build and optimize your PC. Loyd Case took over as Editor-in-Chief, and Jason Cross joined as a technology analyst. In mid-2009, due to sinking corporate-level finances, Ziff Davis laid off most of the core team, and Jeremy Kaplan (Executive Editor of PC Magazine and EIC of ExtremeTech Magazine) tried to keep the online site going, but it was quite challenging without much dedicated staff. Similarly, Matthew Murray (currently Editor of PC Magazine's Digital Edition) tried to keep things alive. As described below in the Shutdown and Relaunch section in April 2011, the Ziff Davis management re-invested in ExtremeTech, and the site relaunched under Managing Editor Sal Cangeloso and Senior Editor Sebastian Anthony. ExtremeTech Magazine The magazine was first published in fall 2004 (Volume 1, Issue 1). The first issue noted different staff members for the website and magazine. Staff included Editor-in-Chief Michael J. Miller, Editor Jeremy Kaplan, Technical Director Loyd Case, Senior Technical Analyst Dave Salvator, and others. Subsequent issues were published in winter 2004 (Volume 1, Issue 2), spring 2005 (Volume 1, Issue 3), summer 2005 (Volume 1, Issue 4), with the magazine ending its run in fall 2005 (Volume 1, Issue 5). Shutdown and relaunch The site ceased updating daily on June 26, 2009 due to most of its core staff members being laid off. On April 26, 2011 it was announced that a relaunch was slated for late spring. The announcement noted that along with a complete visual redesign, ExtremeTech would be "widening its scope" to cover new topics that didn't exist when the site was first conceived in 2001. Sebastian Anthony, previously an editor at AOL's Download Squad software weblog, led the editorial side of the relaunch. Writers ExtremeTech is currently managed by Joel Hruska, who also served as the site's lead writer from 2015 to 2021. Previously, it was managed by Jamie Lendino, who came from PCMag.com and had written for ExtremeTech from 2005-2010. He was formerly the editor-in-chief of Smart Device Central. Other writers include David Cardinal, Jessica Hall, Adrianna Nine, Josh Norem, and Ryan Whitwam. Former editors Sebastian Anthony, who led the editorial side of ExtremeTech's relaunch in 2011, left at the end of 2014 to launch Ars Technica in the UK. References External links ExtremeTech – official website Computing websites American technology news websites Internet properties established in 2001
ExtremeTech
[ "Technology" ]
744
[ "Computing websites" ]
7,276,069
https://en.wikipedia.org/wiki/Kendall%20tau%20distance
The Kendall tau rank distance is a metric (distance function) that counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more dissimilar the two lists are. Kendall tau distance is also called bubble-sort distance since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list. The Kendall tau distance was created by Maurice Kendall. Definition The Kendall tau ranking distance between two lists τ1 and τ2 is K(τ1, τ2) = |{(i, j) : i < j, (τ1(i) < τ1(j) and τ2(i) > τ2(j)) or (τ1(i) > τ1(j) and τ2(i) < τ2(j))}|, where τ1(i) and τ2(i) are the rankings of the element i in τ1 and τ2 respectively. K(τ1, τ2) will be equal to 0 if the two lists are identical and n(n − 1)/2 (where n is the list size) if one list is the reverse of the other. Kendall tau distance may also be defined as K(τ1, τ2) = Σ over {i, j} in P of K̄_{i,j}(τ1, τ2), where P is the set of unordered pairs of distinct elements in τ1 and τ2; K̄_{i,j}(τ1, τ2) = 0 if i and j are in the same order in τ1 and τ2, and K̄_{i,j}(τ1, τ2) = 1 if i and j are in the opposite order in τ1 and τ2. Kendall tau distance can also be defined as the total number of discordant pairs. Kendall tau distance in Rankings: A permutation (or ranking) is an array of N integers where each of the integers between 0 and N-1 appears exactly once. The Kendall tau distance between two rankings is the number of pairs that are in different order in the two rankings. For example, the Kendall tau distance between 0 3 1 6 2 5 4 and 1 0 3 6 4 2 5 is four because the pairs 0-1, 3-1, 2-4, 5-4 are in different order in the two rankings, but all other pairs are in the same order. The normalized Kendall tau distance is K(τ1, τ2) / (n(n − 1)/2) and therefore lies in the interval [0,1]. If the Kendall tau distance function is applied to the raw lists rather than to their rankings τ1 and τ2 (where τ1 and τ2 are the rankings of the elements of the first and second lists respectively), then the triangular inequality is not guaranteed. The triangular inequality fails sometimes also in cases where there are repetitions in the lists. So then we are not dealing with a metric anymore. Generalised versions of Kendall tau distance have been proposed to give weights to different items and different positions in the ranking. Comparison to Kendall tau rank correlation coefficient The Kendall tau distance (K) must not be confused with the Kendall tau rank correlation coefficient (τ) used in statistics. They are related by τ = 1 − 4K / (n(n − 1)), or more simply by τ = 1 − 2 K_n, where K_n is the normalised distance (see above). The distance K is a value between 0 and n(n − 1)/2. (The normalised distance is between 0 and 1.) The correlation is between -1 and 1. The distance between equal rankings is 0, and the correlation between equal rankings is 1. The distance between reversed rankings is n(n − 1)/2, and the correlation between reversed rankings is -1. For example, comparing the rankings A>B>C>D and A>B>C>D, the distance is 0 and the correlation is 1. Comparing the rankings A>B>C>D and D>C>B>A, the distance is 6 and the correlation is -1. Comparing the rankings A>B>C>D and B>D>A>C, the distance is 3 and the correlation is 0. Example Suppose one ranks a group of five people by height and by weight: Here person A is tallest and third-heaviest, B is the second-tallest and fourth-heaviest, and so on. In order to calculate the Kendall tau distance between these two rankings, pair each person with every other person and count the number of times the values in list 1 are in the opposite order of the values in list 2. Since there are four pairs whose values are in opposite order, the Kendall tau distance is 4. The normalized Kendall tau distance is 4 / (5(5 − 1)/2) = 0.4. A value of 0.4 indicates that 40% of pairs differ in ordering between the two lists. 
Computing the Kendall tau distance A naive implementation in Python (using NumPy) is:

import numpy as np

def normalised_kendall_tau_distance(values1, values2):
    """Compute the normalised Kendall tau distance.

    values1[k] and values2[k] are the rank (or score) assigned to item k by the
    two rankings, as in the height/weight example above.
    """
    n = len(values1)
    assert len(values2) == n, "Both lists have to be of equal length"
    v1, v2 = np.asarray(values1), np.asarray(values2)
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    # A pair (i, j) is discordant when the two rankings order its items in opposite ways.
    ndisordered = np.logical_or(
        np.logical_and(v1[i] < v1[j], v2[i] > v2[j]),
        np.logical_and(v1[i] > v1[j], v2[i] < v2[j]),
    ).sum()
    # Each discordant pair is counted twice over the ordered index grid, hence n * (n - 1).
    return ndisordered / (n * (n - 1))

However, this requires O(n^2) memory, which is inefficient for large arrays. Given two rankings τ1 and τ2, it is possible to rename the items such that τ1 becomes the identity permutation. Then, the problem of computing the Kendall tau distance reduces to computing the number of inversions in τ2, that is, the number of index pairs (i, j) such that i < j while τ2(i) > τ2(j). There are several algorithms for calculating this number. A simple algorithm based on merge sort requires time O(n log n). A more advanced algorithm requires time O(n sqrt(log n)). Here is a basic C implementation.

#include <stdbool.h>

int kendallTau(short x[], short y[], int len) {
    int i, j, v = 0;
    bool a, b;
    for (i = 0; i < len; i++) {
        for (j = i + 1; j < len; j++) {
            /* The pair (i, j) is discordant when x and y order it in opposite ways. */
            a = x[i] < x[j] && y[i] > y[j];
            b = x[i] > x[j] && y[i] < y[j];
            if (a || b)
                v++;
        }
    }
    return v;
}

float normalize(int kt, int len) {
    return kt / (len * (len - 1) / 2.0);
}

See also Kendall tau rank correlation coefficient Spearman's rank correlation coefficient Kemeny–Young maximum-likelihood voting rule References External links Online software: computes Kendall's tau rank correlation QuickVote — A website that calculates the Kendall tau distance between two interactive ranking lists and shows the pairwise disagreements between the lists. Covariance and correlation Statistical distance Comparison of assessments Permutations Distance
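The merge-sort approach mentioned above can be sketched briefly. The following Python sketch assumes the same input convention as the NumPy function above (each list holds the rank or score assigned to item k, with no ties); the function names count_inversions and kendall_tau_distance_fast are illustrative, not part of any standard library.

def count_inversions(seq):
    # Count pairs (i, j) with i < j and seq[i] > seq[j] using merge sort, in O(n log n).
    if len(seq) <= 1:
        return 0, list(seq)
    mid = len(seq) // 2
    left_count, left = count_inversions(seq[:mid])
    right_count, right = count_inversions(seq[mid:])
    merged, count = [], left_count + right_count
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # Every remaining element of `left` is greater than right[j],
            # so each of them forms an inversion with right[j].
            count += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return count, merged

def kendall_tau_distance_fast(values1, values2):
    # Reorder the items so that the first ranking becomes increasing, then count
    # inversions of the second ranking in that order.
    n = len(values1)
    order = sorted(range(n), key=lambda k: values1[k])
    relabelled = [values2[k] for k in order]
    inversions, _ = count_inversions(relabelled)
    return inversions  # divide by n * (n - 1) / 2 for the normalised distance

For the height/weight example above, kendall_tau_distance_fast([1, 2, 3, 4, 5], [3, 4, 1, 2, 5]) returns 4, and dividing by 5(5 − 1)/2 gives the normalised value 0.4.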
Kendall tau distance
[ "Physics", "Mathematics" ]
1,302
[ "Functions and mappings", "Distance", "Physical quantities", "Statistical distance", "Permutations", "Quantity", "Mathematical objects", "Size", "Combinatorics", "Space", "Mathematical relations", "Spacetime", "Wikipedia categories named after physical quantities" ]
7,276,195
https://en.wikipedia.org/wiki/Nocturnal%20penile%20tumescence
Nocturnal penile tumescence (NPT) is a spontaneous erection of the penis during sleep or when waking up. Along with nocturnal clitoral tumescence, it is also known as sleep-related erection. Colloquially, the term morning wood, or less commonly, morning glory is also used, although this is more commonly used to refer specifically to an erection beginning during sleep and persisting into the period just after waking. Men without physiological erectile dysfunction or severe depression experience nocturnal penile tumescence, usually three to five times during a period of sleep, typically during rapid eye movement sleep. Nocturnal penile tumescence is believed to contribute to penile health. Mechanism The cause of nocturnal penile tumescence is not known with certainty. In a wakeful state, in the presence of mechanical stimulation with or without an arousal, erection is initiated by the parasympathetic division of the autonomic nervous system with minimal input from the central nervous system. Parasympathetic branches extend from the sacral plexus of the spinal nerves into the arteries supplying the erectile tissue; upon stimulation, these nerve branches release acetylcholine, which in turn causes release of nitric oxide from endothelial cells in the trabecular arteries, that eventually causes tumescence. Bancroft (2005) hypothesizes that the noradrenergic neurons of the locus ceruleus in the brain are perpetually inhibitory to penile erection, and that the cessation of their discharge that occurs during rapid eye movement sleep may allow testosterone-related excitatory actions to manifest as nocturnal penile tumescence. Suh et al. (2003) recognizes that in particular the spinal regulation of the cervical cord is critical for nocturnal erectile activity. The nerves that control one's ability to have a reflex erection are located in the sacral nerves (S2-S4) of the spinal cord. Evidence supporting the possibility that a full bladder can stimulate an erection has existed for some time and is characterized as a 'reflex erection'. A full bladder is known to mildly stimulate nerves in the same region. The possibility of a full bladder causing an erection, especially during sleep, is perhaps further supported by the beneficial physiological effect of an erection inhibiting urination, thereby helping to avoid nocturnal enuresis . However, given females have a similar phenomenon called nocturnal clitoral tumescence, prevention of nocturnal enuresis (bed-wetting) is not likely a sole supporting cause. In a study published in 1972, during puberty, the average tumescence time per night was 159 min; average REM sleep time was 137 min. Average simultaneous REM sleep and penile tumescence per night was 102 min. Study subjects averaged 6.85 tumescence episodes/night, and, of these, 5.15 occurred during a REM sleep period. Tumescence episodes during REM averaged 30.8 min in duration, whereas episodes which occurred when no REM was present averaged 11.75 min. Study subjects had at least four REM periods per night and at least three tumescence episodes. In another study of healthy older people published in 1988, frequency and duration of nocturnal penile tumescence decreased progressively with age independent of variations in sleep. In contrast to younger age groups, the majority of those above age 60 did not have full sleep erections even though they and their partners reported regular intercourse. 
Unlike physiological penile tumescence, sleep-related painful erections (SRPE) and stuttering priapism (SP) are much rarer pathological erections, resulting in poor sleep and daytime tiredness, and long term cardiovascular morbidity. SRPE is a rare parasomnia consisting of nocturnal penile tumescence accompanied by pain that awakens the individual. It occurs predominantly during REM sleep, without an apparent underlying illness or penile anatomic abnormalities. On the contrary, stuttering priapism can occur spontaneously at any time of the day, but more commonly so during REM sleep. SP is a subtype of ischemic priapism that is characterized by recurrent, self-limiting, painful erections that often require maneuvers (compression, cold packs or a cold shower, voiding, or exercise, etc.) to aid detumescence. In ischemic priapism, most of the penis is hard; however, the glans penis is not. Much rarer priapism is secondary to blunt trauma to the perineum or penis, with laceration of the cavernous artery, which can generate an arterial-lacunar fistula resulting in a high blood flow state, hence the tumescence. Tumescence lasting for more than four hours is a medical emergency. At the time being, no treatment consensus for SRPE has been established. Baclofen tablets taken before sleep is the most commonly used medication, having a tolerable profile of adverse effects. Diagnostic value The existence and predictability of nocturnal tumescence is used by sexual health practitioners to ascertain whether a given case of erectile dysfunction is psychological or physiological in origin. A patient presenting with erectile dysfunction is fitted with an elastic device to wear around his penis during sleep; the device detects changes in girth and relays the information to a computer for later analysis. If nocturnal tumescence is detected, then the erectile dysfunction is presumed to be due to a psychosomatic illness such as sexual anxiety; if not, then it is presumed to be due to a physiological cause. Nocturnal penile tumescence testing Regularly, those who experience erectile dysfunction are given a nocturnal penile tumescence test, usually over a three-day period. Such a test detects the presence of an erection occurring during sleep using either: a small portable computer connected to two bands placed around the shaft of the penis which records penile tumescence, a band of paper tape with perforations (similar to coil postage stamps) that is fit snugly around the shaft of the penis and will break at the perforations during penile tumescence. The goal of nocturnal penile tumescence testing is to determine whether one can experience an erection while sleeping after reporting that they are unable to experience an erection while awake. On average, one has 3–5 episodes of NPT each night, and each episode lasts 30–60 minutes, although the duration is reduced with advanced age. If one does obtain an erection while sleeping, but cannot obtain one while awake, a psychological cause or a medication side effect is usually suspected. Otherwise, if one does not obtain an erection in either state, a physiological cause is usually suspected. See also Nocturnal emission Priapism Sleep sex Swelling Tumescence References Further reading External links Andrology Penile erection Sleep
Nocturnal penile tumescence
[ "Biology" ]
1,399
[ "Behavior", "Sleep" ]
7,276,379
https://en.wikipedia.org/wiki/Dynamic%20single-frequency%20networks
Dynamic single-frequency networks (DSFN) is a technique of using several transmitter antennas to transfer the same signal (macrodiversity) in orthogonal frequency-division multiplexing cellular networks. DSFN is based on the idea of single frequency networks, which is a group of radio transmitters that send the same signal simultaneously over the same frequency. The term originates from the broadcasting world, where a broadcast network is a group of transmitters that send the same TV or radio program. Digital wireless communication systems based on the OFDM modulation scheme are well-suited to SFN operation, since OFDM in combination with some forward error correction scheme can eliminate intersymbol interference and fading caused by multipath propagation without the use of complex equalization. The concept of DSFN implies the SFN grouping is changed dynamically over time, from timeslot to timeslot. The aim is to achieve efficient spectrum utilization for downlink unicast or multicast communication services in centrally controlled cellular systems based on for example the OFDM modulation scheme. A centralized scheduling algorithm assigns each data packet to a certain timeslot, frequency channel and group of base station transmitters. DSFN can be considered as a combination of packet scheduling, macro-diversity and dynamic channel allocation (DCA). The scheduling algorithm can be further extended to dynamically assign other radio resource management parameters to each timeslot and transmitter, such as modulation scheme and error correction scheme, in view to optimize the efficiency. DSFN makes it possible to increase the received signal strength to a mobile terminal in between several base station transmitters in comparison to non-macrodiversity communication schemes. Thus, DSFN can improve the coverage area and lessen the outage probability. Alternatively, DSFN may allow the same outage probability with a less robust but more efficient modulation and error coding scheme, and thus improve the spectral efficiency in bit/s/Hz/base station transmitter in comparison to a non-macrodiversity communication scheme. DSFN resembles the CDMA downlink soft handover. A difference is that in the CDMA case, co-channel interference from transmissions to other users are more efficiently avoided by giving the other users other spreading codes. A special form of DSFN is Continuous Transmission DSFN, where all base station transmitters always transmit at full power, without blocking of non-utilized transmitters, and without power control. This concept is very similar to so called Virtual cellular networks (VCNs), where a virtual cell is a group of base stations sending using the same spreading code, or a group of OFDM transmitters form a Single Frequency Network. See also Antenna diversity Cooperative diversity References Telecommunications techniques Wireless networking Radio resource management
Dynamic single-frequency networks
[ "Technology", "Engineering" ]
548
[ "Wireless networking", "Computer networks engineering" ]
7,276,747
https://en.wikipedia.org/wiki/Asphyxiant%20gas
An asphyxiant gas, also known as a simple asphyxiant, is a nontoxic or minimally toxic gas which reduces or displaces the normal oxygen concentration in breathing air. Breathing of oxygen-depleted air can lead to death by asphyxiation (suffocation). Because asphyxiant gases are relatively inert and odorless, their presence in high concentration may not be noticed, except in the case of carbon dioxide (hypercapnia). Toxic gases, by contrast, cause death by other mechanisms, such as competing with oxygen on the cellular level (e.g. carbon monoxide) or directly damaging the respiratory system (e.g. phosgene). Far smaller quantities of these are deadly. Notable examples of asphyxiant gases are methane, nitrogen, argon, helium, butane and propane. Along with trace gases such as carbon dioxide and ozone, these compose 79% of Earth's atmosphere. Asphyxia hazard Asphyxiant gases in the breathing air are normally not hazardous. Only where elevated concentrations of asphyxiant gases displace the normal oxygen concentration does a hazard exist. Examples are: Environmental gas displacement Confined spaces, combined with accidental gas leaks, such as mines, submarines, refrigerators, or other confined spaces Fire extinguisher systems that flood spaces with inert gases, such as computer data centers and sealed vaults Large-scale natural release of gas, such as during the Lake Nyos disaster in which volcanically-released carbon dioxide killed 1,800 people. Release of helium boiled off by the energy released in a magnet quench such as the Large Hadron Collider or a magnetic resonance imaging machine. Climbing inside an inflatable balloon filled with helium Direct administration of gas Inadvertent administration of asphyxiant gas in respirators Use in suicide and erotic asphyxiation Risk management The risk of breathing asphyxiant gases is frequently underestimated leading to fatalities, typically from breathing helium in domestic circumstances and nitrogen in industrial environments. The term asphyxiation is often mistakenly associated with the strong desire to breathe that occurs if breathing is prevented. This desire is stimulated from increasing levels of carbon dioxide. However, asphyxiant gases may displace carbon dioxide along with oxygen, preventing the victim from feeling short of breath. In addition the gases may also displace oxygen from cells, leading to loss of consciousness and death rapidly. United States The handling of compressed asphyxiant gases and the determination of appropriate environment for their use is regulated in the United States by the Occupational Safety and Health Administration (OSHA). The National Institute for Occupational Safety and Health (NIOSH) has an advisory role. OSHA requires employers who send workers into areas where the oxygen concentration is known or expected to be less than 19.5% to follow the provision of the Respiratory Protection Standard [29 CFR 1910.134]. Generally, work in an oxygen depleted environment requires an SCBA or airline respirator. The regulation also requires an evaluation of the worker's ability to perform the work while wearing a respirator, the regular training of personnel, respirator fit testing, periodic workplace monitoring, and regular respirator maintenance, inspection, and cleaning." Containers should be labeled according to OSHA's Hazard Communication Standard [29 CFR 1910.1200]. These regulations were developed in accordance with the official recommendations of the Compressed Gas Association (CGA) pamphlet P-1. 
The specific guidelines for prevention of asphyxiation due to displacement of oxygen by asphyxiant gases is covered under CGA's pamphlet SB-2, Oxygen-Deficient Atmospheres. Specific guidelines for use of gases other than air in back-up respirators is covered in pamphlet SB-28, Safety of Instrument Air Systems Backed Up by Gases Other Than Air. Odorized gas To decrease the risk of asphyxiation, there have been proposals to add warning odors to some commonly used gases such as nitrogen and argon. However, CGA has argued against this practice. They are concerned that odorizing may decrease worker vigilance, not everyone can smell the odorants, and assigning a different smell to each gas may be impractical. Another difficulty is that most odorants (e.g., the thiols) are chemically reactive. This is not a problem with natural gas intended to be burned as fuel, which is routinely odorized, but a major use of asphyxiants such as nitrogen, helium, argon and krypton is to protect reactive materials from the atmosphere. In mining The dangers of excess concentrations of nontoxic gases has been recognized for centuries within the mining industry. The concept of black damp (or "stythe") reflects an understanding that certain gaseous mixtures could lead to death with prolonged exposure. Early mining deaths due to mining fires and explosions were often a result of encroaching asphyxiant gases as the fires consumed available oxygen. Early self-contained respirators were designed by mining engineers such as Henry Fleuss to help in rescue efforts after fires and floods. While canaries were typically used to detect carbon monoxide, tools such as the Davy lamp and the Geordie lamp were useful for detecting methane and carbon dioxide, two asphyxiant gases. When methane was present, the lamp would burn higher; when carbon dioxide was present, the lamp would gutter or extinguish. Modern methods to detect asphyxiant gases in mines led to the Federal Mine Safety and Health Act of 1977 in the United States which established ventilation standards in which mines should be "ventilated by a current of air containing not less than 19.5 volume per centum of oxygen, not more than 0.5 volume per centum of carbon dioxide". See also Inert gas asphyxiation Controlled atmosphere killing, a method of execution using asphyxiant gases Limnic eruption Lake Kivu Lake Monoun Lake Nyos Mazuku Mining accidents References Toxicology
Asphyxiant gas
[ "Environmental_science" ]
1,248
[ "Toxicology" ]
7,277,255
https://en.wikipedia.org/wiki/Continuous%20embedding
In mathematics, one normed vector space is said to be continuously embedded in another normed vector space if the inclusion function between them is continuous. In some sense, the two norms are "almost equivalent", even though they are not both defined on the same space. Several of the Sobolev embedding theorems are continuous embedding theorems. Definition Let X and Y be two normed vector spaces, with norms ||·||X and ||·||Y respectively, such that X ⊆ Y. If the inclusion map (identity function) is continuous, i.e. if there exists a constant C > 0 such that ||x||Y ≤ C ||x||X for every x in X, then X is said to be continuously embedded in Y. Some authors use the hooked arrow "↪" to denote a continuous embedding, i.e. "X ↪ Y" means "X and Y are normed spaces with X continuously embedded in Y". This is a consistent use of notation from the point of view of the category of topological vector spaces, in which the morphisms ("arrows") are the continuous linear maps. Examples A finite-dimensional example of a continuous embedding is given by a natural embedding of the real line X = R into the plane Y = R2, where both spaces are given the Euclidean norm: x ↦ (x, 0). In this case, ||x||X = ||x||Y for every real number x. Clearly, the optimal choice of constant C is C = 1. An infinite-dimensional example of a continuous embedding is given by the Rellich–Kondrachov theorem: let Ω ⊆ Rn be an open, bounded, Lipschitz domain, and let 1 ≤ p < n. Set p∗ := np / (n − p). Then the Sobolev space W1,p(Ω; R) is continuously embedded in the Lp space Lp∗(Ω; R). In fact, for 1 ≤ q < p∗, the embedding of W1,p(Ω; R) into Lq(Ω; R) is compact. The optimal constant C will depend upon the geometry of the domain Ω. Infinite-dimensional spaces also offer examples of discontinuous embeddings. For example, consider the space of continuous real-valued functions defined on the unit interval for both X and Y, but equip X with the L1 norm and Y with the supremum norm. For n ∈ N, let fn be a continuous, piecewise linear function chosen so that ||fn||Y = ||fn||∞ = n while ||fn||X remains bounded in n (one concrete choice is sketched below). Hence, no constant C can be found such that ||fn||Y ≤ C||fn||X, and so the embedding of X into Y is discontinuous. See also Compact embedding References Functional analysis
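The explicit formula for fn was lost in this copy of the text; as a hedged illustration, one standard choice (an assumption supplied here, not necessarily the article's original) is the decreasing ramp of height n supported on [0, 1/n], written in LaTeX as

\[
  f_n(x) =
  \begin{cases}
    n - n^2 x, & 0 \le x \le \tfrac{1}{n},\\
    0, & \tfrac{1}{n} < x \le 1,
  \end{cases}
  \qquad
  \|f_n\|_{\infty} = n,
  \qquad
  \|f_n\|_{L^1} = \int_0^{1/n} (n - n^2 x)\,dx = \tfrac{1}{2}.
\]

With this choice the ratio ||fn||∞ / ||fn||L1 = 2n is unbounded, so no single constant C can satisfy ||f||∞ ≤ C ||f||L1 for all continuous f, which is exactly the discontinuity claimed above.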
Continuous embedding
[ "Mathematics" ]
572
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
7,277,803
https://en.wikipedia.org/wiki/Ministry%20of%20Petroleum%20%28Iran%29
The Ministry of Petroleum (MOP) manages all aspects of the Iranian oil industry, including the discovery, extraction, production, distribution, and importation and exportation of crude oil and petrochemical products. The Ministry has been sanctioned by the United States Department of State since 2020. According to BP, Iran has of proven oil reserves and 29.61 trillion cubic meters of proven gas reserves. Iran ranks third in the world in oil reserves and second in gas reserves. The Ministry is responsible for applying the principle of Iranian ownership and sovereignty over oil and gas reserves, and for separating sovereignty tasks from the management and development of the country's oil and gas industry. The Ministry was established after the revolution in Iran, during the interim government of Bazargan, following the departure from the country in 1979 of Hasan Nazia, then managing director of the National Iranian Oil Company. The organizational structure of the ministry consists of a central headquarters and four subsidiaries: the National Iranian Oil Company, the National Iranian Gas Company, the National Iranian Petrochemical Company and the National Iranian Oil Refining and Distribution Company. Through these subsidiaries it monitors the exploration, extraction, marketing and sale of crude oil, natural gas and oil products in the country. In addition to meeting the country's major energy needs, the ministry supplies over 80% of its foreign currency earnings by exporting crude oil and refined petroleum products. According to the Fourth Economic, Social and Cultural Development Plan, the Government has been required to transfer at least 10% of the activities related to the exploration, extraction and production of crude oil to the private sector, while retaining its ownership of oil resources; the same applies to other fields of the Ministry of Petroleum's activities. Iran plans to invest $500 billion in the oil sector by 2025. As of 2010, US$70 billion worth of oil and gas projects were under construction. Iran's annual oil and gas revenues were expected to reach $250 billion by 2015. History The Islamic Republic of Iran formed the Ministry with the aim of applying the principle of Iranian national ownership and sovereignty to oil and gas resources, and of separating sovereignty functions from company operations in the management and development of the country's oil and gas industry. Since the petroleum industry has a special role in the country's economy as a driving industry and plays a key role in achieving the major goals of the national economy, the ministry's performance is very important. Iran holds 836.47 billion barrels of liquid hydrocarbon reserves (crude oil, liquids and gas condensate) and about 34 trillion cubic meters of gas reserves. It is ranked first in the world in terms of total hydrocarbon reserves and energy security. Advantages such as the country's geopolitical position and the availability of strong human capital add to this position. The National Petroleum Procurement Proposal was signed by 17 representatives of the National Petroleum Commission on 8 December 1950. The text of the proposal read: "we are proposing for Iranian oil industry to be announced in all regions of the country without exception under the name of well-being of Iranian people and in order to contribute to peace of the world: all exploration, extraction and exploitation operations be in the control of government." 
Following the announcement of this proposal, the "law of oil nationalization throughout the country", together with a two-month extension for the Petroleum Commission to study the implementation of this principle, passed the National Assembly and eventually the Senate on 29 March 1950. Thus, the National Iranian Oil Company was established. The first board of directors of the National Iranian Oil Company was constituted in June 1951, following the implementation of the oil nationalization law and the expropriation of the former British oil company, and new rules were then adopted for the new company. The legal framework for the activities of the National Iranian Oil Company concerning hydrocarbon resources and their products was determined by the approval of the "Law on Development of Petrochemical Industries (with subsequent amendments)" on 20 July 1965 and the "Law on Development of Gas Industry" on 25 May 1972. In addition, the extent to which Iranian or foreign companies and firms could participate in petrochemical projects was clarified. Finally, a detailed description of presenting and receiving proposals, signing contracts, contract termination, conservation and prevention of environmental pollution, protection of Iran's interests and pricing conditions was provided by the approval of the first "Oil Act" on 8 August 1974, in addition to the definition of the terms and conditions of work within the hydrocarbon resources of the whole country. Upon approval of the first "Oil Act", the "Law on Statute of National Iranian Oil Company" was ratified, in five chapters, on 17 May 1977. "General and capital", "subject, duties, rights and authorities of company", "the entity of company", and "balance sheet and profit and loss account" form the first four chapters of the statute, while the fifth chapter addresses "other regulations". Subsequently, the "Statute of National Petrochemical Company" and the "Statute of National Iranian Gas Company" were approved on 21 November and 25 November 1977, respectively. After the Islamic Revolution, the drafting and approval of new laws was also on the agenda of the Islamic Consultative Assembly, given the need to follow new principles and the departure of foreign experts. Hence, a new oil law was approved on 9 October 1987. 
Ministers Constitution The Iranian constitution prohibits the granting of petroleum rights on a concessionary basis or as a direct equity stake. However, the 1987 Petroleum Law permits the establishment of contracts between the ministry, state companies and "local and foreign national persons and legal entities." Buyback contracts, for instance, are arrangements in which the contractor funds all investments, receives remuneration from the National Iranian Oil Company (NIOC) in the form of an allocated production share, then transfers operation of the field to NIOC after a set number of years, at which time the contract is completed. Since the 1979 revolution in Iran, the country has been under constant US unilateral sanctions. The first U.S. sanctions against Iran were formalized in November 1979, and during the hostage crisis, many sanctions were leveled against the Iranian government. By 1987 the import of Iranian goods into the United States had been banned. In 1995, President of the United States Bill Clinton issued Executive Order 12957, banning U.S. investment in Iran's energy sector, followed a few weeks later by Executive Order 12959 eliminating all trade and investment and virtually all interaction between the United States and Iran. The ministry itself has been on the sanctions list of the European Union since 16 October 2012. Fifth Development Plan Features of fifth development plan in oil industry The fifth development plan treats the oil industry as a system of interconnected components that interact with each other to exchange data, information, materials and products and move toward a common goal. The different parts of the plan are coordinated and viewed as a value chain across the industry as a whole. The major goals of Iran's oil and gas industry in fifth development plan Objective 1: Increase the share and improve the position of the oil, gas and petrochemical industry in the region and the world; increase the extraction of oil and gas, with priority given to fields shared with neighboring countries; and increase refining capacity. Objective 2: Make optimal use of the country's hydrocarbon reserves as a backing and stimulus for the sustainable economic development of the country. 
Objective 3: Use of oil and gas industry capacity to defend national interest Objective 4: Implement energy management to prevent waste in the country's fuel consumption, reducing energy intensity and granting targeted subsidies Objective 5: Establishing effective and constructive interaction with energy producer and consumer countries; playing management role of Iran in energy distribution and transit. Objective 6: Realizing the general policies of article 44 of the constitution in oil industry Objective 7: Achieve advanced technology in oil, gas and petrochemical industries to reach the second position of science and technology in the region. Objective 8: Changing the look to oil and gas and its revenues, from source of public funding to "economic productive resources and capitals" Objective 9: Increase productivity in various sectors of oil industry in order to grow GDP (Gross Domestic Product) Subsidiaries National Iranian Oil Company National Iranian Oil Company (NIOC) is in charge of oil and gas exploration and production, processing and oil transportation. National Iranian South Oil Company (NISOC) is an important subsidiary of NIOC. NISOC is producing about 83% percent of all crude oil and 16% percent of natural gas produced in Iran. National Iranian Oil Company subsidiaries: National Iranian South Oilfields Company (NISOC) Iranian Central Oilfields Company (ICOFC) Pars Oil and Gas company Petroleum Engineering and Development Co. (PEDEC) Iranian Offshore Oil Company National Iranian Gas Company National Iranian Gas Company (NIGC) manages gathering, treatment, processing, transmission, distribution, and exports of gas and gas liquids. The huge reserves of natural gas put Iran in the second place, in terms of the natural gas reserve quantity, among other countries, only next to the Russian Federation, with an estimate of proven reserve quantity close to 23 bcm. Iran's gas reserves are exploited primarily for domestic use. National Iranian Petrochemical Company National Iranian Petrochemical Company (NPC) handles petrochemical production, distribution, and exports. National Iranian Petrochemical Company's output capacity will increase to over 100 million tpa by 2015 from an estimated 50 million tpa in 2010 thus becoming the world' second largest chemical producer globally after Dow Chemical with Iran housing some of the world's largest chemical complexes. National Iranian Oil Refining and Distribution Company National Iranian Oil Refining and Distribution Company (NIORDC) handles oil refining and transportation, with some overlap to NIOC. There are eight refineries with a potential capacity of and one refinery complex in the country with a total refining capacity of over (in Tehran, Tabriz, Isfahan, Abadan, Kermanshah, Shiraz, Bandar Abbas, Arak and Lavan Island) and a storage capacity of 8 milliard litre. Abundance of basic material, like natural gas, in the country provide favorable conditions for development and expansion of petrochemical plants. Production companies National Iranian South Oilfields Company (NISOC) Karoun oil and gas exploitation company Maroon Oil and Gas Company Masjed Soleyman Oil and Gas Company Gachsaran Oil and Gas Company Aghajari exploitation Company Iranian Central Oilfields Company (ICOFC) West Gas & Oil exploitation Company East Oil and Gas exploitation Company Southern Zagros Oil and Gas Company Iranian Offshore Oil Company Pars Oil and Gas company Arvandan Oil & Gas Co. 
Technical Services Companies National Iranian Drilling Company Petroleum Engineering and Development Co. Iranian Oil Terminals Company Pars Special Economic Zone Naftiran Intertrade Company (Nico) Iran fuel conservation optimization company Revenues from crude oil Iran's total revenues from the sale of oil amounted to $77 billion in Iranian year 1387 (2008–09). The average sale price of Iran's crude oil during that year was $100 per barrel. According to the National Iranian Oil Company, Iran's average daily production of crude oil stood at per day. Of this amount, 55% was exported and the remainder was consumed domestically. As of 2010, oil income accounts for 80% of Iran's foreign currency revenues and 60% of the nation's overall budget. Iran exported over of oil in the one year to 21 March 2010, averaging around a day. The exports included around of light crude and more than of heavy crude oil. Japan, China, South Africa, Brazil, Pakistan, Sri Lanka, Spain, India and the Netherlands are the main importers of Iran's crude oil. Iran's annual oil revenues reached $100 billion in 2011. Iran's annual oil and gas revenues are expected to reach $250 billion by 2015, including $100 billion from Iran's South Pars giant gas field. Alleged missing oil revenues Oil revenues: Foreign currency proceeds from crude sales are managed by the Central Bank. According to Farda newspaper, the difference between President Ahmadinejad administration's revenues and the amount deposited with the Central Bank of Iran exceeds $66 billion. This amount is broken down as follows: $35 billion in imported goods (2005–2009), $25 billion in oil revenues (2005–2008), $2.6 billion in non-oil export revenues, $3 billion in foreign exchange reserves. This is a large number as it is equal one-tenth of Iran's total oil revenues since the 1979 revolution. Reserves and production Public projects As of 2012, the Ministry of Petroleum in Iran handles 4,000 public (non-oil) projects across the country. The estimated value of the projects stands at 53,868 trillion rials (approximately $4 trillion). Sanctions The Ministry of Petroleum, in accordance with the US Executive Order 13876, was placed under sanctions by the United States Department of State in October 2020 and has been designated as Specially Designated Global Terrorist due to its alleged links with the Islamic Revolutionary Guard Corps for supplying "oil for terror" in Syria worth millions of dollars. See also The nationalization of the Iran oil industry movement Petroleum industry in Iran Economy of Iran International Rankings of Iran in Energy Iranian Economic Reform Plan Oil megaprojects References External links Oil & Gas in Iran Brief study (2003) US Department of Energy Iran's entry Ministry of Petroleum Overview 1979 establishments in Iran Ministries established in 1979 Economy of Iran Energy in Iran Petroleum Energy ministries Iranian entities subject to U.S. Department of the Treasury sanctions
Ministry of Petroleum (Iran)
[ "Engineering" ]
3,173
[ "Energy organizations", "Energy ministries" ]
7,278,018
https://en.wikipedia.org/wiki/Paxillus%20involutus
Paxillus involutus, also known as the brown roll-rim or the common roll-rim, is a basidiomycete fungus that is widely distributed across the Northern Hemisphere. It has been inadvertently introduced to Australia, New Zealand, South Africa, and South America, probably transported in soil with European trees. Various shades of brown in colour, the fruit body grows up to high and has a funnel-shaped cap up to wide with a distinctive inrolled rim and decurrent gills that may be pore-like close to the stipe. Although it has gills, it is more closely related to the pored boletes than to typical gilled mushrooms. It was first described by Pierre Bulliard in 1785, and was given its current binomial name by Elias Magnus Fries in 1838. Genetic testing suggests that Paxillus involutus may be a species complex rather than a single species. A common mushroom of deciduous and coniferous woods and grassy areas in late summer and autumn, Paxillus involutus forms ectomycorrhizal relationships with a broad range of tree species. These benefit from the symbiosis as the fungus reduces their intake of heavy metals and increases resistance to pathogens such as Fusarium oxysporum. Previously considered edible and eaten widely in Eastern and Central Europe, it has since been found to be dangerously poisonous, after being responsible for the death of German mycologist Julius Schäffer in 1944. It had been recognized as causing gastric upsets when eaten raw, but was more recently found to cause potentially fatal autoimmune hemolysis, even in those who had consumed the mushroom for years without any other ill effects. An antigen in the mushroom triggers the immune system to attack red blood cells. Serious and commonly fatal complications include acute kidney injury, shock, acute respiratory failure, and disseminated intravascular coagulation. Taxonomy and naming The brown roll-rim was described by French mycologist Pierre Bulliard in 1785 as Agaricus contiguus, although the 1786 combination Agaricus involutus of August Batsch is taken as the first valid description. James Bolton published a description of what he called Agaricus adscendibus in 1788; the taxonomical authority Index Fungorum considers this to be synonymous with P. involutus. Additional synonyms include Omphalia involuta described by Samuel Frederick Gray in 1821, and Rhymovis involuta, published by Gottlob Ludwig Rabenhorst in 1844. The species gained its current binomial name in 1838 when the 'father of mycology', Swedish naturalist Elias Magnus Fries erected the genus Paxillus, and set it as the type species. The starting date of fungal taxonomy had been set as January 1, 1821, to coincide with the date of Fries' works, which meant that names coined earlier than this date required sanction by Fries (indicated in the name by a colon) to be considered valid. It was thus written Paxillus involutus (Batsch:Fr.) Fr. A 1987 revision of the International Code of Botanical Nomenclature set the starting date at May 1, 1753, the date of publication of Linnaeus' seminal work, the Species Plantarum. Hence the name no longer requires the ratification of Fries' authority. The genus was later placed in a new family, Paxillaceae, by French mycologist René Maire who held it to be related to both agarics and boletes. Although it has gills rather than pores, it has long been recognised as belonging to the pored mushrooms of the order Boletales rather than the traditional agarics. 
The generic name is derived from the Latin for 'peg' or 'plug', and the specific epithet involutus, 'inrolled', refers to the cap margin. Common names include the naked brimcap, poison paxillus, inrolled pax, poison pax, common roll-rim, brown roll-rim, and brown chanterelle. Gray called it the "involved navel-stool" in his 1821 compendium of British flora. Studies of the ecology and genetics of Paxillus involutus indicate that it may form a complex of multiple similar-looking species. In a field study near Uppsala, Sweden, conducted from 1981 to 1983, mycologist Nils Fries found that there were three populations of P. involutus unable to breed with each other. One was found under conifers and mixed woodlands, while the other two were found in parklands, associated with nearby birch trees. He found that the first group tended to produce single isolated fruit bodies which had a thinner stipe and cap which was less inrolled at the margins, while the fruit bodies of the other two populations tended to appear in groups, and have thicker stipes, and caps with more inrolled and sometimes undulating margins. There were only general tendencies and he was unable to detect any consistent macroscopic or microscopic features that firmly differentiate them. A molecular study comparing the DNA sequences of specimens of Paxillus involutus collected from various habitats in Bavaria found that those collected from parks and gardens showed a close relationship with the North American species P. vernalis, while those from forests were allied with P. filamentosus. The authors suggested the park populations may have been introduced from North America. A multi-gene analysis of European isolates showed that P. involutus sensu lato (in the loose sense) could be separated into four distinct, genetically isolated lineages corresponding to P. obscurosporus, P. involutus sensu stricto (in the strict sense), P. validus, and a fourth species that has not yet been identified. Changes in host range have occurred frequently and independently among strains within this species complex. Description Resembling a brown wooden top, the epigeous (aboveground) fruit body may be up to high. The cap, initially convex then more funnel-shaped (infundibuliform) with a depressed centre and rolled rim (hence the common name), may be reddish-, yellowish- or olive-brown in colour and typically wide; the cap diameter does not get larger than . The cap surface is initially downy and later smooth, becoming sticky when wet. The cap and cap margin initially serve to protect the gills of young fruit bodies: this is termed pilangiocarpic development. The narrow brownish yellow gills are decurrent and forked, and can be peeled easily from the flesh (as is the case with the pores of boletes). Gills further down toward the stipe become more irregular and anastomose, and can even resemble the pores of bolete-type fungi. The fungus darkens when bruised and older specimens may have darkish patches. The juicy yellowish flesh has a mild to faintly sour or sharp odor and taste, and has been described as well-flavored upon cooking. Of similar colour to the cap, the short stipe measures some 3–6 cm tall and 1–3 wide, can be crooked, and tapers toward the base. The spore print is brown, and the dimensions of the ellipsoid (oval-shaped) spores are 7.5–9 by 5–6 μm. The hymenium has cystidia both on the gill edge and face (cheilo- and pleurocystidia respectively), which are slender and filament-like, typically measuring 40–65 by 8–10.5 μm. 
Similar species The brownish colour and funnel-like shape of P. involutus can lead to its confusion with several species of Lactarius, many of which have some degree of toxicity themselves. The lack of a milky exudate distinguishes it from any milk cap. One of the more similar is L. turpis, which presents a darker olive colouration. The related North American Paxillus vernalis has a darker spore print, thicker stipe and is found under aspen, whereas the closer relative P. filamentosus is more similar in appearance to P. involutus. A rare species that grows only in association with alder, P. filamentosus can be distinguished from it by the pressed-down scales on the cap surface that point towards the cap margin, a light yellow flesh that bruises only slightly brown, and deep yellow-ochre gills that do not change colour upon injury The most similar species are two once thought to be part of P. involutus in Europe. Paxillus obscurisporus (originally obscurosporus) has larger fruit bodies than P. involutus, with caps up to wide whose margins tend to unroll and flatten with age, and a layer of cream-coloured mycelia covering the base of its tapered stipe. P. validus, also known only from Europe, has caps up to wide with a stipe that is more or less equal in width throughout its length. Found under broadleaved trees in parks, it can be reliably distinguished from P. involutus (and other Paxillus species) by the presence of crystals up to 2.5 μm long in the rhizomorphs, as the crystals found in rhizomorphs of other Paxillus species do not exceed 0.5 μm long. Other similar species include Phylloporus arenicola, Tapinella atrotomentosa, and Tapinella panuoides. Ecology, distribution and habitat Paxillus involutus forms ectomycorrhizal relationships with a number of coniferous and deciduous tree species. Because the fungus has somewhat unspecialized nutrient requirements and a relatively broad host specificity, it has been frequently used in research and seedling inoculation programs. There is evidence of the benefit to trees of this arrangement: in one experiment where P. involutus was cultivated on the root exudate of red pine (Pinus resinosa), the root showed markedly increased resistance to pathogenic strains of the ubiquitous soil fungus Fusarium oxysporum. Seedlings inoculated with P. involutus also showed increased resistance to Fusarium. Thus P. involutus may be producing antifungal compounds which protect the host plants from root rot. Paxillus involutus also decreases the uptake of certain toxic elements, acting as a buffer against heavy metal toxicity in the host plant. For example, the fungus decreased the toxicity of cadmium and zinc to Scots pine (Pinus sylvestris) seedlings: even though cadmium itself inhibits ectomycorrhiza formation in seedlings, colonization with P. involutus decreases cadmium and zinc transport to the plant shoots and alters the ratio of zinc transported to the roots and shoots, causing more cadmium to be retained in the roots of the seedlings rather than distributed through its entire metabolism. Evidence suggests that the mechanism for this detoxification involves the cadmium binding to the fungal cell walls, as well as accumulating in the vacuolar compartments. Further, ectomycorrhizal hyphae exposed to copper or cadmium drastically increase production of a metallothionein—a low molecular weight protein that binds metals. The presence of Paxillus involutus is related to much reduced numbers of bacteria associated with the roots of Pinus sylvestris. 
Instead bacteria are found on the external mycelium. The types of bacteria change as well; a Finnish study published in 1997 found that bacterial communities under P. sylvestris without mycorrhizae metabolised organic and amino acids, while communities among P. involutus metabolised the sugar fructose. Paxillus involutus benefits from the presence of some species of bacteria in the soil it grows in. As the fungus grows it excretes polyphenols, waste products that are toxic to itself and impede its growth, but these compounds are metabolised by some bacteria, resulting in increased fungal growth. Bacteria also produce certain compounds such as citric and malic acid, which stimulate P. involutus. Highly abundant, the brown roll-rim is found across the Northern Hemisphere, Europe and Asia, with records from India, China, Japan, Iran, and Turkey's eastern Anatolia. It is equally widely distributed across northern North America, extending north to Alaska, where it has been collected from tundra near Coldfoot in the interior of the state. In southwestern Greenland, P. involutus has been recorded under the birch species Betula nana, B. pubescens and B. glandulosa. The mushroom is more common in coniferous woods in Europe, but is also closely associated with birch (Betula pendula). Within woodland, it prefers wet places or boggy ground, and avoids calcareous (chalky) soils. It has been noted to grow alongside Boletus badius in Europe, and Leccinum scabrum and Lactarius plumbeus in the Pacific Northwest region of North America. There it is found in both deciduous and coniferous woodland, commonly under plantings of white birch (Betula papyrifera) in urban areas. It is one of a small number of fungal species which thrive in Pinus radiata plantations planted outside their natural range. A study of polluted Scots pine forest around Oulu in northern Finland found that P. involutus became more abundant in more polluted areas while other species declined. Emissions from pulp mills, fertiliser, heating and traffic were responsible for the pollution, which was measured by sulfur levels in the pine needles. Paxillus involutus can be found growing on lawns and old meadows throughout its distribution. Fruit bodies are generally terrestrial, though they may be found on woody material around tree stumps. They generally appear in autumn and late summer. In California, David Arora discerned a larger form associated with oak and pine which appears in late autumn and winter, as well as the typical form that is associated with birch plantings and appears in autumn. Several species of flies and beetles have been recorded using the fruit bodies to rear their young. The mushroom can be infected by Hypomyces chrysospermus, or bolete eater, a mould species that parasitises Boletales members. Infection results in the appearance of a whitish powder that first manifests on the pores, then spreads over the surface of the mushroom, becoming golden yellow to reddish-brown in maturity. Australian mycologist John Burton Cleland noted it occurring under larch (Larix), oak, pine, birch and other introduced trees in South Australia in 1934, and it has subsequently been recorded in New South Wales, Victoria (where it was found near Betula and Populus) and Western Australia. It has been recorded under introduced birch (Betula) and hazel (Corylus) in New Zealand. Mycologist Rolf Singer reported a similar situation in South America, with the species recorded under introduced trees in Chile. 
It is likely to have been transported to those countries in the soil of imported European trees. Toxicity Paxillus involutus was widely eaten in Central and Eastern Europe until World War II, although English guidebooks did not recommend it. In Poland, the mushroom was often eaten after pickling or salting. It was known to be a gastrointestinal irritant when ingested raw but had been presumed edible after cooking. Questions were first raised about its toxicity after German mycologist Julius Schäffer died after eating it in October 1944. About an hour after he and his wife ate a meal prepared with the mushrooms, Schäffer developed vomiting, diarrhea, and fever. His condition worsened to the point where he was admitted to hospital the following day and developed kidney failure, perishing after 17 days. In the mid-1980s, Swiss physician René Flammer discovered an antigen within the mushroom that stimulates an autoimmune reaction causing the body's immune cells to consider its own red blood cells as foreign and attack them. Despite this, it was not until 1990 that guidebooks firmly warned against eating P. involutus, and one Italian guidebook recommended it as late as 1998. The relatively rare immunohemolytic syndrome occurs following the repeated ingestion of Paxillus mushrooms. Most commonly it arises when the person has ingested the mushroom for a long period of time, sometimes for many years, and has shown mild gastrointestinal symptoms on previous occasions. The Paxillus syndrome is better classed as a hypersensitivity reaction than a toxicological reaction as it is caused not by a genuinely poisonous substance but by the antigen in the mushroom. The antigen is still of unknown structure but it stimulates the formation of IgG antibodies in the blood serum. In the course of subsequent meals, antigen-antibody complexes are formed; these complexes attach to the surface of blood cells and eventually lead to their breakdown. Poisoning symptoms are rapid in onset, consisting initially of vomiting, diarrhea, abdominal pain, and associated decreased blood volume. Shortly after these initial symptoms appear, hemolysis develops, resulting in reduced urine output, hemoglobin in the urine or outright absence of urine formation, and anemia. Medical laboratory tests consist of testing for the presence of increasing bilirubin and free hemoglobin, and falling haptoglobins. Hemolysis may lead to numerous complications including acute kidney injury, shock, acute respiratory failure, and disseminated intravascular coagulation. These complications can cause significant morbidity with fatalities having been reported. There is no antidote for poisoning, only supportive treatment consisting of monitoring complete blood count, renal function, blood pressure, and fluid and electrolyte balance and correcting abnormalities. The use of corticosteroids may be a useful adjunct in treatment, as they protect blood cells against hemolysis, thereby reducing complications. Plasmapheresis reduces the circulating immune complexes in the blood which cause the hemolysis, and may be beneficial in improving the outcome. Additionally, hemodialysis can be used for patients with compromised kidney function or kidney failure. Paxillus involutus also contains agents which appear to damage chromosomes; it is unclear whether these have carcinogenic or mutagenic potential. Two compounds that have been identified are the phenols involutone and involutin; the latter is responsible for the brownish discolouration upon bruising. 
Despite the poisonings, Paxillus involutus is still consumed in parts of Poland, Russia, and Ukraine, where people die from it every year. See also List of deadly fungi References External links Paxillaceae Deadly fungi Fungi of Asia Fungi of Europe Fungi of North America Fungi of South America Fungi described in 1786 Taxa named by August Batsch Fungus species
Paxillus involutus
[ "Biology" ]
3,936
[ "Fungi", "Fungus species" ]
7,278,280
https://en.wikipedia.org/wiki/Robotic%20voice%20effects
Robotic voice effects became a recurring element in popular music starting in the second half of the twentieth century. Several methods of producing variations on this effect have arisen. Vocoder The vocoder was originally designed to aid in the transmission of voices over telephony systems. In musical applications the original sounds, either from vocals or from other sources such as instruments, are used and fed into a system of filters and noise generators. The input is fed through band-pass filters to separate the tonal characteristics which then trigger noise generators. The sounds generated are mixed back with some of the original sound and this gives the effect. Vocoders have been used in an analog form from as early as 1959 at Siemens Studio for Electronic Music but were made more famous after Robert Moog developed one of the first solid-state musical vocoders. In 1970 Wendy Carlos and Robert Moog built another musical vocoder, a 10-band device inspired by the vocoder designs of Homer Dudley which was later referred to simply as a vocoder. Carlos and Moog's vocoder was featured in several recordings, including the soundtrack to Stanley Kubrick's A Clockwork Orange for the vocal part of Beethoven's "Ninth Symphony" and a piece called "Timesteps". In 1974 Isao Tomita used a Moog vocoder on a classical music album, Snowflakes are Dancing, which became a worldwide success. Since then they have been widely used by artists such as: Kraftwerk's album Autobahn (1974); The Alan Parsons Project's track "The Raven" (Tales of Mystery and Imagination album 1976); Electric Light Orchestra on "Mr. Blue Sky" and "Sweet Talkin' Woman" (Out of the Blue album 1977) using EMS Vocoder 2000's. Other examples include Pink Floyd's album Animals, where the band put the sound of a barking dog through the device, and the Styx song "Mr. Roboto". Vocoders have appeared on pop recordings from time to time ever since, most often simply as a special effect rather than a featured aspect of the work. Many experimental electronic artists of the new-age music genre often utilize the vocoder in a more comprehensive manner in specific works, such as Jean Michel Jarre on Zoolook (1984), Mike Oldfield on QE2 (1980) and Five Miles Out (1982). There are also some artists who have made vocoders an essential part of their music, overall or during an extended phase, such as the German synthpop group Kraftwerk, or the jazz-infused metal band Cynic. Other examples Though the vocoder is by far the best-known, the following other pieces of music technology are often confused with it: Sonovox This was an early version of the talk box invented by Gilbert Wright in 1939. It worked by placing two loudspeakers over the larynx and as the speakers transmitted sounds up the throat, the performer would silently articulate words which would in turn make the sounds seem to "speak." It was used to create the voice of the piano in the Sparky's Magic Piano series from 1947, many musical instruments in Rusty in Orchestraville, and as the voice of Casey the Train in the films Dumbo and The Reluctant Dragon. Radio jingle companies PAMS and JAM Creative Productions used the sonovox in many of the station IDs they produced. Talk box The talk box guitar effect was invented by Doug Forbes and popularized by Peter Frampton. In the talk box effect, amplified sound is actually fed via a tube into the performer's mouth and is then shaped by the performer's lip, tongue, and mouth movements before being picked up by a microphone. 
In contrast, the vocoder effect is produced entirely electronically. The background riff from "Sensual Seduction" by Snoop Dogg is a well-known example. "California Love" by 2Pac and Roger Troutman is a more recent recording featuring a talk box fed with a synthesizer instead of guitar. Steven Drozd of The Flaming Lips used the talk box on parts of the group's eleventh album, At War with the Mystics, to imitate some of Wayne Coyne's repeated lyrics in the "Yeah Yeah Yeah Song". Pitch correction The vocoder should also not be confused with the Antares Auto-Tune Pitch Correcting Plug-In, which can be used to achieve a robotic-sounding vocal effect by quantizing (removing smooth changes in) voice pitch or by adding pitch changes. The first such use in a commercial song was in 1998 on "Believe", a song by Cher, and the radical pitch changes became known as the 'Cher effect'. This has been employed in recent years by artists such as Daft Punk (who also use vocoders and talk boxes), T-Pain, Kanye West, the Italian dance/pop group Eiffel 65, Japanese electropop acts Aira Mitsuki, Saori@destiny, Capsule, Meg and Perfume, and some Korean pop groups, most specifically 2NE1 and Big Bang. Linear prediction coding Linear prediction coding is also used as a musical effect (generally for cross-synthesis of musical timbres), but is not as popular as bandpass filter bank vocoders, and the musical use of the word vocoder refers exclusively to the latter type of device. Ring modulator Although ring modulation usually does not work well with melodic sounds, it can be used to make speech sound robotic. As an example, it has been used to robotify the voices of the Daleks in Dr Who. Speech synthesis Robotic voices in music may also be produced by speech synthesis. This does not usually create a "singing" effect (although it can). Speech synthesis means that, unlike in vocoding, no human speech is employed as basis. One example of such use is the song Das Boot by U96. A more tongue-in-cheek musical use of speech synthesis is MC Hawking. Most notably, Kraftwerk, who had previously used the vocoder extensively in their 1970s recordings, began opting for speech synthesis software in place of vocoders starting with 1981's Computer World album; on newer recordings and in the reworked versions of older songs that appear on The Mix and the band's current live show, the previously vocoder-processed vocals have been almost completely replaced by software-synthesized "singing". Comb filter A comb filter can be used to single out a few frequencies in the audio signal producing a sharp, resonating transformation of the voice. Comb filtering can be performed with a delay unit set to a high feedback level and delay time of less than a tenth of a second. Of the robot voice effects listed here, this one requires the least resources, since delay units are a staple of recording studios and sound editing software. As the effect deprives a voice of much of its musical qualities (and has few options for sound customization), the robotic delay is mostly used in TV/movie applications. References Robotics Sound effects
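As a rough illustration of the delay-based approach described in the comb filter section (and of ring modulation as used for the Dalek voices), the Python sketch below applies a feedback comb filter followed by a sine-wave ring modulator to a mono signal. The parameter values and function name are illustrative choices, not settings taken from any of the recordings or devices mentioned above.

```python
import numpy as np

def robot_voice(x, fs, delay_s=0.02, feedback=0.85, carrier_hz=120.0):
    """Toy robotic-voice effect on a mono float signal x sampled at fs Hz:
    a feedback comb filter (short delay, high feedback) followed by
    ring modulation with a fixed sine-wave carrier."""
    d = max(1, int(delay_s * fs))          # delay under a tenth of a second
    y = np.zeros(len(x))
    for n in range(len(x)):                # y[n] = x[n] + feedback * y[n - d]
        y[n] = x[n] + (feedback * y[n - d] if n >= d else 0.0)
    t = np.arange(len(y)) / fs
    y *= np.sin(2 * np.pi * carrier_hz * t)   # ring modulation
    peak = np.max(np.abs(y))
    return y / peak if peak > 0 else y        # normalise to avoid clipping

# Example: process two seconds of a synthetic, voice-like test tone.
fs = 16000
t = np.arange(2 * fs) / fs
test = 0.5 * np.sin(2 * np.pi * 220 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
out = robot_voice(test, fs)
```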
Robotic voice effects
[ "Engineering" ]
1,448
[ "Robotics", "Automation" ]
7,279,337
https://en.wikipedia.org/wiki/Buseck%20Center%20for%20Meteorite%20Studies
The Buseck Center for Meteorite Studies was founded in 1960, on the Tempe Campus of Arizona State University, and houses the world's largest university-based meteorite collection. The collection contains specimens from over 1,600 separate meteorite falls and finds, and is actively used internationally for planetary, geological and space science research. The Center also operates a meteorite museum which is open to the public. In 2021, the Center for Meteorite Studies was named in honor of Professor Peter R. Buseck. See also Nininger Meteorite Award Harvey H. Nininger Carleton B. Moore References Sources and external links Center for Meteorite Studies official website ASU Museums: Center for Meteorite Studies Map: University museums in Arizona Arizona State University Geology museums in Arizona Natural history museums in Arizona Museums in Tempe, Arizona Museums established in 1960 1960 establishments in Arizona
Buseck Center for Meteorite Studies
[ "Astronomy" ]
178
[ "Astronomy stubs", "Planetary science stubs" ]
7,280,436
https://en.wikipedia.org/wiki/Deoxyguanosine%20monophosphate
Deoxyguanosine monophosphate (dGMP), also known as deoxyguanylic acid or deoxyguanylate in its conjugate acid and conjugate base forms, respectively, is a derivative of the common nucleotide guanosine monophosphate (GMP), in which the –OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose has been reduced to just a hydrogen atom (hence the "deoxy-" part of the name). It is used as a monomer in DNA. See also Cofactor Guanosine Nucleic acid References Nucleotides
Deoxyguanosine monophosphate
[ "Chemistry", "Biology" ]
146
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,280,450
https://en.wikipedia.org/wiki/Deoxyadenosine%20monophosphate
Deoxyadenosine monophosphate (dAMP), also known as deoxyadenylic acid or deoxyadenylate in its conjugate acid and conjugate base forms, respectively, is a derivative of the common nucleotide AMP, or adenosine monophosphate, in which the -OH (hydroxyl) group on the 2' carbon on the nucleotide's pentose has been reduced to just a hydrogen atom (hence the "deoxy-" part of the name). Deoxyadenosine monophosphate is abbreviated dAMP. It is a monomer used in DNA. See also Nucleic acid DNA metabolism Cofactor Guanosine Cyclic AMP (cAMP) ATP Sources Nucleotides
Deoxyadenosine monophosphate
[ "Chemistry", "Biology" ]
164
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,280,493
https://en.wikipedia.org/wiki/Tricolorability
In the mathematical field of knot theory, the tricolorability of a knot is the ability of a knot to be colored with three colors subject to certain rules. Tricolorability is an isotopy invariant, and hence can be used to distinguish between two different (non-isotopic) knots. In particular, since the unknot is not tricolorable, any tricolorable knot is necessarily nontrivial. Rules of tricolorability In these rules a strand in a knot diagram will be a piece of the string that goes from one undercrossing to the next. A knot is tricolorable if each strand of the knot diagram can be colored one of three colors, subject to the following rules: 1. At least two colors must be used, and 2. At each crossing, the three incident strands are either all the same color or all different colors. Some references state instead that all three colors must be used. For a knot, this is equivalent to the definition above; however, for a link it is not. "The trefoil knot and trivial 2-link are tricolorable, but the unknot, Whitehead link, and figure-eight knot are not. If the projection of a knot is tricolorable, then Reidemeister moves on the knot preserve tricolorability, so either every projection of a knot is tricolorable or none is." Examples Here is an example of how to color a knot in accordance of the rules of tricolorability. By convention, knot theorists use the colors red, green, and blue. Example of a tricolorable knot The granny knot is tricolorable. In this coloring the three strands at every crossing have three different colors. Coloring one but not both of the trefoil knots all red would also give an admissible coloring. The true lover's knot is also tricolorable. Tricolorable knots with less than nine crossings include 61, 74, 77, 85, 810, 811, 815, 818, 819, 820, and 821. Example of a non-tricolorable knot The figure-eight knot is not tricolorable. In the diagram shown, it has four strands with each pair of strands meeting at some crossing. If three of the strands had the same color, then all strands would be forced to be the same color. Otherwise each of these four strands must have a distinct color. Since tricolorability is a knot invariant, none of its other diagrams can be tricolored either. Isotopy invariant Tricolorability is an isotopy invariant, which is a property of a knot or link that remains constant regardless of any ambient isotopy. This can be proven for tame knots by examining Reidemeister moves. Since each Reidemeister move can be made without affecting tricolorability, tricolorability is an isotopy invariant of tame knots. Properties Because tricolorability is a binary classification (a link is either tricolorable or not*), it is a relatively weak invariant. The composition of a tricolorable knot with another knot is always tricolorable. A way to strengthen the invariant is to count the number of possible 3-colorings. In this case, the rule that at least two colors are used is relaxed and now every link has at least three 3-colorings (just color every arc the same color). In this case, a link is 3-colorable if it has more than three 3-colorings. Any separable link with a tricolorable separable component is also tricolorable. In torus knots If the torus knot/link denoted by (m,n) is tricolorable, then so are (j*m,i*n) and (i*n,j*m) for any natural numbers i and j. See also Fox n-coloring Graph coloring Sources Further reading Accessed: May 5, 2013. Graph coloring Knot invariants
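The crossing rule above can be checked mechanically: labelling the three colours 0, 1 and 2, the condition that the three strands at a crossing are all the same or all different is equivalent to requiring twice the overstrand colour to equal the sum of the two understrand colours modulo 3. The Python sketch below brute-forces all colourings of a diagram; the trefoil crossing data shown is one standard labelling chosen for illustration.

```python
from itertools import product

def colourings(crossings, n_strands):
    """All assignments of colours {0,1,2} to strands such that at every
    crossing (over, under_in, under_out) the three incident strands are
    either all the same colour or all different colours, i.e.
    2*c[over] == c[under_in] + c[under_out] (mod 3)."""
    return [c for c in product(range(3), repeat=n_strands)
            if all((2 * c[o] - c[a] - c[b]) % 3 == 0 for o, a, b in crossings)]

def is_tricolorable(crossings, n_strands):
    # Tricolorable iff some valid colouring uses at least two colours.
    return any(len(set(c)) >= 2 for c in colourings(crossings, n_strands))

# Trefoil: three strands 0, 1, 2 and three crossings (one standard labelling).
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(len(colourings(trefoil, 3)))      # 9 colourings in total
print(is_tricolorable(trefoil, 3))      # True: 6 of them use at least 2 colours
```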
Tricolorability
[ "Mathematics" ]
807
[ "Graph coloring", "Mathematical relations", "Graph theory" ]
7,280,692
https://en.wikipedia.org/wiki/Beta-M
The Beta-M is a radioisotope thermoelectric generator (RTG) that was used in Soviet-era lighthouses and beacons. Design The Beta-M contains a core made up of strontium-90, which has a half-life of 28.79 years. The service life of these generators is initially 10 years, and can be extended for another 5 to 10 years. The core is also known as radioisotope heat source 90 (RHS-90). In its initial state after manufacture, the generator is capable of generating 10 watts of electricity. The generator contains the strontium-90 radioisotope, with a heating power of 250W and 1,480 TBq of radioactivity – equivalent to some of Sr-90. Mass-scale production of RTGs in the Soviet Union was the responsibility of a plant called Baltiyets, in Narva, Estonia. Safety incidents Some Beta-M generators have been subject to incidents of vandalism when scavengers disassembled the units while searching for non-ferrous metals. In December 2001 a radiological accident occurred when three residents of Lia, Georgia found parts of an abandoned Beta-M in the forest while collecting firewood. The three suffered burns and symptoms of acute radiation syndrome as a result of their exposure to the strontium-90 contained in the Beta-M. The disposal team that removed the radiation sources consisted of 25 men who were restricted to 40 seconds' worth of exposure each while transferring the canisters to lead-lined drums. References External links Norwegian environmental concerns over Beta-M generators still in use RTG Master Plan Development Results and Priority Action Plan Elaboration for its Implementation Electrical generators Strontium Nuclear technology in the Soviet Union Energy in the Soviet Union
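Because strontium-90 decays with a 28.79-year half-life, the heat available to a Beta-M falls predictably over its stated 10-to-20-year service life. The Python sketch below scales the article's initial figures by the decay factor; treating the electrical output as falling in simple proportion to the heat source is an assumption made for illustration, since real thermoelectric converters do not behave perfectly linearly.

```python
HALF_LIFE_Y = 28.79      # Sr-90 half-life in years
A0_TBQ = 1480.0          # initial activity of the RHS-90 heat source
P0_THERMAL_W = 250.0     # initial heating power
P0_ELECTRIC_W = 10.0     # initial electrical output

def after(years):
    """Decay factor applied to activity, thermal power and (approximately)
    electrical output after the given number of years of service."""
    f = 0.5 ** (years / HALF_LIFE_Y)
    return A0_TBQ * f, P0_THERMAL_W * f, P0_ELECTRIC_W * f

for years in (10, 20, 30):
    a, p_th, p_el = after(years)
    print(f"{years:2d} y: {a:6.0f} TBq  {p_th:5.1f} W thermal  {p_el:4.1f} W electrical")
```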
Beta-M
[ "Physics", "Technology" ]
372
[ "Physical systems", "Electrical generators", "Machines" ]
7,280,707
https://en.wikipedia.org/wiki/Variable-length%20code
In coding theory, a variable-length code is a code which maps source symbols to a variable number of bits. The equivalent concept in computer science is bit string. Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy, an independent and identically-distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure. Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–Ziv coding, arithmetic coding, and context-adaptive variable-length coding. Codes and their extensions The extension of a code is the mapping of finite length source sequences to finite length bit strings, that is obtained by concatenating for each symbol of the source sequence the corresponding codeword produced by the original code. Using terms from formal language theory, the precise mathematical definition is as follows: Let and be two finite sets, called the source and target alphabets, respectively. A code is a total function mapping each symbol from to a sequence of symbols over , and the extension of to a homomorphism of into , which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension. Classes of variable-length codes Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes, and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular: Non-singular codes A code is non-singular if each source symbol is mapped to a different non-empty bit string; that is, the mapping from source symbols to bit strings is injective. For example, the mapping is not non-singular because both "a" and "b" map to the same bit string "0"; any extension of this mapping will generate a lossy (non-lossless) coding. Such singular coding may still be useful when some loss of information is acceptable (for example, when such code is used in audio or video compression, where a lossy coding becomes equivalent to source quantization). However, the mapping is non-singular; its extension will generate a lossless coding, which will be useful for general data transmission (but this feature is not always required). It is not necessary for the non-singular code to be more compact than the source (and in many applications, a larger code is useful, for example as a way to detect or recover from encoding or transmission errors, or in security applications to protect a source from undetectable tampering). Uniquely decodable codes A code is uniquely decodable if its extension is § non-singular. Whether a given code is uniquely decodable can be decided with the Sardinas–Patterson algorithm. The mapping is uniquely decodable (this can be demonstrated by looking at the follow-set after each target bit string in the map, because each bitstring is terminated as soon as we see a 0 bit which cannot follow any existing code to create a longer valid code in the map, but unambiguously starts a new code). Consider again the code from the previous section. 
This code is not uniquely decodable, since the string 011101110011 can be interpreted as the sequence of codewords 01110 – 1110 – 011, but also as the sequence of codewords 011 – 1 – 011 – 10011. Two possible decodings of this encoded string are thus given by cdb and babe. However, such a code is useful when the set of all possible source symbols is completely known and finite, or when there are restrictions (such as a formal syntax) that determine if source elements of this extension are acceptable. Such restrictions permit the decoding of the original message by checking which of the possible source symbols mapped to the same symbol are valid under those restrictions. Prefix codes A code is a prefix code if no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept are prefix-free code, instantaneous code, or context-free code. The example mapping above is not a prefix code because we do not know after reading the bit string "0" whether it encodes an "a" source symbol, or if it is the prefix of the encodings of the "b" or "c" symbols. An example of a prefix code is shown below. Example of encoding and decoding: → 00100110111010 → |0|0|10|0|110|111|0|10| → A special case of prefix codes are block codes. Here, all codewords must have the same length. The latter are not very useful in the context of source coding, but often serve as forward error correction in the context of channel coding. Another special case of prefix codes are LEB128 and variable-length quantity (VLQ) codes, which encode arbitrarily large integers as a sequence of octets—i.e., every codeword is a multiple of 8 bits. Advantages The advantage of a variable-length code is that unlikely source symbols can be assigned longer codewords and likely source symbols can be assigned shorter codewords, thus giving a low expected codeword length. For the above example, if the probabilities of (a, b, c, d) were , the expected number of bits used to represent a source symbol using the code above would be: . As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible so that the source can be recovered with zero error. See also Golomb code Kruskal count Variable-length instruction sets in computing References Further reading (xii+191 pages) Errata 1Errata 2 Draft available online Coding theory Entropy coding Data compression
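The encoding and decoding example above lost its symbol table in extraction. A prefix code consistent with the bit string and the parse shown (and with the 1.75-bit entropy figure, if the symbol probabilities are taken to be 1/2, 1/4, 1/8 and 1/8) is a → 0, b → 10, c → 110, d → 111; this mapping and the probabilities are reconstructions offered for illustration rather than a quotation of the original table. The Python sketch below encodes and decodes with it and checks the expected codeword length against the entropy.

```python
from math import log2

code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}   # assumed prefix code
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}  # assumed probabilities

def encode(message):
    return ''.join(code[s] for s in message)

def decode(bits):
    """Prefix property: read bits until the buffer matches a codeword."""
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    if buf:
        raise ValueError('leftover bits do not form a complete codeword')
    return ''.join(out)

bits = encode('aabacdab')
print(bits)                    # 00100110111010, matching the string above
print(decode(bits))            # aabacdab

expected_len = sum(probs[s] * len(code[s]) for s in code)
entropy = -sum(p * log2(p) for p in probs.values())
print(expected_len, entropy)   # both equal 1.75 bits per symbol
```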
Variable-length code
[ "Mathematics" ]
1,295
[ "Discrete mathematics", "Coding theory" ]
7,281,212
https://en.wikipedia.org/wiki/Amanita%20cokeri
Amanita cokeri, commonly known as Coker's amanita and solitary lepidella, is a poisonous mushroom in the family Amanitaceae. First described as Lepidella cokeri in 1928, it was transferred to the genus Amanita in 1940. Taxonomy Amanita cokeri was first described as Lepidella cokeri by mycologists E.-J.Gilbert and Robert Kühner in 1928. It was in 1940 when the species was transferred from genus Lepidella to Amanita by Gilbert. Presently, A. cokeri is placed under genus Amanita and section Roanokenses. The epithet cokeri is in honour of American mycologist and botanist William Chambers Coker. Description Its cap is white in colour, and across. It is oval to convex in shape. The surface is dry but sticky when wet. The cap surface is characterized by large pointed warts, white to brown in colour. Gills are closely spaced and free from the stem. They are cream at first, but can turn white as the mushroom matures. Short-gills are frequent. Stem is white, measuring long and thick. It tapers slightly to the top, smooth to shaggy in texture. There is a ring, thick and often double-edged, the underside being tissuelike. The universal veil hangs from the top of the stipe. The basal bulb is considerably large in size, with concentric circles of down-turned scales. The volval remnants stick to it and cause irregular patches. Spores are white, elliptical and amyloid. They measure 11–14 x 6–9 μm, and feel smooth. Flesh is white, and shows no change when exposed. There is no distinctive odour, but some specimens may develop the smell of decaying protein. Similar species Amanita solitaria is a closely related species, though a completely different European taxa. The notable similarity is that both it and A. cokeri are double-ringed. A. timida, from the tropical South Asia, resembles A. cokeri in its volval structure, thick and notable ring and the large bulbal base. Distribution and habitat A. cokeri inhabits mixed coniferous or deciduous woods and also grows on the ground. It grows mainly on oak and pine trees, and leaves a white deposit. It grows isolated or in groups. It is mostly distributed in southeastern North America. It fruits from July to November. Toxicity In a study, the presence of non-protein amino acids 2-amino-3-cyclopropylbutanoic acid and 2-amino-5-chloro-4-pentenoic acid was revealed. The former acid was found to be toxic to the fungus Cercospora kikuchii, the arthropod Oncopeltus fasciatus and the bacteria Agrobacterium tumefaciens, Erwinia amylovora, and Xanthomonas campestris. The toxicity for bacteria could be eliminated by adding isoleucine to the medium. The other acid did not prove toxic. See also List of Amanita species References External links cokeri Poisonous fungi Fungi described in 1928 Fungi of North America Fungus species
Amanita cokeri
[ "Biology", "Environmental_science" ]
660
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
7,282,047
https://en.wikipedia.org/wiki/Potassium%20nitrate%20%28data%20page%29
Potassium nitrate is an oxidizer so storing it near fire hazards or reducing agents should be avoided to minimise risk in case of a fire. Product Identification Synonyms: Saltpetre; Niter/Nitre; Nitric acid potassium salt; Salt Peter CAS No.: 7757-79-1 Molecular Weight: 101.1 Chemical Formula: KNO3 Hazards Identification Emergency Overview Danger - oxidizer. Contact with some materials may cause fire. Harmful if swallowed, inhaled or absorbed through the skin. Causes irritation to skin, eyes and respiratory tract. SAF-T-DATA Ratings Health Rating: 1 - Minimal Flammability Rating: 0 - None Reactivity Rating: 2 - Moderate (Oxidizer) Contact Rating: 1 - Minimal (Life) Lab Protective Equip: Safety goggles and surgical face mask (If you are planning to encounter this material close up for a period of time). Gloves optional. Storage Color Code: Yellow (Reactive) Potential Health Effects Inhalation: Causes irritation to the respiratory tract. Symptoms may include coughing, shortness of breath. Ingestion: Causes irritation to the gastrointestinal tract. Symptoms may include nausea, vomiting and diarrhea. May cause gastroenteritis and abdominal pains. Purging and diuresis can be expected. Rare cases of nitrates being converted to the more toxic nitrites have been reported, mostly with infants. Skin Contact: Causes irritation to skin. Symptoms include redness, itching, and pain. Eye Contact: Causes irritation, redness, and pain. Chronic Exposure: Under some circumstances methemoglobinemia occurs in individuals when the nitrate is converted by bacteria in the stomach to nitrite. Nausea, vomiting, dizziness, rapid heart beat, irregular breathing, convulsions, coma, and death can occur should this conversion take place. Chronic exposure to nitrites may cause anemia and adverse effects to kidney. First Aid Measures Inhalation:none Skin Contact: none Eye Contact: Flush eyes with water, lifting lower and upper eyelids occasionally. Fire Fighting Measures Fire: Not combustible itself but substance is a strong oxidizer and its heat of reaction with reducing agents or combustibles may accelerate burning. Explosion: No danger of explosion. KNO3 is an oxidising agent, so will accelerate combustion of combustibles. Fire Extinguishing Media: Dry chemical, carbon dioxide, Halon, water spray, or fog. If water is used, apply from as far a distance as possible. Water spray may be used to keep fire exposed containers cool. Do not allow water runoff to enter sewers or waterways. Special Information: Wear full protective clothing and breathing equipment for high-intensity fire or potential explosion conditions. This oxidizing material can increase the flammability of adjacent combustible materials. Accidental Release Measures Remove all sources of ignition. Ventilate area of leak or spill. Wear appropriate personal protective equipment as specified in Section 8. Spills: Clean up spills in a manner that does not disperse dust into the air. Use non-sparking tools and equipment. Reduce airborne dust and prevent scattering by moistening with water. Pick up spill for recovery or disposal and place in a closed container. Handling and Storage Keep in a tightly closed container, stored in a cool, dry, ventilated area. Protect against physical damage and moisture. Isolate from any source of heat or ignition. Avoid storage on wood floors. Separate from incompatibles, combustibles, organic or other readily oxidizable materials. 
Exposure Controls/Personal Protection Ventilation System: A system of local and/or general exhaust is recommended to keep employee exposures as low as possible. Local exhaust ventilation is generally preferred because it can control the emissions of the contaminant at its source, preventing dispersion of it into the general work area. Please refer to the ACGIH document, Industrial Ventilation, A Manual of Recommended Practices, most recent edition, for details. Personal Respirators (NIOSH Approved): For conditions of use where exposure to dust or mist is apparent and engineering controls are not feasible, a particulate respirator (NIOSH type N95 or better filters) may be worn. If oil particles (e.g. lubricants, cutting fluids, glycerine, etc.) are present, use a NIOSH type R or P filter. For emergencies or instances where the exposure levels are not known, use a full-face positive-pressure, air-supplied respirator. Skin Protection: Not required. Eye Protection: Not required. Optionally use chemical safety goggles where dusting or splashing of solutions is possible. Physical and Chemical Properties Appearance: White crystals. Odor: sour or salty. Solubility: 36 gm/100 ml water Specific Gravity: 2.1 pH: ca. 7 % Volatiles by volume @ 21C (70F): 0 Boiling Point: 400 °C (752 °F) Melting Point: 333 °C (631 °F) Vapor Density (Air=1): 3.00 Vapor Pressure (mm Hg): Negligible @ 20 °C Stability and Reactivity Stability: Stable under ordinary conditions of use and storage. Hazardous Decomposition Products : Oxides of nitrogen and toxic metal fumes may form when heated to decomposition. Hazardous Polymerization: Will not occur. Incompatibilities: Heavy metals, phosphites, organic compounds, carbonaceous materials, strong acids, and many other substances. Conditions to Avoid: Heat, flames, ignition sources and incompatibles. Disposal Considerations Whatever cannot be saved for recovery or recycling should be handled as hazardous waste and sent to a RCRA approved waste facility. Processing, use or contamination of this product may change the waste management options. State and local disposal regulations may differ from federal disposal regulations. Dispose of container and unused contents in accordance with federal, state and local requirements. See also Potassium nitrate Nitric acid Niter Black powder Sodium nitrate Sodium nitrite Potassium nitrite References Chemical data pages Chemical data pages cleanup
Potassium nitrate (data page)
[ "Chemistry" ]
1,247
[ "Chemical data pages", "nan" ]
17,396,736
https://en.wikipedia.org/wiki/Virtual%20file%20server
In computing, a virtual file server is a system consisting of one or more virtualized devices that store computer files such as documents, sound files, photographs, movies, images, or databases. The server can be accessed by workstations or application servers through the Virtual Fileserver Network (VFN). The term "server" highlights the role of the virtual machine in the client-server scheme, where the clients are the applications accessing the storage. The file server usually does not run application programs on behalf of the clients. It enables storage and retrieval of data, where the computation is provided by the client. With a storage area network (SAN), the server(s) act purely as virtual storage devices, with a client maintaining the file system. With network-attached storage (NAS), the server(s) manage the file system. Both SAN and NAS servers may be virtualized, so the users do not have to know which physical devices are hosting the files. A virtual file server typically combines the security of virtual private networks (VPN) with file synchronization, distribution and sharing services of network file servers. Various companies offer software for use by an organization in managing virtual file servers. The operating system may be stripped-down, concerned only with file management functions such as synchronizing redundant copies of the file, failure recovery, handling concurrent updates from different clients and enforcing client access rights. Some companies offer virtual file servers as a service to organizations that prefer to outsource server operations, with the servers residing in the "cloud". See also Storage virtualization Storage area network Network-attached storage Virtual private network Platform as a service References Computer storage devices Virtual private networks
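The SAN/NAS distinction described above can be sketched in a few lines of Python: in the block-level (SAN-style) view the server only exposes numbered blocks and the client keeps its own file system, while in the file-level (NAS-style) view the server itself maps names to contents. This is only a toy illustration under those assumptions; the class and method names are invented for the example and do not come from any particular product.

```python
class BlockStore:
    """SAN-style view: the server exposes raw, numbered blocks; the client
    is responsible for organising them into a file system."""
    def __init__(self, num_blocks, block_size=4096):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read_block(self, index):
        return self.blocks[index]

    def write_block(self, index, data):
        assert len(data) <= self.block_size
        self.blocks[index] = data.ljust(self.block_size, b"\x00")


class FileStore:
    """NAS-style view: the server manages the file system itself and the
    client works with named files."""
    def __init__(self):
        self.files = {}

    def write(self, path, data):
        self.files[path] = data

    def read(self, path):
        return self.files[path]


# A client of the block store must keep its own metadata (here a trivial
# name -> block-index table), whereas the file store hides that bookkeeping.
san = BlockStore(num_blocks=8)
allocation = {"report.txt": 0}            # client-side "file system"
san.write_block(allocation["report.txt"], b"quarterly figures")

nas = FileStore()
nas.write("report.txt", b"quarterly figures")  # server-side file system
```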
Virtual file server
[ "Technology" ]
346
[ "Computer storage devices", "Recording devices" ]
17,398,087
https://en.wikipedia.org/wiki/Subdirectly%20irreducible%20algebra
In the branch of mathematics known as universal algebra (and in its applications), a subdirectly irreducible algebra is an algebra that cannot be factored as a subdirect product of "simpler" algebras. Subdirectly irreducible algebras play a somewhat analogous role in algebra to primes in number theory. Definition A universal algebra A is said to be subdirectly irreducible when A has more than one element, and when any subdirect representation of A includes (as a factor) an algebra isomorphic to A, with the isomorphism being given by the projection map. Examples The two-element chain, as either a Boolean algebra, a Heyting algebra, a lattice, or a semilattice, is subdirectly irreducible. In fact, the two-element chain is the only subdirectly irreducible distributive lattice. Any finite chain with two or more elements, as a Heyting algebra, is subdirectly irreducible. (This is not the case for chains of three or more elements as either lattices or semilattices, which are subdirectly reducible to the two-element chain. The difference with Heyting algebras is that a → b need not be comparable with a under the lattice order even when b is.) Any finite cyclic group of order a power of a prime (i.e. any cyclic p-group) is subdirectly irreducible. (One weakness of the analogy between subdirect irreducibles and prime numbers is that the integers are subdirectly representable by any infinite family of nonisomorphic prime-power cyclic groups, e.g. just those of order a Mersenne prime assuming there are infinitely many.) In fact, an abelian group is subdirectly irreducible if and only if it is isomorphic to a cyclic p-group or isomorphic to a Prüfer group (an infinite but countable p-group, which is the direct limit of its finite p-subgroups). A vector space is subdirectly irreducible if and only if it has dimension one. Properties The subdirect representation theorem of universal algebra states that every algebra is subdirectly representable by its subdirectly irreducible quotients. An equivalent definition of "subdirect irreducible" therefore is any algebra A that is not subdirectly representable by those of its quotients not isomorphic to A. (This is not quite the same thing as "by its proper quotients" because a proper quotient of A may be isomorphic to A, for example the quotient of the semilattice (Z, ) obtained by identifying just the two elements 3 and 4.) An immediate corollary is that any variety, as a class closed under homomorphisms, subalgebras, and direct products, is determined by its subdirectly irreducible members, since every algebra A in the variety can be constructed as a subalgebra of a suitable direct product of the subdirectly irreducible quotients of A, all of which belong to the variety because A does. For this reason one often studies not the variety itself but just its subdirect irreducibles. An algebra A is subdirectly irreducible if and only if it contains two elements that are identified by every proper quotient, equivalently, if and only if its lattice Con A of congruences has a least nonidentity element. That is, any subdirect irreducible must contain a specific pair of elements witnessing its irreducibility in this way. Given such a witness (a, b) to subdirect irreducibility we say that the subdirect irreducible is (a, b)-irreducible. 
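For groups, congruences correspond to normal subgroups (to all subgroups in the abelian case), so the criterion just stated, a least nonidentity element of Con A, can be checked directly on small cyclic groups. The Python sketch below does this for Z/n: it lists the nontrivial subgroups and asks whether there is a unique minimal one, recovering the facts from the Examples section that a cyclic p-group such as Z/4 is subdirectly irreducible while Z/6 (isomorphic to Z/2 × Z/3) is not. The helper names are ad hoc for the illustration.

```python
def subgroup_generated_by(g, n):
    """Elements of the subgroup of Z/n generated by g."""
    return frozenset((g * k) % n for k in range(n))

def is_subdirectly_irreducible_cyclic(n):
    """Z/n is subdirectly irreducible iff its congruence (= subgroup) lattice
    has a least nontrivial element, i.e. a unique minimal nontrivial subgroup."""
    subgroups = {subgroup_generated_by(g, n) for g in range(n)}
    nontrivial = [H for H in subgroups if len(H) > 1]
    minimal = [H for H in nontrivial if not any(K < H for K in nontrivial)]
    return len(minimal) == 1

# Cyclic p-groups such as Z/4 and Z/9 are subdirectly irreducible, while
# Z/6 (isomorphic to Z/2 x Z/3) subdirectly decomposes and is not.
assert is_subdirectly_irreducible_cyclic(4)
assert is_subdirectly_irreducible_cyclic(9)
assert not is_subdirectly_irreducible_cyclic(6)
```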
Given any class C of similar algebras, Jónsson's lemma (due to Bjarni Jónsson) states that if the variety HSP(C) generated by C is congruence-distributive, its subdirect irreducibles are in HSPU(C), that is, they are quotients of subalgebras of ultraproducts of members of C. (If C is a finite set of finite algebras, the ultraproduct operation is redundant.) Applications A necessary and sufficient condition for a Heyting algebra to be subdirectly irreducible is for there to be a greatest element strictly below 1. The witnessing pair is that element and 1, and identifying any other pair a, b of elements identifies both a→b and b→a with 1 thereby collapsing everything above those two implications to 1. Hence every finite chain of two or more elements as a Heyting algebra is subdirectly irreducible. By Jónsson's Lemma, subdirectly irreducible algebras of a congruence-distributive variety generated by a finite set of finite algebras are no larger than the generating algebras, since the quotients and subalgebras of an algebra A are never larger than A itself. For example, the subdirect irreducibles in the variety generated by a finite linearly ordered Heyting algebra H must be just the nondegenerate quotients of H, namely all smaller linearly ordered nondegenerate Heyting algebras. The conditions cannot be dropped in general: for example, the variety of all Heyting algebras is generated by the set of its finite subdirectly irreducible algebras, but there exist subdirectly irreducible Heyting algebras of arbitrary (infinite) cardinality. There also exists a single finite algebra generating a (non-congruence-distributive) variety with arbitrarily large subdirect irreducibles. References Universal algebra Properties of groups
Subdirectly irreducible algebra
[ "Mathematics" ]
1,282
[ "Mathematical structures", "Universal algebra", "Properties of groups", "Fields of abstract algebra", "Algebraic structures" ]
17,401,600
https://en.wikipedia.org/wiki/Necessity%20good
In economics, a necessity good or a necessary good is a type of normal good. Necessity goods are products and services that consumers will buy regardless of changes in their income level, which makes demand for them relatively insensitive to income changes. As for any other normal good, a rise in income leads to a rise in demand, but for a necessity good the increase is less than proportional to the rise in income, so the proportion of expenditure on these goods falls as income rises. A good whose income elasticity of demand is positive but lower than unity (i.e. between zero and one) is a necessity good. This observation for food, known as Engel's law, states that as income rises, the proportion of income spent on food falls, even if absolute expenditure on food rises; the income elasticity of demand for food is therefore between zero and one. Some necessity goods are produced by a public utility. According to Investopedia, stocks of private companies producing necessity goods are known as defensive stocks. Defensive stocks are stocks that provide a constant dividend and stable earnings regardless of the state of the overall stock market. See also Basic needs Income elasticity of demand Wealth (economics) References Goods (economics)
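As a rough numerical illustration of the elasticity condition above, the snippet below computes an arc (midpoint) income elasticity of demand from two observations of income and quantity and classifies the good; the figures are made up for the example and the function names are purely illustrative.

```python
def income_elasticity(q0, q1, y0, y1):
    """Arc (midpoint) income elasticity of demand:
    percentage change in quantity divided by percentage change in income."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_y = (y1 - y0) / ((y0 + y1) / 2)
    return pct_q / pct_y

def classify(e):
    if e < 0:
        return "inferior good"
    if e < 1:
        return "necessity good (normal, income-inelastic)"
    return "luxury good (normal, income-elastic)"

# Hypothetical household: income rises from 40,000 to 50,000 while weekly
# food purchases rise only from 100 to 105 units.
e = income_elasticity(q0=100, q1=105, y0=40_000, y1=50_000)
print(round(e, 2), classify(e))  # ~0.22 -> the share of income spent on food falls (Engel's law)
```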
Necessity good
[ "Physics" ]
237
[ "Materials", "Goods (economics)", "Matter" ]
17,401,970
https://en.wikipedia.org/wiki/Airplane%20mode
Airplane mode (also known as aeroplane mode, flight mode, offline mode, or standalone mode) is a setting available on smartphones and other portable devices. When activated, this mode suspends the device's radio-frequency (RF) signal transmission technologies (i.e., Bluetooth, telephony and Wi-Fi), effectively disabling all analog voice, and digital data services, when implemented correctly by the electronic device software author. The mode is so named because most airlines prohibit the use of equipment that transmit RF signals while in flight. The Federal Communications Commission banned using most cell phones and wireless devices in 1991 because of interference concerns, although there is no scientific evidence of such. Typically, it is not possible to make phone calls or send messages in airplane mode, but some smartphones allow calls to emergency services. Most devices allow continued use of email clients and other mobile apps to write text or email messages. Messages are stored in memory to transmit later, once airplane mode is disabled. Wi-Fi and Bluetooth may be enabled separately while the device is in airplane mode, as allowed by the operator of the aircraft. Receiving RF signals (as by radio receivers and satellite navigation services) may not be inhibited by airplane mode; however, both transmitters and receivers are needed to receive calls and messages, even when not responding to them. Side effect Since a device's transmitters are shut down when in airplane mode, the mode reduces power consumption and increases battery life. Legal status in various nations Europe: On December 9, 2013, the European Aviation Safety Agency updated its guidelines on portable electronic devices (PEDs), allowing them to be used throughout the whole flight as long as they are set in Airplane mode. In November 2022, the EU announced its plans to enable 5G usage in airplanes using picocell, letting users make and receive calls and messages, and use data just as they would on the ground. On June 30, 2023, the deadline to implement was reached and airlines now can have picocells on airplanes. China: Prior to September 2017, cell phones, even with airplane mode, were never allowed to be used during the flight although other devices can be used while in cruising altitude. On September 18, 2017, the Civil Aviation Authority of China relaxed these rules and allowed all Chinese air carriers to allow the use of Portable Electronic Devices (PEDs) for the entire flight as long as they are in airplane mode. India: On April 23, 2014, the Directorate General of Civil Aviation (DGCA) amended the rule which bans use of portable electronic devices and allowing their usage in all phases of flight. United States: In a revised review in October 2013, the United States Federal Aviation Administration (FAA) made a recommendation on the use of electronic devices in airplane mode—cellular telephony must be disabled, while Wi-Fi may be used if the carrier offers it. Short-range transmission such as Bluetooth is permissible on aircraft that can tolerate it. The statement cites the common practice of aircraft operators whose aircraft can tolerate use of these personal electronic devices, but use may still be prohibited on some models of aircraft. See also Air gap (networking) Mobile phones on aircraft Picocell References External links Avionics Mobile phones
Airplane mode
[ "Technology" ]
665
[ "Avionics", "Aircraft instruments" ]
17,402,165
https://en.wikipedia.org/wiki/Benevolent%20dictator%20for%20life
Benevolent dictator for life (BDFL) is a title given to a small number of open-source software development leaders, typically project founders who retain the final say in disputes or arguments within the community. The phrase originated in 1995 with reference to Guido van Rossum, creator of the Python programming language. History Shortly after Van Rossum joined the Corporation for National Research Initiatives, the term appeared in a follow-up mail by Ken Manheimer to a meeting trying to create a semi-formal group that would oversee Python development and workshops; this initial use included an additional joke of naming Van Rossum the "First Interim BDFL". According to Rossum, the title was most likely created by Ken Manheimer or Barry Warsaw. In July 2018, Van Rossum announced that he would be stepping down as BDFL of Python without appointing a successor, effectively eliminating the title within the Python community structure. Usage BDFL should not be confused with the more common term for open-source leaders, "benevolent dictator", which was popularized by Eric S. Raymond's essay "Homesteading the Noosphere" (1999). Among other topics related to hacker culture, Raymond elaborates on how the nature of open source forces the "dictatorship" to keep itself benevolent, since a strong disagreement can lead to the forking of the project under the rule of new leaders. Referent candidates Organizational positions See also Design by dictator References Free software culture and documents Computer programming folklore Software engineering folklore
Benevolent dictator for life
[ "Engineering" ]
300
[ "Software engineering", "Software engineering folklore" ]
17,404,231
https://en.wikipedia.org/wiki/Conjugate%20Fourier%20series
In the mathematical field of Fourier analysis, the conjugate Fourier series arises by realizing the Fourier series formally as the boundary values of the real part of a holomorphic function on the unit disc. The imaginary part of that function then defines the conjugate series. The delicate questions of convergence of this series, and its relationship with the Hilbert transform, have been studied in detail. In detail, consider a trigonometric series of the form $f(\theta) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty} \left(a_n \cos n\theta + b_n \sin n\theta\right)$ in which the coefficients $a_n$ and $b_n$ are real numbers. This series is the real part of the power series $F(z) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty} (a_n - i b_n) z^n$ along the unit circle with $z = e^{i\theta}$. The imaginary part of $F(z)$ is called the conjugate series of $f$, and is denoted $\tilde f(\theta) = \sum_{n=1}^{\infty} \left(a_n \sin n\theta - b_n \cos n\theta\right)$. See also Harmonic conjugate References Fourier analysis Fourier series
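As a quick numerical sanity check of these standard formulas (a sketch only; the coefficient choices and variable names are invented for the example), one can evaluate a short trigonometric series, its conjugate series, and the boundary values of F(z), and confirm that the real and imaginary parts agree:

```python
import numpy as np

# Coefficients of f(theta) = a0/2 + sum a_n cos(n theta) + b_n sin(n theta)
a0 = 0.0
a = {1: 1.0}   # a_1 = 1   ->  cos(theta) term
b = {3: 0.5}   # b_3 = 0.5 ->  0.5*sin(3*theta) term

theta = np.linspace(0.0, 2 * np.pi, 1000)

def series(a0, a, b, theta):
    f = np.full_like(theta, a0 / 2)
    for n, an in a.items():
        f += an * np.cos(n * theta)
    for n, bn in b.items():
        f += bn * np.sin(n * theta)
    return f

def conjugate_series(a, b, theta):
    """Conjugate series: sum of a_n sin(n theta) - b_n cos(n theta)."""
    g = np.zeros_like(theta)
    for n, an in a.items():
        g += an * np.sin(n * theta)
    for n, bn in b.items():
        g -= bn * np.cos(n * theta)
    return g

f = series(a0, a, b, theta)
f_tilde = conjugate_series(a, b, theta)

# Boundary values of the power series F(z) = a0/2 + sum (a_n - i b_n) z^n at z = e^{i theta}
z = np.exp(1j * theta)
F = a0 / 2 + sum((a.get(n, 0) - 1j * b.get(n, 0)) * z**n for n in {*a, *b})

assert np.allclose(F.real, f)        # real part recovers the original series
assert np.allclose(F.imag, f_tilde)  # imaginary part recovers the conjugate series
```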
Conjugate Fourier series
[ "Mathematics" ]
146
[ "Mathematical analysis", "Mathematical analysis stubs" ]
17,404,279
https://en.wikipedia.org/wiki/Urban%20wildlife
Urban wildlife is wildlife that can live or thrive in urban/suburban environments or around densely populated human settlements such as towns. Some urban wildlife, such as house mice, are synanthropic, ecologically associated with and even evolved to become entirely dependent on human habitats. For instance, the range of many synanthropic species is expanded to latitudes at which they could not survive the winter outside of the shelterings provided by human settlements. Other species simply tolerate cohabiting around humans and use the remaining urban forests, parklands, green spaces and garden/street vegetations as niche habitats, in some cases gradually becoming sufficiently accustomed around humans to also become synanthropic over time. These species represent a minority of the natural creatures that would normally inhabit an area, and contain a large proportions of feral and introduced species as opposed to truly native species. For example, a 2014 compilation of studies (that were severely biased towards work in Europe with very few studies from south and south-east Asia) found that only 8% of native bird and 25% of native plant species were present in urban areas compared with estimates of non-urban density of species. Urban wildlife can be found at any latitude that supports human dwellings - the list of animals that will venture into urbanized human settlements to forage on horticultures or to scavenge from trash runs from monkeys in the tropics to polar bears in the Arctic. Different types of urban areas support different kinds of wildlife. One general feature of bird species that adapt well to urban environments is they tend to be the species with bigger brains, perhaps allowing them to be more behaviorally adaptable to the more volatile urban environment. Arthropods (insects, spiders and millipedes), gastropods (land snails and slugs), various worms and some reptiles (e.g. house geckos) can also thrive well in the niches of human settlements. Evolution Urban environments can exert novel selective pressures on organisms, sometimes leading to new adaptations. For example, the weed Crepis sancta, found in France, has two types of seed, heavy and fluffy. The heavy ones land near the parent plant, whereas the fluffy seeds float further away on the wind. In urban environments, seeds that float far often land on infertile concrete surfaces. Within about 5-12 generations the weed has been found to evolve to produce significantly more heavy seeds than its rural relatives. Among vertebrates, a case is urban great tits, which have been found to sing at a higher pitch than their rural relatives so that their songs stand out above the city noise, although this is probably a learned rather than evolved response. Urban silvereyes (an Australian bird) make contact calls that are higher frequency and slower than those of rural silvereyes. As it appears that contact calls are instinctual and not learnt, this has been suggested as evidence that urban silvereyes have undergone recent evolutionary adaptation so as to better communicate in noisy urban environments. Animals that inhabit urban environments have differences in morphology, physiology and behavior when compared to animals that inhabit less urbanized areas. Hormone-mediated maternal effects are capable mechanisms of offspring phenotypic developmental modification. For instance, when female birds deposit androgens into their eggs, this affects many diverse aspects of offspring development and phenotype. 
Environmental factors that can influence the concentration of androgens in avian eggs include nest predation risk, breeding density, food abundance and parasite prevalence, all factors of which differ between urban and natural habitats. In a study that compared antibody and maternal hormone concentrations in eggs between an urban population and a forest population of European blackbirds, there were found to be clear differences in yolk androgen concentrations between the two populations. Although these differences cannot be attributed definitively (more studies have to be performed), they might result from different environments causing females to plastically adjust yolk androgens. Different yolk androgen levels are likely to program offspring phenotype. Plant genetic variation has an influence on herbivore population dynamics and other dependent communities. Conversely, different arthropod genotypes have varying abilities to live on different host plant species. Differential reproduction of herbivores could lead to adaptation to particular host plant genotypes. For instance, in two experiments that examined local adaptation and evolution of a free-feeding aphid (Chaitophorus populicola) in response to genetic variants of its host plant (Populus angustifolia), it was found that, 21 days (about two aphid generations) after aphid colony transplantation onto trees from foreign sites, aphid genotype composition had changed. In the experiments, tree cuttings and aphid colonies were collected from three different sites and used to conduct a reciprocal transplant experiment. Aphids that were transplanted onto trees from the same site produced 1.7-3.4 times as many offspring as aphids that were transplanted onto trees from different sites. These two results indicate that activities of human perturbation that cause plant evolution may also result in evolutionary responses in interacting species that could escalate to affect entire communities. Wildlife species that inhabit urban areas often experience shifts in food and resource availability. Some species, at times, must resort to human handouts or even human refuse as a source of food. One animal notorious for relying on such means for nutritional intake is the American white ibis. In a study that tested physiological challenge, the innate and adaptive immunity of two groups of white ibis (both consisting of 10 white ibis nurtured in captivity), one group being fed a simulated anthropogenic diet and the other being fed a natural ibis diet, it was determined that the wildlife consumption of a diet with anthropogenic components (such as white bread) may be detrimental to a species’ ability to battle bacterial pathogens. Human–wildlife conflict While urban areas tend to decrease the overall biodiversity of species within the city, most cities retain the flora and fauna characteristic of their geographic area. As rates of urbanization and city sprawl increase worldwide, many urban areas sprawl further into wildlife habitat, causing increased human-wildlife encounters and the potential for negative and conflict-based encounters. Humans have lived alongside and near wild animals for centuries, but the expansion of the study of urban ecology has allowed for new information surrounding human-wildlife interactions. Human wildlife conflict can be categorized into disease transmission, physical attacks, and property damage, and can be inflicted by a range of wildlife, from predatory tigers to grain-eating rodents. 
Benefits of human–wildlife interactions While negative human-wildlife conflicts can be damaging to the physical health of humans or property, human-wildlife interactions can be extremely beneficial in terms of ecosystem health and cultural experiences. The presence of native species allows systems and food chains to function in a healthy way, providing ecosystem services to the humans living around these areas. These services include the provisioning of food and water, flood control, cultural services, and nutrient cycling. Due to those perceived benefits, urban rewilding is now an active movement. Costs of conflict The most direct impacts of human-wildlife conflict include loss of livelihood due to property damage, loss of possessions due to property damage, injury, or transmission of disease from wildlife to humans. After the direct impacts of conflict, however, the people facing human-wildlife conflict are left with long-term issues including opportunity costs and long-term fear of wildlife. Conflicts between human and wildlife are most likely to occur in areas intermediate between rural and entirely urban landscapes, and these interactions are most likely to involve species with broad diets able to live in areas with high populations. Some areas are subject to more extreme conflicts between humans and wildlife, such as in Mozambique and Namibia, where more than 100 people are killed each year by crocodiles. In Asia and Africa, many communities are also subject to 10-15% loss of agricultural output to elephants. Disease transmission is also significant in cases of human-wildlife conflict, where sprawling cities can expand into environments that increase exposure to hosts of vector-borne diseases, causing large outbreaks in cities with greater density of people. Modern examples of disease outbreaks from wildlife include the H5N1 virus (originating from and spread via birds) and SARS-CoV-2 (likely originated as a bat virome before jumping species), with the latter causing the COVID-19 pandemic that wrought significant global economic, political, and sociological turmoil within one year from its outbreak. Conflict management At the center of human-wildlife conflicts in urban areas are social attitudes towards wildlife encounters. A certain community's perception of risk of wildlife encounter greatly impacts their attitude towards wildlife, particularly in situations where livelihoods or safety are at risk. Many cutting-edge wildlife conflict management proposals include education programs to inform the public of both the risks and benefits of interacting with urban wildlife, and how to prevent hysteria and future negative encounters. Furthermore, conflict management includes addressing the hidden impacts of wildlife conflict, such as the disruption psychosocial wellbeing, disruption of livelihood and food sources, and food insecurity. Broadly distributed Some urban species have a cosmopolitan (i.e. nonselective) distribution, in some cases almost global. They include cockroaches, silverfish, house mice, black/brown rats, house sparrows, rock doves and feral populations of domestic species. Africa As Africa becomes increasingly urbanized, native animals are exposed to this new environment with the potential of uniquely African urban ecologies developing. In the Cape Town urban area in South Africa, there is increasing conflict between human development and nearby populations of chacma baboons due to their growing dependence on tourists and the urban environment as sources of food. 
Elsewhere in Africa, vervet monkeys and baboons adapt to urbanization and similarly enter houses and gardens for food. African penguins are also known to invade urban areas, searching for food and a safe place to breed, even nesting inside storm drains; Simon's Town, which is near the popular Boulders Beach, actually had to take action to restrict penguin movement due to the noise and damage they caused. There are reports of leopards roaming suburban areas in cities such as Nairobi, Kenya and Windhoek, Namibia. Reptiles like the house gecko (Hemidactylus) can be found in houses. Artificial wetness brought about by swimming pools and watered lawns alongside supplementary feeding has made urban areas conducive for waterbirds such as African woolly-necked storks and hadada ibises in South Africa. Australia and New Guinea Urban areas in Australia are a particularly fruitful habitat type for many wildlife species. Australian cities are hotspots for threatened species diversity and have been shown to support more threatened animal and plant species on a per unit-area basis than all other non-urban habitat types. An analysis of urban sensitive bird species (birds that are easily disturbed and displaced) found that revegetation was effective at encouraging birds back into urban greenspaces, but also found that weed control was not. Invasive plant species such as Lantana (L. camara) actually provides refuge for some bird species, such as the superb fairywren (Malurus cyaneus) and silvereye (Zosterops lateralis), in the absence of native plant equivalents. Some species of native animals in Australia, such as various bird species including the Australian magpie, crested pigeon, rainbow lorikeet, willie wagtail, laughing kookaburra and tawny frogmouth, are able to survive as urban wildlife, although introduced birds such as the Old World sparrow are more common in the centre of larger cities. In Queensland and parts of New Guinea, the local cassowary population has also shown behavioural changes to better adapt in the urban environment as their original rainforest habitats decline in size. These birds were far more alert and rested less than the more 'wild' counterparts and had quickly adapted to foraging on human waste as it offers a greater reward in food bounties. The Australian white ibis has reached pest status in parts of Australia, necessitating the killing of eggs in an effort to control the species. The urbanisation of these birds have made the cassowaries the largest urbanised birds in the world. Some of the most resilient small marsupial species, including the common ringtail/brushtail possum, sugar glider and northern brown bandicoot, and some megabats such as the grey-headed flying fox have also adapted somewhat to the urban/suburban environment. Nevertheless, there are many threats to urban areas in Australia such as habitat loss and fragmentation, invasive species (such as cats and Indian mynas), pest species (such as noisy miners), invasive weeds and other disturbances that accompany intensive human land use. If biodiversity is to flourish in urban areas, efforts at the community scale thorough initiatives such as Land for Wildlife and private land conservation, as well as policy and management efforts through restricting land clearing and providing incentives to retain nature in cities is needed. Japan Although culled aggressively in most of Japan for being a pest, the Sika deer is, for religious reasons, protected in the city of Nara and has become part of the urban environment. 
Due to the denseness of Japanese cities, birdlife is not as common as other parts of the world, though typical urban birds such as crows, sparrows, and gulls have adapted well. The declining human population in several urban and rural settings in Japan has led to federal plans to prevent species reestablishment or remove recolonized animals capable of increasing human-wildlife conflict. Hawaii The urban birdlife of Hawaii is dominated by introduced species, with native species largely remaining only in preserved areas. New Zealand The birdlife in the most urban parts of New Zealand is dominated by introduced species, with bush fragments in the less urbanized areas allowing native species to cohabit. India In parts of India, monkeys, such as langurs, have been known to enter cities for food and cause havoc in food markets when they steal fruit from vendors. In Mumbai, leopards have been reported to enter neighbourhoods surrounding Sanjay Gandhi National Park and kill several people; the park itself is besieged by a surrounding burgeoning population as poaching and illegal woodcutting is rife there. In Mount Abu, Rajasthan, sloth bears have grown accustomed to entering the town throughout the year to feed on hotel waste in open rubbish bins and injure several people each year in chance encounters. Persisting green patches have helped retain over 100 bird species in Delhi. Also, the ponds there have been invaluable to support a very diverse bird community helped partly by management interventions that included islands and greening around the wetlands to make the wetlands themselves attractive for people. Ponds constitute 0.5% of the city's land area but support 37% of all bird species ever documented there, suggesting that even highly-populated cities can be important bird refuges if small habitat patches are retained. A large number of waterbirds nest on trees in Indian cities, benefitting from people's positive attitudes towards the birds despite the noise and smell around such breeding sites. The painted stork (Mycteria leucocephala) breeding colonies in the National Zoological Park in Delhi have been studied for over three decades. Small cities in India frequently retain substantial green cover, enabling the nesting of large numbers of waterbirds, especially the more common/widespread species of egrets like western cattle egrets (Bubulcus ibis) and little egrets (Egretta garzetta). Small cities with artificial wetlands can support substantial numbers of a diverse community of roosting and nesting waterbirds, like in the city of Udaipur, Rajasthan, where artificial large lakes were constructed to help cool the city during the summertime. Waterbird species nesting in these lakes include several herons like the Indian pond heron (Ardeola grayii) and the cattle egret), storks like the Asian openbill (Anastomus oscitans), and ibises like the red-naped ibis (Pseudibis papillosa). In the city of Pune, bird diversity is being negatively impacted by the spread of an exotic invasive tree, Prosopis juliflora. The patchwork of vegetation (both native and exotic) alongside natural relief and associated habitats such as scrub and grasslands, when juxtaposed with urban elements such as open plots readied for development, can create conditions to support a relatively large number of bird species such as in Udaipur city, India. 
It appears more likely for such conducive conditions for birds to be created in smaller cities due to their retaining green patches and other more natural aspects relative to the much more heavily urbanized larger cities. Europe Many towns in the United Kingdom have urban wildlife groups that work to preserve and encourage urban wildlife. One example is Oxford. Outside Urban areas range from fully urban – areas having little green space and mostly covered by paving, tarmac, or buildings – to suburban areas with gardens and parks. Pigeons are found scavenging on scraps of food left by humans and nesting on buildings, even in the most urban areas, as the tall buildings resemble their natural rocky homes in the mountains. Rats can also be found scavenging on food. Gulls of various types also breed and scavenge in various U.K. cities. A study by bird biologist Peter Rock – Europe's leading authority on urban gulls – on the rise of herring gulls and lesser black-backed gulls in Bristol has discovered that in 20 years the city's colony has grown from about 100 pairs to more than 1,200. From a gull's point of view, buildings are simply cliff-sided islands, with no predators and much nearby food. The trend is the same in places as far apart as Gloucester and Aberdeen. With an endless supply of food, more city chicks survive each year, and become accustomed to urban living. They in turn breed even more birds, with less reason to undertake a winter migration. Waterfowl such as ducks, coots, geese, swans, and moorhens thrive in gardens and parks with access to water. Small populations can form around fountains and other ornamental features, far from natural bodies of water, provided there are adequate amounts of food such as aquatic plants growing in the fountain. In the United Kingdom, improvements in water quality in urban areas have coincided with reintroduction and conservation projects for the Eurasian otter, resulting in frequent sightings of these animals in urban and suburban environments. Otters have been recorded in settlements of a variety of sizes, ranging from large towns and small cities such as Andover, Inverness and Exeter, to major cities such as London, Manchester, Birmingham and Edinburgh. A study was conducted on great tits living in ten European cities and ten nearby forests in which an analysis was made of the way the birds used songs to attract mates and establish territorial boundaries. Hans Slabbekoorn of Leiden University in the Netherlands said that city birds adapt to life by singing faster, shorter and higher-pitched songs in cities compared to forests. The forest birds sing low and slow. Great tits living in noisy cities have to compete with the low-frequency sounds of heavy traffic, which means their songs go up in pitch to make themselves heard. A bird that sang like Barry White in the forest sounded more like Michael Jackson in the city. The advent of these animals has also drawn a predator, as peregrine falcons have also been known to nest in urban areas, nesting on tall buildings and preying on pigeons. The peregrine falcon is becoming more nocturnal in urban environments, using urban lighting to spot its prey. This has provided them with new opportunities to hunt night-flying birds and bats. Red foxes are also in many urban and suburban areas in the U.K. as scavengers. They scavenge and eat insects and small vertebrates such as pigeons and rodents. People also leave food for them to eat in their gardens. 
One red fox was even found living at the top of the then-partially completed Shard in 2011, having climbed the stairwell to reach its temporary home some 72 stories above ground. In some cases, even large animals have been found living in cities. Berlin has wild boars. Wild roe deer are becoming increasingly common in green areas in Scottish towns and cities, such as in the Easterhouse suburb of Glasgow. Urban waterways can also contain wildlife, including large animals. In London, since improvements in water quality in the Thames, seals and porpoises have been seen in its waters in the center of the city. Inside houses Numerous animals can also live within buildings. Insects that sometimes inhabit buildings include various species of small beetles such as ladybirds, which often seek refuge inside buildings during the winter months, as well as cockroaches and houseflies. North America Many North American species have successfully adapted to urban and suburban environments and are thriving. Typical examples include urban coyotes, the top predator of such regions. Other common urban animals include predators such as (especially) red foxes, grey foxes, and bobcats that prey on small animals such as rodents. Omnivores such as raccoons, Virginia opossums, and striped skunks are abundant, but seldom seen, due to their elusive and nocturnal nature. In the south and southeastern United States and Mexico, the nine-banded armadillo also fills this niche, but due to the armadillo's lack of thick fur, they are unable to thrive in more northern climates. Squirrels, including the American red squirrel, fox squirrel, and especially the eastern grey squirrel are extremely common in areas with enough trees. Herbivores forage in the early morning and evening, with cottontail rabbits, and, in dryer parts of the country, jackrabbits, as well as the two most common deer species in North America: the white-tailed deer and the mule deer. Shy of humans, deer are often spotted as a mother with fawns, or a lone buck creeping through the trees and bushes. As whitetails prefer forest edge and meadow to actual dense forest, the cutting of forests has actually made more habitat for the white-tailed deer, which has increased its numbers beyond what they were at when Europeans arrived in America. In some cities, older deer seem to have learned how to cross streets, as they look back and forth looking for cars while crossing roads, while fawns and younger deer will recklessly run out without looking; most traffic accidents involving deer happen with deer that have just left their mother, and are less likely to watch for cars. Red-tailed hawks are a common sight in urban areas, with individuals such as Pale Male being documented nesting and raising chicks in New York City since at least the 1990s. The American alligator, a once-threatened species that was saved from extinction through farming and conservation, can frequently be found in the southern United States living in open areas with access to water, such as golf courses and parks, in its native range. These animals living in urban areas usually come into conflict with humans, as some of them will open garbage bags in search of food, eat food left out for pets, prey on unattended pets, feed on prized garden plants, dig up lawns or become traffic hazards when they run out into the road. 
There are media accounts of alligators being found in sewer pipes and storm drains, but so-called "sewer alligators" are unlikely to sustain a breeding population in such environments, due to a lack of a place to bury their eggs and food. Urban wildlife is often considered a nuisance, with local governments being tasked to manage the issue. In 2009, a large blobby mass made of colonies of tubifex worms was found to be living in the sewers of Raleigh, North Carolina. Revealed by a snake camera inspection of sewer piping under the Cameron Village shopping center, videos of the creature went viral on YouTube in 2009 under the name "Carolina poop monster". Animals known to dwell within human habitations in the US include house centipedes (Scutigera coleoptrata) and firebrats. South America Marmosets can be found living wild in city parks in Brazil. Urban-dwelling marmosets tend to return more often to the same sleeping sites than jungle-dwelling marmosets. Urban-dwelling marmosets tend to prefer to sleep in tall trees with high branches and smooth bark. It has been suggested they do this to avoid cats. Human-wildlife conflicts in urban areas are increasing in several South American countries, with species that include jaguars, pumas, capybaras, and wild boars. Urban expansion has led to a novel and underreported challenge to wildlife: an increase in the demand for wild meat that includes several taxa such as birds, turtles, small mammals and caimans. See also Urban ecology Wild animal suffering References External links Wild in the City, a National Film Board of Canada documentary on urban wildlife in Vancouver Wildlife
Urban wildlife
[ "Biology" ]
5,111
[ "Animals", "Wildlife" ]
17,404,454
https://en.wikipedia.org/wiki/Engineering%20fit
Engineering fits are generally used as part of geometric dimensioning and tolerancing when a part or assembly is designed. In engineering terms, the "fit" is the clearance between two mating parts, and the size of this clearance determines whether the parts can, at one end of the spectrum, move or rotate independently from each other or, at the other end, are temporarily or permanently joined. Engineering fits are generally described as a "shaft and hole" pairing, but are not necessarily limited to just round components. ISO is the internationally accepted standard for defining engineering fits, but ANSI is often still used in North America. ISO and ANSI both group fits into three categories: clearance, location or transition, and interference. Within each category are several codes to define the size limits of the hole or shaft – the combination of which determines the type of fit. A fit is usually selected at the design stage according to whether the mating parts need to be accurately located, free to slide or rotate, separated easily, or resist separation. Cost is also a major factor in selecting a fit, as more accurate fits will be more expensive to produce, and tighter fits will be more expensive to assemble. Methods of producing work to the required tolerances to achieve a desired fit range from casting, forging and drilling for the widest tolerances through broaching, reaming, milling and turning to lapping and honing at the tightest tolerances. ISO system of limits and fits Overview The International Organization for Standardization system splits the three main categories into several individual fits based on the allowable limits for hole and shaft size. Each fit is allocated a code, made up of a number and a letter, which is used on engineering drawings in place of upper & lower size limits to reduce clutter in detailed areas. Hole and shaft basis A fit is either specified as shaft-basis or hole-basis, depending on which part has its size controlled to determine the fit. In a hole-basis system, the size of the hole remains constant and the diameter of the shaft is varied to determine the fit; conversely, in a shaft-basis system the size of shaft remains constant and the hole diameter is varied to determine the fit. The ISO system uses an alpha-numeric code to illustrate the tolerance ranges for the fit, with the upper-case representing the hole tolerance and lower-case representing the shaft. For example, in H7/h6 (a commonly-used fit) H7 represents the tolerance range of the hole and h6 represents the tolerance range of the shaft. These codes can be used by machinists or engineers to quickly identify the upper and lower size limits for either the hole or shaft. The potential range of clearance or interference can be found by subtracting the smallest shaft diameter from the largest hole, and largest shaft from the smallest hole. Types of fit The three types of fit are: Clearance: The hole is larger than the shaft, enabling the two parts to slide and / or rotate when assembled, e.g. piston and valves Location / transition: The hole is fractionally smaller than the shaft and mild force is required to assemble / disassemble, e.g. Shaft key Interference: The hole is smaller than the shaft and high force and / or heat is required to assemble / disassemble, e.g. 
Bearing bush Clearance fits For example, using an H8/f7 close-running fit on a 50mm diameter: H8 (hole) tolerance range = +0.000mm to +0.039mm f7 (shaft) tolerance range = −0.050mm to −0.025mm Potential clearance will be between +0.025mm and +0.089mm Transition fits For example, using an H7/k6 similar fit on a 50mm diameter: H7 (hole) tolerance range = +0.000mm to +0.025mm k6 (shaft) tolerance range = +0.002mm to +0.018mm Potential clearance / interference will be between +0.023mm and −0.018mm Interference fits For example, using an H7/p6 press fit on a 50mm diameter: H7 (hole) tolerance range = +0.000mm to +0.025mm p6 (shaft) tolerance range = +0.026mm to +0.042mm Potential interference will be between −0.001mm and −0.042mm. Useful tolerances Common tolerances for sizes ranging from 0 to 120 mm ANSI fit classes (US only) Interference fits Interference fits, also known as press fits or friction fits, are fastenings between two parts in which the inner component is larger than the outer component. Achieving an interference fit requires applying force during assembly. After the parts are joined, the interference puts the mating surfaces under pressure, and some deformation of the completed assembly will be observed. Force fits Force fits are designed to maintain a controlled pressure between mating parts, and are used where forces or torques are being transmitted through the joining point. Like interference fits, force fits are achieved by applying a force during component assembly. FN 1 to FN 5 Shrink fits Shrink fits serve the same purpose as force fits, but are achieved by heating one member to expand it while the other remains cool. The parts can then be easily put together with little applied force, but after cooling and contraction, the same dimensional interference exists as for a force fit. Like force fits, shrink fits range from FN 1 to FN 5. Location fits Location fits are for parts that do not normally move relative to each other. Location interference fit LN 1 to LN 3 (or LT 7 to LT 21?) Location transition fit LT 1 to LT 6 A location fit provides a comparatively closer fit than a sliding fit. Location clearance fit LC 1 to LC 11 RC fits The smaller RC numbers have smaller clearances for tighter fits; the larger numbers have larger clearances for looser fits. RC1: close sliding fits Fits of this kind are intended for the accurate location of parts which must assemble without noticeable play. RC2: sliding fits Fits of this kind are intended for accurate location, but with greater maximum clearance than class RC1. Parts made to this fit turn and move easily, but the fit is not designed to run freely. Sliding fits in larger sizes may seize with small temperature changes because they leave little allowance for thermal expansion or contraction. RC3: precision running fits Fits of this kind are about the closest fits which can be expected to run freely. Precision running fits are intended for precision work at low speeds, low bearing pressures, and light journal pressures. RC3 is not suitable where noticeable temperature differences occur. RC4: close running fits Fits of this kind are mostly for running fits on accurate machinery with moderate surface speeds, bearing pressures, and journal pressures where accurate location and minimum play are desired. Fits of this kind can also be described as having smaller clearances and higher requirements for fit precision.
RC5 and RC6: medium running fits Fits of this kind are designed for machines running at higher speeds, with considerable bearing pressures and heavy journal pressure. Fits of this kind can also be described as having greater clearances with ordinary requirements for fit precision. RC7: free running fits Fits of this kind are intended for use where accuracy is not essential. They are suitable where large temperature variations occur, and where there are no special requirements for the precise guiding of the shaft in its hole. RC8 and RC9: loose running fits Fits of this kind are intended for use where wide commercial tolerances may be required on the shaft. With these fits, the parts have large clearances and wide tolerances. Loose running fits may be exposed to the effects of corrosion, contamination by dust, and thermal or mechanical deformation. See also Coiled spring pins Engineering tolerance Geometric dimensioning and tolerancing Interchangeable parts Statistical interference References Mechanical engineering Metalworking terminology
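To make the arithmetic of the worked H8/f7, H7/k6 and H7/p6 examples above explicit, here is a small Python sketch that computes the possible clearance range for a fit from the hole and shaft deviation limits. The function name is purely illustrative; the deviation values are the ones quoted above for a 50 mm diameter.

```python
def fit_range(hole_limits_mm, shaft_limits_mm):
    """Return (min, max) clearance in mm for a hole/shaft pairing.

    Positive values mean clearance, negative values mean interference.
    Minimum clearance = smallest hole - largest shaft;
    maximum clearance = largest hole - smallest shaft.
    """
    hole_lo, hole_hi = sorted(hole_limits_mm)
    shaft_lo, shaft_hi = sorted(shaft_limits_mm)
    return (round(hole_lo - shaft_hi, 3), round(hole_hi - shaft_lo, 3))

# 50 mm H8/f7 close-running (clearance) fit
print(fit_range((0.000, 0.039), (-0.050, -0.025)))   # (0.025, 0.089)

# 50 mm H7/k6 transition fit
print(fit_range((0.000, 0.025), (0.002, 0.018)))      # (-0.018, 0.023)

# 50 mm H7/p6 press (interference) fit
print(fit_range((0.000, 0.025), (0.026, 0.042)))      # (-0.042, -0.001)
```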
Engineering fit
[ "Physics", "Engineering" ]
1,622
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
17,404,706
https://en.wikipedia.org/wiki/Secret%20Dubai
Secret Dubai was one of the most popular independent blogs in Dubai, United Arab Emirates in the 8 years of its operation, from 2002 until 2010. Launched in 2002 and written by an unidentified expatriate, Secret Dubai generated a significant following in the Middle East Blogosphere until the UAE's Telecoms Regulatory Authority (TRA) blocked the website in the UAE. In 2007, it won a Bloggie award for Africa and the Middle East. The blog's last entry date is April 2014. References 7DAYS Report External links Official Website Emirati news websites Internet censorship in the United Arab Emirates 2002 establishments in the United Arab Emirates
Secret Dubai
[ "Technology" ]
130
[ "Computing stubs", "World Wide Web stubs" ]
17,406,071
https://en.wikipedia.org/wiki/Global%20Environmental%20Politics
Global Environmental Politics (GEP) is a quarterly peer-reviewed academic journal which examines the relationship between global political forces and environmental change. It covers such topics as the role of states, international finance, science and technology, and grass roots movements. Issues of Global Environmental Politics are divided into three types of articles: short commentaries for a section called Current Debates/Forum, full-length research articles, and book review articles. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.8, ranking it 7th out of 160 journals in the category "International Relations". Scope Articles published in Global Environmental Politics include issues concerning certain countries and small groups within those countries, but they must address environmental disputes that are relevant on a global scale. Due to the primary focus of political and policy issues discussed in GEP articles, the range of reader and author backgrounds is presumed and expected. The range of submissions focuses on how local-global interactions affect the natural environment, as well as how environmental change affects world politics. The articles published address issues like poverty and inequality, norms and institutions, and economic relationships. The scope of articles also includes specific environmental issues, for example, ozone depletion, climate change, and deforestation. GEP also offers an "Early Access" submission option. The Early Access option applies to articles that have been accepted for publication and copyedited, but are not yet finished. These articles are displayed online for durations spanning from weeks to months. They are only to be replaced once the final version is completed and its issue is published. The Early Access option allows the peer review process to begin, increasing the opportunities for feedback and displaying what an uncorrected proof looks like before it is ready for publication. This helps establish the standards for what GEP accepts as an uncorrected proof and expects once it is finalized, while also creating transparency in the editing process that benefits peer reviewers. Editorial History The journal was established in 2000 and is published by MIT Press online . The founding editor was Peter Dauvergne. Jennifer Clapp and Matthew Paterson were the co-editors 2007 through 2012, and Kate O'Neill and Stacy VanDever led the journal 2013–2017. The editors for 2018-2022 were Steven Bernstein, Matthew Hoffmann, and Erika Weinthal. Currently, Susan Park, Henrik Selin and D.G. Webster edit the journal. Current Debates/Forum Originally called "Current Debates", the emphasis for this section of the article was shifted when the new editorial team consisting of Jennifer Clapp and Matthew Paterson took over the editorial board. The shorter articles in the Forum section are included in the journal as a means to encourage debate as well as future research. They include new theoretical or historical insights, emerging environmental issues, and discussion of controversial developments in environmental policy. Some issues of Global Environmental Politics feature numerous articles discussing a single topic while others contain only one article with the goal of inciting debate on a range of connected issues. These forum articles comprise short commentaries (2000-3000 words) that prompt discussion on salient issues of interest to other readers and scholars in the field. 
Research Articles The journal hosts full-length research articles that provide an academic setting for original theoretical or empirical contributions relating to global environmental or comparative politics on a global scale. Research Articles are full-length papers of a maximum of 8000 words, including footnotes and bibliography, that must contain original first-party research. Each journal edition typically consists of four to six articles. Book reviews Each edition of Global Environmental Politics contains an array of book reviews pertaining to global political forces and environmental changes. The current book review editor is Elizabeth DeSombre. The book review process consists of the editor choosing a number of books per journal edition, for which a reviewer may submit a single book review or a review essay. Review essays contain a collective analysis of multiple books on one topic which have been previously outlined by the review editor. Abstracting and indexing GEP is indexed in sources including: Academic Search BIOBASE CNKI, China Current Awareness in Biological Sciences Current Contents EBSCO Discovery EconLit Environment Index Environmental Sciences and Pollution Management GEOBASE International Political Science Abstracts Political Science Complete Pollution Abstracts ProQuest Summon Public Affairs Index Scopus Social Sciences Citation Index See also List of political science journals MIT Press Jstor References External links Environmental social science journals International relations journals MIT Press academic journals Quarterly journals English-language journals Academic journals established in 2000 Political science journals Globalization-related journals
Global Environmental Politics
[ "Environmental_science" ]
904
[ "Environmental social science journals", "Environmental social science" ]
17,406,125
https://en.wikipedia.org/wiki/Grey%20Room
Grey Room is a peer-reviewed academic journal published quarterly, in print and online, by the MIT Press. Founded in 2000, it includes work in the fields of architecture, art, media, and politics. To date it has featured contributions by such prominent historians and theorists as Yve-Alain Bois, Judith Butler, Georges Canguilhem, Jonathan Crary, Hubert Damisch, T. J. Demos, Friedrich Kittler, Chantal Mouffe, Antonio Negri, Jennifer Roberts, Bernhard Siegert, Paolo Virno, Paul Virilio, and Samuel Weber. Beginning with issue #51, the composition of the editorial board changed. Founding editors Branden W. Joseph, Reinhold Martin, and Felicity Scott, and editors Karen Beckman and Tom McDonough, resigned from the editorial board after issue #50 and assumed roles on the advisory board of the journal. Zeynep Çelik Alexander, Lucia Allais, Eric de Bruyn, Gabriella Coleman (since resigned from the editorial board), Noam M. Elcott, John Harwood, Byron Hamann, and Matthew C. Hunter have served as editors since. External links Official website Architecture journals MIT Press academic journals Quarterly journals Academic journals established in 2000 English-language journals
Grey Room
[ "Engineering" ]
258
[ "Architecture stubs", "Architecture" ]
17,406,353
https://en.wikipedia.org/wiki/AUFS
Absorbance Units Full Scale (AUFS) or Absorption Units Full Scale is a unit of absorbance intensity that denotes the output of a spectrophotometer. The acronym AUFS can also be written out as Absorbance Units per Full Scale Deflection. Usage AUFS is an arbitrary unit of the maximum ultraviolet or visible light absorbance intensity measured by a detector. It can be used in chemical analysis to quantify components in a mixture, as each component's integrated peak area corresponds to their relative abundance. AUFS is given as a number ranging from 0 to 1, where a measurement of 1 AUFS indicates an absorbance reading of 1 at full deflection. Application areas Analytical chemistry Chromatography See also Spectroscopy References Analytical chemistry Chromatography
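The link between integrated peak area and relative abundance mentioned above can be illustrated with a small calculation. The sketch below is not part of any AUFS definition; the component names and peak-area values are hypothetical, and it simply expresses each peak's area as a percentage of the total:

```cpp
// Illustrative only: computes the relative abundance of mixture components
// from hypothetical integrated chromatographic peak areas.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Hypothetical (name, integrated peak area) pairs from a chromatogram.
    std::vector<std::pair<std::string, double>> peaks = {
        {"component A", 154.2},
        {"component B", 87.6},
        {"component C", 23.1},
    };

    // Total area across all integrated peaks.
    double total = 0.0;
    for (const auto& [name, area] : peaks) total += area;

    // Each component's share of the total area is its relative abundance.
    for (const auto& [name, area] : peaks) {
        double percent = 100.0 * area / total;
        std::cout << name << ": " << percent << "% of total peak area\n";
    }
    return 0;
}
```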
AUFS
[ "Chemistry" ]
157
[ "Chromatography", "Analytical chemistry stubs", "nan", "Separation processes" ]
17,408,349
https://en.wikipedia.org/wiki/Ophel
Ophel is the biblical term given to a certain part of a settlement or city that is elevated from its surroundings, and probably means fortified hill or risen area. In the Hebrew Bible, the term is in reference to two cities: Jerusalem (as in 2 Chronicles 27, 2 Chronicles 33, Nehemiah 3, and Nehemiah 11) and Samaria (mentioned in 2 Kings 5). The Mesha Stele, written in Moabite, a Canaanite language closely related to Biblical Hebrew, is the only extra-biblical source using the word, also in connection to a fortified place. Meaning of the term Ophel, with the definite article ha-ophel, is a common noun known from two Canaanitic languages, Biblical Hebrew and Moabitic. As a place name or description it appears several times in the Hebrew Bible and once on the Mesha Stele from Moab. There is no ultimate agreement as to its exact meaning, and scholars have long been trying to deduce it from the different contexts it appears in. When used as a common noun, it has been translated as "tumors", and in a verbal form it was taken to mean "puffed up", indicating that the root might be associated with "swelling". When referring to a place, it seems from the context to mean either hill, or fortified place, or a mixture of the two, i.e., a fortified hill, and by considering the presumed meaning of the root, it might signify a "bulging or rounded" fortification. In several biblical verses it has been translated either as "fortified place" (tower, citadel, stronghold, etc.) or as "hill". On the Mesha Stele, named for the king of Moab who erected it, Mesha says: "I built Q-R-CH-H (Karhah?), the wall of ye'arim [forests], and the wall of ophel and I built its gates and I built its towers." Here ophel is commonly translated as "citadel". Jerusalem ophel Hebrew Bible The location of the ophel of the Hebrew Bible is easy to make out from the references from 2 Chronicles and Nehemiah: it was on the eastern ridge, which descended south of Solomon's Temple, and probably near the middle of it. In current terms, the still extant Herodian cased-in Temple Mount is bordered to the south by a saddle, followed by the ridge in question, also known as the southeastern hill, which stretches down to the King's Garden and the lower Pool of Siloam. If the ophel was, as it seems, close to its centre, the use of the term "ophel ridge" for the entire southeastern hill, including the saddle, seems to be wrong. Two kings of Judah, Jotham and Manasseh, are described as having massively strengthened the ophel fortifications in 2 Chronicles 27:3 and 33:14, leading to the conclusion that this must have been an area of great strategic importance, and either very close to or identical with the "stronghold of Zion" conquered and reused by David (2 Samuel 5:7). Josephus' ophlas Josephus, writing about the First Jewish–Roman War (66–70 CE), uses the Graecised form ophlas, and places it slightly higher up the eastern ridge from the First Temple-period ophel, touching the "eastern cloister of the temple" (Jewish Wars, V, iv, 2) and in the context of "the temple and the parts thereto adjoining ... and the ... 'Valley of the Cedron'" (Jewish Wars, V, iv, 1). This takes us to the area of the saddle right next to the southeast corner of Herod's Temple Mount. Wadi Hilweh excavation Benjamin Mazar and Eilat Mazar began excavating an area identified as Jerusalem's ophel, lying on the rise to the north of the Wadi Hilweh neighbourhood, in 1968. The term is commonly used by archaeologists with this meaning.
The excavation work was a joint project of Hebrew University, in cooperation with the Israel Antiquities Authority, the Israel Nature and Parks Authority, and the East Jerusalem Development Company, with funding provided by the Jewish American couple Daniel Mintz and Meredith Berkman. Notable structures found during these excavations include architectural remains and a variety of movable objects, some dated to the First Temple period, many to the Second Temple period, as well as the Byzantine and Early Muslim periods, the latter including major findings from the Umayyad and Fatimid periods. The findings included remains interpreted by archaeologist Eilat Mazar to be a 70- or 79-metre-long segment of city wall including a gatehouse leading to a royal structure, and a watchtower overlooking the Kidron Valley. Eilat Mazar believes these are the remains of the fortifications that, according to the biblical First Book of Kings, once encompassed the city. Eilat Mazar, who re-excavated the remains in 2010, believes them to date to the late 10th century BCE, associating them with King Solomon, an attribution that is controversial and not supported by past and contemporary archaeologists. Also present were several Hellenistic-period buildings, a large mikvah, the southern steps to the Herodian Temple compound, leading up to the Double and Triple Gates of the Temple compound, the Monastery of the Virgins, and several large residential and administrative structures (qasr-type "palaces"), probably Umayyad, to the south of the ophel. A discovered artefact of particular importance is the ophel inscription, a 3,000-year-old pottery shard that bears the earliest alphabetical inscription found in Jerusalem. Archaeological assessment Although consensus on the dating of the wall has not been reached by the archaeological community, Mazar maintains that, "It's the most significant construction we have from First Temple days in Israel," and "It means that at that time, the 10th century (BCE), in Jerusalem there was a regime capable of carrying out such construction." The 10th century is the period the Bible describes as the reign of King Solomon. She claims that broken pottery found in the "royal structure" enabled the team to date the building. One storage jar bears an inscription in Hebrew. Mazar told the Jerusalem Post that "The jars that were found are the largest ever found in Jerusalem," and "the inscription found on one of them shows that it belonged to a government official, apparently the person responsible for overseeing the provision of baked goods to the royal court." Aren Maeir, an archaeology professor at Bar Ilan University, said he has yet to see evidence that the fortifications are as old as Mazar claims. Whilst acknowledging that 10th century remains have been found in Jerusalem, he describes proof of a strong, centralized kingdom at that time as "tenuous". A long, high section of wall has been uncovered. The discoveries include an inner gatehouse, a "royal structure" and a corner tower from which watchmen could keep watch on the Kidron Valley. According to Mazar, the built structures are similar to the First Temple era fortifications of Megiddo, Beersheba and Ashdod.
Mazar told reporters that "A comparison of this latest finding with city walls and gates from the period of the First Temple, as well as pottery found at the site, enable us to postulate, with a great degree of assurance, that the wall that has been revealed is that which was built by King Solomon in Jerusalem in the latter part of the tenth century BCE." The wall has been excavated twice before, once in the 1860s and again in the 1980s. In 1867 Charles Warren conducted an underground survey in the area, describing the outline of a large tower but without attributing it to the era of Solomon. Israel Finkelstein and other archaeologists from Tel Aviv University have raised concerns, with reference to her 2006 dating of the "Solomonic city wall" in the area to the south of the Temple Mount known as the ophel, that "the biblical text dominates this field operation, not archaeology. Had it not been for Mazar's literal reading of the biblical text, she never would have dated the remains to the 10th century BCE with such confidence". Samaria ophel The Second Book of Kings (2 Kings 5) speaks of the ophel of Samaria, where Gehazi took the presents he received from Naaman of Aram. Traditionally translated as "hill", it may equally have meant "tower" and can quite likely be understood as a spot in the city wall or its citadel. King Mesha's ophel Here, too, the context indicates part of a fortification—either a fortified hill, or something like a tower or enceinte, and, judging by the root of the word, probably a bulging or rounded one. See also Acropolis – similar concept in ancient Greek architecture For the Jerusalem Ophel Acra (fortress) Desert castles: Umayyad qasr-type palaces including Qasr al-Minya and Al-Sinnabra on the Sea of Galilee Excavations at the Temple Mount Givati Parking Lot dig Jerusalem pilgrim road a.k.a. the stepped pilgrimage road Jerusalem Water Channel, actually the drainage under the stepped pilgrimage road Millo Ophel Treasure, a gold hoard from the early 7th century Further reading References External links Bible Hub: Ophel. Excellent overview, based on critical analysis of the texts; only partially outdated (article predates most excavations on the eastern hill). Jerusalem Archaeological Park. History of archaeological investigation 1838-2000. Does not include important new findings by Eilat Mazar. The Jerusalem Archaeological Park - homepage Jerusalem 101: Ophel. Location of the Ophel: plans, models, some photos. Some mix-up of slightly differing First and Second Temple-period locations. Ophel - Jerusalem 101 George Wesley Buchanan, In Search of King Solomon's Temple, Associates for Scriptural Knowledge, September 2013, Number 9/13. Supports the very controversial "southern location" theory placing both Jerusalem Temples above the Gihon Spring, rather than on the Temple Mount. Ariel Winderbaum, The Iron IIA Pottery Assemblages from the Ophel Excavations and their Contribution to the Understanding of the Settlement History of Jerusalem. A Ph.D. including findings by Eilat Mazar. Ancient history of Jerusalem Architectural history City of David (archaeological site)
Ophel
[ "Engineering" ]
2,224
[ "Architectural history", "Architecture" ]
17,408,593
https://en.wikipedia.org/wiki/Himbacine
Himbacine is an alkaloid isolated from the bark of Australian magnolias. Himbacine has been synthesized using a Diels-Alder reaction as a key step. Himbacine's activity as a muscarinic receptor antagonist, with specificity for the muscarinic acetylcholine receptor M2, made it a promising starting point in Alzheimer's disease research. The development of a muscarinic antagonist based on himbacine failed but an analog, vorapaxar, has been approved by the FDA as a thrombin receptor antagonist. References Lactones M2 receptor antagonists M4 receptor antagonists Heterocyclic compounds with 4 rings Piperidine alkaloids
Himbacine
[ "Chemistry" ]
149
[ "Pharmacology", "Medicinal chemistry stubs", "Piperidine alkaloids", "Alkaloids by chemical classification", "Pharmacology stubs" ]
17,410,467
https://en.wikipedia.org/wiki/Safety%20pharmacology
Safety pharmacology is a branch of pharmacology specialising in detecting and investigating potential undesirable pharmacodynamic effects of new chemical entities (NCEs) on physiological functions in relation to exposure in the therapeutic range and above. Primary organ systems (so-called core battery systems) are: Central Nervous System Cardiovascular System Respiratory System Secondary organ systems of interest are: Gastrointestinal System Renal System Safety pharmacology studies are required to be completed prior to human exposure (i.e., Phase I clinical trials), and regulatory guidance is provided in ICH S7A and other documents. Key aims of safety pharmacology The aims of nonclinical safety pharmacology evaluations are three-fold: To protect Phase I clinical trial volunteers from acute adverse effects of drugs To protect patients (including patients participating in Phase II and III clinical trials) To minimize risks of failure during drug development and post-marketing phases due to undesirable pharmacodynamic effects Key issues The following key issues have to be considered within safety pharmacology: The detection of adverse effects liability (i.e. hazard identification) Investigation of the mechanism of effect (risk assessment) Calculating a projected safety margin Implications for clinical safety monitoring Mitigation strategies Background The first appearance of the term ‘safety pharmacology’ in the published literature dates back to 1980. The term was certainly in common usage in the 1980s within the pharmaceutical industry to describe nonclinical pharmacological evaluation of unintended effects of candidate drugs for regulatory submissions. Back then, it was part of a wider ‘general pharmacology’ assessment, which addressed actions of a drug candidate beyond the therapeutically intended effects. The only detailed guidelines indicating the requirements from drug regulatory authorities for general pharmacology studies were from the Ministry of Health, Labour, and Welfare. Nowadays, the term ‘general pharmacology’ is no longer used, and the ICH S7A guidelines distinguish between primary pharmacodynamics (“studies on the mode of action and/or effects of a substance in relation to its desired therapeutic target”), secondary pharmacodynamics (“studies on the mode of action and/or effects of a substance not related to its desired therapeutic target”) and safety pharmacology (“studies that investigate the potential undesirable pharmacodynamic effects of a substance on physiological functions in relation to exposure in the therapeutic range and above.”). A major stimulus to the discipline of safety pharmacology was the release in 1996 of a draft ‘Points to Consider’ document on QT prolongation by the European Medicines Agency's Committee for Proprietary Medicinal Products (CPMP), issued in final form the following year. This initiative had been prompted by growing concern of sudden death caused by drug-induced torsade de pointes, a potentially lethal cardiac tachyarrhythmia. Later, in 2005, this concern was addressed by issue of the ICH S7B guidelines. Preclinical safety pharmacology Preclinical safety pharmacology integrates in silico, in vitro, and in vivo approaches. In vitro safety pharmacology studies are focused on early hazard identification and subsequent compound profiling in order to guide preclinical in vivo safety and toxicity studies. 
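One of the key issues listed above is calculating a projected safety margin. A common simplified convention is to express the margin as the ratio of the exposure associated with no observed adverse effect to the expected therapeutic exposure; the sketch below assumes exposure is summarized by a single plasma concentration (such as Cmax) and uses invented numbers, so it illustrates the arithmetic rather than any regulatory method:

```cpp
// Minimal illustration of a projected safety margin as an exposure ratio.
// Assumes exposure is summarized by a single plasma concentration (e.g., Cmax);
// the numbers below are hypothetical and for illustration only.
#include <iostream>

int main() {
    double no_effect_exposure = 12.0;   // exposure at the highest dose with no adverse effect (arbitrary units)
    double therapeutic_exposure = 0.5;  // expected exposure at the intended therapeutic dose (same units)

    // A simple safety margin: how many-fold the no-effect exposure exceeds
    // the expected therapeutic exposure.
    double margin = no_effect_exposure / therapeutic_exposure;

    std::cout << "Projected safety margin: " << margin << "-fold\n";
    return 0;
}
```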
Early compound profiling can flag receptor-, enzyme-, transporter-, and ion channel-related liabilities of NCEs (e.g., inhibition of the human ether-a-go-go related gene protein (hERG)). Classically, in vivo investigations comprise the use of young adult conscious animals. Study design Safety pharmacology studies have to be designed to define the dose-response relationship of the adverse effect observed. Justification should be provided for the selection of the particular animal model or test system. The time course (e.g., onset and duration of response) of the adverse effect is investigated through selected time points for the measurements, based on pharmacodynamic and pharmacokinetic considerations. Generally, the doses eliciting the adverse effect have to be compared to the doses eliciting the primary pharmacodynamic effect in the test species or the proposed therapeutic effect in humans. Regulatory guidance documents (current versions) The primary reference document for safety pharmacology is ICH S7A, followed by many key regulatory documents which either focus on or mention safety pharmacology: ICH S7A: Safety pharmacology studies for human pharmaceuticals. ICH S7B: Nonclinical evaluation of the potential for delayed ventricular repolarization (QT interval prolongation) by human pharmaceuticals. ICH S6(R1): Preclinical safety evaluation of biotechnology-derived pharmaceuticals. ICH S9: Nonclinical evaluation for anticancer pharmaceuticals. ICH M3(R2): Guidance on nonclinical safety studies for the conduct of human clinical trials and marketing authorisation for pharmaceuticals. ICH E14: Clinical evaluation of QT/QTc interval and proarrhythmic potential for non-antiarrhythmic drugs. EMEA/CHMP/SWP/94227/2004. Adopted by CHMP. Guideline on the Non-Clinical Investigation of the Dependence Potential of Medicinal Products. FDA U.S. Department of Health and Human Services Food and Drug Administration - Center for Drug Evaluation and Research (CDER). Guidance for Industry. Assessment of abuse potential of drugs. Final Guidance. FDA U.S. Department of Health and Human Services Food and Drug Administration - Center for Drug Evaluation and Research (CDER). Guidance for Industry. Exploratory IND studies. See also SPS: There is a global scientific society fostering best practice within the discipline of safety pharmacology. This Safety Pharmacology Society (SPS) promotes knowledge, development, application, and training in safety pharmacology. CiPA: Comprehensive in vitro Proarrhythmia Assay (2013): In the coming years, the FDA plans to update the current regulatory documents for preclinical and clinical safety evaluation of proarrhythmic risk in humans (i.e. ICH-S7B and ICH-E14). The Comprehensive in vitro Proarrhythmia Assay (CiPA) is a novel safety pharmacology paradigm intended to provide a more accurate assessment of cardiac safety testing for potential proarrhythmic events in humans. This initiative is driven by a steering team including partners from US FDA, HESI, CSRC, SPS, EMA, Health Canada, Japan NIHS, and PMDA. The CiPA includes in vitro assays coupled to in silico reconstructions of cellular cardiac electrophysiological activity, with verification of relevance through comparison of drug effects in human stem cell-derived cardiomyocytes. If these evaluation efforts succeed, CiPA will become a Safety Pharmacology screening tool for drug research and development purposes.
The CiPA Steering Committee and the ICH-S7B and ICH-E14 Working Groups will position the CiPA paradigm within the upcoming revisions of the aforementioned regulatory documents. References External links http://cipaproject.org/ emka TECHNOLOGIES Physiological data acquisition & analysis for preclinical research Pharmacology
Safety pharmacology
[ "Chemistry" ]
1,544
[ "Pharmacology", "Medicinal chemistry" ]
17,411,030
https://en.wikipedia.org/wiki/Internal%20drainage%20board
An internal drainage board (IDB) is a type of operating authority which is established in areas of special drainage need in England and Wales with permissive powers to undertake work to secure clean water drainage and water level management within drainage districts. The area of an IDB is not determined by county or metropolitan council boundaries, but by water catchment areas within a given region. IDBs are geographically concentrated in the Broads, Fens in East Anglia and Lincolnshire, Somerset Levels and Yorkshire. In comparison with public bodies in other countries, IDBs are most similar to the Waterschappen of the Netherlands, Consorzi di bonifica e irrigazione of Italy, wateringen of Flanders and Northern France, Watershed Districts of Minnesota, United States and Marsh Bodies of Nova Scotia, Canada. Responsibilities Much of their work involves the maintenance of rivers, drainage channels (rhynes), ordinary watercourses, pumping stations and other critical infrastructure, facilitating drainage of new developments, the ecological conservation and enhancement of watercourses, monitoring and advising on planning applications and making sure that any development is carried out in line with legislation (NPPF). IDBs are not responsible for watercourses designated as main rivers within their drainage districts; the supervision of these watercourses is undertaken by the Environment Agency. The precursors to internal drainage boards date back to 1252; however, the majority of today's IDBs were established by the national government following the passing of the Land Drainage Act 1930 and today predominantly operate under the Land Drainage Act 1991 under which, an IDB is required to exercise a general supervision over all matters relating to water level management of land within its district. Some IDBs may also have other duties, powers and responsibilities under specific legislation for the district (for instance the Middle Level Commissioners are also a navigation authority). IDBs are responsible to Defra from whom all legislation/regulations affecting them are issued. The work of an IDB is closely linked with that of the Environment Agency which has a range of functions providing a supervisory role over them. Regulation Defra brought IDBs under the jurisdiction of the Local Government Ombudsman (LGO) from 1 April 2004, and introduced a model complaints procedure for IDBs to operate. This move was aimed to increase the accountability of IDBs to the general public who have an interest in the way that IDBs are run and operate by providing an independent means of review. At this time Defra also revised and re-issued model statutory rules and procedures under which IDBs operate. Current internal drainage boards of England There are 112 internal drainage boards in England covering 1.2 million hectares (9.7% of England's total land area) and areas around The Wash, the Lincolnshire Coast, the lower reaches of the Trent and the Yorkshire Ouse, the Somerset Levels and the Fens have concentrations of adjacent IDBs covering broad areas of lowland. In other parts of the country IDBs stretch in narrow ‘fingers’ up river valleys, separated by less low-lying areas, especially in Norfolk and Suffolk, Sussex, Kent, West Yorkshire, Herefordshire/Shropshire and the northern Vale of York. The largest IDB (Lindsey Marsh DB) covers 52,757 hectares and the smallest (Cawdle Fen IDB) 181 hectares. 
Twenty-four of the county councils in England include one or more IDBs in their area, as do six metropolitan districts and 109 unitary authorities or district councils. The Association of Drainage Authorities holds a definitive record of all IDBs within England and Wales and their boundaries. The Environment Agency acts as the internal drainage board for one internal drainage district in East Sussex. In Wales internal drainage districts are managed by Natural Resources Wales. Water level management and flood risk IDBs have an important role in reducing flood risk through management of water levels and drainage in their districts. The water level management activities of internal drainage boards cover 1.2 million hectares of England, which represents 9.7% of the total land area. These activities reduce the flood risk to ~600,000 people who live or work in IDB districts and to ~879,000 properties located in them, while many thousands of people outside these boundaries also derive reduced flood risk from IDB water level management activities. Several forms of critical infrastructure fall within IDB districts, including 56 major power stations (28%), 68 other major industrial premises and 208 km of motorway. In fact, a recent publication by the Association of Drainage Authorities identified that 53% of the installed capacity (potential maximum power output) of major power stations in England and Wales is located within an IDB district. Although of much reduced significance since the 1980s, many IDB districts in Yorkshire and Nottinghamshire lie in areas of coal reserves, and drainage has been significantly affected by subsidence from mining. IDBs have played an important role in monitoring and mitigating the effects of this activity and have worked in close collaboration with the coal companies and the Coal Authority. Maintenance of watercourses The fundamental role of an internal drainage board is to manage the water level within its district. The majority of lowland rivers and watercourses have been heavily modified by man or are totally artificial channels. All are engineered structures designed and constructed for the primary function of conveying surplus run-off to their outfall efficiently and safely, managing water levels to sustain a multitude of land functions. As with any engineered structure, they must be maintained in order to function at or near their design capacity. Annual or bi-annual vegetation clearance and periodic de-silting (dredging) of these rivers and watercourses is therefore an essential component of the whole life cycle of these watercourses. Accommodating sustainability within the design and maintenance process for lowland rivers and watercourses has to address three essential elements: year-round conveyance of flows, storage of flood peaks, and retention and protection of flora and fauna dependent on or resident in the water corridor. Many IDBs are redesigning watercourses to create a two-stage or bermed channel. These have been extensively created in the Lindsey Marsh Drainage Board area of East Lincolnshire to accommodate the three elements of lowland watercourse sustainability. Berms are created at or near to the normal retained water level in the system. The berm is sometimes replanted with vegetation removed from the watercourse prior to improvement works but is often left to re-colonise naturally. In all cases this additional part of the channel profile allows for enhanced environmental value to develop.
The area created above the berm also provides additional flood storage capacity whilst the low level channel can be maintained in such a manner that design conveyance conditions are achieved and flood risk controlled. By widening the channel and the berm, the berm can be safely used as access for machinery carrying out channel maintenance. While in-channel habitat that develops can be retained for a much longer period during the summer months, flood storage is provided for rare or extreme events and a buffer zone between the channel and any adjacent land use is created. The timing of vegetation clearance works is essential to striking a sustainable balance in lowland watercourses. The Conveyance Estimating System (CES) is a modelling tool developed through a Defra / Environment Agency research collaboration. IDBs use CES to estimate the seasonal variation of conveyance owing to vegetation growth and other physical parameters, which they use to assess the impact of varying the timing of vegetation clearance operations. This is critical during the spring and early summer, the prime nesting season for aquatic birds, the breeding season for many protected mammal species such as water voles and the season when many rare species of plant life flower and seed. Many IDBs have developed vegetation control strategies in co-ordination with Natural England. Pumping stations 111 IDB districts require pumping to some degree for water level management and 79 are purely gravity boards (where no pumping is required). 53 IDBs have more than 95% of their area dependent on pumping. This means that in England almost 51% of the land in IDB districts relies on pumping. A new pumping station was commissioned in April 2011 by the Middle Level Commissioners at Wiggenhall St Germans, Norfolk. The station replaced its 73-year-old predecessor and is vital to the flood risk management of the surrounding Fenland and 20,000 residential properties. When running at full capacity, it is capable of draining five Olympic-size swimming pools every 2 minutes. Emergency actions During times of heavy rainfall and high river levels IDBs: liaise with the Environment Agency (in England) or Natural Resources Wales (in Wales) over developing flood conditions check sensitive locations and remove restrictions take actions, where possible, to reduce risk of flooding to property advise local authorities on the developing situation in order that Local Authorities can execute their emergency plan effectively for the protection of people, property and critical infrastructure assist where possible in any post-flood remedial and clearance operations assess flooding incidents to determine if new works can be undertaken to reduce the effect of future flooding incidents An IDB's priorities during flooding are: ensuring the board's systems are working efficiently protection of people and residential properties protection of commercial properties protection of agricultural land and ecologically sensitive sites Some IDBs are able to provide a 24-hour contact number and most extend office hours during severe emergencies. Planning guidance Associated with the powers to regulate activities that may impede drainage, IDBs provide comments to local planning authorities on developments in their district and, when asked, make recommendations on measures required to manage flood risk and to provide adequate drainage.
Environmental responsibilities Internal drainage boards in England have responsibilities associated with 398 Sites of Special Scientific Interest plus other designated environmental areas, in coordination with Natural England. Slow flowing drainage channels such as those managed by IDBs can form an important habitat for a diverse community of aquatic and emergent plants, invertebrates and higher organisms. IDB channels form one of the last refuges in the UK of the BAP registered spined loach (Cobitis Taenia), a small nocturnal bottom-feeding fish that have been recorded only in the lower parts of the Trent and Great Ouse catchments, and in some small rivers and drains in Lincolnshire and East Anglia. All IDBs are currently engaging with their own individual biodiversity action plans which will further enhance their environmental role. Many IDBs are involved with assisting major wetland biodiversity projects with organisations such as the RSPB, National Trust and the Wildfowl and Wetlands Trust. Many smaller conservation projects are co-ordinated with Wildlife Trusts and local authorities. Current projects include: The Great Fen Project (Middle Level Commissioners), Newport Wetlands Reserve (Caldicot and Wentlooge Levels IDB) and WWT Welney (MLC). Middle Level Commissioners launched a three-year Otter Recovery Project in December 2007. It will build 33 otter holts and 15 other habitat areas. Drainage rates All properties within a drainage district are deemed to derive benefit from the activities of an IDB. Every property is therefore subject to a drainage rate paid annually to the IDB. For the purposes of rating, properties are divided into: Agricultural land and buildings Other land (such as domestic houses, factories, shops etc.) Occupiers of all "other land" pay Council Tax or non-domestic rates to the local authority who then are charged by the board. This charge is called the "Special Levy". The board, therefore, only demands drainage rates direct on agricultural land and buildings. The basis of this is that each property has been allotted an "annual value" which were last revised in the early 1990s. The annual value is an amount equal to the yearly rent, or the rent that might be reasonably expected if let on a tenancy from year to year commencing 1 April 1988. The annual value remains the same from year to year. Each year the board lays a rate "in the £" to meet its estimated expenditure. This is multiplied by the annual value to produce the amount of drainage rate due on each property. Precepts Under Section 141 of the Water Resources Act 1991 the Environment Agency may issue a precept to an IDB to recover a contribution that the agency considers fair towards their expenses. Under Section 57 of the Land Drainage Act 1991, in cases where a drainage district receives water from land at a higher level, the IDB may make an application to the Environment Agency for a contribution towards the expenses of dealing with that water. District drainage commissioners District drainage commissioners (DDCs) are internal drainage boards set up under local legislation rather than the Land Drainage Act 1991 and its predecessor legislation. The majority of the provisions of the Land Drainage Acts, do however, apply to such commissioners and they are statutory public bodies. The most important in terms of size and revenue is the Middle Level Commissioners. 
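The drainage-rate arithmetic described above (an annual value for each rated property multiplied by the rate "in the £" laid by the board for the year) can be sketched in a few lines. The rate and property values below are hypothetical and purely illustrative:

```cpp
// Illustration of the drainage-rate arithmetic described above:
// amount due = annual value of the property × rate "in the £" laid by the board.
// The rate and annual values below are hypothetical.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    double rate_in_the_pound = 0.045;  // hypothetical rate laid by the board for the year

    // Hypothetical agricultural properties with their allotted annual values (£).
    std::vector<std::pair<std::string, double>> properties = {
        {"Farm A", 12000.0},
        {"Farm B", 4500.0},
    };

    for (const auto& [name, annual_value] : properties) {
        double due = annual_value * rate_in_the_pound;  // drainage rate due on the property
        std::cout << name << ": £" << due << " due for the year\n";
    }
    return 0;
}
```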
Association of Drainage Authorities The majority of internal drainage boards are members of the Association of Drainage Authorities (ADA) their representative organisation. Through ADA the collective views of drainage authorities and other members involved in water level management are represented to government, regulators, other policy makers and stakeholders. At a European level ADA represents IDBs through EUWMA. In 2013 it was announced that the Caldicot and Wentlooge Levels Internal Drainage Board was to be abolished in April 2015, after officials at the Wales Audit Office detailed a series of irregularities, including overpaying its chief executive, misuse of public funds, financial irregularities, and unlawful actions. References External links Association of Drainage Authorities Defra Flood and Coastal Risk Management European Union of Water Management Associations Internal drainage board websites Bedford Group of Drainage Boards Black Sluice Internal Drainage Board Caldicot and Wentlooge Levels Internal Drainage Board Downham Market Group of Internal Drainage Boards Ely Group of Internal Drainage Boards Lower Aire & Don Consortia of Drainage Boards Lower Ouse Internal Drainage Board Lower Severn Internal Drainage Board Lindsey Marsh Drainage Board Market Weighton Internal Drainage Board Medway Internal Drainage Boards Middle Level Commissioners Newark Area Internal Drainage Board North East Lindsey Internal Drainage Board North Level District Internal Drainage Board River Stour (Kent) Internal Drainage Board Romney Marshes Area Internal Drainage Board Shire Group of Internal Drainage Boards Somerset Drainage Boards Consortium Vale of Pickering Internal Drainage Boards Upper Witham Internal Drainage Board Water Management Alliance Welland and Deepings Internal Drainage Board West Mendip Internal Drainage Board Whittlesey Consortium of Internal Drainage Boards Witham First District Internal Drainage Board Witham Third District Internal Drainage Board Witham 4th District Internal Drainage Board York Consortium of Drainage Boards Department for Environment, Food and Rural Affairs Public bodies and task forces of the United Kingdom government Water management authorities in the United Kingdom Hydrology Hydraulic engineering Land drainage in the United Kingdom Internal Drainage Boards
Internal drainage board
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,958
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
338,331
https://en.wikipedia.org/wiki/Value%20%28computer%20science%29
In computer science and software programming, a value is the representation of some entity that can be manipulated by a program. The members of a type are the values of that type. The "value of a variable" is given by the corresponding mapping in the environment. In languages with assignable variables, it becomes necessary to distinguish between the r-value (or contents) and the l-value (or location) of a variable. In declarative (high-level) languages, values have to be referentially transparent. This means that the resulting value is independent of the location of the expression needed to compute the value. Only the contents of the location (the bits, whether they are 1 or 0) and their interpretation are significant. Value category Despite its name, in the C++ language standards this terminology is used to categorize expressions, not values. Assignment: l-values and r-values Some languages use the idea of l-values and r-values, deriving from the typical mode of evaluation on the left and right-hand side of an assignment statement. An l-value refers to an object that persists beyond a single expression. An r-value is a temporary value that does not persist beyond the expression that uses it. The notion of l-values and r-values was introduced by Combined Programming Language (CPL). The notions in an expression of r-value, l-value, and r-value/l-value are analogous to the parameter modes of input parameter (has a value), output parameter (can be assigned), and input/output parameter (has a value and can be assigned), though the technical details differ between contexts and languages. R-values and addresses In many languages, notably the C family, l-values have storage addresses that are programmatically accessible to the running program (e.g., via some address-of operator like "&" in C/C++), meaning that they are variables or de-referenced references to a certain memory location. R-values can be l-values (see below) or non-l-values—a term only used to distinguish from l-values. Consider a C expression that, when executed, generates an integer value of 13; because the program has not explicitly designated where in the computer this 13 is stored, the expression is a non-l-value. On the other hand, if a C program declares a variable x and assigns the value of 13 to x, then the expression x has a value of 13 and is an l-value. In C, the term l-value originally meant something that could be assigned to (hence the name, indicating it is on the left side of the assignment operator), but since the reserved word const (constant) was added to the language, the term is now 'modifiable l-value'. In C++11 a special semantic glyph, &&, exists (not to be confused with the && operator used for logical operations), to denote the use/access of the expression's address for the compiler only; i.e., the address cannot be retrieved using the address-of operator during the run-time of the program (see the use of move semantics). The addition of move semantics complicated the value classification taxonomy by adding to it the concept of an xvalue (expiring value), which refers to an object near the end of its lifetime whose resources can be reused (typically by moving them). This also led to the creation of the categories glvalue (generalized lvalue), comprising lvalues and xvalues, and prvalue (pure rvalue), comprising rvalues that are not xvalues. This type of reference can be applied to all r-values including non-l-values as well as l-values.
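The C-oriented discussion above can be made concrete with a short example. The following sketch is illustrative only (the variable names and the expression 6 + 7 are chosen for this example, not taken from the article's sources); the last few lines need a C++11-capable compiler:

```cpp
// l-values vs. r-values in C/C++ (illustrative sketch).
#include <iostream>

int main() {
    int x = 13;        // "x" designates a location: it is a (modifiable) l-value.
    const int c = 13;  // "c" is a non-modifiable l-value: addressable, but not assignable.

    int* p = &x;       // taking the address of an l-value is allowed
    // int* q = &(6 + 7);  // error: "6 + 7" is an r-value (a prvalue); it has no program-accessible address

    x = x + 1;         // l-value on the left (a location), r-value on the right (the computed 14)

    int&& r = 6 + 7;   // C++11: an rvalue reference (declared with &&) binds to an r-value,
                       // extending the lifetime of the temporary so its value can be reused
    std::cout << x << " " << c << " " << *p << " " << r << "\n";
    return 0;
}
```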
Some processors provide one or more instructions which take an immediate value, sometimes referred to as "immediate" for short. An immediate value is stored as part of the instruction which employs it, usually to load into, add to, or subtract from, a register. The other parts of the instruction are the opcode and the destination. The latter may be implicit. (A non-immediate value may reside in a register, or be stored elsewhere in memory, requiring the instruction to contain a direct or indirect address [e.g., index register address] to the value.) The l-value expression designates (refers to) an object. A non-modifiable l-value is addressable, but not assignable. A modifiable l-value allows the designated object to be changed as well as examined. An r-value is any expression; a non-l-value is any expression that is not an l-value. One example is an "immediate value" (see above), which is consequently not addressable. In assembly language A value can be virtually any kind of data of a given data type, for instance a string, a digit, or a single letter. Processors often support more than one size of immediate data, e.g. 8 or 16 bit, employing a unique opcode and mnemonic for each instruction variant. If a programmer supplies a data value that will not fit, the assembler issues an "Out of range" error message. Most assemblers allow an immediate value to be expressed as ASCII, decimal, hexadecimal, octal, or binary data. Thus, the same ASCII character can be expressed equivalently in any of these forms. The byte order of strings may differ between processors, depending on the assembler and computer architecture. Notes References External links Value Object Transfer Object Pattern Computer data Programming language concepts Type theory
Value (computer science)
[ "Mathematics", "Technology" ]
1,171
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Computer data", "Type theory", "Data" ]
338,403
https://en.wikipedia.org/wiki/Long%20line%20%28topology%29
In topology, the long line (or Alexandroff line) is a topological space somewhat similar to the real line, but in a certain sense "longer". It behaves locally just like the real line, but has different large-scale properties (e.g., it is neither Lindelöf nor separable). Therefore, it serves as an important counterexample in topology. Intuitively, the usual real-number line consists of a countable number of line segments laid end-to-end, whereas the long line is constructed from an uncountable number of such segments. Definition The closed long ray L is defined as the Cartesian product of the first uncountable ordinal ω₁ with the half-open interval [0, 1), equipped with the order topology that arises from the lexicographical order on ω₁ × [0, 1). The open long ray is obtained from the closed long ray by removing the smallest element (0, 0). The long line is obtained by "gluing" together two long rays, one in the positive direction and the other in the negative direction. More rigorously, it can be defined as the order topology on the disjoint union of the reversed open long ray (“reversed” means the order is reversed) (this is the negative half) and the (not reversed) closed long ray (the positive half), totally ordered by letting the points of the latter be greater than the points of the former. Alternatively, take two copies of the open long ray and identify the open interval {0} × (0, 1) of the one with the same interval of the other but reversing the interval, that is, identify the point (0, t) (where t is a real number such that 0 < t < 1) of the one with the point (0, 1 − t) of the other, and define the long line to be the topological space obtained by gluing the two open long rays along the open interval identified between the two. (The former construction is better in the sense that it defines the order on the long line and shows that the topology is the order topology; the latter is better in the sense that it uses gluing along an open set, which is clearer from the topological point of view.) Intuitively, the closed long ray is like a real (closed) half-line, except that it is much longer in one direction: we say that it is long at one end and closed at the other. The open long ray is like the real line (or equivalently an open half-line) except that it is much longer in one direction: we say that it is long at one end and short (open) at the other. The long line is longer than the real line in both directions: we say that it is long in both directions. However, many authors speak of the “long line” where we have spoken of the (closed or open) long ray, and there is much confusion between the various long spaces. In many uses or counterexamples, however, the distinction is unessential, because the important part is the “long” end of the line, and it doesn't matter what happens at the other end (whether long, short, or closed). A related space, the (closed) extended long ray, is obtained as the one-point compactification of L by adjoining an additional element to the right end of L. One can similarly define the extended long line by adding two elements to the long line, one at each end. Properties The closed long ray L consists of an uncountable number of copies of [0, 1) 'pasted together' end-to-end.
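In symbols, this pasting is just the defining product described above; the following is a restatement in standard notation, not additional material from the article's sources:

```latex
% Restatement of the definitions above in standard notation.
% L denotes the closed long ray; the lexicographic order on it is spelled out explicitly.
\[
  L = \omega_1 \times [0,1),
  \qquad
  (\alpha, s) < (\beta, t) \iff \alpha < \beta \ \text{or} \ (\alpha = \beta \ \text{and}\ s < t),
\]
\[
  \text{open long ray} = L \setminus \{(0,0)\},
  \qquad
  \text{long line: two open long rays glued via } (0, t) \sim (0,\, 1 - t), \quad 0 < t < 1 .
\]
```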
Compare this with the fact that for any countable ordinal α, pasting together α copies of [0, 1) gives a space which is still homeomorphic (and order-isomorphic) to [0, 1). (And if we tried to glue together more than ω₁ copies of [0, 1), the resulting space would no longer be locally homeomorphic to the real line.) Every increasing sequence in L converges to a limit in L; this is a consequence of the facts that (1) the elements of ω₁ are the countable ordinals, (2) the supremum of every countable family of countable ordinals is a countable ordinal, and (3) every increasing and bounded sequence of real numbers converges. Consequently, there can be no strictly increasing function from L to the real line. In fact, every continuous function from L to the real line is eventually constant. As order topologies, the (possibly extended) long rays and lines are normal Hausdorff spaces. All of them have the same cardinality as the real line, yet they are 'much longer'. All of them are locally compact. None of them is metrizable; this can be seen as the long ray is sequentially compact but not compact, or even Lindelöf. The (non-extended) long line or ray is not paracompact. It is path-connected, locally path-connected and simply connected but not contractible. It is a one-dimensional topological manifold, with boundary in the case of the closed ray. It is first-countable but not second-countable and not separable, so authors who require the latter properties in their manifolds do not call the long line a manifold. It makes sense to consider all the long spaces at once because every connected (non-empty) one-dimensional (not necessarily separable) topological manifold, possibly with boundary, is homeomorphic to either the circle, the closed interval, the open interval (real line), the half-open interval, the closed long ray, the open long ray, or the long line. The long line or ray can be equipped with the structure of a (non-separable) differentiable manifold (with boundary in the case of the closed ray). However, contrary to the topological structure, which is unique (topologically, there is only one way to make the real line "longer" at either end), the differentiable structure is not unique: in fact, there are uncountably many pairwise non-diffeomorphic smooth structures on it. This is in sharp contrast to the real line, where there are also different smooth structures, but all of them are diffeomorphic to the standard one. The long line or ray can even be equipped with the structure of a (real) analytic manifold (with boundary in the case of the closed ray). However, this is much more difficult than for the differentiable case (it depends on the classification of (separable) one-dimensional analytic manifolds, which is more difficult than for differentiable manifolds). Again, any given structure can be extended in infinitely many ways to different analytic structures (which are pairwise non-diffeomorphic as analytic manifolds). The long line or ray cannot be equipped with a Riemannian metric that induces its topology. The reason is that Riemannian manifolds, even without the assumption of paracompactness, can be shown to be metrizable. The extended long ray is compact. It is the one-point compactification of the closed long ray, but it is also its Stone–Čech compactification, because any continuous function from the (closed or open) long ray to the real line is eventually constant. The extended long ray is also connected, but not path-connected, because the long line is 'too long' to be covered by a path, which is a continuous image of an interval. The extended long ray is not a manifold and is not first-countable.
p-adic analog There exists a p-adic analog of the long line, which is due to George Bergman. This space is constructed as the increasing union of an uncountable directed set of copies of the ring of p-adic integers, indexed by a countable ordinal Define a map from to whenever as follows: If is a successor then the map from to is just multiplication by For other the map from to is the composition of the map from to and the map from to If is a limit ordinal then the direct limit of the sets for is a countable union of p-adic balls, so can be embedded in as with a point removed is also a countable union of p-adic balls. This defines compatible embeddings of into for all This space is not compact, but the union of any countable set of compact subspaces has compact closure. Higher dimensions Some examples of non-paracompact manifolds in higher dimensions include the Prüfer manifold, products of any non-paracompact manifold with any non-empty manifold, the ball of long radius, and so on. The bagpipe theorem shows that there are isomorphism classes of non-paracompact surfaces, even when a generalization of paracompactness, ω-boundedness, is assumed. There are no complex analogues of the long line as every Riemann surface is paracompact, but Calabi and Rosenlicht gave an example of a non-paracompact complex manifold of complex dimension 2. See also Lexicographic order topology on the unit square List of topologies References Topological spaces
Long line (topology)
[ "Mathematics" ]
1,836
[ "Topological spaces", "Topology", "Mathematical structures", "Space (mathematics)" ]
338,405
https://en.wikipedia.org/wiki/Long%20line%20%28telecommunications%29
In telecommunications, a long line is a transmission line in a long-distance communications network such as carrier systems, microwave radio relay links, geosynchronous satellite links, underground cables, aerial cables and open wire, and Submarine communications cables. In the United States, some of this technology was spun off into the corporate entity known as AT&T Long Distance with the breakup of AT&T in 1984. Previously, the AT&T Long Lines division of the Bell System provided maintenance and installation of long line facilities for the Bell System's long-distance service. See also Long-haul communications Long-distance calling Telephony Communication circuits
Long line (telecommunications)
[ "Engineering" ]
131
[ "Telecommunications engineering", "Communication circuits" ]
338,526
https://en.wikipedia.org/wiki/Humoral%20immunity
Humoral immunity is the aspect of immunity that is mediated by macromolecules – including secreted antibodies, complement proteins, and certain antimicrobial peptides – located in extracellular fluids. Humoral immunity is named so because it involves substances found in the humors, or body fluids. It contrasts with cell-mediated immunity. Humoral immunity is also referred to as antibody-mediated immunity. The study of the molecular and cellular components that form the immune system, including their function and interaction, is the central science of immunology. The immune system is divided into a more primitive innate immune system and an acquired or adaptive immune system of vertebrates, each of which contain both humoral and cellular immune elements. Humoral immunity refers to antibody production and the coinciding processes that accompany it, including: Th2 activation and cytokine production, germinal center formation and isotype switching, and affinity maturation and memory cell generation. It also refers to the effector functions of antibodies, which include pathogen and toxin neutralization, classical complement activation, and opsonin promotion of phagocytosis and pathogen elimination. History The concept of humoral immunity developed based on the analysis of antibacterial activity of the serum components. Hans Buchner is credited with the development of the humoral theory. In 1890, Buchner described alexins as "protective substances" that exist in the blood serum and other bodily fluids and are capable of killing microorganisms. Alexins, later redefined as "complements" by Paul Ehrlich, were shown to be the soluble components of the innate response that leads to a combination of cellular and humoral immunity. This discovery helped to bridge the features of innate and acquired immunity. Following the 1888 discovery of the bacteria that cause diphtheria and tetanus, Emil von Behring and Kitasato Shibasaburō showed that disease need not be caused by microorganisms themselves. They discovered that cell-free filtrates were sufficient to cause disease. In 1890, filtrates of diphtheria, later named diphtheria toxins, were used to vaccinate animals in an attempt to demonstrate that immunized serum contained an antitoxin that could neutralize the activity of the toxin and could transfer immunity to non-immune animals. In 1897, Paul Ehrlich showed that antibodies form against the plant toxins ricin and abrin, and proposed that these antibodies are responsible for immunity. Ehrlich, with his colleague von Behring, went on to develop the diphtheria antitoxin, which became the first major success of modern immunotherapy. The discovery of specified compatible antibodies became a major tool in the standardization of immunity and the identification of lingering infections. Antibodies Antibodies or Immunoglobulins are glycoproteins found within blood and lymph. Structurally, antibodies are large Y-shaped globular proteins. In mammals, there are five types of antibodies: immunoglobulin A, immunoglobulin D, immunoglobulin E, immunoglobulin G, and immunoglobulin M. Each immunoglobulin class differs in its biological properties and has evolved to deal with different antigens. Antibodies are synthesized and secreted by plasma cells that are derived from the B cells of the immune system. An antibody is used by the acquired immune system to identify and neutralize foreign objects like bacteria and viruses. Each antibody recognizes a specific antigen unique to its target. 
By binding their specific antigens, antibodies can cause agglutination and precipitation of antibody-antigen products, prime for phagocytosis by macrophages and other cells, block viral receptors, and stimulate other immune responses, such as the complement pathway. An incompatible blood transfusion causes a transfusion reaction, which is mediated by the humoral immune response. This type of reaction, called an acute hemolytic reaction, results in the rapid destruction (hemolysis) of the donor red blood cells by host antibodies. The cause is usually a clerical error, such as the wrong unit of blood being given to the wrong patient. The symptoms are fever and chills, sometimes with back pain and pink or red urine (hemoglobinuria). The major complication is that hemoglobin released by the destruction of red blood cells can cause acute kidney failure. Antibody production In humoral immune response, the naive B cells begin the maturation process in the bone marrow, gaining B-cell receptors (BCRs) along the cell surface. These BCRs are membrane-bound protein complexes that have a high binding affinity for specific antigens; this specificity is derived from the amino acid sequence of the heavy and light polypeptide chains that constitute the variable region of the BCR. Once a BCR interacts with an antigen, it creates a binding signal which directs the B cell to produce a unique antibody that only binds with that antigen. The mature B cells then migrate from the bone marrow to the lymph nodes or other lymphatic organs, where they begin to encounter pathogens. B cell activation When a B cell encounters an antigen, a signal is activated, the antigen binds to the receptor and is taken inside the B cell by endocytosis. The antigen is processed and presented on the B cell's surface again by MHC-II proteins. The MHC-II proteins are recognized by helper T cells, stimulating the production of proteins, allowing for B cells to multiply and the descendants to differentiate into antibody-secreting cells circulating in the blood. B cells can be activated through certain microbial agents without the help of T-cells and have the ability to work directly with antigens to provide responses to pathogens present. B cell proliferation The B cell waits for a helper T cell (TH) to bind to the complex. This binding will activate the TH cell, which then releases cytokines that induce B cells to divide rapidly, making thousands of identical clones of the B cell. These daughter cells either become plasma cells or memory cells. The memory B cells remain inactive here; later, when these memory B cells encounter the same antigen due to reinfection, they divide and form plasma cells. On the other hand, the plasma cells produce a large number of antibodies which are released freely into the circulatory system. Antibody-antigen reaction These antibodies will encounter antigens and bind with them. This will either interfere with the chemical interaction between host and foreign cells, or they may form bridges between their antigenic sites hindering their proper functioning. Their presence might also attract macrophages or killer cells to attack and phagocytose them. Complement system The complement system is a biochemical cascade of the innate immune system that helps clear pathogens from an organism. It is derived from many small blood plasma proteins that work together to disrupt the target cell's plasma membrane leading to cytolysis of the cell. 
The complement system consists of more than 35 soluble and cell-bound proteins, 12 of which are directly involved in the complement pathways. The complement system is involved in the activities of both innate immunity and acquired immunity. Activation of this system leads to cytolysis, chemotaxis, opsonization, immune clearance, and inflammation, as well as the marking of pathogens for phagocytosis. The proteins account for 5% of the serum globulin fraction. Most of these proteins circulate as zymogens, which are inactive until proteolytic cleavage. Three biochemical pathways activate the complement system: the classical complement pathway, the alternate complement pathway, and the mannose-binding lectin pathway. These pathways differ only in how they activate C3 convertase, which is the initial step of complement activation; the subsequent processes are the same. The classical pathway is initiated through exposure to free-floating antigen-bound antibodies. This leads to enzymatic cleavage of smaller complement subunits which synthesize to form the C3 convertase. This differs from the mannose-binding lectin pathway, which is initiated by bacterial carbohydrate motifs, such as mannose, found on the surface of bacteria. After the binding process, the same subunit cleavage and synthesis occurs as in the classical pathway. The alternate complement pathway completely diverges from the previous pathways, as this pathway spontaneously initiates in the presence of hydrolyzed C3, which then recruits other subunits which can be cleaved to form C3 convertase. In all three pathways, once C3 convertase is synthesized, complement proteins are cleaved into subunits which either form a structure called the membrane attack complex (MAC) on the bacterial cell wall to destroy the bacteria or act as cytokines and chemokines, amplifying the immune response. See also Cell-mediated immunity (vs. humoral immunity) Immune system Polyclonal response Serology References Further reading Immunology
Humoral immunity
[ "Biology" ]
1,852
[ "Immunology" ]
338,746
https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym%20theorem
In mathematics, the Radon–Nikodym theorem is a result in measure theory that expresses the relationship between two measures defined on the same measurable space. A measure is a set function that assigns a consistent magnitude to the measurable subsets of a measurable space. Examples of a measure include area and volume, where the subsets are sets of points; or the probability of an event, which is a subset of possible outcomes within a wider probability space. One way to derive a new measure from one already given is to assign a density to each point of the space, then integrate over the measurable subset of interest. This can be expressed as ν(A) = ∫_A f dμ, where ν is the new measure being defined for any measurable subset A and the function f is the density at a given point. The integral is with respect to an existing measure μ, which may often be the canonical Lebesgue measure on the real line ℝ or the n-dimensional Euclidean space ℝⁿ (corresponding to our standard notions of length, area and volume). For example, if f represented mass density and μ was the Lebesgue measure in three-dimensional space ℝ³, then ν(A) would equal the total mass in a spatial region A. The Radon–Nikodym theorem essentially states that, under certain conditions, any measure ν can be expressed in this way with respect to another measure μ on the same space. The function f is then called the Radon–Nikodym derivative and is denoted by dν/dμ. An important application is in probability theory, leading to the probability density function of a random variable. The theorem is named after Johann Radon, who proved the theorem for the special case where the underlying space is ℝⁿ in 1913, and for Otto Nikodym who proved the general case in 1930. In 1936 Hans Freudenthal generalized the Radon–Nikodym theorem by proving the Freudenthal spectral theorem, a result in Riesz space theory; this contains the Radon–Nikodym theorem as a special case. A Banach space Y is said to have the Radon–Nikodym property if the generalization of the Radon–Nikodym theorem also holds, mutatis mutandis, for functions with values in Y. All Hilbert spaces have the Radon–Nikodym property. Formal description Radon–Nikodym theorem The Radon–Nikodym theorem involves a measurable space (X, Σ) on which two σ-finite measures are defined, μ and ν. It states that, if ν ≪ μ (that is, if ν is absolutely continuous with respect to μ), then there exists a Σ-measurable function f : X → [0, ∞) such that for any measurable set A ⊆ X, ν(A) = ∫_A f dμ. Radon–Nikodym derivative The function f satisfying the above equality is uniquely defined up to a μ-null set, that is, if g is another function which satisfies the same property, then f = g μ-almost everywhere. The function f is commonly written dν/dμ and is called the Radon–Nikodym derivative. The choice of notation and the name of the function reflects the fact that the function is analogous to a derivative in calculus in the sense that it describes the rate of change of density of one measure with respect to another (the way the Jacobian determinant is used in multivariable integration). Extension to signed or complex measures A similar theorem can be proven for signed and complex measures: namely, that if μ is a nonnegative σ-finite measure, and ν is a finite-valued signed or complex measure such that ν ≪ μ, that is, ν is absolutely continuous with respect to μ, then there is a μ-integrable real- or complex-valued function g on X such that for every measurable set A, ν(A) = ∫_A g dμ. Examples In the following examples, the set X is the real interval [0,1], and Σ is the Borel sigma-algebra on X. μ is the length measure on X. ν assigns to each subset A of X twice the length of A. Then, dν/dμ = 2. μ is the length measure on X. ν
assigns to each subset of , the number of points from the set {0.1, …, 0.9} that are contained in . Then, is not absolutely-continuous with respect to since it assigns non-zero measure to zero-length points. Indeed, there is no derivative : there is no finite function that, when integrated e.g. from to , gives for all . , where is the length measure on and is the Dirac measure on 0 (it assigns a measure of 1 to any set containing 0 and a measure of 0 to any other set). Then, is absolutely continuous with respect to , and – the derivative is 0 at and 1 at . Properties Let ν, μ, and λ be σ-finite measures on the same measurable space. If ν ≪ λ and μ ≪ λ (ν and μ are both absolutely continuous with respect to λ), then If ν ≪ μ ≪ λ, then In particular, if μ ≪ ν and ν ≪ μ, then If μ ≪ λ and is a μ-integrable function, then If ν is a finite signed or complex measure, then Applications Probability theory The theorem is very important in extending the ideas of probability theory from probability masses and probability densities defined over real numbers to probability measures defined over arbitrary sets. It tells if and how it is possible to change from one probability measure to another. Specifically, the probability density function of a random variable is the Radon–Nikodym derivative of the induced measure with respect to some base measure (usually the Lebesgue measure for continuous random variables). For example, it can be used to prove the existence of conditional expectation for probability measures. The latter itself is a key concept in probability theory, as conditional probability is just a special case of it. Financial mathematics Amongst other fields, financial mathematics uses the theorem extensively, in particular via the Girsanov theorem. Such changes of probability measure are the cornerstone of the rational pricing of derivatives and are used for converting actual probabilities into those of the risk neutral probabilities. Information divergences If μ and ν are measures over , and μ ≪ ν The Kullback–Leibler divergence from ν to μ is defined to be For α > 0, α ≠ 1 the Rényi divergence of order α from ν to μ is defined to be The assumption of σ-finiteness The Radon–Nikodym theorem above makes the assumption that the measure μ with respect to which one computes the rate of change of ν is σ-finite. Negative example Here is an example when μ is not σ-finite and the Radon–Nikodym theorem fails to hold. Consider the Borel σ-algebra on the real line. Let the counting measure, , of a Borel set be defined as the number of elements of if is finite, and otherwise. One can check that is indeed a measure. It is not -finite, as not every Borel set is at most a countable union of finite sets. Let be the usual Lebesgue measure on this Borel algebra. Then, is absolutely continuous with respect to , since for a set one has only if is the empty set, and then is also zero. Assume that the Radon–Nikodym theorem holds, that is, for some measurable function one has for all Borel sets. Taking to be a singleton set, , and using the above equality, one finds for all real numbers . This implies that the function , and therefore the Lebesgue measure , is zero, which is a contradiction. Positive result Assuming the Radon–Nikodym theorem also holds if is localizable and is accessible with respect to , i.e., for all Proof This section gives a measure-theoretic proof of the theorem. There is also a functional-analytic proof, using Hilbert space methods, that was first given by von Neumann. 
For finite measures and , the idea is to consider functions with . The supremum of all such functions, along with the monotone convergence theorem, then furnishes the Radon–Nikodym derivative. The fact that the remaining part of is singular with respect to follows from a technical fact about finite measures. Once the result is established for finite measures, extending to -finite, signed, and complex measures can be done naturally. The details are given below. For finite measures Constructing an extended-valued candidate First, suppose and are both finite-valued nonnegative measures. Let be the set of those extended-value measurable functions such that: , since it contains at least the zero function. Now let , and suppose is an arbitrary measurable set, and define: Then one has and therefore, . Now, let be a sequence of functions in such that By replacing with the maximum of the first functions, one can assume that the sequence is increasing. Let be an extended-valued function defined as By Lebesgue's monotone convergence theorem, one has for each , and hence, . Also, by the construction of , Proving equality Now, since , defines a nonnegative measure on . To prove equality, we show that . Suppose ; then, since is finite, there is an such that . To derive a contradiction from , we look for a positive set for the signed measure (i.e. a measurable set , all of whose measurable subsets have non-negative measure), where also has positive -measure. Conceptually, we're looking for a set , where in every part of . A convenient approach is to use the Hahn decomposition for the signed measure . Note then that for every one has , and hence, where is the indicator function of . Also, note that as desired; for if , then (since is absolutely continuous in relation to ) , so and contradicting the fact that . Then, since also and satisfies This is impossible because it violates the definition of a supremum; therefore, the initial assumption that must be false. Hence, , as desired. Restricting to finite values Now, since is -integrable, the set is -null. Therefore, if a is defined as then has the desired properties. Uniqueness As for the uniqueness, let be measurable functions satisfying for every measurable set . Then, is -integrable, and In particular, for or . It follows that and so, that -almost everywhere; the same is true for , and thus, -almost everywhere, as desired. For -finite positive measures If and are -finite, then can be written as the union of a sequence of disjoint sets in , each of which has finite measure under both and . For each , by the finite case, there is a -measurable function such that for each -measurable subset of . The sum of those functions is then the required function such that . As for the uniqueness, since each of the is -almost everywhere unique, so is . For signed and complex measures If is a -finite signed measure, then it can be Hahn–Jordan decomposed as where one of the measures is finite. Applying the previous result to those two measures, one obtains two functions, , satisfying the Radon–Nikodym theorem for and respectively, at least one of which is -integrable (i.e., its integral with respect to is finite). It is clear then that satisfies the required properties, including uniqueness, since both and are unique up to -almost everywhere equality. If is a complex measure, it can be decomposed as , where both and are finite-valued signed measures. Applying the above argument, one obtains two functions, , satisfying the required properties for and , respectively. 
Clearly, is the required function. The Lebesgue decomposition theorem Lebesgue's decomposition theorem shows that the assumptions of the Radon–Nikodym theorem can be found even in a situation which is seemingly more general. Consider a σ-finite positive measure on the measure space and a σ-finite signed measure on , without assuming any absolute continuity. Then there exist unique signed measures and on such that , , and . The Radon–Nikodym theorem can then be applied to the pair . See also Girsanov theorem Radon–Nikodym set Notes References Contains a proof for vector measures assuming values in a Banach space. Contains a lucid proof in case the measure ν is not σ-finite. Contains a proof of the generalisation. Theorems in measure theory Articles containing proofs Generalizations of the derivative Integral representations
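The identities listed in the Properties section above take the following standard form; this is a compact LaTeX restatement of well-known facts (each equality holding almost everywhere with respect to the measure integrated against), not a quotation from the cited sources:

% Radon–Nikodym derivative identities, assuming σ-finite measures on the same measurable space.
\[
\frac{d(\nu+\mu)}{d\lambda} = \frac{d\nu}{d\lambda} + \frac{d\mu}{d\lambda}
\qquad (\nu \ll \lambda,\ \mu \ll \lambda),
\]
\[
\frac{d\nu}{d\lambda} = \frac{d\nu}{d\mu}\,\frac{d\mu}{d\lambda}
\quad (\nu \ll \mu \ll \lambda),
\qquad
\frac{d\mu}{d\nu} = \left(\frac{d\nu}{d\mu}\right)^{-1}
\quad (\mu \ll \nu \text{ and } \nu \ll \mu),
\]
\[
\int_X g\, d\mu = \int_X g\,\frac{d\mu}{d\lambda}\, d\lambda
\quad (\mu \ll \lambda,\ g \text{ a $\mu$-integrable function}),
\qquad
\frac{d|\nu|}{d\mu} = \left|\frac{d\nu}{d\mu}\right|
\quad (\nu \text{ a finite signed or complex measure, } \nu \ll \mu).
\]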
Radon–Nikodym theorem
[ "Mathematics" ]
2,488
[ "Articles containing proofs", "Theorems in mathematical analysis", "Theorems in measure theory" ]
338,849
https://en.wikipedia.org/wiki/The%20Last%20Starfighter
The Last Starfighter is a 1984 American space opera film directed by Nick Castle. The film tells the story of Alex Rogan (Lance Guest), a teenager who, after winning the high score in an arcade game that's secretly a simulation test, is recruited by an alien defense force to fight in an interstellar war. It also features Dan O'Herlihy, Catherine Mary Stewart, and Robert Preston in his final role in a theatrical film. The character of Centauri, a "lovable con-man", was written with him in mind and was a nod to his most famous role as Professor Harold Hill in The Music Man (1962). The Last Starfighter was released on July 13, 1984 by Universal Pictures. It received $28.7 million in the worldwide box office, against a budget of $15 million, and positive reviews from critics. The film, along with Walt Disney Pictures' Tron (1982), has the distinction of being one of cinema's earliest films to use extensive "real-life" computer-generated imagery (CGI) to depict its many starships, environments, and battle scenes. There was a subsequent novelization of the film by Alan Dean Foster, as well as a video game based on the production. In 2004, it was also adapted as an off-Broadway musical. Plot Alex Rogan is a teenager living in a trailer park with his mother and younger brother Louis, spending most of his spare time as the park's ad hoc handyman. Aside from his girlfriend Maggie, Alex's only diversion from his mundane existence is an arcade game called Starfighter, in which the player is "recruited by the Star League to defend the Frontier against Xur and the Ko-Dan Armada" in a space battle. On the evening he breaks the game's record as its highest-scoring player, Alex becomes angry and depressed on learning his bank loan for a college tuition has been rejected. The inventor of Starfighter, Centauri, arrives in a futuristic car with a proposition for Alex. Centauri is in fact a disguised alien and his car a spacecraft. Alex is taken to the planet Rylos while Beta, a doppelgänger android, is used to cover Alex's absence. Alex learns there is actually a real conflict between a Star League of peaceful worlds and the oppressive Ko-Dan Empire; the latter's armada, poised to invade Rylos, is led by Xur, a tyrannical Rylan traitor who has sabotaged the Frontier forcefield shielding Rylos and other worlds from the Ko-Dan. The last line of defense against the armada is a small fleet of Gunstar spacecraft, operated by "Navigators" paired with gunners called "Starfighters". Centauri's Starfighter arcade game is a recruiting tool designed to train Starfighters. Alex meets a friendly reptilian Navigator named Grig, and explains his unwillingness to take part in the coming conflict. Grig sympathizes with Alex while Centauri tries to persuade him to stay, touting him as a gifted Starfighter. Xur contacts Starfighter Command as Alex watches. After publicly executing a Star League spy, Xur threatens Rylos with imminent invasion, and an unnerved Alex asks to be taken home. On Earth, a disappointed Centauri gives Alex a means to contact him should he change his mind. A saboteur eliminates Starfighter Command's defenses and the Ko-Dan attack, killing the Starfighters and destroying their Gunstars. The saboteur warns Xur of Alex's escape. Alex discovers Beta and contacts Centauri to retrieve him. Centauri arrives just as Alex and Beta are attacked by a Zando-Zan, an alien assassin in Xur's service. 
Centauri is wounded protecting Alex, and he and Beta explain that more of them will be on their way to Earth; the only way for Alex to protect himself, his family, and his planet is to embrace his ability as a Starfighter. Alex agrees, and Centauri flies Alex back to Starfighter Command before succumbing to his injury. Alex and Grig take off in a prototype Gunstar which survived the earlier attack. While Grig mentors Alex, Beta finds it difficult to maintain his impersonation, particularly with Maggie. When another Zando-Zan shoots Beta in front of Maggie, revealing to both that Beta is an android imposter, Beta tells her the truth. They steal a pickup truck and chase the Zando-Zan back to its ship as it attempts to warn Xur. Beta has Maggie jump out before sacrificing himself by crashing the truck into the ship, destroying both and preventing the assassin's warning from being sent. The arrogant Xur assumes Alex has been eliminated and orders the armada to invade, but Alex and Grig ambush his command ship from behind. Ko-Dan Commander Kril orders Xur's arrest, but Alex's attack severely damages the command ship's weapons and communications with its fighters, and Xur escapes in the confusion. Alex and Grig attack the Ko-Dan fighters but are outnumbered and overwhelmed. Alex desperately activates a secret weapon that quickly destroys the remaining fighters. Kril attempts to ram them, but Alex cripples the command ship further, causing it to crash into Rylos' moon. Alex is proclaimed the savior of Rylos, and is persuaded to stay and help rebuild the Star League's Starfighter legion by Grig, Rylan Ambassador Enduran, and a recovered Centauri. Alex and Grig briefly return to Earth, landing their Gunstar in the trailer park, where Grig tells its residents of Alex's heroism. Alex bids his family farewell and asks Maggie to come with him, and she agrees. Inspired, Louis begins playing the Starfighter game. Cast Lance Guest as Alex Rogan / Beta Alex Rogan Robert Preston as Centauri Dan O'Herlihy as Grig Catherine Mary Stewart as Maggie Gordon Norman Snow as Xur Kay E. Kuter as Ambassador Enduran Barbara Bosson as Jane Rogan Chris Hebert as Louis Rogan Dan Mason as Lord Kril Vernon Washington as Otis John O'Leary as Rylan Bursar George McDaniel as Kodan 1st Officer Charlene Nelson as Rylan Technician John Maio as Friendly Alien Al Berry as Rylan Spy Scott Dunlop as Tentacle Alien Peter Nelson as Jack Blake Peggy Pope as Elvira Meg Wyllie as Granny Gordon Ellen Blake as Clara Potter Britt Leach as Mr. Potter Bunny Summers as Mrs. Boone Owen Bush as Mr. Boone Marc Alaimo as Hitchhiker Wil Wheaton as Louis' Friend Cameron Dye as Andy Geoffrey Blake as Gary Production The Last Starfighter was shot in 38 days, mostly night shoots in Canyon Country. It was one of the earliest films to make extensive use of computer graphics for its special effects. In place of physical models, 3D rendered models were used to depict space ships and many other objects. The Gunstar and other spaceships were designed by artist Ron Cobb, who also worked on Dark Star, Alien, Star Wars and Conan the Barbarian. Computer graphics for the film were rendered by Digital Productions (DP) on a Cray X-MP supercomputer. The company created 27 minutes of effects for the film. This was considered an enormous amount of computer generated imagery at the time. For the 300 scenes containing computer graphics in the film, each frame of the animation contained an average of 250,000 polygons and had a resolution of 3000 × 5000 36-bit pixels. 
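To put those rendering figures in perspective, the short calculation below estimates the raw image data involved; the 24 frames-per-second rate and the assumption of uncompressed 36-bit storage are illustrative guesses, not numbers from Digital Productions:

# Rough, hypothetical estimate of the data behind the CGI figures quoted above.
FRAME_WIDTH, FRAME_HEIGHT = 3000, 5000   # pixels per rendered frame, as stated
BITS_PER_PIXEL = 36                      # 36-bit pixels, as stated
FPS = 24                                 # assumed theatrical frame rate
EFFECTS_MINUTES = 27                     # minutes of computer effects in the film

bytes_per_frame = FRAME_WIDTH * FRAME_HEIGHT * BITS_PER_PIXEL / 8
total_frames = EFFECTS_MINUTES * 60 * FPS
total_bytes = bytes_per_frame * total_frames

print(f"one frame: about {bytes_per_frame / 1e6:.1f} MB uncompressed")           # ~67.5 MB
print(f"{total_frames} frames: about {total_bytes / 1e12:.2f} TB uncompressed")  # ~2.6 TB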
Digital Productions estimated that using computer animation required only half the time and between a third to half of the cost of traditional special effects. The result was a cost of $14 million for a film that made close to $29 million at the box office. DP used Fortran, CFT77 for programming: Not all special effects in the film were done with computer animation. The depiction of the Beta unit before it had taken Alex's form was a practical effect, created by makeup artist Lance Anderson. The Starcar, created by Gene Winfield and driven by Centauri, was a working vehicle based on Winfield's Spinner designs from Blade Runner. Because the test audiences responded positively to the Beta Alex character, director Nick Castle added more scenes of Beta Alex interacting with the trailer park community. Because Lance Guest had cut his hair short after initial filming had been completed and he contracted an illness during the re-shoots, his portrayal of Beta Alex in the added scenes has him wearing a wig and heavy makeup. Wil Wheaton had a few lines of dialogue that were ultimately cut from the film, but he still is visible in the background of several scenes. Music Composer Craig Safan wanted to go "bigger than Star Wars" and therefore utilized a "Mahler-sized" orchestra, resulting in an unusual breadth of instruments, including "quadruple woodwinds" and "eight trumpets, [trombones], and horns!" Reception Critical response At the review aggregator website Rotten Tomatoes, The Last Starfighter received an approval rating of 76%, based on 90 reviews, with an average rating of 6.5/10. The website's critical consensus reads: "While The Last Starfighter is clearly derivative of other sci-fi franchises, its boundary-pushing visual effects and lovably plucky tone make for an appealing adventure". Metacritic gave the film a score of 67 based on 8 reviews, indicating "generally favorable reviews". Over time, it has developed a cult following. Roger Ebert of The Chicago Sun-Times gave the film two-and-a-half out of four stars. While the actors were good, particularly Preston and O'Herlihy, Ebert wrote The Last Starfighter was "not a terrifically original movie" but it was nonetheless "well-made". Colin Greenland reviewed The Last Starfighter for Imagine magazine, and stated that "apart from a mildly amusing little sub-plot with the android replica left on Earth to conceal his absence, Alex's adventure is strictly the movie of the video game: simple as can be, and pitched at a pre-teen audience who can believe Alex and Grig blasting a hundred alien ships and escaping without a scratch." Halliwell's Film Guide described the film as "a surprisingly pleasant variation on the Star Wars boom, with sharp and witty performances from two reliable character actors and some elegant gadgetry to offset the teenage mooning". In 2017, Variety described it as having "a simple yet ingenious plot" and added that "the action is suitably fast and furious, but what makes the movie especially enjoyable are the quirky character touches given to Guest and his fellow players." Variety also noted that film critic Gene Siskel described The Last Starfighter as the best of all Star Wars imitators. 
Alan Jones awarded it three stars out of five for Radio Times, writing that it was a "glossy, space-age fairy tale" and "highly derivative — Star Trek-like aliens have Star Wars-inspired dog-fights against a computer-graphic backdrop — but the sensitive love story between Guest and Catherine Mary Stewart cuts through the cuteness and gives the intergalactic adventures a much-needed boost." Adaptations The Last Starfighters popularity has resulted in several non-film adaptations of the storyline and uses of the name. Musical A musical adaptation was first produced at the Storm Theatre Off-Off Broadway in New York City in October 2004, with music and lyrics by Skip Kennon and book by Fred Landau. In November 2005, the original cast recording was released on the Kritzerland label. Books Alan Dean Foster wrote a novelization of the film shortly after it was released (). Comics The year the film was released, Marvel Comics published a comic book adaptation by writer Bill Mantlo and artists Bret Blevins and Tony Salmons in Marvel Super Special #31. The adaptation was also available as a three-issue limited series. Games In 1984, FASA, a sci-fi tabletop game maker, created The Last Starfighter: Tunnel Chase board game. Video games Arcade A real The Last Starfighter arcade game by Atari, Inc. is promised in the end credits, but was never released. If released, the game would have been Atari's first 3D polygonal arcade game to use a Motorola 68000 as the CPU. Gameplay was to have been taken from game scenes and space battle scenes in the film, and used the same controller used on the first Star Wars arcade game. The game was abandoned once Atari representatives saw the film in post-production and decided it was not going to be a financial success. Home Home versions of the game for the Atari 2600 and Atari 5200 consoles and Atari 8-bit computers were also developed, but never commercially released under the Last Starfighter name. The home computer version was eventually renamed and released (with some minor changes) as Star Raiders II. A prototype exists for the Atari 2600 Last Starfighter game, which was in actuality a game already in development by Atari under the name Universe. The game was eventually released as Solaris. In 1990, an NES game titled The Last Starfighter was released, but was actually a conversion of Uridium for Commodore 64, with modified sprites, title screen and soundtrack. A freeware playable version of the game, based on what is seen in the film, was released for PC in 2007. This is a faithful reproduction of the arcade game from the film, featuring full sound effects and music from the game. Game creators Rogue Synapse also built a working arcade cabinet of the game. Potential sequel In February 2008, production company GPA Entertainment added "Starfighter – The sequel to the classic motion picture Last Starfighter to its list of projects. But two months later, the project was reported to be "stuck in the pre-production phase". It was still there . Hollywood directors including Seth Rogen and Steven Spielberg, as well as screenwriter Gary Whitta, expressed interest in creating a sequel or remake, but the potential sequel rights-holder, Jonathan R. Betuel, has allegedly indicated that he does not want another film made. The rights to the film have not been clearly defined due to conflicting information. 
Multiple sources say Universal Pictures still owns the theatrical and home media distribution rights while Warner Bros., which absorbed Lorimar-Telepictures (Lorimar's successor) in 1990, has the international distribution rights. Another source states that Universal has the option to remake the film while Betuel has sequel rights. Further complicating the situation is a claim that both Universal and Warner Bros. each have remake and sequel rights. In July 2015, it was reported that Betuel would write a TV reboot of the film. On April 4, 2018, Whitta posted concept art for The Last Starfighter sequel on his Twitter account. In the same tweet, he also indicated that Betuel would be collaborating with him on the project. In a follow-up interview with Gizmodo, Whitta referred to the project as "a combination of reboot and sequel that we both think honors the legacy of the original film while passing the torch to a new generation." On October 20, 2020, Betuel stated that, with Whitta, a script for a sequel was being written and the rights to the film had been recaptured. On March 25, 2021, Whitta posted a sequel concept reel on YouTube called The Last Starfighters with concept art by Matt Allsopp and music by Chris Tilton and Craig Safan, featuring an audio clip from the original movie by Robert Preston. See also Ender's Game — a 1977 short story/novelette by Orson Scott Card Armada — A 2015 novel by Ernest Cline with a similar premise References External links Animation Timeline from Brown University The Last Starfighter video game Arcade game specifications by Atari Podcast about The Last Starfighter by the Retroist 1984 films 1980s coming-of-age films 1980s science fiction action films American coming-of-age films American science fiction action films American science fiction war films American space adventure films American space opera films Films about computing Films about video games Films adapted into comics Films directed by Nick Castle Films scored by Craig Safan Films set on fictional planets Films set on spacecraft Films set in California Films shot in California Fiction about flying cars Universal Pictures films 1980s English-language films 1980s American films 1984 science fiction films English-language science fiction action films
The Last Starfighter
[ "Technology" ]
3,379
[ "Works about computing", "Films about computing" ]
338,937
https://en.wikipedia.org/wiki/White%20Pass%20and%20Yukon%20Route
The White Pass and Yukon Route (WP&Y, WP&YR) is a Canadian and U.S. Class III narrow-gauge railroad linking the port of Skagway, Alaska, with Whitehorse, the capital of Yukon. An isolated system, it has no direct connection to any other railroad. Equipment, freight and passengers are ferried by ship through the Port of Skagway, and via road through a few of the stops along its route. The railroad began construction in 1898 during the Klondike Gold Rush as a means of reaching the gold fields. With its completion in 1900, it became the primary route to the interior of the Yukon, supplanting the Chilkoot Trail and other routes. The route continued operation until 1982, and in 1988 was partially revived as a heritage railway. In July 2018, the railway was purchased by Carnival Corporation & plc. For many years the railroad was a subsidiary of Tri White Corporation, also the parent of Clublink, and operated by the Pacific and Arctic Railway and Navigation Company (in Alaska), the British Columbia Yukon Railway Company (in British Columbia) and the British Yukon Railway Company, originally known as the British Yukon Mining, Trading and Transportation Company (in Yukon), which used the trade name White Pass and Yukon Route. The railroad was sold by Clublink to a joint venture controlled by Survey Point Holdings, with a minority holding by the Carnival Corporation & plc parent company of the Carnival Cruise Line. The railway was designated as an international historic civil engineering landmark by the Canadian Society for Civil Engineering and the American Society of Civil Engineers in 1994. History Construction The line was born of the Klondike Gold Rush of 1897. The most popular route taken by prospectors to the gold fields in Dawson City was a treacherous route from the port in Skagway or Dyea, Alaska, across the mountains to the Canada–US border at the summit of the Chilkoot Pass or the White Pass. There, the prospectors were not allowed across by Canadian authorities unless they had sufficient gear for the winter, typically one ton of supplies. This usually required several trips across the passes. There was a need for better transportation than pack horses used over the White Pass or human portage over the Chilkoot Pass. This need generated numerous railroad schemes. In 1897, the Canadian government received 32 proposals for Yukon railroads, and most were never realized. In 1897, three separate companies were organized to build a rail link from Skagway to Fort Selkirk, Yukon, away. Largely financed by British investors organized by Close Brothers merchant bank, a railroad was soon under construction. A gauge was chosen by the railway contract builder Michael James Heney. The narrow roadbed required by narrow gauge greatly reduced costs when the roadbed was blasted in solid rock. Even so, 450 tons of explosives were used to reach White Pass summit. The narrow gauge also permitted tighter radii to be used on curves, making the task easier by allowing the railroad to follow the landscape more, rather than having to be blasted through it. Construction started in May 1898, but they encountered obstacles in dealing with the Skagway city government and the town's crime boss, Soapy Smith. The company president, Samuel H. Graves (1852–1911), was elected as chairman of the vigilante organization that was trying to expel Soapy and his gang of confidence men and rogues. On the evening of July 8, 1898, Soapy Smith was killed in the Shootout on Juneau Wharf with guards at one of the vigilante's meetings. 
Samuel Graves witnessed the shooting. The railroad helped block off the escape routes of the gang, aiding in their capture, and the remaining difficulties in Skagway subsided. On July 21, 1898, an excursion train hauled passengers for out of Skagway, the first train to operate in Alaska. On July 30, 1898, the charter rights and concessions of the three companies were acquired by the White Pass & Yukon Railway Company Limited, a new company organized in London. Construction reached the summit of White Pass, away from Skagway, by mid-February 1899. The railway reached Bennett, British Columbia, on July 6, 1899. In the summer of 1899, construction started north from Carcross to Whitehorse, north of Skagway. The construction crews working from Bennett along a difficult lakeshore reached Carcross the next year, and the last spike was driven on July 29, 1900, with service starting on August 1, 1900. By then much of the Gold Rush fever had died down. At the time, the gold spike was actually a regular iron spike. A gold spike was on hand, but the gold was too soft and instead of being driven, was just hammered out of shape. Early years As the gold rush wound down, serious professional mining was taking its place; not so much for gold as for other metals such as copper, silver and lead. The closest port was Skagway, and the only route there was via the White Pass & Yukon Route's river boats and railroad. While ores and concentrates formed the bulk of the traffic, the railroad also carried passenger traffic, and other freight. There was, for a long time, no easier way into the Yukon Territory, and no other way into or out of Skagway except by sea. Financing and route was in place to extend the rails from Whitehorse to Carmacks, but there was chaos in the river transportation service, resulting in a bottleneck. The White Pass instead used the money to purchase most of the riverboats, providing a steady and reliable transportation system between Whitehorse and Dawson City. While the WP&YR never built between Whitehorse and Fort Selkirk, some minor expansion of the railway occurred after 1900. In 1901, the Taku Tram, a portage railroad was built at Taku City, British Columbia, which was operated until 1951. It carried passengers and freight between the SS Tutshi operating on Tagish Lake and the MV Tarahne operating across Atlin Lake to Atlin, British Columbia (While Tutshi was destroyed by a suspicious fire around 1990, Tarahne was restored and hosts special dinners including murder mysteries. Lifeboats built for Tutshi'''s restoration were donated to Tarahne). The Taku Tram could not turn around, and simply backed up on its westbound run. The locomotive used, the Duchess, is now in Carcross. In 1910, the WP&YR operated a branch line to Pueblo, a mining area near Whitehorse. This branch line was abandoned in 1918; a haul-road follows that course today but is mostly barricaded; a Whitehorse Star editorial in the 1980s noted that this route would be an ideal alignment if the Alaska Highway should ever require a bypass reroute around Whitehorse. By June 1914, the WP&YR had 11 locomotives, 15 passenger cars and 233 freight cars operating on of trackage; generating $68,368 in passenger revenue and $257,981 in freight revenue; still a profitable operation as operating expenses were only $100,347. While all other railroads in the Yukon (such as the Klondike Mines Railway at Dawson City) had been abandoned by 1914, the WP&YR continued to operate. 
During the Great Depression, traffic was sparse on the WP&YR, and for a time trains operated as infrequently as once a week. World War II Alaska became strategically important for the United States during World War II; there was concern that the Japanese might invade it, as Alaska was the closest part of the United States to Japan. Following the Attack on Pearl Harbor, the decision was made by the US and Canadian governments to construct the Alaska Highway as an all-weather overland route to ensure communication. One of the principal staging points for construction was Whitehorse, which could be supplied by the WP&YR. By that time the railroad was a financially-starved remnant from Klondike gold rush days, with well-worn engines and rolling stock. Despite this, the railroad moved during the first 9 months of 1942, more than double its prewar annual traffic. Even this was deemed insufficient, so the U.S. Government leased the railroad for the duration, effective at 12:01 a.m. on 1 October 1942, handing control to the United States Army. What became the 770th Railway Operating Battalion of the Military Railway Service took over train operations in company with the WP&Y's civilian staff. Major John E. Ausland, a former executive with the Chicago, Burlington and Quincy Railroad, was named superintendent, while Lieutenant Stanley Jerome Gaetz was trainmaster. Canadian law forbade foreign government agencies from operating within Canada and its territories, but Japanese forces had occupied some of the Aleutian Islands by this time, and an accommodation was quickly reached to "make an illegal action legal." The MRS scoured the US for usable narrow-gauge locomotives and rolling stock, and soon a strange and colorful assortment began arriving at Skagway. The single-largest group was seven D&RGW K-28 class 2-8-2's acquired prior to the lease, in August 1942, 2-8-0's from the Silverton Northern and the Colorado & Southern, all over 40 years old, and a pair of ET&WNC 4-6-0's soon appeared, among others, as well as eleven new War Department Class S118 2-8-2's. WP&Y's original roster of 10 locomotives and 83 cars was soon eclipsed by the Army's additional 26 engines and 258 cars. The increase in traffic was remarkable: in the last 3 months of 1942, the railroad moved . In 1943, the line carried , equivalent to ten years' worth of typical prewar traffic: all this despite some of the most severe winter weather recorded since 1910; gales, snowdrifts and temperatures of succeeded in blockading the line from 5 to 15 February 1943 and from 27 January to 14 February 1944. The peak movement occurred on August 4, 1943, when the White Pass moved 38 trains north and south, totaling (), and in 24 hours. Control of the railroad was handed back to its civilian operators late in 1944. 1946–1982 In May, 1947, the railroad purchased its last steam locomotives. These were a pair of 2-8-2 Mikado type engines built by the Baldwin Locomotive Works of Philadelphia, numbered No. 72 and No. 73. In 1951, the White Pass and Yukon Corporation Ltd., a new holding company, was incorporated to acquire the three railway companies comprising the WP&YR from the White Pass and Yukon Company, Ltd., which was in liquidation. The railway was financially restructured. While most other narrow-gauge systems in North America were closing around this time, the WP&YR remained open. In 1959, the first dividend to stockholders was paid: 10 cents per share. 
The railroad began dieselizing in the mid to late 1950s: one of the few North American narrow-gauge railroads to do so. The railroad bought shovelnose diesels from General Electric, and later road-switchers from American Locomotive Company (ALCO) and Montreal Locomotive Works, as well as a few small switchers. On June 30, 1964, the line retired its last steam locomotive. The railroad was an early pioneer of intermodal freight traffic, commonly called containerization; advertising of the time referred to it as the Container Route. The WP&YR owned an early container ship (Clifford J. Rogers, built in 1955), and in 1956 introduced containers, although these were far smaller than the truck-sized containers that came into use in the United States in 1956 and could not readily be handed off to other railroads or ship lines. The Faro lead-zinc mine opened in 1969. The railway was upgraded with seven new locomotives from ALCO, new freight cars, ore buckets, a straddle carrier at Whitehorse to transfer from the railway's new fleet of trucks, a new ore dock at Skagway, and assorted work on the rail line to improve alignment. In the fall of 1969, a new tunnel and bridge that bypassed Dead Horse Gulch were built to replace the tall steel cantilever bridge that could not carry the heavier trains. This enormous investment made the company dependent on continued ore traffic to earn the revenue, and left the railway vulnerable to loss of that ore-carrying business. As well, passenger traffic on the WP&YR was increasing as cruise ships started to visit Alaska's Inside Passage. There was no road from Skagway to Whitehorse until 1978. Even after the road was built, the White Pass still survived on the ore traffic from the mines. During this time, the green-yellow engine color scheme, with a thunderbird on the front, was replaced with blue, patterned with black and white. (The green-yellow scheme was restored in the early 1990s, along with the thunderbird. , however, one engine still had the blue color scheme. The steam engines, however, remained basic black.) In 1982, metal prices plunged, striking with devastating effect on the mines that were the White Pass and Yukon Route's main customers. Many, including the Faro lead-zinc mine, closed down, and, with that traffic gone, the White Pass was doomed as a commercial railroad. Hopeful of a reopening, the railway ran at a significant loss for several months, carrying only passengers. However, the railway closed down on October 7, 1982. Some of the road's ALCO diesels were sold to a railroad in Colombia, and three (out of four, and one of these was wrecked) of the newer ALCO diesels built by and in storage with ALCO's Canadian licensee MLW (Montreal Locomotive Works) were sold to US Gypsum in Plaster City, California. Only one of these modern narrow-gauge diesels, the last narrow-gauge diesel locomotives built for a North American customer, was delivered to the White Pass. The five diesels sold to Colombia were not used there as they were too heavy, and were re-acquired in 1999; one was nearly lost at sea during a storm as it broke loose on the barge and slowly rolled towards the edge. The railway was the focus of the first episode of the BBC Television series Great Little Railways in 1983. Heritage railway: 1988–present The shutdown, however, was not for long. Tourism to Alaska began to increase, with many cruise ships stopping at Skagway. 
The scenery of the White Pass route sounded like a great tourist draw; and the rails of the White Pass & Yukon Route were laid right down to the docks, even along them, for the former freight and cruise ship traffic. Cruise operators, remembering the attraction of the little mountain climbing trains to their passengers, pushed for a re-opening of the line as a heritage railway. The White Pass was and is perfectly positioned to sell a railroad ride through the mountains to cruise ship tourists; they do not even have to walk far from their ships. Following a deal between White Pass and the United Transportation Union, representing Alaska employees of the road, the White Pass Route was reopened between Skagway and White Pass in 1988: purely for tourist passenger traffic. The White Pass Route also bid on the ore-haul from the newly reopened Faro mine, but its price was considerably higher than road haulage over the Klondike Highway. The railway still uses vintage parlor cars, the oldest four built in 1881 and predating WP&YR by 17 years, and four new cars built in 2007 follow the same 19th-century design. At least eight cars have wheelchair lifts. A work train reached Whitehorse on September 22, 1988, its intent being to haul two locomotives, parked in Whitehorse for six years, to Skagway to be overhauled and used on the tourist trains. While in Whitehorse for approximately one week, it hauled the parked rolling stock – flatcars, tankers and a caboose – out of the downtown area's sidings, and the following year, they were hauled further south, many eventually sold. Most of the tracks in downtown Whitehorse have now been torn up, and the line's terminus is six city blocks south of the old train depot at First Avenue and Main Street. A single new track along the waterfront enables the operation of the Whitehorse Waterfront Trolley, a tourist line run by a local historical society. After customs and Canadian Labour Union jurisdictional issues were resolved, the WP&YR main line reopened to Fraser in 1989, and to Bennett in 1992. A train reached Carcross station in 1997 to participate in the Ton of Gold centennial celebration. A special passenger run, by invitation only, was made from Carcross to Whitehorse on October 10, 1997, and there are plans to eventually re-open the entire line north to Whitehorse if a market exists. So far, the tracks are only certified to Carcross by the Canadian Transportation Agency; on July 29, 2006, White Pass ran a train to Carcross and announced passenger service would begin in May 2007, six trains per week, with motorcoach return trips. Since the distance between Skagway and Whitehorse is , and the distance of line between Skagway and Carcross is , this means that about 63% of the original line is now used again. Even when the length of the unused portion of the line is excluded, the WP&YR is longer than other notable North American narrow-gauge railroads, such as the Cumbres & Toltec Scenic Railroad () and the Durango & Silverton Narrow Gauge Railroad (). WP&YR acquired some rolling stock from Canadian National's Newfoundland operations, which shut down in November 1988; the acquisition included 8 side-pivot, drop-side air dump cars for large rocks, and 8 longitudinal hoppers for ballast, still painted in CN orange. These cars were converted from Newfoundland's gauge bogies to White Pass and Yukon Route's 3 ft (914 mm) narrow gauge bogies. Most trains are hauled by the line's diesel locomotives, painted in green (lower) and yellow (upper). 
However, one of the line's steam locomotives is still in operation, No. 73, a 2-8-2 Mikado-type locomotive. Another steam locomotive, No. 40 a 2-8-0 Consolidation type locomotive was on loan from the Georgetown Loop R.R. in Colorado for a period of five years, but was returned after only two years. Former WP&Y 69, a 2-8-0, was re-acquired in 2001, rebuilt, and re-entered service in 2008. Also operational, a few times a year, is an original steam-powered rotary snowplow, an essential device in the line's commercial service days. (The rotaries were retired in 1964, along with the remaining steam engines that pushed them, and snow clearing was done by caterpillar tractor.) While it is not needed, as the tourist season is only in the summer months, it is a spectacle in operation, and the White Pass runs the steam plow for railfan groups once or twice each winter, pushed by two diesel locomotives (in 2000 only, it was pushed by two steam locomotives, Nos. 73 and 40). The centennial of the Golden Spike at Carcross was re-enacted on July 29, 2000, complete with two steam engines meeting nose-to-nose (No. 73 and No. 40), and a gold-coated steel spike being driven by a descendant of WP&YR contractor Michael James Heney. One organization chartered a steam-pulled train from Carcross to Fraser, with a stopover at Bennett, on Friday, June 24, 2005. When participants seemed unlikely to reach the planned numbers, surplus seats were sold to the public (120 USD or 156 CAD), with bus return to Carcross from Fraser. This represented the first paid passenger trips out of Carcross since 1982, a feature that started regular service in 2007. White Pass president Gary Danielsen advised a CBC Radio interviewer that service to Whitehorse would require an enormous capital investment to restore the tracks, but the company is willing if there is either a passenger or freight potential to make it cost-effective. A June 2006 report on connecting Alaska to the continental railroad network suggested Carmacks as a hub, with a branch line to Whitehorse and beyond to either Skagway or Haines, Alaska.Several former White Pass steam locomotives are currently in operation at tourist attractions in the Southeastern United States. Locomotives 70, 71, and 192 are at the Dollywood amusement park in Pigeon Forge, Tennessee. Locomotive 190 is at the Tweetsie Railroad in Boone, North Carolina. In late June 2010, the railroad and the City of Skagway entered into an agreement whereby the two would jointly advocate for the restoration of freight service on the line, including the revival of the trackage north of Carcross back to Whitehorse and the possibility of constructing new track north from Whitehorse to Carmacks. The expansion would require federal funds, and, if completed, would serve the region's mining industry. In July 2018, the railway was purchased by Klondike Holdings and Carnival Corporation & plc, in a joint venture.Carnival Corp buys Alaska heritage railroad Railway Age June 15, 2018. Accidents In 1951, engine No. 70 caught a guardrail with its snowplow and rolled over on its side. The locomotive was repaired and now is in operation at Dollywood in Pigeon Forge, Tennessee, working on the Dollywood Express. In 1994, during rock removal operations, a backhoe operator accidentally struck a petroleum pipeline near the railroad tracks. The operator's mistake caused the pipeline to rupture and spill between of heating oil into the Skagway River. Roadmaster Edward Hanousek Jr. and President M. Paul Taylor Jr. 
were charged with several crimes associated with the accident. Both men maintained their innocence throughout many years of extensive litigation. After remand by the 9th Circuit Court of Appeals, the President settled on a plea agreement on misdemeanor charges of making negligent misrepresentations to the Coast Guard. The Roadmaster was convicted on negligence-related charges. A serious derailment on September 3, 2006 resulted in the death of one section worker. A work train, Engine 114 pulling eight gravel cars, derailed approximately south of Bennett, injuring all four train crew, two Canadian and two American; one died at the scene and the others had to be airlifted to a hospital. Passenger operations on the blocked section had ended for the season just before the accident. In February 2007, Engine 114 was taken for repair to the Coast Engine and Equipment Company (CEECO) in Tacoma, Washington. On July 23, 2014 a derailment occurred involving two vintage diesel locomotives and four passenger rail cars. The accident was due to a broken throw rod at a switch. Both locomotives derailed, and the rails broke. There were nine minor injuries initially reported, and those passengers were treated and released in Skagway. Later reports state that 19 passengers and four railroad employees were injured. Due to the derailment, the line was temporarily suspended. Rosters of White Pass Locomotives and Cars, Boats, and Winter Stages For the roster of White Pass locomotives and railroad cars, see List of White Pass and Yukon Route locomotives and cars. For the roster of White Pass boats, see List of steamboats on the Yukon River. For the roster of White Pass winter stages, see'' Overland Trail (Yukon). War Department Baldwin 2-8-2 Locomotives There are two persistent myths regarding the role the White Pass & Yukon Route played in building the Alaska Highway during the Second World War. They concern the eleven new USATC S118 Class locomotives that the United States Army Transportation Corps brought to the WP&YR in 1943. The first is that they were converted from gauge to gauge by the WP&YR shops in Skagway, Alaska. The second is that they were built for Iran and diverted to the WP&YR. These locomotives, designated USA 190 to USA 200, were constructed by Baldwin Locomotive Works as gauge and shipped fully assembled. The MacArthur was designed by the American Locomotive Company for gauge and the smaller gauges were accommodated with spacers (rings) of various widths between the wheels and the bogie side frames on same length axles. The spacers were wide for gauge track and wide for . In total, nearly 800 MacArthurs were produced by ALCO, Baldwin, and a few other manufacturers. The reason the USA 190–200 were never destined for Iran is that the Trans-Iranian Railway was built to . Also, because of scarce water and extensive tunnels, Iran was the first case where the Army primarily used diesel locomotives. USATC narrow-gauge locomotives were never destined for Iran. The first locomotives of the MacArthur design that Baldwin Locomotive Works built were USA 190–200 for the WP&YR, which makes them unique. The initial 1942 sale order to Baldwin was for 60 MacArthur gauge locomotives for India's extensive meter-gauge rail network. The first eleven were diverted to the WP&YR as gauge, the next 15 went to India as meter gauge, another 20 went to Queensland Rail as gauge, and the remaining 14 were meter gauge for India where the order was destined before the Alaskan and Australian diversions. 
Gallery See also List of heritage railways in Canada List of heritage railroads in the United States List of narrow-gauge railways in British Columbia Narrow-gauge railways in Canada List of Historic Civil Engineering Landmarks References Additional reading External links Historic WP&Y route map A WP&YR friend and fan web site by Boerries Burkhardt 4/15/1899; The first railway to the Klondike – The White Pass and Yukon Railway Davies/Scroggie Collection of White Pass and Yukon Documents and Ephemera. Yale Collection of Western Americana, Beinecke Rare Book and Manuscript Library 3 ft gauge railways in the United States 3 ft gauge railways in Canada Narrow gauge railways in British Columbia Narrow gauge railways in Yukon Narrow gauge railroads in Alaska Heritage railways in British Columbia Heritage railways in Yukon Heritage railroads in Alaska Defunct Alaska railroads Defunct British Columbia railways Defunct Yukon railways Passenger railroads in Alaska Passenger railways in British Columbia Passenger railways in Yukon Transportation in Municipality of Skagway Borough, Alaska Klondike Gold Rush Historic American Engineering Record in Alaska Historic Civil Engineering Landmarks Tourist attractions in the Municipality of Skagway Borough, Alaska Carnival Corporation & plc
White Pass and Yukon Route
[ "Engineering" ]
5,467
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
338,965
https://en.wikipedia.org/wiki/Wing%20loading
In aerodynamics, wing loading is the total weight of an aircraft or flying animal divided by the area of its wing. The stalling speed, takeoff speed and landing speed of an aircraft are partly determined by its wing loading. The faster an aircraft flies, the more its lift is changed by a change in angle of attack, so a smaller wing is less adversely affected by vertical gusts. Consequently, faster aircraft generally have higher wing loadings than slower aircraft in order to avoid excessive response to vertical gusts. A higher wing loading also decreases maneuverability. The same constraints apply to winged biological organisms. Range of wing loadings Effect on performance Wing loading is a useful measure of the stalling speed of an aircraft. Wings generate lift owing to the motion of air around the wing. Larger wings move more air, so an aircraft with a large wing area relative to its mass (i.e., low wing loading) will have a lower stalling speed. Therefore, an aircraft with lower wing loading will be able to take off and land at a lower speed (or be able to take off with a greater load). It will also be able to turn at a greater rate. Effect on takeoff and landing speeds The lift force L on a wing of area A, traveling at true airspeed v, is given by L = ½ρv²CLA, where ρ is the density of air and CL is the lift coefficient. The lift coefficient is a dimensionless number that depends on the wing cross-sectional profile and the angle of attack. In steady flight, neither climbing nor diving, the lift force and the weight are equal. With L/A = Mg/A = WSg, where M is the aircraft mass, WS = M/A the wing loading (in mass/area units, i.e. lb/ft2 or kg/m2, not force/area) and g the acceleration due to gravity, this equation gives the speed v through v = √(2gWS/(ρCL)). As a consequence, aircraft with the same CL at takeoff under the same atmospheric conditions will have takeoff speeds proportional to √WS, the square root of the wing loading. So if an aircraft's wing area is increased by 10% and nothing else is changed, the takeoff speed will fall by about 5%. Likewise, if an aircraft designed to take off at 150 mph grows in weight during development by 40%, its takeoff speed increases to ≈ 177 mph. Some flyers rely on their muscle power to gain speed for takeoff over land or water. Ground nesting and water birds have to be able to run or paddle at their takeoff speed before they can take off. The same is true for a hang-glider pilot, though they may get assistance from a downhill run. For all these, a low WS is critical, whereas passerines and cliff-dwelling birds can get airborne with higher wing loadings. Effect on turning performance To turn, an aircraft must roll in the direction of the turn, increasing the aircraft's bank angle. Turning flight lowers the wing's lift component against gravity and hence causes a descent. To compensate, the lift force must be increased by increasing the angle of attack by use of up elevator deflection, which increases drag. Turning can be described as "climbing around a circle" (wing lift is diverted to turning the aircraft), so the increase in wing angle of attack creates even more drag. The tighter the turn radius attempted, the more drag induced; this requires that power (thrust) be added to overcome the drag. The maximum rate of turn possible for a given aircraft design is limited by its wing size and available engine power: the maximum turn the aircraft can achieve and hold is its sustained turn performance.
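To make the square-root relationship above concrete, here is a minimal Python sketch; it is not part of the original article, and the air density, maximum lift coefficient, and wing-loading figures are assumed purely for illustration.

```python
import math

def lift_equals_weight_speed(wing_loading_kg_m2, cl, rho=1.225, g=9.81):
    """Speed (m/s) at which lift equals weight: v = sqrt(2*g*WS / (rho*CL))."""
    return math.sqrt(2.0 * g * wing_loading_kg_m2 / (rho * cl))

# Illustrative comparison of a lightly loaded and a heavily loaded wing
# (the wing loadings and the CL of 1.5 are assumed values, not from the article).
for label, ws in [("~50 kg/m^2", 50.0), ("~500 kg/m^2", 500.0)]:
    v = lift_equals_weight_speed(ws, cl=1.5)
    print(f"wing loading {label}: ~{v:.0f} m/s ({v * 3.6:.0f} km/h)")

# Square-root scaling: 10% more wing area (so ~9% less wing loading)
# lowers the required speed by roughly 5%, as stated above.
print(lift_equals_weight_speed(500.0 / 1.1, cl=1.5) / lift_equals_weight_speed(500.0, cl=1.5))
```

Because the speed scales with the square root of the wing loading, doubling the wing loading raises the speed needed for lift to balance weight by a factor of about 1.41.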
As the bank angle increases, so does the g-force applied to the aircraft, this having the effect of increasing the wing loading and also the stalling speed. This effect is also experienced during level pitching maneuvers. As stalling is due to wing loading and maximum lift coefficient at a given altitude and speed, this limits the turning radius due to maximum load factor. At Mach 0.85 and 0.7 lift coefficient, a wing loading of can reach a structural limit of 7.33g up to and then decreases to 2.3g at . With a wing loading of the load factor is half as large and barely reaches 1g at . Aircraft with low wing loadings tend to have superior sustained turn performance because they can generate more lift for a given quantity of engine thrust. The immediate bank angle an aircraft can achieve before drag seriously bleeds off airspeed is known as its instantaneous turn performance. An aircraft with a small, highly loaded wing may have superior instantaneous turn performance, but poor sustained turn performance: it reacts quickly to control input, but its ability to sustain a tight turn is limited. A classic example is the F-104 Starfighter, which has a very small wing and high wing loading. At the opposite end of the spectrum was the large Convair B-36: its large wings resulted in a low wing loading that could make it sustain tighter turns at high altitude than contemporary jet fighters, while the slightly later Hawker Hunter had a similar wing loading of . The Boeing 367-80 airliner prototype could be rolled at low altitudes with a wing loading of at maximum weight. Like any body in circular motion, an aircraft that is fast and strong enough to maintain level flight at speed v in a circle of radius R accelerates towards the center at v²/R. This acceleration is caused by the inward horizontal component of the lift, L sin θ, where θ is the banking angle. Then from Newton's second law, Mv²/R = L sin θ = ½ρv²CLA sin θ. Solving for R gives R = 2M/(ρCLA sin θ) = 2WS/(ρCL sin θ). The lower the wing loading, the tighter the turn. Gliders designed to exploit thermals need a small turning circle in order to stay within the rising air column, and the same is true for soaring birds. Other birds, for example, those that catch insects on the wing, also need high maneuverability. All need low wing loadings. Effect on stability Wing loading also affects gust response, the degree to which the aircraft is affected by turbulence and variations in air density. A small wing has less area on which a gust can act, and a highly loaded wing changes its load factor less in response to a given gust, both of which serve to smooth the ride. For high-speed, low-level flight (such as a fast low-level bombing run in an attack aircraft), a small, thin, highly loaded wing is preferable: aircraft with a low wing loading are often subject to a rough, punishing ride in this flight regime. The F-15E Strike Eagle has a wing loading of (excluding fuselage contributions to the effective area), whereas most delta-wing aircraft (such as the Dassault Mirage III, for which WS = 387 kg/m2) tend to have large wings and low wing loadings. Quantitatively, if a gust produces an upward pressure of G (in N/m2, say) on an aircraft of mass M, the upward acceleration a will, by Newton's second law, be given by a = GA/M = G/WS, decreasing with wing loading. Effect of development A further complication with wing loading is that it is difficult to substantially alter the wing area of an existing aircraft design (although modest improvements are possible). As aircraft are developed they are prone to "weight growth"—the addition of equipment and features that substantially increase the operating mass of the aircraft.
An aircraft whose wing loading is moderate in its original design may end up with very high wing loading as new equipment is added. Although engines can be replaced or upgraded for additional thrust, the effects on turning and takeoff performance resulting from higher wing loading are not so easily reconciled. Water ballast use in gliders Modern gliders often use water ballast carried in the wings to increase wing loading when soaring conditions are strong. By increasing the wing loading the average speed achieved across country can be increased to take advantage of strong thermals. With a higher wing loading, a given lift-to-drag ratio is achieved at a higher airspeed than with a lower wing loading, and this allows a faster average speed across country. The ballast can be ejected overboard when conditions weaken or prior to landing. Design considerations Fuselage lift A blended wing-fuselage design such as that found on the General Dynamics F-16 Fighting Falcon or Mikoyan MiG-29 Fulcrum helps to reduce wing loading; in such a design the fuselage generates aerodynamic lift, thus improving wing loading while maintaining high performance. Variable-sweep wing Aircraft like the Grumman F-14 Tomcat and the Panavia Tornado employ variable-sweep wings. As their wing area varies in flight so does the wing loading (although this is not the only benefit). When the wing is in the forward position takeoff and landing performance is greatly improved. Flaps Like all aircraft flaps, Fowler flaps increase the camber and hence the maximum value of lift coefficient (CLmax) lowering the landing speed. They also increase wing area, decreasing the wing loading, which further lowers the landing speed. High lift devices such as certain flaps allow the option of smaller wings to be used in a design in order to achieve similar landing speeds compared to an alternate design using a larger wing without a high lift device. Such options allow for higher wing loading in a design. This may result in beneficial features, such as higher cruise speeds or a reduction in bumpiness at high speed low altitude flight (the latter feature is very important for close air support aircraft roles). For instance, Lockheed's Starfighter uses internal Blown flaps to achieve a high wing loading design (723 kg/m²) which allows it a much smoother low altitude flight at full throttle speeds compared to low wing loading delta designs such as the Mirage 2000 or Mirage III (387 kg/m²). The F-16 which has a relatively high wing loading of 689 kg/m² uses leading-edge extensions to increase wing lift at high angles of attack. See also Disk loading Lift coefficient Wing warping References Notes Bibliography Notes External links Aircraft aerodynamics Aircraft configurations Aircraft performance Gliding technology
Wing loading
[ "Engineering" ]
1,999
[ "Aircraft configurations", "Aerospace engineering" ]
338,985
https://en.wikipedia.org/wiki/Business%20performance%20management
Business performance management (BPM) (also known as corporate performance management (CPM) enterprise performance management (EPM), organizational performance management, or performance management) is a management approach which encompasses a set of processes and analytical tools to ensure that an organization's activities and output are aligned with its goals. BPM is associated with business process management, a larger framework managing organizational processes. It aims to measure and optimize the overall performance of an organization, specific departments, individual employees, or processes to manage particular tasks. Performance standards are set by senior leadership and task owners which may include expectations for job duties, timely feedback and coaching, evaluating employee performance and behavior against desired outcomes, and implementing reward systems. BPM can involve outlining the role of each individual in an organization in terms of functions and responsibilities. History By 2017, Gartner had reclassified CPM as "financial planning and analysis" (FP&A) and "financial close" to reflect an increased focus on planning and the emergence of new solutions for financial close management. Definition and scope New technology realizes corporate strategic outcomes and describes risk-management programs. Application Performance-management principles are used most often in the workplace and can be applied wherever people interact with their environments to produce desired effects, such as health settings. How performance management is applied is important to get the most out of a group, and can improve day-to-day employee performance. It must not encourage internal competition, but teamwork, cooperation, and trust. Performance management aligns company goals with those of teams and employees to increase efficiency, productivity, and profitability. Its guidelines stipulate the activities and outcomes by which employees and teams are evaluated during performance appraisal. Many types of organizations use performance management systems (PMS) to evaluate themselves according to their targets, objectives, and goals; a research institute may use PMS to evaluate its success in reaching development targets. Complex performance drivers such as the societal contribution of research may be evaluated with other performance drivers, such as research commercialization and collaborations, in sectors like commercial agriculture. A research institute may use data-driven, real-time PMS to deal with complex performance-management challenges for a country developing its agricultural sector. Werner Erhard, Michael C. Jensen, and their colleagues developed a new approach to improving performance in organizations. Their work emphasizes how constraints imposed by one's worldview can impede cognitive abilities, and explores the source of performance which is inaccessible by cause-and-effect analysis. They say that a person's performance correlates with their work situation, and language (including what is said and unsaid in conversations) plays a major role. Performance is more likely to be improved when management understands how employees perceive the world and implementing changes which are compatible with that worldview. Public-sector effects In the public sector, the effects of performance-management systems have ranged from positive to negative; this suggests that differences among systems and the context in which they are implemented affect their success or failure. 
How it can fail Employees who question the fairness of a performance-management system or are overly competitive will affect its effectiveness; those who do not feel adequately rewarded become disgruntled with the process. Without proper system planning, employees may view it as mandating compliance. Organizational development In organizational development (OD), performance can be thought of as actual versus desired results; where actual results fall short of those desired is the performance-improvement zone. Performance improvement aims to close the gap between the two. Other organizational-development definitions differ slightly. According to the U.S. Office of Personnel Management (OPM), performance management is a system or process in which work is planned and expectations are set; performance of the work is monitored; staff ability to perform is developed; performance is rated and the ratings summarized, and top performance is rewarded. Design and implementation An organization-wide 360-degree feedback process integrated into the organization's culture can be a powerful tool for communicating and instituting change, rapidly touching all members of the organization when new markets, strategies, values and structures are introduced into the system. Each year, companies spend considerable money on their performance-management systems. For performance management to succeed, businesses must continue to adapt their system to correct current deficiencies. Some aspects, such as goal setting or performance bonuses, may resonate more with employees than others. Outcomes According to Richard et al. (2009), organizational-performance metrics encompass three outcomes: Financial performance, such as profits, return on assets, and return on investment Product market performance, such as sales and market share Total shareholder return, economic value added, and similar Organizational effectiveness is a similar term. Technology Business performance management requires large organizations to collect and report large volumes of data. Software vendors, particularly those offering business intelligence tools, offer products to assist in this process. BPM is often incorrectly understood as relying on software to work, and many definitions suggest software as essential to the approach. Interest in BPM by the software community may be sales-driven. See also Behavioral systems analysis Data visualization Electronic performance support systems Executive information systems Integrated business planning IT performance management List of management topics Operational performance management Organizational behavior management Organizational engineering PDCA Performance measurement Rosabeth Moss Kanter Vitality curve (a.k.a. stack ranking) Strategy Markup Language and particularly StratML Part 2, Performance Plans and Reports References Further reading Business Intelligence and Performance Management: Theory, Systems, and Industrial Applications, P. Rausch, A. Sheta, A. Ayesh (Eds.), Springer Verlag U.K., 2013, . Performance Management - Integrating Strategy Execution, Methodologies, Risk, and Analytics. Gary Cokins, John Wiley & Sons, Inc. 2009. Journal of Organizational Behavior Management, Routledge Taylor & Francis Group. Published quarterly. 2009. Handbook of Organizational Performance, Thomas C. Mawhinney, William K. Redmon & Carl Merle Johnson. Routledge. 2001. Improving Performance: How to Manage the White Space in the Organization Chart, Geary A. Rummler & Alan P. Brache. Jossey-Bass; 2nd edition. 1995. 
Human Competence: Engineering Worthy Performance, Thomas F. Gilbert. Pfeiffer. 1996. The Values-Based Safety Process: Improving Your Safety Culture with Behavior-Based Safety, Terry E. McSween. John Wiley & Sons. 1995. Performance-based Instruction: Linking Training to Business Results, Dale Brethower & Karolyn Smalley. Pfeiffer; Har/Dis edition. 1998. Handbook of Applied Behavior Analysis, John Austin & James E. Carr. Context Press. 2000. Managing for Performance, Alasdair A. K. White. Piatkus Books, 1995 External links Plug-In T12 Business Process http://www.sci.brooklyn.cuny.edu/~firat/mis/PlugInT12.pdf Business Finance: Bred Tough: The Best-of-Breed, 2009 (July 2009) Defining success through strategic planning and priority goal setting The Balanced Scorecard Organizational Performance Management Business intelligence terms Information technology management Management by type Organizational theory Engineering management
Business performance management
[ "Technology", "Engineering" ]
1,461
[ "Information technology", "Engineering economics", "Engineering management", "Information technology management" ]
339,024
https://en.wikipedia.org/wiki/Length%20contraction
Length contraction is the phenomenon that a moving object's length is measured to be shorter than its proper length, which is the length as measured in the object's own rest frame. It is also known as Lorentz contraction or Lorentz–FitzGerald contraction (after Hendrik Lorentz and George Francis FitzGerald) and is usually only noticeable at a substantial fraction of the speed of light. Length contraction is only in the direction in which the body is travelling. For standard objects, this effect is negligible at everyday speeds, and can be ignored for all regular purposes, only becoming significant as the object approaches the speed of light relative to the observer. History Length contraction was postulated by George FitzGerald (1889) and Hendrik Antoon Lorentz (1892) to explain the negative outcome of the Michelson–Morley experiment and to rescue the hypothesis of the stationary aether (Lorentz–FitzGerald contraction hypothesis). Although both FitzGerald and Lorentz alluded to the fact that electrostatic fields in motion were deformed ("Heaviside-Ellipsoid" after Oliver Heaviside, who derived this deformation from electromagnetic theory in 1888), it was considered an ad hoc hypothesis, because at this time there was no sufficient reason to assume that intermolecular forces behave the same way as electromagnetic ones. In 1897 Joseph Larmor developed a model in which all forces are considered to be of electromagnetic origin, and length contraction appeared to be a direct consequence of this model. Yet it was shown by Henri Poincaré (1905) that electromagnetic forces alone cannot explain the electron's stability. So he had to introduce another ad hoc hypothesis: non-electric binding forces (Poincaré stresses) that ensure the electron's stability, give a dynamical explanation for length contraction, and thus hide the motion of the stationary aether. Lorentz believed that length contraction represented a physical contraction of the atoms making up an object. He envisioned no fundamental change in the nature of space and time. Lorentz expected that length contraction would result in compressive strains in an object that should result in measurable effects. Such effects would include optical effects in transparent media, such as optical rotation and induction of double refraction, and the induction of torques on charged condensers moving at an angle with respect to the aether. Lorentz was perplexed by experiments such as the Trouton–Noble experiment and the experiments of Rayleigh and Brace, which failed to validate his theoretical expectations. For mathematical consistency, Lorentz proposed a new time variable, the "local time", called that because it depended on the position of a moving body, following the relation . Lorentz considered local time not to be "real"; rather, it represented an ad hoc change of variable. Impressed by Lorentz's "most ingenious idea", Poincaré saw more in local time than a mere mathematical trick. It represented the actual time that would be shown on a moving observer's clocks. On the other hand, Poincaré did not consider this measured time to be the "true time" that would be exhibited by clocks at rest in the aether. Poincaré made no attempt to redefine the concepts of space and time. To Poincaré, Lorentz transformation described the apparent states of the field for a moving observer. True states remained those defined with respect to the ether. 
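In modern notation, the "local time" relation referred to above is usually written as t′ = t − vx/c², where x denotes the position of the moving body and v the relative velocity of the frames; this is the standard first-order form in which Lorentz's local time is quoted in textbooks.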
Albert Einstein (1905) is credited with removing the ad hoc character from the contraction hypothesis, by deriving this contraction from his postulates instead of experimental data. Hermann Minkowski gave the geometrical interpretation of all relativistic effects by introducing his concept of four-dimensional spacetime. Basis in relativity First it is necessary to carefully consider the methods for measuring the lengths of resting and moving objects. Here, "object" simply means a distance with endpoints that are always mutually at rest, i.e., that are at rest in the same inertial frame of reference. If the relative velocity between an observer (or his measuring instruments) and the observed object is zero, then the proper length of the object can simply be determined by directly superposing a measuring rod. However, if the relative velocity is greater than zero, then one can proceed as follows: The observer installs a row of clocks that either are synchronized a) by exchanging light signals according to the Poincaré–Einstein synchronization, or b) by "slow clock transport", that is, one clock is transported along the row of clocks in the limit of vanishing transport velocity. Now, when the synchronization process is finished, the object is moved along the clock row and every clock stores the exact time when the left or the right end of the object passes by. After that, the observer only has to look at the position of a clock A that stored the time when the left end of the object was passing by, and a clock B at which the right end of the object was passing by at the same time. It's clear that distance AB is equal to length of the moving object. Using this method, the definition of simultaneity is crucial for measuring the length of moving objects. Another method is to use a clock indicating its proper time , which is traveling from one endpoint of the rod to the other in time as measured by clocks in the rod's rest frame. The length of the rod can be computed by multiplying its travel time by its velocity, thus in the rod's rest frame or in the clock's rest frame. In Newtonian mechanics, simultaneity and time duration are absolute and therefore both methods lead to the equality of and . Yet in relativity theory the constancy of light velocity in all inertial frames in connection with relativity of simultaneity and time dilation destroys this equality. In the first method an observer in one frame claims to have measured the object's endpoints simultaneously, but the observers in all other inertial frames will argue that the object's endpoints were not measured simultaneously. In the second method, times and are not equal due to time dilation, resulting in different lengths too. The deviation between the measurements in all inertial frames is given by the formulas for Lorentz transformation and time dilation (see Derivation). It turns out that the proper length remains unchanged and always denotes the greatest length of an object, and the length of the same object measured in another inertial reference frame is shorter than the proper length. 
This contraction only occurs along the line of motion, and can be represented by the relation L = L0/γ(v), where L is the length observed by an observer in motion relative to the object, L0 is the proper length (the length of the object in its rest frame), and γ(v) is the Lorentz factor, defined as γ(v) = 1/√(1 − v²/c²), where v is the relative velocity between the observer and the moving object and c is the speed of light. Replacing the Lorentz factor in the original formula leads to the relation L = L0√(1 − v²/c²). In this equation both L and L0 are measured parallel to the object's line of movement. For the observer in relative movement, the length of the object is measured by subtracting the simultaneously measured distances of both ends of the object. For more general conversions, see the Lorentz transformations. An observer at rest observing an object travelling very close to the speed of light would observe the length of the object in the direction of motion as very near zero. Then, at a speed of (30 million mph, 0.0447c) the contracted length is 99.9% of the length at rest; at a speed of (95 million mph, 0.141c), the length is still 99%. As the magnitude of the velocity approaches the speed of light, the effect becomes prominent. Symmetry The principle of relativity (according to which the laws of nature are invariant across inertial reference frames) requires that length contraction is symmetrical: If a rod is at rest in an inertial frame S, it has its proper length in S and its length is contracted in S'. However, if a rod rests in S', it has its proper length in S' and its length is contracted in S. This can be vividly illustrated using symmetric Minkowski diagrams, because the Lorentz transformation geometrically corresponds to a rotation in four-dimensional spacetime. Magnetic forces Magnetic forces are caused by relativistic contraction when electrons are moving relative to atomic nuclei. The magnetic force on a moving charge next to a current-carrying wire is a result of relativistic motion between electrons and protons. In 1820, André-Marie Ampère showed that parallel wires having currents in the same direction attract one another. In the electrons' frame of reference, the moving wire contracts slightly, causing the protons of the opposite wire to be locally denser. As the electrons in the opposite wire are moving as well, they do not contract (as much). This results in an apparent local imbalance between electrons and protons; the moving electrons in one wire are attracted to the extra protons in the other. The reverse can also be considered. In the static protons' frame of reference, the electrons are moving and contracted, resulting in the same imbalance. The electron drift velocity is relatively very slow, on the order of a meter an hour, but the force between an electron and proton is so enormous that even at this very slow speed the relativistic contraction causes significant effects. This effect also applies to magnetic particles without current, with current being replaced with electron spin. Experimental verifications Any observer co-moving with the observed object cannot measure the object's contraction, because he can judge himself and the object as at rest in the same inertial frame in accordance with the principle of relativity (as it was demonstrated by the Trouton–Rankine experiment). So length contraction cannot be measured in the object's rest frame, but only in a frame in which the observed object is in motion.
In addition, even in such a non-co-moving frame, direct experimental confirmations of length contraction are hard to achieve, because (a) at the current state of technology, objects of considerable extension cannot be accelerated to relativistic speeds, and (b) the only objects traveling with the speed required are atomic particles, whose spatial extensions are too small to allow a direct measurement of contraction. However, there are indirect confirmations of this effect in a non-co-moving frame: It was the negative result of a famous experiment, that required the introduction of length contraction: the Michelson–Morley experiment (and later also the Kennedy–Thorndike experiment). In special relativity its explanation is as follows: In its rest frame the interferometer can be regarded as at rest in accordance with the relativity principle, so the propagation time of light is the same in all directions. Although in a frame in which the interferometer is in motion, the transverse beam must traverse a longer, diagonal path with respect to the non-moving frame thus making its travel time longer, the factor by which the longitudinal beam would be delayed by taking times L/(c−v) and L/(c+v) for the forward and reverse trips respectively is even longer. Therefore, in the longitudinal direction the interferometer is supposed to be contracted, in order to restore the equality of both travel times in accordance with the negative experimental result(s). Thus the two-way speed of light remains constant and the round trip propagation time along perpendicular arms of the interferometer is independent of its motion & orientation. Given the thickness of the atmosphere as measured in Earth's reference frame, muons' extremely short lifespan shouldn't allow them to make the trip to the surface, even at the speed of light, but they do nonetheless. From the Earth reference frame, however, this is made possible only by the muon's time being slowed down by time dilation. However, in the muon's frame, the effect is explained by the atmosphere being contracted, shortening the trip. Heavy ions that are spherical when at rest should assume the form of "pancakes" or flat disks when traveling nearly at the speed of lightand in fact, the results obtained from particle collisions can only be explained when the increased nucleon density due to length contraction is considered. The ionization ability of electrically charged particles with large relative velocities is higher than expected. In pre-relativistic physics the ability should decrease at high velocities, because the time in which ionizing particles in motion can interact with the electrons of other atoms or molecules is diminished; however, in relativity, the higher-than-expected ionization ability can be explained by length contraction of the Coulomb field in frames in which the ionizing particles are moving, which increases their electrical field strength normal to the line of motion. In synchrotrons and free-electron lasers, relativistic electrons were injected into an undulator, so that synchrotron radiation is generated. In the proper frame of the electrons, the undulator is contracted which leads to an increased radiation frequency. Additionally, to find out the frequency as measured in the laboratory frame, one has to apply the relativistic Doppler effect. So, only with the aid of length contraction and the relativistic Doppler effect, the extremely small wavelength of undulator radiation can be explained. 
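As a rough numerical illustration of the contraction factor discussed above, the following minimal Python sketch evaluates L/L0 = √(1 − v²/c²) at a few speeds; the muon speed and atmospheric depth used at the end are assumed, illustrative values rather than figures taken from the article.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def contracted_length(proper_length, v):
    """Length measured in a frame where the object moves at speed v: L = L0*sqrt(1 - v^2/c^2)."""
    beta = v / C
    return proper_length * math.sqrt(1.0 - beta ** 2)

# Contraction stays negligible until v is a sizeable fraction of c.
for beta in (0.0447, 0.141, 0.9, 0.995):
    print(f"v = {beta}c: L/L0 = {contracted_length(1.0, beta * C):.4f}")

# Illustrative muon example (assumed values): an atmospheric depth of ~10 km,
# as judged from a muon moving at ~0.995c, contracts to roughly 1 km.
print(f"{contracted_length(10_000.0, 0.995 * C):.0f} m")
```

The first two values reproduce the 99.9% and 99% figures quoted earlier for speeds of 0.0447c and 0.141c.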
Reality of length contraction In 1911 Vladimir Varićak asserted that one sees the length contraction in an objective way, according to Lorentz, while it is "only an apparent, subjective phenomenon, caused by the manner of our clock-regulation and length-measurement", according to Einstein. Einstein published a rebuttal: Einstein also argued in that paper, that length contraction is not simply the product of arbitrary definitions concerning the way clock regulations and length measurements are performed. He presented the following thought experiment: Let A'B' and A"B" be the endpoints of two rods of the same proper length L0, as measured on x' and x" respectively. Let them move in opposite directions along the x* axis, considered at rest, at the same speed with respect to it. Endpoints A'A" then meet at point A*, and B'B" meet at point B*. Einstein pointed out that length A*B* is shorter than A'B' or A"B", which can also be demonstrated by bringing one of the rods to rest with respect to that axis. Paradoxes Due to superficial application of the contraction formula, some paradoxes can occur. Examples are the ladder paradox and Bell's spaceship paradox. However, those paradoxes can be solved by a correct application of the relativity of simultaneity. Another famous paradox is the Ehrenfest paradox, which proves that the concept of rigid bodies is not compatible with relativity, reducing the applicability of Born rigidity, and showing that for a co-rotating observer the geometry is in fact non-Euclidean. Visual effects Length contraction refers to measurements of position made at simultaneous times according to a coordinate system. This could suggest that if one could take a picture of a fast moving object, that the image would show the object contracted in the direction of motion. However, such visual effects are completely different measurements, as such a photograph is taken from a distance, while length contraction can only directly be measured at the exact location of the object's endpoints. It was shown by several authors such as Roger Penrose and James Terrell that moving objects generally do not appear length contracted on a photograph. This result was popularized by Victor Weisskopf in a Physics Today article. For instance, for a small angular diameter, a moving sphere remains circular and is rotated. This kind of visual rotation effect is called Penrose-Terrell rotation. Derivation Length contraction can be derived in several ways: Known moving length In an inertial reference frame S, let and denote the endpoints of an object in motion. In this frame the object's length is measured, according to the above conventions, by determining the simultaneous positions of its endpoints at . Meanwhile, the proper length of this object, as measured in its rest frame S', can be calculated by using the Lorentz transformation. Transforming the time coordinates from S into S' results in different times, but this is not problematic, since the object is at rest in S' where it does not matter when the endpoints are measured. Therefore, the transformation of the spatial coordinates suffices, which gives: Since , and by setting and , the proper length in S' is given by Therefore, the object's length, measured in the frame S, is contracted by a factor : Likewise, according to the principle of relativity, an object that is at rest in S will also be contracted in S'. 
By exchanging the above signs and primes symmetrically, it follows that Thus an object at rest in S, when measured in S', will have the contracted length Known proper length Conversely, if the object rests in S and its proper length is known, the simultaneity of the measurements at the object's endpoints has to be considered in another frame S', as the object constantly changes its position there. Therefore, both spatial and temporal coordinates must be transformed: Computing length interval as well as assuming simultaneous time measurement , and by plugging in proper length , it follows: Equation (2) gives which, when plugged into (1), demonstrates that becomes the contracted length : . Likewise, the same method gives a symmetric result for an object at rest in S': . Using time dilation Length contraction can also be derived from time dilation, according to which the rate of a single "moving" clock (indicating its proper time ) is lower with respect to two synchronized "resting" clocks (indicating ). Time dilation was experimentally confirmed multiple times, and is represented by the relation: Suppose a rod of proper length at rest in and a clock at rest in are moving along each other with speed . Since, according to the principle of relativity, the magnitude of relative velocity is the same in either reference frame, the respective travel times of the clock between the rod's endpoints are given by in and in , thus and . By inserting the time dilation formula, the ratio between those lengths is: . Therefore, the length measured in is given by So since the clock's travel time across the rod is longer in than in (time dilation in ), the rod's length is also longer in than in (length contraction in ). Likewise, if the clock were at rest in and the rod in , the above procedure would give Geometrical considerations Additional geometrical considerations show that length contraction can be regarded as a trigonometric phenomenon, with analogy to parallel slices through a cuboid before and after a rotation in E3 (see left half figure at the right). This is the Euclidean analog of boosting a cuboid in E1,2. In the latter case, however, we can interpret the boosted cuboid as the world slab of a moving plate. Image: Left: a rotated cuboid in three-dimensional euclidean space E3. The cross section is longer in the direction of the rotation than it was before the rotation. Right: the world slab of a moving thin plate in Minkowski spacetime (with one spatial dimension suppressed) E1,2, which is a boosted cuboid. The cross section is thinner in the direction of the boost than it was before the boost. In both cases, the transverse directions are unaffected and the three planes meeting at each corner of the cuboids are mutually orthogonal (in the sense of E1,2 at right, and in the sense of E3 at left). In special relativity, Poincaré transformations are a class of affine transformations which can be characterized as the transformations between alternative Cartesian coordinate charts on Minkowski spacetime corresponding to alternative states of inertial motion (and different choices of an origin). Lorentz transformations are Poincaré transformations which are linear transformations (preserve the origin). Lorentz transformations play the same role in Minkowski geometry (the Lorentz group forms the isotropy group of the self-isometries of the spacetime) which are played by rotations in euclidean geometry. 
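To spell out the analogy between boosts and rotations mentioned above, a Lorentz boost along the x axis can be written in terms of the rapidity φ, defined by tanh φ = v/c, as ct′ = ct cosh φ − x sinh φ and x′ = x cosh φ − ct sinh φ, with the transverse coordinates unchanged; this is the standard textbook form, not an equation taken from this article, and it mirrors a Euclidean rotation with hyperbolic functions in place of sine and cosine.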
Indeed, special relativity largely comes down to studying a kind of noneuclidean trigonometry in Minkowski spacetime. References External links Physics FAQ: Can You See the Lorentz–Fitzgerald Contraction? Or: Penrose-Terrell Rotation; The Barn and the Pole Special relativity Length Hendrik Lorentz
Length contraction
[ "Physics", "Mathematics" ]
4,197
[ "Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Special relativity", "Length", "Theory of relativity", "Wikipedia categories named after physical quantities" ]
339,110
https://en.wikipedia.org/wiki/Bulat%20steel
Bulat is a type of steel alloy known in Russia from medieval times; it was regularly mentioned in Russian legends as the material of choice for cold steel. The name is a Russian transliteration of the Persian word , meaning steel. This type of steel was used by the armies of nomadic peoples. Bulat steel was the main type of steel used for swords in the armies of Genghis Khan. Bulat steel is generally agreed to be a Russian name for wootz steel, the production method of which has been lost for centuries, and the bulat steel used today makes use of a more recently developed technique. History The secret of bulat manufacturing had been lost by the beginning of the 19th century. It is known that the process involved dipping the finished weapon into a vat containing a special liquid of which spiny restharrow extract was a part (the plant's name in Russian, , reflects its historical role), then holding the sword aloft while galloping on a horse, allowing it to dry and harden against the wind. Pavel Anosov eventually managed to duplicate the qualities of that metal in 1838, when he completed ten years of study into the nature of Damascus steel swords. Anosov had entered the Saint Petersburg Mine Cadet School in 1810, where a Damascus steel sword was stored in a display case. He became enchanted with the sword, and was filled with stories of them slashing through their European counterparts. In November 1817, he was sent to the factories of Zlatoust mining region in the southern Urals, where he was soon promoted to the inspector of the "weapon decoration department". Here he again came into contact with Damascus steel of European origin (which was in fact pattern welded steel, and not at all similar), but quickly found that this steel was quite inferior to the original forged in the Middle East from wootz steel from India. Anosov had been working with various quenching techniques, and decided to attempt to duplicate Damascus steel with quenching. He eventually developed a methodology that greatly increased the hardness of his steels. Bulat became popular in cannon manufacturing, until the Bessemer process was able to make the same quality steels for far less money. Structure Carbon steel consists of two components: pure iron, in the form of ferrite, and cementite or iron carbide, a compound of iron and carbon. Cementite is very hard and brittle; its hardness is about 640 by the Brinell hardness test, whereas ferrite is only 200. The amount of the carbon and the cooling regimen determine the crystalline and chemical composition of the final steel. In bulat, the slow cooling process allowed the cementite to precipitate as micro particles in between ferrite crystals and arrange in random patterns. The color of the carbide is dark while steel is grey. This mixture is what leads to the famous patterning of Damascus steel. Cementite is essentially a ceramic, which accounts for the sharpness of Damascus (and bulat) steel. Cementite is unstable and breaks down between 600 and 1100 °C into ferrite and carbon, so working the hot metal must be done very carefully. See also Toledo steel Damascus steel Wootz steel Noric steel Tamahagane steel References Bibliography Steels Steel industry of Russia History of metallurgy
Bulat steel
[ "Chemistry", "Materials_science" ]
686
[ "Steels", "Metallurgy", "History of metallurgy", "Alloys" ]
339,125
https://en.wikipedia.org/wiki/Crucible%20steel
Crucible steel is steel made by melting pig iron, cast iron, iron, and sometimes steel, often along with sand, glass, ashes, and other fluxes, in a crucible. Crucible steel was first developed in the middle of the 1st millennium BCE in Southern India and Sri Lanka using the wootz process. In ancient times, it was not possible to produce very high temperatures with charcoal or coal fires, which were required to melt iron or steel. However, pig iron, having a higher carbon content and thus a lower melting point, could be melted, and by soaking wrought iron or steel in the liquid pig-iron for a long time, the carbon content of the pig iron could be reduced as it slowly diffused into the iron, turning both into steel. Crucible steel of this type was produced in South and Central Asia during the medieval era. This generally produced a very hard steel, but also a composite steel that was inhomogeneous, consisting of a very high-carbon steel (formerly the pig-iron) and a lower-carbon steel (formerly the wrought iron). This often resulted in an intricate pattern when the steel was forged, filed or polished, with possibly the most well-known examples coming from the wootz steel used in Damascus swords. The steel was often much higher in carbon content (typically ranging in the area of 1.5 to 2.0%) and in quality (lacking impurities) in comparison with other methods of steel production of the time because of the use of fluxes. The steel was usually worked very little and at relatively low temperatures to avoid any decarburization, hot short crumbling, or excess diffusion of carbon. With a carbon content close to that of cast iron, it usually required no heat treatment after shaping other than air cooling to achieve the correct hardness, relying on composition alone. The higher-carbon steel provided a very hard edge, but the lower-carbon steel helped to increase the toughness, helping to decrease the chance of chipping, cracking, or breaking. In Europe, crucible steel was developed by Benjamin Huntsman in England in the 18th century. Huntsman used coke rather than coal or charcoal, achieving temperatures high enough to melt steel and dissolve iron. Huntsman's process differed from some of the wootz processes in that it used a longer time to melt the steel and to cool it down and thus allowed more time for the diffusion of carbon. Huntsman's process used iron and steel as raw materials, in the form of blister steel, rather than direct conversion from cast iron as in puddling or the later Bessemer process. The ability to fully melt the steel removed any inhomogeneities in the steel, allowing the carbon to dissolve evenly into the liquid steel and negating the prior need for extensive blacksmithing in an attempt to achieve the same result. Similarly, it allowed steel to be cast by pouring into molds. The use of fluxes allowed nearly complete extraction of impurities from the liquid, which could then simply float to the top for removal. This produced the first steel of modern quality, providing a means of efficiently changing excess wrought iron into useful steel. Huntsman's process greatly increased the European output of quality steel suitable for use in items like knives, tools, and machinery, helping to pave the way for the Industrial Revolution. Methods of crucible steel production Iron alloys are most broadly divided by their carbon content: cast iron has 2–4% carbon impurities; wrought iron oxidizes away most of its carbon, to less than 0.1%. 
The much more valuable steel has a delicately intermediate carbon fraction, and its material properties range according to the carbon percentage: high carbon steel is stronger but more brittle than low carbon steel. Crucible steel sequesters the raw input materials from the heat source, allowing precise control of carburization (raising) or decarburization (lowering carbon content). Fluxes, such as limestone, could be added to the crucible to remove or promote sulfur, silicon, and other impurities, further altering its material qualities. Various methods were used to produce crucible steel. According to Islamic texts such as al-Tarsusi and Abu Rayhan Biruni, three methods are described for indirect production of steel. The medieval Islamic historian Abu Rayhan Biruni (c. 973–1050) provides the earliest reference of the production of Damascus steel. The first, and the most common, traditional method is solid state carburization of wrought iron. This is a diffusion process in which wrought iron is packed in crucibles or a hearth with charcoal, then heated to promote diffusion of carbon into the iron to produce steel. Carburization is the basis for the wootz process of steel. The second method is the decarburization of cast iron by removing carbon from the cast iron. The third method uses wrought iron and cast iron. In this process, wrought iron and cast iron may be heated together in a crucible to produce steel by fusion. In regard to this method Abu Rayhan Biruni states: "this was the method used in Hearth". It is proposed that the Indian method refers to Wootz carburization method; i.e., the Mysore or Tamil processes. Variations of co-fusion process have been found primarily in Persia and Central Asia but have also been found in Hyderabad, India called Deccani or Hyderabad process. For the carbon, a variety of organic materials are specified by the contemporary Islamic authorities, including pomegranate rinds, acorns, fruit skins like orange peel, leaves as well as the white of egg and shells. Slivers of wood are mentioned in some of the Indian sources, but significantly none of the sources mention charcoal. Early history Crucible steel is generally attributed to production centres in India and Sri Lanka where it was produced using the so-called "wootz" process, and it is assumed that its appearance in other locations was due to long-distance trade. Only recently it has become apparent that places in Central Asia like Merv in Turkmenistan and Akhsiket in Uzbekistan were important centres of production of crucible steel. The Central Asian finds are all from excavations and date from the 8th to 12th centuries CE, while the Indian/Sri Lankan material is as early as 300 BCE. India's iron ore had trace vanadium and other alloying elements leading to increased hardenability in Indian crucible steel which was famous throughout the middle east for its ability to retain an edge. While crucible steel is more attributed to the Middle East in early times, pattern welded swords, incorporating high-carbon, and likely crucible steel, have been discovered in Europe, from the 3rd century CE, particularly in Scandinavia. Swords bearing the brand name Ulfberht, and dating to a 200-year period from the 9th century to the early 11th century, are prime examples of the technique. It is speculated by many that the process of making these blades originated in the Middle East and subsequently had been traded during the Volga Trade Route days. 
In the first centuries of the Islamic period, some scientific studies on swords and steel appeared. The best known of these are by Jabir ibn Hayyan (8th century), al-Kindi (9th century), Al-Biruni (early 11th century), al-Tarsusi (late 12th century), and Fakhr-i-Mudabbir (13th century). Any of these contains far more information about Indian and damascene steels than appears in the entire surviving literature of classical Greece and Rome. South India and Sri Lanka There are many ethnographic accounts of Indian crucible steel production; however, scientific investigations of the remains of crucible steel production have only been published for four regions: three in India and one in Sri Lanka. Indian/Sri Lankan crucible steel is commonly referred to as wootz, which is generally agreed to be an English corruption of the word ukko (in the Canarese language) or hookoo (in the Telugu language). European accounts from the 17th century onwards have referred to the repute and manufacture of "wootz", a traditional crucible steel made specially in parts of southern India in the former provinces of Golconda, Mysore and Salem. As yet the scale of excavations and surface surveys is too limited to link the literary accounts to archaeometallurgical evidence. The proven sites of crucible steel production in south India, e.g. at Konasamudram and Gatihosahalli, date from at least the late medieval period, 16th century. One of the earliest known potential sites, showing promising preliminary evidence that may be linked to ferrous crucible processes, is Kodumanal, near Coimbatore in Tamil Nadu. The site is dated between the third century BCE and the third century CE. By the seventeenth century the main centre of crucible steel production seems to have been in Hyderabad. The process was apparently quite different from that recorded elsewhere. Wootz from Hyderabad, or the Deccani process for making watered blades, involved a co-fusion of two different kinds of iron: one was low in carbon and the other was a high-carbon steel or cast iron. Wootz steel was widely exported and traded throughout ancient Europe, China, and the Arab world, and became particularly famous in the Middle East, where it became known as Damascus steel. Recent archaeological investigations have suggested that Sri Lanka also supported innovative technologies for iron and steel production in antiquity. The Sri Lankan system of crucible steel making was partially independent of the various Indian and Middle Eastern systems. Their method was similar to the carburization of wrought iron. The earliest confirmed crucible steel site is located in the Knuckles range in the northern area of the Central Highlands of Sri Lanka, dated to the 6th–10th centuries CE. In the twelfth century the land of Serendib (Sri Lanka) seems to have been the main supplier of crucible steel, but over the centuries production slipped back, and by the nineteenth century just a small industry survived in the Balangoda district of the central southern highlands. A series of excavations at Samanalawewa indicated the unexpected and previously unknown technology of west-facing smelting sites, which represent a different type of steel production. These furnaces were used for direct smelting to steel. They are named "west facing" because they were located on the western sides of hilltops to use the prevailing wind in the smelting process. Sri Lankan furnace steels were known and traded between the 9th and 11th centuries and earlier, but apparently not later.
These sites were dated to the 7th–11th centuries. The coincidence of this dating with the 9th century Islamic reference to Sarandib is of great importance. The crucible process existed in India at the same time that the west-facing technology was operating in Sri Lanka. Excavations in 2018 of the Yodhawewa site (near Mannar) uncovered the lower half of a spherical-bottomed furnace and crucible fragments used to make crucible steel in Sri Lanka during the 7th-8th centuries AD. The crucible fragments uncovered at the site were similar to the elongated tube-shaped crucibles of Samanalawewa. Central Asia Central Asia has a rich history of crucible steel production, beginning during the late 1st millennium CE. From the sites in modern Uzbekistan and Merv in Turkmenistan, there is good archaeological evidence for the large scale production of crucible steel. They all belong in broad terms to the same early medieval period between the late 8th or early 9th and the late 12th century CE, contemporary with the early crusades. The two most prominent crucible steel sites in eastern Uzbekistan associated with the Ferghana Process are Akhsiket and Pap in the Ferghana Valley, whose position within the Great Silk Road has been historically and archaeologically proved. The material evidence consists of a large number of archaeological finds relating to steel making from the 9th–12th centuries CE in the form of hundreds of thousands of fragments of crucibles, often with massive slag cakes. Archaeological work at Akhsiket has identified that the crucible steel process there was one of carburization of iron metal. This process appears to be typical of and restricted to the Ferghana Valley in eastern Uzbekistan, and it is therefore called the Ferghana Process. This process lasted in that region for roughly four centuries. Evidence of the production of crucible steel has been found in Merv, Turkmenistan, a major city on the 'Silk Road'. The Islamic scholar al-Kindi (801–866 CE) mentions that during the ninth century CE the region of Khorasan, the area to which the cities Nishapur, Merv, Herat and Balkh belong, was a steel manufacturing centre. Evidence from a metallurgical workshop at Merv, dated to the ninth to early tenth century CE, provides an illustration of the co-fusion method of steel production in crucibles, about 1000 years earlier than the distinctly different wootz process. The crucible steel process at Merv might be seen as technologically related to what Bronson (1986, 43) calls the Hyderabad process, a variation of the wootz process, after the location of the process documented by Voysey in the 1820s. China The production of crucible steel in China began around the first century BC, or possibly earlier. The Chinese developed a method of producing pig iron around 1200 BC, which they used to make cast iron. By the first century BC, they had developed puddling to produce mild steel and a process of rapidly decarburizing molten cast iron to make wrought iron by stirring it atop beds of saltpeter (called the Heaton process; it was independently discovered by John Heaton in the 1860s). Around this time, the Chinese began producing crucible steel to convert excess quantities of cast iron and wrought iron into steel suitable for swords and weapons.
In 1064, Shen Kuo, in his book Dream Pool Essays, gave the earliest written description of the patterns in the steel, the methods of sword production, and some of the reasoning behind it: Ancient people use chi kang (combined steel) for the edge, and jou thieh (soft iron) for the back, otherwise it would often break. Too strong a weapon will cut and destroy its own edge; that is why it is advisable to use nothing but combined steel. As for the yu-chhang (fish intestines) effect, it is what is now called the 'snake-coiling' steel sword, or alternatively, the 'pine tree design'. If you cook a fish fully and remove its bones, the shape of its guts will be seen to be like the lines on a 'snake-coiling sword'. Modern history Early modern accounts The first European references to crucible steel seem to be no earlier than the Post Medieval period. European experiments with "Damascus" steels go back to at least the sixteenth century, but it was not until the 1790s that laboratory researchers began to work with steels that were specifically known to be Indian/wootz. At this time, Europeans knew of India's ability to make crucible steel from reports brought back by travellers who had observed the process at several places in southern India. From the mid-17th century onwards, European travellers to the Indian subcontinent wrote numerous vivid eyewitness accounts of the production of steel there. These include accounts by Jean-Baptiste Tavernier in 1679, Francis Buchanan in 1807, and H.W. Voysey in 1832. The 18th, 19th and early 20th centuries saw a heady period of European interest in trying to understand the nature and properties of wootz steel. Indian wootz engaged the attention of some of the best-known scientists. One was Michael Faraday, who was fascinated by wootz steel. It was probably the investigations of George Pearson, reported at the Royal Society in 1795, which had the most far-reaching impact in terms of kindling interest in wootz amongst European scientists. He was the first of these scientists to publish his results and, incidentally, the first to use the word "wootz" in print. Another investigator, David Mushet, was able to infer that wootz was made by fusion; he patented his process in 1800 and reported his findings in 1805. As it happens, however, the first successful European process had been developed by Benjamin Huntsman some 50 years previously in the 1740s. History of production in England Benjamin Huntsman was a clockmaker in search of a better steel for clock springs. In Handsworth near Sheffield, he began producing steel in 1740 after years of experimenting in secret. Huntsman's system used a coke-fired furnace capable of reaching 1,600 °C, into which up to twelve clay crucibles, each capable of holding about 15 kg of iron, were placed. When the crucibles or "pots" were white-hot, they were charged with lumps of blister steel, an alloy of iron and carbon produced by the cementation process, and a flux to help remove impurities. The pots were removed after about 3 hours in the furnace, the impurities in the form of slag were skimmed off, and the molten steel was poured into moulds to end up as cast ingots. Complete melting of the steel produced a highly uniform crystal structure upon cooling, which gave the metal increased tensile strength and hardness in comparison with other steels being made at the time. Before the introduction of Huntsman's technique, Sheffield produced about 200 tonnes of steel per year from Swedish wrought iron (see Oregrounds iron).
The introduction of Huntsman's technique changed this radically: one hundred years later the amount had risen to over 80,000 tonnes per year, or almost half of Europe's total production. Sheffield developed from a small township into one of Europe's leading industrial cities. The steel was produced in specialised workshops called 'crucible furnaces', which consisted of a workshop at ground level and a subterranean cellar. The furnace buildings varied in size and architectural style, growing in size towards the latter part of the 19th century as technological developments enabled multiple pots to be "fired" at once, using gas as a heating fuel. Each workshop had a series of standard features, such as rows of melting holes, teaming pits, roof vents, rows of shelving for the crucible pots and annealing furnaces to prepare each pot before firing. Ancillary rooms for weighing each charge and for the manufacture of the clay crucibles were either attached to the workshop, or located within the cellar complex. The steel, originally intended for making clock springs, was later used in other applications such as scissors, axes and swords. Sheffield's Abbeydale Industrial Hamlet operates a scythe-making works for the public, which dates from Huntsman's times and is powered by a water wheel, using crucible steel made at the site. Material properties Prior to Huntsman, the most common method of producing steel was the manufacture of shear steel. In this method, blister steel produced by cementation was used, which consisted of a core of wrought iron surrounded by a shell of very high-carbon steel, typically ranging from 1.5 to 2.0% carbon. To help homogenize the steel, it was pounded into flat plates, which were stacked and forge welded together. This produced steel with alternating layers of steel and iron. The resulting billet could then be hammered flat and cut into plates, which were stacked and welded again, thinning and compounding the layers, and evening out the carbon more as it slowly diffused out of the high-carbon steel into the lower-carbon iron. However, the more the steel was heated and worked, the more it tended to decarburize, and this outward diffusion occurs much faster than the inward diffusion between layers. Thus, further attempts to homogenize the steel resulted in a carbon content too low for use in items like springs, cutlery, swords, or tools. Therefore, steel intended for use in such items, especially tools, was still being made primarily by the slow and arduous bloomery process, in very small amounts and at high cost; this steel, albeit better, had to be manually separated from the wrought iron and was still impossible to fully homogenize in the solid state. Huntsman's process was the first to produce a fully homogeneous steel. Unlike previous methods of steel production, the Huntsman process fully melted the steel, allowing the full diffusion of carbon throughout the liquid. With the use of fluxes it also allowed the removal of most impurities, producing the first steel of modern quality. Due to carbon's high melting point (nearly triple that of steel) and its tendency to oxidize (burn) at high temperatures, it cannot usually be added directly to molten steel. However, by adding wrought iron or pig iron and allowing it to dissolve into the liquid, the carbon content could be carefully regulated (in a way similar to Asian crucible steels but without the stark inhomogeneities indicative of those steels).
Another benefit was that it allowed other elements to be alloyed with the steel. Huntsman was one of the first to begin experimenting with the addition of alloying agents like manganese to help remove impurities such as oxygen from the steel. His process was later used by many others, such as Robert Hadfield and Robert Forester Mushet, to produce the first alloy steels like mangalloy, high-speed steel, and stainless steel. Due to variations in the carbon content of the blister steel, the carbon steel produced could vary in carbon content between crucibles by as much as 0.18%, but on average it was a eutectoid steel containing ~0.79% carbon. Due to the quality and high hardenability of the steel, it was quickly adopted for the manufacture of tool steel, machine tools, cutlery, and many other items. Because no oxygen was blown through the steel, it exceeded Bessemer steel in both quality and hardenability, so Huntsman's process was used for manufacturing tool steel until better methods, utilizing an electric arc, were developed in the early 20th century. 19th and 20th century production In another method, developed in the United States in the 1880s, iron and carbon were melted together directly to produce crucible steel. Throughout the 19th century and into the 1920s a large amount of crucible steel was directed into the production of cutting tools, where it was called tool steel. The crucible process continued to be used for specialty steels, but is today obsolete. Similar quality steels are now made with an electric arc furnace. Some uses of tool steel were displaced, first by high-speed steel and later by materials such as tungsten carbide. Crucible steel elsewhere Another form of crucible steel was developed in 1837 by the Russian engineer Pavel Anosov. His technique relied less on the heating and cooling, and more on the quenching process of rapidly cooling the molten steel when the right crystal structure had formed within. He called his steel bulat; its secret died with him. In the United States crucible steel was pioneered by William Metcalf. See also Damascus steel Noric steel Pattern welding Notes References Bronson, B., 1986. "The Making and Selling of Wootz, a Crucible Steel of India". Archeomaterials 1.1, 13–51. Craddock, P.T., 1995. Early Metal Mining and Production. Cambridge: Edinburgh University Press. Craddock, P.T., 2003. "Cast Iron, Fined Iron, Crucible Steel: Liquid Iron in the Ancient World". In: P.T., Craddock, and J., Lang. (eds) Mining and Metal Production through the ages. London: The British Museum Press, 231–257. Feuerbach, A.M., 2002. "Crucible Steel in Central Asia: Production, Use, and Origins": a dissertation presented to the University of London. Feuerbach, A., Griffiths, D. R. and Merkel, J.F., 1997. "Production of crucible steel by co-fusion: Archaeometallurgical evidence from the ninth to early tenth century at the site of Merv, Turkmenistan". In: J.R., Druzik, J.F., Merkel, J., Stewart and P.B., Vandiver (eds) Materials issues in art and archaeology V: symposium held 3–5 December 1996, Boston, Massachusetts; Pittsburgh, Pa: Materials Research Society, 105–109. Feuerbach, A., Griffiths, D., and Merkel, J.F., 1995. Analytical Investigation of Crucible Steel Production at Merv, Turkmenistan. IAMS 19, 12–14. Feuerbach, A.M., Griffiths, D.R. and Merkel, J.F., 1998. "An examination of crucible steel in the manufacture of Damascus steel, including evidence from Merv, Turkmenistan". Metallurgica Antiqua 8, 37–44.
Feuerbach, A.M., Griffiths, D.R., and Merkel, J.F., 2003. "Early Islamic Crucible Steel Production at Merv, Turkmenistan". In: P.T., Craddock, J., Lang (eds). Mining and Metal Production through the ages. London: The British Museum Press, 258–266. Freestone, I.C. and Tite, M. S. (eds) 1986. "Refractories in the Ancient and Preindustrial World". In: W.D., Kingery (ed.) and E., Lense (associate editor) High technology ceramics: past, present, and future; the nature of innovation and change in ceramic technology. Westerville, OH: American Ceramic Society, 35–63. Juleff, G., 1998. Early Iron and Steel in Sri Lanka: a study of the Samanalawewa area. Mainz am Rhein: von Zabern. Moshtagh Khorasani, M., 2006. Arms and Armor from Iran, the Bronze Age to the End of the Qajar Period. Tübingen: Legat. Needham, J. 1958. "The development of iron and steel technology in China": second biennial Dickinson Memorial Lecture to the Newcomen Society, 1900–1995. Newcomen Society. Papakhristu, O.A., and Rehren, Th., 2002. "Techniques and Technology of Ceramic Vessel Manufacture Crucibles for Wootz Smelting in Central Asia". In: V., Kilikoglou, A., Hein, and Y., Maniatis (eds) Modern Trends in Scientific Studies on Ancient Ceramics, papers presented at the 5th European Meeting on Ancient Ceramics, Athens 1999. Oxford: Archaeopress, 69–74. Ranganathan, S. and Srinivasan, Sh., 2004. India's Legendary Wootz Steel: an advanced material of the ancient world. Bangalore: National Institute of Advanced Studies: Indian Institute of Science. Rehren, Th. and Papachristou, O., 2003. "Similar like White and Black: a Comparison of Steel-making Crucibles from Central Asia and the Indian subcontinent". In: Th., Stöllner et al. (eds) Man and mining: Mensch und Bergbau: studies in honour of Gerd Weisgerber on occasion of his 65th birthday. Bochum: Deutsches Bergbau-Museum, 393–404. Rehren, Th. and Papakhristu, O. 2000. "Cutting Edge Technology – the Ferghana Process of medieval crucible steel smelting". Metalla 7.2, 55–69. Srinivasan, Sh., 1994. "Wootz crucible steel: a newly discovered production site in South India". Institute of Archaeology, University College London, 5, 49–61. Srinivasan, Sh., and Griffiths, D., 1997. "Crucible Steel in South India – Preliminary Investigations on Crucibles from some newly identified sites". In: J.R., Druzik, J.F., Merkel, J., Stewart and P.B., Vandiver (eds) Materials issues in art and archaeology V: symposium held 3–5 December 1996, Boston, Massachusetts; Pittsburgh, Pa: Materials Research Society, 111–125. Srinivasan, S. and Griffiths, D. "South Indian wootz: evidence for high-carbon steel from crucibles from a newly identified site and preliminary comparisons with related finds". Material Issues in Art and Archaeology V, Materials Research Society Symposium Proceedings Series Vol. 462. Srinivasan, S. & Ranganathan, S. Wootz Steel: An Advanced Material of the Ancient World. Bangalore: Indian Institute of Science. Wayman, Michael L., 2000. The Ferrous Metallurgy of Early Clocks and Watches. The British Museum. External links Merv, Turkmenistan CFD in the 1st Millennium AD Wootz Steel: An advanced material of the ancient world Making Steel by Hand: A 1949 British Pathe newsreel showing the production of crucible steel in Sheffield Metalworking History Detailed from 9000 BC Steels Steelmaking Indian inventions Firing techniques
Crucible steel
[ "Chemistry" ]
6,058
[ "Steels", "Metallurgical processes", "Steelmaking", "Alloys" ]
339,130
https://en.wikipedia.org/wiki/Wootz%20steel
Wootz steel is a crucible steel characterized by a pattern of bands and high carbon content. These bands are formed by sheets of microscopic carbides within a tempered martensite or pearlite matrix in higher-carbon steel, or by ferrite and pearlite banding in lower-carbon steels. It was a pioneering steel alloy developed in southern India in the mid-1st millennium BC and exported globally. History Wootz steel originated in the mid-1st millennium BC in India; it was made in Golconda in Telangana, in Karnataka, and in Sri Lanka. The steel was exported as cakes of steely iron that came to be known as "wootz". The method was to heat black magnetite ore in the presence of carbon in a sealed clay crucible inside a charcoal furnace to completely remove slag. An alternative was to smelt the ore first to give wrought iron, then heat and hammer it to remove slag. The carbon source was bamboo and leaves from plants such as Avārai. Locals in Sri Lanka adopted the production methods of creating wootz steel from the Cheras by the 5th century BC. In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds. Production sites from antiquity have emerged in places such as Anuradhapura, Tissamaharama and Samanalawewa, as well as imported artifacts of ancient iron and steel from Kodumanal. Recent archaeological excavations (2018) of the Yodhawewa site (in Mannar District) discovered the lower half of a spherical furnace, crucible fragments, and lid fragments related to crucible steel production through the carburization process. The south-east of Sri Lanka has yielded some of the island's oldest iron and steel artifacts and production processes, dating from the classical period. Trade between India and Sri Lanka through the Arabian Sea introduced wootz steel to Arabia. The term muhannad مهند or hendeyy هندي in pre-Islamic and early Islamic Arabic refers to sword blades made from Indian steel, which were highly prized, and are attested in Arabic poetry. Further trade spread the technology to the city of Damascus, where an industry developed for making weapons of this steel. This led to the development of Damascus steel. The 12th century Arab traveler Edrisi mentioned the "Hinduwani" or Indian steel as the best in the world. Arab accounts also point to the fame of 'Teling' steel, which can be taken to refer to the region of Telangana. The Golconda region of Telangana was clearly the nodal center for the export of wootz steel to West Asia. Another sign of its reputation is seen in a Persian phrase, to give an "Indian answer", meaning "a cut with an Indian sword". Wootz steel was widely exported and traded throughout ancient Europe and the Arab world, and became particularly famous in the Middle East. Development of modern metallurgy From the 17th century onwards, several European travelers observed steel manufacturing in South India, at Mysore, Malabar and Golconda. The word "wootz" appears to have originated as a mistranscription of Sanskrit terms; the Sanskrit root word for the alloy is utsa. Another theory says that the word is a variation of uchcha or ucha ("superior"). According to one theory, the word ukku is based on the meaning "melt, dissolve". Other Dravidian languages have similar-sounding words for steel: ukku in Kannada and Telugu, and urukku in Malayalam.
When Benjamin Heyne inspected the Indian steel in the Ceded Districts and other Kannada-speaking areas, he was informed that the steel was ucha kabbina ("superior iron"), also known as ukku tundu in Mysore. Legends of wootz steel and Damascus swords aroused the curiosity of the European scientific community from the 17th to the 19th century. The use of high-carbon alloys was little known in Europe previously, and thus the research into wootz steel played an important role in the development of modern English, French and Russian metallurgy. In 1790, samples of wootz steel were received by Sir Joseph Banks, president of the British Royal Society, sent by Helenus Scott. These samples were subjected to scientific examination and analysis by several experts. Specimens of daggers and other weapons were sent by the Rajas of India to the Great Exhibition in London in 1851 and to the 1862 International Exhibition. Though the arms of the swords were beautifully decorated and jeweled, they were most highly prized for the quality of their steel. The swords of the Sikhs were said to withstand bending and crumpling, and yet be fine and sharp. Characteristics Wootz is characterized by a pattern caused by bands of clustered particles made by melting of low levels of carbide-forming elements. Wootz contains greater carbonaceous matter than common qualities of cast steel. The distinct patterns of wootz steel that can be made through forging are wave, ladder, and rose patterns with finely spaced bands. However, with hammering, dyeing, and etching, further customized patterns were made. The presence of cementite nanowires and carbon nanotubes in the microstructure of wootz steel has been identified by Peter Paufler of TU Dresden. There is a possibility of an abundance of ultrahard metallic carbides in the steel matrix precipitating out in bands. Wootz swords were renowned for their sharpness and toughness. Composition T. H. Henry analyzed and recorded the composition of wootz steel samples provided by the Royal School of Mines, recording: combined carbon 1.34%, uncombined carbon 0.31%, sulfur 0.17%, silicon 0.04%, and arsenic 0.03%. Wootz steel was analyzed by Michael Faraday and recorded to contain 0.01–0.07% aluminium. Faraday and Stodart hypothesized that aluminium was needed in the steel and was important in forming the excellent properties of wootz steel. However, T. H. Henry deduced that the presence of aluminium in the wootz used in these studies was due to slag, occurring as silicates. Percy later reiterated that the quality of wootz steel does not depend on the presence of aluminium. Reproduction research Wootz steel has been reproduced and studied in depth by the Royal School of Mines. Dr. Pearson was the first to chemically examine wootz in 1795, and he published his findings in the Philosophical Transactions of the Royal Society. Russian metallurgist Pavel Petrovich Anosov (see Bulat steel) was almost able to reproduce ancient wootz steel with nearly all of its properties, and the steel he created was very similar to traditional wootz. He documented four different methods of producing wootz steel that exhibited traditional patterns. He died before he could fully document and publish his research. Oleg Sherby, Jeff Wadsworth, and Lawrence Livermore National Laboratory have all done research attempting to create steels with characteristics similar to wootz, but without success.
J. D. Verhoeven and Alfred Pendray reconstructed methods of production, proved the role of ore impurities in pattern creation, and reproduced wootz steel with patterns microscopically and visually identical to one of the ancient blade patterns. Reibold et al.'s analyses reported the presence of carbon nanotubes enclosing nanowires of cementite, with trace elements/impurities such as vanadium, molybdenum and chromium contributing to their creation over cycles of heating, cooling and forging. This resulted in a hard, high-carbon steel that remained malleable. There are smiths who are now consistently producing wootz steel blades visually identical to the old patterns. Steel manufactured in Kutch (in present-day India) particularly enjoyed a widespread reputation, similar to that of steel manufactured at Glasgow and Sheffield. Wootz was made over nearly a 2,000-year period (the oldest sword samples date to around 200 CE), and the methods of production of ingots, the ingredients, and the methods of forging varied from one area to the next. Some wootz blades displayed a pattern, while some did not. Heat treating was quite different from forging, and many different patterns were created by the various smiths, whose work spanned from China to Scandinavia. With fellow experts, the Georgian-Dutch master armourer Gocha Laghidze developed a new method to reintroduce 'Georgian Damascus steel'. In 2010, he and his colleagues gave a masterclass on this at the Royal Academy of Fine Arts in Antwerp. See also Toledo steel Damascus steel Noric steel Bulat steel Tamahagane steel Ferrous metallurgy Iron pillar of Delhi Pattern welding References Further reading urukku - from the Tamil Lexicon, University of Madras External links Wootz Militaria Steels Indian inventions History of metallurgy Economic history of Tamil Nadu Chera dynasty Tamilakam
Wootz steel
[ "Chemistry", "Materials_science" ]
1,861
[ "Steels", "Metallurgy", "History of metallurgy", "Alloys" ]
339,174
https://en.wikipedia.org/wiki/Exponential%20family
In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, including enabling the user to calculate expectations and covariances by differentiation, based on some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family", as is the older term Koopman–Darmois family. Sometimes loosely referred to as "the" exponential family, this class of distributions is distinct because its members all possess a variety of desirable properties, most importantly the existence of a sufficient statistic. The concept of exponential families is credited to E. J. G. Pitman, G. Darmois, and B. O. Koopman in 1935–1936. Exponential families of distributions provide a general framework for selecting a possible alternative parameterisation of a parametric family of distributions, in terms of natural parameters, and for defining useful sample statistics, called the natural sufficient statistics of the family. Nomenclature difficulty The terms "distribution" and "family" are often used loosely: Specifically, an exponential family is a set of distributions, where the specific distribution varies with the parameter; however, a parametric family of distributions is often referred to as "a distribution" (like "the normal distribution", meaning "the family of normal distributions"), and the set of all exponential families is sometimes loosely referred to as "the" exponential family. Definition Most of the commonly used distributions form an exponential family or subset of an exponential family, listed in the subsection below. The subsections following it are a sequence of increasingly more general mathematical definitions of an exponential family. A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family of discrete or continuous probability distributions. Examples of exponential family distributions Exponential families include many of the most common distributions. Among many others, exponential families include the following: normal exponential gamma chi-squared beta Dirichlet Bernoulli categorical Poisson Wishart inverse Wishart geometric A number of common distributions are exponential families, but only when certain parameters are fixed and known. For example: binomial (with fixed number of trials) multinomial (with fixed number of trials) negative binomial (with fixed number of failures) Note that in each case, the parameters which must be fixed are those that set a limit on the range of values that can possibly be observed. Examples of common distributions that are not exponential families are Student's t, most mixture distributions, and even the family of uniform distributions when the bounds are not fixed. See the section below on examples for more discussion. Scalar parameter The value of is called the parameter of the family. A single-parameter exponential family is a set of probability distributions whose probability density function (or probability mass function, for the case of a discrete distribution) can be expressed in the form where and are known functions. The function must be non-negative.
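For concreteness, the single-parameter form can be sketched in conventional textbook notation; the symbols h, T, η and A below are the customary names and are an assumption of this sketch rather than notation fixed by the text above.

% Standard single-parameter exponential family density, in conventional notation:
f_X(x \mid \theta) = h(x)\, \exp\!\bigl[\, \eta(\theta)\, T(x) - A(\theta) \,\bigr], \qquad h(x) \ge 0 .

% Worked instance: the Bernoulli distribution f(x \mid p) = p^x (1-p)^{1-x}, x \in \{0,1\},
% is of this form with
\eta(p) = \log\frac{p}{1-p}, \qquad T(x) = x, \qquad A(p) = -\log(1-p), \qquad h(x) = 1 .

In the natural parameterisation η = log(p/(1−p)), the Bernoulli log-partition function becomes A(η) = log(1 + e^η), which connects to the canonical form discussed next.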
An alternative, equivalent form often given is or equivalently In terms of log probability, Note that and Support must be independent of Importantly, the support of (all the possible values for which is greater than ) is required to not depend on This requirement can be used to exclude a parametric family distribution from being an exponential family. For example: The Pareto distribution has a pdf which is defined for (the minimum value, being the scale parameter) and its support, therefore, has a lower limit of Since the support of is dependent on the value of the parameter, the family of Pareto distributions does not form an exponential family of distributions (at least when is unknown). Another example: Bernoulli-type distributions – binomial, negative binomial, geometric distribution, and similar – can only be included in the exponential class if the number of Bernoulli trials, is treated as a fixed constant – excluded from the free parameter(s) – since the allowed number of trials sets the limits for the number of "successes" or "failures" that can be observed in a set of trials. Vector valued and Often is a vector of measurements, in which case may be a function from the space of possible values of to the real numbers. More generally, and can each be vector-valued such that is real-valued. However, see the discussion below on vector parameters, regarding the exponential family. Canonical formulation If then the exponential family is said to be in canonical form. By defining a transformed parameter it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, since can be multiplied by any nonzero constant, provided that is multiplied by that constant's reciprocal, or a constant c can be added to and multiplied by to offset it. In the special case that and then the family is called a natural exponential family. Even when is a scalar, and there is only a single parameter, the functions and can still be vectors, as described below. The function or equivalently is automatically determined once the other functions have been chosen, since it must assume a form that causes the distribution to be normalized (sum or integrate to one over the entire domain). Furthermore, both of these functions can always be written as functions of even when is not a one-to-one function, i.e. two or more different values of map to the same value of and hence cannot be inverted. In such a case, all values of mapping to the same will also have the same value for and Factorization of the variables involved What is important to note, and what characterizes all exponential family variants, is that the parameter(s) and the observation variable(s) must factorize (can be separated into products each of which involves only one type of variable), either directly or within either part (the base or exponent) of an exponentiation operation. Generally, this means that all of the factors constituting the density or mass function must be of one of the following forms: where and are arbitrary functions of the observed statistical variable; and are arbitrary functions of the fixed parameters defining the shape of the distribution; and is any arbitrary constant expression (i.e. a number or an expression that does not change with either or ). There are further restrictions on how many such factors can occur. For example, the two expressions: are the same, i.e. a product of two "allowed" factors. 
However, when rewritten into the factorized form, it can be seen that it cannot be expressed in the required form. (However, a form of this sort is a member of a curved exponential family, which allows multiple factorized terms in the exponent.) To see why an expression of the form qualifies, and hence factorizes inside of the exponent. Similarly, and again factorizes inside of the exponent. A factor consisting of a sum where both types of variables are involved (e.g. a factor of the form ) cannot be factorized in this fashion (except in some cases where occurring directly in an exponent); this is why, for example, the Cauchy distribution and Student's t distribution are not exponential families. Vector parameter The definition in terms of one real-number parameter can be extended to one real-vector parameter A family of distributions is said to belong to a vector exponential family if the probability density function (or probability mass function, for discrete distributions) can be written as or in a more compact form, This form writes the sum as a dot product of vector-valued functions and . An alternative, equivalent form often seen is As in the scalar valued case, the exponential family is said to be in canonical form if A vector exponential family is said to be curved if the dimension of is less than the dimension of the vector That is, if the dimension, , of the parameter vector is less than the number of functions, , of the parameter vector in the above representation of the probability density function. Most common distributions in the exponential family are not curved, and many algorithms designed to work with any exponential family implicitly or explicitly assume that the distribution is not curved. Just as in the case of a scalar-valued parameter, the function or equivalently is automatically determined by the normalization constraint, once the other functions have been chosen. Even if is not one-to-one, functions and can be defined by requiring that the distribution is normalized for each value of the natural parameter . This yields the canonical form or equivalently The above forms may sometimes be seen with in place of . These are exactly equivalent formulations, merely using different notation for the dot product. Vector parameter, vector variable The vector-parameter form over a single scalar-valued random variable can be trivially expanded to cover a joint distribution over a vector of random variables. The resulting distribution is simply the same as the above distribution for a scalar-valued random variable with each occurrence of the scalar replaced by the vector The dimensions of the random variable need not match the dimension of the parameter vector, nor (in the case of a curved exponential function) the dimension of the natural parameter and sufficient statistic  . The distribution in this case is written as Or more compactly as Or alternatively as Measure-theoretic formulation We use cumulative distribution functions (CDF) in order to encompass both discrete and continuous distributions. Suppose is a non-decreasing function of a real variable. Then Lebesgue–Stieltjes integrals with respect to are integrals with respect to the reference measure of the exponential family generated by  . Any member of that exponential family has cumulative distribution function is a Lebesgue–Stieltjes integrator for the reference measure. When the reference measure is finite, it can be normalized and is actually the cumulative distribution function of a probability distribution. 
If is absolutely continuous with a density with respect to a reference measure (typically Lebesgue measure), one can write . In this case, is also absolutely continuous and can be written so the formulas reduce to that of the previous paragraphs. If is discrete, then is a step function (with steps on the support of ). Alternatively, we can write the probability measure directly as for some reference measure . Interpretation In the definitions above, the functions , , and were arbitrary. However, these functions have important interpretations in the resulting probability distribution. is a sufficient statistic of the distribution. For exponential families, the sufficient statistic is a function of the data that holds all information the data provides with regard to the unknown parameter values. This means that, for any data sets and , the likelihood ratio is the same, that is if . This is true even if and are not equal to each other. The dimension of equals the number of parameters of and encompasses all of the information regarding the data related to the parameter . The sufficient statistic of a set of independent identically distributed data observations is simply the sum of individual sufficient statistics, and encapsulates all the information needed to describe the posterior distribution of the parameters, given the data (and hence to derive any desired estimate of the parameters). (This important property is discussed further below.) is called the natural parameter. The set of values of for which the function is integrable is called the natural parameter space. It can be shown that the natural parameter space is always convex. is called the log-partition function because it is the logarithm of a normalization factor, without which would not be a probability distribution: The function is important in its own right, because the mean, variance and other moments of the sufficient statistic can be derived simply by differentiating . For example, because is one of the components of the sufficient statistic of the gamma distribution, can be easily determined for this distribution using . Technically, this is true because is the cumulant generating function of the sufficient statistic. Properties Exponential families have a large number of properties that make them extremely useful for statistical analysis. In many cases, it can be shown that only exponential families have these properties. Examples: Exponential families are the only families with sufficient statistics that can summarize arbitrary amounts of independent identically distributed data using a fixed number of values. (Pitman–Koopman–Darmois theorem) Exponential families have conjugate priors, an important property in Bayesian statistics. The posterior predictive distribution of an exponential-family random variable with a conjugate prior can always be written in closed form (provided that the normalizing factor of the exponential-family distribution can itself be written in closed form). In the mean-field approximation in variational Bayes (used for approximating the posterior distribution in large Bayesian networks), the best approximating posterior distribution of an exponential-family node (a node is a random variable in the context of Bayesian networks) with a conjugate prior is in the same family as the node. Given an exponential family defined by , where is the parameter space, such that . Then If has nonempty interior in , then given any IID samples , the statistic is a complete statistic for . 
is a minimal statistic for iff for all , and in the support of , if , then or . Examples It is critical, when considering the examples in this section, to remember the discussion above about what it means to say that a "distribution" is an exponential family, and in particular to keep in mind that the set of parameters that are allowed to vary is critical in determining whether a "distribution" is or is not an exponential family. The normal, exponential, log-normal, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, geometric, inverse Gaussian, ALAAM, von Mises, and von Mises-Fisher distributions are all exponential families. Some distributions are exponential families only if some of their parameters are held fixed. The family of Pareto distributions with a fixed minimum bound xm form an exponential family. The families of binomial and multinomial distributions with fixed number of trials n but unknown probability parameter(s) are exponential families. The family of negative binomial distributions with fixed number of failures (a.k.a. stopping-time parameter) r is an exponential family. However, when any of the above-mentioned fixed parameters are allowed to vary, the resulting family is not an exponential family. As mentioned above, as a general rule, the support of an exponential family must remain the same across all parameter settings in the family. This is why the above cases (e.g. binomial with varying number of trials, Pareto with varying minimum bound) are not exponential families — in all of the cases, the parameter in question affects the support (particularly, changing the minimum or maximum possible value). For similar reasons, neither the discrete uniform distribution nor continuous uniform distribution are exponential families as one or both bounds vary. The Weibull distribution with fixed shape parameter k is an exponential family. Unlike in the previous examples, the shape parameter does not affect the support; the fact that allowing it to vary makes the Weibull non-exponential is due rather to the particular form of the Weibull's probability density function (k appears in the exponent of an exponent). In general, distributions that result from a finite or infinite mixture of other distributions, e.g. mixture model densities and compound probability distributions, are not exponential families. Examples are typical Gaussian mixture models as well as many heavy-tailed distributions that result from compounding (i.e. infinitely mixing) a distribution with a prior distribution over one of its parameters, e.g. the Student's t-distribution (compounding a normal distribution over a gamma-distributed precision prior), and the beta-binomial and Dirichlet-multinomial distributions. Other examples of distributions that are not exponential families are the F-distribution, Cauchy distribution, hypergeometric distribution and logistic distribution. Following are some detailed examples of the representation of some useful distribution as exponential families. Normal distribution: unknown mean, known variance As a first example, consider a random variable distributed normally with unknown mean μ and known variance σ2. The probability density function is then This is a single-parameter exponential family, as can be seen by setting If σ = 1 this is in canonical form, as then η(μ) = μ. Normal distribution: unknown mean and unknown variance Next, consider the case of a normal distribution with unknown mean and unknown variance. 
The probability density function is then This is an exponential family which can be written in canonical form by defining Binomial distribution As an example of a discrete exponential family, consider the binomial distribution with known number of trials n. The probability mass function for this distribution is This can equivalently be written as which shows that the binomial distribution is an exponential family, whose natural parameter is This function of p is known as logit. Table of distributions The following table shows how to rewrite a number of common distributions as exponential-family distributions with natural parameters. Refer to the flashcards for main exponential families. For a scalar variable and scalar parameter, the form is as follows: For a scalar variable and vector parameter: For a vector variable and vector parameter: The above formulas choose the functional form of the exponential-family with a log-partition function . The reason for this is so that the moments of the sufficient statistics can be calculated easily, simply by differentiating this function. Alternative forms involve either parameterizing this function in terms of the normal parameter instead of the natural parameter, and/or using a factor outside of the exponential. The relation between the latter and the former is: To convert between the representations involving the two types of parameter, use the formulas below for writing one type of parameter in terms of the other. * The Iverson bracket is a generalization of the discrete delta-function: If the bracketed expression is true, the bracket has value 1; if the enclosed statement is false, the Iverson bracket is zero. There are many variant notations, e.g. wavey brackets: is equivalent to the notation used above. The three variants of the categorical distribution and multinomial distribution are due to the fact that the parameters are constrained, such that Thus, there are only independent parameters. Variant 1 uses natural parameters with a simple relation between the standard and natural parameters; however, only of the natural parameters are independent, and the set of natural parameters is nonidentifiable. The constraint on the usual parameters translates to a similar constraint on the natural parameters. Variant 2 demonstrates the fact that the entire set of natural parameters is nonidentifiable: Adding any constant value to the natural parameters has no effect on the resulting distribution. However, by using the constraint on the natural parameters, the formula for the normal parameters in terms of the natural parameters can be written in a way that is independent on the constant that is added. Variant 3 shows how to make the parameters identifiable in a convenient way by setting This effectively "pivots" around and causes the last natural parameter to have the constant value of 0. All the remaining formulas are written in a way that does not access , so that effectively the model has only parameters, both of the usual and natural kind. Variants 1 and 2 are not actually standard exponential families at all. Rather they are curved exponential families, i.e. there are independent parameters embedded in a -dimensional parameter space. Many of the standard results for exponential families do not apply to curved exponential families. An example is the log-partition function , which has the value of 0 in the curved cases. 
In standard exponential families, the derivatives of this function correspond to the moments (more technically, the cumulants) of the sufficient statistics, e.g. the mean and variance. However, a value of 0 suggests that the mean and variance of all the sufficient statistics are uniformly 0, whereas in fact the mean of the th sufficient statistic should be . (This does emerge correctly when using the form of shown in variant 3.) Moments and cumulants of the sufficient statistic Normalization of the distribution We start with the normalization of the probability distribution. In general, any non-negative function f(x) that serves as the kernel of a probability distribution (the part encoding all dependence on x) can be made into a proper distribution by normalizing: i.e. where The factor Z is sometimes termed the normalizer or partition function, based on an analogy to statistical physics. In the case of an exponential family where the kernel is and the partition function is Since the distribution must be normalized, we have In other words, or equivalently This justifies calling A the log-normalizer or log-partition function. Moment-generating function of the sufficient statistic Now, the moment-generating function of T(x) is proving the earlier statement that is the cumulant generating function for T. An important subclass of exponential families are the natural exponential families, which have a similar form for the moment-generating function for the distribution of x. Differential identities for cumulants In particular, using the properties of the cumulant generating function, and The first two raw moments and all mixed second moments can be recovered from these two identities. Higher-order moments and cumulants are obtained by higher derivatives. This technique is often useful when T is a complicated function of the data, whose moments are difficult to calculate by integration. Another way to see this that does not rely on the theory of cumulants is to begin from the fact that the distribution of an exponential family must be normalized, and differentiate. We illustrate using the simple case of a one-dimensional parameter, but an analogous derivation holds more generally. In the one-dimensional case, we have This must be normalized, so Take the derivative of both sides with respect to η: Therefore, Example 1 As an introductory example, consider the gamma distribution, whose distribution is defined by Referring to the above table, we can see that the natural parameter is given by the reverse substitutions are the sufficient statistics are and the log-partition function is We can find the mean of the sufficient statistics as follows. First, for η1: Where is the digamma function (derivative of log gamma), and we used the reverse substitutions in the last step. Now, for η2: again making the reverse substitution in the last step. To compute the variance of x, we just differentiate again: All of these calculations can be done using integration, making use of various properties of the gamma function, but this requires significantly more work. Example 2 As another example consider a real valued random variable X with density indexed by shape parameter (this is called the skew-logistic distribution). 
The density can be rewritten as Notice this is an exponential family with natural parameter sufficient statistic and log-partition function So using the first identity, and using the second identity This example illustrates a case where using this method is very simple, but the direct calculation would be nearly impossible. Example 3 The final example is one where integration would be extremely difficult. This is the case of the Wishart distribution, which is defined over matrices. Even taking derivatives is a bit tricky, as it involves matrix calculus, but the respective identities are listed in that article. From the above table, we can see that the natural parameter is given by the reverse substitutions are and the sufficient statistics are The log-partition function is written in various forms in the table, to facilitate differentiation and back-substitution. We use the following forms: Expectation of X (associated with η1) To differentiate with respect to η1, we need the following matrix calculus identity: Then: The last line uses the fact that V is symmetric, and therefore it is the same when transposed. Expectation of log |X| (associated with η2) Now, for η2, we first need to expand the part of the log-partition function that involves the multivariate gamma function: We also need the digamma function: Then: This latter formula is listed in the Wishart distribution article. Both of these expectations are needed when deriving the variational Bayes update equations in a Bayes network involving a Wishart distribution (which is the conjugate prior of the multivariate normal distribution). Computing these formulas using integration would be much more difficult. The first one, for example, would require matrix integration. Entropy Relative entropy The relative entropy (Kullback–Leibler divergence, KL divergence) of two distributions in an exponential family has a simple expression as the Bregman divergence between the natural parameters with respect to the log-normalizer. The relative entropy is defined in terms of an integral, while the Bregman divergence is defined in terms of a derivative and inner product, and thus is easier to calculate and has a closed-form expression (assuming the derivative has a closed-form expression). Further, the Bregman divergence in terms of the natural parameters and the log-normalizer equals the Bregman divergence of the dual parameters (expectation parameters), in the opposite order, for the convex conjugate function. Fixing an exponential family with log-normalizer (with convex conjugate ), writing for the distribution in this family corresponding a fixed value of the natural parameter (writing for another value, and with for the corresponding dual expectation/moment parameters), writing for the KL divergence, and for the Bregman divergence, the divergences are related as: The KL divergence is conventionally written with respect to the first parameter, while the Bregman divergence is conventionally written with respect to the second parameter, and thus this can be read as "the relative entropy is equal to the Bregman divergence defined by the log-normalizer on the swapped natural parameters", or equivalently as "equal to the Bregman divergence defined by the dual to the log-normalizer on the expectation parameters". Maximum-entropy derivation Exponential families arise naturally as the answer to the following question: what is the maximum-entropy distribution consistent with given constraints on expected values? 
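The answer can be previewed compactly before the derivation that follows (a sketch only, using the same conventional notation as in the earlier sections):

% Maximizing entropy relative to a reference measure dH, subject to the
% moment constraints E[T_i(X)] = t_i for i = 1, ..., n (with T_0 = 1 imposing
% normalization), yields an exponential family:
dF(x) = \exp\!\Bigl( \sum_{i=1}^{n} \eta_i\, T_i(x) - A(\eta) \Bigr)\, dH(x) ,
% where the natural parameters \eta_i are the Lagrange multipliers of the
% constraints and A(\eta), the log-normalizer, is the multiplier attached to T_0.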
The information entropy of a probability distribution dF(x) can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and both measures must be mutually absolutely continuous. Accordingly, we need to pick a reference measure dH(x) with the same support as dF(x). The entropy of dF(x) relative to dH(x) is or where dF/dH and dH/dF are Radon–Nikodym derivatives. The ordinary definition of entropy for a discrete distribution supported on a set I, namely assumes, though this is seldom pointed out, that dH is chosen to be the counting measure on I. Consider now a collection of observable quantities (random variables) Ti. The probability distribution dF whose entropy with respect to dH is greatest, subject to the conditions that the expected value of Ti be equal to ti, is an exponential family with dH as reference measure and (T1, ..., Tn) as sufficient statistic. The derivation is a simple variational calculation using Lagrange multipliers. Normalization is imposed by letting T0 = 1 be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated to T0. For examples of such derivations, see Maximum entropy probability distribution. Role in statistics Classical estimation: sufficiency According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases. Less tersely, suppose Xk, (where k = 1, 2, 3, ... n) are independent, identically distributed random variables. Only if their distribution is one of the exponential family of distributions is there a sufficient statistic T(X1, ..., Xn) whose number of scalar components does not increase as the sample size n increases; the statistic T may be a vector or a single scalar number, but whatever it is, its size will neither grow nor shrink when more data are obtained. As a counterexample if these conditions are relaxed, the family of uniform distributions (either discrete or continuous, with either or both bounds unknown) has a sufficient statistic, namely the sample maximum, sample minimum, and sample size, but does not form an exponential family, as the domain varies with the parameters. Bayesian estimation: conjugate distributions Exponential families are also important in Bayesian statistics. In Bayesian statistics a prior distribution is multiplied by a likelihood function and then normalised to produce a posterior distribution. In the case of a likelihood which belongs to an exponential family there exists a conjugate prior, which is often also in an exponential family. A conjugate prior π for the parameter of an exponential family is given by or equivalently where s is the dimension of and and are hyperparameters (parameters controlling parameters). corresponds to the effective number of observations that the prior distribution contributes, and corresponds to the total amount that these pseudo-observations contribute to the sufficient statistic over all observations and pseudo-observations. is a normalization constant that is automatically determined by the remaining functions and serves to ensure that the given function is a probability density function (i.e. it is normalized). 
and equivalently are the same functions as in the definition of the distribution over which π is the conjugate prior. A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of a Poisson distribution the use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. However, if one's belief about the likely value of the theta parameter of a binomial is represented by (say) a bimodal (two-humped) prior distribution, then this cannot be represented by a beta distribution. It can however be represented by using a mixture density as the prior, here a combination of two beta distributions; this is a form of hyperprior. An arbitrary likelihood will not belong to an exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods. To show that the above prior distribution is a conjugate prior, we can derive the posterior. First, assume that the probability of a single observation follows an exponential family, parameterized using its natural parameter: Then, for data , the likelihood is computed as follows: Then, for the above conjugate prior: We can then compute the posterior as follows: The last line is the kernel of the posterior distribution, i.e. This shows that the posterior has the same form as the prior. The data X enters into this equation only in the expression which is termed the sufficient statistic of the data. That is, the value of the sufficient statistic is sufficient to completely determine the posterior distribution. The actual data points themselves are not needed, and all sets of data points with the same sufficient statistic will have the same distribution. This is important because the dimension of the sufficient statistic does not grow with the data size — it has only as many components as the components of (equivalently, the number of parameters of the distribution of a single data point). The update equations are as follows: This shows that the update equations can be written simply in terms of the number of data points and the sufficient statistic of the data. This can be seen clearly in the various examples of update equations shown in the conjugate prior page. Because of the way that the sufficient statistic is computed, it necessarily involves sums of components of the data (in some cases disguised as products or other forms — a product can be written in terms of a sum of logarithms). The cases where the update equations for particular distributions don't exactly match the above forms are cases where the conjugate prior has been expressed using a different parameterization than the one that produces a conjugate prior of the above form — often specifically because the above form is defined over the natural parameter while conjugate priors are usually defined over the actual parameter Unbiased estimation If the likelihood is an exponential family, then the unbiased estimator of is . 
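As a concrete illustration of the conjugate-prior update equations discussed above, the following is a minimal Python sketch of the Beta-Bernoulli case (the function and variable names are illustrative only, not taken from any particular library). It shows that the posterior hyperparameters depend on the data only through the sample size and the sufficient statistic, here the total number of successes.

# Conjugate update for a Bernoulli likelihood with a Beta prior.
# Prior: Beta(alpha, beta).  Data: x_1, ..., x_n with each x_i in {0, 1}.
# Posterior: Beta(alpha + sum(x), beta + n - sum(x)), i.e. it depends on the
# data only through n and the sufficient statistic T = sum(x).

def beta_bernoulli_update(alpha, beta, observations):
    """Return the posterior (alpha, beta) after observing 0/1 outcomes."""
    n = len(observations)
    t = sum(observations)              # sufficient statistic
    return alpha + t, beta + n - t

# Example: a uniform Beta(1, 1) prior updated with ten observations.
post_a, post_b = beta_bernoulli_update(1.0, 1.0, [1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
print(post_a, post_b)                  # -> 8.0 4.0
print(post_a / (post_a + post_b))      # posterior mean of the success probability

Because the Bernoulli likelihood is an exponential family, the same pattern holds for any conjugate pair: the posterior hyperparameters are obtained by adding the number of observations and the summed sufficient statistic to the corresponding prior hyperparameters.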
Hypothesis testing: uniformly most powerful tests A one-parameter exponential family has a monotone non-decreasing likelihood ratio in the sufficient statistic T(x), provided that η(θ) is non-decreasing. As a consequence, there exists a uniformly most powerful test for testing the hypothesis H0: θ ≥ θ0 vs. H1: θ < θ0. Generalized linear models Exponential families form the basis for the distribution functions used in generalized linear models (GLM), a class of models that encompasses many of the commonly used regression models in statistics. Examples include logistic regression using the binomial family and Poisson regression. See also Exponential dispersion model Gibbs measure Modified half-normal distribution Natural exponential family Footnotes References Citations Sources Further reading External links A primer on the exponential family of distributions Exponential family of distributions on the Earliest known uses of some of the words of mathematics jMEF: A Java library for exponential families Graphical Models, Exponential Families, and Variational Inference by Wainwright and Jordan (2008) Exponentials Continuous distributions Discrete distributions Types of probability distributions
Exponential family
[ "Mathematics" ]
6,974
[ "E (mathematical constant)", "Exponentials" ]
339,183
https://en.wikipedia.org/wiki/Points%20of%20the%20compass
The points of the compass are a set of horizontal, radially arrayed compass directions (or azimuths) used in navigation and cartography. A compass rose is primarily composed of four cardinal directions—north, east, south, and west—each separated by 90 degrees, and secondarily divided by four ordinal (intercardinal) directions—northeast, southeast, southwest, and northwest—each located halfway between two cardinal directions. Some disciplines such as meteorology and navigation further divide the compass with additional azimuths. Within European tradition, a fully defined compass has 32 "points" (and any finer subdivisions are described in fractions of points). Compass points or compass directions are valuable in that they allow a user to refer to a specific azimuth in a colloquial fashion, without having to compute or remember degrees. Designations The names of the compass point directions follow these rules: 8-wind compass rose The four cardinal directions are north (N), east (E), south (S), west (W), at 90° angles on the compass rose. The four intercardinal (or ordinal) directions are formed by bisecting the above, giving: northeast (NE), southeast (SE), southwest (SW), and northwest (NW). In English and many other languages, these are compound words; different style guides mandate that the four be written with spaces, with dashes, or as single unbroken words. In Bulgarian, Catalan, Czech, Danish, Dutch, English, Esperanto, French, Galician, German, Greek, Hungarian, Ido, Italian, Japanese (usually), Macedonian, Norwegian (both Bokmål and Nynorsk), Polish, Portuguese, Romansch, Russian, Serbian, Croatian, Spanish, Swedish, Ukrainian, and Welsh the part meaning north or south precedes the part meaning east or west. In Chinese, Vietnamese, Gaelic, and less commonly Japanese, the part meaning east or west precedes the other. In Estonian, Finnish, Breton, the "Italianate system", and many South Asian and Southeast Asian languages such as Telugu, the intercardinals have distinct words. The eight principal winds (or main winds) are the set union of the cardinals and intercardinals. Taken in turn, each is 45° from the next. These form the 8-wind compass rose, the rose in its most basic form in general use today. 16-wind compass rose The eight half-winds are the direction points obtained by bisecting the angles between the principal winds. The half-winds are north-northeast (NNE), east-northeast (ENE), east-southeast (ESE), south-southeast (SSE), south-southwest (SSW), west-southwest (WSW), west-northwest (WNW), and north-northwest (NNW). The name of each half-wind is constructed by combining the names of the principal winds to either side, with the cardinal wind coming first and the intercardinal wind second. The eight principal winds and the eight half-winds together form the 16-wind compass rose, with each compass point at a 22.5° angle from its two neighbours. 32-wind compass rose The sixteen quarter-winds are the direction points obtained by bisecting the angles between the points on the 16-wind compass rose (above). The quarter-winds are as follows.
in NE quadrant: north by east (NbE), northeast by north (NEbN), northeast by east (NEbE), and east by north (EbN); in SE quadrant: east by south (EbS), southeast by east (SEbE), southeast by south (SEbS), and south by east (SbE); in SW quadrant: south by west (SbW), southwest by south (SWbS), southwest by west (SWbW), and west by south (WbS); in NW quadrant: west by north (WbN), northwest by west (NWbW), northwest by north (NWbN), and north by west (NbW) All of the points in the 16-wind compass rose plus the sixteen quarter-winds together form the 32-wind compass rose. When a name is broken down for study or signalling, the subcomponents are given as the "principal" wind followed by the "cardinal" wind/direction. As a mnemonic (memory device), the meaning of "X by Y" can be read as "one small measure from X towards Y". It can be noted such measure ('one point') is 11.25°. So, for example, "northeast by east" means "one quarter of the gap from NE towards E". In summary, the 32-wind compass rose comes from the eight principal winds, eight half-winds, and sixteen quarter-winds combined, with each compass point at an 11.25° angle from the next. Half- and quarter-points By the middle of the 18th century, the 32-point system had been further extended by using half- and quarter-points to give a total of 128 directions. These fractional points are named by appending, for example, ¼ east, ½ east, or ¾ east to the name of one of the 32 points. Each of the 96 fractional points can be named in two ways, depending on which of the two adjoining whole points is used, for example, N¾E is equivalent to NbE¼N. Either form is easily understood, but alternative conventions as to correct usage developed in different countries and organisations. "It is the custom in the United States Navy to box from north and south toward east and west, with the exception that divisions adjacent to a cardinal or inter-cardinal point are always referred to that point." The Royal Navy used the additional "rule that quarter points were never read from a point beginning and ending with the same letter." Compass roses very rarely named the fractional points and only showed small, unlabelled markers as a guide for helmsmen. Maritime use Prior to the modern three-figure method of describing directions (using the 360° of a circle), the 32-point compass was used for directions on most ships, especially among European crews. The smallest unit of measure recognized was 'one point', 1/32 of a circle, or 11.25°. In the mariner's exercise of "boxing the compass", all thirty-two points of the compass are named in clockwise order. This exercise became more significant as navigation improved and the half- and quarter-point system increased the number of directions to include in the 'boxing'. Points remained the standard unit until switching to the three-figure degree method. These points were also used for relative measurement, so that an obstacle might be noted as 'two points off the starboard bow', meaning two points (22.5°) clockwise of straight ahead. This relative measurement may still be used in shorthand on modern ships, especially for handoffs between outgoing and incoming helmsmen, as the loss of granularity is less significant than the brevity and simplicity of the summary. 128 compass directions The table below shows how each of the 128 directions is named. The first two columns give the number of points and degrees clockwise from north. The third gives the equivalent bearing to the nearest degree from north or south towards east or west.
The "CW" column gives the fractional-point bearings increasing in the clockwise direction and "CCW" counterclockwise. The final three columns show three common naming conventions: No "by" avoids the use of "by" with fractional points. Colour coding shows whether each of the three naming systems matches the "CW" or "CCW" column. Traditional Mediterranean compass points The traditional compass rose of eight winds (and its 16-wind and 32-wind derivatives) was invented by seafarers in the Mediterranean Sea during the Middle Ages (with no obvious connection to the twelve classical compass winds of the ancient Greeks and Romans). The traditional mariner's wind names were expressed in Italian, or more precisely, the Italianate Mediterranean lingua franca common among sailors in the 13th and 14th centuries, which was principally composed of Genoese (Ligurian), mixed with Venetian, Sicilian, Provençal, Catalan, Greek, and Arabic terms from around the Mediterranean basin. This Italianate patois was used to designate the names of the principal winds on the compass rose found in mariners' compasses and portolan charts of the 14th and 15th centuries. The traditional names of the eight principal winds are: (N) – Tramontana (NE) – Greco (or Bora in some Venetian sources) (E) – Levante (sometimes Oriente) (SE) – Scirocco (or Exaloc in Catalan) (S) – Ostro (or Mezzogiorno in Venetian) (SW) – Libeccio (or Garbino, Eissalot in Provençal) (W) – Ponente (or Zephyrus in Greek) (NW) – Maestro (or Mistral in Provençal) Local spelling variations are far more numerous than listed, e.g. Tramutana, Gregale, Grecho, Sirocco, Xaloc, Lebeg, Libezo, Leveche, Mezzodi, Migjorn, Magistro, Mestre, etc. Traditional compass roses will typically have the initials T, G, L, S, O, L, P, and M on the main points. Portolan charts also colour-coded the compass winds: black for the eight principal winds, green for the eight half-winds, and red for the sixteen quarter-winds. Each half-wind name is simply a combination of the two principal winds that it bisects, with the shortest name usually placed first, for example: NNE is "Greco-Tramontana"; ENE is "Greco-Levante"; SSE is "Ostro-Scirocco", etc. The quarter winds are expressed with an Italian phrase, "Quarto di X verso Y" (one quarter from X towards Y), or "X al Y" (X to Y) or "X per Y" (X by Y). There are no irregularities to trip over; the closest principal wind always comes first, the more distant one second, for example: north-by-east is "Quarto di Tramontana verso Greco"; and northeast-by-north is "Quarto di Greco verso Tramontana". The table below shows how the 32 compass points are named. Each point has an angular range of 11.25 degrees, where the azimuth midpoint is the horizontal angular direction (clockwise from north) of the given compass bearing; minimum is the lower (counterclockwise) angular limit of the compass point; and maximum is the upper (clockwise) angular limit of the compass point. Chinese compass points Navigation texts dating from the Yuan, Ming, and Qing dynasties in China use a 24-pointed compass with named directions. These are based on the twelve Earthly Branches, which also form the basis of the Chinese zodiac. When a single direction is specified, it may be prefaced by the character (meaning single) or . Headings mid-way in-between are compounds as in English. For instance, refers to the direction halfway between point and point , or °. This technique is referred to as a double-needle compass.
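As a concrete illustration of the 32-point scheme described above, with each point spanning 11.25°, the following Python sketch converts an azimuth in degrees to the abbreviation of the nearest compass point. The ordering of abbreviations follows the principal, half-wind, and quarter-wind names listed earlier; the function name is an illustrative choice, not something defined in the article.

```python
# Sketch: map an azimuth (degrees clockwise from north) to the nearest of the
# 32 compass points, each point covering 360/32 = 11.25 degrees.
POINTS = [
    "N", "NbE", "NNE", "NEbN", "NE", "NEbE", "ENE", "EbN",
    "E", "EbS", "ESE", "SEbE", "SE", "SEbS", "SSE", "SbE",
    "S", "SbW", "SSW", "SWbS", "SW", "SWbW", "WSW", "WbS",
    "W", "WbN", "WNW", "NWbW", "NW", "NWbN", "NNW", "NbW",
]

def compass_point(azimuth_deg):
    """Return the 32-wind compass point abbreviation nearest to azimuth_deg."""
    index = round(azimuth_deg / 11.25) % 32
    return POINTS[index]

print(compass_point(0))       # N
print(compass_point(33.75))   # NEbN (three points from north towards east)
print(compass_point(290))     # WNW  (290 / 11.25 ~= 25.8, rounds to 26)
```

The same lookup, read in order, is essentially what a sailor recites when "boxing the compass".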
See also Bearing (navigation) Cardinal direction Course (navigation) Heading (navigation) TVMDC Wind rose References External links Wind Rose (archived) – discusses the origins of the names for compass directions. Navigational equipment Orientation (geometry) Units of angle
Points of the compass
[ "Physics", "Mathematics" ]
2,424
[ "Quantity", "Units of angle", "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)", "Units of measurement" ]
339,195
https://en.wikipedia.org/wiki/Galileo%27s%20paradox
Galileo's paradox is a demonstration of one of the surprising properties of infinite sets. In his final scientific work, Two New Sciences, Galileo Galilei made apparently contradictory statements about the positive integers. First, a square is an integer which is the square of an integer. Some numbers are squares, while others are not; therefore, all the numbers, including both squares and non-squares, must be more numerous than just the squares. And yet, for every number there is exactly one square; hence, there cannot be more of one than of the other. This is an early use, though not the first, of the idea of one-to-one correspondence in the context of infinite sets. Galileo concluded that the ideas of less, equal, and greater apply to finite quantities but not to infinite quantities. During the nineteenth century Cantor found a framework in which this restriction is not necessary; it is possible to define comparisons amongst infinite sets in a meaningful way (by which definition the two sets, integers and squares, have "the same size"), and by this definition some infinite sets are strictly larger than others. The ideas were not new with Galileo, but his name has come to be associated with them. In particular, Duns Scotus, about 1302, compared even numbers to the whole of numbers. Galileo on infinite sets The relevant section of Two New Sciences is excerpted below: See also Dedekind-infinite set Hilbert's paradox of the Grand Hotel References External links Philosophical Method and Galileo's Paradox of Infinity by Matthew W. Parker – PhilSci-Archive Paradoxes of set theory Paradoxes of infinity Paradox
Galileo's paradox
[ "Mathematics" ]
334
[ "Paradoxes of infinity", "Mathematical objects", "Infinity", "Paradoxes of set theory", "Mathematical paradoxes", "Mathematical problems" ]
339,208
https://en.wikipedia.org/wiki/Vitaly%20Ginzburg
Vitaly Lazarevich Ginzburg, ForMemRS (4 October 1916 – 8 November 2009) was a Russian physicist who was honored with the Nobel Prize in Physics in 2003, together with Alexei Abrikosov and Anthony Leggett, for their "pioneering contributions to the theory of superconductors and superfluids." He spent his career in the former Soviet Union and was one of the leading figures in the former Soviet program of nuclear weapons, working towards designs of thermonuclear devices. He became a member of the Russian Academy of Sciences and succeeded Igor Tamm as head of the Department of Theoretical Physics of the Lebedev Physical Institute of the Russian Academy of Sciences (FIAN). In his later life, Ginzburg became an outspoken atheist and was critical of the clergy's influence in Russian society. Biography Vitaly Ginzburg was born to a Jewish family in Moscow on 4 October 1916, the son of an engineer, Lazar Yefimovich Ginzburg, and a doctor, Augusta Wildauer. He graduated from the Physics Faculty of Moscow State University, his mother's alma mater, in 1938, defended his candidate's (Kandidat Nauk) dissertation in 1940, and defended his doctor's (Doktor Nauk) thesis in 1942. In 1944, he became a member of the Communist Party of the Soviet Union. Among his achievements are a partially phenomenological theory of superconductivity, the Ginzburg–Landau theory, developed with Lev Landau in 1950; the theory of electromagnetic wave propagation in plasmas (for example, in the ionosphere); and a theory of the origin of cosmic radiation. He is also known to biologists as being part of the group of scientists that helped bring down the reign of the politically connected anti-Mendelian agronomist Trofim Lysenko, thus allowing modern genetic science to return to the USSR. In 1937, Ginzburg married Olga Zamsha. In 1946, he married his second wife, Nina Ginzburg (née Yermakova), who had spent more than a year in custody on fabricated charges of plotting to assassinate the Soviet leader Joseph Stalin. As a renowned professor and researcher, Ginzburg was an obvious candidate for the Soviet bomb project. From 1948 through 1952 Ginzburg worked under Igor Kurchatov to help with the hydrogen bomb. Ginzburg and Igor Tamm both proposed ideas that would make it possible to build a hydrogen bomb. When the bomb project moved to Arzamas-16 to continue in even more secrecy, Ginzburg was not allowed to follow. Instead he stayed in Moscow and supported the work from afar, remaining under watch because of his background and past. As the work became ever more classified, Ginzburg was phased out of the project and allowed to pursue his true passion, superconductors. During the Cold War, the thirst for knowledge and technological advancement was never-ending, and research on superconductors was no exception. The Soviet Union believed that research on superconductors would place it ahead of its American counterparts. Both sides sought to leverage the potential military applications of superconductors. Ginzburg was the editor-in-chief of the scientific journal Uspekhi Fizicheskikh Nauk. He also headed the Academic Department of Physics and Astrophysics Problems, which Ginzburg founded at the Moscow Institute of Physics and Technology in 1968.
Ginzburg identified as a secular Jew, and following the collapse of communism in the former Soviet Union, he was very active in Jewish life, especially in Russia, where he served on the board of directors of the Russian Jewish Congress. He was also well known for fighting anti-Semitism and supporting the state of Israel. In the 2000s, Ginzburg was politically active, supporting the Russian liberal opposition and the human rights movement. He defended Igor Sutyagin and Valentin Danilov against charges of espionage put forth by the authorities. On 2 April 2009, in an interview with Radio Liberty, Ginzburg denounced the FSB as an institution harmful to Russia and the ongoing expansion of its authority as a return to Stalinism. Ginzburg worked at the P. N. Lebedev Physical Institute of the Soviet (later Russian) Academy of Sciences in Moscow from 1940. The Russian Academy of Sciences is a major institution where almost all of Russia's Nobel laureates in physics have done their studies and/or research. Stance on religion Ginzburg was an avowed atheist, both under the militantly atheist Soviet government and in post-Communist Russia when religion made a strong revival. He criticized clericalism in the press and wrote several books devoted to the questions of religion and atheism. Because of this, some Orthodox Christian groups denounced him and said no science award could excuse his verbal attacks on the Russian Orthodox Church. He was one of the signers of the open letter to President Vladimir V. Putin from members of the Russian Academy of Sciences against the clericalisation of Russia. Nobel Prize Vitaly Ginzburg, Anthony Leggett, and Alexei Abrikosov were awarded the Nobel Prize in Physics in 2003 for their groundbreaking work on the theory of superconductors. The Nobel Prize recognized Ginzburg's work in theoretical physics, specifically his contributions to understanding the behavior of matter at extremely low temperatures. His collaboration with Lev Landau in 1950 led to the development of the Ginzburg–Landau theory, which became foundational to later work on superconductors. Landau had been working on superconductors for years before their partnership, publishing many papers between 1941 and 1947 on the properties of quantum fluids at extremely low temperatures. Landau would later receive the 1962 Nobel Prize for his 1941 research on the properties of superfluid liquid helium. Before their collaboration, Landau's research had dealt with liquid helium and other quantum fluids; Ginzburg's contribution allowed them to go a step further. Ginzburg introduced the concept of an order parameter, which allowed them to characterize the state of the superconductor. From this, they derived a set of equations describing the behavior of the superconductor. These equations provided a model from which researchers can understand the transition between the normal and superconducting states, as well as predict various properties of other superconductors. Using these equations, they were also able to introduce the Ginzburg–Landau parameter, which classifies a material as a Type-I or Type-II superconductor. Anthony Leggett later built upon this advancement in his own research on superconductors. This research on superconductors allowed many new technological advancements to unfold, including some we can see in everyday life.
The use of superconductors can be seen in MRI machines, engines, and new Maglev trains. Death A spokeswoman for the Russian Academy of Sciences announced that Ginzburg died in Moscow on 8 November 2009 from cardiac arrest. He had been suffering from ill health for several years, and three years before his death said "In general, I envy believers. I am 90, and [am] being overcome by illnesses. For believers, it is easier to deal with them and with life's other hardships. But what can be done? I cannot believe in resurrection after death." Prime Minister of Russia Vladimir Putin sent his condolences to Ginzburg's family, saying "We bid farewell to an extraordinary personality whose outstanding talent, exceptional strength of character and firmness of convictions evoked true respect from his colleagues". President of Russia Dmitry Medvedev, in his letter of condolences, described Ginzburg as a "top physicist of our time whose discoveries had a huge impact on the development of national and world science." Ginzburg was buried on 11 November in the Novodevichy Cemetery in Moscow, the resting place of many famous politicians, writers and scientists of Russia. Family The first wife (in 1937–1946) is a graduate of the Faculty of Physics of Moscow State University (1938) Olga Ivanovna Zamsha (born 1915, Yeysk), candidate of physical and mathematical sciences (1945), associate professor at MEPhI (1949–1985), author of the “Collection of problems on general physics" (with co-authors, 1968, 1972, 1975). The second wife (since 1946) is a graduate of the Faculty of Mechanics and Mathematics of Moscow State University, experimental physicist Nina Ivanovna Ginzburg (née Ermakova) (October 2, 1922 — May 19, 2019). Daughter — Irina Vitalievna Dorman (born 1939), graduate of the Faculty of Physics of Moscow State University (1961), candidate of physical and mathematical sciences, historian of science (her husband is a cosmophysicist, doctor of physical and mathematical sciences Leib (Lev) Isaakovich Dorman). Granddaughter — Victoria Lvovna Dorman, American physicist, graduate of the physics department of Moscow State University and Princeton University, deputy dean for academic affairs at the Princeton School of Engineering and Applied Science; her husband is physicist and writer Mikhail Petrov. Great cousin — Mark Ginzburg. Other honors and awards Medal "For Valiant Labour in the Great Patriotic War 1941–1945" (1946) Medal "In Commemoration of the 800th Anniversary of Moscow" (1948) Stalin Prize in 1953 Order of Lenin (1954) Order of the Badge of Honour, twice (1954, 1975) Order of the Red Banner of Labour, twice (1956, 1986) Lenin Prize in 1966 Medal "For Valiant Labour. To commemorate the 100th anniversary of the birth of Vladimir Ilyich Lenin" (1970) Marian Smoluchowski Medal (1984) Elected a Foreign Member of the Royal Society (ForMemRS) in 1987 Gold Medal of the Royal Astronomical Society in 1991 Wolf Prize in Physics in 1994/5 Vavilov Gold Medal (1995) – for outstanding work in physics, including a series of papers on the theory of radiation by uniformly moving sources Lomonosov Gold Medal in 1995 – for outstanding achievement in the field of theoretical physics and astrophysics 3rd class (3 October 1996) – for outstanding scientific achievements and the training of highly qualified personnel Elected a Fellow of the American Physical Society in 2003. 
Order "For Merit to the Fatherland", 1st class (4 October 2006) – for outstanding contribution to the development of national science and many years of fruitful activity See also List of Jewish Nobel laureates References External links including the Nobel Lecture On Superconductivity and Superfluidity Ginzburg's homepage Curriculum Vitae Open letter to the President of the Russian Federation Vladimir V. Putin Obituary The Daily Telegraph 11 Nov 2009. Obituary The Independent November 14, 2009 (by Martin Childs). Biography Obituary Archival collections Vitalii Ginzburg papers, 1992, Niels Bohr Library & Archives 1916 births 2009 deaths Scientists from Moscow People from Moskovsky Uyezd Communist Party of the Soviet Union members Members of the Congress of People's Deputies of the Soviet Union Russian atheism activists Jewish atheists Jewish Russian physicists Nuclear weapons program of the Soviet Union people Soviet astronomers Soviet inventors Soviet physicists Russian theoretical physicists Superconductivity Academic journal editors Moscow State University alumni Academic staff of the Moscow Institute of Physics and Technology Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Foreign associates of the National Academy of Sciences Foreign fellows of the Indian National Science Academy Fellows of the American Physical Society Foreign members of the Royal Society Recipients of the Stalin Prize Recipients of the Lenin Prize Recipients of the Gold Medal of the Royal Astronomical Society Recipients of the Lomonosov Gold Medal Recipients of the Order "For Merit to the Fatherland", 1st class Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Nobel laureates in Physics Russian Nobel laureates Wolf Prize in Physics laureates UNESCO Niels Bohr Medal recipients Burials at Novodevichy Cemetery Russian scientists
Vitaly Ginzburg
[ "Physics", "Materials_science", "Technology", "Engineering" ]
2,500
[ "Physical quantities", "Superconductivity", "Recipients of the Lomonosov Gold Medal", "Materials science", "Condensed matter physics", "Science and technology awards", "Electrical resistance and conductance" ]
339,220
https://en.wikipedia.org/wiki/Alexei%20Abrikosov%20%28physicist%29
Alexei Alexeyevich Abrikosov (; June 25, 1928 – March 29, 2017) was a Soviet, Russian and American theoretical physicist whose main contributions are in the field of condensed matter physics. He was the co-recipient of the 2003 Nobel Prize in Physics, with Vitaly Ginzburg and Anthony James Leggett, for theories about how matter can behave at extremely low temperatures. Education and early life Abrikosov was born in Moscow, Russian SFSR, Soviet Union, on June 25, 1928, to a couple of physicians: Aleksey Abrikosov and Fani ( Wulf). His mother was Jewish. After graduating from high school in 1943, Abrikosov began studying energy technology. He graduated from Moscow State University in 1948. From 1948 to 1965, he worked at the Institute for Physical Problems of the USSR Academy of Sciences, where he received his Ph.D. in 1951 for the theory of thermal diffusion in plasmas, and then his Doctor of Physical and Mathematical Sciences (a "higher doctorate") degree in 1955 for a thesis on quantum electrodynamics at high energies. Abrikosov moved to the US in 1991 and lived there until his death in 2017, in Palo Alto, California. While in the US, Abrikosov was elected to the National Academy of Sciences in 2000, and in 2001, to be a foreign member of the Royal Society. Career From 1965 to 1988, he worked at the Landau Institute for Theoretical Physics (USSR Academy of Sciences). He has been a professor at Moscow State University since 1965. In addition, he held tenure at the Moscow Institute of Physics and Technology from 1972 to 1976, and at the Moscow Institute of Steel and Alloys from 1976 to 1991. He served as a full member of the USSR Academy of Sciences from 1987 to 1991. In 1991, he became a full member of the Russian Academy of Sciences. In two works in 1952 and 1957, Abrikosov explained how magnetic flux can penetrate a class of superconductors. This class of materials are called type-II superconductors. The accompanying arrangement of magnetic flux lines is called the Abrikosov vortex lattice. Together with Lev Gor'kov and Igor Dzyaloshinskii, Abrikosov has written an iconic book on theoretical solid-state physics, which has been used to train physicists in the field for decades. From 1991 until his retirement, he worked at Argonne National Laboratory in the U.S. state of Illinois. Abrikosov was an Argonne Distinguished Scientist at the Condensed Matter Theory Group in Argonne's Materials Science Division. When he received the Nobel Prize, his research was focused on the origins of magnetoresistance, a property of some materials that change their resistance to electrical flow under the influence of a magnetic field. Honours and awards Abrikosov was awarded the Lenin Prize in 1966, the Fritz London Memorial Prize in 1972, and the USSR State Prize in 1982. In 1989 he received the Landau Prize from the Academy of Sciences, Russia. Two years later, in 1991, Abrikosov was awarded the Sony Corporation's John Bardeen Award. The same year he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. He shared the 2003 Nobel Prize in Physics. He was also a member of the Royal Academy of London, a fellow of the American Physical Society, and in 2000 was elected to the prestigious National Academy of Sciences. 
Other awards include: Member of the Academy of Sciences of the USSR (now Russian Academy of Sciences), 1964 Honorary Doctor of the University of Lausanne, 1975 Order of the Badge of Honour, 1975 Order of the Red Banner of Labour, 1988 Academician of the Academy of Sciences of the USSR (now Russian Academy of Sciences), 1987 Elected a Foreign Member of the Royal Society (ForMemRS) in 2001 Golden Plate Award of the American Academy of Achievement, 2004 Gold Medal of Vernadsky from National Academy of Sciences of Ukraine, 2015 Personal life Abrikosov was the son of the physicians Alexei Ivanovich Abrikosov (1875-1955) and his second wife, Fania Davidovna Woolf (1895—1965). Through his father, Abrikosov was the nephew of the martyred Catholic nun Anna Abrikosova (1882-1936). His sister was Maria Alekseevna Abrikósova (1929-1998), physician. He married Svetlana Yuriyevna Bunkova and had 3 children. He died in California on 29 March 2017 at the age of 88. Books See also List of Jewish Nobel laureates References External links including the Nobel Lecture on December 8, 2003 Type II Superconductors and the Vortex Lattice M. R. Norman, "Aleksei A. Abrikosov", Biographical Memoirs of the National Academy of Sciences (2018) 1928 births 2017 deaths Nobel laureates in Physics American Nobel laureates Russian Nobel laureates Members of the United States National Academy of Sciences Foreign members of the Royal Society Jewish American physicists Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Moscow State University alumni Academic staff of Moscow State University Academic staff of the Moscow Institute of Physics and Technology Recipients of the Lenin Prize Recipients of the Order of the Red Banner of Labour Recipients of the USSR State Prize Jewish Russian physicists Soviet physicists Superconductivity Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Theoretical physicists Soviet Jews 20th-century Russian physicists
Alexei Abrikosov (physicist)
[ "Physics", "Materials_science", "Engineering" ]
1,133
[ "Physical quantities", "Theoretical physics", "Superconductivity", "Materials science", "Condensed matter physics", "Theoretical physicists", "Electrical resistance and conductance" ]
339,350
https://en.wikipedia.org/wiki/Black%20hole%20thermodynamics
In physics, black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. As the study of the statistical mechanics of black-body radiation led to the development of the theory of quantum mechanics, the effort to understand the statistical mechanics of black holes has had a deep impact upon the understanding of quantum gravity, leading to the formulation of the holographic principle. Overview The second law of thermodynamics requires that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. The increase of the entropy of the black hole more than compensates for the decrease of the entropy carried by the object that was swallowed. In 1972, Jacob Bekenstein conjectured that black holes should have an entropy proportional to the area of the event horizon; in the same year, he also proposed no-hair theorems. In 1973 Bekenstein suggested a specific value for the constant of proportionality, asserting that if the constant was not exactly this, it must be very close to it. The next year, in 1974, Stephen Hawking showed that black holes emit thermal Hawking radiation corresponding to a certain temperature (Hawking temperature). Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at 1/4: S_BH = k_B A / (4 l_P²), where A is the area of the event horizon, k_B is the Boltzmann constant, and l_P is the Planck length, l_P² = Għ/c³. This is often referred to as the Bekenstein–Hawking formula. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon A. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle. This area relationship was generalized to arbitrary regions via the Ryu–Takayanagi formula, which relates the entanglement entropy of a boundary conformal field theory to a specific surface in its dual gravitational theory. Although Hawking's calculations gave further thermodynamic evidence for black hole entropy, until 1995 no one was able to make a controlled calculation of black hole entropy based on statistical mechanics, which associates entropy with a large number of microstates. In fact, so-called "no-hair" theorems appeared to suggest that black holes could have only a single microstate. The situation changed in 1995 when Andrew Strominger and Cumrun Vafa calculated the right Bekenstein–Hawking entropy of a supersymmetric black hole in string theory, using methods based on D-branes and string duality. Their calculation was followed by many similar computations of the entropy of large classes of other extremal and near-extremal black holes, and the result always agreed with the Bekenstein–Hawking formula. However, for the Schwarzschild black hole, viewed as the most far-from-extremal black hole, the relationship between micro- and macrostates has not been characterized. Efforts to develop an adequate answer within the framework of string theory continue. In loop quantum gravity (LQG) it is possible to associate a geometrical interpretation with the microstates: these are the quantum geometries of the horizon.
LQG offers a geometric explanation of the finiteness of the entropy and of its proportionality to the area of the horizon. It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (the first law), the Unruh temperature and the distribution that yields Hawking entropy. The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes. The calculation of the Bekenstein–Hawking entropy from the point of view of loop quantum gravity has also been discussed. The currently accepted microstate ensemble for black holes is the microcanonical ensemble. The partition function for black holes results in a negative heat capacity. In the canonical ensemble the heat capacity is constrained to be positive, whereas the microcanonical ensemble admits a negative heat capacity. The laws of black hole mechanics The four laws of black hole mechanics are physical properties that black holes are believed to satisfy. The laws, analogous to the laws of thermodynamics, were discovered by Jacob Bekenstein, Brandon Carter, and James Bardeen. Further considerations were made by Stephen Hawking. Statement of the laws The laws of black hole mechanics are expressed in geometrized units. The zeroth law The horizon has constant surface gravity κ for a stationary black hole. The first law For perturbations of stationary black holes, the change of energy is related to change of area, angular momentum, and electric charge by dE = (κ/8π) dA + Ω dJ + Φ dQ, where E is the energy, κ is the surface gravity, A is the horizon area, Ω is the angular velocity, J is the angular momentum, Φ is the electrostatic potential and Q is the electric charge. The second law The horizon area is, assuming the weak energy condition, a non-decreasing function of time: dA/dt ≥ 0. This "law" was superseded by Hawking's discovery that black holes radiate, which causes both the black hole's mass and the area of its horizon to decrease over time. The third law It is not possible to form a black hole with vanishing surface gravity. That is, κ = 0 cannot be achieved. Discussion of the laws The zeroth law The zeroth law is analogous to the zeroth law of thermodynamics, which states that the temperature is constant throughout a body in thermal equilibrium. It suggests that the surface gravity is analogous to temperature: T constant for thermal equilibrium in a normal system is analogous to κ constant over the horizon of a stationary black hole. The first law The left side, dE, is the change in energy (proportional to mass). Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right side the term T dS. The second law The second law is the statement of Hawking's area theorem. Analogously, the second law of thermodynamics states that the change in entropy in an isolated system will be greater than or equal to 0 for a spontaneous process, suggesting a link between entropy and the area of a black hole horizon. Taken on its own, however, this version appears to conflict with the ordinary second law: matter falling into a black hole carries its entropy out of view, which by itself would amount to a decrease in entropy. Generalizing the second law to the sum of the black hole entropy and the outside entropy shows that the second law of thermodynamics is not violated in a system including the universe beyond the horizon.
The generalized second law of thermodynamics (GSL) was needed to present the second law of thermodynamics as valid. This is because the second law of thermodynamics, as a result of the disappearance of entropy from the exterior of black holes, is not useful on its own. The GSL allows the law to be applied because it counts the black hole's entropy together with the ordinary entropy outside. The validity of the GSL can be established by studying an example, such as a system carrying entropy that falls into a bigger, stationary black hole, and establishing upper and lower entropy bounds for the increase in the black hole entropy and the entropy of the system, respectively. One should also note that the GSL will hold for theories of gravity such as Einstein gravity, Lovelock gravity, or braneworld gravity, because the conditions to use the GSL for these can be met. For black hole formation, however, the question becomes whether the generalized second law of thermodynamics remains valid, and if it does, it will have been proved valid for all situations. Because black hole formation is not stationary, but instead dynamical, proving that the GSL holds is difficult. Proving that the GSL is generally valid would require using quantum-statistical mechanics, because the GSL is both a quantum and a statistical law. Such a discipline does not yet exist, so the GSL can be assumed to be useful in general, as well as for prediction. For example, one can use the GSL to predict that, for a cold, non-rotating assembly of nucleons, , where is the entropy of a black hole and is the sum of the ordinary entropy. The third law The third law of black hole thermodynamics is controversial. Specific counterexamples called extremal black holes fail to obey the rule. The classical third law of thermodynamics, known as the Nernst theorem, which says the entropy of a system must go to zero as the temperature goes to absolute zero, is also not a universal law. However, the systems that fail the classical third law have not been realized in practice, leading to the suggestion that the extremal black holes may not represent the physics of black holes generally. A weaker form of the classical third law, known as the "unattainability principle", states that an infinite number of steps are required to put a system into its ground state. This form of the third law does have an analog in black hole physics. Interpretation of the laws The four laws of black hole mechanics suggest that one should identify the surface gravity of a black hole with temperature and the area of the event horizon with entropy, at least up to some multiplicative constants. If one only considers black holes classically, then they have zero temperature and, by the no-hair theorem, zero entropy, and the laws of black hole mechanics remain an analogy. However, when quantum-mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at a temperature T_H = κ/(2π). From the first law of black hole mechanics, this determines the multiplicative constant of the Bekenstein–Hawking entropy, which is (in geometrized units) S_BH = A/4, which is the entropy of the black hole in Einstein's general relativity. Quantum field theory in curved spacetime can be utilized to calculate the entropy for a black hole in any covariant theory of gravity; this is known as the Wald entropy.
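To give a sense of the magnitudes implied by the formulas above, here is a small Python sketch that evaluates the Hawking temperature and the Bekenstein–Hawking entropy of a Schwarzschild black hole of a given mass, expressed in SI units rather than the geometrized units used in the article. The constant values are approximate and the function names are illustrative; this is only a back-of-the-envelope aid, not part of the original text.

```python
import math

# Physical constants (SI units, approximate values)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
hbar  = 1.055e-34   # reduced Planck constant, J s
k_B   = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature T_H = hbar c^3 / (8 pi G M k_B) of a Schwarzschild black hole."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def bekenstein_hawking_entropy(mass_kg):
    """Entropy S = k_B A / (4 l_P^2), with A = 16 pi (G M / c^2)^2 the horizon area."""
    area = 16 * math.pi * (G * mass_kg / c**2) ** 2
    planck_length_sq = hbar * G / c**3
    return k_B * area / (4 * planck_length_sq)

M = M_SUN  # one solar mass
print(f"T_H  ~ {hawking_temperature(M):.2e} K")     # of order 6e-8 K
print(f"S_BH ~ {bekenstein_hawking_entropy(M):.2e} J/K")  # of order 1e54 J/K
```

The tiny temperature and enormous entropy for a stellar-mass black hole illustrate why Hawking radiation is unobservable for astrophysical black holes while the entropy still dominates any ordinary matter of the same mass.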
Critique While black hole thermodynamics (BHT) has been regarded as one of the deepest clues to a quantum theory of gravity, there remains a philosophical criticism that "the analogy is not nearly as good as is commonly supposed", that it "is often based on a kind of caricature of thermodynamics" and that "it's unclear what the systems in BHT are supposed to be". These criticisms were reexamined in detail, ending with the opposite conclusion: "stationary black holes are not analogous to thermodynamic systems: they are thermodynamic systems, in the fullest sense." Beyond black holes Gary Gibbons and Hawking have shown that black hole thermodynamics is more general than black holes: cosmological event horizons also have an entropy and temperature. More fundamentally, Gerard 't Hooft and Leonard Susskind used the laws of black hole thermodynamics to argue for a general holographic principle of nature, which asserts that consistent theories of gravity and quantum mechanics must be lower-dimensional. Though not yet fully understood in general, the holographic principle is central to theories like the AdS/CFT correspondence. There are also connections between black hole entropy and fluid surface tension. See also Joseph Polchinski Robert Wald Notes Citations Bibliography External links Bekenstein-Hawking entropy on Scholarpedia Black Hole Thermodynamics Black hole entropy on arxiv.org Black holes Branches of thermodynamics
Black hole thermodynamics
[ "Physics", "Chemistry", "Astronomy" ]
2,464
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Thermodynamics", "Branches of thermodynamics", "Stellar phenomena", "Astronomical objects" ]
339,396
https://en.wikipedia.org/wiki/List%20of%20real%20analysis%20topics
This is a list of articles that are considered real analysis topics. See also: glossary of real and complex analysis. General topics Limits Limit of a sequence Subsequential limit – the limit of some subsequence Limit of a function (see List of limits for a list of limits of common functions) One-sided limit – either of the two limits of functions of real variables x, as x approaches a point from above or below Squeeze theorem – confirms the limit of a function via comparison with two other functions Big O notation – used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions Sequences and series (see also list of mathematical series) Arithmetic progression – a sequence of numbers such that the difference between the consecutive terms is constant Generalized arithmetic progression – a sequence of numbers such that the difference between consecutive terms can be one of several possible constants Geometric progression – a sequence of numbers such that each consecutive term is found by multiplying the previous one by a fixed non-zero number Harmonic progression – a sequence formed by taking the reciprocals of the terms of an arithmetic progression Finite sequence – see sequence Infinite sequence – see sequence Divergent sequence – see limit of a sequence or divergent series Convergent sequence – see limit of a sequence or convergent series Cauchy sequence – a sequence whose elements become arbitrarily close to each other as the sequence progresses Convergent series – a series whose sequence of partial sums converges Divergent series – a series whose sequence of partial sums diverges Power series – a series of the form Taylor series – a series of the form Maclaurin series – see Taylor series Binomial series – the Maclaurin series of the function f given by f(x) = (1 + x)^α Telescoping series Alternating series Geometric series Divergent geometric series Harmonic series Fourier series Lambert series Summation methods Cesàro summation Euler summation Lambert summation Borel summation Summation by parts – transforms the summation of products of into other summations Cesàro mean Abel's summation formula More advanced topics Convolution Cauchy product – is the discrete convolution of two sequences Farey sequence – the sequence of completely reduced fractions between 0 and 1 Oscillation – is the behaviour of a sequence of real numbers or a real-valued function, which does not converge, but also does not diverge to +∞ or −∞; and is also a quantitative measure for that. Indeterminate forms – algebraic expressions gained in the context of limits. The indeterminate forms include 0^0, 0/0, 1^∞, ∞ − ∞, ∞/∞, 0 × ∞, and ∞^0.
Convergence Pointwise convergence, Uniform convergence Absolute convergence, Conditional convergence Normal convergence Radius of convergence Convergence tests Integral test for convergence Cauchy's convergence test Ratio test Direct comparison test Limit comparison test Root test Alternating series test Dirichlet's test Stolz–Cesàro theorem – is a criterion for proving the convergence of a sequence Functions Function of a real variable Real multivariable function Continuous function Nowhere continuous function Weierstrass function Smooth function Analytic function Quasi-analytic function Non-analytic smooth function Flat function Bump function Differentiable function Integrable function Square-integrable function, p-integrable function Monotonic function Bernstein's theorem on monotone functions – states that any real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions Inverse function Convex function, Concave function Singular function Harmonic function Weakly harmonic function Proper convex function Rational function Orthogonal function Implicit and explicit functions Implicit function theorem – allows relations to be converted to functions Measurable function Baire one star function Symmetric function Domain Codomain Image Support Differential of a function Continuity Uniform continuity Modulus of continuity Lipschitz continuity Semi-continuity Equicontinuous Absolute continuity Hölder condition – condition for Hölder continuity Distributions Dirac delta function Heaviside step function Hilbert transform Green's function Variation Bounded variation Total variation Derivatives Second derivative Inflection point – found using second derivatives Directional derivative, Total derivative, Partial derivative Differentiation rules Linearity of differentiation Product rule Quotient rule Chain rule Inverse function theorem – gives sufficient conditions for a function to be invertible in a neighborhood of a point in its domain, also gives a formula for the derivative of the inverse function Differentiation in geometry and topology see also List of differential geometry topics Differentiable manifold Differentiable structure Submersion – a differentiable map between differentiable manifolds whose differential is everywhere surjective Integrals (see also Lists of integrals) Antiderivative Fundamental theorem of calculus – a theorem of antiderivatives Multiple integral Iterated integral Improper integral Cauchy principal value – method for assigning values to certain improper integrals Line integral Anderson's theorem – says that the integral of an integrable, symmetric, unimodal, non-negative function over an n-dimensional convex body (K) does not decrease if K is translated inwards towards the origin Integration and measure theory see also List of integration and measure theory topics Riemann integral, Riemann sum Riemann–Stieltjes integral Darboux integral Lebesgue integration Fundamental theorems Monotone convergence theorem – relates monotonicity with convergence Intermediate value theorem – states that for each value between the least upper bound and greatest lower bound of the image of a continuous function there is at least one point in its domain that the function maps to that value Rolle's theorem – essentially states that a differentiable function which attains equal values at two distinct points must have a point somewhere between them where the first derivative is zero Mean value theorem – that given an arc of a differentiable curve, 
there is at least one point on that arc at which the derivative of the curve is equal to the "average" derivative of the arc Taylor's theorem – gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. L'Hôpital's rule – uses derivatives to help evaluate limits involving indeterminate forms Abel's theorem – relates the limit of a power series to the sum of its coefficients Lagrange inversion theorem – gives the Taylor series of the inverse of an analytic function Darboux's theorem – states that all functions that result from the differentiation of other functions have the intermediate value property: the image of an interval is also an interval Heine–Borel theorem – sometimes used as the defining property of compactness Bolzano–Weierstrass theorem – states that each bounded sequence in R^n has a convergent subsequence Extreme value theorem – states that if a function is continuous on the closed and bounded interval [a, b], then it must attain a maximum and a minimum Foundational topics Numbers Real numbers Construction of the real numbers Natural number Integer Rational number Irrational number Completeness of the real numbers Least-upper-bound property Real line Extended real number line Dedekind cut Specific numbers 0 1 0.999... Infinity Sets Open set Neighbourhood Cantor set Derived set (mathematics) Completeness Limit superior and limit inferior Supremum Infimum Interval Partition of an interval Maps Contraction mapping Metric map Fixed point – a point of a function that maps to itself Applied mathematical tools Infinite expressions Continued fraction Series Infinite products Inequalities See list of inequalities Triangle inequality Bernoulli's inequality Cauchy–Schwarz inequality Hölder's inequality Minkowski inequality Jensen's inequality Chebyshev's inequality Inequality of arithmetic and geometric means Means Generalized mean Pythagorean means Arithmetic mean Geometric mean Harmonic mean Geometric–harmonic mean Arithmetic–geometric mean Weighted mean Quasi-arithmetic mean Orthogonal polynomials Classical orthogonal polynomials Hermite polynomials Laguerre polynomials Jacobi polynomials Gegenbauer polynomials Legendre polynomials Spaces Euclidean space Metric space Banach fixed point theorem – guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, provides a method to find them Complete metric space Topological space Function space Sequence space Compact space Measures Lebesgue measure Outer measure Hausdorff measure Dominated convergence theorem – provides sufficient conditions under which two limit processes commute, namely Lebesgue integration and almost everywhere convergence of a sequence of functions.
Field of sets Sigma-algebra Historical figures Michel Rolle (1652–1719) Brook Taylor (1685–1731) Leonhard Euler (1707–1783) Joseph-Louis Lagrange (1736–1813) Joseph Fourier (1768–1830) Bernard Bolzano (1781–1848) Augustin Cauchy (1789–1857) Niels Henrik Abel (1802–1829) Peter Gustav Lejeune Dirichlet (1805–1859) Karl Weierstrass (1815–1897) Eduard Heine (1821–1881) Pafnuty Chebyshev (1821–1894) Leopold Kronecker (1823–1891) Bernhard Riemann (1826–1866) Richard Dedekind (1831–1916) Rudolf Lipschitz (1832–1903) Camille Jordan (1838–1922) Jean Gaston Darboux (1842–1917) Georg Cantor (1845–1918) Ernesto Cesàro (1859–1906) Otto Hölder (1859–1937) Hermann Minkowski (1864–1909) Alfred Tauber (1866–1942) Felix Hausdorff (1868–1942) Émile Borel (1871–1956) Henri Lebesgue (1875–1941) Wacław Sierpiński (1882–1969) Johann Radon (1887–1956) Karl Menger (1902–1985) Related fields of analysis Asymptotic analysis – studies a method of describing limiting behaviour Convex analysis – studies the properties of convex functions and convex sets List of convexity topics Harmonic analysis – studies the representation of functions or signals as superpositions of basic waves List of harmonic analysis topics Fourier analysis – studies Fourier series and Fourier transforms List of Fourier analysis topics List of Fourier-related transforms Complex analysis – studies the extension of real analysis to include complex numbers Functional analysis – studies vector spaces endowed with limit-related structures and the linear operators acting upon these spaces Nonstandard analysis – studies mathematical analysis using a rigorous treatment of infinitesimals. See also Calculus, the classical calculus of Newton and Leibniz. Non-standard calculus, a rigorous application of infinitesimals, in the sense of non-standard analysis, to the classical calculus of Newton and Leibniz. Outlines of mathematics and logic Outlines Mathematics-related lists
List of real analysis topics
[ "Mathematics" ]
2,131
[ "nan" ]
339,488
https://en.wikipedia.org/wiki/Cloning%20vector
A cloning vector is a small piece of DNA that can be stably maintained in an organism, and into which a foreign DNA fragment can be inserted for cloning purposes. The cloning vector may be DNA taken from a virus, the cell of a higher organism, or it may be the plasmid of a bacterium. The vector contains features that allow for the convenient insertion of a DNA fragment into the vector or its removal from the vector, for example through the presence of restriction sites. The vector and the foreign DNA may be treated with a restriction enzyme that cuts the DNA, and DNA fragments thus generated contain either blunt ends or overhangs known as sticky ends, and vector DNA and foreign DNA with compatible ends can then be joined by molecular ligation. After a DNA fragment has been cloned into a cloning vector, it may be further subcloned into another vector designed for more specific use. There are many types of cloning vectors, but the most commonly used ones are genetically engineered plasmids. Cloning is generally first performed using Escherichia coli, and cloning vectors in E. coli include plasmids, bacteriophages (such as phage λ), cosmids, and bacterial artificial chromosomes (BACs). Some DNA, however, cannot be stably maintained in E. coli, for example very large DNA fragments, and other organisms such as yeast may be used. Cloning vectors in yeast include yeast artificial chromosomes (YACs). Features of a cloning vector All commonly used cloning vectors in molecular biology have key features necessary for their function, such as a suitable cloning site and selectable marker. Others may have additional features specific to their use. For reason of ease and convenience, cloning is often performed using E. coli. Thus, the cloning vectors used often have elements necessary for their propagation and maintenance in E. coli, such as a functional origin of replication (ori). The ColE1 origin of replication is found in many plasmids. Some vectors also include elements that allow them to be maintained in another organism in addition to E. coli, and these vectors are called shuttle vector. Cloning site All cloning vectors have features that allow a gene to be conveniently inserted into the vector or removed from it. This may be a multiple cloning site (MCS) or polylinker, which contains many unique restriction sites. The restriction sites in the MCS are first cleaved by restriction enzymes, then a PCR-amplified target gene also digested with the same enzymes is ligated into the vectors using DNA ligase. The target DNA sequence can be inserted into the vector in a specific direction if so desired. The restriction sites may be further used for sub-cloning into another vector if necessary. Other cloning vectors may use topoisomerase instead of ligase and cloning may be done more rapidly without the need for restriction digest of the vector or insert. In this TOPO cloning method a linearized vector is activated by attaching topoisomerase I to its ends, and this "TOPO-activated" vector may then accept a PCR product by ligating both the 5' ends of the PCR product, releasing the topoisomerase and forming a circular vector in the process. Another method of cloning without the use of DNA digest and ligase is by DNA recombination, for example as used in the Gateway cloning system. The gene, once cloned into the cloning vector (called entry clone in this method), may be conveniently introduced into a variety of expression vectors by recombination. 
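As a small illustration of how a cloning site is used in practice, the sketch below scans a DNA sequence for the recognition site of a commonly used restriction enzyme (EcoRI, which recognises GAATTC) in plain Python. The example sequence and function name are made-up illustrations, not taken from the article; real workflows would more likely use a dedicated library such as Biopython.

```python
# Sketch: locate restriction-enzyme recognition sites in a DNA sequence.
# EcoRI recognises the palindromic site GAATTC; sites found this way indicate
# where the enzyme would cut both the vector's multiple cloning site and the
# insert, producing compatible sticky ends that DNA ligase can join.

def find_sites(sequence, recognition_site="GAATTC"):
    """Return the 0-based start positions of every occurrence of the site."""
    sequence = sequence.upper()
    positions = []
    start = sequence.find(recognition_site)
    while start != -1:
        positions.append(start)
        start = sequence.find(recognition_site, start + 1)
    return positions

# A made-up fragment containing two EcoRI sites.
fragment = "ttacGAATTCggatccaaGAATTCtgca"
print(find_sites(fragment))  # [4, 18]
```

Choosing an enzyme whose site occurs in the vector's polylinker but not inside the insert is exactly the kind of check this sort of scan supports.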
Selectable marker A selectable marker is carried by the vector to allow the selection of positively transformed cells. Antibiotic resistance is often used as marker, an example being the beta-lactamase gene, which confers resistance to the penicillin group of beta-lactam antibiotics like ampicillin. Some vectors contain two selectable markers, for example the plasmid pACYC177 has both ampicillin and kanamycin resistance gene. Shuttle vector which is designed to be maintained in two different organisms may also require two selectable markers, although some selectable markers such as resistance to zeocin and hygromycin B are effective in different cell types. Auxotrophic selection markers that allow an auxotrophic organism to grow in minimal growth medium may also be used; examples of these are LEU2 and URA3 which are used with their corresponding auxotrophic strains of yeast. Another kind of selectable marker allows for the positive selection of plasmid with cloned gene. This may involve the use of a gene lethal to the host cells, such as barnase, Ccda, and the parD/parE toxins. This typically works by disrupting or removing the lethal gene during the cloning process, and unsuccessful clones where the lethal gene still remains intact would kill the host cells, therefore only successful clones are selected. Reporter gene Reporter genes are used in some cloning vectors to facilitate the screening of successful clones by using features of these genes that allow successful clone to be easily identified. Such features present in cloning vectors may be the lacZα fragment for α complementation in blue-white selection, and/or marker gene or reporter genes in frame with and flanking the MCS to facilitate the production of fusion proteins. Examples of fusion partners that may be used for screening are the green fluorescent protein (GFP) and luciferase. Elements for expression A cloning vector need not contain suitable elements for the expression of a cloned target gene, such as a promoter and ribosomal binding site (RBS), many however do, and may then work as an expression vector. The target DNA may be inserted into a site that is under the control of a particular promoter necessary for the expression of the target gene in the chosen host. Where the promoter is present, the expression of the gene is preferably tightly controlled and inducible so that proteins are only produced when required. Some commonly used promoters are the T7 and lac promoters. The presence of a promoter is necessary when screening techniques such as blue-white selection are used. Cloning vectors without promoter and RBS for the cloned DNA sequence are sometimes used, for example when cloning genes whose products are toxic to E. coli cells. Promoter and RBS for the cloned DNA sequence are also unnecessary when first making a genomic or cDNA library of clones since the cloned genes are normally subcloned into a more appropriate expression vector if their expression is required. Some vectors are designed for transcription only with no heterologous protein expressed, for example for in vitro mRNA production. These vectors are called transcription vectors. They may lack the sequences necessary for polyadenylation and termination, therefore may not be used for protein production. Types of cloning vectors A large number of cloning vectors are available, and choosing the vector may depend upon a number of factors, such as the size of the insert, copy number and cloning method. 
Large inserts may not be stably maintained in a general cloning vector, especially in those with a high copy number; cloning large fragments may therefore require a more specialised cloning vector. Plasmid Plasmids are autonomously replicating circular extra-chromosomal DNA. They are the standard cloning vectors and the ones most commonly used. Most general plasmids may be used to clone DNA inserts of up to 15 kb in size. One of the earliest commonly used cloning vectors is the pBR322 plasmid. Other cloning vectors include the pUC series of plasmids, and a large number of different cloning plasmid vectors are available. Many plasmids have high copy numbers, for example, pUC19 has a copy number of 500-700 copies per cell, and a high copy number is useful as it produces a greater yield of recombinant plasmid for subsequent manipulation. However, low-copy-number plasmids may be preferred in certain circumstances, for example, when the protein from the cloned gene is toxic to the cells. Some plasmids contain an M13 bacteriophage origin of replication and may be used to generate single-stranded DNA. These are called phagemids, and examples are the pBluescript series of cloning vectors. Bacteriophage The bacteriophages used for cloning are the λ phage and the M13 phage. There is an upper limit on the amount of DNA that can be packed into a phage (a maximum of 53 kb), therefore, to allow foreign DNA to be inserted into phage DNA, phage cloning vectors may need to have some non-essential genes deleted, for example the genes for lysogeny, since using phage λ as a cloning vector involves only the lytic cycle. There are two kinds of λ phage vectors - insertion vectors and replacement vectors. Insertion vectors contain a unique cleavage site whereby foreign DNA of 5–11 kb may be inserted. In replacement vectors, the cleavage sites flank a region containing genes not essential for the lytic cycle; this region may be deleted and replaced by the DNA insert in the cloning process, and a larger DNA insert of 8–24 kb may be accommodated. There is also a lower size limit for DNA that can be packed into a phage, and vector DNA that is too small cannot be properly packaged into the phage. This property can be used for selection - a vector without an insert may be too small, therefore only vectors with an insert may be selected for propagation. Cosmid Cosmids are plasmids that incorporate a segment of bacteriophage λ DNA containing the cohesive end site (cos), which contains elements required for packaging DNA into λ particles. With a suitable origin of replication (ori), a cosmid can replicate as a plasmid. Cosmids are normally used to clone large DNA fragments of between 28 and 45 kb. Bacterial artificial chromosome Inserts of up to 350 kb can be cloned in a bacterial artificial chromosome (BAC). BACs are maintained in E. coli with a copy number of only 1 per cell. BACs are based on the F plasmid; another artificial chromosome, called the PAC, is based on the P1 phage. Yeast artificial chromosome Yeast artificial chromosomes are used as vectors to clone DNA fragments of more than 1 megabase (1 Mb = 1000 kb) in size. They are useful in cloning larger DNA fragments as required in mapping genomes such as in the Human Genome Project. A yeast artificial chromosome contains a telomeric sequence and an autonomously replicating sequence (features required to replicate linear chromosomes in yeast cells). These vectors also contain suitable restriction sites to clone foreign DNA as well as genes to be used as selectable markers.
Human artificial chromosome Human artificial chromosomes may be potentially useful as gene transfer vectors for gene delivery into human cells, and as tools for expression studies and for determining human chromosome function. They can carry very large DNA fragments (there is no upper limit on size for practical purposes), therefore they do not have the limited cloning capacity of other vectors, and they also avoid the possible insertional mutagenesis caused by integration into host chromosomes by a viral vector. Animal and plant viral vectors Viruses that infect plant and animal cells have also been manipulated to introduce foreign genes into plant and animal cells. The natural ability of viruses to adsorb to cells, introduce their DNA and replicate has made them ideal vehicles for transferring foreign DNA into eukaryotic cells in culture. A vector based on Simian virus 40 (SV40) was used in the first cloning experiment involving mammalian cells. A number of vectors based on other types of viruses, such as adenoviruses and papillomaviruses, have been used to clone genes in mammals. At present, retroviral vectors are popular for cloning genes in mammalian cells. In the case of plants, viruses such as Cauliflower mosaic virus, Tobacco mosaic virus and geminiviruses have been used with limited success. Screening: example of the blue/white screen Many general purpose vectors such as pUC19 usually include a system for detecting the presence of a cloned DNA fragment, based on the loss of an easily scored phenotype. The most widely used is the gene coding for E. coli β-galactosidase, whose activity can easily be detected by the ability of the enzyme it encodes to hydrolyze the soluble, colourless substrate X-gal (5-bromo-4-chloro-3-indolyl-beta-d-galactoside) into an insoluble, blue product (5,5'-dibromo-4,4'-dichloro indigo). Cloning a fragment of DNA within the vector-based lacZα sequence of the β-galactosidase gene prevents the production of an active enzyme. If X-gal is included in the selective agar plates, transformant colonies are generally blue in the case of a vector with no inserted DNA and white in the case of a vector containing a fragment of cloned DNA. See also Vector (molecular biology) Plant transformation vector IMAGE cDNA clones fosmid Golden Gate Cloning References Genetics techniques Molecular biology Cloning Plasmids
Cloning vector
[ "Chemistry", "Engineering", "Biology" ]
2,750
[ "Genetics techniques", "Cloning", "Plasmids", "Genetic engineering", "Bacteria", "Molecular biology", "Biochemistry" ]
339,530
https://en.wikipedia.org/wiki/Refinery
A refinery is a production facility composed of a group of chemical engineering unit processes and unit operations refining certain materials or converting raw material into products of value. Types of refineries Different types of refineries are as follows: Petroleum oil refinery, which converts crude oil into high-octane motor spirit (gasoline/petrol), diesel oil, liquefied petroleum gases (LPG), kerosene, heating fuel oils, hexane, lubricating oils, bitumen, and petroleum coke Edible oil refinery which converts cooking oil into a product that is uniform in taste, smell and appearance, and stability Natural gas processing plant, which purifies and converts raw natural gas into residential, commercial and industrial fuel gas, and also recovers natural gas liquids (NGL) such as ethane, propane, butanes and pentanes Sugar refinery, which converts sugar cane and sugar beets into crystallized sugar and sugar syrups Salt refinery, which cleans common salt (NaCl), produced by the solar evaporation of sea water, followed by washing and re-crystallization Metal refineries refining metals such as alumina, copper, gold, lead, nickel, silver, uranium, zinc, magnesium and cobalt Iron refining, a stage of refining pig iron (typically grey cast iron to white cast iron), before fining, which converts pig iron into bar iron or steel A typical oil refinery The image below is a schematic flow diagram of a typical oil refinery depicting various unit processes and the flow of intermediate products between the inlet crude oil feedstock and the final products. The diagram depicts only one of the hundreds of different configurations. It does not include any of the usual facilities providing utilities such as steam, cooling water, and electric power as well as storage tanks for crude oil feedstock and for intermediate products and end products. Natural gas processing plant The image below is a schematic block flow diagram of a typical natural gas processing plant. It shows various unit processes converting raw natural gas into gas pipelined to end users. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL) propane, butanes and natural gasoline (denoted as pentanes +). Sugar refining Sugar is generally produced from sugarcane or sugar beets. As the global production of sugar from sugarcane is at least twice the production from sugar beets, this section focuses on sugarcane. Milling Sugarcane is traditionally refined into sugar in two stages. In the first stage, raw sugar is produced by the milling of harvested sugarcane. In a sugar mill, sugarcane is washed, chopped, and shredded by revolving knives. The shredded cane is mixed with water and crushed. The juices (containing 10-15 percent sucrose) are collected and mixed with lime to adjust pH to 7, prevent decay into glucose and fructose, and precipitate impurities. The lime and other suspended solids are settled out, and the clarified juice is concentrated in a multiple-effect evaporator to make a syrup with about 60 weight percent sucrose. The syrup is further concentrated under vacuum until it becomes supersaturated and is then seeded with crystalline sugar. Upon cooling, sugar crystallizes out of the syrup. Centrifuging then separates the sugar from the remaining liquid (molasses). Raw sugar has a yellow to brown color. Sugar is sometimes consumed locally at this stage but usually undergoes further purification. 
Sulfur dioxide is bubbled through the cane juice prior to crystallization in a process known as "sulfitation". This process inhibits color-forming reactions and stabilizes the sugar juices to produce "mill white" or "plantation white" sugar. The fibrous solids, called bagasse, remaining after the crushing of the shredded sugarcane are burned for fuel, which helps a sugar mill to become self-sufficient in energy. Any excess bagasse can be used for animal feed, to produce paper, or burned to generate electricity for the local power grid. Refining The second stage is often executed in heavy sugar-consuming regions such as North America, Europe, and Japan. In the second stage, white sugar is produced that is more than 99 percent pure sucrose. In such refineries, raw sugar is further purified by fractional crystallization. References Chemical processes Industrial buildings and structures
Refinery
[ "Chemistry" ]
899
[ "Chemical process engineering", "Chemical processes", "nan" ]
339,542
https://en.wikipedia.org/wiki/Semiprime
In mathematics, a semiprime is a natural number that is the product of exactly two prime numbers. The two primes in the product may equal each other, so the semiprimes include the squares of prime numbers. Because there are infinitely many prime numbers, there are also infinitely many semiprimes. Semiprimes are also called biprimes, since they include two primes, or second numbers, by analogy with how "prime" means "first". Examples and variations The semiprimes less than 100 are: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34, 35, 38, 39, 46, 49, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94 and 95. Semiprimes that are not square numbers are called discrete, distinct, or squarefree semiprimes: 6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39, 46, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95, ... The semiprimes are the case k = 2 of the k-almost primes, numbers with exactly k prime factors. However, some sources use "semiprime" to refer to a larger set of numbers, the numbers with at most two prime factors (including unit (1), primes, and semiprimes). These are: 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 25, 26, ... Formula for number of semiprimes A semiprime counting formula was discovered by E. Noel and G. Panos in 2005. Let π₂(n) denote the number of semiprimes less than or equal to n. Then π₂(n) = Σ_{k=1}^{π(√n)} [π(n/p_k) − k + 1], where π(x) is the prime-counting function and p_k denotes the kth prime. Properties Semiprime numbers have no composite numbers as factors other than themselves. For example, the number 26 is semiprime and its only factors are 1, 2, 13, and 26, of which only 26 is composite. For a squarefree semiprime n = pq (with p ≠ q) the value of Euler's totient function φ(n) (the number of positive integers less than or equal to n that are relatively prime to n) takes the simple form φ(n) = (p − 1)(q − 1) = n − (p + q) + 1. This calculation is an important part of the application of semiprimes in the RSA cryptosystem. For a square semiprime n = p², the formula is again simple: φ(n) = p(p − 1) = n − p. Applications Semiprimes are highly useful in the area of cryptography and number theory, most notably in public key cryptography, where they are used by RSA and pseudorandom number generators such as Blum Blum Shub. These methods rely on the fact that finding two large primes and multiplying them together (resulting in a semiprime) is computationally simple, whereas finding the original factors appears to be difficult. In the RSA Factoring Challenge, RSA Security offered prizes for the factoring of specific large semiprimes and several prizes were awarded. The original RSA Factoring Challenge was issued in 1991, and was replaced in 2001 by the New RSA Factoring Challenge, which was later withdrawn in 2007. In 1974 the Arecibo message was sent with a radio signal aimed at a star cluster. It consisted of 1679 binary digits intended to be interpreted as a bitmap image. The number 1679 was chosen because it is a semiprime and therefore can be arranged into a rectangular image in only two distinct ways (23 rows and 73 columns, or 73 rows and 23 columns). See also Chen's theorem Sphenic number, a product of three distinct primes Parity problem (sieve theory) References External links Integer sequences Prime numbers Theory of cryptography
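The counting and totient formulas above can be checked directly with a short program. The following Python sketch is purely illustrative (the helper names prime_factor_count and is_semiprime are arbitrary, not from any standard library): it lists the semiprimes below 100 and evaluates φ(n) = (p − 1)(q − 1) for a small RSA-style modulus.

def prime_factor_count(n):
    # Number of prime factors of n, counted with multiplicity.
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def is_semiprime(n):
    # A semiprime has exactly two prime factors, counted with multiplicity.
    return prime_factor_count(n) == 2

print([n for n in range(2, 100) if is_semiprime(n)])   # 4, 6, 9, ..., 95

# Totient of a squarefree semiprime n = p*q, as used when generating RSA keys.
p, q = 61, 53                    # small illustrative primes; real RSA uses far larger ones
n = p * q
print(n, (p - 1) * (q - 1))      # 3233 3120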
Semiprime
[ "Mathematics" ]
651
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
339,555
https://en.wikipedia.org/wiki/Almost%20prime
In number theory, a natural number is called k-almost prime if it has k prime factors. More formally, a number n is k-almost prime if and only if Ω(n) = k, where Ω(n) is the total number of primes in the prime factorization of n (it can also be seen as the sum of all the primes' exponents): Ω(n) = a₁ + a₂ + ⋯ + a_r when n = p₁^a₁ · p₂^a₂ ⋯ p_r^a_r. A natural number is thus prime if and only if it is 1-almost prime, and semiprime if and only if it is 2-almost prime. The set of k-almost primes is usually denoted by P_k. The smallest k-almost prime is 2^k. The first few k-almost primes are: The number of positive integers less than or equal to n with exactly k prime divisors (not necessarily distinct) is asymptotic to: (n / log n) · (log log n)^(k−1) / (k − 1)!, a result of Landau. See also the Hardy–Ramanujan theorem. Properties The product of a j-almost prime and a k-almost prime is a (j + k)-almost prime. A k-almost prime cannot have a j-almost prime as a factor for all j > k. References External links Integer sequences Prime numbers
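Landau's asymptotic and the definition of Ω can likewise be illustrated with a small script. The Python sketch below is illustrative only (big_omega and k_almost_primes are ad hoc names): it lists the 3-almost primes up to 50 and compares an exact count against Landau's leading-order estimate, whose convergence is known to be slow.

import math

def big_omega(n):
    # Omega(n): total number of prime factors of n, counted with multiplicity.
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def k_almost_primes(k, limit):
    # All k-almost primes up to and including limit.
    return [n for n in range(2, limit + 1) if big_omega(n) == k]

print(k_almost_primes(3, 50))    # [8, 12, 18, 20, 27, 28, 30, 42, 44, 45, 50]

# Landau: roughly (x / ln x) * (ln ln x)**(k-1) / (k-1)! integers up to x have k prime factors.
x, k = 10**5, 3
exact = len(k_almost_primes(k, x))
estimate = (x / math.log(x)) * math.log(math.log(x)) ** (k - 1) / math.factorial(k - 1)
print(exact, round(estimate))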
Almost prime
[ "Mathematics" ]
212
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
339,558
https://en.wikipedia.org/wiki/Hedge
A hedge or hedgerow is a line of closely spaced (3 feet or closer) shrubs and sometimes trees, planted and trained to form a barrier or to mark the boundary of an area, such as between neighbouring properties. Hedges that are used to separate a road from adjoining fields or one field from another, and are of sufficient age to incorporate larger trees, are known as hedgerows. Often they serve as windbreaks to improve conditions for the adjacent crops, as in bocage country. When clipped and maintained, hedges are also a simple form of topiary. A hedge often operates as, and sometimes is called, a "live fence". This may either consist of individual fence posts connected with wire or other fencing material, or it may be in the form of densely planted hedges without interconnecting wire. This is common in tropical areas where low-income farmers can demarcate properties and reduce maintenance of fence posts that otherwise deteriorate rapidly. Many other benefits can be obtained depending on the species chosen. History The development of hedges over the centuries is preserved in their structure. The first hedges enclosed land for cereal crops during the Neolithic Age (4000–6000 years ago). The farms were of about , with fields about for hand cultivation. Some hedges date from the Bronze and Iron Ages, 2000–4000 years ago, when traditional patterns of landscape became established. Others were built during the Medieval field rationalisations; more originated in the industrial boom of the 18th and 19th centuries, when heaths and uplands were enclosed. Many hedgerows separating fields from lanes in the United Kingdom, Ireland and the Low Countries are estimated to have been in existence for more than seven hundred years, originating in the medieval period. The root word of 'hedge' is much older: it appears in the Old English language, in German (Hecke), and Dutch (haag) to mean 'enclosure', as in the name of the Dutch city The Hague, or more formally 's Gravenhage, meaning The Count's hedge. Charles the Bald is recorded as complaining in 864, at a time when most official fortifications were constructed of wooden palisades, that some unauthorized men were constructing haies et fertés; tightly interwoven hedges of hawthorns. In parts of Britain, early hedges were destroyed to make way for the manorial open-field system. Many were replaced after the inclosure acts, then removed again during modern agricultural intensification, and now some are being replanted for wildlife. As of 2024 in a study using Lidar by the UK Centre for Ecology & Hydrology England alone was found to have a total of 390,000 km of hedgerows, which would span the circumference of the earth 10 times. Composition A hedge may consist of a single species or several, typically mixed at random. In many newly planted British hedges, at least 60 per cent of the shrubs are hawthorn, blackthorn, and (in the southwest) hazel, alone or in combination. The first two are particularly effective barriers to livestock. In North America, Maclura pomifera (i.e., hedge apple) was grown to form a barrier to exclude free-range livestock from vegetable gardens and corn fields. Other shrubs and trees used include holly, beech, oak, ash, and willow; the last three can become very tall. Of the hedgerows in the Normandy region of France, Martin Blumenson said The hedgerow is a fence, half earth, half hedge. The wall at the base is a dirt parapet that varies in thickness from one to four or more feet and in height from three to twelve feet. 
Growing out of the wall is a hedge of hawthorn, brambles, vines, and trees, in thickness from one to three feet. Originally property demarcations, hedgerows protect crops and cattle from the ocean winds that sweep across the land. The hedgerows of Normandy became barriers that slowed the advance of Allied troops following the D-Day invasion during World War II. Allied armed forces modified their armored vehicles to facilitate breaking out of their beachheads into the Normandy bocage. Species Formal, or modern garden hedges are grown in many varieties, including the following species: Berberis thunbergii – native to Japan and eastern Asia Buxus sempervirens (box) – native to western and southern Europe, northwest Africa, and southwest Asia, from southern England south to northern Morocco, and east through the northern Mediterranean region to Turkey. Carpinus betulus (European hornbeam) – native to Western Asia and central, eastern, and southern Europe, including southern England. Crataegus monogyna (hawthorn) – native to Europe, northwestern Africa, and West Asia Fagus sylvatica (European green beech) – native from northern Europe, in Sweden, Denmark, Norway, Germany, Poland, Switzerland, Bulgaria, eastern parts of Russia, Romania, through central Europe to France, southern England, northern Portugal, central Spain, and east to northwest Turkey where it intergrades with the oriental beech (Fagus orientalis) Fagus sylvatica 'Purpurea' (European purple beech) – a variant of the above Ilex aquifolium (European holly) – native to western and southern Europe, northwest Africa, and southwest Asia Ligustrum ovalifolium (privet) – native to Japan and Korea Ligustrum × ibolium (north privet) – native to Japan and Korea Photinia × fraseri (red robin) – a hybrid between Photinia glabra and Photinia serratifolia, native to Japan and to China, Taiwan, Japan, the Philippines, Indonesia, and India, respectively Prunus laurocerasus (common cherry-laurel) – native to regions bordering the Black Sea in southwestern Asia and southeastern Europe, from Albania and Bulgaria east through Turkey to the Caucasus Mountains and northern Iran Prunus lusitanica (Portuguese cherry-laurel) – native to southwestern France, Spain, Portugal, Morocco, and Macaronesia (the Azores, Canary Islands and Madeira) Quercus ilex (holm oak) – native to the Mediterranean region Taxus baccata (yew) – native to Western Europe, Central Europe and Southern Europe (including Great Britain and Ireland), Northwest Africa, northern Iran, and Southwest Asia Thuja occidentalis (yellow ribbon; northern white cedar) – native to eastern Canada and much of the north-central and northeastern United States Thuja plicata (western red cedar) – native to the Pacific Northwest of North America Hedgerow trees Hedgerow trees are trees that grow in hedgerows but have been allowed to reach their full height and width. There are thought to be around 1.8 million hedgerow trees in Britain (counting only those whose canopies do not touch others) with perhaps 98% of these being in England and Wales. Hedgerow trees are both an important part of the English landscape and valuable habitats for wildlife. Many hedgerow trees are veteran trees and therefore of great wildlife interest. The most common species are English oak (Quercus robur) and ash (Fraxinus excelsior), though in the past field elm (Ulmus minor 'Atinia') would also have been common. Around 20 million elm trees, most of them hedgerow trees, were felled or died through Dutch elm disease in the late 1960s. 
Many other species are used, notably including common beech (Fagus sylvatica) and various nut and fruit trees. The age structure of British hedgerow trees is old because the number of new trees is not sufficient to replace the number of trees that are lost through age or disease. New trees can be established by planting but it is generally more successful to leave standard trees behind when laying hedges. Trees should be left at no closer than apart and the distances should vary so as to create a more natural landscape. The distance allows the young trees to develop full crowns without competing or producing too much shade. It is suggested that hedgerow trees cause gaps in hedges but it has been found that cutting some lower branches off lets sufficient light through to the hedge below to allow it to grow. Importance of hedgerows Hedges are recognised as part of a cultural heritage and historical record and for their great value to wildlife and the landscape. Increasingly, they are valued too for the major role they have to play in preventing soil loss and reducing pollution, and for their potential to regulate water supply and to reduce flooding. There is increased earthworm diversity in the soils under hedgerows which also help to store organic carbon and support distinct communities of arbuscular mycorrhizal (AM) fungi. In addition to maintaining the health of the environment, hedgerows also play a huge role in providing shelter for smaller animals like birds and insects. A recent study by Emma Coulthard mentioned the possibility that hedgerows may act as guides for moths, like Acronicta rumicis, when flying from one location to another. As moths are nocturnal, it is highly unlikely that they use visual aids as guides, but rather are following sensory or olfactory markers on the hedgerows. Larkin et al. 2013 find 100% of northwest European farms have hedges, providing 43% of the wildlife habitat there. Historically, hedges were used as a source of firewood, and for providing shelter from wind, rain and sun for crops, farm animals and people. Today, mature hedges' uses include screening unsightly developments. In England and Wales agricultural hedgerow removal is controlled by the Hedgerows Regulations 1997, administered by the local planning authority. Dating Hedges that have existed for hundreds of years are colonised by additional species. This may be useful as a means of determining the age of the hedge. Hooper's rule (or Hooper's law, named after Dr. Max Hooper) is based on ecological data obtained from hedges of known age, and suggests that the age of a hedge can be roughly estimated by counting the number of woody species in a thirty-yard section and multiplying by 110 years. Max Hooper published his original formula in the book Hedges in 1974. This method is only a rule of thumb, and can be off by a couple of centuries; it should always be backed up by documentary evidence, if possible, and take into account other factors. Caveats include the fact that planted hedgerows, hedgerows with elm, and hedgerows in the north of England tend not to follow the rule as closely. The formula also does not work on hedges more than a thousand years old. Hooper's scheme is important not least for its potential use in determining what an important hedgerow is, given their protection in The Hedgerows Regulations (1997; No. 1160) of the Department of the Environment, based on age and other factors. 
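Hooper's rule reduces to a single multiplication, shown here as a minimal Python sketch (an illustration only; the function name is invented) that should be read with the caveats above in mind.

def hooper_age_estimate(woody_species_count):
    # Rough age in years: woody species counted in a thirty-yard section, times 110.
    return woody_species_count * 110

print(hooper_age_estimate(5))    # a five-species section suggests roughly 550 years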
Removal Hedgerow removal is part of the transition of arable land from low-intensity to high-intensity farming. The removal of hedgerows gives larger fields making the sowing and harvesting of crops easier, faster and cheaper, and giving a larger area to grow the crops, increasing yield and profits. Hedgerows serve as important wildlife corridors, especially in the United Kingdom where they link the country's fractured ancient woodland. They also serve as a habitat for birds and other animals. As the land within a few metres of hedges is difficult to plough, sow, or spray with herbicides, the land around hedges also typically includes high plant biodiversity. Hedges also serve to stabilise the soil and on slopes help prevent soil creep and leaching of minerals and plant nutrients. Removal thus weakens the soil and leads to erosion. In the United Kingdom hedgerow removal has been occurring since World War I as technology made intensive farming possible, and the increasing population demanded more food from the land. The trend has slowed down somewhat since the 1980s when cheap food imports reduced the demand on British farmland, and as the European Union Common Agricultural Policy made environmental projects financially viable. Under reforms to national and EU agricultural policies, the environmental impact of farming features more highly and in many places hedgerow conservation and replanting is taking place. In England and Wales agricultural hedgerow removal is controlled by the Hedgerows Regulations 1997, administered by the local planning authority. Hedge laying If hedges are not maintained and trimmed regularly, gaps tend to form at the base over many years. In essence, hedgelaying consists of cutting most of the way through the stem of each plant near the base, bending it over and interweaving or pleaching it between wooden stakes. This also encourages new growth from the base of each plant. Originally, the main purpose of hedgelaying was to ensure the hedge remained stock-proof. Some side branches were also removed and used as firewood. The maintenance and laying of hedges to form an impenetrable barrier for farm animals is a skilled art. In Britain there are many local hedgelaying traditions, each with a distinct style. Hedges are still being laid today not only for aesthetic and functional purposes but also for their ecological role in helping wildlife and protecting against soil erosion. Hedge trimming An alternative to hedge laying is trimming using a tractor-mounted flail cutter or circular saw, or a hedge trimmer. The height of the cutting can be increased a little every year. Trimming a hedge helps to promote bushy growth. If a flail cutter is used, then the flail must be kept sharp to ensure that the cutting is effective on the hedge. The disadvantage of this is that the hedge species takes a number of years before it will flower again and subsequently bear fruit for wildlife and people. If the hedge is trimmed repeatedly at the same height, a 'hard knuckle' will start to form at that height – similar to the shape of a pollarded tree. Additionally, hedge trimming causes habitat destruction to species like the small eggar moth which spend nearly their entire life cycle in blackthorn and hawthorn hedgerow. This has led to a decline in the moth's population. It is now nationally scarce in Britain. General hedge management A 'hedgerow management' scale has been devised by an organisation called Hedgelink UK ranging from 1 to 10. 
'1' describes the action to take for a heavily over trimmed hedge, '5' is a healthy dense hedgerow more than 2 metres in height, and '10' is a hedge that has not been managed at all and has become a line of trees. The RSPB suggest that hedges in Britain not be cut between March and August. This is to protect nesting birds, which are protected by law. Coppicing The techniques of coppicing and hard pollarding can be used to rejuvenate a hedge where hedge laying is not appropriate. Types Instant hedge The term instant hedge has become known since early this century for hedging plants that are planted collectively in such a way as to form a mature hedge from the moment they are planted together, with a height of at least 1.2 metres. They are usually created from hedging elements or individual plants which means very few are actually hedges from the start, as the plants need time to grow and entwine to form a real hedge. An example of an instant hedge can be seen at the Elveden Hall Estate in East Anglia, where fields of hedges can be seen growing in cultivated rows, since 1998. The development of this type of mature hedge has led to such products being specified by landscape architects, garden designers, property developers, insurance companies, sports clubs, schools and local councils, as well as many private home owners. Demand has also increased from planning authorities in specifying to developers that mature hedges are planted rather than just whips (a slender, unbranched shoot or plant). A 'real' instant hedge could be defined as having a managed root growth system allowing the hedge to be sold with a continuous rootstrips (rather than individual plants) which then enables year-round planting. During its circa 8-year production time, all stock should be irrigated, clipped and treated with controlled-release nutrients to optimise health. Quickset hedge A quickset hedge is a type of hedge created by planting live whitethorn (common hawthorn) cuttings directly into the earth (hazel does not sprout from cuttings). Once planted, these cuttings root and form new plants, creating a dense barrier. The technique is ancient, and the term quickset hedge is first recorded in 1484. The word quick in the name refers to the fact that the cuttings are living (as in "the quick and the dead"), and not to the speed at which the hedge grows, although it will establish quite rapidly. An alternative meaning of quickset hedging is any hedge formed of living plants or of living plants combined with a fence. The technique of quicksetting can also be used for many other shrubs and trees. Devon hedge A Devon hedge is an earth bank topped with shrubs. The bank may be faced with turf or stone. When stone-faced, the stones are generally placed on edge, often laid flat around gateways. A quarter of Devon's hedges are thought to be over 800 years old. There are approximately 33,000 miles (53,000 km) of Devon hedge, which is more than any other county. Traditional farming throughout the county has meant that fewer Devon hedges have been removed than elsewhere. Devon hedges are particularly important for wildlife habitat. Around 20% of the UK's species-rich hedges occur within Devon. Over 600 species of flowering plants, 1500 species of insects, 65 species of birds and 20 species of mammals have been recorded living or feeding in Devon hedges. 
Hedge laying in Devon is usually referred to as steeping and involves cutting and laying steepers (the stems) along the top of the bank and securing them with crooks (forked sticks). Cornish hedge A Cornish hedge is an earth bank with stones. It normally consists of large stone blocks constructed either side of a narrow earth bank, and held in place with interlocking stones. The neat rows of square stones at the top are called "edgers". The top of the hedge is planted with grass turf. Sometimes hedging plants or trees are planted on the hedge to increase its windbreaking height. A rich flora develops over the lifespan of a Cornish hedge. The Cornish hedge contributes to the distinctive field-pattern of the Cornish landscape and its semi-natural wildlife habitat. There are about of hedges in Cornwall today. Hedges suffer from the effects of tree roots, burrowing rabbits, rain, wind, farm animals and people. How often repairs are needed depends on how well the hedge was built, its stone, and what has happened to it since it was last repaired. Typically a hedge needs a cycle of repair every 150 years or so, or less often if it is fenced. Building new hedges, and repairing existing hedges, is a skilled craft, and there are professional hedgers in Cornwall. The Cornish Hedge Research and Education Group (CHREG) supports the development of traditional skills and works with Cornwall Council, FWAG (Farming and Wildlife Advisory Group), Stone Academy Bodmin, Cornwall AONB, Country Trust and professional hedgers to ensure the future of Cornish Hedges in the landscape. In gardening Hedges, both clipped and unclipped, are often used as ornament in the layout of gardens. Typical woody plants for clipped hedges include privet, hawthorn, beech, yew, Leyland cypress, hemlock, arborvitae, barberry, box, holly, oleander, lavender, among others. An early 20th-century fashion was for tapestry hedges, using a mix of golden, green and glaucous dwarf conifers, or beech and copper beech. Unclipped hedges take up more space, generally at a premium in modern gardens, but compensate by flowering. Rosa multiflora is widely used as a dense hedge along the central reservation of dual-carriageway roads, such as parkways in the United States. In mild climates, more exotic flowering hedges are formed, using Ceanothus, Hibiscus, Camellia, orange jessamine (Murraya paniculata), or lillypilly (Syzygium species). A dense hedge can also be formed from other deciduous plants, although these lack the decorative flowers of the shrubs mentioned above. Hedges of clipped trees forming avenues are a feature of 16th-century Italian gardens such as the Boboli Gardens in Florence, and of formal French gardens in the manner of André Le Nôtre, e.g. in the Gardens of Versailles, where they surround bosquets or areas of formalized woodland. The English version of this was the wilderness, normal in large gardens until the English landscape garden style and the rise of the shrubbery began to sweep them away from about 1750. The 'hedge on stilts' of clipped hornbeams at Hidcote Manor Garden, Gloucestershire, is famous and has sometimes been imitated; it is in fact a standard French and Italian style of the bosquet. Hedges below knee height are generally thought of as borders. Elaborately shaped and interlaced borders forming knot gardens or parterres were fashionable in Europe during the 16th and early 17th centuries. Generally they were appreciated from a raised position, either the windows of a house, or a terrace.
Clipped hedges above eye level may be laid out in the form of a labyrinth or garden maze. Few such mazes survived the change of fashion towards more naturalistic plantings in the 18th and 19th centuries, but many were replanted in 20th-century restorations of older gardens. An example is behind the Governor's Palace, Colonial Williamsburg, Virginia. Hedges and pruning can both be used to enhance a garden's privacy, as a buffer to visual pollution and to hide fences. A hedge can be aesthetically pleasing, as in a tapestry hedge, where alternate species are planted at regular intervals to present different colours or textures. In America, fences have always been more common than hedges to mark garden boundaries. The English radical William Cobbett was already complaining about this in 1819: And why should America not possess this most beautiful and useful plant [the Haw-Thorn]? She has English gew-gaws, English Play-Actors, English Cards and English Dice and Billiards; English fooleries and English vices enough in all conscience; and why not English Hedges, instead of post-and-rail and board fences? If, instead of these steril-looking and cheerless enclosures the gardens and meadows and fields, in the neighbourhood of New York and other cities and towns, were divided by quick-set hedges, what a difference would the alteration make in the look, and in the real value too, of those gardens, meadows and fields! Regulation In the US, some local jurisdictions may strictly regulate the placement or height of a hedge, such as the case where a Palo Alto city resident was arrested for allowing her xylosma hedge to grow above two feet. In the UK the owner of a large hedge that is adversely affecting the reasonable enjoyment of neighbouring domestic property can be made to reduce it in height. In England and Wales, high hedges are covered under Part 8 of the Anti-Social Behaviour Act 2003. For a hedge to qualify for reduction, it must be made up wholly or mainly of a line of two or more evergreen or semi-evergreen trees or shrubs and be over 2 metres high. To some degree, it must be a barrier to light or access. It must be adversely affecting the complainant's reasonable enjoyment of their domestic property (either their house or garden) because of its height. Later legislation with similar effect was introduced in Northern Ireland, Isle of Man and Scotland. Significant hedges The 19th-century Great Hedge of India was probably the largest example of a hedge used as a barrier. It was planted and used to collect taxes by the British. The Willow Palisade, constructed during the early Qing dynasty (17th century) to control people's movement and to collect taxes on ginseng and timber in southern Manchuria, also had hedge-like features. The palisade included two dikes and a moat between them, the dikes topped by rows of willow trees, tied to one another with their branches. Gradually decaying throughout the late 18th and 19th centuries, the palisade disappeared in the early 20th century, its remaining willows cut during the Russo-Japanese War of 1904–1905 by the two countries' soldiers. The Meikleour Beech Hedges, located near Meikleour in Scotland, are noted in the Guinness World Records as the tallest and longest hedge on earth, reaching in height and in length. The beech trees were planted in 1745 by Jean Mercer on the Marquess of Lansdowne's Meikleour estate. The hedgerows and sunken lanes in Normandy, France posed a problem to Allied tanks after Operation Overlord, the invasion of Europe, in World War 2. 
The hedgerows prevented the tanks from freely moving about the area, until they were fitted with tusks. See also Bocage Dead hedge Drovers' road Enclosure Green wall Hedgehog Shelterbelt Topiary Notes References Brooks, Alan and Agate, Elizabeth Agate (1998). Hedging, a Practical handbook. British Trust for Conservation Volunteers. Pollard, E., Hooper, M.D. and Moore, N.W. (1974). Hedges. London: Collins. Rackham, Oliver (1986). The History of the Countryside. London: J.M. Dent and Sons. Further reading External links The British Hedgelaying Society The English Hedgerow Trust Devon Hedge Group "The age of hedges" – "Botanist Max Hooper correlates number of species in English hedgerows with centuries in age", by Charles Elliott. Whole Earth Review, Summer 1995. How to Date Hedges – Bingham Heritage. About the Hedgerows Regulations 1997 "Divide and rule: best fencing and hedges"—Bunny Guinness in The Daily Telegraph "Giving trees a lived-in look comes at a price"—Alan Titchmarsh in the Daily Express "How to Choose the Right Hedge for an Urban Garden"—Gareth James in The Huffington Post "Bring on the screen stars"—Jo Morrison in The Daily Telegraph Fences Garden features Landscape architecture
Hedge
[ "Engineering" ]
5,389
[ "Landscape architecture", "Architecture" ]
339,562
https://en.wikipedia.org/wiki/Camera%20lucida
A camera lucida is an optical device used as a drawing aid by artists and microscopists. It projects an optical superimposition of the subject being viewed onto the surface upon which the artist is drawing. The artist sees both scene and drawing surface simultaneously, as in a photographic double exposure. This allows the artist to duplicate key points of the scene on the drawing surface, thus aiding in the accurate rendering of perspective. History The camera lucida was patented in 1806 by the English chemist William Hyde Wollaston. The basic optics were described 200 years earlier by the German astronomer Johannes Kepler in his Dioptrice (1611), but there is no evidence he constructed a working camera lucida. There is also evidence to suggest that the Elizabethan spy Arthur Gregory's 1596 "perspective box" operated on at least highly similar principles to the later camera lucida, but the secretive nature of his work and fear of rivals copying his methods led to his invention becoming lost. By the 19th century, Kepler's description had similarly fallen into oblivion, so Wollaston's claim to have invented the device was never challenged. The term "camera lucida" (Latin 'well-lit room', as opposed to "camera obscura", 'dark room') is Wollaston's. While on honeymoon in Italy in 1833, the photographic pioneer William Fox Talbot used a camera lucida as a sketching aid. He later wrote that it was a disappointment with his resulting efforts which encouraged him to seek a means to "cause these natural images to imprint themselves durably". In 2001, artist David Hockney's book Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters was met with controversy. His argument, known as the Hockney-Falco thesis, is that the notable transition in style for greater precision and visual realism that occurred around the decade of the 1420s is attributable to the artists' discovery of the capability of optical projection devices, specifically an arrangement using a concave mirror to project real images. Their evidence is based largely on the characteristics of the paintings by great artists of later centuries, such as Ingres, Van Eyck, and Caravaggio. The camera lucida is still available today through art-supply channels but is not well known or widely used. It has enjoyed a resurgence as of 2017 through a number of Kickstarter campaigns. Description The name "camera lucida" (Latin for 'light chamber') is intended to recall the much older drawing aid, the camera obscura (Latin for 'dark chamber'). There is no optical similarity between the devices. The camera lucida is a lightweight, portable device that does not require special lighting conditions. No image is projected by the camera lucida. In the simplest form of camera lucida, the artist looks down at the drawing surface through a glass pane or half-silvered mirror tilted at 45 degrees. This superimposes a direct view of the drawing surface beneath, and a reflected view of a scene horizontally in front of the artist. This design produces an inverted image which is right-left reversed when turned the right way up. Also, light is lost in the imperfect reflection. Wollaston's design used a prism with four optical faces to produce two successive reflections (see illustration), thus producing an image that is not inverted or reversed. Angles ABC and ADC are 67.5° and BCD is 135°. Hence, the reflections occur through total internal reflection, so very little light is lost. It is not possible to see straight through the prism, so it is necessary to look at the very edge to see the paper.
The instrument often came with an assortment of weak negative lenses, to create a virtual image of the scene at several distances. If the right lens is inserted, so that the chosen distance roughly equals the distance of the drawing surface, both images can be viewed in good focus simultaneously. If white paper is used with the camera lucida, the superimposition of the paper with the scene tends to wash out the scene, making it difficult to view. When working with a camera lucida, it is often beneficial to use toned or grey paper. Some historical designs included shaded filters to help balance lighting. Microscopy As recently as the 1980s, the camera lucida was still a standard tool of microscopists. It is still a key tool in the field of palaeontology. Until very recently, photomicrographs were expensive to reproduce. Furthermore, in many cases, a clear illustration of the structure that the microscopist wished to document was much easier to produce by drawing than by micrography. Thus, most routine histological and microanatomical illustrations in textbooks and research papers were camera lucida drawings rather than photomicrographs. The camera lucida is still used as the most common method among neurobiologists for drawing brain structures, although it is recognised to have limitations. "For decades in cellular neuroscience, camera lucida hand drawings have constituted essential illustrations. (...) The limitations of camera lucida can be avoided by the procedure of digital reconstruction". Of particular concern is distortion, and new digital methods are being introduced which can limit or remove this, "computerized techniques result in far fewer errors in data transcription and analysis than the camera lucida procedure". It is also regularly used in biological taxonomy. Gallery See also Camera obscura Claude glass, or black mirror Graphic telescope Pepper's ghost References External links Kenyon College Department of Physics on the Camera Lucida Artistic techniques Optical devices Precursors of photography Drawing aids
Camera lucida
[ "Materials_science", "Engineering" ]
1,131
[ "Glass engineering and science", "Optical devices" ]
339,596
https://en.wikipedia.org/wiki/Chrysophyta
Chrysophyta or golden algae is a term used to refer to certain heterokonts. It can be used to refer to: Chrysophyceae (golden algae), Bacillariophyceae (diatoms), and Xanthophyceae (yellow-green algae) together. E.g., Pascher (1914). Chrysophyceae (golden algae) E.g., Margulis et al. (1990). Chrysophyta share several characteristics: they possess the photosynthetic pigments chlorophylls a and c, together with a yellow carotenoid called fucoxanthin, which is responsible for their distinctive color. They also store food as oil rather than starch, and their cells contain no cellulose and are often impregnated with silicon compounds. Each species has its own special markings. References Ochrophyta
Chrysophyta
[ "Biology" ]
201
[ "Ochrophyta", "Algae" ]
339,633
https://en.wikipedia.org/wiki/Vaccinia
The vaccinia virus (VACV or VV) is a large, complex, enveloped virus belonging to the poxvirus family. It has a linear, double-stranded DNA genome approximately 190 kbp in length, which encodes approximately 250 genes. The dimensions of the virion are roughly 360 × 270 × 250 nm, with a mass of approximately 5–10 fg. The vaccinia virus is the source of the modern smallpox vaccine, which the World Health Organization (WHO) used to eradicate smallpox in a global vaccination campaign in 1958–1977. Although smallpox no longer exists in the wild, vaccinia virus is still studied widely by scientists as a tool for gene therapy and genetic engineering. Smallpox had been an endemic human disease that had a 30% fatality rate. In 1796, the British doctor Edward Jenner proved that an infection with the relatively mild cowpox virus would also confer immunity to the deadly smallpox. Jenner referred to cowpox as variolae vaccinae (smallpox of the cow). However, the origins of the smallpox vaccine became murky over time, especially after Louis Pasteur developed laboratory techniques for creating vaccines in the 19th century. Allan Watt Downie demonstrated in 1939 that the modern smallpox vaccine was serologically distinct from cowpox, and vaccinia was subsequently recognized as a separate viral species. Whole-genome sequencing has revealed that vaccinia is most closely related to horsepox, and the cowpox strains found in Great Britain are the least closely related to vaccinia. Classification of vaccinia infections In addition to the morbidity of uncomplicated primary vaccination, transfer of infection to other sites by scratching, and post-vaccinial encephalitis, other complications of vaccinia infections may be divided into the following types: Generalized vaccinia Eczema vaccinatum Progressive vaccinia (vaccinia gangrenosum, vaccinia necrosum) Roseola vaccinia Origin Vaccinia virus is closely related to the virus that causes cowpox; historically the two were often considered to be one and the same. The precise origin of vaccinia virus is unknown due to the lack of record-keeping, as the virus was repeatedly cultivated and passaged in research laboratories for many decades. The most common notion is that vaccinia virus, cowpox virus, and variola virus (the causative agent of smallpox) were all derived from a common ancestral virus. There is also speculation that vaccinia virus was originally isolated from horses, and analysis of DNA from an early (1902) sample of smallpox vaccine showed that it was 99.7% similar to horsepox virus. Virology Poxviruses are unique among DNA viruses because they replicate only in the cytoplasm of the host cell, outside of the nucleus. Therefore, the large genome is required for encoding various enzymes and proteins involved in viral DNA replication and gene transcription. During its replication cycle, VV produces four infectious forms which differ in their outer membranes: intracellular mature virion (IMV), the intracellular enveloped virion (IEV), the cell-associated enveloped virion (CEV) and the extracellular enveloped virion (EEV). Although the issue remains contentious, the prevailing view is that the IMV consists of a single lipoprotein membrane, while the CEV and EEV are both surrounded by two membrane layers and the IEV has three envelopes. The IMV is the most abundant infectious form and is thought to be responsible for spread between hosts. 
On the other hand, the CEV is believed to play a role in cell-to-cell spread and the EEV is thought to be important for long range dissemination within the host organism. Multiplicity reactivation Vaccinia virus is able to undergo multiplicity reactivation (MR). MR is the process by which two, or more, virus genomes containing otherwise lethal damage interact within an infected cell to form a viable virus genome. Abel found that vaccinia viruses exposed to doses of UV light sufficient to prevent progeny formation when single virus particles infected host chick embryo cells, could still produce viable progeny viruses when host cells were infected by two or more of these inactivated viruses; that is, MR could occur. Kim and Sharp demonstrated MR of vaccinia virus after treatment with UV-light, nitrogen mustard, and X-rays or gamma rays. Michod et al. reviewed numerous examples of MR in different viruses, and suggested that MR is a common form of sexual interaction in viruses that provides the advantage of recombinational repair of genome damages. Host resistance Vaccinia contains within its genome genes for several proteins that give the virus resistance to interferons: K3L is a protein with homology to the protein eukaryotic initiation factor 2 (eIF-2alpha). K3L protein inhibits the action of PKR, an activator of interferons. E3L is another protein encoded by Vaccinia. E3L also inhibits PKR activation, and is also able to bind to double stranded RNA. B18R directly binds to type I interferon to abolish its function; in other words it works as a decoy receptor. It is used in bioengineering because cells with added nucleic acid might otherwise enter an antiviral state that reduces protein output. It is also used in a Moderna paper to enhance the chance of converting a cell into an induced pluripotent stem cell (iPSC) using mRNA. Treatments Dissemination of vaccinia infection is rare due to widespread immunization. Immunocompromised patients may be at risk of developing severe infection. The only current FDA-approved treatment is serotherapy (intravenous infusion of anti-vaccinia immunoglobulin). Use as a vaccine Vaccinia virus infection is typically very mild and often does not cause symptoms in healthy individuals, although it may cause rash and fever. Immune responses generated from a vaccinia virus infection protect the person against a lethal smallpox infection. For this reason, vaccinia virus was, and still is, used as a live-virus vaccine against smallpox. Unlike vaccines that use weakened forms of the virus being vaccinated against, the vaccinia virus vaccine cannot cause a smallpox infection because it does not contain the smallpox virus. However, certain complications and/or vaccine adverse effects occasionally arise. The chance of this happening is significantly increased in people who are immunocompromised. Approximately 1 to 2 people out of every 1 million people vaccinated could die as a result of life-threatening reactions to the vaccination. The rate of myopericarditis with ACAM2000 is 5.7 per 1,000 of primary vaccinees. On September 1, 2007, the U.S. Food and Drug Administration (FDA) licensed ACAM2000, a new vaccine against smallpox that can be produced quickly when needed. The vaccine is manufactured by Sanofi Pasteur, and the U.S. Centers for Disease Control and Prevention stockpiled 192.5 million doses of it (see list of common strains below).
A smallpox vaccine, Imvanex, which is based on the Modified vaccinia Ankara strain, was approved by the European Medicines Agency (EMA) in 2013. This strain has been used in vaccines during the 2022 monkeypox outbreak. Vaccinia is also used in recombinant vaccines, as a vector for expression of foreign genes within a host, in order to generate an immune response. Other poxviruses are also used as live recombinant vaccines. History The original vaccine for smallpox, and the origin of the idea of vaccination, was cowpox, described by Edward Jenner in 1798. The Latin term used for cowpox was Variolae vaccinae, Jenner's own translation of "smallpox of the cow". That term lent its name to the whole idea of vaccination. When it was realized that the virus used in smallpox vaccination was not, or was no longer, the same as the cowpox virus, the name 'vaccinia' was used for the virus in the smallpox vaccine. (See OED.) Vaccine potency and efficacy prior to the invention of refrigerated methods of transportation were unreliable. The vaccine would be rendered impotent by heat and sunlight, and the method of drying samples on quills and shipping them to countries in need often resulted in an inactive vaccine. Another method employed was the "arm to arm" method. This involved vaccinating an individual and then transferring the material to another person as soon as the infectious pustule formed, then to another, and so on. This method was used as a form of living transportation of the vaccine, and usually employed orphans as carriers. However, this method was problematic due to the possibility of spreading other blood diseases, such as hepatitis and syphilis, as was the case in 1861, when 41 Italian children contracted syphilis after being vaccinated by the "arm to arm" method. Henry Austin Martin introduced a method for vaccine production from calves. In 1913, E. Steinhardt, C. Israeli, and R. A. Lambert grew vaccinia virus in fragments of guinea pig corneal tissue culture. A paper published in 1915 by Frederick W. Twort, a student of William Bulloch, is considered to be the beginning of modern phage research. He was attempting to grow vaccinia virus on agar media in the absence of living cells when he noted that many colonies of contaminating micrococci grew up and appeared mucoid, watery or glassy, and this transformation could be induced in other colonies by inoculation of the fresh colony with material from the watery colony. Using a microscope, he observed that the bacteria had degenerated into small granules that stained red with Giemsa stain. He concluded that "...it [the agent of transformation] might almost be considered as an acute infectious disease of micrococci." In 1939 Allan Watt Downie showed that the smallpox vaccines being used in the 20th century and cowpox virus were not the same, but were immunologically related. 2000–present In March 2007, a 2-year-old Indiana boy and his mother contracted a life-threatening vaccinia infection from the boy's father. The boy developed the telltale rash over 80 percent of his body after coming into close contact with his father, who had been vaccinated for smallpox before being deployed overseas by the United States Army. The United States military resumed smallpox vaccinations in 2002. The child acquired the infection due to eczema, which is a known risk factor for vaccinia infection. The boy was treated with intravenous immunoglobulin, cidofovir, and tecovirimat (ST-246), a then-experimental drug developed by SIGA Technologies.
On April 19, 2007, he was sent home with no aftereffects except for possible scarring of the skin. In 2010, the Centers for Disease Control and Prevention (CDC) reported that a woman in Washington had contracted vaccinia virus infection after digital vaginal contact with her boyfriend, a military member who had recently been vaccinated for smallpox. The woman had a history of childhood eczema, but she had not been symptomatic as an adult. The CDC indicated that it was aware of four similar cases in the preceding 12 months of vaccinia infection after sexual contact with a recent military vaccinee. Further cases, also in patients with a history of eczema, occurred in 2012. Common strains This is a list of some of the well-characterized vaccinia strains used for research and vaccination. Lister (also known as Elstree): the English vaccine strain used by Leslie Collier to develop a heat-stable vaccine in powdered form. Used as the basis for vaccine production during the World Health Organization Smallpox Eradication Campaign (SEC) Dryvax (also known as "Wyeth"): the vaccine strain previously used in the United States, produced by Wyeth. Used in the SEC, it was replaced in 2008 by ACAM2000 (see below), produced by Acambis. It was produced as preparations of calf lymph which were freeze-dried and treated with antibiotics. EM63: Russian strain used in the SEC ACAM2000: the current strain in use in the US, produced by Acambis. ACAM2000 was derived from a clone of a Dryvax virus by plaque purification. It is produced in cultures of Vero cells. Modified vaccinia Ankara (also known as MVA): a highly attenuated (not virulent) strain created by passaging vaccinia virus several hundred times in chicken embryo fibroblasts. Unlike some other vaccinia strains, it does not make immunodeficient mice sick and therefore may be safer to use in humans who have weaker immune systems due to being very young, very old, having HIV/AIDS, etc. LC16m8: an attenuated strain developed and currently used in Japan CV-1: an attenuated strain developed in the United States and used there in the late 1960s–1970s Western Reserve Copenhagen Connaught Laboratories (Canada) See also B13R (virus protein) Notes References Further reading External links Virus Pathogen Database and Analysis Resource (ViPR): Poxviridae Chordopoxvirinae Virus-related cutaneous conditions Genetic engineering
Vaccinia
[ "Chemistry", "Engineering", "Biology" ]
2,830
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
339,639
https://en.wikipedia.org/wiki/Mobile%20Servicing%20System
The Mobile Servicing System (MSS) is a robotic system on board the International Space Station (ISS). Launched to the ISS in 2001, it plays a key role in station assembly and maintenance; it moves equipment and supplies around the station, supports astronauts working in space, and services instruments and other payloads attached to the ISS, including external maintenance tasks. Astronauts receive specialized training to enable them to perform these functions with the various systems of the MSS. The MSS is composed of three components: the Space Station Remote Manipulator System (SSRMS), known as Canadarm2; the Mobile Remote Servicer Base System (MBS); and the Special Purpose Dexterous Manipulator (SPDM, also known as "Dextre" or "Canada hand"). The system can move along rails on the Integrated Truss Structure on top of the US-provided Mobile Transporter cart, which hosts the Mobile Remote Servicer Base System. The system's control software was written in the Ada 95 programming language. The MSS was designed and manufactured by MDA (whose relevant divisions were previously known as MDA Space Missions, MD Robotics, and, earlier, SPAR Aerospace) as the Canadian Space Agency's contribution to the International Space Station. Canadarm2 Canadarm2 is officially known as the Space Station Remote Manipulator System (SSRMS). Launched on STS-100 in April 2001, this second-generation arm is a larger, more advanced version of the Space Shuttle's original Canadarm. Canadarm2 has seven motorized joints (an 'elbow' hinge in the middle, and three rotary joints at each of the 'wrist/shoulder' ends) and is made from titanium. The arm is capable of handling large payloads and was able to assist with docking the Space Shuttle. It is self-relocatable and can move end-over-end to reach many parts of the Space Station in an inchworm-like movement. In this movement, it is limited only by the number of Power Data Grapple Fixtures (PDGFs) on the station. PDGFs located around the station provide power, data and video to the arm through either of its two Latching End Effectors (LEEs). The arm can also travel the entire length of the space station truss using the Mobile Base System. In addition to moving itself around the station, the arm can move any object with a grapple fixture. During construction of the station, the arm was used to move large segments into place. It can also be used to capture unpiloted ships like the SpaceX Dragon, the Cygnus spacecraft, and the Japanese H-II Transfer Vehicle (HTV), which are equipped with a standard grapple fixture that Canadarm2 uses to capture and berth the spacecraft. The arm is also used to unberth and release the spacecraft after use. On-board operators see what they are doing by looking at the three Robotic Work Station (RWS) LCD screens. The MSS has two RWS units: one in the Destiny module and the other in the Cupola. Only one RWS controls the MSS at a time. The RWS has two sets of control joysticks: one Rotational Hand Controller (RHC) and one Translational Hand Controller (THC). In addition, there are the Display and Control Panel (DCP) and the Portable Computer System (PCS) laptop. In recent years, the majority of robotic operations have been commanded remotely by flight controllers on the ground at the Christopher C. Kraft Jr. Mission Control Center or from the Canadian Space Agency's John H. Chapman Space Centre.
Operators can work in shifts to accomplish objectives with more flexibility than when done by on-board crew operators, albeit at a slower pace. Astronaut operators are used for time-critical operations such as visiting vehicle captures and robotics-supported extra-vehicular activity. Some time before 12 May 2021, Canadarm2 was hit by a small piece of orbital debris, damaging its thermal blankets and one of the booms. Its operation appeared to be unaffected. Canadarm2 will also help to berth the Axiom Space Station modules to the ISS. Latching End Effectors Canadarm2 has two LEEs, one at each end. Each LEE has three snare wires to catch the grapple fixture shaft. Another LEE is on the Mobile Base System's Payload ORU Accommodations (POA) unit. The POA LEE is used to temporarily hold large ISS components. One more is on the Special Purpose Dexterous Manipulator (SPDM, also known as "Dextre" or "Canada hand"). Six LEEs have been manufactured and used in various locations on the ISS. Special Purpose Dexterous Manipulator The Special Purpose Dexterous Manipulator, or "Dextre", is a smaller two-armed robot that can attach to Canadarm2, the ISS, or the Mobile Base System. The arms and their power tools are capable of handling delicate assembly tasks and changing Orbital Replacement Units (ORUs), tasks currently handled by astronauts during spacewalks. Although Canadarm2 can move around the station in an "inchworm motion", it is unable to carry anything with it unless Dextre is attached. Testing was done in the space simulation chambers of the Canadian Space Agency's David Florida Laboratory in Ottawa, Ontario. The manipulator was launched to the station on 11 March 2008 on STS-123. Mobile Base System The Mobile Remote Servicer Base System (MBS) is a base platform for the robotic arms. It was added to the station during STS-111 in June 2002. The platform rests atop the Mobile Transporter (installed on STS-110, designed by Northrop Grumman in Carpinteria, CA), which allows it to glide 108 metres down rails on the station's main truss. Canadarm2 can relocate by itself, but it cannot carry a payload at the same time; Dextre cannot relocate by itself. The MBS gives the two robotic arms the ability to travel to work sites all along the truss structure and to step off onto grapple fixtures along the way. Canadarm2 and Dextre can both be attached to the MBS at the same time. Like Canadarm2, it was built by MD Robotics, and it has a minimum service life of 15 years. The MBS is equipped with four Power Data Grapple Fixtures, one at each of its four top corners. Any of these can be used as a base for the two robots, Canadarm2 and Dextre, as well as any of the payloads that might be held by them. The MBS also has two locations to attach payloads. The first is the Payload/Orbital Replacement Unit Accommodations (POA). This is a device that looks and functions much like the Latching End Effectors of Canadarm2. It can be used to park, power and command any payload with a grapple fixture, while keeping Canadarm2 free to do something else. The other attachment location is the MBS Common Attachment System (MCAS). This is another type of attachment system that is used to host scientific experiments. The MBS also supports astronauts during extravehicular activities. It has locations to store tools and equipment, foot restraints, handrails and safety tether attachment points, as well as a camera assembly. If needed, it is even possible for an astronaut to "ride" the MBS while it moves at a top speed of about 1.5 meters per minute.
On either side of the MBS are the Crew and Equipment Translation Aids. These carts ride on the same rails as the MBS. Astronauts ride them manually during EVAs to transport equipment and to facilitate their movements around the station. Enhanced ISS Boom Assembly The Enhanced ISS Boom Assembly, installed on May 27, 2011, is a 15.24-meter (50-foot) boom with handrails and inspection cameras that can be attached to the end of Canadarm2. Other ISS robotics The station received a second robotic arm during STS-124, the Japanese Experiment Module Remote Manipulator System (JEM-RMS). The JEM-RMS is primarily used to service the JEM Exposed Facility. An additional robotic arm, the European Robotic Arm (ERA), was launched alongside the Russian-built Multipurpose Laboratory Module on July 21, 2021. The ISS also has two Strela cargo cranes, originally connected to Pirs. One of the cranes could be extended to reach the end of Zarya. The other could extend to the opposite side and reach the end of Zvezda. The first crane was assembled in space during STS-96 and STS-101. The second crane was launched alongside Pirs itself. The cranes were later moved to the Poisk docking compartment and the Zarya module. See also MacDonald Dettwiler and Associates (MDA), the manufacturers of Canadarm2 Canadarm, which was used on the Space Shuttle Orbiters European Robotic Arm, a third robotic arm installed on the ISS The Remote Manipulator System, used on the ISS module Kibo Dextre, also known as the Special Purpose Dexterous Manipulator (SPDM), used on the ISS Strela, a crane used on the ISS to perform tasks similar to those of the Mobile Servicing System References Further reading Robotic Transfer and Interfaces for External ISS Payloads (2014), with good diagrams of the SSRMS/Canadarm2 External links ISS Assembly: Canadarm2 and the Mobile Servicing System Canadian Space Agency information about Canadarm2 YouTube animation of the Mobile Base System, Canadarm2 and Dextre working together YouTube animation of Canadarm2 inchworming on the station Components of the International Space Station Space robots Space program of Canada 2001 in spaceflight 2001 robots Robots of Canada Robotic manipulators
Mobile Servicing System
[ "Astronomy" ]
2,016
[ "Outer space", "Space robots" ]
339,742
https://en.wikipedia.org/wiki/Pinnation
Pinnation (also called pennation) is the arrangement of feather-like or multi-divided features arising from both sides of a common axis. Pinnation occurs in biological morphology, in crystals, such as some forms of ice or metal crystals, and in patterns of erosion or stream beds. The term derives from the Latin word pinna meaning "feather", "wing", or "fin". A similar concept is "pectination", which is a comb-like arrangement of parts (arising from one side of an axis only). Pinnation is commonly referred to in contrast to "palmation", in which the parts or structures radiate out from a common point. The terms "pinnation" and "pennation" are cognate, and although they are sometimes used distinctly, there is no consistent difference in the meaning or usage of the two words. Plants Botanically, pinnation is an arrangement of discrete structures (such as leaflets, veins, lobes, branches, or appendages) arising at multiple points along a common axis. For example, once-divided leaf blades having leaflets arranged on both sides of a rachis are pinnately compound leaves. Many palms (notably the feather palms) and most cycads and grevilleas have pinnately divided leaves. Most species of ferns have pinnate or more highly divided fronds, and in ferns, the leaflets or segments are typically referred to as "pinnae" (singular "pinna"). Plants with pinnate leaves are sometimes colloquially called "feather-leaved". Most of the following definitions are from Jackson's Glossary of Botanical Terms: Depth of divisions pinnatifid and pinnatipartite: leaves with pinnate lobes that are not discrete, remaining sufficiently connected to each other that they are not separate leaflets. pinnatisect: cut all the way to the midrib or other axis, but with the bases of the pinnae not contracted to form discrete leaflets. pinnate-pinnatifid: pinnate, with the pinnae being pinnatifid. Number of divisions paripinnate: pinnately compound leaves in which leaflets are borne in pairs along the rachis without a single terminal leaflet; also called "even-pinnate". imparipinnate: pinnately compound leaves in which there is a lone terminal leaflet rather than a terminal pair of leaflets; also called "odd-pinnate". Iteration of divisions bipinnate: pinnately compound leaves in which the leaflets are themselves pinnately compound; also called "twice-pinnate". tripinnate: pinnately compound leaves in which the leaflets are themselves bipinnate; also called "thrice-pinnate". tetrapinnate: pinnately compound leaves in which the leaflets are themselves tripinnate. unipinnate: solitary compound leaf with a row of leaflets arranged along each side of a common rachis. The term pinnula (plural: pinnulae) is the Latin diminutive of pinna (plural: pinnae); either as such or in the Anglicised form: pinnule, it is differently defined by various authorities. Some apply it to the leaflets of a pinna, especially the leaflets of bipinnate or tripinnate leaves. Others also or alternatively apply it to second or third order divisions of a bipinnate or tripinnate leaf. It is the ultimate free division (or leaflet) of a compound leaf, or a pinnate subdivision of a multipinnate leaf. Animals In animals, pinnation occurs in various organisms and structures, including: Some muscles can be unipinnate or bipinnate muscles. The fish Platax pinnatus is known as the pinnate spadefish or pinnate batfish. 
Geomorphology Pinnation occurs in certain waterway systems in which all major tributary streams enter the main channels by flowing in one direction at an oblique angle. References Plant morphology Leaves
Pinnation
[ "Biology" ]
832
[ "Plant morphology", "Plants" ]
339,776
https://en.wikipedia.org/wiki/Liberty%20%28personification%29
The concept of liberty has frequently been represented by personifications, often loosely shown as a female classical goddess. Examples include Marianne, the national personification of the French Republic and its values of liberté, égalité, fraternité, and the female Liberty portrayed in artworks, on United States coins beginning in 1793, and many other depictions. These descend from images on ancient Roman coins of the Roman goddess Libertas and from various developments from the Renaissance onwards. The Dutch Maiden was among the first, re-introducing the cap of liberty on a liberty pole featured in many types of image, though not using the Phrygian cap style that became conventional. The 1886 Statue of Liberty (Liberty Enlightening the World) by Frédéric Auguste Bartholdi is a well-known example in art, a gift from France to the United States. Ancient Rome The ancient Roman goddess Libertas was honored during the Second Punic War (218 to 201 BC) by a temple erected on the Aventine Hill in Rome by the father of Tiberius Gracchus. In a highly political gesture, a temple for her was raised in 58 BC by Publius Clodius Pulcher on the site of Marcus Tullius Cicero's house after it had been razed. When depicted as a standing figure, on the reverse of coins, she usually holds out, but never wears, a pileus, the soft cap that symbolised the granting of freedom to former slaves. She also carries a rod, which formed part of the ceremony for manumission. In the 18th century, the pileus turned into the similar Phrygian cap carried on a pole by English-speaking "Liberty" figures, and then worn by Marianne and other 19th-century personifications, as the "cap of liberty". Libertas had been important under the Roman Republic, and was somewhat uncomfortably co-opted by the empire; it was not seen as an innate right, but as granted to some under Roman law. Her attribute of the pileus appeared on the Ides of March coin of the assassins of Julius Caesar, defenders of the Roman republic, between two daggers with the inscription "EID MAR" (Eidibus Martiis – on the Ides of March). Early modern period The medieval republics, mostly in Italy, greatly valued their liberty, and often used the word, but produced very few direct personifications. One exception, showing just the cap of liberty between daggers (a copy of the coins of the assassins of Julius Caesar), appeared on a medal struck by Lorenzino de' Medici to commemorate his assassination of his cousin Alessandro de' Medici, Duke of Florence, in 1547. Liberty featured in emblem books, usually with her cap; the most popular, the Iconologia of Cesare Ripa, showed the cap held on a pole by the 1611 edition. With the rise of nationalism and new states, many nationalist personifications included a strong element of liberty, perhaps culminating in the Statue of Liberty. The long poem Liberty by the Scottish poet James Thomson (1734) is a lengthy monologue spoken by the "Goddess of Liberty", "characterized as British Liberty", describing her travels through the ancient world, and then English and British history, before the resolution of the Glorious Revolution of 1688 confirms her position there. Thomson also wrote the lyrics for Rule Britannia, and the two personifications were often combined as a personified "British Liberty". A large monument, originally called the "Column of British Liberty", now usually just the "Column to Liberty", was begun in the 1750s on his Gibside estate outside Newcastle-on-Tyne by the hugely wealthy Sir George Bowes, reflecting his Whig politics.
Set at the top of a steep hillock, the monument itself is taller than Nelson's Column in London, and is topped by a bronze female figure, originally gilded, carrying a cap of liberty on a pole. In other images, she took the seated form already very familiar from the British copper coinage, where Britannia had first appeared in 1672, with shield but carrying the cap on a rod as a liberty pole, rather than her usual trident. In the run-up to the American War of Independence, this conflated figure of Britannia/Liberty was attractive to American colonists agitating for the full set of British civil rights, and from 1770 some American newspapers adopted her for their masthead. When war broke out, the Britannia element quickly disappeared, but a classical-looking Liberty still appealed, and was now sometimes just labelled "America". In the 1790s Columbia, who had sometimes been present in literature for some decades, emerged as a common name for this figure. Her position was cemented by the popular song Hail, Columbia (1798). By the time of the French Revolution the modern type of imagery was well established, and the French figure acquired the name of Marianne from 1792. Unlike her predecessors, she normally wore the cap of Liberty on her head, rather than carrying it on a pole or lance. In 1793 the Notre Dame de Paris cathedral was turned into a "Temple of Reason" and, for a brief time, the Goddess of Liberty replaced the Virgin Mary on several altars. The Great Seal of France, applied to the official copies of legislation, had a Marianne with Phrygian cap of liberty from 1792, until she was replaced the next year by a Hercules after Jacques-Louis David. A standing Liberty, with fasces and cap on a pole, was on the seal of Napoleon's French Consulate, before being replaced by his head. Liberty returned to the seal with the French Second Republic in 1848, seated amid symbols of agriculture and industry, designed by Jacques-Jean Barre. She carries fasces on her lap, now wears a radiant crown with seven spikes or rays, and leans on a rudder. After a gap with the Second French Empire, a version of the 1848 design was used by the French Third Republic and under subsequent republics to the present day. The radiant crown, never used in antiquity for Libertas (but for the sun god Sol Invictus and some later emperors), was adopted by Frédéric Auguste Bartholdi for the Statue of Liberty. This was conceived in the 1860s, under the French Second Empire, when Liberty no longer featured on the seal or in French official iconography. The Great Seal's rudder was another original borrowing from classical iconography. In Roman art it (called a gubernaculum) was the usual attribute of Fortuna, or "Lady Luck", representing her control of the changeable fortunes of life. As well as such dignified representations, all these figures very frequently appeared in the political cartoons that were becoming extremely popular in all the countries concerned over this period. The Napoleonic Wars produced a particular outpouring of cartoons on all sides. In the 19th century various national personifications took on this form, some wearing the cap of liberty. The Dutch Maiden, accompanied by the Leo Belgicus, became the official symbol of the Batavian Republic established after the French occupied the Netherlands. Depictions in the United States In the United States, "Liberty" is often depicted with five-pointed stars, as they appear on the American flag, usually held in a raised hand.
Another hand may hold a sword which points downward. Depictions which are familiar to Americans include the following: The Statue of Liberty (Liberty Enlightening the World), its replicas, and its portrayal on many U.S. postage stamps and coins Many denominations of American coins have depicted Liberty in both bust side-view and full-figure designs; see also the Liberty dollar, Seated Liberty dollar, Liberty dime, Walking Liberty half dollar, Indian Head cent, Morgan dollar, Silver Eagle, Gold Eagle, American Innovation dollar, and others The flags of the States of New York and New Jersey (along with various signs and public-owned items bearing the Seal of New Jersey) On the dome of the U.S. Capitol as Freedom On the dome of the Georgia State Capitol as Miss Freedom On the dome of the Texas State Capitol On the dome of the Wisconsin State Capitol as Wisconsin On the dome of the Allen County Courthouse in Fort Wayne, Indiana On the dome of the Bergen County Courthouse in Hackensack, New Jersey Atop the Yorktown Victory Monument on the Yorktown Battlefield near Yorktown, Virginia On both Union and Confederate currency In the early decades of the 20th century, Liberty mostly displaced Columbia, who was widely used as the national personification of the US during the 19th century. See also Notes References Higham, John (1990). "Indian Princess and Roman Goddess: The First Female Symbols of America", Proceedings of the American Antiquarian Society 100: 50–51. Sear, David (2002). Roman Coins and Their Values, Volume 2, pp. 46–48, 49–51. Spink & Son, Ltd. ISBN 9781912667239. Warner, Marina (2000). Monuments and Maidens: The Allegory of the Female Form. University of California Press. ISBN 9780520227330. External links Texas Memorial Museum Texas Statue Bee county courthouse in Texas Another article on Beeville courthouse Mackinac Island American deities Roman goddesses Feminist spirituality Iconography Liberty symbols Mascots National personifications National symbols of the United States
Liberty (personification)
[ "Mathematics" ]
1,888
[ "Symbols", "Mascots" ]
339,838
https://en.wikipedia.org/wiki/Molecular%20genetics
Molecular genetics is a branch of biology that addresses how differences in the structures or expression of DNA molecules manifest as variation among organisms. Molecular genetics often applies an "investigative approach" to determine the structure and/or function of genes in an organism's genome using genetic screens. The field of study is based on the merging of several sub-fields in biology: classical Mendelian inheritance, cellular biology, molecular biology, biochemistry, and biotechnology. It integrates these disciplines to explore things like genetic inheritance, gene regulation and expression, and the molecular mechanisms behind various life processes. A key goal of molecular genetics is to identify and study genetic mutations. Researchers search for mutations in a gene or induce mutations in a gene to link a gene sequence to a specific phenotype. Therefore, molecular genetics is a powerful methodology for linking mutations to genetic conditions that may aid the search for treatments of various genetic diseases. History The discovery of DNA as the blueprint for life and breakthroughs in molecular genetics research came from the combined works of many scientists. In 1869, the chemist Johann Friedrich Miescher, who was researching the composition of white blood cells, discovered and isolated from the cell nucleus a new molecule that he named nuclein; this was the first discovery of the molecule DNA, which was later determined to be the molecular basis of life. He determined it was composed of hydrogen, oxygen, nitrogen and phosphorus. The biochemist Albrecht Kossel identified nuclein as a nucleic acid and provided its name, deoxyribonucleic acid (DNA). He continued to build on this by isolating the basic building blocks of DNA and RNA, which are made up of the nucleotide bases adenine, guanine, thymine, cytosine, and uracil. His work on nucleotides earned him the Nobel Prize in Physiology or Medicine. In the mid-1800s, Gregor Mendel, who became known as one of the fathers of genetics, made great contributions to the field of genetics through his various experiments with pea plants, through which he was able to discover principles of inheritance such as recessive and dominant traits without knowing what genes were composed of. In the late 19th century, the anatomist Walther Flemming discovered what we now know as chromosomes and the separation process they undergo during mitosis. His work, along with that of Theodor Boveri, gave rise to the chromosomal theory of inheritance, which helped explain some of the patterns Mendel had observed much earlier. For molecular genetics to develop as a discipline, several scientific discoveries were necessary. The discovery of DNA as a means to transfer the genetic code of life from one cell to another and between generations was essential for identifying the molecule responsible for heredity. Molecular genetics arose initially from studies involving genetic transformation in bacteria. In 1944, Avery, MacLeod, and McCarty isolated DNA from a virulent strain of S. pneumoniae, and using just this DNA were able to convert a harmless strain to virulence. They called the uptake, incorporation and expression of DNA by bacteria "transformation". This finding suggested that DNA is the genetic material of bacteria. Bacterial transformation is often induced by conditions of stress, and the function of transformation appears to be repair of genomic damage. In 1950, Erwin Chargaff derived rules that offered evidence of DNA being the genetic material of life.
These were "1) that the base composition of DNA varies between species and 2) in natural DNA molecules, the amount of adenine (A) is equal to the amount of thymine (T), and the amount of guanine (G) is equal to the amount of cytosine (C)." These rules, known as Chargaff's rules, helped in the understanding of molecular genetics. In 1953 Francis Crick and James Watson, building upon the X-ray crystallography work done by Rosalind Franklin and Maurice Wilkins, were able to derive the 3-D double helix structure of DNA. The phage group was an informal network of biologists centered on Max Delbrück that contributed substantially to molecular genetics and the origins of molecular biology during the period from about 1945 to 1970. The phage group took its name from bacteriophages, the bacteria-infecting viruses that the group used as experimental model organisms. Studies by molecular geneticists affiliated with this group contributed to understanding how gene-encoded proteins function in DNA replication, DNA repair and DNA recombination, and how viruses are assembled from protein and nucleic acid components (molecular morphogenesis). Furthermore, the role of chain-terminating codons was elucidated. One noteworthy study was performed by Sydney Brenner and collaborators using "amber" mutants defective in the gene encoding the major head protein of bacteriophage T4. This study demonstrated the co-linearity of the gene with its encoded polypeptide, thus providing strong evidence for the "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein. The isolation of a restriction endonuclease in E. coli by Arber and Linn in 1969 opened the field of genetic engineering. Restriction enzymes were used to linearize DNA for separation by electrophoresis, and Southern blotting allowed for the identification of specific DNA segments via hybridization probes. In 1971, Berg utilized restriction enzymes to create the first recombinant DNA molecule and first recombinant DNA plasmid. In 1972, Cohen and Boyer created the first recombinant DNA organism by inserting recombinant DNA plasmids into E. coli, a process now known as bacterial transformation, and paved the way for molecular cloning. The development of DNA sequencing techniques in the late 1970s, first by Maxam and Gilbert, and then by Frederick Sanger, was pivotal to molecular genetic research and enabled scientists to begin conducting genetic screens to relate genotypic sequences to phenotypes. Polymerase chain reaction (PCR) using Taq polymerase, invented by Mullis in 1985, enabled scientists to create millions of copies of a specific DNA sequence that could be used for transformation or manipulated using agarose gel separation. A decade later, the first whole genome was sequenced (Haemophilus influenzae), followed by the eventual sequencing of the human genome via the Human Genome Project in 2001. The culmination of all of those discoveries was a new field called genomics that links the molecular structure of a gene to the protein or RNA encoded by that segment of DNA and the functional expression of that protein within an organism. Today, through the application of molecular genetic techniques, genomics is being studied in many model organisms and data are being collected in computer databases like NCBI and Ensembl. The computer analysis and comparison of genes within and between different species is called bioinformatics, and links genetic mutations on an evolutionary scale.
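The following short Python sketch is illustrative only and is not part of the original article; it shows how Chargaff's second observation quoted above can be checked for a DNA string by comparing the fraction of A with T and of G with C. The example sequence and the tolerance threshold are invented for demonstration; in real genomes the rule holds only approximately for a single strand, which is why a tolerance is used.
# Illustrative sketch: check Chargaff's rule (A ~ T and G ~ C) in a DNA sequence.
from collections import Counter

def chargaff_check(dna, tolerance=0.02):
    """Return base fractions and whether A~T and G~C hold within the tolerance."""
    counts = Counter(dna.upper())
    total = sum(counts[b] for b in "ACGT")
    fractions = {b: counts[b] / total for b in "ACGT"}
    holds = (abs(fractions["A"] - fractions["T"]) <= tolerance
             and abs(fractions["G"] - fractions["C"]) <= tolerance)
    return fractions, holds

# Hypothetical example sequence; for this invented string the rule holds exactly.
fractions, holds = chargaff_check("ATGCGCGATATCGCGCATAT")
print(fractions, holds)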
Central dogma The central dogma plays a key role in the study of molecular genetics. The central dogma states that DNA replicates itself, DNA is transcribed into RNA, and RNA is translated into proteins. Along with the central dogma, the genetic code is used in understanding how RNA is translated into proteins. Replication of DNA and transcription from DNA to mRNA occur in the nucleus, while translation from RNA to proteins occurs at the ribosome. The genetic code is made of four interchangeable parts of the DNA molecule, called "bases": adenine, cytosine, uracil (in RNA; thymine in DNA), and guanine. The code is redundant, meaning that multiple combinations of these bases (which are read as triplets) produce the same amino acid. Proteomics and genomics are fields in biology that come out of the study of molecular genetics and the central dogma. Structure of DNA An organism's genome is made up of its entire set of DNA and is responsible for its genetic traits, function and development. The composition of DNA itself is an essential component of the field of molecular genetics; it is the basis of how DNA is able to store genetic information, pass it on, and be in a format that can be read and translated. DNA is a double-stranded molecule, with each strand oriented in an antiparallel fashion. Nucleotides are the building blocks of DNA, each composed of a sugar molecule, a phosphate group and one of four nitrogenous bases: adenine, guanine, cytosine, and thymine. A single strand of DNA is held together by covalent bonds, while the two antiparallel strands are held together by hydrogen bonds between the nucleotide bases. Adenine binds with thymine and cytosine binds with guanine. It is the sequence of these four bases that forms the genetic code for all biological life and contains the information for all the proteins the organism will be able to synthesize. Its unique structure allows DNA to store and pass on biological information across generations during cell division. At cell division, cells must be able to copy their genome and pass it on to daughter cells. This is possible due to the double-stranded structure of DNA: because one strand is complementary to its partner strand, each of these strands can act as a template strand for the formation of a new complementary strand. This is why the process of DNA replication is known as a semiconservative process. Techniques Forward genetics Forward genetics is a molecular genetics technique used to identify genes or genetic mutations that produce a certain phenotype. In a genetic screen, random mutations are generated with mutagens (chemicals or radiation) or transposons, and individuals are screened for the specific phenotype. Often, a secondary assay in the form of a selection may follow mutagenesis where the desired phenotype is difficult to observe, for example in bacteria or cell cultures. The cells may be transformed using a gene for antibiotic resistance or a fluorescent reporter so that the mutants with the desired phenotype are selected from the non-mutants. Mutants exhibiting the phenotype of interest are isolated and a complementation test may be performed to determine if the phenotype results from more than one gene. The mutant genes are then characterized as dominant (resulting in a gain of function), recessive (showing a loss of function), or epistatic (the mutant gene masks the phenotype of another gene). Finally, the location and specific nature of the mutation is mapped via sequencing.
Forward genetics is an unbiased approach and often leads to many unanticipated discoveries, but it may be costly and time-consuming. Model organisms like the nematode worm Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the zebrafish Danio rerio have been used successfully to study phenotypes resulting from gene mutations. Reverse genetics Reverse genetics is the term for molecular genetics techniques used to determine the phenotype resulting from an intentional mutation in a gene of interest. The phenotype is used to deduce the function of the un-mutated version of the gene. Mutations may be random or intentional changes to the gene of interest. Mutations may be a missense mutation caused by nucleotide substitution, a nucleotide addition or deletion to induce a frameshift mutation, or a complete addition/deletion of a gene or gene segment. The deletion of a particular gene creates a gene knockout, where the gene is not expressed and a loss of function results (e.g. knockout mice). Missense mutations may cause total loss of function or result in partial loss of function, known as a knockdown. Knockdown may also be achieved by RNA interference (RNAi). Alternatively, genes may be substituted into an organism's genome (also known as a transgene) to create a gene knock-in and result in a gain of function by the host. Although these techniques have some inherent bias regarding the decision to link a phenotype to a particular function, reverse genetics is much faster in terms of production than forward genetics because the gene of interest is already known. Molecular genetic tools Molecular genetics is a scientific approach that utilizes the fundamentals of genetics as a tool to better understand the molecular basis of a disease and of biological processes in organisms. Below are some tools readily employed by researchers in the field. Microsatellites Microsatellites, or simple sequence repeats (SSRs), are short repeating segments of DNA, with repeat units of up to six nucleotides, found at particular locations in the genome and used as genetic markers. Researchers can analyze these microsatellites in techniques such as DNA fingerprinting and paternity testing, since these repeats are highly unique to individuals and families. They can also be used in constructing genetic maps and in studying genetic linkage to locate the gene or mutation responsible for a specific trait or disease. Microsatellites can also be applied in population genetics to study comparisons between groups. Genome-wide association studies Genome-wide association studies (GWAS) are a technique that relies on single nucleotide polymorphisms (SNPs) to study genetic variations in populations that can be associated with a particular disease. The Human Genome Project mapped the entire human genome and has made this approach more readily available and cost-effective for researchers to implement. In order to conduct a GWAS, researchers use two groups: one group that has the disease being studied and another that acts as a control and does not have that particular disease. DNA samples are obtained from participants, and their genomes can then be read with laboratory instruments and quickly surveyed to compare participants and look for SNPs that can potentially be associated with the disease. This technique allows researchers to pinpoint genes and locations of interest in the human genome that they can then further study to identify the cause of the disease.
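As a small aside on the microsatellite markers described above, the next Python sketch (illustrative only; the repeat unit and the two allele sequences are invented for demonstration) shows how the length of a tandem repeat can be read off a sequence, which is the property that makes microsatellites informative as genetic markers.
# Illustrative sketch: find the longest tandem run of a repeat unit (e.g. "CA").
def longest_repeat_run(dna, unit):
    """Return (start_index, repeat_count) for the longest tandem run of 'unit'."""
    best = (0, 0)
    for start in range(len(dna)):
        count = 0
        while dna.startswith(unit, start + count * len(unit)):
            count += 1
        if count > best[1]:
            best = (start, count)
    return best

# Hypothetical alleles differing only in the number of CA repeats.
allele_1 = "GGTCACACACACACATTG"  # six CA repeats
allele_2 = "GGTCACACACATTG"      # four CA repeats
print(longest_repeat_run(allele_1, "CA"), longest_repeat_run(allele_2, "CA"))
The differing repeat counts returned for the two invented alleles are exactly the kind of length variation that is scored when microsatellites are used for fingerprinting, paternity testing, or linkage mapping.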
Karyotyping Karyotyping allows researchers to analyze chromosomes during metaphase of mitosis, when they are in a condensed state. Chromosomes are stained and visualized through a microscope to look for any chromosomal abnormalities. This technique can be used to detect congenital genetic disorders such as Down syndrome, identify the sex of embryos, and diagnose some cancers that are caused by chromosome mutations such as translocations. Modern applications Genetic engineering Genetic engineering is an emerging field of science, and researchers are able to leverage molecular genetic technology to modify the DNA of organisms and create genetically modified and enhanced organisms for industrial, agricultural and medical purposes. This can be done through genome editing techniques, which can involve modifying base pairings in a DNA sequence, or adding and deleting certain regions of DNA. Gene editing Gene editing allows scientists to alter or edit an organism's DNA. One way to do this is through the CRISPR/Cas9 technique, which was adapted from an immune defense system that occurs naturally in bacteria. This technique relies on the protein Cas9, which allows scientists to make a cut in strands of DNA at a specific location, and it uses a specialized guide RNA sequence to ensure the cut is made in the proper location in the genome. Scientists then use the cell's DNA repair pathways to induce changes in the genome; this technique has wide implications for disease treatment. Personalized medicine Molecular genetics has wide implications for medical advancement, and understanding the molecular basis of a disease allows the opportunity for more effective diagnostics and therapies. One of the goals of the field is personalized medicine, where an individual's genetics can help determine the cause of a disease and tailor the treatment, potentially allowing for more individualized and more effective treatment approaches. For example, certain genetic variations in individuals could make them more responsive to a particular drug, while others could have a higher risk of adverse reactions to treatment. This information would allow researchers and clinicians to make better-informed decisions about treatment efficacy for patients rather than relying on the standard trial-and-error approach. Forensic genetics Forensic genetics plays an essential role in criminal investigations through the use of various molecular genetic techniques. One common technique is DNA fingerprinting, which is done using a combination of molecular genetic techniques like polymerase chain reaction (PCR) and gel electrophoresis. PCR is a technique that allows a target DNA sequence to be amplified, meaning even a tiny quantity of DNA from a crime scene can be extracted and replicated many times to provide a sufficient amount of material for analysis. Gel electrophoresis allows the DNA fragments to be separated based on size, and the pattern that is derived, known as a DNA fingerprint, is unique to each individual. This combination of molecular genetic techniques allows a DNA sample to be extracted, amplified, analyzed and compared with others, and it is a standard approach in forensics.
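To make the fingerprint comparison step concrete, the following Python sketch is a simplified illustration (the locus names and repeat counts are hypothetical, not real casework data) of comparing two short-tandem-repeat profiles locus by locus, which is the basic logic behind matching a crime-scene profile against a reference sample.
# Illustrative sketch: compare two STR profiles (hypothetical loci and alleles).
def profiles_match(profile_a, profile_b):
    """Profiles match when every locus typed in both has identical allele pairs."""
    shared = set(profile_a) & set(profile_b)
    identical = all(profile_a[locus] == profile_b[locus] for locus in shared)
    return identical, len(shared)

crime_scene = {"locus_1": (11, 14), "locus_2": (8, 8), "locus_3": (29, 31)}
reference   = {"locus_1": (11, 14), "locus_2": (8, 9), "locus_3": (29, 31)}
print(profiles_match(crime_scene, reference))  # (False, 3): locus_2 differs
Real forensic matching also weighs how common each allele is in the population, but the locus-by-locus comparison shown here is the core of the method.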
See also Complementation (genetics) DNA damage (naturally occurring) DNA damage theory of aging Epigenetics Gene mapping Genetic code Genetic recombination Genomic imprinting History of genetics Homologous recombination Mutagenesis Regulation of gene expression Timeline of the history of genetics Transformation (genetics) Sources and notes Further reading Sites and databases related to genetics, cytogenetics and oncology, at Atlas of Genetics and Cytogenetics in Oncology and Haematology Jeremy W. Dale and Simon F. Park. 2010. Molecular Genetics of Bacteria, 5th Edition External links
Molecular genetics
[ "Chemistry", "Biology" ]
3,522
[ "Molecular genetics", "Molecular biology" ]
339,894
https://en.wikipedia.org/wiki/Histone%20methyltransferase
Histone methyltransferases (HMTs) are histone-modifying enzymes (e.g., histone-lysine N-methyltransferases and histone-arginine N-methyltransferases) that catalyze the transfer of one, two, or three methyl groups to lysine and arginine residues of histone proteins. The attachment of methyl groups occurs predominantly at specific lysine or arginine residues on histones H3 and H4. Two major types of histone methyltransferases exist: lysine-specific (which can be SET (Su(var)3-9, Enhancer of Zeste, Trithorax) domain-containing or non-SET domain-containing) and arginine-specific. In both types of histone methyltransferases, S-adenosyl methionine (SAM) serves as a cofactor and methyl-group donor. The genomic DNA of eukaryotes associates with histones to form chromatin. The level of chromatin compaction depends heavily on histone methylation and other post-translational modifications of histones. Histone methylation is a principal epigenetic modification of chromatin that determines gene expression, genomic stability, stem cell maturation, cell lineage development, genetic imprinting, DNA methylation, and cell mitosis. Types The class of lysine-specific histone methyltransferases is subdivided into SET domain-containing and non-SET domain-containing. As indicated by their monikers, these differ in the presence of a SET domain, which is a type of protein domain. Human genes encoding proteins with histone methyltransferase activity include: ASH1L DOT1L EHMT1, EHMT2, EZH1, EZH2 MLL, MLL2, MLL3, MLL4, MLL5 NSD1 PRDM2 SET, SETBP1, SETD1A, SETD1B, SETD2, SETD3, SETD4, SETD5, SETD6, SETD7, SETD8, SETD9, SETDB1, SETDB2 SETMAR, SMYD1, SMYD2, SMYD3, SMYD4, SMYD5, SUV39H1, SUV39H2, KMT5B, SUV420H2 SET domain-containing lysine-specific Structure The structures involved in methyltransferase activity are the SET domain (composed of approximately 130 amino acids), the pre-SET domain, and the post-SET domain. The pre-SET and post-SET domains flank the SET domain on either side. The pre-SET region contains cysteine residues that form triangular zinc clusters, tightly binding the zinc atoms and stabilizing the structure. The SET domain itself contains a catalytic core rich in β-strands that, in turn, make up several regions of β-sheets. Often, the β-strands found in the pre-SET domain will form β-sheets with the β-strands of the SET domain, leading to slight variations in the SET domain structure. These small changes alter the target residue site specificity for methylation and allow the SET domain methyltransferases to target many different residues. This interplay between the pre-SET domain and the catalytic core is critical for enzyme function. Catalytic mechanism In order for the reaction to proceed, S-adenosyl methionine (SAM) and the lysine residue of the substrate histone tail must first be bound and properly oriented in the catalytic pocket of the SET domain. Next, a nearby tyrosine residue deprotonates the ε-amino group of the lysine residue. The lysine chain then makes a nucleophilic attack on the methyl group on the sulfur atom of the SAM molecule, transferring the methyl group to the lysine side chain. Non-SET domain-containing lysine-specific Instead of a SET domain, the non-SET domain-containing histone methyltransferases utilize the enzyme Dot1. Unlike the SET-domain enzymes, which target the lysine tail region of the histone, Dot1 methylates a lysine residue in the globular core of the histone, and it is the only enzyme known to do so.
In recent studies, a possible homolog of Dot1 found in archaea has been shown to methylate an archaeal histone-like protein. Structure The N-terminal region of Dot1 contains the active site. A loop serving as the binding site for SAM links the N-terminal and the C-terminal domains of the Dot1 catalytic domain. The C-terminal domain is important for the substrate specificity and binding of Dot1 because the region carries a positive charge, allowing for a favorable interaction with the negatively charged backbone of DNA. Due to structural constraints, Dot1 is only able to methylate histone H3. Arginine-specific There are three different types of protein arginine methyltransferases (PRMTs) and three types of methylation that can occur at arginine residues on histone tails. The first type of PRMTs (PRMT1, PRMT3, CARM1/PRMT4, and Rmt1/Hmt1) produces monomethylarginine and asymmetric dimethylarginine (Rme2a). The second type (JBP1/PRMT5) produces monomethyl or symmetric dimethylarginine (Rme2s). The third type (PRMT7) produces only monomethylated arginine. The differences in methylation patterns of PRMTs arise from restrictions in the arginine-binding pocket. Structure The catalytic domain of PRMTs consists of a SAM-binding domain and a substrate-binding domain (about 310 amino acids in total). Each PRMT has a unique N-terminal region and a catalytic core. The arginine residue and SAM must be correctly oriented within the binding pocket. SAM is secured inside the pocket by a hydrophobic interaction between an adenine ring and a phenyl ring of a phenylalanine. Catalytic mechanism A glutamate on a nearby loop interacts with nitrogens on the target arginine residue. This interaction redistributes the positive charge and leads to the deprotonation of one nitrogen group, which can then make a nucleophilic attack on the methyl group of SAM. Differences between the two types of PRMTs determine the next methylation step: either catalyzing the dimethylation of one nitrogen or allowing the symmetric methylation of both groups. However, in both cases the proton stripped from the nitrogen is dispersed through a histidine–aspartate proton relay system and released into the surrounding matrix. Role in gene regulation Histone methylation plays an important role in epigenetic gene regulation. Methylated histones can either repress or activate transcription, as different experimental findings suggest, depending on the site of methylation. For example, it is likely that the methylation of lysine 9 on histone H3 (H3K9me3) in the promoter region of genes prevents excessive expression of these genes and, therefore, delays cell cycle transition and/or proliferation. In contrast, methylation of histone residues H3K4, H3K36, and H3K79 is associated with transcriptionally active euchromatin. Depending on the site and symmetry of methylation, methylated arginines are considered activating (histone H4R3me2a, H3R2me2s, H3R17me2a, H3R26me2a) or repressive (H3R2me2a, H3R8me2a, H3R8me2s, H4R3me2s) histone marks. Generally, the effect of a histone methyltransferase on gene expression strongly depends on which histone residue it methylates. See Histone#Chromatin regulation. Disease relevance Abnormal expression or activity of methylation-regulating enzymes has been noted in some types of human cancers, suggesting associations between histone methylation and malignant transformation of cells or formation of tumors.
In recent years, epigenetic modification of the histone proteins, especially the methylation of histone H3, in cancer development has been an area of emerging research. It is now generally accepted that, in addition to genetic aberrations, cancer can be initiated by epigenetic changes in which gene expression is altered without genomic abnormalities. These epigenetic changes include the loss or gain of methylation in both DNA and histone proteins. There is not yet compelling evidence that cancers develop purely through abnormalities in histone methylation or its signaling pathways; however, they may be a contributing factor. For example, down-regulation of methylation of lysine 9 on histone 3 (H3K9me3) has been observed in several types of human cancer (such as colorectal cancer, ovarian cancer, and lung cancer), arising from either a deficiency of H3K9 methyltransferases or elevated activity or expression of H3K9 demethylases. DNA repair The methylation of histone lysine has an important role in choosing the pathway for repairing DNA double-strand breaks. As an example, tri-methylated H3K36 is required for homologous recombinational repair, while dimethylated H4K20 can recruit the 53BP1 protein for repair by the pathway of non-homologous end joining. Further research Histone methyltransferases may be able to serve as biomarkers for the diagnosis and prognosis of cancers. Additionally, many questions still remain about the function and regulation of histone methyltransferases in the malignant transformation of cells, carcinogenesis of tissues, and tumorigenesis. See also Histone-modifying enzymes Histone acetyltransferase (HAT) Histone deacetylase (HDAC) RNA polymerase control by chromatin structure Histone methylation References Further reading External links GeneReviews/NCBI/NIH/UW entry on Kleefstra Syndrome EC 2.1.1 Epigenetics Methylation
Histone methyltransferase
[ "Chemistry" ]
2,141
[ "Methylation" ]
339,979
https://en.wikipedia.org/wiki/James%20Lick
James Lick (August 25, 1796 – October 1, 1876) was an American real estate investor, carpenter, piano builder, land baron, and patron of the sciences. The wealthiest man in California at the time of his death, Lick left the majority of his estate to social and scientific causes. Early years James Lick was born to Pennsylvania Dutch parents in Stumpstown (Fredericksburg), Pennsylvania on August 25, 1796. Lick's paternal grandfather, William Lük, was a German immigrant from the Palatinate, and served in the American Revolutionary War. William's son, John, anglicized the family name to Lick. The son of a carpenter, Lick began learning the craft at an early age. When he was twenty-one, he had a romance with Barbara Snavely, the mother of his only child, John Henry. They never married, and the romance failed. Lick left Stumpstown for Baltimore, Maryland, where he learned the art of piano making. He quickly mastered the skill, and moved to New York City and established his own shop. In 1821 Lick moved to Argentina, after learning that his pianos were being exported to South America. South American years Lick found his time in Buenos Aires to be difficult, because of his ignorance of Spanish and the turbulent political situation in the country. Despite this, his business thrived, and in 1825 Lick left Argentina to tour Europe for a year. On his return trip, his ship was captured by the Portuguese, and the passengers and crew were taken to Montevideo as prisoners of war. Lick escaped captivity and returned to Buenos Aires on foot. In 1832, Lick returned to Stumpstown. He failed to reunite with Barbara Snavely and their son, and returned to Buenos Aires. He decided the political situation was too unstable and moved to Valparaíso, Chile. After four years, he again moved his business, this time to Lima, Peru. In 1846, Lick returned to North America. Anticipating the Mexican–American War and the future annexation of California, he decided to settle there. A backlog of orders for his pianos delayed him 18 months, as his Mexican workers returned to their homes to join the Mexican Army. He finished the orders himself. California years Lick arrived in San Francisco, California, in January 1848, bringing with him his tools, work bench, $30,000 in gold (valued at approximately $2.75 million as of 2020), and 600 pounds (275 kilograms) of chocolate. The chocolate quickly sold, so Lick sent back word convincing his friend and neighbor in Peru, the confectioner Domingo Ghirardelli, to move to San Francisco, where he founded the Ghirardelli Chocolate Company. Upon his arrival, Lick began buying real estate in the small village of San Francisco. The discovery of gold at Sutter's Mill near Sacramento a few days after Lick's arrival in the future state began the California Gold Rush and created a housing boom in San Francisco, which grew from about one thousand residents in 1848 to over twenty thousand by 1850. Lick got a touch of "gold fever" and sought to mine the metal, but after a week decided his fortune was to be made by owning land, not digging in it. Lick continued buying land in San Francisco, and also began buying farmland in and around San Jose, where he planted orchards and built the largest flour mill in the state. He invited his son to join him there in 1854; however, the younger Lick suffered from poor health and returned to Pennsylvania in 1863. 
In 1861, Lick began construction of a hotel in San Francisco known as Lick House, at 41 Montgomery Street (Lick Street, 111 Sutter Street), at the intersection with Sutter Street. The hotel had a dining room that could seat 400, based on a similar room at the Palace of Versailles. Lick House was considered the finest hotel west of the Mississippi River. The hotel was destroyed in the fire following the San Francisco earthquake of 1906; the Hunter-Dulin Building was later constructed on the site in 1925–1927. After the hotel was completed, Lick returned to his San Jose orchards. In 1874, Lick suffered a massive stroke in the kitchen of his home in Santa Clara. The following morning, he was found by his employee, Thomas Fraser, and taken to Lick House, where he could be better cared for. At the time of his illness, his estates, outside his considerable properties in Santa Clara County and San Francisco, included large holdings around Lake Tahoe, a large ranch in Los Angeles County, and all of Santa Catalina Island, making Lick the richest man in California. Over the next three years, Lick spent his time determining how to dispense his fortune. He originally wanted to build giant statues of himself and his parents, and erect a pyramid larger than the Great Pyramid of Giza in his own honor in downtown San Francisco. Through the efforts of George Davidson, president of the California Academy of Sciences, Lick was persuaded to leave the greatest portion of his fortune to the establishment of a mountaintop observatory, with the largest, most powerful telescope yet built. In 1874, he placed $3,000,000 ($65,200,000 relative value in 2017) at the disposal of seven trustees, by whom the funds were to be applied to specific uses. He replaced the board in 1875 with Faxon Atherton, John Nightingale, Bernard D. Murphy, and his son, John H. Lick. The principal divisions of the funds were:
 $700,000 to the University of California for the construction of an observatory and the installation of a telescope more powerful than any other
 $150,000 for the building and maintenance of the free public James Lick Baths in San Francisco
 $540,000 to found and endow an institution in San Francisco to be known as the California School of Mechanic Arts
 $100,000 for the erection of three appropriate groups of bronze statuary to represent three periods in Californian history, to be placed before the city hall of San Francisco
 $60,000 to erect in Golden Gate Park, San Francisco, a memorial to Francis Scott Key, author of “The Star-Spangled Banner”
Lick had had an interest in astronomy since at least 1860, when he and George Madeira, the founder of the first observatory in California, spent several nights observing. They met again in 1873, and Lick said that Madeira's telescopes were the only ones he had ever used. In 1875, Thomas Fraser recommended a site at the summit of Mount Hamilton, near San Jose. Lick approved, on the condition that Santa Clara County build a "first-class" road to the site. The county agreed, and the road was completed by the fall of 1876. On October 1, 1876, Lick died in his room in Lick House, San Francisco. In 1887, his body was moved to its final resting place, under the future home of the Great Lick Refracting Telescope. Legacy Lick's will stipulated that all of his fortune should be used for the public good, including $700,000 for the building of the observatory. In 1888, Lick Observatory was completed and given to the University of California as the Lick Astronomical Department. 
The Observatory was the first permanently staffed mountaintop observatory in the world and housed the largest refracting telescope in the world at that time. In 1887, Lick's body was buried under the future site of the telescope, with a brass tablet bearing the inscription "Here lies the body of James Lick." His will stipulates that fresh flowers always be kept on his grave. James Lick Mansion in Santa Clara is a nationally registered historical landmark and is leased at very low rates to non-profit organizations. The mansion was occupied by the S.A.F.E. Place. In 1884, the Lick Old Ladies' Home, later renamed the University Mound Ladies Home, was established in San Francisco with a grant from the Lick estate. The Conservatory of Flowers and the statue of Francis Scott Key in Golden Gate Park were donated to San Francisco by Lick. The Pioneer Monument in front of San Francisco's City Hall was donated to the city by Lick. James Lick High School in San Jose and James Lick Middle School, Lick-Wilmerding High School, and the James Lick Freeway, all in San Francisco, are named in his honor. The Southern Pacific Railroad named a control point (CP Lick) after Lick on its Coast Line route in San Jose, California. At the same location there were also a Lick Station and a Lick Branch rail line that went into San Jose's Almaden Valley; both were abandoned in the early 1980s. The crater Lick on the Moon and the asteroid 1951 Lick are named after him. Lickdale, Pennsylvania, a village approximately 3 miles west of Fredericksburg, Pennsylvania (formerly Stumpstown), was named for James Lick. Lickdale was a prominent 19th-century canal port along a branch of the Union Canal and contained a large commercial ice house. A large monument to James Lick was erected by the local citizens in the community cemetery in Fredericksburg, Pennsylvania. Lick is commemorated in the scientific name of a species of lizard, Sceloporus licki, which is endemic to Baja California Sur. References External links University of California Observatory, biography of James Lick University Mound Ladies Home, a nonprofit assisted living residence for San Francisco women, founded with a bequest from James Lick American businesspeople in real estate American musical instrument makers Businesspeople from California Philanthropists from California Pennsylvania Dutch people People associated with astronomy People of the California Gold Rush People from Lebanon County, Pennsylvania People from San Francisco 1796 births 1876 deaths 19th-century American philanthropists 19th-century American businesspeople Expatriates in Argentina Expatriates in Chile
James Lick
[ "Astronomy" ]
1,956
[ "People associated with astronomy" ]
339,992
https://en.wikipedia.org/wiki/Myhill%E2%80%93Nerode%20theorem
In the theory of formal languages, the Myhill–Nerode theorem provides a necessary and sufficient condition for a language to be regular. The theorem is named for John Myhill and Anil Nerode, who proved it at the University of Chicago in 1957. Statement Given a language L, and a pair of strings x and y, define a distinguishing extension to be a string z such that exactly one of the two strings xz and yz belongs to L. Define a relation ~L on strings as x ~L y if there is no distinguishing extension for x and y. It is easy to show that ~L is an equivalence relation on strings, and thus it divides the set of all strings into equivalence classes. The Myhill–Nerode theorem states that a language L is regular if and only if ~L has a finite number of equivalence classes, and moreover, that this number is equal to the number of states in the minimal deterministic finite automaton (DFA) accepting L. Furthermore, every minimal DFA for the language is isomorphic to the canonical one obtained from the equivalence classes. Generally, for any language, an automaton accepting the language can be constructed from the equivalence classes in this way; however, it does not necessarily have finitely many states. The Myhill–Nerode theorem shows that finiteness is necessary and sufficient for language regularity. Some authors refer to the relation ~L as the Nerode congruence, in honor of Anil Nerode. Use and consequences The Myhill–Nerode theorem may be used to show that a language L is regular by proving that the number of equivalence classes of ~L is finite. This may be done by an exhaustive case analysis in which, beginning from the empty string, distinguishing extensions are used to find additional equivalence classes until no more can be found. For example, the language consisting of binary representations of numbers that can be divided by 3 is regular. Given the empty string, 00 (or 11), 01, and 10 are distinguishing extensions resulting in the three classes (corresponding to numbers that give remainders 0, 1 and 2 when divided by 3), but after this step there is no distinguishing extension anymore. The minimal automaton accepting our language would have three states corresponding to these three equivalence classes. Another immediate corollary of the theorem is that if for a language the relation ~L has infinitely many equivalence classes, the language is not regular. It is this corollary that is frequently used to prove that a language is not regular. Generalizations The Myhill–Nerode theorem can be generalized to tree automata. See also Pumping lemma for regular languages, an alternative method for proving that a language is not regular. The pumping lemma may not always be able to prove that a language is not regular. Syntactic monoid References Further reading Formal languages Theorems in discrete mathematics Finite automata
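To make the divisibility-by-3 example concrete, the following Python sketch (not part of the original article) approximates the Myhill–Nerode relation by brute force and confirms that short binary strings fall into exactly three equivalence classes, matching the three states of the minimal DFA. It assumes the convention that the empty string denotes the number 0; the bound on extension lengths, the string length cutoff, and all function names are illustrative choices rather than any standard API.

```python
# Illustrative sketch: Myhill-Nerode classes for the language of binary
# strings whose value is divisible by 3. Assumes the empty string denotes 0.
from itertools import product

def value(s):
    """Numeric value of a binary string, read left to right."""
    v = 0
    for c in s:
        v = 2 * v + (1 if c == "1" else 0)
    return v

def in_language(s):
    """Membership test for L = {binary strings whose value is divisible by 3}."""
    return value(s) % 3 == 0

def equivalent(x, y, max_len=6):
    """Bounded approximation of the Myhill-Nerode relation:
    x ~ y iff no extension z (here, up to length max_len) distinguishes them."""
    extensions = [""] + ["".join(p)
                         for n in range(1, max_len + 1)
                         for p in product("01", repeat=n)]
    return all(in_language(x + z) == in_language(y + z) for z in extensions)

# Group all strings of length <= 4 into classes by picking representatives.
strings = [""] + ["".join(p) for n in range(1, 5) for p in product("01", repeat=n)]
representatives = []
for s in strings:
    if not any(equivalent(s, r) for r in representatives):
        representatives.append(s)

print(representatives)            # ['', '1', '10'] -> remainders 0, 1, 2

# The corresponding minimal DFA simply tracks the value modulo 3.
def accepts(s):
    state = 0                     # start state doubles as the accepting state
    for c in s:
        state = (2 * state + (1 if c == "1" else 0)) % 3
    return state == 0

assert all(accepts(s) == in_language(s) for s in strings)
```

Because two strings with the same value modulo 3 can never be separated by any extension, the bounded search above already recovers the true classes for this language; in general, such a brute-force check is only a heuristic and cannot by itself prove that the number of classes is finite.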
Myhill–Nerode theorem
[ "Mathematics" ]
559
[ "Discrete mathematics", "Mathematical theorems", "Formal languages", "Mathematical logic", "Theorems in discrete mathematics", "Mathematical problems" ]