Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
30,521,194
https://en.wikipedia.org/wiki/Auriscalpium%20andinum
Auriscalpium andinum is a species of fungus in the family Auriscalpiaceae of the Russulales order. Originally described in 1895 as Hydnum andinum by Narcisse Théophile Patouillard, it was transferred to the genus Auriscalpium in 2001 by Leif Ryvarden. It is found in Ecuador. References External links Fungi described in 1895 Fungi of Ecuador Russulales Taxa named by Narcisse Théophile Patouillard Fungus species
Auriscalpium andinum
Biology
105
62,856,495
https://en.wikipedia.org/wiki/Vichy%20Catal%C3%A1n
Vichy Catalán is a Spanish brand of carbonated mineral water bottled from its homonymous thermal spring in Caldes de Malavella, Girona. It is the leading carbonated mineral water in Spain, with 40% market share. The brand is owned by Grup Vichy Catalan («Premium Mix Group S.L.»), founded by the physician and surgeon Modest Furest i Roca after he bought the land around the water spring in Caldes de Malavella and discovered the mineral-medicinal properties of its thermal waters. In 2022, the global revenue of the beverage subsidiary amounted to 133.5 million euros, with a profit of 1.58 million euros and a workforce of 410 people. Composition Bibliography Content in this article is translated from the existing Catalan Wikipedia article at Grup Vichy Catalan; see its history for attribution. References External links Official site La Tienda Vichy Manantial de Sant Hilari Mineral water Spanish brands
Vichy Catalán
Chemistry
192
5,126,167
https://en.wikipedia.org/wiki/Berry%20mechanism
The Berry mechanism, or Berry pseudorotation mechanism, is a type of vibration causing molecules of certain geometries to isomerize by exchanging the two axial ligands (see the figure) for two of the equatorial ones. It is the most widely accepted mechanism for pseudorotation and most commonly occurs in trigonal bipyramidal molecules such as PF5, though it can also occur in molecules with a square pyramidal geometry. The Berry mechanism is named after R. Stephen Berry, who first described this mechanism in 1960. Berry mechanism in trigonal bipyramidal structure The process of pseudorotation occurs when the two axial ligands close like a pair of scissors, pushing their way in between two of the equatorial groups, which scissor out to accommodate them. The axial and equatorial pairs move at the same rate, each changing the angle it makes with its counterpart. This forms a square-based pyramid whose base is the four interchanging ligands and whose apex is the pivot ligand, which has not moved. The two originally equatorial ligands then open out until they are 180 degrees apart, becoming axial groups perpendicular to the direction of the axial groups before the pseudorotation. The barrier to this process is about 3.6 kcal/mol in PF5. This rapid exchange of axial and equatorial ligands renders complexes with this geometry unresolvable (unlike carbon atoms with four distinct substituents), except at low temperatures or when one or more of the ligands is bi- or poly-dentate. Berry mechanism in square pyramidal structure The Berry mechanism in square pyramidal molecules (such as IF5) is somewhat like the inverse of the mechanism in bipyramidal molecules. Starting at the "transition phase" of bipyramidal pseudorotation, one pair of fluorines scissors back and forth with a third fluorine, causing the molecule to vibrate. Unlike in pseudorotation of bipyramidal molecules, the atoms and ligands not actively vibrating in the "scissor" motion still participate in the process of pseudorotation; they make general adjustments based on the movement of the actively vibrating atoms and ligands. This pathway, however, requires a much larger activation energy, of about 26.7 kcal/mol. See also Pseudorotation Bailar twist Bartell mechanism Ray–Dutt twist Fluxional molecule References Molecular geometry Chemical kinetics
Berry mechanism
Physics,Chemistry
503
23,981,224
https://en.wikipedia.org/wiki/C20H32
The molecular formula C20H32 (molar mass: 272.47 g/mol) may refer to: β-Araneosene, a diterpene Cembrene A, a diterpene Elisabethatriene, a bicyclic compound Laurenene, a diterpene Sclarene, a diterpene Stemarene, a diterpene Stemodene, a diterpene Taxadiene, a taxane diterpene
C20H32
Chemistry
117
43,339,730
https://en.wikipedia.org/wiki/Choiromyces%20aboriginum
Choiromyces aboriginum is a species of truffle-like fungus in the genus Choiromyces, part of the family Tuberaceae. It is found in several regions of Australia, where it has been used as a food and as a source of water. Distribution This fungus is found in the dry areas of South Australia, Western Australia and the Northern Territory. Uses In Australia, it has been used as a traditional native food and as a source of water. The fruiting bodies were eaten raw or cooked, and Kalotas reported one experience as follows: "They were cooked in hot sand and ashes for over an hour, and then eaten. They had a rather soft consistency (a texture akin to that of soft, camembert-like cheese) and a bland taste. Cooked specimens left for 24 hours and then reheated developed a flavour like that of baked cheese." References Pezizales Fungus species
Choiromyces aboriginum
Biology
195
23,951,499
https://en.wikipedia.org/wiki/Silandrone
Silandrone (developmental code name SC-16148), also known as testosterone 17β-trimethylsilyl ether or 17β-trimethylsilyltestosterone, as well as 17β-(trimethylsiloxy)androst-4-en-3-one, is a synthetic anabolic-androgenic steroid (AAS) and an androgen ether – specifically, the 17β-trimethylsilyl ether of testosterone – which was developed by G. D. Searle & Company in the 1960s but was never marketed. It has a very long duration of action when given via subcutaneous or intramuscular injection, as well as significantly greater potency than testosterone propionate. In addition, silandrone, unlike testosterone and most esters of testosterone such as testosterone propionate, is orally active. See also List of androgen esters References Abandoned drugs Androgen ethers Anabolic–androgenic steroids Androstanes Prodrugs Testosterone Trimethylsilyl compounds
Silandrone
Chemistry
227
24,313,526
https://en.wikipedia.org/wiki/Hirox
Hirox (ハイロックス) is a lens company in Tokyo, Japan, that created the first digital microscope in 1985. The company is now known as Hirox Co., Ltd. Hirox's main business is digital microscopes, but it still makes lenses for a variety of items, including rangefinders. Hirox's newest digital microscope systems are currently the RH-2000 and the RH-8800. The RH-2000 connects to a desktop computer by USB 3.0 and USB 2.0. The RH-8800 system is a standalone system with the computer built in. Both are capable of 3D rotation, high dynamic range, 2D and 3D measurement, 2D and 3D tiling, as well as automated particle counting. History Hirox was founded in Tokyo, Japan, in 1978 as a lens and optical system manufacturer. In 1980 the company started to design and sell TV lenses for people with poor eyesight, and to supply products to the Swedish government. It introduced the first digital microscope in 1985, followed by a hand-held video microscope system in 1986, supplied to the Japanese police force. The Hirox Digital Microscope System started distribution in the USA in 1986. The 3D rotational microscope was introduced in 1992. From 2000, offices and associated companies were set up in Osaka, the USA, China, Nagoya, Korea, Europe, and Asia, with distribution agreements with LECO (USA), Leeds Precision Instruments, and Olympus Corporation. By 2014 these distribution agreements had ended. In 2018, a distribution agreement started with Nikon Metrology. One of the first major demonstrations of the Hirox technology was the high-resolution digitization of Girl with a Pearl Earring starting in 2018, resulting in a panoramic image of over 1 billion pixels, believed to be the first such panoramic image of this size. Digital microscope magnification The Hirox Digital Microscope System supports magnifications of up to 7000×. A primary difference between an optical and a digital microscope is the magnification. With an optical microscope, the magnification is the lens magnification multiplied by the eyepiece magnification. The magnification of a digital microscope is defined as the ratio of the size of the image on the monitor to the size of the subject. The Hirox Digital Microscope System has a 15" monitor. Optical and digital microscopes Since the digital microscope has the image projected directly onto the CCD camera, it is possible to obtain higher-quality recorded images than with an optical microscope. With the optical microscope, the lenses are designed for the optics of the eye. Attaching a CCD camera to an optical microscope will result in an image that is compromised by the eyepiece. 2D measurement The Hirox Digital Microscope System can measure distances on-screen. Calibration is needed at each magnification. 3D measurement 3D measurement is achieved with a digital microscope by image stacking. Using a step motor, the system takes images from the lowest focal plane in the field of view to the highest focal plane, then reconstructs these images into a 3D model based on contrast to give a 3D color image of the sample. From this 3D model, measurements can be made, but their accuracy depends on the step motor and the depth of field of the lens. The step motor is necessary to get accurate height information, and accuracy is higher with a shallower depth of field. The most accurate 3D measurement from a step motor for a digital microscope is 1 micrometre. The 3D measurement abilities include, but are not limited to, height, length, angle, radius, volume and area.
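The contrast-based image stacking just described is essentially a depth-from-focus reconstruction. The following sketch shows the core idea in Python with NumPy; it is a minimal illustration under stated assumptions (a grayscale image stack and a fixed step size), not Hirox's actual software, and the function name is hypothetical:

    import numpy as np

    def depth_from_focus(stack, z_step_um=1.0):
        # stack: array of shape (n_steps, height, width) holding grayscale images
        # captured as the step motor moves the focal plane from lowest to highest.
        sharpness = []
        for img in stack.astype(float):
            gy, gx = np.gradient(img)             # local intensity gradients
            sharpness.append(gx**2 + gy**2)       # high gradient energy = in focus
        sharpness = np.stack(sharpness)           # (n_steps, height, width)
        best_step = np.argmax(sharpness, axis=0)  # sharpest focal plane per pixel
        # Height map in micrometres; as noted above, accuracy is bounded by the
        # step size and by the depth of field of the lens.
        return best_step * z_step_um

A real system would also keep the in-focus colour value at each pixel, yielding the all-in-focus 3D colour image described above.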
Also, the 3D model can be shown as a texture model, wireframe, or rainbow graph. This data can be exported to be viewed on a PC or in programs such as MATLAB. 2D and 3D tiling 2D and 3D tiling, also known as stitching or creating a panoramic image, can be done with the more advanced digital microscope systems. In 2D tiling, images are automatically tiled together seamlessly in real time by moving the XY stage. 3D tiling combines the XY-stage movement of 2D tiling with the Z-axis movement of 3D measurement to create a 3D panoramic image. See also Microscope Digital microscope High dynamic range Optical microscope References Hirox Europe Hirox Asia Hirox Korea McCrone Group Seika Global Spec Hirox Japanese Wiki Microscopes Electronics companies of Japan Japanese brands Lens manufacturers Optics manufacturing companies Electronics companies established in 1978 Manufacturing companies based in Tokyo
Hirox
Chemistry,Technology,Engineering
942
55,914,590
https://en.wikipedia.org/wiki/Dunlap%20Institute%20for%20Astronomy%20%26%20Astrophysics
The Dunlap Institute for Astronomy and Astrophysics at the University of Toronto is an astronomical research centre. The institute was founded in 2008 with the help of endowed gifts to the University of Toronto from David M. Dunlap and J. Moffat Dunlap, using the proceeds from the sale of the David Dunlap Observatory. The Dunlap Institute is allied with and co-located with the University of Toronto's Department of Astronomy & Astrophysics and with the Canadian Institute for Theoretical Astrophysics, and no longer has any association or connection to the David Dunlap Observatory. Research Astronomers at the Dunlap Institute investigate a variety of topics, including: the structure of the Milky Way Galaxy cosmic magnetic fields cosmic explosions large-scale structure in the universe dark energy the cosmic microwave background Technology & Instrumentation Telescope, instrumentation and software projects with leadership from Dunlap scientists include: The Dragonfly Telephoto Array, which comprises many telephoto lenses and is designed to detect dim astronomical objects. Dragonfly was co-designed by the U of T's Roberto Abraham and Yale's Pieter van Dokkum. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) The South Pole Telescope, designed to study the cosmic microwave background from its location at the South Pole The Gemini InfraRed Multi-Object Spectrograph (GIRMOS), to be deployed on the Gemini South telescope in Chile in 2024 The Canadian Initiative for Radio Astronomy Data Analysis (CIRADA), which is producing advanced data products for the CHIME, ASKAP and VLA radio telescopes, and which is a pilot project for a Canadian Square Kilometre Array data centre. Training At the Dunlap's annual Introduction to Astronomical Instrumentation Summer School, undergraduate and graduate students from around the world attend lectures and labs. Undergraduate students also pursue summer research projects in the Dunlap Institute's Summer Undergraduate Research Program. Public outreach The Dunlap Institute runs many public outreach events, including: Astronomy on Tap TO SpaceTime Cool Cosmos (part of the International Year of Astronomy in 2009) Transit of Venus viewing (2012) Toronto Science Festival (in partnership with U of T Science Engagement) (2013) Dunlap Prize Lecture featuring Neil deGrasse Tyson (2014) Supermoon Lunar Eclipse viewing (2015) Partial Solar Eclipse viewing (2017) Planet gazing parties, in partnership with the Royal Astronomical Society of Canada Directors 2010–2012: James R. Graham 2012–2015: Peter Martin (Acting/Interim) 2015–present: Bryan Gaensler References University of Toronto Astronomy institutes and departments Astrophysics research institutes
Dunlap Institute for Astronomy & Astrophysics
Physics,Astronomy
521
12,083,298
https://en.wikipedia.org/wiki/List%20of%20renewable%20energy%20organizations
This is a list of notable renewable energy organizations: Associations Bioenergy World Bioenergy Association Biomass Thermal Energy Council (BTEC) Pellet Fuels Institute Geothermal energy Geothermal Energy Association Geothermal Rising Global Geothermal Alliance Hydropower International Hydropower Association (IHA) (International) National Hydropower Association (US) Renewable energy Agency for Non-conventional Energy and Rural Technology (ANERT), Kerala, India American Council on Renewable Energy American Solar Energy Society Clean Energy States Alliance (CESA) EKOenergy Energy-Quest Environmental and Energy Study Institute EurObserv'ER European Renewable Energy Council Green Power Forum International Renewable Energy Agency (IRENA) International Renewable Energy Alliance (REN Alliance) Office of Energy Efficiency and Renewable Energy REN21 Renewable and Appropriate Energy Laboratory Renewable Energy and Energy Efficiency Partnership (REEEP) RenewableUK Renewable Fuels Association Rocky Mountain Institute SustainableEnergy Trans-Mediterranean Renewable Energy Cooperation World Council for Renewable Energy The World Renewable Energy Association (WoREA) Solar energy International Solar Alliance (ISA) International Solar Energy Society Solar Cookers International Solar Energy Industries Association (SEIA) Wadebridge Renewable Energy Network (WREN) Wind energy American Wind Energy Association Citizen Partnerships for Offshore Wind (CPOW) Global Wind Energy Council WindEurope World Wind Energy Association Educational and research institutions Renewable energy Centre for Renewable Energy Systems Technology (CREST) at Loughborough University NaREC (UK National Renewable Energy Centre) National Renewable Energy Laboratory (NREL) RES - The School for Renewable Energy Science (University in Iceland and University in Akureyri) Norwegian Centre for Renewable Energy (SFFE) at NTNU, SINTEF. Centre for Alternative Technology (CAT) Solar energy Clean Energy Institute (CEI) at the University of Washington Florida Solar Energy Center (FSEC) Plataforma Solar de Almería (PSA) See also List of countries by renewable electricity production List of renewable energy topics by country List of photovoltaics companies List of large wind farms List of environmental organizations List of anti-nuclear groups List of photovoltaics research institutes Renewable Organizations Renewable energy commercialization
List of renewable energy organizations
Engineering
426
62,278,303
https://en.wikipedia.org/wiki/Gregory%20Beylkin
Gregory Beylkin (born 16 March 1953) is a Russian–American mathematician. Education and career He studied from 1970 to 1975 at the University of Leningrad, receiving his Diploma in Mathematics in November 1975. From 1976 to 1979 he was a research scientist at the Research Institute of Ore Geophysics, Leningrad. From 1980 to 1982 he was a graduate student at New York University, where he received his PhD under the supervision of Peter Lax. From 1982 to 1983 Beylkin was an associate research scientist at the Courant Institute of Mathematical Sciences. From 1983 to 1991 he was a member of the professional staff of Schlumberger-Doll Research in Ridgefield, Connecticut. Since 1991 he has been a professor in the Department of Applied Mathematics at the University of Colorado Boulder. He was a visiting professor at Yale University, the University of Minnesota, and the Mittag-Leffler Institute, and participated in 2012 and 2015 in the summer seminar on "Applied Harmonic Analysis and Sparse Approximation" at Oberwolfach. He is the author or co-author of over 100 articles in refereed journals and has served on several editorial boards. Awards and honors 1998 — Invited Speaker of the International Congress of Mathematicians 2012 — Fellow of the American Mathematical Society 2016 — Fellow of the Society for Industrial and Applied Mathematics Patents See also References External links 1953 births Living people 20th-century Russian mathematicians 21st-century Russian mathematicians 20th-century American mathematicians 21st-century American mathematicians Applied mathematicians Fellows of the American Mathematical Society Fellows of the Society for Industrial and Applied Mathematics Saint Petersburg State University alumni Courant Institute of Mathematical Sciences alumni University of Colorado Boulder alumni
Gregory Beylkin
Mathematics
326
4,430,027
https://en.wikipedia.org/wiki/SAIFI
The System Average Interruption Frequency Index (SAIFI) is commonly used as a reliability index by electric power utilities. SAIFI is the average number of interruptions that a customer would experience, and is calculated as \mathrm{SAIFI} = \frac{\sum_i \lambda_i N_i}{N_T}, where \lambda_i is the failure rate, N_i is the number of customers for location i, and N_T is the total number of customers served. In other words, SAIFI is measured in units of interruptions per customer. It is usually measured over the course of a year, and according to IEEE Standard 1366-1998 the median value for North American utilities is approximately 1.10 interruptions per customer. Sources Electric power Reliability indices
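As a worked example, the calculation can be written in a few lines of Python; the per-location failure rates and customer counts below are hypothetical values chosen for illustration:

    # SAIFI = sum(lambda_i * N_i) / N_T, in interruptions per customer served
    failure_rates = [0.8, 1.2, 2.0]   # lambda_i: interruptions per year at each location
    customers = [500, 300, 200]       # N_i: customers served at each location

    total_customers = sum(customers)  # N_T = 1000
    total_interruptions = sum(lam * n for lam, n in zip(failure_rates, customers))
    saifi = total_interruptions / total_customers
    print(f"SAIFI = {saifi:.2f} interruptions per customer")  # SAIFI = 1.16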
SAIFI
Physics,Engineering
115
72,632,599
https://en.wikipedia.org/wiki/Windows%2010X
Windows 10X was an edition of Windows 10, a major release of the Microsoft Windows series of operating systems. Announced by Microsoft on October 2, 2019, it was initially developed as an operating system to support dual-screen devices, such as the unreleased Surface Neo. 10X was expected to be released in 2020, but in May 2021 Microsoft announced that the project had been cancelled. However, some features and design changes from 10X were integrated into the newer Windows 11. While the operating system was originally designed for dual-screen devices, Windows 10X shifted its target to single-screen devices in 2020 due to increasing demand for traditional computers during the COVID-19 pandemic. Features New and changed Windows 10X introduced major changes to the Windows shell, abolishing legacy components in favor of new user experiences and enhanced security, as well as some notable design changes, which were integrated into Windows 11: The taskbar was centered. It had three sizes: small, intended for mouse-controlled desktop computers, and medium and large, intended for touch computers. The taskbar was automatically hidden and could be clicked or tapped to be shown. New Start menu: Microsoft redesigned the Start menu with a focus on productivity, with the search box now at the top instead of in the taskbar as in other editions of Windows 10, as well as a section of pinned apps, the successor to the Live Tiles of other Windows 10 editions and of Windows 8. The Action Center was renamed “Quick Settings” and given a redesign. Network/Internet controls, volume controls and power options were moved to Quick Settings. There was also an area to check notifications and control music playing from a specific app. Window borders were rounded. The out-of-box setup was updated to better fit the new user interface of 10X, with a more modern design; Cortana was no longer an integrated feature. The default UI used a light theme rather than a dark one. Windows Update improvements: The Windows Update process was improved to complete faster. Feature updates installed automatically in the background and rebooted the device only when required. Enhanced security: 10X introduced a new security system dubbed “State Separation”: instead of installing every file (including the user’s, the system’s, the applications’, etc.) into a single, accessible partition, which allowed attackers and malware to easily access system files, 10X installed system, application and other important files into a read-only partition, while leaving user files in a separate, accessible partition. Users and apps could therefore only access files in the user partition. Cancellation In May 2021, Microsoft announced that 10X was cancelled, but that new features and design changes would be integrated into other Microsoft products (such as Windows 11). References Windows 10 Discontinued versions of Microsoft Windows 10X
Windows 10X
Technology
578
76,145,702
https://en.wikipedia.org/wiki/Single-cell%20multi-omics%20integration
Single-cell multi-omics integration describes a suite of computational methods used to harmonize information from multiple "omes" to jointly analyze biological phenomena. This approach allows researchers to discover intricate relationships between different chemical-physical modalities by drawing associations across various molecular layers simultaneously. Multi-omics integration approaches can be grouped into three broad categories: early integration, intermediate integration, and late integration methods. Multi-omics integration can enhance experimental robustness by providing independent sources of evidence to address hypotheses, leveraging modality-specific strengths to compensate for another's weaknesses through imputation, and offering cell-type clustering and visualizations that are more aligned with reality. Background The emergence of single-cell sequencing technologies has revolutionized our understanding of cellular heterogeneity, uncovering a nuanced landscape of cell types and their associations with biological processes. Single-cell omics technologies have extended beyond the transcriptome to profile diverse physical-chemical properties at single-cell resolution, including whole genomes/exomes, DNA methylation, chromatin accessibility, histone modifications, epitranscriptome (e.g., mRNAs, microRNAs, tRNAs, lncRNAs), proteome, phosphoproteome, metabolome, and more. In fact, there is an expanding repository of publicly available single-cell datasets, exemplified by growing databases such as the Human Cell Atlas Project (HCA), the Cancer Genome Atlas (TCGA), and the ENCODE project. With the increasing diversity in both available datasets and data types, multi-omics data integration and multimodal data analysis represent pivotal trajectories for the future of systems biology. Single-cell multi-omics integration can reveal underappreciated relationships between chemical-physical modalities, broaden our definition of cell states beyond single-modality feature profiles, and provide independent evidence during analysis to support testing of biological hypotheses. However, the high dimensionality (features > observations), high degree of stochastic technical and biological variability, and sparsity of single-cell data (low molecule recovery efficiency) make computational integration a challenging problem. Furthermore, different solutions for multi-omics integration are available depending on factors such as whether the data is matched (simultaneous measurements derived from the same cell) or unmatched (measurements derived from different cells), whether cell-type annotations are available, or whether modality feature conversion is available, with different implementations tailored to suit the specific use case. As such, there are multiple approaches to single-cell data integration, each with a distinct use case, and each with its own set of advantages and disadvantages. Approaches to multi-omics integration Early integration Early integration is a method that concatenates (by binding rows and columns) two or more omics datasets into a single data matrix. Some advantages of early integration are that the approach is simple, highly interpretable, and capable of capturing relationships between features from different modalities. Early integration is primarily employed to merge datasets of the same data type (e.g., integrating two distinct scRNA-seq datasets). This is because integrating datasets from different modalities may lead to a combined feature set with variable feature value ranges.
For instance, expression data often span a wider range compared to accessibility data, which typically take values between 0 and 2. Early integration approaches produce data matrices with higher dimensionality compared to the original matrices. As such, dimensionality reduction methods such as feature selection and feature extraction are often necessary steps for downstream analysis. Feature selection involves retaining only the important variables from the original omic layers, while feature extraction transforms the original input features into combinations of the original features. The projection of high-dimensional data into a lower-dimensional space reduces noise and simplifies the dataset, resulting in easier data handling. Intermediate integration Intermediate integration describes a class of approaches which aim to analyze multiple omic datasets simultaneously without the need for prior data transformation (as this occurs during data integration). Several examples of intermediate integration include similarity-based integration, joint dimension reduction, and statistical modelling. Similarity-based integration Similarity-based integration aims to identify patterns across multi-omic datasets through the use of spectral clustering (e.g., Spectrum and PC-MSC), which clusters cells based on similarity matrices derived from a multi-omic dataset, or through graph fusion algorithms (e.g., Seurat4), which construct graphs from individual omics layers and merge them into a single graph. Joint dimension reduction Joint dimension reduction aims to reduce the complexity of multi-omics data by projecting observations onto a lower-dimensional latent space such that the different omics layers can be analyzed together. Canonical correlation analysis (CCA), non-negative matrix factorization (NMF) and manifold alignment are popular approaches for joint dimensionality reduction. Tools that use CCA or its derivative sparse CCA, such as Seurat3 and bindSC, identify linear relationships between datasets by identifying linear combinations of variables that maximize feature correlation. Tools which use NMF (e.g., LIGER and coupledNMF) extract low-dimensional representations of high-dimensional data such that both shared and dataset-specific factors across the multiple omics datasets can be identified. Manifold alignment (e.g., MATCHER and MAGAN) refers to an approach where low-dimensional representations of various multi-omic datasets are computed individually and then aligned in a common latent space. Statistical modeling Various statistical approaches, including the probabilistic Bayesian modeling framework (which allows for the incorporation of prior knowledge and uncertainties into the analysis), can be used to integrate multi-omic datasets. For instance, BREM-SC employs a Bayesian clustering framework to jointly cluster multi-omic datasets, while other tools like clonealign utilize Bayesian methods to integrate gene expression and copy number profiles for studying cancer clones. Late integration Late integration aims to preprocess and model omics modalities separately, and then combine the models at the end. The advantage of late integration is that tailored tools for each omics modality can be applied per modality. While late integration approaches are commonly used in the context of bulk multi-omics studies (e.g., Cluster-of-clusters analysis and Kernel Learning Integrative Clustering), late integration approaches to single-cell integration are still a novel field.
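A minimal sketch of this late-integration idea, assuming matched cells and hypothetical per-modality matrices (a toy co-association consensus, not any of the published tools named in this article), might look like this in Python:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, KMeans

    def consensus_cluster(modalities, k=4, seed=0):
        # modalities: list of (n_cells x n_features) arrays, one per omics layer,
        # with rows (cells) matched across layers. Each layer is clustered on its
        # own (late integration), then the results are merged via co-association.
        n_cells = modalities[0].shape[0]
        coassoc = np.zeros((n_cells, n_cells))
        for X in modalities:
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
            coassoc += labels[:, None] == labels[None, :]
        coassoc /= len(modalities)  # fraction of layers co-clustering each cell pair
        # Re-cluster cells using 1 - co-association as a consensus distance.
        return AgglomerativeClustering(
            n_clusters=k, metric="precomputed", linkage="average"
        ).fit_predict(1.0 - coassoc)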
For example, ensemble learning techniques such as ensemble clustering (e.g., SAME-clustering, Sc-GPE, EC-PGMGR) have demonstrated potential in aggregating clustering results from different sources. These methods combine the clustering results from different omics datasets to create a consensus clustering which models the relationships between the individual clustering results to find an improved global clustering solution across the different modalities. As late integration involves analyzing each individual omics layer separately before integrating the results into a consensus result, it may fail to capture interactions and relationships across different omics modalities. As such, some groups argue that late integration represents multiple parallel single-omics analyses conducted on multiple data types, rather than fulfilling the "true goal" of multi-omics integration, which is to discover inter-omics relationships present in multi-omics data. Considerations for multi-omics data integration Noise As single-cell data is prone to noise from both biological and technical sources, developing robust de-noising methods to mitigate noise may be necessary. In the context of single-cell experiments, biological variation arising from factors such as transcriptional bursts, differences in cell cycle, and cell microenvironment can introduce noise into the dataset. Additionally, technical variability resulting from factors like poor sequence quality, uneven sequence coverage, and sample contamination must also be addressed. Dataset compatibility Integrating different omic modalities can be challenging due to differences in the structure of different datasets. For example, scRNA-seq features are expressed on a continuous scale, whereas chromatin accessibility data (i.e., scATAC-seq) take values between 0 and 2 (two copies of each region per cell). As such, integration of different modalities may require additional steps to transform the datasets into a common latent space. Even then, integration strategies such as early integration may still be prone to issues of bias if the resulting matrix is disproportionately represented by features from one specific modality. Dimensionality Analyzing large-scale single-cell multi-omics datasets can be computationally intensive because of the high dimensionality of the datasets. Hence, the tools employed for integrating datasets must be computationally efficient, or computational methods should be utilized initially to reduce the dimensionality of the datasets (refer to dimensionality reduction). Interpretability and validation Many integration methods focus on statistical associations rather than detailed causal modeling. As such, interpreting and validating the results can be particularly challenging, especially if a neural network was utilized, as these methods are black boxes. The utility and validation of integration methods need to be assessed based on practical applications, such as accurately identifying biologically relevant multi-omic relationships. Matched and unmatched data The integration of single-cell multi-omic data presents different challenges depending on whether the datasets are matched or unmatched. Matched datasets refer to multiple omic layers that are measured from the same individual cell, whereas unmatched datasets are measured from different sets of cells. While matched datasets enable direct comparisons between the different omics layers within the same cell, they may not be as readily available as unmatched datasets.
On the other hand, while unmatched datasets allow for the integration of different sources and conditions, they require consideration of potential biases and confounding factors (e.g., differences in cell populations, experimental conditions, or sample preparation methods between datasets). Several approaches to multi-omics integration for unmatched data include matching by cell group (which requires cell type annotations), matching by shared features, or statistical approaches such as NMF. Applications and uses While single-modality datasets have proven to be a mainstay in systems biology, combining biological information across multiple modalities has the potential to address biological questions that cannot be answered by a single data type alone. Modelling biological networks For example, the integration of transcriptome and DNA accessibility has enabled the development of bioinformatic tools to infer cell-type-specific gene regulatory networks. This is achieved by leveraging transcription factor and target gene expression along with cis-regulatory information to impute relevant transcription factors and their regulatory partners. Expanding definitions of cell state Another application of multi-omics integration is in expanding definitions of cell states by incorporating features observed across multiple modalities. For instance, integrating protein marker detection with transcriptome profiling using a multi-omics sequencing technology such as CITE-seq can resolve cell state signatures based on joint gene regulatory and surface marker expression. This enables more robust inferences regarding cellular phenotypes, which are akin to and directly comparable with results from classical flow cytometry. Moreover, defining cell states based on clustering analysis within an integrated latent space may offer more stable estimations of cellular phenotypes compared to analysis within a single-modality latent space. Imputation Furthermore, multi-omics integration can overcome modality-specific limitations through imputation. For example, most spatial transcriptomic sequencing technologies suffer from limited spatial resolution (pixels comprising a mixture of local cells) and low feature complexity. Integration of spatial transcriptomics with scRNA-seq can help overcome these limitations by supporting the spatial deconvolution of low-resolution readouts and estimating the frequencies of each cell type. References Omics
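Returning to the early-integration recipe described at the start of this article (per-modality scaling, column-wise concatenation, then dimensionality reduction), a minimal Python sketch with simulated stand-ins for matched scRNA-seq and scATAC-seq matrices might look as follows; real pipelines involve many more steps:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    rna = rng.poisson(2.0, size=(100, 2000)).astype(float)      # 100 cells x 2000 genes
    atac = rng.integers(0, 3, size=(100, 5000)).astype(float)   # 100 cells x 5000 peaks (0-2)

    # Scale each modality separately so that neither dominates the combined feature
    # space, then bind columns (early integration) and reduce the dimensionality.
    combined = np.hstack([StandardScaler().fit_transform(rna),
                          StandardScaler().fit_transform(atac)])
    embedding = PCA(n_components=20).fit_transform(combined)    # 100 cells x 20 dimensions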
Single-cell multi-omics integration
Biology
2,473
342,815
https://en.wikipedia.org/wiki/Toyota%20Production%20System
The Toyota Production System (TPS) is an integrated socio-technical system, developed by Toyota, that comprises its management philosophy and practices. The TPS is a management system that organizes manufacturing and logistics for the automobile manufacturer, including interaction with suppliers and customers. The system is a major precursor of the more generic "lean manufacturing". Taiichi Ohno and Eiji Toyoda, Japanese industrial engineers, developed the system between 1948 and 1975. Originally called "just-in-time production", it builds on the approach created by the founder of Toyota, Sakichi Toyoda, his son Kiichiro Toyoda, and the engineer Taiichi Ohno. The principles underlying the TPS are embodied in The Toyota Way. Goals The main objectives of the TPS are to design out overburden (muri) and inconsistency (mura), and to eliminate waste (muda). The most significant effects on process value delivery are achieved by designing a process capable of delivering the required results smoothly, that is, by designing out "mura" (inconsistency). It is also crucial to ensure that the process is as flexible as necessary without stress or "muri" (overburden), since this generates "muda" (waste). Finally, the tactical improvements of waste reduction or the elimination of muda are very valuable. There are eight kinds of muda that are addressed in the TPS: Waste of overproduction (largest waste) Waste of time on hand (waiting) Waste of transportation Waste of processing itself Waste of excess inventory Waste of movement Waste of making defective products Waste of underutilized workers Concept Toyota Motor Corporation published an official description of TPS for the first time in 1992; this booklet was revised in 1998. In the foreword it was said: "The TPS is a framework for conserving resources by eliminating waste. People who participate in the system learn to identify expenditures of material, effort and time that do not generate value for customers", and further: "We have avoided a 'how-to' approach. The booklet is not a manual. Rather it is an overview of the concepts that underlie our production system. It is a reminder that lasting gains in productivity and quality are possible whenever and wherever management and employees are united in a commitment to positive change". TPS is grounded on two main conceptual pillars: Just-in-time – meaning "Making only what is needed, only when it is needed, and only in the amount that is needed" Jidoka – (Autonomation) meaning "Automation with a human touch" Toyota has developed various tools to transfer these concepts into practice and apply them to specific requirements and conditions in the company and business. Origins Toyota has long been recognized as a leader in the automotive manufacturing and production industry. Toyota received its inspiration for the system not from the American automotive industry (at that time the world's largest by far), but from visiting a supermarket. The idea of just-in-time production originated with Kiichiro Toyoda, founder of Toyota. The question was how to implement the idea. In reading descriptions of American supermarkets, Ohno saw the supermarket as the model for what he was trying to accomplish in the factory. A customer in a supermarket takes the desired amount of goods off the shelf and purchases them. The store restocks the shelf with enough new product to fill up the shelf space.
Similarly, a work-center that needed parts would go to a "store shelf" (the inventory storage point) for the particular part and "buy" (withdraw) the quantity it needed, and the "shelf" would be "restocked" by the work-center that produced the part, making only enough to replace the inventory that had been withdrawn. While low inventory levels are a key outcome of the system, an important element of the philosophy behind it is to work intelligently and eliminate waste so that only minimal inventory is needed. Many Western businesses, having observed Toyota's factories, set out to attack high inventory levels directly without understanding what made these reductions possible. The act of imitating without understanding the underlying concept or motivation may have led to the failure of those projects. Principles The underlying principles, called the Toyota Way, have been outlined by Toyota as follows: Continuous improvement Challenge (We form a long-term vision, meeting challenges with courage and creativity to realize our dreams.) Kaizen (We improve our business operations continuously, always driving for innovation and evolution.) Genchi Genbutsu (Go to the source to find the facts to make correct decisions.) Respect for people Respect (We respect others, make every effort to understand each other, take responsibility and do our best to build mutual trust.) Teamwork (We stimulate personal and professional growth, share the opportunities of development and maximize individual and team performance.) External observers have summarized the principles of the Toyota Way as: The right process will produce the right results Create continuous process flow to bring problems to the surface. Use the "pull" system to avoid overproduction. Level out the workload (heijunka). (Work like the tortoise, not the hare.) Build a culture of stopping to fix problems, to get quality right from the start. (Jidoka) Standardized tasks are the foundation for continuous improvement and employee empowerment. Use visual control so no problems are hidden. Use only reliable, thoroughly tested technology that serves your people and processes. Add value to the organization by developing your people and partners Grow leaders who thoroughly understand the work, live the philosophy, and teach it to others. Develop exceptional people and teams who follow your company's philosophy. Respect your extended network of partners and suppliers by challenging them and helping them improve. Continuously solving root problems drives organizational learning Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu, 現地現物); Make decisions slowly by consensus, thoroughly considering all options (Nemawashi, 根回し); implement decisions rapidly; Become a learning organization through relentless reflection (Hansei, 反省) and continuous improvement (Kaizen, 改善) that never stops. What this means is that it is a system for thorough waste elimination. Here, waste refers to anything which does not advance the process, everything that does not increase added value. Many people settle for eliminating the waste that everyone recognizes as waste. But much remains that simply has not yet been recognized as waste or that people are willing to tolerate. People had resigned themselves to certain problems, had become hostage to routine and abandoned the practice of problem-solving.
This going back to basics, exposing the real significance of problems and then making fundamental improvements, can be witnessed throughout the Toyota Production System. The principles of the Toyota Production System have been compared to production methods in the industrialization of construction. Sharing Toyota originally began sharing TPS with its parts suppliers in the 1990s. Because of interest in the program from other organizations, Toyota began offering instruction in the methodology to others. Toyota has even "donated" its system to charities, providing its engineering staff and techniques to non-profits in an effort to increase their efficiency and thus their ability to serve people. For example, Toyota assisted the Food Bank For New York City to significantly decrease waiting times at soup kitchens, packing times at a food distribution center, and waiting times in a food pantry. Toyota announced on June 29, 2011 the launch of a national program to donate its Toyota Production System expertise to nonprofit organizations, with the goal of improving their operations, extending their reach, and increasing their impact. By September, less than three months later, SBP, a disaster relief organization based out of New Orleans, reported that their home rebuilds had been reduced from 12 to 18 weeks to 6 weeks. Additionally, employing Toyota methods (like kaizen) had reduced construction errors by 50 percent. The company included SBP among its first 20 community organizations, along with AmeriCorps. Workplace Management Taiichi Ohno's Workplace Management (2007) outlines in 38 chapters how to implement the TPS. Some important concepts are: Chapter 1 The Wise Mend Their Ways - See the Analects of Confucius for further information. Chapter 4 Confirm Failures With Your Own Eyes Chapter 11 Wasted Motion Is Not Work Chapter 15 Just In Time - A phrase invented by Kiichiro Toyoda, the first president of Toyota. There is disagreement about what "just in time" really means in English. Taiichi Ohno, quoted in the book, says "'Just In Time' should be interpreted to mean that it is a problem when parts are delivered too early". Chapter 23 How To Produce At A Lower Cost - "One of the main fundamentals of the Toyota System is to make 'what you need, in the amount you need, by the time you need it', but to tell the truth there is another part to this and that is 'at lower cost'. But that part is not written down." World economies, events, and each individual job also play a part in production specifics. Commonly used terminology Andon (行灯) (English: A large lighted board used to alert floor supervisors to a problem at a specific station. Literally: Signboard) Chaku-Chaku (着々 or 着着) (English: Load-Load) Gemba (現場) (English: The actual place, the place where the real work is done; On site) Genchi Genbutsu (現地現物) (English: Go and see for yourself) Hansei (反省) (English: Self-reflection) Heijunka (平準化) (English: Production Smoothing) Jidoka (自働化) (English: Autonomation - automation with human intelligence) Just-in-Time (ジャストインタイム "Jasutointaimu") (JIT) Kaizen (改善) (English: Continuous Improvement) Kanban (看板, also かんばん) (English: Sign, Index Card) Manufacturing supermarket where all components are available to be withdrawn by a process Muda (無駄, also ムダ) (English: Waste) Mura (斑 or ムラ) (English: Unevenness) Muri (無理) (English: Overburden) Nemawashi (根回し) (English: Laying the groundwork, building consensus; literally: Going around the roots) Obeya (大部屋) (English: Manager's meeting.
Literally: Large room, war room, council room) Poka-yoke (ポカヨケ) (English: fail-safing, bulletproofing - to avoid (yokeru) inadvertent errors (poka)) Seibi (English: To Prepare) Seiri (整理) (English: Sort, removing whatever isn't necessary.) Seiton (整頓) (English: Organize) Seiso (清掃) (English: Clean and inspect) Seiketsu (清潔) (English: Standardize) Shitsuke (躾) (English: Sustain) See also Lean construction W. Edwards Deming Training Within Industry Production flow analysis Industrial engineering References Bibliography Emiliani, B., with Stec, D., Grasso, L. and Stodder, J. (2007), Better Thinking, Better Results: Case Study and Analysis of an Enterprise-Wide Lean Transformation, second edition, The CLBM, LLC, Kensington, Conn. Liker, Jeffrey (2003), The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, first edition, McGraw-Hill. Monden, Yasuhiro (1998), Toyota Production System: An Integrated Approach to Just-In-Time, third edition, Norcross, GA: Engineering & Management Press. Spear, Steven, and Bowen, H. Kent (September 1999), "Decoding the DNA of the Toyota Production System", Harvard Business Review. Womack, James P. and Jones, Daniel T. (2003), Lean Thinking: Banish Waste and Create Wealth in Your Corporation, revised and updated, HarperBusiness. Womack, James P., Jones, Daniel T., and Roos, Daniel (1991), The Machine That Changed the World: The Story of Lean Production, HarperBusiness. External links Toyota Production System History of the TPS at the Toyota Motor Manufacturing Kentucky Site Toyota Production System Terms Article: Lean Primer: Introduction Lean manufacturing Toyota Production System
Toyota Production System
Engineering
2,556
1,422,891
https://en.wikipedia.org/wiki/Streptophyta
Streptophyta, informally the streptophytes (from the Greek strepto 'twisted', for the morphology of the sperm of some members), is a clade of plants. The composition of the clade varies considerably between authors, but the definition employed here includes land plants and all green algae except the Chlorophyta and the more basal Prasinodermophyta. Classifications The composition of Streptophyta and similar groups (Streptophytina, Charophyta) varies in each classification. Some authors include only the Charales and Embryophyta (e.g., Streptophyta, Streptophytina Lewis & McCourt 2004); others include more groups (e.g., Charophyta Lewis & McCourt 2004; Karol et al. 2009; Adl et al. 2012, Streptophyta; de Reviers 2002; Leliaert et al. 2012, Streptobionta Kenrick & Crane 1997); some authors use this broader definition but exclude the Embryophyta (e.g., Charophyta Leliaert et al. 2012, Charophyceae Mattox & Stewart 1984, Streptophycophytes de Reviers 2002). The clade Streptophyta includes both unicellular and multicellular organisms. Streptophyta contains the freshwater charophyte green algae, some of which reproduce sexually by conjugation, and all land plants. Mesostigma viride, a unicellular green flagellate alga, may be a basal streptophyte. These earlier classifications have not taken into account that the Coleochaetophyceae and the Zygnemophyceae appear to have emerged in the Charophyceae + Embryophyta clade, resulting in the synonymy of the Phragmoplastophyta and Streptophytina/Streptophyta sensu stricto (a.k.a. Adl 2012) nomenclature. Jeffrey, 1967 Streptophyta Charales Embryophyta Lewis & McCourt 2004 Division Charophyta (charophyte algae and embryophytes) Class Mesostigmatophyceae (mesostigmatophytes) Class Chlorokybophyceae (chlorokybophytes) Class Klebsormidiophyceae (klebsormidiophytes) Class Zygnemophyceae (conjugates) Order Zygnematales (filamentous conjugates and saccoderm desmids) Order Desmidiales (placoderm desmids) Class Coleochaetophyceae (coleochaetophytes) Order Coleochaetales Subdivision Streptophytina Class Charophyceae (same as the Smith system, 1938) Order Charales (charophytes sensu stricto) Class Embryophyceae (embryophytes) Leliaert et al. 2012 Streptophyta charophytes Mesostigmatophyceae Chlorokybophyceae Klebsormidiophyceae Charophyceae Zygnematophyceae Coleochaetophyceae Embryophyta (land plants) Adl et al. 2012 Archaeplastida Adl et al. 2005 Chloroplastida Adl et al. 2005 (Viridiplantae Cavalier-Smith 1981) Chlorophyta Pascher 1914, emend. Lewis & McCourt 2004 Charophyta Migula 1897, emend. Karol et al. 2009 (Charophyceae Smith 1938, Mattox & Stewart 1984) Chlorokybus Geitler 1942 Mesostigma Lauterborn 1894 Klebsormidiophyceae van den Hoek et al. 1995 Phragmoplastophyta Lecointre & Guyander 2006 Zygnematophyceae van den Hoek et al. 1995, emend. Hall et al. 2009 Coleochaetophyceae Jeffrey 1982 Streptophyta Jeffrey 1967 Charophyceae Smith 1938, emend. Karol et al. 2009 (Charales Lindley 1836; Charophytae Engler 1887) Embryophyta Engler 1886, emend. Lewis & McCourt 2004 (Cormophyta Endlicher 1836; Plantae Haeckel 1866) Adl et al. 2019 Archaeplastida Adl et al. 2005 Chloroplastida Adl et al. 2005 (Viridiplantae Cavalier-Smith 1981) Phylum Streptophyta [Charophyta] Chlorokybus atmophyticus Mesostigma viride Family Klebsormidiophyceae Class Phragmoplastophyta Family Zygnemataceae Order Coleochaetophyceae Family Characeae Kingdom Embryophyta Phylogeny Genomic data have been used to reconstruct the relationships within Streptophyta. Streptofilum, described in 2018, appears to add a further early branch. References External links The plant tree of life: an overview and some points of view The closest land plants relatives Green algae Infrakingdoms
Streptophyta
Biology
1,137
29,190,336
https://en.wikipedia.org/wiki/Fort%20des%20Ayvelles
The Fort des Ayvelles, also known as the Fort Dubois-Crancé, is a fortification near the French communes of Villers-Semeuse and Les Ayvelles in the Ardennes, just to the south of Charleville-Mézières. As part of the Séré de Rivières system of fortifications, the fort was planned as part of a new ring of forts replacing the older citadel of Mézières with dispersed fortifications. With advances in the range and destructive power of artillery, the city's defensive perimeter had to be pushed away from the city center to the limits of artillery range. The Fort des Ayvelles was the only such fortification of the ensemble to be completed, as resources were diverted elsewhere. At the time of its construction the fort controlled the Meuse and the railway line linking Reims, Montmédy, Givet and Hirson. The Fort des Ayvelles was reduced in status in 1899, its masonry construction rendered obsolete by the advent of high-explosive artillery shells. However, it was re-manned for the First World War before it was captured by the Germans on 29 August 1914. The fort was partly destroyed in 1918. During the Battle of France in 1940 the fort was bombarded. French resisters were executed at Ayvelles during both world wars. At present the fort is maintained by a preservation society and may be visited. Description Built starting in 1876 under the direction of Captain Léon Boulenger, the fort was completed in 1878. The fort's four faces form a square perimeter, surrounded by a wide, deep ditch. The fort features particularly elaborate double caponiers to protect the outer wall and ditch on opposite corners, as well as counterscarps. The caponiers were provided with unique projecting watch-stations, or échauguettes. The fort and a subsidiary battery featured Mougin casemates, each armed with a single de Bange Model 1877 155 mm gun. The fort possessed 53 artillery pieces in 1899, manned by 880 men, and disposed in two-level casemates on a north-south line. The battery lies to the east, connected to the main fort by a covered causeway. The caponiers were damaged in both world wars and by the French military in explosives tests in 1960, in preparation for demolition of the urban fortifications of Charleville-Mézières. The Mougin gun was removed at about this time, but the casemate remains. In addition to its own Mougin casemate, the pentagonal detached battery was armed with 10 artillery pieces, served by 150 men. The battery was provided with a wall and ditch, with caponiers and counterscarps for defense. The battery was built in 1878 and was never modernized. The battery's Mougin casemate was entirely demolished after World War II by the French Army. 1914 In 1914 the fort was manned by personnel of the French Fourth Army, under the overall command of General Fernand de Langle de Cary. The fort had been hastily garrisoned after the defeat of French forces in Belgium with two companies of the 45th Territorial Infantry Regiment and 300 territorial artillerymen, under the command of Commandant Georges Lévi Alvarès. These were reserve formations, largely composed of local residents. As French forces retreated and maneuvered in the face of the German attack, the fort was the only French force holding almost all of the front between Rimogne and Flize. Under these circumstances, Georges Lévi Alvarès requested permission from the Fourth Army to evacuate the fort in the event of German attack. However, before receiving a reply, he decided to evacuate after sabotaging the fort's arms. The garrison evacuated on August 25.
Arriving at Boulzicourt, the troops were ordered back to the fort. At the same time, the Germans were preparing a bombardment of the fort. When the garrison returned to the fort on the 26th, the Germans opened fire. The French column retreated. Reaching Launois, the troops were sequestered. Georges Lévi Alvarès, who had remained at the Fort des Ayvelles, committed suicide on the 27th. His body was found by the Germans and was buried nearby, with honors. German forces had bombarded the fort on the 26th and 27th, and waited until the 29th to enter the fort. They stripped the fort of its remaining metals for scrap. While they occupied the area, the Germans used the Fort des Ayvelles as a munitions depot and as a prison. The fort was the execution site for three French civilians, executed by the Germans between October 1915 and January 1916. The fort was reoccupied by France at the close of the war in November 1918. 1940 In 1940 the Fort des Ayvelles was manned by the second battalion of the French 148th Fortress Infantry Regiment under the command of Commandant Marie, which was in turn part of the weak 102nd Fortress Infantry Division. The 102nd DIF was the successor organization to the Defensive Sector of the Ardennes, which had administered a weak section of the Maginot Line fortifications. The sector was composed principally of scattered casemates and blockhouses, as the French command regarded the Ardennes sector as unsuitable for mechanized warfare. On 14 May 1940 the fort was bombarded by German forces, while the first and third battalions of the 148th RIF faced direct German attack. During the night of 15 May the fort was abandoned by French forces. The remaining troops of the 148th RIF nonetheless found themselves encircled and surrendered. Once again, the fort was the scene of civilian executions, with thirteen members of the French Resistance executed there. The most notable victims were les quatre cheminots d'Amagne ("the four railway workers of Amagne"): René Arnould, Georges Boillot, Robert Stadler and Lucien Maisonneuve, executed on 26 June 1944 for sabotage at the Amagne-Lucquy depot. Blockhouses Two blockhouses are near the fort, constructed in the 1930s as part of the Defensive Sector of the Ardennes: the Blockhaus du Fort des Ayvelles Sud and the Blockhaus de Villers-Semeuse. Both were lightly armed. Present status The fort is maintained by the Association du Fort et de la Batterie des Ayvelles, and may be visited. References External links Association du Fort et de la Batterie des Ayvelles Fort des Ayvelles at fortiffsere.fr World War II internment camps in France World War I museums in France World War II museums in France Séré de Rivières system Defensive Sector of the Ardennes
Fort des Ayvelles
Engineering
1,350
67,096,543
https://en.wikipedia.org/wiki/S%C3%B6derala%20vane
The Söderala vane is a weather vane dating from the Viking Age, richly ornamented and made of gilt bronze. It derives its name from the church in Söderala, Sweden, where it was used as a weather vane during the 18th century. It was most probably originally used as a vane on a Viking ship, and shows signs of wear. On stylistic grounds, it has been dated to 1050. It is today part of the collections of the Swedish History Museum. A copy of the vane is in Söderala. History In 1916, the vane was bought from a farmer. At the time it was attached to an iron rod from the 17th century, and the small figure of an animal attached to the top of the vane was kept separately. The farmer who sold the vane also had a receipt from the late 18th century, showing that the vane had at that time been bought from the church where it had been used as a weather vane. The farmer was paid 50 Swedish crowns for the vane, which was subsequently sold to the Swedish History Museum in Stockholm, where it has since remained part of the collections. A copy is in Söderala. The weather vane is older than the church, which is the earliest known location of the vane. On stylistic grounds it has been dated to 1050, and scholars believe it was originally made to be used as a weather vane on a Viking ship. Comparisons with other Viking-age vanes and analysis of mentions of such vanes in the Icelandic sagas indicate that a vane of this size and splendour may have been made for a large ship like a longship. Description The Söderala vane consists of a triangular plate, made of gilt bronze and reinforced by smaller bronze plates and rivets in some places. A small sculpture of an animal, kept separately from the vane when it was bought by the museum, was originally attached to the top end of the bronze plate. The curved edge of the plate is pierced by several small holes, in which some kind of loosely hanging decorations may once have been attached. The plate itself is decorated with depictions of three beasts, interlaced with each other and with other purely decorative elements such as spirals, in a style closely related to that of Swedish burial monuments from the middle of the 11th century. The main decorative element is a depiction of a Norse dragon with wings, its forelegs and neck stretched somewhat like a horse about to rise. Its back is comparatively small. The dragon is very similar to a dragon depicted on a tombstone from the mid-11th century from Sundby Church in Södermanland, Sweden. Another creature lies coiled around the forelegs of the dragon, while the third, legless, is wrapped around the body of the dragon. The vane has traces of continuous use as a weather vane, presumably on a ship, and had been repaired before it was converted for use as a church weather vane. Apart from wear, it has also been somewhat buckled as a result of considerable violence, possibly by being hit by projectiles during some battle. It is not known where the vane was made. It is comparable with other Viking art objects from the same time from Sweden, but there are also details in the vane which show similarities with insular art, particularly Irish art. For instance, the wing and head of the dragon are comparable with similar ornamentation known from the British Isles, and the animal crowning the vane is similar to one depicted on an Irish crosier. It has therefore been speculated that the vane could have been made in present-day Sweden, but also that it may have been made by Norse settlers in the British Isles. 
References Sources cited External links Viking art Viking ships Meteorological instrumentation and equipment 11th-century sculptures Bronze sculptures in Sweden Söderhamn Municipality Collection of the Swedish History Museum
Söderala vane
Technology,Engineering
779
71,220,531
https://en.wikipedia.org/wiki/Maximum%20inner-product%20search
Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning. Formally, for a database of vectors {xi} indexed by a set of labels S in an inner product space, MIPS can be defined as the problem of determining, for a given query q, the label i in S that maximizes the inner product ⟨xi, q⟩. Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS. Under the assumption of all vectors in the set having constant norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem. Like other forms of NNS, MIPS algorithms may be approximate or exact. MIPS is used as part of DeepMind's RETRO algorithm. References See also Nearest neighbor search Search algorithms Computational problems Machine learning
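As an illustrative sketch of the two points above, the obvious linear-time scan and the constant-norm reduction to nearest neighbor search, the following self-contained NumPy example may help. The array shapes, function names, and random test data are assumptions made for the demonstration, not anything specified by the article:

```python
import numpy as np

def mips_bruteforce(query, database):
    """Exact MIPS by the obvious linear-time scan: score every item."""
    return int(np.argmax(database @ query))

def nns_bruteforce(query, database):
    """Exact nearest neighbor search under the Euclidean metric."""
    return int(np.argmin(np.linalg.norm(database - query, axis=1)))

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 64))
db /= np.linalg.norm(db, axis=1, keepdims=True)  # give all vectors constant (unit) norm
q = rng.normal(size=64)

# With constant norms, ||q - x||^2 = ||q||^2 + 1 - 2<q, x>, so maximizing the
# inner product and minimizing the distance select the same item.
assert mips_bruteforce(q, db) == nns_bruteforce(q, db)
```

Practical MIPS systems replace the linear scan with approximate indexes; the equivalence checked above is what allows existing NNS data structures to be reused once the data has been normalised.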
Maximum inner-product search
Mathematics,Engineering
235
2,902,845
https://en.wikipedia.org/wiki/65%20Arietis
65 Arietis is a star in the northern constellation of Aries, located near Tau Arietis. 65 Arietis, abbreviated '65 Ari', is the Flamsteed designation. It has an apparent visual magnitude of 6.07, which, according to the Bortle Dark-Sky Scale, means it is faintly visible to the naked eye when viewed from dark suburban skies. Based upon an annual parallax shift of , it is approximately distant from the Sun. The star is moving closer to the Earth with a heliocentric radial velocity of around −6 km/s. This is an ordinary A-type main sequence star with a stellar classification of A1 V. It has about 2.45 times the mass of the Sun and shines with 40 times the Sun's luminosity. This energy is being radiated into outer space at an effective temperature of 10,300 K, giving it the white-hued glow of an A-type star. It is roughly 23% of the way through its lifetime on the main sequence of core hydrogen burning stars. References External links HR 1027 Image 65 Arietis A-type main-sequence stars Aries (constellation) Durchmusterung objects Arietis, 65 021050 015870 1027
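The distance inferred from the annual parallax shift (the specific values are elided in the text above) follows the standard parallax relation. As a worked form, with an illustrative parallax of 0.01 arcseconds that is not this star's actual measured value:

```latex
% Distance in parsecs from an annual parallax p in arcseconds:
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]},
\qquad p = 0.01'' \;\Rightarrow\; d = 100\ \mathrm{pc} \approx 326\ \mathrm{ly}.
```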
65 Arietis
Astronomy
264
46,736,415
https://en.wikipedia.org/wiki/Caffeic%20aldehyde
Caffeic aldehyde is a phenolic aldehyde present in various parts of a large number of plants, such as the seeds of Phytolacca americana (American pokeweed). See also Caffeic acid Caffeyl alcohol References External links Knapsack Phenylpropanoids Conjugated aldehydes Catechols
Caffeic aldehyde
Chemistry
88
22,408,665
https://en.wikipedia.org/wiki/Covariance%20intersection
Covariance intersection (CI) is an algorithm for combining two or more estimates of state variables in a Kalman filter when the correlation between them is unknown. Formulation Items of information a and b are known and are to be fused into information item c. We know a and b have mean/covariance pairs (a, A) and (b, B), but the cross correlation is not known. The covariance intersection update gives mean and covariance for c as C⁻¹ = ωA⁻¹ + (1 − ω)B⁻¹ and c = C(ωA⁻¹a + (1 − ω)B⁻¹b), where ω is computed to minimize a selected norm, e.g., the trace or the logarithm of the determinant. While it is necessary to solve an optimization problem for higher dimensions, closed-form solutions exist for lower dimensions. Application CI can be used in place of the conventional Kalman update equations to ensure that the resulting estimate is conservative, regardless of the correlation between the two estimates, with covariance strictly non-increasing according to the chosen measure. The use of a fixed measure is necessary for rigor, to ensure that a sequence of updates does not cause the filtered covariance to increase. Advantages According to a recent survey paper, covariance intersection has the following advantages: The identification and computation of the cross covariances are completely avoided. It yields a consistent fused estimate, and thus a non-divergent filter is obtained. The fused estimate is more accurate than each local estimate. It gives a common upper bound on the actual estimation error variances, which is robust with respect to unknown correlations. These advantages have been demonstrated in the case of simultaneous localization and mapping (SLAM) involving over a million map features/beacons. Motivation It is widely believed that unknown correlations exist in a diverse range of multi-sensor fusion problems. Neglecting the effects of unknown correlations can result in severe performance degradation, and even divergence. As such, the problem has attracted and sustained the attention of researchers for decades. However, owing to its intricate, unknown nature, it is not easy to come up with a satisfying scheme to address fusion problems with unknown correlations. Ignoring the correlations (so-called "naive fusion") may lead to filter divergence. To compensate for this kind of divergence, a common sub-optimal approach is to artificially increase the system noise. However, this heuristic requires considerable expertise and compromises the integrity of the Kalman filter framework. References Control theory Nonlinear filters Linear filters Signal estimation Robot control
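A minimal numerical sketch of the update above, with ω chosen to minimize the trace of the fused covariance; the function and variable names, the test data, and the SciPy-based scalar search are illustrative assumptions rather than a reference implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a, A, b, B):
    """Fuse estimates (a, A) and (b, B) whose cross-covariance is unknown.

    C^-1 = w*A^-1 + (1-w)*B^-1
    c    = C (w*A^-1 a + (1-w)*B^-1 b),  with w in [0, 1] minimizing trace(C).
    """
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    fused_cov = lambda w: np.linalg.inv(w * Ai + (1.0 - w) * Bi)
    w = minimize_scalar(lambda w: np.trace(fused_cov(w)),
                        bounds=(0.0, 1.0), method="bounded").x
    C = fused_cov(w)
    c = C @ (w * Ai @ a + (1.0 - w) * Bi @ b)
    return c, C

# Two estimates of a 2-D state that disagree about which axis is uncertain:
a, A = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
b, B = np.array([0.8, 0.4]), np.diag([4.0, 1.0])
c, C = covariance_intersection(a, A, b, B)
```

Whatever the true (unknown) cross-correlation, the returned C upper-bounds the actual error covariance, which is the consistency property described above.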
Covariance intersection
Mathematics,Engineering
501
3,959,734
https://en.wikipedia.org/wiki/Comparison%20of%20parser%20generators
This is a list of notable lexer generators and parser generators for various language classes. Regular languages Regular languages are a category of languages (sometimes termed Chomsky Type 3) which can be matched by a state machine (more specifically, by a deterministic finite automaton or a nondeterministic finite automaton) constructed from a regular expression. In particular, a regular language can match constructs like "A follows B", "Either A or B", "A, followed by zero or more instances of B", but cannot match constructs which require consistency between non-adjacent elements, such as "some instances of A followed by the same number of instances of B", and also cannot express the concept of recursive "nesting" ("every A is eventually followed by a matching B"). A classic example of a problem which a regular grammar cannot handle is the question of whether a given string contains correctly nested parentheses. (This is typically handled by a Chomsky Type 2 grammar, also termed a context-free grammar.) Deterministic context-free languages Context-free languages are a category of languages (sometimes termed Chomsky Type 2) which can be matched by a sequence of replacement rules, each of which essentially maps each non-terminal element to a sequence of terminal elements and/or other non-terminal elements. Grammars of this type can match anything that can be matched by a regular grammar, and furthermore, can handle the concept of recursive "nesting" ("every A is eventually followed by a matching B"), such as the question of whether a given string contains correctly nested parentheses. The rules of context-free grammars are purely local, however, and therefore cannot handle questions that require non-local analysis such as "Does a declaration exist for every variable that is used in a function?". To do so technically would require a more sophisticated grammar, like a Chomsky Type 1 grammar, also termed a context-sensitive grammar. However, parser generators for context-free grammars often support the ability for user-written code to introduce limited amounts of context-sensitivity. (For example, upon encountering a variable declaration, user-written code could save the name and type of the variable into an external data structure, so that these could be checked against later variable references detected by the parser.) The deterministic context-free languages are a proper subset of the context-free languages which can be efficiently parsed by deterministic pushdown automata. Parsing expression grammars, deterministic Boolean grammars This table compares parser generators with parsing expression grammars and deterministic Boolean grammars. General context-free, conjunctive, or Boolean languages This table compares parser generator languages with a general context-free grammar, a conjunctive grammar, or a Boolean grammar. Context-sensitive grammars This table compares parser generators with context-sensitive grammars. See also Compiler-compiler List of program transformation systems Comparison of regular expression engines Notes References External links The Catalog of Compiler Construction Tools Open Source Parser Generators in Java Parser software
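As a concrete illustration of the nested-parentheses distinction drawn above, here is a hand-written sketch (not taken from any generator listed here) contrasting the unbounded counter a context-free recognizer needs with the finitely many states a regular-language matcher can hold:

```python
def balanced(s: str) -> bool:
    """Recognize correctly nested parentheses, the canonical context-free
    (Chomsky Type 2) language. The counter plays the role of a pushdown
    automaton's stack; a finite automaton, having only finitely many
    states, cannot track arbitrarily deep nesting."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:        # a closer with no matching opener
                return False
    return depth == 0            # every opener was eventually closed

assert balanced("(()())") and not balanced("())(")
```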
Comparison of parser generators
Technology
652
14,449,116
https://en.wikipedia.org/wiki/History%20of%20timekeeping%20devices
The history of timekeeping devices dates back to when ancient civilizations first observed astronomical bodies as they moved across the sky. Devices and methods for keeping time have gradually improved through a series of new inventions, starting with measuring time by continuous processes, such as the flow of liquid in water clocks, to mechanical clocks, and eventually repetitive, oscillatory processes, such as the swing of pendulums. Oscillating timekeepers are used in modern timepieces. Sundials and water clocks were first used in ancient Egypt and later by the Babylonians, the Greeks and the Chinese. Incense clocks were being used in China by the 6th century. In the medieval period, Islamic water clocks were unrivalled in their sophistication until the mid-14th century. The hourglass, invented in Europe, was one of the few reliable methods of measuring time at sea. In medieval Europe, purely mechanical clocks were developed after the invention of the bell-striking alarm, used to signal the correct time to ring monastic bells. The weight-driven mechanical clock controlled by the action of a verge and foliot was a synthesis of earlier ideas from European and Islamic science. Mechanical clocks were a major breakthrough; one notably designed and built by Henry de Vick in 1360 established basic clock design for the next 300 years. Minor developments were added, such as the invention of the mainspring in the early 15th century, which allowed small clocks to be built for the first time. The next major improvement in clock building, from the 17th century, was the discovery that clocks could be controlled by harmonic oscillators. Leonardo da Vinci had produced the earliest known drawings of a pendulum in 1493–1494, and in 1582 Galileo Galilei had investigated the regular swing of the pendulum, discovering that frequency was only dependent on length, not weight. The pendulum clock, designed and built by Dutch polymath Christiaan Huygens in 1656, was so much more accurate than other kinds of mechanical timekeepers that few verge and foliot mechanisms have survived. Other innovations in timekeeping during this period include inventions for striking clocks, the repeating clock and the deadbeat escapement. Error factors in early pendulum clocks included temperature variation, a problem tackled during the 18th century by the English clockmakers John Harrison and George Graham. Following the Scilly naval disaster of 1707, after which governments offered a prize to anyone who could discover a way to determine longitude, Harrison built a succession of accurate timepieces of the kind that became known as chronometers. The electric clock, invented in 1840, was used to control the most accurate pendulum clocks until the 1940s, when quartz timers became the basis for the precise measurement of time and frequency. The wristwatch, which had been recognised as a valuable military tool during the Boer War, became popular after World War I, in variations including non-magnetic, battery-driven, and solar powered, with quartz, transistors and plastic parts all introduced. Since the early 2010s, smartphones and smartwatches have become the most common timekeeping devices. The most accurate timekeeping devices in practical use today are atomic clocks, which can be accurate to a few billionths of a second per year and are used to calibrate other clocks and timekeeping instruments. 
Continuous timekeeping devices Ancient civilizations observed astronomical bodies, often the Sun and Moon, to determine time. According to the historian Eric Bruton, Stonehenge is likely to have been the Stone Age equivalent of an astronomical observatory, used for seasonal and annual events such as equinoxes or solstices. As megalithic civilizations left no recorded history, little is known of their timekeeping methods. The Warren Field calendar monument is currently considered to be the oldest lunisolar calendar yet found. Mesoamericans modified their usual vigesimal (base-20) counting system when dealing with calendars to produce a 360-day year. Aboriginal Australians understood the movement of objects in the sky well, and used their knowledge to construct calendars and aid navigation; most Aboriginal cultures had seasons that were well-defined and determined by natural changes throughout the year, including celestial events. Lunar phases were used to mark shorter periods of time; the Yaraldi of South Australia were one of the few peoples recorded as having a way to measure time during the day, which was divided into seven parts using the position of the Sun. All timekeepers before the 13th century relied upon methods that used something that moved continuously. No early method of keeping time changed at a steady rate. Devices and methods for keeping time have improved continuously through a long series of new inventions and ideas. Shadow clocks and sundials The first devices used for measuring the position of the Sun were shadow clocks, which later developed into the sundial. The oldest known sundial dates to the 19th Dynasty, and was discovered in the Valley of the Kings in 2013. Obelisks could indicate whether it was morning or afternoon, as well as the summer and winter solstices. A kind of shadow clock, similar in shape to a bent T-square, was also developed. It measured the passage of time by the shadow cast by its crossbar, and was oriented eastward in the mornings, and turned around at noon, so it could cast its shadow in the opposite direction. A sundial is referred to in the Bible, in 2 Kings 20:9–11, when Hezekiah, king of Judea during the 8th century BC, is recorded as being healed by the prophet Isaiah and asking for a sign that he would recover. A clay tablet from the late Babylonian period describes the lengths of shadows at different times of the year. The Babylonian writer Berossos is credited by the Greeks with the invention of a hemispherical sundial hollowed out of stone; the path of the shadow was divided into 12 parts to mark the time. Greek sundials evolved to become highly sophisticated—Ptolemy's Analemma, written in the 2nd century AD, used an early form of trigonometry to derive the position of the Sun from data such as the hour of day and the geographical latitude. The Romans inherited the sundial from the Greeks. The first sundial in Rome arrived in 264 BC, looted from Catania in Sicily. This sundial introduced the "horologium", the division of the day into hours; before this the Romans had simply split the day into early morning and forenoon (mane and ante meridiem). Still, there was an unexpected astronomical challenge: calibrated for the latitude of Catania, the clock gave the incorrect time for a century. The mistake was noticed only in 164 BC, when the Roman censor came to check and adjusted it for the appropriate latitude. 
According to the German historian of astronomy Ernst Zinner, sundials were developed during the 13th century with scales that showed equal hours. The first based on polar time appeared in Germany; an alternative theory proposes that a Damascus sundial measuring in polar time can be dated to 1372. European treatises on sundial design appeared later. An Egyptian method of determining the time during the night, used from at least 600 BC, was a type of plumb-line called a merkhet. A north–south meridian was created using two merkhets aligned with Polaris, the north pole star. The time was determined by observing particular stars as they crossed the meridian. The Jantar Mantar in Jaipur, built in 1727 by Jai Singh II, includes the Vrihat Samrat Yantra, an 88-foot (27 m) tall sundial. It can tell local time to an accuracy of about two seconds. Water clocks The oldest description of a clepsydra, or water clock, is from the tomb inscription of an early 18th Dynasty Egyptian court official named Amenemhet, who is identified as its inventor. It is assumed that the object described on the inscription is a bowl with markings to indicate the time. The oldest surviving water clock was found in the tomb of pharaoh Amenhotep III (1417–1379 BC). There are no recognised examples in existence of outflowing water clocks from ancient Mesopotamia, but written references have survived. The introduction of the water clock to China, perhaps from Mesopotamia, occurred as far back as the 2nd millennium BC, during the Shang dynasty, and at the latest by the 1st millennium BC. Around 550 AD, Yin Kui (殷蘷) was the first in China to write of the overflow or constant-level tank in his book "Lou ke fa (漏刻法)". Around 610, two Sui dynasty inventors, Geng Xun (耿詢) and Yuwen Kai (宇文愷), created the first balance clepsydra, with standard positions for the steelyard balance. In 721 the mathematician Yi Xing and government official Liang Lingzan regulated the power of the water driving an astronomical clock, dividing the power into unit impulses so that the motion of the planets and stars could be duplicated. In 976, the Song dynasty astronomer Zhang Sixun addressed the problem of the water in clepsydrae freezing in cold weather when he replaced the water with liquid mercury. A water-powered astronomical clock tower was built by the polymath Su Song in 1088, which featured the first known endless power-transmitting chain drive. The Greek philosophers Anaxagoras and Empedocles both referred to water clocks that were used to enforce time limits or measure the passing of time. The Athenian philosopher Plato is supposed to have invented an alarm clock that used lead balls cascading noisily onto a copper platter to wake his students. A problem with most clepsydrae was the variation in the flow of water due to the change in fluid pressure, which was addressed from 100 BC when the clock's water container was given a conical shape. They became more sophisticated when innovations such as gongs and moving mechanisms were included. There is strong evidence that the 1st century BC Tower of the Winds in Athens once had a water clock, and a wind vane, as well as the nine vertical sundials still visible on the outside. In Greek tradition, clepsydrae were used in court, a practise later adopted by the Ancient Romans. Ibn Khalaf al-Muradi in medieval Al-Andalus described a water clock that employed both segmental and epicyclic gearing. 
Islamic water clocks, which used complex gear trains and included arrays of automata, were unrivalled in their sophistication until the mid-14th century. Liquid-driven mechanisms (using heavy floats and a constant-head system) were developed that enabled water clocks to work at a slower rate. It has also been argued that the first known geared clock was invented by the mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes' astronomical clock was also a cuckoo clock, with birds that sang and moved every hour. It has been described as the first carillon clock, as it played music at the same time as a figure of a person blinked his eyes, surprised by the singing birds. The Archimedes clock worked with a system of four weights, counterweights, and strings, regulated by floats in a water container with siphons that maintained the automatic operation of the clock. The principles of this type of clock are described by the mathematician and physicist Hero of Alexandria, who says that some of them work with a chain that turns a gear in the mechanism. The 12th-century Jayrun Water Clock at the Umayyad Mosque in Damascus was constructed by Muhammad al-Sa'ati, and was later described by his son Ridwan ibn al-Sa'ati in his On the Construction of Clocks and their Use (1203). A sophisticated water-powered astronomical clock was described by Al-Jazari in his treatise on machines, written in 1206. This castle clock was about high. In 1235, a water-powered clock that "announced the appointed hours of prayer and the time both by day and by night" stood in the entrance hall of the Mustansiriya Madrasah in Baghdad. Chinese incense clocks Incense clocks were first used in China around the 6th century, mainly for religious purposes, but also for social gatherings or by scholars. Due to their frequent use of Devanagari characters, American sinologist Edward H. Schafer has speculated that incense clocks were invented in India. As incense burns evenly and without a flame, the clocks were safe for indoor use. To mark different hours, differently scented incenses (made from different recipes) were used. The incense sticks used could be straight or spiralled; the spiralled ones were intended for long periods of use, and often hung from the roofs of homes and temples. Some clocks were designed to drop weights at even intervals. Incense seal clocks had a disk etched with one or more grooves, into which incense was placed. The length of the trail of incense, directly related to the size of the seal, was the primary factor in determining how long the clock would last; to burn 12 hours an incense path of around has been estimated. The gradual introduction of metal disks, most likely beginning during the Song dynasty, allowed craftsmen to more easily create seals of different sizes, design and decorate them more aesthetically, and vary the paths of the grooves, to allow for the changing length of the days in the year. As smaller seals became available, incense seal clocks grew in popularity and were often given as gifts. Astrolabes Sophisticated timekeeping astrolabes with geared mechanisms were made in Persia. Examples include those built by the polymath Abū Rayhān Bīrūnī in the 11th century and the astronomer Muhammad ibn Abi Bakr al‐Farisi in 1221. A brass and silver astrolabe (which also acts as a calendar) made in Isfahan by al‐Farisi is the earliest surviving machine with its gears still intact. 
Openings on the back of the astrolabe depict the lunar phases and give the Moon's age; within a zodiacal scale are two concentric rings that show the relative positions of the Sun and the Moon. Muslim astronomers constructed a variety of highly accurate astronomical clocks for use in their mosques and observatories, such as the astrolabic clock by Ibn al-Shatir in the early 14th century. Candle clocks and hourglasses One of the earliest references to a candle clock is in a Chinese poem, written in 520 by You Jianfu, who wrote of the graduated candle being a means of determining time at night. Similar candles were used in Japan until the early 10th century. The invention of the candle clock was attributed by the Anglo-Saxons to Alfred the Great, king of Wessex (r. 871–899), who used six candles marked at regular intervals, each made from 12 pennyweights of wax, and made to a set height and a uniform thickness. The 12th century Muslim inventor Al-Jazari described four different designs for a candle clock in his Book of Knowledge of Ingenious Mechanical Devices. His so-called "scribe" candle clock was invented to mark the passing of 14 hours of equal length: a precisely engineered mechanism caused a candle of specific dimensions to be slowly pushed upwards, which caused an indicator to move along a scale. The hourglass was one of the few reliable methods of measuring time at sea, and it has been speculated that it was used on board ships as far back as the 11th century, when it would have complemented the compass as an aid to navigation. The earliest unambiguous evidence of the use of an hourglass appears in the painting Allegory of Good Government, by the Italian artist Ambrogio Lorenzetti, from 1338. The Portuguese navigator Ferdinand Magellan used 18 hourglasses on each ship during his circumnavigation of the globe in 1522. The hourglass was also used in China, but its history there is unknown; it does not seem to have been used before the mid-16th century, as the hourglass implies the use of glassblowing, then an entirely Western art. From the 15th century onwards, hourglasses were used in a wide range of applications at sea, in churches, in industry, and in cooking; they were the first dependable, reusable, reasonably accurate, and easily constructed time-measurement devices. The hourglass took on symbolic meanings, such as that of death, temperance, opportunity, and Father Time, usually represented as a bearded old man. History of early oscillating devices in timekeepers The English word clock first appeared in Middle English. The origin of the word is not known for certain; it may be a borrowing from French or Dutch, and can perhaps be traced to a post-classical Latin word meaning 'bell'. 7th-century Irish and 9th-century Germanic sources recorded clock as meaning 'bell'. Judaism, Christianity and Islam all had times set aside for prayer, although Christians alone were expected to attend prayers at specific hours of the day and night—what the historian Jo Ellen Barnett describes as "a rigid adherence to repetitive prayers said many times a day". The bell-striking alarm warned the monk on duty to toll the monastic bell. His alarm was a timer that used a form of escapement to ring a small bell. This mechanism was the forerunner of the escapement device found in the mechanical clock. 
13th century The first innovations to improve on the accuracy of the hourglass and the water clock occurred in the 10th century, when attempts were made to slow their rate of flow using friction or the force of gravity. The earliest depiction of a clock powered by a hanging weight is from the Bible of St Louis, an illuminated manuscript made between 1226 and 1234 that shows a clock being slowed by water acting on a wheel. The illustration seems to show that weight-driven clocks were invented in western Europe. A treatise written by Robertus Anglicus in 1271 shows that medieval craftsmen were attempting to design a purely mechanical clock (i.e. one driven only by gravity) during this period. Such clocks were a synthesis of earlier ideas derived from European and Islamic science, such as gearing systems, weight drives, and striking mechanisms. In 1250, the artist Villard de Honnecourt illustrated a device that was a step towards the development of the escapement. Another forerunner of the escapement used an early kind of verge mechanism to operate a knocker that continuously struck a bell. A contemporary illustration showing a weight pulling an axle around, its motion slowed by a system of holes that slowly released water, further suggests that the weight-driven clock was a Western European invention. 14th century The invention of the verge and foliot escapement in 1275 was one of the most important inventions in both the history of the clock and the history of technology. It was the first type of regulator in horology. A verge, or vertical shaft, is forced to rotate by a weight-driven crown wheel, but is stopped from rotating freely by a foliot. The foliot, which cannot vibrate freely, swings back and forth, allowing a wheel to rotate one tooth at a time. Although the verge and foliot was an advancement on previous timekeepers, it was impossible to avoid fluctuations in the beat caused by changes in the applied forces—the earliest mechanical clocks were regularly reset using a sundial. At around the same time as the invention of the escapement, the Florentine poet Dante Alighieri used clock imagery to depict the souls of the blessed in Paradiso, the third part of the Divine Comedy, written in the early part of the 14th century. It may be the first known literary description of a mechanical clock. There are references to house clocks from 1314 onwards; by 1325 the development of the mechanical clock can be assumed to have occurred. Large mechanical clocks were built that were mounted in towers so as to ring the bell directly. The tower clock of Norwich Cathedral, constructed in 1273 (a reference to a payment for a mechanical clock is dated to this year), is the earliest such large clock known; it has not survived. The first clock known to strike regularly on the hour, a clock with a verge and foliot mechanism, is recorded in Milan in 1336. By 1341, clocks driven by weights were familiar enough to be adapted for grain mills, and by 1344 the clock in London's Old St Paul's Cathedral had been replaced by one with an escapement. The foliot was first illustrated by Dondi in 1364, and mentioned by the court historian Jean Froissart in 1369. The most famous example of a timekeeping device during the medieval period was a clock designed and built by the clockmaker Henry de Vick in 1360, which was said to have varied by up to two hours a day. 
For the next 300 years, all the improvements in timekeeping were essentially developments based on the principles of de Vick's clock. Between 1348 and 1364, Giovanni Dondi dell'Orologio, the son of Jacopo Dondi, built a complex astrarium in Florence. During the 14th century, striking clocks appeared with increasing frequency in public spaces, first in Italy, slightly later in France and England—between 1371 and 1380, public clocks were introduced in over 70 European cities. Salisbury Cathedral clock, dating from about 1386, is one of the oldest working clocks in the world, and may be the oldest; it still has most of its original parts. The Wells Cathedral clock, built in 1392, is unique in that it still has its original medieval face. Above the clock are figures which hit the bells, and a set of jousting knights who revolve around a track every 15 minutes. Later developments The invention of the mainspring in the early 15th century—a device first used in locks and for flintlocks in guns—allowed small clocks to be built for the first time. The need for an escapement mechanism that steadily controlled the release of the stored energy led to the development of two devices, the stackfreed (which although invented in the 15th century can be documented no earlier than 1535) and the fusee, which first originated from medieval weapons such as the crossbow. There is a fusee in the earliest surviving spring-driven clock, a chamber clock made for Philip the Good in about 1430. Leonardo da Vinci, who produced the earliest known drawings of a pendulum in 1493–1494, illustrated a fusee in about 1500, a quarter of a century after the coiled spring first appeared. Clock towers in Western Europe in the Middle Ages struck the time. Early clock dials showed hours; a clock with a minutes dial is mentioned in a 1475 manuscript. During the 16th century, timekeepers became more refined and sophisticated, so that by 1577 the Danish astronomer Tycho Brahe was able to obtain the first of four clocks that measured in seconds, and in Nuremberg, the German clockmaker Peter Henlein was paid for making what is thought to have been the earliest example of a watch, made in 1524. By 1500, the use of the foliot in clocks had begun to decline. An early surviving spring-driven clock is a device made by a Bohemian clockmaker in 1525. The first person to suggest travelling with a clock to determine longitude, in 1530, was the Dutch instrument maker Gemma Frisius. The clock would be set to the local time of a starting point whose longitude was known, and the longitude of any other place could be determined by comparing its local time with the clock time. The Ottoman engineer Taqi ad-Din described a weight-driven clock with a verge-and-foliot escapement, a striking train of gears, an alarm, and a representation of the Moon's phases in his book The Brightest Stars for the Construction of Mechanical Clocks, written around 1565. Jesuit missionaries brought the first European clocks to China as gifts. The Italian polymath Galileo Galilei is thought to have first realized that the pendulum could be used as an accurate timekeeper after watching the motion of suspended lamps at Pisa Cathedral. In 1582, he investigated the regular swing of the pendulum, and discovered that its frequency was only dependent on its length. Galileo never constructed a clock based on his discovery, but prior to his death he dictated instructions for building a pendulum clock to his son, Vincenzo. 
Era of precision timekeeping Pendulum clocks The first accurate timekeepers depended on the phenomenon known as harmonic motion, in which the restoring force acting on an object moved away from its equilibrium position—such as a pendulum or an extended spring—acts to return the object to that position, and causes it to oscillate. Harmonic oscillators can be used as accurate timekeepers because the period of oscillation depends only on the physical characteristics of the oscillating system, not on the starting conditions or the amplitude of the motion, so each oscillation takes the same time to complete. The period when clocks were controlled by harmonic oscillators was the most productive era in timekeeping. The first invention of this type was the pendulum clock, which was designed and built by Dutch polymath Christiaan Huygens in 1656. Early versions erred by less than one minute per day, and later ones only by 10 seconds, very accurate for their time. Dials that showed minutes and seconds became common after the increase in accuracy made possible by the pendulum clock. Brahe had used clocks with minutes and seconds to observe stellar positions. The pendulum clock outperformed all other kinds of mechanical timekeepers to such an extent that these were usually refitted with a pendulum—a task that could be done without difficulty—so that few verge escapement devices have survived in their original form. The first pendulum clocks used a verge escapement, which required wide swings of about 100° and so had short, light pendulums. The swing was reduced to around 6° after the invention of the anchor mechanism enabled the use of longer, heavier pendulums with slower beats that had less variation, as they more closely resembled simple harmonic motion, required less power, and caused less friction and wear. The first known anchor escapement clock was built by the English clockmaker William Clement in 1671 for King's College, Cambridge; it is now in the Science Museum, London. The anchor escapement originated with Robert Hooke, although it has been argued that it was invented by Clement or the English clockmaker Joseph Knibb. The Jesuits made major contributions to the development of pendulum clocks in the 17th and 18th centuries, having had an "unusually keen appreciation of the importance of precision". In measuring an accurate one-second pendulum, for example, the Italian astronomer Father Giovanni Battista Riccioli persuaded nine fellow Jesuits "to count nearly 87,000 oscillations in a single day". They served a crucial role in spreading and testing the scientific ideas of the period, and collaborated with Huygens and his contemporaries. Huygens first used a clock to calculate the equation of time (the difference between the apparent solar time and the time given by a clock), publishing his results in 1665. The relationship enabled astronomers to use the stars to measure sidereal time, which provided an accurate method for setting clocks. The equation of time was engraved on sundials so that clocks could be set using the Sun. In 1720, Joseph Williamson claimed to have invented a clock fitted with a cam and differential gearing so that it indicated true solar time. 
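The amplitude-independence that makes harmonic oscillators good timekeepers has a standard worked form for the pendulum; as a sketch (the small-angle approximation is an assumption the prose above glosses over):

```latex
% Small-angle period of a simple pendulum of length \ell under gravity g:
T \approx 2\pi\sqrt{\frac{\ell}{g}}
```

For small swings the period T depends only on the pendulum's length and the local gravitational acceleration, not on the amplitude or the bob's mass, matching Galileo's observation reported above; it also suggests why the anchor escapement's 6° swings kept better time than the verge's 100° swings, since small oscillations more closely approximate the harmonic ideal.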
Other innovations in timekeeping during this period include the rack and snail striking mechanism for striking clocks, invented by the English mechanician Edward Barlow; the repeating clock that chimes the number of hours or minutes, invented in 1676 by either Barlow or the London clockmaker Daniel Quare; and the deadbeat escapement, invented around 1675 by the astronomer Richard Towneley. Paris and Blois were the early centres of clockmaking in France, and French clockmakers such as Julien Le Roy, clockmaker of Versailles, were leaders in case design and ornamental clocks. Le Roy belonged to the fifth generation of a family of clockmakers, and was described by his contemporaries as "the most skillful clockmaker in France, possibly in Europe". He invented a special repeating mechanism which improved the precision of clocks and watches, and a face that could be opened to view the inside clockwork, and he made or supervised over 3,500 watches during his career of almost five decades, which ended with his death in 1759. The competition and scientific rivalry resulting from his discoveries further encouraged researchers to seek new methods of measuring time more accurately. Any inherent errors in early pendulum clocks were smaller than other errors caused by factors such as temperature variation. In 1729 the Yorkshire carpenter and self-taught clockmaker John Harrison invented the gridiron pendulum, which used alternating rods of two metals with different expansion properties, connected so as to maintain the overall length of the pendulum when it is heated or cooled by its surroundings. In 1721 the clockmaker George Graham compensated for temperature variation in an iron pendulum by using a bob made from a glass jar of mercury—a liquid metal at room temperature that expands faster than glass. More accurate versions of this innovation contained the mercury in thinner iron jars to make them more responsive. This type of temperature compensating pendulum was improved still further when the mercury was contained within the rod itself, which allowed the two metals to be thermally coupled more tightly. In 1895, the invention of invar, an alloy made from iron and nickel that expands very little, largely eliminated the need for earlier inventions designed to compensate for the variation in temperature. Between 1794 and 1795, in the aftermath of the French Revolution, the French government mandated the use of decimal time, with a day divided into 10 hours of 100 minutes each. A clock in the Palais des Tuileries kept decimal time as late as 1801. Marine chronometer After the Scilly naval disaster of 1707, in which four ships were wrecked as a result of navigational mistakes, the British government offered a prize of £20,000, equivalent to millions of pounds today, for anyone who could determine the longitude to within at a latitude just north of the equator. The position of a ship at sea could be determined to within if a navigator could refer to a clock that lost or gained less than about six seconds per day. Proposals were examined by a newly created Board of Longitude. Among the many people who attempted to claim the prize was the Yorkshire clockmaker Jeremy Thacker, who first used the term chronometer in a pamphlet published in 1714. Huygens built the first sea clock, designed to remain horizontal aboard a moving ship, but it stopped working if the ship moved suddenly. In 1715, at the age of 22, John Harrison used his carpentry skills to construct a wooden eight-day clock. 
His clocks had innovations that included the use of wooden parts to remove the need for additional lubrication (and cleaning), rollers to reduce friction, a new kind of escapement, and the use of two different metals to reduce the problem of expansion caused by temperature variation. He travelled to London to seek assistance from the Board of Longitude in making a sea clock. He was sent to visit Graham, who assisted Harrison by arranging to finance his work to build a clock. Some 30 years after the Scilly disaster, his device, now named "H1", was built, and in 1736 it was tested at sea. Harrison then went on to design and make two other sea clocks, "H2" (completed around 1739) and "H3", both of which were ready by 1755. Harrison made two watches, "H4" and "H5". Eric Bruton, in his book The History of Clocks and Watches, has described H4 as "probably the most remarkable timekeeper ever made". After the completion of its sea trials during the winter of 1761–1762 it was found that it was three times more accurate than was needed for Harrison to be awarded the Longitude prize. Electric clocks In 1815, the prolific English inventor Francis Ronalds produced the forerunner of the electric clock, the electrostatic clock. It was powered by dry piles, a high-voltage battery with an extremely long life, but with the disadvantage that its electrical properties varied with the air temperature and humidity. He experimented with ways of regulating the electricity, and his improved devices proved to be more reliable. In 1840 the Scottish clock and instrument maker Alexander Bain first used electricity to sustain the motion of a pendulum clock, and so can be credited with the invention of the electric clock. On 11 January 1841, Bain and the chronometer maker John Barwise took out a patent describing a clock with an electromagnetic pendulum. The English scientist Charles Wheatstone, whom Bain met in London to discuss his ideas for an electric clock, produced his own version of the clock in November 1840, but Bain won a legal battle to establish himself as the inventor. In 1857, the French physicist Jules Lissajous showed how an electric current can be used to vibrate a tuning fork indefinitely, and was probably the first to use the invention as a method for accurately measuring frequency. The piezoelectric properties of crystalline quartz were discovered by the French physicist brothers Jacques and Pierre Curie in 1880. The most accurate pendulum clocks were controlled electrically. The Shortt–Synchronome clock, an electrically driven pendulum clock designed in 1921, was the first clock to be a more accurate timekeeper than the Earth itself. A succession of innovations and discoveries led to the invention of the modern quartz timer. The vacuum tube oscillator was invented in 1912. An electrical oscillator was first used to sustain the motion of a tuning fork by the British physicist William Eccles in 1919; his achievement removed much of the damping associated with mechanical devices and maximised the stability of the vibration's frequency. The first quartz crystal oscillator was built by the American engineer Walter G. Cady in 1921, and in October 1927 the first quartz clock was described by Joseph Horton and Warren Marrison at Bell Telephone Laboratories. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes, limited their practical use elsewhere. 
In 1932, a quartz clock able to measure small weekly variations in the rotation rate of the Earth was developed. The inherent physical and chemical stability and accuracy of quartz clocks resulted in their subsequent proliferation, and since the 1940s they have formed the basis for precision measurements of time and frequency worldwide. Development of the watch The first wristwatches were made in the 16th century. Elizabeth I of England made an inventory in 1572 of the watches she had acquired, all of which were considered to be part of her jewellery collection. The first pocketwatches were inaccurate, as their size precluded them from having sufficiently well-made moving parts. Unornamented watches began to appear in 1625. Dials that showed minutes and seconds became common after the increase in accuracy made possible by the balance spring (or hairspring). Invented separately in 1675 by Huygens and Hooke, it enabled the oscillations of the balance wheel to have a fixed frequency. The invention resulted in a great advance in the accuracy of the mechanical watch, from around half an hour to within a few minutes per day. Some dispute remains as to whether the balance spring was first invented by Huygens or by Hooke; both scientists claimed to have come up with the idea of the balance spring first. Huygens' design for the balance spring is the type used in virtually all watches up to the present day. Thomas Tompion was one of the first clockmakers to recognise the potential of the balance spring and use it successfully in his pocket watches; the improved accuracy enabled watches to keep time well enough for a second hand to be added to the face, a development that occurred during the 1690s. The concentric minute hand was an earlier invention, but a mechanism was devised by Quare that enabled the hands to be actuated together. Nicolas Fatio de Duillier, a Swiss natural philosopher, is credited with the design of the first jewel bearings in watches in 1704. Other notable 18th-century English horologists include John Arnold and Thomas Earnshaw, who devoted their careers to constructing high-quality chronometers and so-called 'deck watches', smaller versions of the chronometer that could be kept in a pocket. Military use of the watch Watches were worn during the Franco-Prussian War (1870–1871), and by the time of the Boer War (1899–1902), watches had been recognised as a valuable tool. Early models were essentially standard pocket watches fitted to a leather strap, but, by the early 20th century, manufacturers began producing purpose-built wristwatches. In 1904, Alberto Santos-Dumont, an early aviator, asked his friend the French watchmaker Louis Cartier to design a watch that could be useful during his flights. During World War I, wristwatches were used by artillery officers. The so-called trench watches, or 'wristlets', were practical, as they freed up one hand that would normally be used to operate a pocket watch, and they became standard equipment. The demands of trench warfare meant that soldiers needed to protect the glass of their watches, and a guard in the form of a hinged cage was sometimes used. The guard was designed to allow the numerals to be read easily, but it obscured the hands—a problem that was solved after the introduction of shatter-resistant Plexiglass in the 1930s. Prior to the advent of its military use, the wristwatch was typically only worn by women, but during World War I it became a symbol of masculinity and bravado. 
Modern watches Fob watches were starting to be replaced by wristwatches at the turn of the 20th century. The Swiss, who were neutral throughout World War I, produced wristwatches for both sides of the conflict. The introduction of the tank influenced the design of the Cartier Tank watch, and the design of watches during the 1920s was influenced by the Art Deco style. The automatic watch, first introduced with limited success in the 18th century, was reintroduced in the 1920s by the English watchmaker John Harwood. After he went bankrupt in 1929, restrictions on automatic watches were lifted and companies such as Rolex were able to produce them. In 1930, Tissot produced the first ever non-magnetic wristwatch. The first battery-driven watches were developed in the 1950s. High-quality watches were produced by firms such as Patek Philippe, an example being the Patek Philippe ref. 1518, introduced in 1941, possibly the most complicated wristwatch ever made in stainless steel, which fetched a world record price in 2016 when it was sold at auction for $11,136,642. The manual-winding Speedmaster Professional, or "Moonwatch", was worn during the first United States spacewalk as part of NASA's Gemini 4 mission, and was the first watch worn by an astronaut walking on the Moon during the Apollo 11 mission. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. During the 1970s, the introduction of digital watches, made using transistors and plastic parts, enabled companies to reduce their workforces, and many of the firms that maintained more complicated metalworking techniques went bankrupt. Smartwatches, essentially wearable computers in the form of watches, were introduced to the market in the early 21st century. Atomic clocks Atomic clocks are the most accurate timekeeping devices in practical use today. Accurate to within a few seconds over many thousands of years, they are used to calibrate other clocks and timekeeping instruments. The U.S. National Bureau of Standards (NBS, now the National Institute of Standards and Technology (NIST)) changed the basis of the time standard of the United States from quartz clocks to atomic clocks in the 1960s. The idea of using atomic transitions to measure time was first suggested by the British scientist Lord Kelvin in 1879, although it was only in the 1930s, with the development of magnetic resonance, that there was a practical method for measuring time in this way. A prototype ammonia maser device was built in 1948 at NIST. Although less accurate than existing quartz clocks, it served to prove the concept of an atomic clock. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by the English physicist Louis Essen in 1955 at the National Physical Laboratory in London. It was calibrated by the use of the astronomical time scale ephemeris time (ET). In 1967 the International System of Units (SI) standardized its unit of time, the second, on the properties of caesium. The SI defined the second as 9,192,631,770 cycles of the radiation corresponding to the transition between two hyperfine energy levels of the ground state of the 133Cs atom. The caesium atomic clock maintained by NIST is accurate to 30 billionths of a second per year. Atomic clocks have employed other elements, such as hydrogen and rubidium vapor, offering greater stability (in the case of hydrogen clocks) and smaller size, lower power consumption, and thus lower cost (in the case of rubidium clocks). 
Recent advances in clock technology have largely been based on trapped-ion platforms, with the record for the lowest systematic uncertainty being traded between aluminum ion clocks and strontium optical lattice clocks. Next-generation clocks will likely be based on nuclear transitions in the 229mTh nucleus, as nuclei are shielded from external effects by the accompanying electron cloud, and the transition frequency is much higher than that of optical and ion clocks, allowing for much lower systematic uncertainty in the clock frequency. See also Coordinated Universal Time (UTC) Explanatory notes Citations References External links Relativity Science Calculator – Philosophic Question: are clocks and time separable? Ancient Discoveries Islamic Science Part 4 clip from History Repeating of Islamic time-keeping inventions (YouTube). Timekeeping devices Timekeeping
History of timekeeping devices
Physics,Technology
8,655
49,720,775
https://en.wikipedia.org/wiki/Pseudomonas%20phage%20F116%20holin
The Pseudomonas phage F116 holin is an uncharacterized holin homologous to one in Neisseria gonorrhoeae that has been characterized. This protein is the prototype of the Pseudomonas phage F116 holin (F116 Holin) family (TC# 1.E.25), which is a member of the Holin Superfamily II. Bioinformatic analysis of the genome sequence of N. gonorrhoeae revealed the presence of nine probable prophage islands. The genomic sequence of strain FA1090 identified five genomic regions (NgoPhi1–5) that are related to dsDNA lysogenic phage. The DNA sequences from NgoPhi1, NgoPhi2 and NgoPhi3 contained regions of identity. A region of NgoPhi2 showed high similarity with the Pseudomonas aeruginosa generalized transducing phage F116. NgoPhi1 and NgoPhi2 encode functionally active phages. The holin gene of NgoPhi1 (identical to that encoded by NgoPhi2), when expressed in E. coli, could substitute for the phage lambda S gene. See also Holin Lysin Transporter Classification Database References Holins Protein families
Pseudomonas phage F116 holin
Biology
256
27,415,385
https://en.wikipedia.org/wiki/Materials%20%28journal%29
Materials is a semi-monthly peer-reviewed open access scientific journal covering materials science and engineering. It was established in 2008 and is published by MDPI. The editor-in-chief is Maryam Tabrizian (McGill University). The journal publishes reviews, regular research papers, short communications, and book reviews. There are currently hundreds of calls for submissions to special issues, a fact that has led to serious concerns. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.748. References External links Materials science journals Academic journals established in 2008 Monthly journals English-language journals MDPI academic journals Creative Commons Attribution-licensed journals
Materials (journal)
Materials_science,Engineering
150
9,078,833
https://en.wikipedia.org/wiki/Chain%20sequence
In the analytic theory of continued fractions, a chain sequence is an infinite sequence {an} of non-negative real numbers chained together with another sequence {gn} of non-negative real numbers by the equations $a_1 = (1-g_0)g_1$, $a_2 = (1-g_1)g_2$, and in general $a_n = (1-g_{n-1})g_n$, where either (a) 0 ≤ gn < 1, or (b) 0 < gn ≤ 1. Chain sequences arise in the study of the convergence problem – both in connection with the parabola theorem, and also as part of the theory of positive definite continued fractions. The infinite continued fraction of Worpitzky's theorem contains a chain sequence. A closely related theorem shows that $f(z) = \cfrac{a_1 z}{1 + \cfrac{a_2 z}{1 + \cfrac{a_3 z}{1 + \ddots}}}$ converges uniformly on the closed unit disk |z| ≤ 1 if the coefficients {an} are a chain sequence. An example The sequence {1/4, 1/4, 1/4, ...} appears as a limiting case in the statement of Worpitzky's theorem. Since this sequence is generated by setting g0 = g1 = g2 = ... = 1/2, it is clearly a chain sequence. This sequence has two important properties. Since f(x) = x − x2 is a maximum when x = 1/2, this example is the "biggest" chain sequence that can be generated with a single generating element; or, more precisely, if {gn} = {x}, and x < 1/2, the resulting sequence {an} will be an endless repetition of a real number y that is less than 1/4. The choice gn = 1/2 is not the only set of generators for this particular chain sequence. Notice that setting $g_n = \frac{n}{2(n+1)}$ generates the same unending sequence {1/4, 1/4, 1/4, ...}. Notes References H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948; reprinted by Chelsea Publishing Company, 1973. Continued fractions
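A quick numerical check of the two generator choices discussed above; this is an illustrative sketch (the defining relation $a_n = (1-g_{n-1})g_n$ is the reconstruction used in this article):

```python
# Verifying that both generator choices yield the constant chain
# sequence a_n = 1/4, using exact rational arithmetic.
from fractions import Fraction

def a_from_g(g, terms=6):
    return [(1 - g(n - 1)) * g(n) for n in range(1, terms + 1)]

def g_half(n):
    return Fraction(1, 2)

def g_alt(n):
    return Fraction(n, 2 * (n + 1))   # 0, 1/4, 1/3, 3/8, 2/5, ...

print(a_from_g(g_half))  # [Fraction(1, 4), Fraction(1, 4), ...]
print(a_from_g(g_alt))   # the same constant sequence
```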
Chain sequence
Mathematics
366
458,698
https://en.wikipedia.org/wiki/Weak%20operator%20topology
In functional analysis, the weak operator topology, often abbreviated WOT, is the weakest topology on the set of bounded operators on a Hilbert space $H$, such that the functional sending an operator $T$ to the complex number $\langle Tx, y \rangle$ is continuous for any vectors $x$ and $y$ in the Hilbert space. Explicitly, for an operator $T$ there is a base of neighborhoods of the following type: choose a finite number of vectors $x_i$, continuous functionals $f_i$, and positive real constants $\varepsilon_i$ indexed by the same finite set $I$. An operator $S$ lies in the neighborhood if and only if $|f_i(Tx_i - Sx_i)| < \varepsilon_i$ for all $i \in I$. Equivalently, a net $T_\alpha$ of bounded operators converges to $T$ in WOT if for all $x$ and $y$, the net $\langle T_\alpha x, y \rangle$ converges to $\langle Tx, y \rangle$. Relationship with other topologies on B(H) The WOT is the weakest among all common topologies on $B(H)$, the bounded operators on a Hilbert space $H$. Strong operator topology The strong operator topology, or SOT, on $B(H)$ is the topology of pointwise convergence. Because the inner product is a continuous function, the SOT is stronger than WOT. The following example shows that this inclusion is strict. Let $H = \ell^2(\mathbb{N})$ and consider the sequence $\{T^n\}$ of powers of the unilateral right shift $T$. An application of Cauchy-Schwarz shows that $T^n \to 0$ in WOT. But clearly $T^n$ does not converge to $0$ in SOT. The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the strong operator topology are precisely those that are continuous in the WOT (actually, the WOT is the weakest operator topology that leaves continuous all strongly continuous linear functionals on the set of bounded operators on the Hilbert space H). Because of this fact, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT. It follows from the polarization identity that a net $\{T_\alpha\}$ converges to $0$ in SOT if and only if $\{T_\alpha^* T_\alpha\}$ converges to $0$ in WOT. Weak-star operator topology The predual of B(H) is the trace class operators C1(H), and it generates the w*-topology on B(H), called the weak-star operator topology or σ-weak topology. The weak-operator and σ-weak topologies agree on norm-bounded sets in B(H). A net {Tα} ⊂ B(H) converges to T in WOT if and only if Tr(TαF) converges to Tr(TF) for every finite-rank operator F. Since every finite-rank operator is trace-class, this implies that WOT is weaker than the σ-weak topology. To see why the claim is true, recall that every finite-rank operator F is a finite sum $F = \sum_i \lambda_i u_i v_i^*$, where $u_i v_i^*$ denotes the rank-one operator $x \mapsto \langle x, v_i \rangle u_i$. So {Tα} converges to T in WOT means $\operatorname{Tr}(T_\alpha F) = \sum_i \lambda_i \langle T_\alpha u_i, v_i \rangle \to \sum_i \lambda_i \langle T u_i, v_i \rangle = \operatorname{Tr}(TF)$. Extending slightly, one can say that the weak-operator and σ-weak topologies agree on norm-bounded sets in B(H): every trace-class operator is of the form $S = \sum_i \lambda_i u_i v_i^*$, where the series $\sum_i |\lambda_i|$ converges. Suppose $\sup_\alpha \|T_\alpha\| < \infty$ and $T_\alpha \to T$ in WOT. For every trace-class $S$, $\operatorname{Tr}(T_\alpha S) \to \operatorname{Tr}(TS)$, by invoking, for instance, the dominated convergence theorem. Therefore every norm-bounded closed set is compact in WOT, by the Banach–Alaoglu theorem. Other properties The adjoint operation T → T*, as an immediate consequence of its definition, is continuous in WOT. Multiplication is not jointly continuous in WOT: again let $T$ be the unilateral shift. Appealing to Cauchy-Schwarz, one has that both $T^n$ and $T^{*n}$ converge to $0$ in WOT. But $T^{*n} T^n$ is the identity operator for all $n$. (Because WOT coincides with the σ-weak topology on bounded sets, multiplication is not jointly continuous in the σ-weak topology.) However, a weaker claim can be made: multiplication is separately continuous in WOT. If a net Ti → T in WOT, then STi → ST and TiS → TS in WOT. SOT and WOT on B(X,Y) when X and Y are normed spaces We can extend the definitions of SOT and WOT to the more general setting where $X$ and $Y$ are normed spaces and $B(X,Y)$ is the space of bounded linear operators of the form $T : X \to Y$.
In this case, each pair $x \in X$ and $y^* \in Y^*$ defines a seminorm $\|\cdot\|_{x, y^*}$ on $B(X,Y)$ via the rule $\|T\|_{x, y^*} = |y^*(Tx)|$. The resulting family of seminorms generates the weak operator topology on $B(X,Y)$. Equivalently, the WOT on $B(X,Y)$ is formed by taking for basic open neighborhoods those sets of the form $N(T, F, \Lambda, \varepsilon) = \{ S \in B(X,Y) : |y^*((S - T)x)| < \varepsilon \ \text{for all}\ x \in F,\ y^* \in \Lambda \}$, where $F \subseteq X$ is a finite set, $\Lambda \subseteq Y^*$ is also a finite set, and $\varepsilon > 0$. The space $B(X,Y)$ is a locally convex topological vector space when endowed with the WOT. The strong operator topology on $B(X,Y)$ is generated by the family of seminorms $\|\cdot\|_x$, $x \in X$, via the rules $\|T\|_x = \|Tx\|$. Thus, a topological base for the SOT is given by open neighborhoods of the form $N(T, F, \varepsilon) = \{ S \in B(X,Y) : \|(S - T)x\| < \varepsilon \ \text{for all}\ x \in F \}$, where as before $F \subseteq X$ is a finite set, and $\varepsilon > 0$. Relationships between different topologies on B(X,Y) The different terminology for the various topologies on $B(X,Y)$ can sometimes be confusing. For instance, "strong convergence" for vectors in a normed space sometimes refers to norm-convergence, which is very often distinct from (and stronger than) SOT-convergence when the normed space in question is $B(X,Y)$. The weak topology on a normed space $X$ is the coarsest topology that makes the linear functionals in $X^*$ continuous; when we take $B(X,Y)$ in place of $X$, the weak topology can be very different from the weak operator topology. And while the WOT is formally weaker than the SOT, the SOT is weaker than the operator norm topology. In general, the following inclusions hold: $\{\text{WOT-open sets}\} \subseteq \{\text{SOT-open sets}\} \subseteq \{\text{norm-open sets}\}$ in $B(X,Y)$, and these inclusions may or may not be strict depending on the choices of $X$ and $Y$. The WOT on $B(X,Y)$ is a formally weaker topology than the SOT, but they nevertheless share some important properties. For example, $(B(X,Y), \text{SOT})^* = (B(X,Y), \text{WOT})^*$. Consequently, if $S \subseteq B(X,Y)$ is convex then $\overline{S}^{\text{SOT}} = \overline{S}^{\text{WOT}}$; in other words, SOT-closure and WOT-closure coincide for convex sets. References See also Topological vector spaces Topology of function spaces
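As a numerical aside, not part of the article itself, the right-shift example from the Strong operator topology subsection can be mimicked in a finite-dimensional truncation; the dimension, test vectors, and library calls below are illustrative assumptions:

```python
# Truncated right shift on R^N as a stand-in for the shift on l^2(N):
# <T^n x, y> decays (WOT-style convergence to 0), while ||T^n x|| stays 1,
# so T^n x does not tend to 0 in norm (the SOT failure).
import numpy as np

N = 512
T = np.eye(N, k=-1)              # right shift: T e_i = e_{i+1}
x = np.zeros(N); x[0] = 1.0      # x = e_0, so T^n x = e_n for n < N
y = 1.0 / np.arange(1, N + 1)    # a square-summable test vector

for n in (1, 10, 100):
    v = np.linalg.matrix_power(T, n) @ x
    print(n, abs(y @ v), np.linalg.norm(v))  # inner product -> 0, norm = 1
```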
Weak operator topology
Mathematics
1,186
73,805,306
https://en.wikipedia.org/wiki/Exobiology%20Extant%20Life%20Surveyor
Exobiology Extant Life Surveyor (also called EELS) is a snakebot vehicle originally designed to explore the surface and the oceans of Enceladus, a moon of Saturn. JPL has also referred to the possibility of using EELS to explore locations such as lunar lava tubes, Mars's polar caps, and Earth's ice sheets. It uses multiple segments containing actuation, propulsion, power, and communication electronics. The segments use corkscrews to move across the ground. These corkscrews can act as propellers while underwater. As of 2023, the current version (EELS 1.0) weighs approximately 100 kg (220 lb) and is about 4 m (13 ft), or 10 segments, long. EELS has no scientific instruments; it uses stereo cameras and lidar, and a tether for power and communications. References Robotic sensing Lidar Planetary rovers Jet Propulsion Laboratory
Exobiology Extant Life Surveyor
Astronomy
167
2,472,622
https://en.wikipedia.org/wiki/Colored%20matroid
In mathematics, a colored matroid is a matroid whose elements are labeled from a set of colors, which can be any set that suits the purpose, for instance the set of the first n positive integers, or the sign set {+, −}. The interest in colored matroids is through their invariants, especially the colored Tutte polynomial, which generalizes the Tutte polynomial of a signed graph. There has also been study of optimization problems on matroids where the objective function of the optimization depends on the set of colors chosen as part of a matroid basis. See also Bipartite matroid Rota's basis conjecture References Matroid theory
Colored matroid
Mathematics
136
22,313,929
https://en.wikipedia.org/wiki/Coke%20strength%20after%20reaction
Coke Strength after Reaction (CSR) refers to the "hot" strength of coke, generally used as a quality reference under simulated reaction conditions of an industrial blast furnace. The test is based on a procedure developed by Nippon Steel Corp in the 1970s as an attempt to get an indication of coke performance, and it has been used widely throughout the world since then. It is one of the major considerations when blending coking coal for export sale. Test procedure The coke sample is first tested for its reactivity (CRI), then the same sample is tested for strength (CSR). Reactivity test A 200 g sample of coke in the 19–21 mm particle range is heated at 1,100 °C under 1 atmosphere pressure of carbon dioxide for 2 hours. Next, the coke is cooled under nitrogen and the weight loss resulting from the reaction is measured. The percentage weight loss is known as the reactivity (CRI). Strength test The reacted coke is placed in an I-type drum (no lifters) and subjected to 600 revolutions in 30 minutes. The percentage of the material removed from the drum that is ≥ 10 mm is known as the coke strength after reaction (CSR). References Solid fuels
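As a worked illustration of the two percentages defined above, with hypothetical weights (only the 200 g charge is taken from the procedure itself):

```python
# CRI and CSR from test weights; all figures except the 200 g charge
# are invented for illustration.
charge_g = 200.0            # 19-21 mm coke charged into the reactor
after_reaction_g = 150.0    # weight remaining after 2 h in CO2 at 1,100 C
plus_10mm_g = 97.5          # >= 10 mm fraction left after 600 drum revolutions

cri = 100.0 * (charge_g - after_reaction_g) / charge_g  # reactivity, %
csr = 100.0 * plus_10mm_g / after_reaction_g            # hot strength, %
print(f"CRI = {cri:.1f} %, CSR = {csr:.1f} %")          # CRI = 25.0 %, CSR = 65.0 %
```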
Coke strength after reaction
Physics
235
5,056,451
https://en.wikipedia.org/wiki/Chi%20Capricorni
Chi Capricorni, Latinized from χ Capricorni, is a star in the southern constellation of Capricornus. Based upon an annual parallax shift of 18.14 mas as seen from the Earth, the star is located about 180 light years from the Sun. It is visible to the naked eye with an apparent visual magnitude of +5.28. Properties This is an A-type main sequence star with a stellar classification of A0 V. It is a candidate Lambda Boötis star, showing a chemically peculiar spectrum with a low abundance of most elements heavier than oxygen. The star is around 251 million years old and is spinning rapidly with a projected rotational velocity of 212 km/s. It has 2.78 times the mass of the Sun and is radiating 21 times the solar luminosity from its photosphere at an effective temperature of 10,878 K. At an angular separation of 1,199 arcseconds lies a faint proper motion companion designated HIP 99550. At the estimated distance of Chi Capricorni, this is equal to a projected separation of 28,300 AU. It has a visual magnitude of 10.94 and a classification of M0 Vk, indicating this is a red dwarf star. Chinese Name In Chinese, the asterism known as the Twelve States represents twelve ancient states of the Spring and Autumn period and the Warring States period; it consists of χ Capricorni, φ Capricorni, ι Capricorni, 38 Capricorni, 35 Capricorni, 36 Capricorni, θ Capricorni, 30 Capricorni, 33 Capricorni, ζ Capricorni, 19 Capricorni, 26 Capricorni, 27 Capricorni, 20 Capricorni, η Capricorni and 21 Capricorni. Consequently, the Chinese name for χ Capricorni itself represents the state Qi, together with 112 Herculis in the Left Wall of Heavenly Market Enclosure (asterism). R. H. Allen was of the opinion that χ Capricorni, together with φ Capricorni, represented the state Wei (魏). References A-type main-sequence stars Capricorni, Chi Capricornus Durchmusterung objects Capricorni, 25 201184 104365 8087
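A quick check of the distance quoted above from the parallax; this is a sketch using the standard parallax-to-distance relation, not part of the article:

```python
# Distance from parallax: d [parsecs] = 1000 / parallax [milliarcseconds].
parallax_mas = 18.14
d_pc = 1000.0 / parallax_mas   # ~55.1 parsecs
d_ly = d_pc * 3.26156          # ~180 light years, matching the article
print(round(d_pc, 1), round(d_ly))
```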
Chi Capricorni
Astronomy
500
618,241
https://en.wikipedia.org/wiki/Absorber
In high energy physics experiments, an absorber is a block of material used to absorb some of the energy of an incident particle. Absorbers can be made of a variety of materials, depending on the purpose; lead, tungsten and liquid hydrogen are common choices. Most absorbers are used as part of a particle detector; particle accelerators use absorbers to reduce radiation damage to accelerator components. Other uses of the same word Absorbers are used in ionization cooling, as in the International Muon Ionization Cooling Experiment. In solar power, a high degree of efficiency is achieved by using black absorbers, which reflect much less of the incoming energy. In sunscreen formulations, ingredients which absorb UVA/UVB rays, such as avobenzone and octyl methoxycinnamate, are known as absorbers. They are contrasted with physical "blockers" of UV radiation such as titanium dioxide and zinc oxide. References Particle detectors Accelerator physics
Absorber
Physics,Technology,Engineering
200
40,280,869
https://en.wikipedia.org/wiki/Mirabamide
Mirabamides are sea sponge isolates that inhibit HIV-1 fusion. Variants A to D are known from Siliquariaspongia mirabilis, and E through H are derived from Stelletta clavosa. Mirabamides have a macrocyclic region closed through an ester bond between the C-terminus and a β-hydroxyl group, and are terminated with a polyketide moiety or a simpler branched aliphatic acid. Mirabamide G is a fusion of several amino acid and related residues: 2,3-diaminobutanoic acid, 3-hydroxyleucine, N-methylthreonine, 2,3-dihydroxy-2,6,8-trimethyldeca-(4Z,6E)-dienoic acid, 3-methoxyalanine, β-methoxytyrosine, and 3,4-dimethylglutamine. References Depsipeptides attribution: Contains text from the CC BY 4.0 licensed "Chemical Reactivity Properties, pKa Values, AGEs Inhibitor Abilities and Bioactivity Scores of the Mirabamides A–H Peptides of Marine Origin Studied by Means of Conceptual DFT" by Lazcano-Perez et al.
Mirabamide
Chemistry
271
52,124
https://en.wikipedia.org/wiki/VHS
VHS (Video Home System) is a standard for consumer-level analog video recording on tape cassettes, introduced in 1976 by the Victor Company of Japan (JVC). It was the dominant home video format throughout the tape media period in the 1980s and 1990s. Magnetic tape video recording was adopted by the television industry in the 1950s in the form of the first commercialized video tape recorders (VTRs), but the devices were expensive and used only in professional environments. In the 1970s, videotape technology became affordable for home use, and widespread adoption of videocassette recorders (VCRs) began; VHS became the most popular media format for VCRs after winning the "format war" against Betamax (backed by Sony) and a number of other competing tape standards. The cassettes themselves use a 0.5-inch magnetic tape between two spools and typically offer a capacity of at least two hours. The popularity of VHS was intertwined with the rise of the video rental market, when films were released on pre-recorded videotapes for home viewing. Newer improved tape formats such as S-VHS were later developed, as well as the earliest optical disc format, LaserDisc; the lack of global adoption of these formats increased VHS's lifetime, which eventually peaked and started to decline in the late 1990s after the introduction of DVD, a digital optical disc format. VHS rentals were surpassed by DVD rentals in the United States in 2003, and DVD eventually became the preferred low-end method of movie distribution. For home recording purposes, VHS and VCRs were surpassed by (typically hard disk–based) digital video recorders (DVR) in the 2000s. History Before VHS In 1956, after several attempts by other companies, the first commercially successful VTR, the Ampex VRX-1000, was introduced by Ampex Corporation. At a price of US$50,000 in 1956 and US$300 for a 90-minute reel of tape, it was intended only for the professional market. Kenjiro Takayanagi, a television broadcasting pioneer then working for JVC as its vice president, saw the need for his company to produce VTRs for the Japanese market at a more affordable price. In 1959, JVC developed a two-head video tape recorder and, by 1960, a color version for professional broadcasting. In 1964, JVC released the DV220, which would be the company's standard VTR until the mid-1970s. In 1969, JVC collaborated with Sony Corporation and Matsushita Electric (Matsushita was the majority stockholder of JVC until 2011) to build a video recording standard for the Japanese consumer. The effort produced the U-matic format in 1971, which was the first cassette format to become a unified standard for different companies. It was preceded by the reel-to-reel ½-inch EIAJ format. The U-matic format was successful in businesses and some broadcast television applications, such as electronic news-gathering, and was produced by all three companies until the late 1980s, but because of cost and limited recording time, very few of the machines were sold for home use. Therefore, soon after the U-Matic release, all three companies started working on new consumer-grade video recording formats of their own. Sony started working on Betamax, Matsushita started working on VX, and JVC released the CR-6060 in 1975, based on the U-matic format. VHS development In 1971, JVC engineers Yuma Shiraishi and Shizuo Takano put together a team to develop a VTR for consumers.
By the end of 1971, they created an internal diagram, "VHS Development Matrix", which established twelve objectives for JVC's new VTR: The system must be compatible with any ordinary television set. Picture quality must be similar to a normal air broadcast. The tape must have at least a two-hour recording capacity. Tapes must be interchangeable between machines. The overall system should be versatile, meaning it can be scaled and expanded, such as connecting a video camera, or dubbing between two recorders. Recorders should be affordable, easy to operate, and have low maintenance costs. Recorders must be capable of being produced in high volume, their parts must be interchangeable, and they must be easy to service. In early 1972, the commercial video recording industry in Japan took a financial hit. JVC cut its budgets and restructured its video division, shelving the VHS project. However, despite the lack of funding, Takano and Shiraishi continued to work on the project in secret. By 1973, the two engineers had produced a functional prototype. Competition with Betamax In 1974, the Japanese Ministry of International Trade and Industry (MITI), desiring to avoid consumer confusion, attempted to force the Japanese video industry to standardize on just one home video recording format. Later, Sony had a functional prototype of the Betamax format, and was very close to releasing a finished product. With this prototype, Sony persuaded the MITI to adopt Betamax as the standard, and allow it to license the technology to other companies. JVC believed that an open standard, with the format shared among competitors without licensing the technology, was better for the consumer. To prevent the MITI from adopting Betamax, JVC worked to convince other companies, in particular Matsushita (Japan's largest electronics manufacturer at the time, marketing its products under the National brand in most territories and the Panasonic brand in North America, and JVC's majority stockholder), to accept VHS, and thereby work against Sony and the MITI. Matsushita agreed, primarily out of concern that Sony might become the leader in the field if its proprietary Betamax format was the only one allowed to be manufactured. Matsushita also regarded Betamax's one-hour recording time limit as a disadvantage. Matsushita's backing of JVC persuaded Hitachi, Mitsubishi, and Sharp to back the VHS standard as well. Sony's release of its Betamax unit to the Japanese market in 1975 placed further pressure on the MITI to side with the company. However, the collaboration of JVC and its partners was much stronger, which eventually led the MITI to drop its push for an industry standard. JVC released the first VHS machines in Japan in late 1976, and in the United States in mid-1977. Sony's Betamax competed with VHS throughout the late 1970s and into the 1980s (see Videotape format war). Betamax's major advantages were its smaller cassette size, theoretical higher video quality, and earlier availability, but its shorter recording time proved to be a major shortcoming. Originally, Beta I machines using the NTSC television standard were able to record one hour of programming at their standard tape speed of 1.5 inches per second (ips). The first VHS machines could record for two hours, due to both a slightly slower tape speed (1.31 ips) and significantly longer tape. Betamax's smaller cassette limited the size of the reel of tape, and could not compete with VHS's two-hour capability by extending the tape length. 
Instead, Sony had to slow the tape down to 0.787 ips (Beta II) in order to achieve two hours of recording in the same cassette size. Sony eventually created a Beta III speed of 0.524 ips, which allowed NTSC Betamax to break the two-hour limit, but by then VHS had already won the format battle. Additionally, VHS had a "far less complex tape transport mechanism" than Betamax, and VHS machines were faster at rewinding and fast-forwarding than their Sony counterparts. VHS eventually won the war, gaining 60% of the North American market by 1980. Initial releases of VHS-based devices The first VCR to use VHS was the Victor HR-3300, introduced by the president of JVC in Japan on September 9, 1976. JVC started selling the HR-3300 in Akihabara, Tokyo, Japan, on October 31, 1976. Region-specific versions of the JVC HR-3300 were also distributed later on, such as the HR-3300U in the United States, and the HR-3300EK in the United Kingdom. The United States received its first VHS-based VCR, the RCA VBT200, on August 23, 1977. The RCA unit was designed by Matsushita and was the first VHS-based VCR manufactured by a company other than JVC. It was also capable of recording four hours in LP (long play) mode. The UK received its first VHS-based VCR, the Victor HR-3300EK, in 1978. Quasar and General Electric followed up with VHS-based VCRs – all designed by Matsushita. By 1999, Matsushita alone produced just over half of all Japanese VCRs. TV/VCR combos, combining a TV set with a VHS mechanism, were also once available for purchase. Combo units containing both a VHS mechanism and a DVD player were introduced in the late 1990s, and at least one combo unit, the Panasonic DMP-BD70V, included a Blu-ray player. Technical details VHS has been standardized in IEC 60774–1. Cassette and tape design The VHS cassette is a 187 mm wide, 103 mm deep, and 25 mm thick (approximately 7 3/8 × 4 1/16 × 1 inches) plastic shell held together with five Phillips-head screws. The flip-up cover, which allows players and recorders to access the tape, has a latch on the right side, with a push-in toggle to release it (bottom view image). The cassette has an anti-despooling mechanism, consisting of several plastic parts between the spools, near the front of the cassette (white and black in the top view). The spool latches are released by a push-in lever within a 6.35 mm (1/4 inch) hole at the bottom of the cassette, 19 mm (3/4 inch) in from the edge label. The tapes are made, pre-recorded, and inserted into the cassettes in cleanrooms, to ensure quality and to keep dust from getting embedded in the tape and interfering with recording (both of which could cause signal dropouts). There is a clear tape leader at both ends of the tape to provide an optical auto-stop for the VCR transport mechanism. In the VCR, a light source is inserted into the cassette through the circular hole in the center of the underside, and two photodiodes are on the left and right sides of where the tape exits the cassette. When the clear tape reaches one of these, enough light will pass through the tape to the photodiode to trigger the stop function; some VCRs automatically rewind the tape when the trailing end is detected. Early VCRs used an incandescent bulb as the light source: when the bulb failed, the VCR would act as if a tape were present when the machine was empty, or would detect the blown bulb and completely stop functioning. Later designs use an infrared LED, which has a much longer life.
The recording medium is a Mylar magnetic tape, 12.7 mm (1/2 inch) wide, coated with metal oxide, and wound on two spools. The tape speed for "Standard Play" mode (see below) is 3.335 cm/s (1.313 ips) for NTSC, 2.339 cm/s (0.921 ips) for PAL—or just over 2.0 and 1.4 metres (6 ft 6.7 in and 4 ft 7.2 in) per minute respectively. The tape length for a T-120 VHS cassette is 247.5 metres (812 ft). Tape loading technique As with almost all cassette-based videotape systems, VHS machines pull the tape out of the cassette shell and wrap it around the inclined head drum, which rotates at 1,800 rpm in NTSC machines and at 1,500 rpm for PAL, one complete rotation of the head corresponding to one video frame. VHS uses an "M-loading" system, also known as M-lacing, where the tape is drawn out by two threading posts and wrapped around more than 180 degrees of the head drum (and also other tape transport components) in a shape roughly approximating the letter M. The heads in the rotating drum get their signal wirelessly using a rotary transformer. Recording capacity A VHS cassette holds a maximum of about 430 m (1,410 ft) of tape at the lowest acceptable tape thickness, giving a maximum playing time of about four hours in a T-240/DF480 for NTSC and five hours in an E-300 for PAL at "standard play" (SP) quality. More frequently, however, VHS tapes are thicker than the required minimum to avoid complications such as jams or tears in the tape. Other speeds include "long play" (LP), "extended play" (EP) or "super long play" (SLP) (standard on NTSC; rarely found on PAL machines). For NTSC, LP and EP/SLP double and triple the recording time accordingly, but these speed reductions cause a reduction in horizontal resolution – from the normal equivalent of 250 vertical lines in SP, to the equivalent of 230 in LP and even less in EP/SLP. Due to the nature of recording diagonally from a spinning drum, the actual write speed of the video heads does not get slower when the tape speed is reduced. Instead, the video tracks become narrower and are packed closer together. This results in noisier playback that can be more difficult to track correctly: The effect of subtle misalignment is magnified by the narrower tracks. The heads for linear audio are not on the spinning drum, so for them, the tape speed from one reel to the other is the same as the speed of the heads across the tape. This speed is quite slow: for SP it is about two-thirds that of an audio cassette, and for EP it is slower than the slowest microcassette speed. This is widely considered inadequate for anything but basic voice playback, and was a major liability for VHS-C camcorders that encouraged the use of the EP speed. Color depth deteriorates significantly at lower speeds in PAL: often, a color image on a PAL tape recorded at low speed is displayed only in monochrome, or with intermittent color, when playback is paused. Tape lengths VHS cassettes for NTSC and PAL/SECAM systems are physically identical, although the signals recorded on the tape are incompatible. The tape speeds are different too, so the playing time for any given cassette will vary between the systems. To avoid confusion, manufacturers indicate the playing time in minutes that can be expected for the market the tape is sold in: E-XXX indicates playing time in minutes for PAL or SECAM. T-XXX indicates playing time in minutes for NTSC or PAL-M.
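A quick sketch (illustrative Python, not from the article) checking the SP playing-time figures above, together with the T-XXX/E-XXX label conversions spelled out in the formulas that follow:

```python
# Playing time of a T-120 at the NTSC "Standard Play" speed quoted above,
# plus the cassette-label conversions defined in the formulas below.
tape_length_m = 247.5                 # T-120 tape length
sp_speed_cm_per_s = 3.335             # NTSC SP linear tape speed
minutes = tape_length_m * 100.0 / sp_speed_cm_per_s / 60.0
print(round(minutes, 1))              # ~123.7 min, the nominal two hours

def pal_minutes(t_label):             # T-XXX tape played in a PAL/SECAM machine
    return t_label * 1.426

def ntsc_minutes(e_label):            # E-XXX tape played in an NTSC machine
    return e_label * 0.701

print(round(pal_minutes(120)))        # ~171 min, consistent with T120/E180
print(round(ntsc_minutes(180)))       # ~126 min
```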
To calculate the playing time for a T-XXX tape in a PAL machine, this formula is used: PAL/SECAM recording time = T-XXX in minutes × 1.426 To calculate the playing time for an E-XXX tape in an NTSC machine, this formula is used: NTSC recording time = E-XXX in minutes × 0.701 Since the recording/playback time for PAL/SECAM is roughly 1/3 longer than the recording/playback time for NTSC, some tape manufacturers label their cassettes with both T-XXX and E-XXX marks, like T60/E90, T90/E120 and T120/E180. SP is standard play, LP is long play (half speed, equal to recording time in DVHS "HS" mode), EP/SLP is extended/super long play (one-third speed), which was primarily released into the NTSC market. Copy protection As VHS was designed to facilitate recording from various sources, including television broadcasts or other VCR units, content producers quickly found that home users were able to use the devices to copy videos from one tape to another. Despite generation loss in quality when a tape was copied, this practice was regarded as a widespread problem, which members of the Motion Picture Association of America (MPAA) claimed caused them great financial losses. In response, several companies developed technologies to protect copyrighted VHS tapes from casual duplication by home users. The most popular method was Analog Protection System, better known simply as Macrovision, produced by a company of the same name. According to Macrovision: The technology is applied to over 550 million videocassettes annually and is used by every MPAA movie studio on some or all of their videocassette releases. Over 220 commercial duplication facilities around the world are equipped to supply Macrovision videocassette copy protection to rights owners...The study found that over 30% of VCR households admit to having unauthorized copies, and that the total annual revenue loss due to copying is estimated at $370,000,000 annually. The system was first used in copyrighted movies beginning with the 1984 film The Cotton Club. Macrovision copy protection saw refinement throughout its years, but has always worked by essentially introducing deliberate errors into a protected VHS tape's output video stream. These errors in the output video stream are ignored by most televisions, but will interfere with re-recording of programming by a second VCR. The first version of Macrovision introduces high signal levels during the vertical blanking interval, which occurs between the video fields. These high levels confuse the automatic gain control circuit in most VHS VCRs, leading to varying brightness levels in an output video, but are ignored by the TV as they are out of the frame-display period. "Level II" Macrovision uses a process called "colorstriping", which inverts the analog signal's colorburst period and causes off-color bands to appear in the picture. Level III protection added additional colorstriping techniques to further degrade the image. These protection methods worked well to defeat analog-to-analog copying by VCRs of the time. Consumer products capable of digital video recording are mandated by law to include features which detect Macrovision encoding of input analog streams, and disrupt copying of the video. Both intentional and false-positive detection of Macrovision protection has frustrated archivists who wish to copy now-fragile VHS tapes to a digital format for preservation.
As of the 2020s, modern software decoding ignores Macrovision, as software is not limited to the fixed standards that Macrovision was intended to disrupt in hardware-based systems. Recording process The recording process in VHS consists of the following steps, in this order: The tape is pulled from the supply reel by a capstan and pinch roller, similar to those used in audio tape recorders. The tape passes across the erase head, which wipes any existing recording from the tape. The tape is wrapped around the head drum, using a little more than 180 degrees of the drum. One of the heads on the spinning drum records one field of video onto the tape, in one diagonally oriented track. The tape passes across the audio and control head, which records the control track and the linear audio tracks. The tape is wound onto the take-up reel due to torque applied to the reel by the machine. Erase head The erase head is fed by a high-level, high-frequency AC signal that overwrites any previous recording on the tape. Without this step, the new recording cannot be guaranteed to completely replace any old recording that might have been on the tape. Video recording The tape path then carries the tape around the spinning video-head drum, wrapping it around a little more than 180 degrees (called the omega transport system) in a helical fashion, assisted by the slanted tape guides. The head rotates constantly at 1798.2 rpm in NTSC machines, exactly 1500 in PAL, each complete rotation corresponding to one frame of video. Two tape heads are mounted on the cylindrical surface of the drum, 180 degrees apart from each other, so that the two heads "take turns" in recording. The rotation of the inclined head drum, combined with the relatively slow movement of the tape, results in each head recording a track oriented at a diagonal with respect to the length of the tape, with the heads moving across the tape at speeds higher than what would otherwise be possible. This is referred to as helical scan recording. At the standard linear tape speed, this corresponds to the heads on the drum moving across the tape at a writing speed of 4.86 or 6.096 meters per second. To maximize the use of the tape, the video tracks are recorded very close together. To reduce crosstalk between adjacent tracks on playback, an azimuth recording method is used: The gaps of the two heads are not aligned exactly with the track path. Instead, one head is angled at plus six degrees from the track, and the other at minus six degrees. This results, during playback, in destructive interference of the signal from the tracks on either side of the one being played. Each of the diagonal-angled tracks is a complete TV picture field, lasting 1/60 of a second (1/50 on PAL) on the display. One tape head records an entire picture field. The adjacent track, recorded by the second tape head, is another 1/60 (or 1/50) of a second TV picture field, and so on. Thus one complete head rotation records an entire NTSC or PAL frame of two fields. The original VHS specification had only two video heads. When the EP recording speed was introduced, the thickness of these heads was reduced to accommodate the narrower tracks. However, this subtly reduced the quality of the SP speed, and dramatically lowered the quality of freeze frame and high speed search. Later models implemented both wide and narrow heads, and could use all four during pause and shuttle modes to further improve quality, although later machines combined both pairs into one.
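The drum-speed figures above tie directly to frame rates, since one full head-drum rotation records one frame; a back-of-the-envelope check (illustrative only):

```python
# One head-drum rotation = one video frame, so rpm / 60 = frames per second.
for system, rpm in (("NTSC", 1798.2), ("PAL", 1500.0)):
    print(system, round(rpm / 60.0, 2))   # NTSC ~29.97 fps, PAL 25.0 fps
```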
In machines supporting VHS HiFi (described later), yet another pair of heads was added to handle the VHS HiFi signal. Camcorders using the miniaturized drum required twice as many heads to complete any given task. This almost always meant four heads on the miniaturized drum with performance similar to a two-head VCR with a full-sized drum. No attempt was made to record Hi-Fi audio with such devices, as this would require an additional four heads to work. W-VHS decks could have up to 12 heads in the head drum, of which 11 were active, including a flying erase head for erasing individual video fields, and one was a dummy used for balancing the head drum. The high tape-to-head speed created by the rotating head results in a far higher bandwidth than could be practically achieved with a stationary head. VHS machines record up to 3 MHz of baseband video bandwidth and 300 kHz of baseband chroma bandwidth. The luminance (black and white) portion of the video is frequency modulated and combined with a down-converted "color under" chroma (color) signal that is encoded using quadrature amplitude modulation. Including side bands, the signal on a VHS tape can use up to 10 MHz of RF bandwidth. VHS horizontal resolution is 240 TVL, or about 320 lines across a scan line. The vertical resolution (number of scan lines) is the same as the respective analog TV standard (625 for PAL or 525 for NTSC; somewhat fewer scan lines are actually visible due to overscan and the VBI). In modern-day digital terminology, NTSC VHS resolution is roughly equivalent to 333×480 pixels for luma and 40×480 pixels for chroma. 333×480=159,840 pixels or 0.16 MP (1/6 of a megapixel). PAL VHS resolution is roughly 333×576 pixels for luma and 40×576 pixels for chroma (although, when decoded, PAL and SECAM halve the vertical color resolution). JVC countered 1985's SuperBeta with VHS HQ, or High Quality. The frequency modulation of the VHS luminance signal is limited to 3 megahertz, which makes higher resolutions technically impossible even with the highest-quality recording heads and tape materials, but an HQ-branded deck includes luminance noise reduction, chroma noise reduction, white clip extension, and improved sharpness circuitry. The effect was to increase the apparent horizontal resolution of a VHS recording from 240 to 250 analog (equivalent to 333 pixels from left-to-right, in digital terminology). The major VHS OEMs resisted HQ due to cost concerns, eventually resulting in JVC reducing the requirements for the HQ brand to white clip extension plus one other improvement. In 1987, JVC introduced a new format called Super VHS (often known as S-VHS), which extended the bandwidth to over 5 megahertz, yielding 420 analog horizontal (560 pixels left-to-right). Most Super VHS recorders can play back standard VHS tapes, but not vice versa. S-VHS was designed for higher resolution, but failed to gain popularity outside Japan because of the high costs of the machines and tapes. Because of the limited user base, Super VHS was never picked up to any significant degree by manufacturers of pre-recorded tapes, although it was used extensively in the low-end professional market for filming and editing. Audio recording After leaving the head drum, the tape passes over the stationary audio and control head. This records a control track at the bottom edge of the tape, and one or two linear audio tracks along the top edge.
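Returning to the resolution figures earlier in this section, the TVL-to-pixel conversions used there follow from scaling "lines per picture height" by the 4:3 aspect ratio; a small sketch (illustrative only, not from the article):

```python
# "Lines of horizontal resolution" (TVL) are counted per picture height;
# multiplying by the 4:3 aspect ratio gives the full-width pixel equivalent.
def tvl_to_pixels(tvl, aspect_ratio=4.0 / 3.0):
    return tvl * aspect_ratio

for tvl in (240, 250, 420):                  # VHS, VHS HQ, S-VHS
    print(tvl, round(tvl_to_pixels(tvl)))    # ~320, ~333, 560
```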
Original linear audio system In the original VHS specification, audio was recorded as baseband in a single linear track, at the upper edge of the tape, similar to how an audio compact cassette operates. The recorded frequency range was dependent on the linear tape speed. For the VHS SP mode, which already uses a lower tape speed than the compact cassette, this resulted in a mediocre frequency response of roughly 100 Hz to 10 kHz for NTSC; the frequency response for PAL VHS, with its lower standard tape speed, was somewhat worse, at about 80 Hz to 8 kHz. The signal-to-noise ratio (SNR) was an acceptable 42 dB for NTSC and 41 dB for PAL. Both parameters degraded significantly with VHS's longer play modes, with EP/NTSC frequency response peaking at 4 kHz. S-VHS tapes can give better audio (and video) quality, because the tapes are designed to have almost twice the bandwidth of VHS at the same speed. Sound cannot be recorded on a VHS tape without recording a video signal, because the video signal is used to generate the control track pulses which effectively regulate the tape speed on playback. Even in the audio dubbing mode, a valid video recording (control track signal) must be present on the tape for audio to be correctly recorded. If there is no video signal to the VCR input during recording, most later VCRs will record black video and generate a control track while the sound is being recorded. Some early VCRs record audio without a control track signal; this is of little use, because the absence of a signal from the control track means that the linear tape speed is irregular during playback. More sophisticated VCRs offer stereo audio recording and playback. Linear stereo fits two independent channels in the same space as the original mono audiotrack. While this approach preserves acceptable backward compatibility with monaural audio heads, the splitting of the audio track degrades the audio's signal-to-noise ratio, causing objectionable tape hiss at normal listening volume. To counteract the hiss, linear stereo VHS VCRs use Dolby B noise reduction for recording and playback. This dynamically boosts the high frequencies of the audio program on the recorded medium, improving its signal strength relative to the tape's background noise floor, then attenuates the high frequencies during playback. Dolby-encoded program material exhibits a high-frequency emphasis when played on non-Hi-Fi VCRs that are not equipped with the matching Dolby Noise Reduction decoder, although this may actually improve the sound quality of non-Hi-Fi VCRs, especially at the slower recording speeds. High-end consumer recorders take advantage of the linear nature of the audio track, as the audio track could be erased and recorded without disturbing the video portion of the recorded signal. Hence, "audio dubbing" and "video dubbing", where either the audio or video is re-recorded on tape (without disturbing the other), were supported features on prosumer linear video editing decks. Without dubbing capability, an audio or video edit could not be done in place on the master cassette, and required the edited output to be captured to another tape, incurring generational loss. Studio film releases began to emerge with linear stereo audiotracks in 1982. From that point, nearly every home video release by Hollywood featured a Dolby-encoded linear stereo audiotrack. However, linear stereo was never popular with equipment makers or consumers.
Tracking adjustment and index marking Another linear control track at the tape's lower edge holds pulses that mark the beginning of every frame of video; these are used to fine-tune the tape speed during playback, so that the high speed rotating heads remain exactly on their helical tracks rather than somewhere between two adjacent tracks (known as "tracking"). Since good tracking depends on precise distances between the rotating drum and the fixed control/audio head reading the linear tracks, which usually varies by a couple of micrometers between machines due to manufacturing tolerances, most VCRs offer tracking adjustment, either manual or automatic, to correct such mismatches. The control track is also used to hold index marks, which were normally written at the beginning of each recording session, and can be found using the VCR's index search function: this will fast-wind forward or backward to the nth specified index mark, and resume playback from there. Some higher-end VCRs provided functions for the user to manually add and remove these marks. By the late 1990s, some high-end VCRs offered more sophisticated indexing. For example, Panasonic's Tape Library system assigned an ID number to each cassette, and logged recording information (channel, date, time and optional program title entered by the user) both on the cassette and in the VCR's memory for up to 900 recordings (600 with titles). Hi-Fi audio system Around 1984, JVC added Hi-Fi audio to VHS (model HR-D725U), in response to Betamax's introduction of Beta Hi-Fi. Both VHS Hi-Fi and Betamax Hi-Fi delivered flat full-range frequency response (20 Hz to 20 kHz), excellent 70 dB signal-to-noise ratio (in consumer space, second only to the compact disc), dynamic range of 90 dB, and professional audio-grade channel separation (more than 70 dB). VHS Hi-Fi audio is achieved by using audio frequency modulation (AFM), modulating the two stereo channels (L, R) on two different frequency-modulated carriers and embedding the combined modulated audio signal pair into the video signal. To avoid crosstalk and interference from the primary video carrier, VHS's implementation of AFM relied on a form of magnetic recording called depth multiplexing. The modulated audio carrier pair was placed in the hitherto-unused frequency range between the luminance and the color carrier (below 1.6 MHz), and recorded first. Subsequently, the video head erases and re-records the video signal (combined luminance and color signal) over the same tape surface, but the video signal's higher center frequency results in a shallower magnetization of the tape, allowing both the video and residual AFM audio signal to coexist on tape. (PAL versions of Beta Hi-Fi use this same technique). During playback, VHS Hi-Fi recovers the depth-recorded AFM signal by subtracting the audio head's signal (which contains the AFM signal contaminated by a weak image of the video signal) from the video head's signal (which contains only the video signal), then demodulates the left and right audio channels from their respective frequency carriers. The result of the complex process was audio of high fidelity, which was uniformly solid across all tape speeds (EP, LP, or SP). Since JVC had gone through the complexity of ensuring Hi-Fi's backward compatibility with non-Hi-Fi VCRs, virtually all studio home video releases produced after this time contained Hi-Fi audio tracks, in addition to the linear audio track.
Under normal circumstances, all Hi-Fi VHS VCRs will record Hi-Fi and linear audio simultaneously to ensure compatibility with VCRs without Hi-Fi playback, though only early high-end Hi-Fi machines provided linear stereo compatibility. The sound quality of Hi-Fi VHS stereo is comparable to some extent to the quality of CD audio, particularly when recordings were made on high-end or professional VHS machines that have a manual audio recording level control. This high quality compared to other consumer audio recording formats such as compact cassette attracted the attention of amateur and hobbyist recording artists. Home recording enthusiasts occasionally recorded high quality stereo mixdowns and master recordings from multitrack audio tape onto consumer-level Hi-Fi VCRs. However, because the VHS Hi-Fi recording process is intertwined with the VCR's video-recording function, advanced editing functions such as audio-only or video-only dubbing are impossible. A short-lived alternative to the Hi-Fi feature for recording mixdowns of hobbyist audio-only projects was the PCM adaptor, which encoded high-bandwidth digital audio as a grid of black-and-white dots on an analog video carrier to give professional-grade digital sound; DAT tapes later made this approach obsolete. Some VHS decks also had a "simulcast" switch, allowing users to record an external audio input along with off-air pictures. Some televised concerts offered a stereo simulcast soundtrack on FM radio, and as such, events like Live Aid were recorded by thousands of people with a full stereo soundtrack despite the fact that stereo TV broadcasts were some years off (especially in regions that adopted NICAM). Other examples of this included network television shows such as Friday Night Videos and MTV for its first few years in existence. Likewise, some countries, most notably South Africa, provided alternate language audio tracks for TV programming through an FM radio simulcast. The considerable complexity and additional hardware limited VHS Hi-Fi to high-end decks for many years. While linear stereo all but disappeared from home VHS decks, it was not until the 1990s that Hi-Fi became a more common feature on VHS decks. Even then, most customers were unaware of its significance and merely enjoyed the better audio performance of the newer decks. VHS Hi-Fi audio has been standardized in IEC 60774-2. Issues with Hi-Fi audio Due to the path followed by the video and Hi-Fi audio heads being striped and discontinuous—unlike that of the linear audio track—head-switching is required to provide a continuous audio signal. While the video signal can easily hide the head-switching point in the invisible vertical retrace section of the signal, so that the exact switching point is not very important, the same is obviously not possible with a continuous audio signal that has no inaudible sections. Hi-Fi audio is thus dependent on a much more exact alignment of the head switching point than is required for non-HiFi VHS machines. Misalignments may lead to imperfect joining of the signal, resulting in low-pitched buzzing. The problem is known as "head chatter", and tends to increase as the audio heads wear down. Another issue that made VHS Hi-Fi imperfect for music is inaccurate reproduction of levels: softer and louder passages are not re-created exactly as in the original source. Variations Super-VHS / ADAT / SVHS-ET Several improved versions of VHS exist, most notably Super-VHS (S-VHS), an analog video standard with improved video bandwidth.
S-VHS improved the horizontal luminance resolution to 400 lines (versus 250 for VHS/Beta and 500 for DVD). The audio system (both linear and AFM) is the same. S-VHS made little impact on the home market, but gained dominance in the camcorder market due to its superior picture quality. The ADAT format provides the ability to record multitrack digital audio using S-VHS media. JVC also developed SVHS-ET technology for its Super-VHS camcorders and VCRs, which simply allows them to record Super VHS signals onto lower-priced VHS tapes, albeit with a slight blurring of the image. Nearly all later JVC Super-VHS camcorders and VCRs have SVHS-ET ability. VHS-C / Super VHS-C Another variant is VHS-Compact (VHS-C), originally developed for portable VCRs in 1982, but ultimately finding success in palm-sized camcorders. The longest tape available for NTSC holds 60 minutes in SP mode and 180 minutes in EP mode. Since VHS-C tapes are based on the same magnetic tape as full-size tapes, they can be played back in standard VHS players using a mechanical adapter, without the need of any kind of signal conversion. The magnetic tape on VHS-C cassettes is wound on one main spool and uses a gear wheel to advance the tape. The adapter is mechanical, although early examples were motorized, with a battery. It has an internal hub to engage with the VCR mechanism in the location of a normal full-size tape hub, driving the gearing on the VHS-C cassette. Also, when a VHS-C cassette is inserted into the adapter, a small swing-arm pulls the tape out of the miniature cassette to span the standard tape path distance between the guide rollers of a full-size tape. This allows the tape from the miniature cassette to use the same loading mechanism as that from the standard cassette. Super VHS-C, or S-VHS Compact, was developed by JVC in 1987. It provided improved luminance and chrominance quality, yet S-VHS recorders remained compatible with VHS tapes. Sony was unable to shrink its Betamax form any further, so instead developed Video8/Hi8, which was in direct competition with the VHS-C/S-VHS-C format throughout the 1980s, 1990s, and 2000s. Ultimately neither format "won" and both have been superseded by digital high definition equipment. W-VHS / Digital-VHS (high-definition) Wide-VHS (W-VHS) allowed recording of MUSE Hi-Vision analog high definition television, which was broadcast in Japan from 1989 until 2007. The other improved standard, called Digital-VHS (D-VHS), records digital high definition video onto a VHS form factor tape. D-VHS can record up to 4 hours of ATSC digital television in 720p or 1080i formats using the fastest record mode (equivalent to VHS-SP), and up to 49 hours of lower-definition video at slower speeds. D9 There is also a JVC-designed component digital professional production format known as Digital-S, officially designated D9, that uses a VHS form factor tape and essentially the same mechanical tape handling techniques as an S-VHS recorder. It is the least expensive format to support a Sel-Sync pre-read for video editing. This format competed with Sony's Digital Betacam in the professional and broadcast market, although in that area Sony's Betacam family ruled supreme, in contrast to the outcome of the VHS/Betamax domestic format war. It has now been superseded by high definition formats. V-Lite In the late 1990s, there was a disposable promotional variation of the VHS format called V-Lite.
It was a cassette constructed largely of polystyrene, with only the rotating components, such as the tape reels, made of hard plastic; the casings were glued and lacked standard features like a protective cover for the exposed tape. Its purpose was to be as lightweight as possible to minimize mass-delivery costs for a media company's promotional campaign, and it was intended for only a few viewings, with a runtime of typically 2 to 3 minutes. One such production so promoted was the A&E Network's 2000 adaptation of The Great Gatsby. The format arose concurrently with, and was then rendered obsolete by, the rise of the DVD video format, which eventually supplanted VHS, being lighter and less expensive still to mass-distribute; video streaming would later supplant the use of physical media for video promotion. Accessories Shortly after the introduction of the VHS format, VHS tape rewinders were developed. These devices served the sole purpose of rewinding VHS tapes. Proponents of the rewinders argued that the use of the rewind function on the standard VHS player would lead to wear and tear of the transport mechanism. The rewinder would rewind the tapes smoothly and also normally do so at a faster rate than the standard rewind function on VHS players. However, some rewinder brands suffered from frequent abrupt stops, which occasionally led to tape damage. Some devices were marketed which allowed a personal computer to use a VHS recorder as a data backup device. The most notable of these was ArVid, widely used in Russia and CIS states. Similar systems were manufactured in the United States by Corvus and Alpha Microsystems, and in the UK by Backer from Danmere Ltd. The Backer system could store up to 4 GB of data with a transfer rate of 9 MB per minute. Signal standards VHS can record and play back all varieties of analog television signals in existence at the time VHS was devised. However, a machine must be designed to record a given standard. Typically, a VHS machine can only handle signals using the same standard as the country it was sold in. Because some parameters of analog broadcast TV are not applicable to VHS recordings, the number of VHS tape recording format variations is smaller than the number of broadcast TV signal variations—for example, analog TVs and VHS machines (except multistandard devices) are not interchangeable between the UK and Germany, but VHS tapes are. The following tape recording formats exist in conventional VHS (listed in the form of standard/lines/frames): SECAM/625/25 (SECAM, French variety) MESECAM/625/25 (most other SECAM countries, notably the former Soviet Union and Middle East) NTSC/525/30 (Most parts of Americas, Japan, South Korea) PAL/525/30 (i.e., PAL-M, Brazil) PAL/625/25 (most of Western Europe, Australia, New Zealand, many parts of Asia such as China and India, some parts of South America such as Argentina, Uruguay and the Falklands, and Africa) PAL/625/25 VCRs allow playback of SECAM (and MESECAM) tapes with a monochrome picture, and vice versa, as the line standard is the same. Since the 1990s, dual and multi-standard VHS machines, able to handle a variety of VHS-supported video standards, have become more common. For example, VHS machines sold in Australia and Europe could typically handle PAL, MESECAM for record and playback, and NTSC for playback only on suitable TVs.
Dedicated multi-standard machines can usually handle all standards listed, and some high-end models could convert the content of a tape from one standard to another on the fly during playback by using a built-in standards converter. S-VHS is only implemented as such in PAL/625/25 and NTSC/525/30; S-VHS machines sold in SECAM markets record internally in PAL, and convert between PAL and SECAM during recording and playback. S-VHS machines for the Brazilian market record in NTSC and convert between it and PAL-M. A small number of VHS decks are able to decode closed captions on video cassettes and send the full signal, captions included, to the set. A smaller number still are able, additionally, to record subtitles transmitted with world standard teletext signals (on pre-digital services), simultaneously with the associated program. S-VHS has a sufficient resolution to record teletext signals with relatively few errors, although for some years now it has been possible to recover teletext pages and even complete "page carousels" from regular VHS recordings using non-real-time computer processing. Uses in marketing VHS was popular for long-form content, such as feature films or documentaries, as well as short-play content, such as music videos, in-store videos, teaching videos, distribution of lectures and talks, and demonstrations. VHS instruction tapes were sometimes included with various products and services, including exercise equipment, kitchen appliances, and computer software. Comparison to Betamax VHS was the winner of a protracted and somewhat bitter format war during the late 1970s and early 1980s against Sony's Betamax format as well as other formats of the time. Betamax was widely perceived at the time as the better format, as the cassette was smaller in size, and Betamax offered slightly better video quality than VHS – it had lower video noise, less luma-chroma crosstalk, and was marketed as providing pictures superior to those of VHS. However, the sticking point for both consumers and potential licensing partners of Betamax was the total recording time. To overcome the recording limitation, Beta II speed (two-hour mode, NTSC regions only) was released in order to compete with VHS's two-hour SP mode, thereby reducing Betamax's horizontal resolution to 240 lines (vs 250 lines). In turn, the extension of VHS to VHS HQ produced 250 lines (vs 240 lines), so that overall a typical Betamax/VHS user could expect virtually identical resolution. (Very high-end Betamax machines still supported recording in the Beta I mode and some in an even higher resolution Beta Is (Beta I Super HiBand) mode, but at a maximum single-cassette run time of 1:40 [with an L-830 cassette].) Because Betamax was released more than a year before VHS, it held an early lead in the format war. However, by 1981, United States' Betamax sales had dipped to only 25 percent of all sales. There was debate among experts over the cause of Betamax's loss. Some, including Sony's founder Akio Morita, say that it was due to Sony's licensing strategy with other manufacturers, which consistently kept the overall cost for a unit higher than a VHS unit, and that JVC allowed other manufacturers to produce VHS units license-free, thereby keeping costs lower. Others say that VHS had better marketing, since the much larger electronics companies at the time (Matsushita, for example) supported VHS. Sony would make its first VHS players/recorders in 1988, although it continued to produce Betamax machines concurrently until 2002.
Decline VHS was widely used in television-equipped American and European living rooms for more than twenty years from its introduction in the late 1970s. The home television recording market, also known as the VHS market, as well as the camcorder market, has since transitioned to digital recording on solid-state memory cards. The introduction of the DVD format to American consumers in March 1997 triggered the market-share decline of VHS. DVD rentals surpassed those on the VHS format in the United States for the first time in June 2003. The Hill said that David Cronenberg's movie A History of Violence, sold on VHS in 2006, was "widely believed to be the last instance of a major motion picture to be released in that format". By December 2008, the Los Angeles Times reported on "the final truckload of VHS tapes" being shipped from a warehouse in Palm Harbor, Florida, citing Ryan J. Kugler's Distribution Video Audio Inc. as "the last major supplier". Though 94.5 million Americans still owned VHS-format VCRs in 2005, market share continued to drop. In the mid-2000s, several retail chains in the United States and Europe announced they would stop selling VHS equipment. In the U.S., no major brick-and-mortar retailers stock VHS home-video releases; they focus only on DVD and Blu-ray media. Sony Pictures Home Entertainment, along with other companies, ceased production of VHS in late 2010 in South Korea. The last known company in the world to manufacture VHS equipment was Funai of Japan, which produced video cassette recorders under the Sanyo brand in China and North America. Funai ceased production of VHS equipment (VCR/DVD combos) in July 2016, citing falling sales and a shortage of components. Modern use Despite the decline in both VHS players and programming on VHS machines, they are still owned in some households worldwide. Those who still use or hold on to VHS do so for a number of reasons, including nostalgic value, ease of use in recording, keeping personal videos or home movies, watching content currently exclusive to VHS, and collecting. Some expatriate communities in the United States also obtain video content from their native countries in VHS format. Although VHS has been discontinued in the United States, VHS recorders and blank tapes were still sold at stores in other developed countries prior to digital television transitions. As an acknowledgement of the continued use of VHS, Panasonic announced the world's first dual-deck VHS-Blu-ray player in 2009. The last standalone JVC VHS-only unit was produced on October 28, 2008. JVC and other manufacturers continued to make combination DVD+VHS units even after the decline of VHS. Countries like South Korea released films on VHS until December 2010, with Inception being the last Hollywood film to be released on VHS in the country. A market for pre-recorded VHS tapes has continued, and some online retailers such as Amazon still sell new and used pre-recorded VHS cassettes of movies and television programs. The major Hollywood studios no longer issue general releases on VHS. The last major studio film to be released in the format in the United States and Canada, other than as part of special marketing promotions, was A History of Violence in 2006. In October 2008, Distribution Video Audio Inc., the last major American supplier of pre-recorded VHS tapes, shipped its final truckload of tapes to stores in America. However, there have been a few exceptions. 
For example, The House of the Devil was released on VHS in 2010 as an Amazon-exclusive deal, in keeping with the film's intent to mimic 1980s horror films. The first Paranormal Activity film, produced in 2007, had a VHS release in the Netherlands in 2010. The horror film V/H/S/2 was released in North America on September 24, 2013, as a combo pack that included a VHS tape in addition to a Blu-ray and a DVD copy. In 2019, Paramount Pictures produced limited quantities of the 2018 film Bumblebee on VHS to give away as promotional contest prizes. In 2021, professional wrestling promotion Impact Wrestling released a limited run of VHS tapes containing that year's Slammiversary, which quickly sold out. The company later announced future VHS runs of pay-per-view events. The VHS medium has a cult following, and VHS collecting made a comeback in the 2020s. In February 2021, it was reported that VHS was once again doing well as an underground market, and in January 2023 it was reported that VHS tapes were once again becoming valuable collectors' items. The 2024 horror film Alien: Romulus will have a limited release on VHS, marking the first major Hollywood film to receive an official VHS release since 2006. Successors VCD The Video CD (VCD) was created in 1993 as an alternative medium for video in a CD-sized disc. VCDs occasionally show compression artifacts and color banding, discrepancies common in digital media, and the durability and longevity of a VCD depend on the production quality of the disc and its handling. The data stored digitally on a VCD theoretically does not degrade in the way analog tape does. In the disc player, there is no physical contact made with either the data or label sides. When handled properly, a VCD will last a long time. Since a VCD can hold only 74 minutes of video, a movie exceeding that mark has to be divided into two or more discs. DVD The DVD-Video format was introduced first on November 1, 1996, in Japan; to the United States on March 26, 1997 (test marketed); and in mid-to-late 1998 in Europe and Australia. While the DVD was highly successful in the pre-recorded retail market, it failed to displace VHS for in-home recording of video content (e.g. broadcast or cable television). A number of factors hindered the commercial success of the DVD in this regard, including: a reputation for being temperamental and unreliable, as well as the risk of scratches and hairline cracks; incompatibilities when discs recorded on one manufacturer's machine were played back on another manufacturer's machine; and compression artifacts: MPEG-2 video compression can result in visible artifacts such as macroblocking, mosquito noise and ringing, which become accentuated in extended recording modes (more than three hours on a DVD-5 disc). Standard VHS will not suffer from any of these problems, all of which are characteristic of certain digital video compression systems (see Discrete cosine transform), but VHS will result in reduced luminance and chroma resolution, which makes the picture look horizontally blurred (resolution decreases further with LP and EP recording modes). VHS also adds considerable noise to both the luminance and chroma channels. High-capacity digital recording technologies High-capacity digital recording systems are also gaining in popularity with home users. 
These types of systems come in several form factors: hard disk–based set-top boxes; hard disk/optical disc combination set-top boxes; personal computer–based media centers; and portable media players with TV-out capability. Hard disk–based systems include TiVo as well as other digital video recorder (DVR) offerings. These types of systems provide users with a no-maintenance solution for capturing video content. Customers of subscriber-based TV generally receive electronic program guides, enabling one-touch setup of a recording schedule. Hard disk–based systems allow for many hours of recording without user maintenance. For example, a 120 GB system recording at an extended recording rate (XP) of 10 Mbit/s MPEG-2 can record over 25 hours of video content. Legacy Often considered an important medium of film history, the influence of VHS on art and cinema was highlighted in a retrospective staged at the Museum of Arts and Design in 2013. In 2015, the Yale University Library collected nearly 3,000 horror and exploitation movies on VHS tapes, distributed from 1978 to 1985, calling them "the cultural id of an era." The documentary film Rewind This! (2013), directed by Josh Johnson, tracks the impact of VHS on the film industry through various filmmakers and collectors. The last Blockbuster franchise, based in Bend, Oregon, a town of under 100,000 people as of 2020, is still renting out VHS tapes. The VHS aesthetic is also a central component of the analog horror genre, which is largely known for imitating recordings of late 20th-century TV broadcasts. See also Analog video Tape head cleaner Analog video on discs: Capacitance Electronic Disc (CED) Video High Density (VHD) LaserDisc Notes References External links HowStuffWorks: How VCRs Work The 'Total Rewind' VCR museum – a site covering the history of VHS and other vintage formats. VHSCollector.com: Analog Video Cassette Archive – a growing archive of commercially released video cassettes from their dawn to the present, and a guide to collecting. Audiovisual introductions in 1976 Products introduced in 1976 Japanese inventions Composite video formats Panasonic Videotape Digital media Home video Videocassette formats
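The quoted recording-time figure can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming decimal gigabytes (10^9 bytes) and ignoring audio and filesystem overhead:

```python
# Recording time of a 120 GB DVR at a constant 10 Mbit/s MPEG-2 video rate.
capacity_bits = 120e9 * 8    # 120 GB drive, decimal gigabytes, in bits
rate_bps = 10e6              # 10 Mbit/s recording rate
hours = capacity_bits / rate_bps / 3600
print(f"{hours:.1f} hours")  # ~26.7 hours, consistent with "over 25 hours"
```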
VHS
Technology
11,530
18,634,715
https://en.wikipedia.org/wiki/Yahoo%20M45
The M45 Project (or M45) is the name of a computing cluster announced in November 2007 by Yahoo!. According to Yahoo!, it has approximately 4,000 processors, three terabytes of memory, 1.5 petabytes of disks, and a peak performance of more than 27 trillion calculations per second (27 teraflops), placing it among the top 50 fastest supercomputers in the world. Name M45 is named after the Messier catalog number of the star cluster commonly known as the Pleiades. References Supercomputers M45
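Those headline figures imply a per-processor throughput in line with commodity hardware of the era; a quick check, assuming only the counts quoted above:

```python
# Rough per-processor peak implied by the quoted cluster figures.
peak_flops = 27e12     # 27 teraflops peak
processors = 4000      # approximate processor count
print(peak_flops / processors / 1e9, "GFLOPS per processor")  # 6.75
```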
Yahoo M45
Technology
118
961,138
https://en.wikipedia.org/wiki/Messier%2026
Messier 26, also known as NGC 6694, is an open cluster of stars in the southern constellation of Scutum. It was discovered by Charles Messier in 1764. This 8th-magnitude cluster is a challenge to find with typical binoculars even in ideal skies, but it can be seen with any modern minimum-aperture device. It lies south-southwest of the open cluster Messier 11. About 25 stars are visible in a telescope of modest aperture. M26 spans a linear size of 22 light years across and is at a distance of 5,160 light years from the Earth. The brightest star is of magnitude 11, and the age of this cluster has been calculated to be 85.3 million years. It includes one known spectroscopic binary system. An interesting feature of M26 is a region of low star density near the nucleus. One hypothesis was that it was caused by an obscuring cloud of interstellar matter between us and the cluster, but a paper by James Cuffey suggested that this is not possible and that it really is a "shell of low stellar space density". In 2015, Michael Merrifield of the University of Nottingham said that there is, as yet, no clear explanation for the phenomenon. Gallery See also List of Messier objects NGC 1193 Footnotes and references Footnotes References External links Messier 26, SEDS Messier pages Messier 026 Carina–Sagittarius Arm Discoveries by Charles Messier
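The quoted linear size and distance together fix the cluster's apparent size through the small-angle approximation; a quick check using only the two figures quoted above:

```python
import math

# Apparent (angular) diameter of M26 from its linear size and distance.
linear_size_ly = 22       # quoted linear diameter in light years
distance_ly = 5160        # quoted distance in light years
angle_rad = linear_size_ly / distance_ly     # small-angle approximation
angle_arcmin = math.degrees(angle_rad) * 60
print(f"{angle_arcmin:.1f} arcminutes")      # ~14.7', close to the ~15'
                                             # apparent size given in catalogs
```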
Messier 26
Astronomy
316
48,212,124
https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%2045%E2%80%9350%20light-years
This is a list of star systems within 45–50 light years of Earth. See also Lists of stars List of star systems within 40–45 light-years List of star systems within 50–55 light-years List of nearest stars and brown dwarfs Notes References Lists of stars Star systems Lists by distance
List of star systems within 45–50 light-years
Physics,Astronomy
62
74,175,376
https://en.wikipedia.org/wiki/Type%20and%20cotype%20of%20a%20Banach%20space
In functional analysis, the type and cotype of a Banach space are a classification of Banach spaces through probability theory, measuring how far a Banach space is from a Hilbert space. The starting point is the Pythagorean identity for orthogonal vectors $x_1,\dots,x_n$ in Hilbert spaces: $\left\|\sum_{i=1}^n x_i\right\|^2 = \sum_{i=1}^n \|x_i\|^2$. This identity no longer holds in general Banach spaces; however, one can introduce a notion of orthogonality probabilistically with the help of Rademacher random variables, for which reason one also speaks of Rademacher type and Rademacher cotype. The notion of type and cotype was introduced by French mathematician Jean-Pierre Kahane. Definition Let $(X,\|\cdot\|)$ be a Banach space and let $(\varepsilon_i)_{i\geq 1}$ be a sequence of independent Rademacher random variables, i.e. $\mathbb{P}(\varepsilon_i=-1)=\mathbb{P}(\varepsilon_i=1)=\tfrac{1}{2}$, so that $\mathbb{E}[\varepsilon_i\varepsilon_j]=0$ for $i\neq j$ and $\mathbb{E}[\varepsilon_i^2]=1$. Type $X$ is of type $p$ for $p\in[1,2]$ if there exists a finite constant $T>0$ such that $\mathbb{E}\left\|\sum_{i=1}^n \varepsilon_i x_i\right\|^p \leq T^p \sum_{i=1}^n \|x_i\|^p$ for all finite sequences $(x_i)_{i=1}^n$ in $X$. The sharpest constant is called the type constant and denoted $T_p(X)$. Cotype $X$ is of cotype $q$ for $q\in[2,\infty]$ if there exists a finite constant $C>0$ such that $\mathbb{E}\left\|\sum_{i=1}^n \varepsilon_i x_i\right\|^q \geq \frac{1}{C^q}\sum_{i=1}^n \|x_i\|^q$ if $q<\infty$, respectively $\mathbb{E}\left\|\sum_{i=1}^n \varepsilon_i x_i\right\| \geq \frac{1}{C}\max_{1\leq i\leq n}\|x_i\|$ if $q=\infty$, for all finite sequences $(x_i)_{i=1}^n$ in $X$. The sharpest constant is called the cotype constant and denoted $C_q(X)$. Remarks By taking the $p$-th resp. $q$-th root one gets the corresponding inequality for the Bochner norm of $\sum_{i=1}^n \varepsilon_i x_i$. Properties Every Banach space is of type $1$ (this follows from the triangle inequality). A Banach space is of type $2$ and cotype $2$ if and only if it is isomorphic to a Hilbert space. If a Banach space: is of type $p$, then it is also of type $p'$ for every $p'\leq p$; is of cotype $q$, then it is also of cotype $q'$ for every $q'\geq q$; is of type $p$ for $1<p\leq 2$, then its dual space $X^*$ is of cotype $q$ with $\tfrac{1}{p}+\tfrac{1}{q}=1$ (the conjugate index); further it holds that $C_q(X^*)\leq T_p(X)$. Examples The spaces $L^p$ for $1\leq p\leq 2$ are of type $p$ and cotype $2$; this means $L^1$ is of type $1$, $L^2$ is of type $2$, and so on. The spaces $L^p$ for $2\leq p<\infty$ are of type $2$ and cotype $p$. The space $c_0$ is of type $1$ and cotype $\infty$, i.e. it has only the trivial type and cotype. Literature References Functional analysis Banach spaces
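The definitions can be checked directly in a Hilbert space $H$, where the independence of the Rademacher signs makes the average computable exactly; the following short derivation (a verification, not taken from the literature cited above) uses only $\mathbb{E}[\varepsilon_i\varepsilon_j]=\delta_{ij}$ and bilinearity of the inner product:

$$\mathbb{E}\left\|\sum_{i=1}^n \varepsilon_i x_i\right\|^2 = \sum_{i,j=1}^n \mathbb{E}[\varepsilon_i\varepsilon_j]\,\langle x_i, x_j\rangle = \sum_{i=1}^n \|x_i\|^2.$$

Hence both the type-$2$ upper bound and the cotype-$2$ lower bound hold with equality, so $T_2(H) = C_2(H) = 1$: the Rademacher average recovers the Pythagorean identity in expectation.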
Type and cotype of a Banach space
Mathematics
389
1,678,438
https://en.wikipedia.org/wiki/Temperate%20forest
A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest terrestrial biome, covering 25% of the world's forest area; only the boreal forest, which covers about 33%, is larger. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest. Climate The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone; however, Vancouver is located in a temperate rainforest, while Los Angeles has a relatively dry Mediterranean climate. Types of temperate forest Deciduous They are found in Europe, East Asia, North America, and in some parts of South America. Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia. Coniferous Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception, as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer-lived but require more energy to grow. Mixed As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia, prunus, holly, and rhododendron. In South America, conifer and oak species predominate. In Australia, eucalypts are the predominant trees. Hardwood evergreen trees, which are widely spaced and found in the Mediterranean region, include olive, cork oak and stone pine. Temperate rainforest Temperate rainforests are the wettest of all the types, and are found only in very wet coastal areas. Adding to their rarity, most of the temperate rainforests outside protected areas have been cut down and no longer exist. Temperate rainforests can, however, still be found in some areas, including the Pacific Northwest, southern Chile, northern Turkey (along with some regions of Bulgaria and Georgia), most of Japan, and others. Effect of human activity Temperate forests are located in the middle latitudes, where much of the planet's population is. Not only were these forests cut down to build cities (e.g. New York City, Seattle, London, Tokyo, Paris), they have also been "cut down long ago to make way for cultivation." This biome has been subject to mining, logging, hunting, pollution, deforestation and habitat loss. References Biomes Ecosystems Forests
Temperate forest
Biology
644
36,443,307
https://en.wikipedia.org/wiki/Clavulina%20rugosa
Clavulina rugosa, commonly known as the wrinkled coral fungus, is a species of coral fungus in the family Clavulinaceae. It is edible. Taxonomy The species was originally described as Clavaria rugosa by Jean Bulliard in 1790. It was transferred to Clavulina by Joseph Schröter in 1888. Description The fruit bodies vary in height and width. Distribution and habitat It can be found in Europe, growing near wooded paths from August to November. Uses One field guide lists it as edible when cooked. References External links Edible fungi Fungi described in 1790 Fungi of North America rugosa Fungus species
Clavulina rugosa
Biology
129
44,645,174
https://en.wikipedia.org/wiki/Token%20reconfiguration
In computational complexity theory and combinatorics, the token reconfiguration problem is a reconfiguration problem on a graph with both an initial and desired state for tokens. Given a graph $G$, an initial state of tokens is defined by a subset $T_s$ of the vertices of the graph; let $k = |T_s|$. Moving a token from vertex $u$ to vertex $v$ is valid if $u$ and $v$ are joined by a path in $G$ that does not contain any other tokens; note that the distance traveled within the graph is inconsequential, and moving a token across multiple edges sequentially is considered a single move. A desired end state is defined as another subset $T_t$ of the same size. The goal is to minimize the number of valid moves to reach the end state from the initial state. Motivation The problem is motivated by so-called sliding puzzles, which are in fact a variant of this problem, often restricted to rectangular grid graphs with no holes. The most famous such puzzle, the 15 puzzle, is a variant of this problem on a 4 by 4 grid graph with $k = 15$ tokens. One key difference between sliding block puzzles and the token reconfiguration problem is that in the original token reconfiguration problem, the tokens are indistinguishable. As a result, if the graph is connected, the token reconfiguration problem is always solvable; this is not necessarily the case for sliding block puzzles. Complexity Calinescu, Dumitrescu, and Pach have shown several results regarding both the optimization and approximation of this problem on various types of graphs. Optimization Firstly, reducing to the case of trees, there is always a solution in at most $k$ moves, with at most one move per token. Furthermore, an optimal solution can be found in time linear in the size of the tree. Clearly, the first result extends to arbitrary connected graphs; the latter does not. A sketch of the optimal algorithm for trees is as follows. First, we obtain an algorithm that moves each token exactly once, which may not be optimal. Do this recursively: consider any leaf of the smallest subtree in the graph containing both the initial and desired sets. If a leaf of this subtree is in both, remove it and recurse down. If a leaf is in the initial set only, find a path from it to a vertex in the desired set that does not pass through any other vertices in the desired set. Remove this path (it will be the last move), and recurse down. The other case, where the leaf is in the desired set only, is symmetric. To extend this to an algorithm that achieves the optimum, consider any token in both the initial and desired sets. If removing it would split the graph into subtrees, all of which have the same number of elements from the initial and desired sets, then do so and recurse. If there is no such token, then each token must move at least once, and so the solution that moves all tokens exactly once must be optimal. While the algorithm for finding the optimum on trees is linear time, finding the optimum for general graphs is NP-complete, a leap up in difficulty. It is in NP; the certificate is a sequence of moves, which is of at most linear size, so it remains to show the problem is NP-hard as well. This is done via reduction from set cover. Consider an instance of set cover, where we wish to cover all elements in a universe $U$ of $n$ elements using subsets of $U$, with the minimum number of subsets. Construct a graph as follows: Make a vertex for each of the elements in the universe and each of the subsets. Connect a subset vertex to an element vertex if the subset contains that element. Create a long path of $n$ new vertices, and attach one end to every subset vertex. 
The initial set is the added path plus every subset vertex, and the final set is every subset vertex plus every element vertex. To see why this is a reduction, consider the selection of which subset vertex tokens to move. Clearly, we must open up paths to each of the element vertices, and we do so by moving some of the subset vertex tokens. After doing so, each token on the long path must move once. Thus, the optimum cost is equal to the number of selected subsets plus the number of elements (the latter of which is notably a constant for the instance). So we have a polynomial-time reduction from set cover, which is NP-complete, to token reconfiguration. Thus token reconfiguration is also NP-complete on general graphs. Approximation The token reconfiguration problem is APX-complete, meaning that in some sense, it is as hard to approximate as any problem that has a constant-factor approximation algorithm. The reduction is the same one as above, from set cover. However, the set cover problem is restricted to subsets of size at most 3, which is an APX-hard problem. Using exactly the same structure as above, we obtain an L-reduction, as the distance of any solution from optimum is equal between the set cover instance and the transformed token reconfiguration problem. The only change is the addition of the number of elements in the universe. Furthermore, the set cover optimum is at least 1/3 of the number of elements, due to the bounded subset size. Thus, the constants for the L-reduction can be taken to be $\alpha = 4$ and $\beta = 1$. One can, in fact, modify the reduction to work for labeled token reconfiguration as well. To do so, attach a new vertex to each of the subset vertices, which is neither an initial nor desired vertex. Label the vertices on the long path 1 through $n$, and do the same for the element vertices. Now, the solution consists of "moving aside" each chosen subset vertex token, correctly placing the labeled vertices from the path, and returning the subset vertex tokens to the initial locations. Since each chosen subset vertex token now moves twice, this is an L-reduction with $\alpha = 5$ and $\beta = \tfrac{1}{2}$. Calinescu, Dumitrescu, and Pach have also shown that there exists a 3-approximation for unlabeled token reconfiguration, so the problem is in APX as well and thus APX-complete. The proof is much more complicated and omitted here. References NP-complete problems Computational problems in graph theory Approximation algorithms Reconfiguration
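The validity rule for a single move is easy to state in code. The following is a minimal sketch (the function name and adjacency-dict graph representation are illustrative, not from the paper) that checks whether sliding a token from one vertex to another is a single valid move, by searching for a token-free path:

```python
from collections import deque

def is_valid_move(adj, tokens, u, v):
    """Return True if the token on u can move to v in one valid move,
    i.e. some u-v path avoids every other token (the mover excluded)."""
    if u not in tokens or v in tokens:
        return False
    blocked = tokens - {u}           # the moving token does not block itself
    queue, seen = deque([u]), {u}
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen and y not in blocked:
                seen.add(y)
                queue.append(y)
    return False

# A path a-b-c-d with tokens on a and c: the token on a can reach b,
# but not d, because the token on c blocks the only route.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
tokens = {"a", "c"}
assert is_valid_move(adj, tokens, "a", "b")
assert not is_valid_move(adj, tokens, "a", "d")
```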
Token reconfiguration
Mathematics
1,285
2,336,870
https://en.wikipedia.org/wiki/Bo%C3%B6tes%20Void
The Boötes Void (colloquially referred to as the Great Nothing) is an approximately spherical region of space found in the vicinity of the constellation Boötes, containing only 60 galaxies instead of the 2,000 that would be expected from a region this large, hence its name. With a radius of 62 megaparsecs (nearly 330 million light-years across), it is one of the largest voids in the visible universe, and is referred to as a supervoid. It was discovered in 1981 by Robert Kirshner as part of a survey of galactic redshifts. Its centre is located approximately 700 million light-years from Earth. The Hercules Supercluster forms part of the near edge of the void. Origins There are no major apparent inconsistencies between the existence of the Boötes Void and the Lambda-CDM model of cosmological evolution. The Boötes Void is theorized to have formed from the merger of smaller voids, much as soap bubbles coalesce to form larger bubbles. This would account for the small number of galaxies that populate a roughly tube-shaped region running through the middle of the void. Confusion with Barnard 68 The Boötes Void has often been associated with images of Barnard 68, a dark nebula that does not allow light to pass through; however, the images of Barnard 68 are much darker than those observed of the Boötes Void, as the nebula is much closer, there are fewer stars in front of it, and it is a physical mass that blocks light passing through. See also List of largest voids Baryon acoustic oscillations References Sources Voids (astronomy) Boötes
Boötes Void
Astronomy
353
35,919,500
https://en.wikipedia.org/wiki/Clavulinopsis%20amoena
Clavulinopsis amoena is a clavarioid fungus in the family Clavariaceae. It forms slender, cylindrical, golden-yellow fruiting bodies that grow on the ground among plant litter. It was originally described from Indonesia and appears to be distributed in temperate areas of the southern hemisphere. Taxonomy The species was originally described from Java in 1844 by Swiss mycologists Heinrich Zollinger and Alexander Moritzi. In his influential monograph of the clavarioid fungi, English mycologist E.J.H. Corner considered Clavulinopsis amoena to be a globose-spored species of variable colour and form that was widespread in the tropics, particularly in Asia. American mycologist Ronald H. Petersen initially agreed with Corner that C. amoena was a globose-spored species. But Petersen's subsequent study of the type specimen showed that C. amoena had ellipsoid (not globose) spores and was therefore not the same taxon described in earlier works. Petersen considered Clavaria aurantia (described from Australia) and C. luteotenerrima (described from Indonesia) to be synonyms. Despite this, the name C. amoena has continued to be used for a globose-spored species in some more recent taxonomic accounts. Description The fruit body of Clavulinopsis amoena is cylindrical, up to 50 by 2 mm, bright apricot yellow to cadmium yellow, borne on a similarly coloured, cylindrical stipe up to 15 by 1.5 mm. Microscopically, the basidiospores are smooth, hyaline, and ellipsoid, 6 to 7 by 4 to 4.5 μm. Distribution and habitat Confusion over the identification of Clavulinopsis amoena means that its distribution is unclear. The species was initially described from Indonesia, but has also been reported from Australia and New Zealand. Petersen considered that "the taxon seems to be distributed over the Southern Hemisphere, at least in temperate areas." Records of C. amoena from Brazil refer to a different, globose-spored species, as do at least some records from elsewhere in America. The species typically occurs in small clusters on the ground in broadleaf woodland. References Clavariaceae Fungi described in 1844 Fungi of Asia Fungi of Australia Fungi of New Zealand Taxa named by Heinrich Zollinger Taxa named by Alexander Moritzi Fungus species
Clavulinopsis amoena
Biology
514
466,854
https://en.wikipedia.org/wiki/FileVault
FileVault is a disk encryption program in Mac OS X 10.3 Panther (2003) and later. It performs on-the-fly encryption of volumes on Mac computers. Versions and key features FileVault was introduced with Mac OS X 10.3 Panther, and could only be applied to a user's home directory, not the startup volume. The operating system uses an encrypted sparse disk image (a large single file) to present a volume for the home directory. Mac OS X 10.5 Leopard and Mac OS X 10.6 Snow Leopard use more modern sparse bundle disk images which spread the data over 8 MB files (called bands) within a bundle. Apple refers to this original iteration of FileVault as "legacy FileVault". OS X 10.7 Lion and newer versions offer FileVault 2, which is a significant redesign. This encrypts the entire OS X startup volume and typically includes the home directory, abandoning the disk image approach. For this approach to disk encryption, authorised users' information is loaded from a separate non-encrypted boot volume (partition/slice type Apple_Boot). FileVault The original version of FileVault was added in Mac OS X Panther to encrypt a user's home directory. Master passwords and recovery keys When FileVault is enabled, the system invites the user to create a master password for the computer. If a user password is forgotten, the master password or recovery key may be used to decrypt the files instead. The FileVault recovery key is different from a Mac recovery key, which is a 28-character code used to reset your password or regain access to your Apple ID. Migration Migration of FileVault home directories is subject to two limitations: there must be no prior migration to the target computer, and the target must have no existing user accounts. If Migration Assistant has already been used or if there are user accounts on the target, FileVault must be disabled at the source before migration. If transferring FileVault data from a previous Mac that uses 10.4 using the built-in utility to move data to a new machine, the data continues to be stored in the old sparse image format, and the user must turn FileVault off and then on again to re-encrypt it in the new sparse bundle format. Manual encryption Instead of using FileVault to encrypt a user's home directory, a user can create an encrypted disk image themselves using Disk Utility and store any subset of their home directory there. This encrypted image behaves similarly to a FileVault-encrypted home directory, but is under the user's maintenance. Encrypting only a part of a user's home directory might be problematic when applications need access to the encrypted files, which will not be available until the user mounts the encrypted image. This can be mitigated to a certain extent by making symbolic links for these specific files. Limitations and issues Backups Without Mac OS X Server, Time Machine will back up a FileVault home directory only while the user is logged out. In such cases, Time Machine is limited to backing up the home directory in its entirety. Using Mac OS X Server as a Time Machine destination, backups of FileVault home directories occur while users are logged in. Because FileVault restricts the ways in which other users' processes can access the user's content, some third-party backup solutions can back up the contents of a user's FileVault home directory only if other parts of the computer (including other users' home directories) are excluded. Issues Several shortcomings were identified in legacy FileVault. 
Its security can be broken by cracking either 1024-bit RSA or 3DES-EDE. Legacy FileVault used the CBC mode of operation (see disk encryption theory); FileVault 2 uses the stronger XTS-AES mode. Another issue is storage of keys in the macOS "safe sleep" mode. A study published in 2008 found data remanence in dynamic random-access memory (DRAM), with data retention of seconds to minutes at room temperature and much longer times when memory chips were cooled to low temperature. The study authors were able to use a cold boot attack to recover cryptographic keys for several popular disk encryption systems, including FileVault, by taking advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as in key scheduling. The authors recommend that computers be powered down, rather than be left in a "sleep" state, when not in physical control by the owner. Early versions of FileVault automatically stored the user's passphrase in the system keychain, requiring the user to notice and manually disable this security hole. In 2006, following a talk at the 23rd Chaos Communication Congress titled Unlocking FileVault: An Analysis of Apple's Encrypted Disk Storage System, Jacob Appelbaum and Ralf-Philipp Weinmann released VileFault, which decrypts encrypted Mac OS X disk image files. A free-space wipe using Disk Utility left a large portion of previously deleted file remnants intact. Similarly, FileVault compact operations only wiped small parts of previously deleted data. FileVault 2 Security FileVault uses the user's login password as the encryption pass phrase. It uses the XTS-AES mode of AES with 128-bit blocks and a 256-bit key to encrypt the disk, as recommended by NIST. Only unlock-enabled users can start or unlock the drive. Once unlocked, other users may also use the computer until it is shut down. Performance The I/O performance penalty for using FileVault 2 was found to be on the order of 3% when using CPUs with the AES instruction set, such as the Intel Core i series, and OS X 10.10.3 Yosemite. Performance deterioration will be larger for CPUs without this instruction set, such as older Core CPUs. Master passwords and recovery keys When FileVault 2 is enabled while the system is running, the system creates and displays a recovery key for the computer, and optionally offers the user to store the key with Apple. The 120-bit recovery key is encoded with all letters and numbers 1 through 9, and read from /dev/random, and therefore relies on the security of the PRNG used in macOS. During a cryptanalysis in 2012, this mechanism was found to be secure. Changing the recovery key is not possible without re-encrypting the FileVault volume. Validation Users who use FileVault 2 in OS X 10.9 and above can validate that their key correctly works after encryption by running sudo fdesetup validaterecovery in Terminal after encryption has finished. The key must be in the form xxxx-xxxx-xxxx-xxxx-xxxx-xxxx and will return true if correct. Starting the OS with FileVault 2 without a user account If a volume to be used for startup is erased and encrypted before clean installation of OS X 10.7.4 Lion or 10.8 Mountain Lion: there is a password for the volume; the clean system will immediately behave as if FileVault had been enabled after installation; there is no recovery key and no option to store the key with Apple (but the system will behave as if a key was created); when the computer is started, Disk Password will appear at the EfiLoginUI, and this may be used to unlock the volume and start the system; the running system will present the traditional login window. 
Apple describes this type of approach as a Disk Password-based DEK. See also Keychain BitLocker TrueCrypt VeraCrypt Linux Unified Key Setup References MacOS Cryptographic software Disk encryption
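To illustrate the XTS-AES mode that FileVault 2 uses at the sector level, here is a minimal sketch using the third-party Python cryptography package; the key, sector number, and tweak layout are illustrative assumptions, not Apple's actual key-derivation scheme:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)         # 256-bit XTS key, i.e. two AES-128 subkeys
sector = 42                  # hypothetical disk sector index
tweak = sector.to_bytes(16, "little")   # XTS tweak: 16 bytes per sector
plaintext = os.urandom(512)  # one 512-byte disk sector

enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
```

Each sector gets its own tweak, so identical plaintext sectors encrypt to different ciphertexts without any per-sector storage overhead, which is one reason XTS is the NIST-recommended mode for disk encryption.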
FileVault
Mathematics
1,600
34,238,866
https://en.wikipedia.org/wiki/Ceramic%20building%20material
Ceramic building material, often abbreviated to CBM, is an umbrella term used in archaeology to cover all building materials made from baked clay. It is particularly, but not exclusively, used in relation to Roman building materials. It is a useful and necessary term because, especially when initially found in archaeological excavation, it may be difficult to distinguish, for example, fragments of bricks from fragments of roofing or flooring tiles. However, ceramic building materials are usually readily distinguishable from fragments of ceramic pottery by their rougher finish. See also Further reading External links Current Archaeology Archaeological Ceramic Building Materials Group South Oxfordshire Archaeological Group Ceramic Building Material Recording (Introduction) Archaeological artefact types Ceramic materials Soil-based building materials Bricks
Ceramic building material
Engineering
143
99,603
https://en.wikipedia.org/wiki/Wrought%20iron
Wrought iron is an iron alloy with a very low carbon content (less than 0.05%) in contrast to that of cast iron (2.1% to 4.5%). It is a semi-fused mass of iron with fibrous slag inclusions (up to 2% by weight), which give it a wood-like "grain" that is visible when it is etched, rusted, or bent to failure. Wrought iron is tough, malleable, ductile, corrosion resistant, and easily forge welded, but is more difficult to weld electrically. Before the development of effective methods of steelmaking and the availability of large quantities of steel, wrought iron was the most common form of malleable iron. It was given the name wrought because it was hammered, rolled, or otherwise worked while hot enough to expel molten slag. The modern functional equivalent of wrought iron is mild steel, also called low-carbon steel. Neither wrought iron nor mild steel contains enough carbon to be hardened by heating and quenching. Wrought iron is highly refined, with a small amount of silicate slag forged out into fibers. It comprises around 99.4% iron by mass. The presence of slag can be beneficial for blacksmithing operations, such as forge welding, since the silicate inclusions act as a flux and give the material its unique, fibrous structure. The silicate filaments in the slag also protect the iron from corrosion and diminish the effect of fatigue caused by shock and vibration. Historically, a modest amount of wrought iron was refined into steel, which was used mainly to produce swords, cutlery, chisels, axes, and other edged tools, as well as springs and files. The demand for wrought iron reached its peak in the 1860s, driven by ironclad warships and railway use. However, as the properties of mild steel, such as its brittleness, improved with better ferrous metallurgy, and as steel became less costly to make thanks to the Bessemer process and the Siemens–Martin process, the use of wrought iron declined. Many items, before they came to be made of mild steel, were produced from wrought iron, including rivets, nails, wire, chains, rails, railway couplings, water and steam pipes, nuts, bolts, horseshoes, handrails, wagon tires, straps for timber roof trusses, and ornamental ironwork, among many other things. Wrought iron is no longer produced on a commercial scale. Many products described as wrought iron, such as guard rails, garden furniture, and gates, are made of mild steel. They are described as "wrought iron" only because they have been made to resemble objects which in the past were wrought (worked) by hand by a blacksmith (although many decorative iron objects, including fences and gates, were often cast rather than wrought). Terminology The word "wrought" is an archaic past participle of the verb "to work", and so "wrought iron" literally means "worked iron". Wrought iron is a general term for the commodity, but is also used more specifically for finished iron goods, as manufactured by a blacksmith. It was used in that narrower sense in British Customs records, where such manufactured iron was subject to a higher rate of duty than what might be called "unwrought" iron. Cast iron, unlike wrought iron, is brittle and cannot be worked either hot or cold. In the 17th, 18th, and 19th centuries, wrought iron went by a wide variety of terms according to its form, origin, or quality. While the bloomery process produced wrought iron directly from ore, cast iron or pig iron were the starting materials used in the finery forge and puddling furnace. 
Pig iron and cast iron have higher carbon content than wrought iron, but have a lower melting point than iron or steel. Cast and especially pig iron have excess slag which must be at least partially removed to produce quality wrought iron. At foundries it was common to blend scrap wrought iron with cast iron to improve the physical properties of castings. For several years after the introduction of Bessemer and open hearth steel, there were different opinions as to what differentiated iron from steel; some believed it was the chemical composition and others that it was whether the iron heated sufficiently to melt and "fuse". Fusion eventually became generally accepted as relatively more important than composition below a given low carbon concentration. Another difference is that steel can be hardened by heat treating. Historically, wrought iron was known as "commercially pure iron"; however, it no longer qualifies because current standards for commercially pure iron require a carbon content of less than 0.008 wt%. Types and shapes Bar iron is a generic term sometimes used to distinguish wrought iron from cast iron. It is the equivalent of an ingot of cast metal, in a convenient form for handling, storage, shipping and further working into a finished product. The bars were the usual product of the finery forge, but not necessarily made by that process: Rod iron—cut from flat bar iron in a slitting mill, it provided the raw material for spikes and nails. Hoop iron—suitable for the hoops of barrels, made by passing rod iron through rolling dies. Plate iron—sheets suitable for use as boiler plate. Blackplate—sheets, perhaps thinner than plate iron, from the black rolling stage of tinplate production. Voyage iron—narrow flat bar iron, made or cut into bars of a particular weight, a commodity for sale in Africa for the Atlantic slave trade. The number of bars per ton gradually increased from 70 per ton in the 1660s to 75–80 per ton in 1685 and "near 92 to the ton" in 1731. Origin Charcoal iron—until the end of the 18th century, wrought iron was smelted from ore using charcoal, by the bloomery process. Wrought iron was also produced from pig iron using a finery forge or in a Lancashire hearth. The resulting metal was highly variable, both in chemistry and slag content. Puddled iron—the puddling process was the first large-scale process to produce wrought iron. In the puddling process, pig iron is refined in a reverberatory furnace to prevent contamination of the iron from the sulfur in the coal or coke. The molten pig iron is manually stirred, exposing the iron to atmospheric oxygen, which decarburizes the iron. As the iron is stirred, globs of wrought iron are collected into balls by the stirring rod (rabble arm or rod) and those are periodically removed by the puddler. Puddling was patented in 1784 and became widely used after 1800. By 1876, annual production of puddled iron in the UK alone was over 4 million tons. Around that time, the open hearth furnace was able to produce steel of suitable quality for structural purposes, and wrought iron production went into decline. Oregrounds iron—a particularly pure grade of bar iron made ultimately from iron ore from the Dannemora mine in Sweden. Its most important use was as the raw material for the cementation process of steelmaking. Danks iron—originally iron imported to Great Britain from Gdańsk, but in the 18th century more probably the kind of iron (from eastern Sweden) that once came from Gdańsk. 
Forest iron—iron from the English Forest of Dean, where haematite ore enabled tough iron to be produced. Lukes iron—iron imported from Liège, whose Dutch name is "Luik". Ames iron or amys iron—another variety of iron imported to England from northern Europe. Its origin has been suggested to be Amiens, but it seems to have been imported from Flanders in the 15th century and Holland later, suggesting an origin in the Rhine valley. Its origins remain controversial. Botolf iron or Boutall iron—from Bytów (Polish Pomerania) or Bytom (Polish Silesia). Sable iron (or Old Sable)—iron bearing the mark (a sable) of the Demidov family of Russian ironmasters, one of the better brands of Russian iron. Quality Tough iron (also spelled "tuf") is not brittle and is strong enough to be used for tools. Blend iron was made using a mixture of different types of pig iron. Best iron was iron put through several stages of piling and rolling to reach the stage regarded (in the 19th century) as the best quality. Marked bar iron was made by members of the Marked Bar Association and marked with the maker's brand mark as a sign of its quality. Defects Wrought iron is a form of commercial iron containing less than 0.10% carbon, less than 0.25% total impurities of sulfur, phosphorus, silicon and manganese, and less than 2% slag by weight. Wrought iron is redshort or hot short if it contains sulfur in excess quantity. It has sufficient tenacity when cold, but cracks when bent or finished at a red heat. Hot short iron was considered unmarketable. Cold short iron, also known as coldshear or colshire, contains excessive phosphorus. It is very brittle when cold and cracks if bent. It may, however, be worked at high temperature. Historically, coldshort iron was considered sufficient for nails. Phosphorus is not necessarily detrimental to iron. Ancient Near Eastern smiths did not add lime to their furnaces. The absence of calcium oxide in the slag, and the deliberate use of wood with high phosphorus content during the smelting, induces a higher phosphorus content (typically <0.3%) than in modern iron (<0.02–0.03%). Analysis of the Iron Pillar of Delhi gives 0.11% in the iron. The included slag in wrought iron also imparts corrosion resistance. Antique music wire, manufactured at a time when mass-produced carbon steels were available, was found to have low carbon and high phosphorus; iron with high phosphorus content, normally causing brittleness when worked cold, was easily drawn into music wire. Although at the time phosphorus was not an easily identified component of iron, it was hypothesized that the type of iron had been rejected for conversion to steel but excelled when tested for drawing ability. History China During the Han dynasty (202 BC – 220 AD), new iron smelting processes led to the manufacture of new wrought iron implements for use in agriculture, such as the multi-tube seed drill and iron plough. In addition to accidental lumps of low-carbon wrought iron produced by excessive injected air in ancient Chinese cupola furnaces, the ancient Chinese created wrought iron by using the finery forge at least by the 2nd century BC; the earliest specimens of cast and pig iron fined into wrought iron and steel were found at the early Han dynasty site at Tieshengguo. Pigott speculates that the finery forge existed in the previous Warring States period (403–221 BC), because there are wrought iron items from China dating to that period and there is no documented evidence of the bloomery ever being used in China. 
The fining process involved liquefying cast iron in a fining hearth and removing carbon from the molten cast iron through oxidation. Wagner writes that in addition to the Han dynasty hearths believed to be fining hearths, there is also pictorial evidence of the fining hearth from a Shandong tomb mural dated 1st to 2nd century AD, as well as a hint of written evidence in the 4th century AD Daoist text Taiping Jing. Western world Wrought iron has been used for many centuries, and is the "iron" that is referred to throughout Western history. The other form of iron, cast iron, had been in use in China since ancient times but was not introduced into Western Europe until the 15th century; even then, due to its brittleness, it could be used for only a limited number of purposes. Throughout much of the Middle Ages, iron was produced by the direct reduction of ore in manually operated bloomeries, although water power had begun to be employed by 1104. The raw material produced by all indirect processes is pig iron. It has a high carbon content and, as a consequence, it is brittle and cannot be used to make hardware. The osmond process was the first of the indirect processes, developed by 1203, but bloomery production continued in many places. The process depended on the development of the blast furnace, of which medieval examples have been discovered at Lapphyttan, Sweden and in Germany. The bloomery and osmond processes were gradually replaced from the 15th century by finery processes, of which there were two versions, the German and Walloon. They were in turn replaced from the late 18th century by puddling, with certain variants such as the Swedish Lancashire process. Those, too, are now obsolete, and wrought iron is no longer manufactured commercially. Bloomery process Wrought iron was originally produced by a variety of smelting processes, all described today as "bloomeries". Different forms of bloomery were used at different places and times. The bloomery was charged with charcoal and iron ore and then lit. Air was blown in through a tuyere to heat the bloomery to a temperature somewhat below the melting point of iron. In the course of the smelt, slag would melt and run out, and carbon monoxide from the charcoal would reduce the ore to iron, which formed a spongy mass (called a "bloom") containing iron and also molten silicate minerals (slag) from the ore. The iron remained in the solid state. If the bloomery were allowed to become hot enough to melt the iron, carbon would dissolve into it and form pig or cast iron, but that was not the intention. However, the design of a bloomery made it difficult to reach the melting point of iron and also prevented the concentration of carbon monoxide from becoming high. After smelting was complete, the bloom was removed, and the process could then be started again. It was thus a batch process, rather than a continuous one such as a blast furnace. The bloom had to be forged mechanically to consolidate it and shape it into a bar, expelling slag in the process. During the Middle Ages, water power was applied to the process, probably initially for powering bellows, and only later to hammers for forging the blooms. However, while it is certain that water power was used, the details remain uncertain. That was the culmination of the direct process of ironmaking. 
It survived in Spain and southern France as Catalan forges until the mid-19th century, in Austria as the stuckofen until 1775, and near Garstang in England until about 1770; it was still in use with hot blast in New York in the 1880s. In Japan the last of the old tatara bloomeries used in production of traditional tamahagane steel, mainly used in swordmaking, was extinguished only in 1925, though in the late 20th century production resumed on a small scale to supply the steel to artisan swordmakers. Osmond process Osmond iron consisted of balls of wrought iron, produced by melting pig iron and catching the droplets on a staff, which was spun in front of a blast of air so as to expose as much of it as possible to the air and oxidise its carbon content. The resultant ball was often forged into bar iron in a hammer mill. Finery process In the 15th century, the blast furnace spread into what is now Belgium, where it was improved. From there, it spread via the Pays de Bray on the boundary of Normandy and then to the Weald in England. With it, the finery forge spread. Those remelted the pig iron and (in effect) burnt out the carbon, producing a bloom, which was then forged into bar iron. If rod iron was required, a slitting mill was used. The finery process existed in two slightly different forms. In Great Britain, France, and parts of Sweden, only the Walloon process was used. That employed two different hearths, a finery hearth for finishing the iron and a chafery hearth for reheating it in the course of drawing the bloom out into a bar. The finery always burnt charcoal, but the chafery could be fired with mineral coal, since its impurities would not harm the iron when it was in the solid state. On the other hand, the German process, used in Germany, Russia, and most of Sweden, used a single hearth for all stages. The introduction of coke for use in the blast furnace by Abraham Darby in 1709 (or perhaps others a little earlier) initially had little effect on wrought iron production. Only in the 1750s was coke pig iron used on any significant scale as the feedstock of finery forges. However, charcoal continued to be the fuel for the finery. Potting and stamping From the late 1750s, ironmasters began to develop processes for making bar iron without charcoal. There were a number of patented processes for that, which are referred to today as potting and stamping. The earliest were developed by John Wood of Wednesbury and his brother Charles Wood of Low Mill at Egremont, patented in 1763. Another was developed for the Coalbrookdale Company by the Cranage brothers. Another important one was that of John Wright and Joseph Jesson of West Bromwich. Puddling process A number of processes for making wrought iron without charcoal were devised as the Industrial Revolution began during the latter half of the 18th century. The most successful of those was puddling, using a puddling furnace (a variety of the reverberatory furnace), which was invented by Henry Cort in 1784. It was later improved by others, including Joseph Hall, who was the first to add iron oxide to the charge. In that type of furnace, the metal does not come into contact with the fuel, and so is not contaminated by its impurities. The heat of the combustion products passes over the surface of the puddle, and the roof of the furnace reverberates (reflects) the heat onto the metal puddle on the fire bridge of the furnace. 
Unless the raw material used was white cast iron, the pig iron or other raw product of the puddling first had to be refined into refined iron, or finers metal. That would be done in a refinery where raw coal was used to remove silicon and to convert carbon within the raw material, found in the form of graphite, to a combination with iron called cementite. In the fully developed process (of Hall), this metal was placed into the hearth of the puddling furnace, where it was melted. The hearth was lined with oxidizing agents such as haematite and iron oxide. The mixture was subjected to a strong current of air and stirred with long bars, called puddling bars or rabbles, through working doors. The air, the stirring, and the "boiling" action of the metal helped the oxidizing agents to oxidize the impurities and carbon out of the pig iron. As the impurities oxidized, they formed a molten slag or drifted off as gas, while the remaining iron solidified into spongy wrought iron that floated to the top of the puddle and was fished out of the melt as puddle balls, using puddle bars. Shingling There was still some slag left in the puddle balls, so while they were still hot they would be shingled to remove the remaining slag and cinder. That was achieved by forging the balls under a hammer, or by squeezing the bloom in a machine. The material obtained at the end of shingling is known as bloom. The blooms are not useful in that form, so they were rolled into a final product. Sometimes European ironworks would skip the shingling process completely and roll the puddle balls. The only drawback to that is that the edges of the rough bars were not as well compressed. When the rough bar was reheated, the edges might separate and be lost into the furnace. Rolling The bloom was passed through rollers to produce bars. The bars of wrought iron were of poor quality, called muck bars or puddle bars. To improve their quality, the bars were cut up, piled and tied together by wires, a process known as faggoting or piling. They were then reheated to a welding state, forge welded, and rolled again into bars. The process could be repeated several times to produce wrought iron of desired quality. Wrought iron that has been rolled multiple times is called merchant bar or merchant iron. Lancashire process The advantage of puddling was that it used coal, not charcoal, as fuel. However, that was of little advantage in Sweden, which lacked coal. Gustaf Ekman observed charcoal fineries at Ulverston, which were quite different from any in Sweden. After his return to Sweden in the 1830s, he experimented and developed a process similar to puddling but using firewood and charcoal, which was widely adopted in the Bergslagen in the following decades. Aston process In 1925, James Aston of the United States developed a process for manufacturing wrought iron quickly and economically. It involved taking molten steel from a Bessemer converter and pouring it into cooler liquid slag. The temperature of the steel is about 1500 °C and the liquid slag is maintained at approximately 1200 °C. The molten steel contains a large amount of dissolved gases, so when the liquid steel hit the cooler surfaces of the liquid slag the gases were liberated. The molten steel then froze to yield a spongy mass having a temperature of about 1370 °C. The spongy mass would then be finished by being shingled and rolled as described under puddling (above). Three to four tons could be converted per batch with the method. 
Decline Steel began to replace iron for railroad rails as soon as the Bessemer process for its manufacture was adopted (1865 on). Iron remained dominant for structural applications until the 1880s, because of problems with brittle steel, caused by introduced nitrogen, high carbon, excess phosphorus, or excessive temperature during or too-rapid rolling. By 1890 steel had largely replaced iron for structural applications. Sheet iron (Armco 99.97% pure iron) had good properties for use in appliances, being well-suited for enamelling and welding, and being rust-resistant. In the 1960s, the cost of steel production was dropping due to recycling, and even using the Aston process, wrought iron production was labor-intensive. It has been estimated that the production of wrought iron is approximately twice as expensive as that of low-carbon steel. In the United States, the last plant closed in 1969. The last in the world was the Atlas Forge of Thomas Walmsley and Sons in Bolton, Great Britain, which closed in 1973. Its 1860s-era equipment was moved to the Blists Hill site of Ironbridge Gorge Museum for preservation. Some wrought iron is still being produced for heritage restoration purposes, but only by recycling scrap. Properties The slag inclusions, or stringers, in wrought iron give it properties not found in other forms of ferrous metal. There are approximately 250,000 inclusions per square inch. A fresh fracture shows a clear bluish color with a high silky luster and fibrous appearance. Wrought iron lacks the carbon content necessary for hardening through heat treatment, but in areas where steel was uncommon or unknown, tools were sometimes cold-worked (hence cold iron) to harden them. An advantage of its low carbon content is its excellent weldability. Furthermore, sheet wrought iron cannot bend as much as steel sheet metal when cold worked. Wrought iron can be melted and cast; however, the product is no longer wrought iron, since the slag stringers characteristic of wrought iron disappear on melting, so the product resembles impure, cast Bessemer steel. There is no engineering advantage to melting and casting wrought iron, as compared to using cast iron or steel, both of which are cheaper. Due to the variations in iron ore origin and iron manufacture, wrought iron can be inferior or superior in corrosion resistance compared to other iron alloys. There are many mechanisms behind its corrosion resistance. Chilton and Evans found that nickel enrichment bands reduce corrosion. They also found that in puddled, forged, and piled iron, the working-over of the metal spread out copper, nickel, and tin impurities that produce electrochemical conditions that slow down corrosion. The slag inclusions have been shown to disperse corrosion to an even film, enabling the iron to resist pitting. Another study has shown that slag inclusions are pathways to corrosion. Other studies show that sulfur in the wrought iron decreases corrosion resistance, while phosphorus increases corrosion resistance. Chloride ions also decrease wrought iron's corrosion resistance. Wrought iron may be welded in the same manner as mild steel, but the presence of oxide or inclusions will give defective results. The material has a rough surface, so it can hold platings and coatings better than smooth steel. For instance, a galvanic zinc finish applied to wrought iron is approximately 25–40% thicker than the same finish on steel. In Table 1, the chemical composition of wrought iron is compared to that of pig iron and carbon steel. 
Although it appears that wrought iron and plain carbon steel have similar chemical compositions, that is deceptive. Most of the manganese, sulfur, phosphorus, and silicon in the wrought iron are incorporated into the slag fibers, making wrought iron purer than plain carbon steel. Amongst its other properties, wrought iron becomes soft at red heat and can be easily forged and forge welded. It can be used to form temporary magnets, but it cannot be magnetized permanently, and it is ductile, malleable, and tough. Ductility For most purposes, ductility rather than tensile strength is a more important measure of the quality of wrought iron. In tensile testing, the best irons are able to undergo considerable elongation before failure. Higher tensile wrought iron is brittle. Because of the large number of boiler explosions on steamboats in the early 1800s, the U.S. Congress passed legislation in 1830 which approved funds for correcting the problem. The Treasury awarded a $1,500 contract to the Franklin Institute to conduct a study. As part of the study, Walter R. Johnson and Benjamin Reeves conducted strength tests on boiler iron using a tester they had built in 1832 based on a design by Lagerhjelm in Sweden. Because of misunderstandings about tensile strength and ductility, their work did little to reduce failures. The importance of ductility was recognized by some very early in the development of tube boilers, as evidenced by a comment from Thurston. Various 19th century investigations of boiler explosions, especially those by insurance companies, found the causes to be most commonly the result of operating boilers above the safe pressure range, either to get more power, or due to defective boiler pressure relief valves and difficulties of obtaining reliable indications of pressure and water levels. Poor fabrication was also a common problem. Also, the thickness of the iron in steam drums was low by modern standards. By the late 19th century, when metallurgists were able to better understand what properties and processes made good iron, iron in steam engines was being displaced by steel. Also, the old cylindrical boilers with fire tubes were displaced by water tube boilers, which are inherently safer. Purity In 2010, Gerry McDonnell demonstrated by analysis in England that a wrought iron bloom, from a traditional smelt, could be worked into 99.7% pure iron with no evidence of carbon. It was found that the stringers common to other wrought irons were not present, thus making it very malleable for the smith to work hot and cold. A commercial source of pure iron is available and is used by smiths as an alternative to traditional wrought iron and other new generation ferrous metals. Applications Wrought iron furniture has a long history, dating back to Roman times. There are 13th century wrought iron gates in Westminster Abbey in London, and wrought iron furniture seemed to reach its peak popularity in Britain in the 17th century, during the reign of William III and Mary II. However, cast iron and cheaper steel caused a gradual decline in wrought iron manufacture; the last wrought ironworks in Britain closed in 1974. It is also used to make home decor items such as baker's racks, wine racks, pot racks, etageres, table bases, desks, gates, beds, candle holders, curtain rods, bars, and bar stools. The vast majority of wrought iron available today is from reclaimed materials. Old bridges and anchor chains dredged from harbors are major sources. 
The greater corrosion resistance of wrought iron is due to the siliceous impurities (naturally occurring in iron ore), namely ferrous silicate. Wrought iron has been used for decades as a generic term across the gate and fencing industry, even though mild steel is used for manufacturing these "wrought iron" gates. This is mainly because of the limited availability of true wrought iron. Steel can also be hot-dip galvanised to prevent corrosion, which cannot be done with wrought iron. See also Bronze and brass ornamental work Cast iron Semi-steel casting Notes References Further reading External links Architectural elements Building materials Chinese inventions Ferrous alloys Han dynasty Iron Ironmongery Metalworking
Wrought iron
Physics,Chemistry,Technology,Engineering
5,967
18,960,025
https://en.wikipedia.org/wiki/Mountain%20West%20Energy
Mountain West Energy, LLC is an American unconventional oil recovery technology research and development company based in Orem, Utah. It is a developer of the In-situ Vapor Extraction Technology, an in-situ shale oil extraction technology. The company owns oil shale leases in the Uintah Basin, Uintah County, Utah. In 2008, Mountain West Energy won the Clean Technology and Energy Utah Innovation Award. Technology Mountain West Energy has proposed an experimental technology for in-situ shale oil extraction called In-Situ Vapor Extraction. The company claims its technology would also be suitable for enhanced oil recovery and for extraction of heavy crude oil and oil sands. For conversion of the kerogen in oil shale into shale oil, the company proposes using a high temperature gas, injected through an injection well. In the oil shale formation, the gas would cause pyrolysis, releasing shale oil vapors. These vapors would be brought to the surface through an extraction well. In 2009, Mountain West Energy concluded an exclusive agreement with San Leon Energy granting San Leon rights to the technology for a three-year pilot project on the Tarfaya oil shale deposit of Morocco. San Leon signed a memorandum of understanding with the National Office of Hydrocarbons and Mines of Morocco on the Tarfaya oil shale deposit in May 2009. References External links Company website Oil companies of the United States Oil shale companies of the United States Bituminous sands Companies based in Utah 2005 establishments in Utah
Mountain West Energy
Chemistry
293
57,851,551
https://en.wikipedia.org/wiki/Louisa%20Elizabeth%20Allen
Louisa Elizabeth Allen (born 1972) is a New Zealand sex education academic. She is currently a full professor at the University of Auckland. Academic career After completing a PhD at the University of Cambridge in 2000, with a thesis titled 'Exploring relationships': a study of young people's (hetero)sexual subjectivities, knowledge and practices, she moved to the University of Auckland, rising to full professor. Selected works Allen, Louisa. "Girls want sex, boys want love: Resisting dominant discourses of (hetero) sexuality." Sexualities 6, no. 2 (2003): 215–236. Allen, Louisa. Sexual subjects: Young people, sexuality and education. Springer, 2005. Allen, Louisa. "Beyond the birds and the bees: Constituting a discourse of erotics in sexuality education." Gender and Education 16, no. 2 (2004): 151–167. Allen, Louisa. "‘Say everything’: Exploring young people's suggestions for improving sexuality education." Sex Education 5, no. 4 (2005): 389–404. Allen, Louisa. "Closing sex education's knowledge/practice gap: the reconceptualisation of young people's sexual knowledge." Sex Education: Sexuality, Society and Learning 1, no. 2 (2001): 109–122. Allen, Louisa. "Managing masculinity: Young men's identity work in focus groups." Qualitative Research 5, no. 1 (2005): 35–57. References External links Living people 1972 births Academic staff of the University of Auckland New Zealand women academics Alumni of the University of Cambridge Sex educators New Zealand educational theorists 21st-century New Zealand women writers
Louisa Elizabeth Allen
Biology
346
54,288,469
https://en.wikipedia.org/wiki/Template-guided%20self-assembly
Template-guided self-assembly is a versatile fabrication process that can arrange various micrometer- to nanometer-sized particles into a lithographically created template with defined patterns. The process consists of the following four steps. Create Template The "template" can be created by either photolithography or e-beam lithography to define binding sites for various building blocks. The binding sites should reflect the footprint of the building blocks or clusters to be bound. Surface Treatment After film development, the created pattern is treated with charged polymers in order to "stick" the particles. Taking poly-lysine as an example: the poly-lysine covers the negatively charged glass surface and reverses its charge to positive; it thus can non-specifically bind negatively charged metallic nanoparticles. Particle Assembly To do particle assembly, the treated pattern is submerged in a small amount of an aqueous solution of particles. A few approaches can be used to improve the binding efficiency. One of them is to use the capillary force at the edge of the aqueous droplet to "push" the particles into the binding sites. If assembling multiple types of particles, the particles should be assembled in order of decreasing size. For example, if assembling both 60 nm gold nanoparticles and 40 nm silver nanoparticles, the 60 nm gold nanoparticles should be applied first because they are too big to enter binding sites tailored for 40 nm particles. Rationally designing the binding sequence as well as the binding site sizes minimizes the binding errors that can occur. Remove Template After binding of all building blocks, the template can be removed by either dissolving it in an organic solvent or stripping it off with Scotch tape. References Microtechnology
Template-guided self-assembly
Materials_science,Engineering
360
1,562,061
https://en.wikipedia.org/wiki/Video%20Privacy%20Protection%20Act
The Video Privacy Protection Act (VPPA) is a bill that was passed by the United States Congress in 1988 and signed into law by President Ronald Reagan. It was created to prevent what it refers to as "wrongful disclosure of video tape rental or sale records" or similar audio visual materials, to cover items such as video games. Congress passed the VPPA after Robert Bork's video rental history was published during his Supreme Court nomination, and it became known as the "Bork bill". It makes any "video tape service provider" that discloses rental information outside the ordinary course of business liable for up to $2,500 in actual damages, unless the consumer has consented, the consumer had the opportunity to consent, or the data was subject to a court order or warrant. In 2013, the law was amended to add provisions allowing consumers to electronically consent to sharing video rental histories and to extend the time that consent can last to up to two years. The law became a focus of attention in the legal industry once again in the twenty-first century with the rise of audiovisual content sharing through digital media. Its revival is part of a trend in the filing of consumer privacy class actions, both through new laws like the California Consumer Privacy Act and older laws like the VPPA and wiretapping statutes. Computer-based VPPA litigation Toward the end of the 2010s and beginning of the 2020s, the 1988 law experienced a resurgence in consumer class action lawsuits. The numerous lawsuits filed as part of this trend alleged that companies violated the VPPA by collecting and disclosing consumers' video viewing history through their websites, mobile apps, and other smart devices. While the language of the VPPA focuses on "video tape service providers," consumers have argued that the law also protects the privacy of their personal information that is collected while they watch audiovisual content online. Cookies and other website behavior tracking technologies commonly found on popular websites allow the website operators to connect visitors' browsers with third parties who collect information from their website visit. This information can be shared with the third parties for various purposes, including website functionality, language preferences and other personalization, and third-party advertising. The recent resurgence of VPPA lawsuits is premised on the idea that data collected through the various tracking technologies may include personal information protected by the VPPA. Consumer plaintiffs assert that if that information is shared with third parties for analytics, advertising, or any other purpose that falls outside the exceptions in the VPPA, it is unlawful. Prior to 2007, the VPPA had not been cited by privacy attorneys as a cause of action involving electronic computing devices. Early lawsuits raising the VPPA in the context of data shared through the internet included a 2008 lawsuit against Facebook and thirty-three companies, including Blockbuster, Zappos, and Overstock.com, as well as the Lane v. Facebook, Inc. class action lawsuit, involving alleged privacy violations caused by the Facebook Beacon program. The online advertising industry, in association with analytics companies, increasingly used video-based ads and at the same time gathered data from webpages and smart TVs showing digital video. 
By tracking web traffic online, consumers and their attorneys gather evidence of the data being collected by third parties through cookies and other tracking technologies when a person visits a website. Consumers use that traffic analysis to determine whether their protected personal information has been shared with third parties when they visited a particular website. For example, attorneys use software applications to log HTTP/HTTPS traffic between a computer's web browser and the Internet to produce evidence of tracking activities. This approach led to a $9.5 million settlement in the Lane v. Facebook, Inc. case. 2013 Amendments Following VPPA litigation against Netflix and other digital media industry giants, in January 2013, President Barack Obama signed into law H.R. 6671 amending the VPPA. The amendments allow video rental companies to share rental information on social networking sites after obtaining customer permission. Netflix, which had expressed concerns about violating the VPPA with its increasingly social video viewing services, reportedly lobbied for the change. Netflix cited the VPPA in 2011 following the announcement of its global integration with Facebook. The company noted that the VPPA was the sole reason why the new feature was not immediately available in the United States, and encouraged its customers to contact their representatives in support of legislation that would clarify the language of the law. In 2012, Netflix changed its privacy rules so that it no longer retained records for people who have left the site, a change that was reported to have been inspired by VPPA litigation. Further results of VPPA litigation after the passage of these amendments were initially mixed. In 2015, the United States Court of Appeals for the Eleventh Circuit found that the law's protections do not reach the users of a free Android app, even when the app assigns each user a unique identification number and shares user behavior with a third party data analytics company. References 1988 in American law United States federal privacy legislation Computer law
Video Privacy Protection Act
Technology
1,008
14,966,101
https://en.wikipedia.org/wiki/Coh-Metrix
Coh-Metrix is a computational tool that produces indices of the linguistic and discourse representations of a text. Developed by Arthur C. Graesser and Danielle S. McNamara, Coh-Metrix analyzes texts on many different features. Measurements Coh-Metrix can be used in many different ways to investigate the cohesion of the explicit text and the coherence of the mental representation of the text. "Our definition of cohesion consists of characteristics of the explicit text that play some role in helping the reader mentally connect ideas in the text" (Graesser, McNamara, & Louwerse, 2003). The definition of coherence is the subject of much debate. Theoretically, the coherence of a text is defined by the interaction between linguistic representations and knowledge representations. Coherence can be regarded as characteristics of the text (i.e., aspects of cohesion) that are likely to contribute to the coherence of the mental representation, and Coh-Metrix measurements provide indices of these cohesion characteristics. According to an empirical study, the Coh-Metrix L2 Reading Index performs significantly better than traditional readability formulas. See also L2 Syntactic Complexity Analyzer References External links An exposition of the report. Memphis.edu. Includes many detailed concepts under "discourse coherence" and "linguistic cohesion". Reading (process) Computational linguistics Applied linguistics Second language writing
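To make concrete what a cohesion index can look like, the following minimal Python sketch computes one crude referential-cohesion measure: the proportion of adjacent sentence pairs that share at least one content word. It is purely illustrative and is not part of Coh-Metrix, whose indices are far more sophisticated; the stopword list and tokenization here are ad hoc assumptions.

import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def sentence_overlap_index(text: str) -> float:
    # Split into sentences, keep only lowercase content words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [{w for w in re.findall(r"[a-z]+", s.lower())} - STOPWORDS
             for s in sentences]
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    # Fraction of adjacent sentence pairs that share a content word.
    return sum(1 for a, b in pairs if a & b) / len(pairs)

print(sentence_overlap_index("The cat sat. The cat slept. Dogs bark."))  # 0.5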
Coh-Metrix
Technology
293
26,703,335
https://en.wikipedia.org/wiki/Eulerian%20poset
In combinatorial mathematics, an Eulerian poset is a graded poset in which every nontrivial interval has the same number of elements of even rank as of odd rank. An Eulerian poset which is a lattice is an Eulerian lattice. These objects are named after Leonhard Euler. Eulerian lattices generalize face lattices of convex polytopes, and much recent research has been devoted to extending known results from polyhedral combinatorics, such as various restrictions on f-vectors of convex simplicial polytopes, to this more general setting. Examples The face lattice of a convex polytope, consisting of its faces, together with the smallest element, the empty face, and the largest element, the polytope itself, is an Eulerian lattice. The odd–even condition follows from Euler's formula. Any simplicial generalized homology sphere is an Eulerian lattice. Let L be a regular cell complex such that |L| is a manifold with the same Euler characteristic as the sphere of the same dimension (this condition is vacuous if the dimension is odd). Then the poset of cells of L, ordered by the inclusion of their closures, is Eulerian. Let W be a Coxeter group with Bruhat order. Then (W,≤) is an Eulerian poset. Properties The defining condition of an Eulerian poset P can be equivalently stated in terms of its Möbius function: μP(x,y) = (−1)^(ρ(y)−ρ(x)) for all x ≤ y, where ρ denotes the rank function. The dual of an Eulerian poset with a top element, obtained by reversing the partial order, is Eulerian. Richard Stanley defined the toric h-vector of a ranked poset, which generalizes the h-vector of a simplicial polytope. He proved that the Dehn–Sommerville equations hold for an arbitrary Eulerian poset of rank d + 1. However, for an Eulerian poset arising from a regular cell complex or a convex polytope, the toric h-vector neither determines, nor is determined by, the numbers of the cells or faces of different dimension, and the toric h-vector does not have a direct combinatorial interpretation. Notes References Richard P. Stanley, Enumerative Combinatorics, Volume 1, first edition. Cambridge University Press, 1997. See also Abstract polytope Star product, a method for combining posets while preserving the Eulerian property Algebraic combinatorics
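As a small illustration of the defining condition (not drawn from the article itself), the following Python sketch builds the face lattice of a triangle, ordered by inclusion, and verifies that its Möbius function satisfies μ(x,y) = (−1)^(ρ(y)−ρ(x)) on every interval:

from itertools import combinations
from functools import lru_cache

VERTICES = (0, 1, 2)
# Faces of the 2-simplex: all subsets of the vertex set, empty face included.
FACES = [frozenset(c) for r in range(4) for c in combinations(VERTICES, r)]

def rank(x):
    return len(x)  # rank of a face = number of its vertices

@lru_cache(maxsize=None)
def mobius(x, y):
    # Standard recursion: mu(x, x) = 1, mu(x, y) = -sum of mu(x, z) over x <= z < y.
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in FACES if x <= z < y)

assert all(mobius(x, y) == (-1) ** (rank(y) - rank(x))
           for x in FACES for y in FACES if x <= y)
print("The face lattice of the triangle is Eulerian.")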
Eulerian poset
Mathematics
515
40,329,058
https://en.wikipedia.org/wiki/Desulfovibrio%20marrakechensis
Desulfovibrio marrakechensis is a bacterium. It is sulfate-reducing and tyrosol-oxidising. Its cells are mesophilic, non-spore-forming, non-motile, Gram-negative, catalase-positive and straight-rod-shaped. They contain cytochrome c(3) and desulfoviridin. The type strain is EMSSDQ(4)(T) (=DSM 19337(T) =ATCC BAA-1562(T)). References Further reading Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (1989): 2250–2251. Bélaich, Jean-Pierre, Mireille Bruschi, and Jean-Louis Garcia, eds. Microbiology and Biochemistry of Strict Anaerobes Involved in Interspecies Hydrogen Transfer. No. 54. Springer, 1990. External links LPSN Type strain of Desulfovibrio marrakechensis at BacDive - the Bacterial Diversity Metadatabase Desulfovibrio Bacteria described in 2009
Desulfovibrio marrakechensis
Biology
248
47,076,907
https://en.wikipedia.org/wiki/Carrier%20frequency%20offset
Carrier frequency offset (CFO) is one of many non-ideal conditions that may affect baseband receiver design. In designing a baseband receiver, one must consider not only the degradation caused by a non-ideal channel and noise, but also the impairments introduced by the RF and analog parts. Those non-idealities include sampling clock offset, IQ imbalance, power amplifier nonlinearity, phase noise and carrier frequency offset. Carrier frequency offset often occurs when the local oscillator signal for down-conversion in the receiver does not synchronize with the carrier signal contained in the received signal. This phenomenon can be attributed to two important factors: frequency mismatch between the transmitter and the receiver oscillators, and the Doppler effect as the transmitter or the receiver is moving. When this occurs, the received signal will be shifted in frequency. For an OFDM system, the orthogonality among subcarriers is maintained only if the receiver uses a local oscillation signal that is synchronous with the carrier signal contained in the received signal. Otherwise, mismatch in carrier frequency can result in inter-carrier interference (ICI). The oscillators in the transmitter and the receiver can never be oscillating at identical frequencies. Hence, carrier frequency offset always exists even if there is no Doppler effect. A standard-compliant communication system usually requires oscillators to have a small enough tolerance and thus bounds the CFO. For example, IEEE 802.11 WLAN specifies the oscillator precision tolerance to be less than ±20 ppm, so that the CFO is in the range from −40 ppm to +40 ppm. Example If the TX oscillator runs at a frequency that is 20 ppm above the nominal frequency and if the RX oscillator is running at 20 ppm below, then the received baseband signal will have a CFO of 40 ppm. With a carrier frequency of 5.2 GHz in this standard, the CFO is up to ±208 kHz. In addition, if the transmitter or the receiver is moving, the Doppler effect adds some hundreds of hertz in frequency spreading. Compared to the CFO resulting from the oscillator mismatch, the Doppler effect in this case is relatively minor. Effects of synchronization error Given a carrier frequency offset Δf, the received continuous-time signal is rotated by a constant frequency and takes the form r(t) = s(t)·e^(j2πΔf·t). The carrier frequency offset can first be normalized with respect to the subcarrier spacing 1/(N·Ts), where N is the DFT size and Ts the sample period, giving ε = Δf·N·Ts, and then decomposed into an integer component εI and a fractional component εf, that is, ε = εI + εf with |εf| ≤ 1/2. The received frequency-domain signal of the l-th OFDM symbol then becomes Yl[k] = e^(jθl)·(sin(πεf)/(N·sin(πεf/N)))·H[k−εI]·Xl[k−εI] + Il[k] + Wl[k], where θl is a phase shift common to all subcarriers and proportional to the symbol index l. The second term of the equation, Il[k], denotes the ICI, namely signals from other subcarriers that interfere with the desired subcarrier signal. Also note that Wl[k] is the channel noise component. The fractional carrier frequency offset, εf, results in attenuation in magnitude, phase shift, and ICI, while the integer carrier frequency offset, εI, causes an index shift as well as a phase shift in the received frequency-domain signals. Note that the phase shift is identical in every subcarrier and is also proportional to the symbol index l. Carrier frequency offset estimation Fractional CFO estimation Maximum likelihood (ML) estimation An estimate of the CFO, if within a certain limit, can be obtained simultaneously when the coarse symbol timing is acquired by the algorithms mentioned earlier. For a preamble containing two identical segments separated by D samples, the ML CFO estimator is given by ε̂ = (N/(2πD))·arg{Σn r*[n]·r[n+D]}. Note that the phase can only be resolved in [−π, π), and the above formula estimates only the part of the CFO that is within ±N/(2D) subcarrier spacings. 
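A numerical sketch of this delay-correlation estimator is given below in Python/NumPy; the preamble, noise level, and parameter values are illustrative assumptions rather than those of any particular standard:

import numpy as np

rng = np.random.default_rng(0)
N, D = 256, 128              # DFT size; delay between the two identical halves
eps_true = 0.31              # CFO in subcarrier spacings, within +/- N/(2*D)

half = np.exp(2j * np.pi * rng.random(D))      # one training segment
s = np.concatenate([half, half])               # preamble: two identical halves
n = np.arange(2 * D)
r = s * np.exp(2j * np.pi * eps_true * n / N)  # rotate the signal by the CFO
r += 0.01 * (rng.standard_normal(2 * D) + 1j * rng.standard_normal(2 * D))

corr = np.sum(np.conj(r[:D]) * r[D:])          # sum_n r*[n] r[n+D]
eps_hat = (N / (2 * np.pi * D)) * np.angle(corr)
print(f"estimated fractional CFO: {eps_hat:.3f} (true {eps_true})")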
If D = N, then the estimation range is ±1/2, i.e., the part of the CFO that is within plus and minus half the subcarrier spacing, also known as the fractional CFO. In the case in which the actual CFO exceeds this range, frequency ambiguity occurs, and the total CFO must be resolved by additional integer CFO estimation. BLUE If the preamble has U identical repetitions, where U ≥ 2, then another estimator, the best linear unbiased estimator (BLUE), exploiting the correlation of the repeated segments is possible. Assume that there are R samples in a segment, so, in total, N = U·R samples are available. The BLUE estimation algorithm starts with computing several linear auto-correlation functions φ(u) with u·R samples of delay, for u = 0, 1, ..., H. Then the phase differences between all pairs of auto-correlation functions with delay difference R are computed, φu = [arg φ(u) − arg φ(u−1)] mod 2π, where mod denotes a modulo-2π operation and H is a design parameter not greater than U/2. Note that each φu represents an estimate of the CFO, scaled by a constant; the smaller this constant, the better the accuracy achieved. To gain an effective CFO estimate, the BLUE estimator uses a weighted average of all the φu and computes ε̂ = (U/(2π))·Σ(u=1..H) w(u)·φu, where the weights w(u) are chosen to minimize the variance of the estimate. The optimal value of H for achieving the minimal variance of ε̂ is H = U/2. The range of the estimated carrier frequency offset is ±U/2 subcarrier spacings. With some modification, this estimator can also be applied to preambles consisting of several repeated segments with specific sign changes. With properly acquired symbol timing, the received segments of the preamble are multiplied by their respective signs, and then the same method as the BLUE estimator can be applied. Integer CFO estimation In the IEEE 802.16e OFDM mode standard, the oscillator deviation is within ±8 ppm. With the highest possible carrier frequency of 10.68 GHz, the maximum CFO is about ±171 kHz when the transmitter LO and the receiver LO both have the largest yet opposite-sign frequency deviations, which is also equivalent to ±11 subcarrier spacings. In the 6 MHz DVB-T system, assuming that the oscillator deviation is within ±20 ppm and the carrier frequency is around 800 MHz, the maximum CFO can be up to ±38 subcarrier spacings in the 8K transmission mode. From the previous discussion, it is clear that the estimated CFO obtained simultaneously in the coarse symbol boundary detection has ambiguity in frequency. In the following, algorithms for resolving such frequency ambiguity in the estimated carrier frequency offset will be presented. Time-domain correlation In the 802.16e OFDM mode, the initial estimated CFO covers only a fraction of the possible range, and several additional integer frequency offsets remain possible given the overall CFO range of ±11 subcarrier spacings. In order to estimate this additional integer CFO, a matched filter matching the fractional-CFO-compensated received signal against the modulated long preamble waveforms can be used. The coefficients of the matched filter are the complex conjugate of the long preamble, and they are modulated by a sinusoidal wave whose frequency is a possible integer CFO mentioned above. The output of the matched filter will have a maximum peak value if its coefficients are modulated by the carrier with the correct integer CFO. It is possible to deploy one such matched filter for each possible integer CFO. In this case, seven matched filters are needed. However, one can use only one set of matched filter hardware that handles different integer CFOs sequentially. In addition, as suggested previously in the symbol timing detection subsection, the coefficients of the matched filter can be quantized to −1, 0, 1 to reduce hardware complexity. 
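The following Python/NumPy sketch illustrates the integer-CFO search by correlation against remodulated preamble copies, as described above; the preamble, the noise level, and the set of seven candidate offsets are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
N = 256
n = np.arange(N)
preamble = np.exp(2j * np.pi * rng.random(N))   # known unit-modulus preamble

eps_int_true = -2                               # integer CFO to be detected
r = preamble * np.exp(2j * np.pi * eps_int_true * n / N)
r += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

candidates = range(-3, 4)                       # seven hypotheses, as in the text
metric = {m: abs(np.vdot(preamble * np.exp(2j * np.pi * m * n / N), r))
          for m in candidates}                  # np.vdot conjugates its first arg
print("detected integer CFO:", max(metric, key=metric.get))   # -> -2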
Carrier frequency offset estimation in MIMO-OFDM systems In MIMO-OFDM systems, the transmit antennas are often co-located, as are the receive antennas. Hence, it is valid to assume that only one oscillator is referenced on the transmitter side and one on the receiver side. As a result, a single CFO is to be estimated from the multiple receive antennas. The ML estimation for the fractional CFO is quite popular in MIMO-OFDM systems. Another fractional CFO estimation algorithm for MIMO-OFDM systems applies different weights to the received signals according to the respective degrees of channel fading. The preamble is designed so that each transmit antenna uses non-overlapping subcarriers to facilitate separation of signals from different transmit antennas. At each receive antenna, the cross-correlation between the received signal and the known preamble is examined. The magnitude of the cross-correlation output reflects the channel fading between the corresponding transmit and receive antenna pair. Based on the channel fading information, weights are applied to the received signals to emphasize those with stronger channel gains and at the same time to suppress those that are deeply faded. Then, the CFO is estimated based on the phase of the delay correlation of the weighted signals. For integer CFO, frequency-domain cross-correlation and frequency-domain PN correlation can be used with slight modification. First, the received signals must be compensated by the estimated fractional CFO. Then, the compensated signals are transformed into the frequency domain. The frequency-domain cross-correlation algorithm for one specific receive antenna is similar to that in the SISO case. Residual CFO and SCO estimation Although the CFO in the received signal has been estimated and compensated in the receiver, some residual CFO may still exist. Besides, the CFO contained in the received signal may very well be time-varying and, thus, it needs to be continuously tracked. The received signal also suffers from sampling clock offset (SCO), which may cause a gradual drift of the safe DFT window in addition to extra phase shift in the received frequency-domain signals. In frame-based OFDM systems, both residual CFO tracking and SCO tracking are inevitable, because the receiver may operate for a long period of time. In packet-based OFDM systems, however, the influences of these two offsets depend on the packet length and the magnitude of the offsets. The SCO may not be easily estimated from the time-domain signal. However, it can be examined through the phase shift of the frequency-domain pilot signals. The residual CFO can also be estimated in a similar way. In many OFDM wireless communication standards, for example, DVB-T, IEEE 802.11 a/g/n, and IEEE 802.16e OFDM mode, dedicated pilot subcarriers are allocated to facilitate receiver synchronization. The phase shifts in the received frequency-domain signals caused by the CFO are identical at all subcarriers, provided that the ICI is ignored. On the other hand, the SCO causes phase shifts that are proportional to the respective subcarrier indices. The received signals contain ICI and noise, and therefore the measured phases deviate from these two ideal straight lines. Conventionally, the SCO can be estimated by computing a slope from the plot of measured pilot subcarrier phase differences versus pilot subcarrier indices. Moreover, joint estimation of CFO and SCO has also been studied extensively. 
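A minimal sketch of this pilot-based joint estimation is shown below; the pilot layout and offset values are illustrative assumptions. The least-squares fit recovers the common phase term (due to residual CFO) as the intercept and the SCO-induced slope over the pilot indices:

import numpy as np

pilots = np.array([-21.0, -7.0, 7.0, 21.0])  # assumed pilot subcarrier indices
theta_cfo, theta_sco = 0.05, 0.002           # common term; slope per subcarrier

phase = theta_cfo + theta_sco * pilots       # ideal pilot phase shifts
phase += 0.002 * np.random.default_rng(2).standard_normal(pilots.size)  # noise

A = np.column_stack([np.ones_like(pilots), pilots])
cfo_hat, sco_hat = np.linalg.lstsq(A, phase, rcond=None)[0]
print(f"residual-CFO term ~ {cfo_hat:.4f} rad, "
      f"SCO slope ~ {sco_hat:.5f} rad/subcarrier")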
Carrier frequency offset compensation In order to suppress the ICI and thereby reduce SNR degradation, the residual CFO must be sufficiently small. For example, when using the 64-QAM constellation, it is better to keep the residual CFO below 0.01 of the subcarrier spacing to ensure that the SNR degradation stays below 0.3 dB for moderate SNR. On the other hand, when QPSK is used, the residual CFO can be up to 0.03 of the subcarrier spacing. References Further reading G. L. Stuber et al., 2004. "Broadband MIMO-OFDM wireless communications," Proceedings of the IEEE, 92, 271–293. A. van Zelst and T. C. W. Schenk, 2004. "Implementation of a MIMO OFDM-based wireless LAN system," IEEE Transactions on Signal Processing, 52, 483–494. E. Zhou, X. Zhang, H. Zhao, and W. Wang, 2005. "Synchronization algorithms for MIMO OFDM systems," in Proceedings of the IEEE Wireless Communications and Networking Conference, March, pp. 18–22. P. Priotti, 2004. "Frequency synchronization of MIMO OFDM systems with frequency selective weighting," in Proceedings of the IEEE Vehicular Technology Conference, vol. 2, May, pp. 1114–1118. Baseband Receiver Design for Wireless MIMO-OFDM Communications Signal processing
Carrier frequency offset
Technology,Engineering
2,463
71,494,751
https://en.wikipedia.org/wiki/GEEKOM
GEEKOM is a multinational consumer electronics company specializing in mini PCs. Its research and design headquarters are located in China. History GEEKOM was founded in 2003. In late 2021, GEEKOM switched its focus to the mini PC market and has since become one of the world's leading manufacturers of mini PCs. Products GEEKOM specializes in the production and sale of mini PCs. It launched its first flagship mini PC, the Mini IT8, on 20 November 2021. The Mini IT8 has been praised as an "affordable and compact" alternative to NUCs, a similar line of barebone computers produced by Intel. , GEEKOM has released the following three mini PCs in addition to the Mini IT8: the Mini IT8 SE, MiniAir 11, and Mini IT11. In January 2022, GEEKOM partnered with Intel to launch its first gaming laptop, the GEEKOM BookFun 11, marking GEEKOM's first venture into the smart laptop market. In 2024, Geekom released A7, A8 - a next-gen AI mini PC with an AMD HawkPoint Ryzen 8040 processor. References External links Computer hardware companies Computer companies established in 2003 Consumer electronics brands Mini PC
GEEKOM
Technology
243
51,503,257
https://en.wikipedia.org/wiki/NGC%20182
NGC 182 is a spiral galaxy with a ring structure, located in the constellation Pisces. It was discovered on December 25, 1790 by William Herschel. In 2004 a type IIb supernova was discovered in this galaxy and designated SN 2004ex. References External links 0182 Intermediate spiral galaxies Discoveries by William Herschel Pisces (constellation) 002279
NGC 182
Astronomy
78
2,679,233
https://en.wikipedia.org/wiki/Epsilon%20Virginis
Epsilon Virginis (ε Virginis, abbreviated Epsilon Vir, ε Vir), formally named Vindemiatrix, is a star in the zodiac constellation of Virgo. The apparent visual magnitude of this star is +2.8, making it the third-brightest member of Virgo. Based upon parallax measurements made by the Gaia spacecraft, Vindemiatrix lies at a distance of about 110 light-years from the Sun, give or take 0.7 light-years. Stellar properties Vindemiatrix is a giant star with a stellar classification of G8 III. With 2.7 times the mass of the Sun and at an age of 700 million years, it has reached a stage in its evolution where the hydrogen fuel in its core is exhausted. It is believed to be a red clump star: a red giant star fusing helium into carbon in its core, surrounded by a shell fusing hydrogen into helium. As a result, it has expanded to 12 times the Sun's size and is now radiating around 91 times as much luminosity as the Sun. This energy is being emitted from its outer atmosphere at an effective temperature of 5,071 K, which gives it the yellow-hued glow of a G-type star. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. This star is a likely member of the thin disk population, and its orbit remains close to the galactic plane. Nomenclature ε Virginis (Latinised to Epsilon Virginis) is the star's Bayer designation. It bore the traditional names Vindemiatrix and Vindemiator, which come from Greek through the Latin vindēmiātrix, vindēmiātor, meaning 'the grape-harvestress'. Additional medieval names are Almuredin, Alaraph, Provindemiator, Protrigetrix and Protrygetor. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Vindemiatrix for this star. This star, along with Beta Virginis (Zavijava), Gamma Virginis (Porrima), Eta Virginis (Zaniah) and Delta Virginis (Minelauva), made up Al ʽAwwāʼ, which is Arabic for 'the Barker'. In Chinese, 太微左垣 (Tài Wēi Zuǒ Yuán), meaning Left Wall of Supreme Palace Enclosure, refers to an asterism consisting of Epsilon Virginis, Eta Virginis, Gamma Virginis, Delta Virginis and Alpha Comae Berenices. Consequently, the Chinese name for Epsilon Virginis itself is 太微左垣二 (Tài Wēi Zuǒ Yuán èr), the Second Star of the Left Wall of the Supreme Palace Enclosure, representing 東次將 (Dōngcìjiāng), meaning the Second Eastern General, westernized into Tsze Tseang by R.H. Allen. References External links Virginis, Epsilon Virgo (constellation) G-type giants Vindemiatrix Virginis, 047 063608 4932 113226 Durchmusterung objects TIC objects
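The quoted radius, luminosity, and effective temperature can be cross-checked against the Stefan–Boltzmann law, L = 4πR²σT⁴, which in solar units gives R/R☉ = (L/L☉)^(1/2)·(T☉/T)². The short calculation below is not from the article and assumes T☉ ≈ 5772 K:

# Consistency check of the quoted stellar parameters via Stefan-Boltzmann.
L, T, T_SUN = 91.0, 5071.0, 5772.0    # luminosity (L_sun), T_eff (K), T_sun (K)
R = L ** 0.5 * (T_SUN / T) ** 2       # radius in solar radii
print(f"implied radius: {R:.1f} solar radii")   # ~12.4, close to the quoted 12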
Epsilon Virginis
Astronomy
662
19,169,901
https://en.wikipedia.org/wiki/Ferroniobium
Ferroniobium is an important iron–niobium alloy, with a niobium content of 60–70%. It is the main source for niobium alloying of HSLA steel and covers more than 80% of the worldwide niobium production. The niobium is mined from pyrochlore deposits and is subsequently transformed into niobium pentoxide, Nb2O5. This oxide is mixed with iron oxide and aluminium and is reduced in an aluminothermic reaction to niobium and iron. The component metals can be purified in an electron beam furnace, or the alloy can be used as it is. For alloying with steel, the ferroniobium is added to molten steel before casting. The largest producers of ferroniobium are the same as for niobium and are located in Brazil and Canada. External links ISO 5453:1980 Ferroniobium -- Specification and conditions of delivery References Ferroalloys Niobium alloys
Ferroniobium
Chemistry
209
24,917,031
https://en.wikipedia.org/wiki/Williams%E2%80%93Landel%E2%80%93Ferry%20equation
The Williams–Landel–Ferry Equation (or WLF Equation) is an empirical equation associated with time–temperature superposition. The WLF equation has the form log(aT) = −C1·(T − Tr) / (C2 + T − Tr), where log(aT) is the decadic logarithm of the WLF shift factor, T is the temperature, Tr is a reference temperature chosen to construct the compliance master curve and C1, C2 are empirical constants adjusted to fit the values of the superposition parameter aT. The equation can be used to fit (regress) discrete values of the shift factor aT vs. temperature. Here, values of the shift factor aT are obtained by a horizontal shift log(aT) of creep compliance data plotted vs. time or frequency in double logarithmic scale, so that a data set obtained experimentally at temperature T superposes with the data set at temperature Tr. A minimum of three values of aT are needed to obtain C1, C2, and typically more than three are used. Once constructed, the WLF equation allows for the estimation of the temperature shift factor for temperatures other than those for which the material was tested. In this way, the master curve can be applied to other temperatures. However, when the constants are obtained with data at temperatures above the glass transition temperature (Tg), the WLF equation is applicable to temperatures at or above Tg only; the constants are positive and represent Arrhenius behavior. Extrapolation to temperatures below Tg is erroneous. When the constants are obtained with data at temperatures below Tg, negative values of C1, C2 are obtained, which are not applicable above Tg and do not represent Arrhenius behavior. Therefore, the constants obtained above Tg are not useful for predicting the response of the polymer for structural applications, which necessarily must operate at temperatures below Tg. The WLF equation is a consequence of time–temperature superposition (TTSP), which mathematically is an application of Boltzmann's superposition principle. It is TTSP, not WLF, that allows the assembly of a compliance master curve that spans more time, or frequency, than afforded by the time available for experimentation or the frequency range of the instrumentation, such as a dynamic mechanical analyzer (DMA). While the time span of a TTSP master curve is broad, according to Struik, it is valid only if the data sets did not suffer from ageing effects during the test time. Even then, the master curve represents a hypothetical material that does not age. Effective time theory needs to be used to obtain useful predictions for long-term behavior. Having data above Tg, it is possible to predict the behavior (compliance, storage modulus, etc.) of viscoelastic materials for temperatures T > Tg, and/or for times/frequencies longer/slower than the time available for experimentation. With the master curve and the associated WLF equation it is possible to predict the mechanical properties of the polymer outside the time scale of the machine, thus extrapolating the results of multi-frequency analysis to a broader range, beyond the measurement range of the machine. Predicting the Effect of Temperature on Viscosity by the WLF Equation The Williams-Landel-Ferry model, or WLF for short, is usually used for polymer melts or other fluids that have a glass transition temperature. The model is log(η(T)) = log(η(Tr)) − C1·(T − Tr)/(C2 + T − Tr), where T is the temperature and η(Tr), C1, C2 and Tr are empirical parameters (only three of them are independent from each other). 
Typically, if Tr is set to match the glass transition temperature Tg, we get C1 ≈ 17.44 and C2 ≈ 51.6 K. Van Krevelen recommends choosing Tr = Tg + 43 K instead; then C1 = 8.86 and C2 = 101.6 K. Using such universal parameters allows one to guess the temperature dependence of a polymer's viscosity by knowing its viscosity at a single temperature. In reality the universal parameters are not that universal, and it is much better to fit the WLF parameters to the experimental data, within the temperature range of interest. Further reading Williams-Landel-Ferry model Time–temperature superposition Viscoelasticity References Polymers
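A minimal Python sketch of the shift-factor calculation with the "universal" constants quoted above follows; the glass transition temperature used is a made-up example value, and in practice C1 and C2 should be refitted to data in the temperature range of interest:

def wlf_log_shift(T, Tr, C1=17.44, C2=51.6):
    """Decadic log of the WLF shift factor a_T, with Tr = Tg and the
    universal constants; T and Tr in kelvin, valid for T >= Tr."""
    return -C1 * (T - Tr) / (C2 + (T - Tr))

Tg = 373.0  # hypothetical glass transition temperature (K)
for T in (Tg, Tg + 10.0, Tg + 50.0):
    print(f"T = {T:.0f} K: log(a_T) = {wlf_log_shift(T, Tg):.2f}")
# At Tg the shift is zero; a_T then drops by orders of magnitude above Tg.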
Williams–Landel–Ferry equation
Chemistry,Materials_science
849
8,421,442
https://en.wikipedia.org/wiki/Trans-acting
In the field of molecular biology, trans-acting (trans-regulatory, trans-regulation), in general, means "acting from a different molecule" (i.e., intermolecular). It may be considered the opposite of cis-acting (cis-regulatory, cis-regulation), which, in general, means "acting from the same molecule" (i.e., intramolecular). In the context of transcription regulation, a trans-acting factor is usually a regulatory protein that binds to DNA. The binding of a trans-acting factor to a cis-regulatory element in DNA can cause changes in transcriptional expression levels. microRNAs or other diffusible molecules are also examples of trans-acting factors that can regulate target sequences. The trans-acting gene may be on a different chromosome to the target gene, but the activity is via the intermediary protein or RNA that it encodes. Cis-acting elements, on the other hand, do not code for protein or RNA. Both the trans-acting gene and the protein/RNA that it encodes are said to "act in trans" on the target gene. Transcription factors are categorized as trans-acting factors. See also Trans-regulatory element Transactivation Transrepression References Genetics terms Molecular biology
Trans-acting
Chemistry,Biology
268
76,034,034
https://en.wikipedia.org/wiki/Promethium%20nitride
Promethium nitride is a binary inorganic compound of promethium and nitrogen with the chemical formula PmN. Physical properties PmN crystals belong to the cubic crystal system, with space group Fm3m. References Nitrides Promethium compounds Nitrogen compounds
Promethium nitride
Chemistry
50
11,571,153
https://en.wikipedia.org/wiki/Low-level%20windshear%20alert%20system
A low-level windshear alert system (LLWAS) measures average surface wind speed and direction using a network of remote sensor stations, situated near runways and along approach or departure corridors at an airport. Wind shear is the generic term for wind differences over an operationally short distance (in relation to flight) which encompass meteorological phenomena including gust fronts, microbursts, vertical shear, and derechos. Background LLWAS compares results over its operating area to determine whether calm, steady winds, wind shifts (in relation to runways), wind gusts, divergent winds, sustained divergent winds (indicative of shear), or strong and sustained divergent winds (indicative of microbursts) are observed. A LLWAS master station polls each remote station every system cycle (nominally every ten seconds) and provides prevailing airport wind averages, runway specific winds, gusts, may set new wind shear alerts or microburst alerts and reset countdown timers of elapsed time since the last alert. By airline rules, pilots must avoid microbursts if warnings are issued by an automated wind shear detection system, and must wait until a safe time interval passes, to assure departure or landing conditions are safe for the performance of the airframe. Pilots may decide whether to land (or conduct a missed approach) after wind shear alerts are issued. LLWAS wind shear alerts are defined as wind speed gain or loss of between 20 and 30 knots aligned with the active runway direction. "Low level" refers to altitudes of or less above ground level (AGL). Arriving aircraft on descent, generally within six nautical miles of touchdown will fly within this low level, maintaining a glide slope and may lack recovery altitude sufficient to avoid a stall or flight-into-terrain if caught unaware by a microburst. LLWAS microburst alerts are issued for greater than 30 knot loss of airspeed at the runway or within three nautical miles of approach or two nautical miles of departure. Microbursts in excess of 110 knots have been observed. Each LLWAS equipped airport may have as few as six or as many as thirty-two remote stations. Each remote station uses a tall pole with anemometer and radio-telecommunication equipment mounted on a lowerable ring. Remote station wind measurements are transmitted to a master station at the Air Traffic Control Tower (ATCT), which polls the remote stations, runs wind shear and gust front algorithms, and generates warnings when windshear or microburst conditions are detected. Current observations and warnings are displayed for approach controllers in the terminal radar approach control facility (TRACON) and for local and ground controllers in the air traffic control tower. Air traffic controller (ATC) users at local, ground and departure positions in the ATCT relay the LLWAS runway specific alerts to pilots via voice radio communication. Recent wind shear alerts may also feature in radio broadcasts by the automated terminal information system (ATIS). LLWAS wind shear and microburst alerts assist pilots during busy times on final approach and on departure, often when heavy traffic, low ceilings, obstructions to vision, and moderate to heavy precipitation add to the difficulty in determining in just a few seconds whether mounting wind and weather hazards should be risked or avoided. 
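As a toy illustration of the alert thresholds described above (not the operational LLWAS algorithm, which works on a network of vector wind measurements), the following Python sketch classifies a single runway-aligned wind-speed change in knots:

def llwas_alert(runway_wind_change_kt: float) -> str:
    """Classify a runway-oriented wind-speed gain/loss in knots."""
    if -runway_wind_change_kt > 30:          # loss of more than 30 knots
        return "MICROBURST ALERT"
    if abs(runway_wind_change_kt) >= 20:     # gain or loss of 20-30 knots
        return "WIND SHEAR ALERT"
    return "no alert"

for delta in (25, -15, -22, -45):
    print(f"{delta:+d} kt -> {llwas_alert(delta)}")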
Related activities in the United States The original LLWAS system (LLWAS I) was developed by the Federal Aviation Administration (FAA) in 1976 in response to the 1975 Eastern Air Lines Flight 66 windshear accident in New York and the findings of Project NIMROD by Ted Fujita. LLWAS I used a center field anemometer along with five pole-mounted anemometers sited around the periphery of a single runway. It was installed at 110 FAA towered airports between 1977 and 1987. Windshear was detected using a simple vector difference algorithm, triggering an alarm when the magnitude of the difference vector between the center field anemometer and any of the five remotes exceeded 15 knots. The LLWAS II deployment included software and hardware upgrades to the existing LLWAS I to improve windshear detection and reduce false alarms. Between 1988 and 1991, all of the LLWAS I systems were upgraded to be LLWAS II compliant. Windshear deployment studies conducted from 1989 through 1994 determined at which LLWAS II sites weather exposure justified an upgrade to a weather radar (Terminal Doppler Weather Radar (TDWR) or Weather Systems Processor (WSP)), an LLWAS Network Expansion (LLWAS-NE), or an LLWAS-Relocate/Sustain (LLWAS-RS) upgrade, singly or in combination. By 2005 all LLWAS II systems had been decommissioned in favor of one of these replacement wind shear detection systems, or two in combination. The LLWAS-NE added the ability to cover more than a single runway, using up to 32 remote stations to provide runway-specific alerts for parallel and crossing runways at ten large airports in combination with TDWR. The LLWAS-RS further upgrades service at the 40 remaining LLWAS II operating sites (not justified for a radar solution) to employ LLWAS-NE algorithms and extend service life by 20 years, in part by adding ultrasonic anemometers with no moving parts. The LLWAS-RS program began in response to the National Transportation Safety Board (NTSB) investigation of the USAir Flight 1016 accident at Charlotte, North Carolina, in 1994. From that accident, a determination was made that LLWAS II must regain and retain its original capability, often degraded by tree growth and airport construction such as hangars that obstruct or deflect wind near LLWAS remote station sensors. See also Index of aviation articles Terminal Doppler Weather Radar Airborne wind shear detection and alert system Center Weather Service Unit NEXRAD References External links Low Level Windshear Alert System – Relocation/Sustainment LLWAS History, System Description, Guide to Literature Meteorological instrumentation and equipment Runway safety Meteorological data and networks
Low-level windshear alert system
Technology,Engineering
1,195
648,566
https://en.wikipedia.org/wiki/QNX4FS
QNX4FS is an extent-based file system used by the QNX4 and QNX6 operating systems. As the file system uses soft updates, it remains consistent even after a power failure, without using journaling. Instead, the writes are carefully ordered and flushed to disk at appropriate intervals so that the on-disk structure remains consistent even if an operation is interrupted. However, unflushed changes to the file system are lost, as the disk cache is typically stored in volatile memory. Another notable property of this file system is that its actual metadata, like inode information and disk bitmaps, are accessible in the same way as any other file on the file system (as /.inodes and /.bitmap, respectively). This is consistent with QNX's (in fact, Plan 9 from Bell Labs's, or historically Unix's) philosophy that "everything is a file". References External links BlackBerry Disk file systems
QNX4FS
Technology
202
4,251,102
https://en.wikipedia.org/wiki/Three-point%20flexural%20test
The three-point bending flexural test provides values for the modulus of elasticity in bending Ef, flexural stress σf, flexural strain εf and the flexural stress–strain response of the material. This test is performed on a universal testing machine (tensile testing machine or tensile tester) with a three-point or four-point bend fixture. The main advantage of a three-point flexural test is the ease of the specimen preparation and testing. However, this method also has some disadvantages: the results of the testing method are sensitive to specimen and loading geometry and strain rate. Testing method The test method for conducting the test usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart. Calculation of the flexural stress: σf = 3FL/(2bd²) for a rectangular cross section, and σf = FL/(πR³) for a circular cross section. Calculation of the flexural strain: εf = 6Dd/L². Calculation of the flexural modulus: Ef = L³m/(4bd³). In these formulas the following parameters are used: σf = flexural stress; at fracture it equals the modulus of rupture, the stress required to fracture the sample (MPa); εf = strain in the outer surface (mm/mm); Ef = flexural modulus of elasticity (MPa); F = load at a given point on the load-deflection curve (N); L = support span (mm); b = width of test beam (mm); d = depth or thickness of tested beam (mm); D = maximum deflection of the center of the beam (mm); m = the gradient (i.e., slope) of the initial straight-line portion of the load-deflection curve (N/mm); R = the radius of the beam (mm). Fracture toughness testing The fracture toughness of a specimen can also be determined using a three-point flexural test. The stress intensity factor at the crack tip of a single edge notch bending specimen is KI = (P/(B·√W))·f(a/W), where P is the applied load, B is the thickness of the specimen, a is the crack length, W is the width of the specimen, and f(a/W) is a dimensionless geometry function. In a three-point bend test, a fatigue crack is created at the tip of the notch by cyclic loading. The length of the crack is measured. The specimen is then loaded monotonically. A plot of the load versus the crack opening displacement is used to determine the load at which the crack starts growing. This load is substituted into the above formula to find the fracture toughness KIc. The ASTM D5045-14 and E1290-08 standards suggest the relation f(x) = 6√x·[1.99 − x(1 − x)(2.15 − 3.93x + 2.7x²)] / [(1 + 2x)(1 − x)^(3/2)], where x = a/W. The predicted values of KI are nearly identical for the ASTM and Bower equations for crack lengths less than 0.6W. Standards ISO 12135: Metallic materials. Unified method for the determination of quasi-static fracture toughness. ISO 12737: Metallic materials. Determination of plane-strain fracture toughness. ISO 178: Plastics—Determination of flexural properties. ASTM C293: Standard Test Method for Flexural Strength of Concrete (Using Simple Beam With Center-Point Loading). ASTM D790: Standard test methods for flexural properties of unreinforced and reinforced plastics and electrical insulating materials. ASTM E1290: Standard Test Method for Crack-Tip Opening Displacement (CTOD) Fracture Toughness Measurement. ASTM D7264: Standard Test Method for Flexural Properties of Polymer Matrix Composite Materials. ASTM D5045: Standard Test Methods for Plane-Strain Fracture Toughness and Strain Energy Release Rate of Plastic Materials. See also References Materials testing Mechanics
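The standard formulas reconstructed above are easy to evaluate; the following Python sketch computes the flexural quantities for a rectangular beam, with all input values chosen purely for illustration:

import math

def flexural_props(F, L, b, d, D, m):
    """F: load (N); L: span (mm); b, d: width and depth (mm);
    D: centre deflection (mm); m: initial load-deflection slope (N/mm)."""
    sigma_f = 3 * F * L / (2 * b * d ** 2)   # flexural stress (MPa)
    eps_f = 6 * D * d / L ** 2               # flexural strain (mm/mm)
    E_f = L ** 3 * m / (4 * b * d ** 3)      # flexural modulus (MPa)
    return sigma_f, eps_f, E_f

def flexural_stress_round(F, L, R):
    return F * L / (math.pi * R ** 3)        # circular cross section (MPa)

sigma, eps, E = flexural_props(F=100.0, L=64.0, b=12.7, d=3.2, D=1.0, m=50.0)
print(f"stress {sigma:.1f} MPa, strain {eps:.4f}, modulus {E:.0f} MPa")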
Three-point flexural test
Physics,Materials_science,Engineering
715
43,589,512
https://en.wikipedia.org/wiki/Single-layer%20materials
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements). It is predicted that there are hundreds of stable single-layer materials. The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials can be found in computational databases. 2D materials can be produced using mainly two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation. Single element materials C: graphene and graphyne Graphene Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one-atom-thick sheet. It is hundreds of times stronger than most steels by weight. It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper. It was first produced in 2004. Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer. Graphyne Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds. Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization, sp^n, where 1 < n < 2, compared to graphene (pure sp²) and diamond (pure sp³). First-principle calculations using phonon dispersion curves and ab-initio finite temperature, quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable. The existence of graphyne was conjectured before 1960. In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates. In 2022, a team claimed to have successfully used alkyne metathesis to synthesise graphyne, though this claim was disputed; after an investigation, the team's paper was retracted by the publication, citing fabricated data. Later in 2022, synthesis of multi-layered γ-graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions. Graphyne has recently been claimed to be a competitor for graphene due to its potential for direction-dependent Dirac cones. B: borophene Borophene is a crystalline atomic monolayer of boron and is also known as a boron sheet. First predicted by theory in the mid-1990s in a freestanding state, and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., different borophene structures were experimentally confirmed in 2015. Ge: germanene Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. Experimentally synthesized germanene exhibits a honeycomb structure. 
This honeycomb structure consists of two hexagonal sub-lattices that are vertically displaced by 0.2 Å from each other. Si: silicene Silicene is a two-dimensional allotrope of silicon, with a hexagonal honeycomb structure similar to that of graphene. Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer. Sn: stanene Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Its buckled structure leads to high reactivity against common air pollutants such as NOx and COx, and it is able to trap and dissociate them at low temperature. A structure determination of stanene using low energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface. Pb: plumbene Plumbene is a two-dimensional allotrope of lead, with a hexagonal honeycomb structure similar to that of graphene. P: phosphorene Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus. Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular it possesses a nonzero band gap while displaying high electron mobility. This property potentially makes it a better semiconductor than graphene. The synthesis of phosphorene mainly consists of micromechanical cleavage or liquid phase exfoliation methods. The former has a low yield, while the latter produces free-standing nanosheets in solvent rather than on a solid support. Bottom-up approaches such as chemical vapor deposition (CVD) remain undeveloped because of the material's high reactivity. Therefore, in the current scenario, the most effective method for large-area fabrication of thin films of phosphorene consists of wet assembly techniques like Langmuir-Blodgett, involving assembly followed by deposition of nanosheets on solid supports. Sb: antimonene Antimonene is a two-dimensional allotrope of antimony, with its atoms arranged in a buckled honeycomb lattice. Theoretical calculations predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation and was found to be very stable under ambient conditions. Its properties also make it a good candidate for biomedical and energy applications. In a 2018 study, antimonene-modified screen-printed electrodes (SPEs) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g−1 at a current of 14 A g−1. Over 10,000 of these galvanostatic cycles, the capacitance retention values drop to 65% initially after the first 800 cycles, but then remain between 65% and 63% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg−1 and a power density of 4.8 kW kg−1. These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems. 
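The specific capacitance quoted above follows from the usual galvanostatic charge/discharge relation C_sp = I·Δt/(m·ΔV). The sketch below (not from the cited study) reproduces the figure's order of magnitude; the discharge time and voltage window are assumed values chosen only to illustrate the arithmetic, as they are not reported in this text:

# Back-of-the-envelope specific capacitance from a galvanostatic discharge.
current_density = 14.0      # A per gram, as quoted above
m = 36e-9                   # g of antimonene in the electrode, as quoted above
I = current_density * m     # applied current (A)
dV = 1.0                    # V, assumed voltage window
dt = 112.7                  # s, assumed discharge time

C_sp = I * dt / (m * dV)
print(f"specific capacitance ~ {C_sp:.0f} F/g")   # ~1578 F/g with these inputs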
A more recent study concerning antimonene-modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers that facilitate electroanalytical measurements in oxygenated environments, in which dissolved oxygen normally hinders the analytical procedure. The same study also describes the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds. Bi: bismuthene Bismuthene, the two-dimensional (2D) allotrope of bismuth, was predicted to be a topological insulator. In 2015, it was predicted that bismuthene would retain its topological phase when grown on silicon carbide, and the material was successfully synthesized in 2016. At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However, the bandgap is as large as 800 meV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come into reach. It has been reported to be the largest nontrivial bandgap 2D topological insulator in its natural state. Top-down exfoliation of bismuthene has been reported in various instances, with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. Emdadul et al. predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room temperature (300 K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m, respectively, along the armchair and zigzag directions. In addition, the predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than that of other analogous 2D honeycombs, making it a promising material for thermoelectric applications. Au: goldene On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene, a single layer of gold atoms 100 nm wide. Lars Hultman, a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene. Researchers from New York University Abu Dhabi (NYUAD) had previously reported synthesising goldene in 2022; however, various other scientists have contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet. Goldene is expected to be used primarily for its optical properties, with applications such as sensing or as a catalyst. Metals Single and double atom layers of platinum in a two-dimensional film geometry have been demonstrated. These atomically thin platinum films are epitaxially grown on graphene, which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. Single-atom layers of palladium with thicknesses down to 2.6 Å, and of rhodium with thicknesses of less than 4 Å, have been synthesized and characterized with atomic force microscopy and transmission electron microscopy. A 2D titanium formed by additive manufacturing (laser powder bed fusion) achieved greater strength than any known material (50% greater than magnesium alloy WE54). 
The material was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures. This reduced by half the stress at the weakest points in the structure. 2D supracrystals The supracrystals of 2D materials have been proposed and theoretically simulated. These monolayer crystals are built of supra atomic periodic structures where atoms in the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene, patterns of 4 or 6 carbon atoms would be arranged hexagonally instead of single atoms, as the repeating node in the unit cell. 2D alloys Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene. Compounds Boron nitride nanosheet Titanate nanosheet Borocarbonitrides MXenes 2D silica Niobium bromide and Niobium chloride Transition metal dichalcogenide monolayers The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS2). Several phases are known, notably the 1T and 2H phases. The naming convention reflects the structure: the 1T phase has one "sheet" (consisting of a layer of S-Mo-S) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H without stabilization by additional electron donors (typically surface S vacancies). The 2H phase of MoS2 (Pearson symbol hP6; Strukturbericht designation C7) has space group P63/mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1, and octahedrally-coordinated Mo; with the 1T unit cell containing only one layer, the unit cell has a c parameter slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). The different crystal structures of the two phases result in differences in their electronic band structure as well. The d-orbitals of 2H-MoS2 are split into three bands: dz2, dx2−y2/dxy, and dxz/dyz. Of these, only the dz2 band is filled; this combined with the splitting results in a semiconducting material with a bandgap of 1.9 eV. 1T-MoS2, on the other hand, has partially filled d-orbitals which give it a metallic character. Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions, the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS2 in the direction parallel to the planar layer (0.1–1 ohm−1cm−1) is ~2200 times larger than the conductivity perpendicular to the layers. There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS2 (0.1–10 cm2V−1s−1) than for bulk MoS2 (100–500 cm2V−1s−1). This difference arises primarily due to charge traps between the monolayer and the substrate it is deposited on. MoS2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions. 
For this reason, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets or depositing the sheets vertically rather than horizontally. Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, with a current density of 10 mA/cm2, an overpotential of −187 mV relative to RHE, and a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase). Graphane While graphene has a hexagonal honeycomb lattice structure with alternating double-bonds emerging from its sp2-bonded carbons, graphane, still maintaining the hexagonal structure, is the fully hydrogenated version of graphene, with every sp3-hybridized carbon bonded to a hydrogen (chemical formula of (CH)n). Furthermore, while graphene is planar due to its double-bonded nature, graphane is puckered, with the hexagons adopting different out-of-plane structural conformers like the chair or boat, to allow for the ideal 109.5° angles which reduce ring strain, in a direct analogy to the conformers of cyclohexane. Graphane was first theorized in 2003, was shown to be stable using first principles energy calculations in 2007, and was first experimentally synthesized in 2009. There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas, as well as the bottom-up approach of chemical vapor deposition. Graphane is an insulator, with a predicted band gap of 3.5 eV; however, partially hydrogenated graphene is a semiconductor, with the band gap being controlled by the degree of hydrogenation. Germanane Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. Germanane's structure is similar to that of graphane; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide. From this material, the calcium (Ca) is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH. The Ca sites in Zintl-phase CaGe2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl2. SLSiN SLSiN (acronym for Single-Layer Silicon Nitride), a novel 2D material introduced as the first post-graphene member of Si3N4, was first discovered computationally in 2020 via density-functional theory based simulations. This new material is inherently 2D, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics. Combined surface alloying Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. By now, this phenomenon has been proven for silicene via a combination of different measurement techniques; the alloy is difficult to prove by a single technique, and hence was not expected for a long time. Such scaffolding surface alloys beneath two-dimensional materials can therefore also be expected below other two-dimensional materials, significantly influencing the properties of the two-dimensional layer. During growth, the alloy acts as both foundation and scaffold for the two-dimensional layer, for which it paves the way. 
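As an aside on the Tafel slopes quoted earlier for the 1T and 2H phases of MoS2: a Tafel slope is the extra overpotential required for each tenfold increase in current density. A minimal sketch, assuming ideal Tafel behavior and taking the quoted 1T operating point (|η| = 187 mV at 10 mA/cm2) as the reference; the extrapolated points are illustrative, not measured values.

```python
import math

def overpotential_mv(j, j_ref, eta_ref_mv, tafel_slope_mv):
    """Extrapolate |eta| assuming eta = eta_ref + b * log10(j / j_ref)."""
    return eta_ref_mv + tafel_slope_mv * math.log10(j / j_ref)

# Quoted 1T-MoS2 reference point: |eta| = 187 mV at 10 mA/cm^2, b = 43 mV/decade.
for j in (10, 100, 1000):  # current density in mA/cm^2
    eta = overpotential_mv(j, j_ref=10, eta_ref_mv=187, tafel_slope_mv=43)
    print(f"{j:>5} mA/cm^2 -> ~{eta:.0f} mV (1T phase)")

# With the 2H slope of 94 mV/decade, each decade of extra current costs
# 94 - 43 = 51 mV more overpotential than in the metallic 1T phase.
```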
Organic Ni3(HITP)2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaaminotriphenylene). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room temperature electrical conductivity is ~40 S cm−1, comparable to that of bulk graphite and among the highest for any conducting metal-organic frameworks (MOFs). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that has not been previously observed in organic semiconductors. The material was claimed to be the first of a group formed by switching metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm−1, respectively. Polymer Using melamine (carbon and nitrogen ring structure) as a monomer, researchers created 2DPA-1, a 2-dimensional polymer sheet held together by hydrogen bonds. The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it resists six times more deformation force than bulletproof glass. It is impermeable to gases and liquids. Combinations Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures. Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. Characterization Microscopy techniques such as transmission electron microscopy, 3D electron diffraction, scanning probe microscopy, scanning tunneling microscopy, and atomic-force microscopy are used to characterize the thickness and size of the 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy, X-ray diffraction, and X-ray photoelectron spectroscopy. Mechanical characterization The mechanical characterization of 2D materials is difficult due to the ambient reactivity and substrate constraints present in many 2D materials. To this end, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible in 2D materials which can survive the conditions of the experimental setup as well as be deposited on suitable substrates or exist in a free-standing form. Many 2D materials also possess out-of-plane deformation, which further complicates measurements. Nanoindentation testing is commonly used to experimentally measure elastic modulus, hardness, and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness, work hardening exponent, residual stress, and yield strength. These experiments are run using dedicated nanoindentation equipment or an Atomic Force Microscope (AFM). 
Nanoindentation experiments are generally run with the 2D material as a linear strip clamped on both ends experiencing indentation by a wedge, or with the 2D material as a circular membrane clamped around the circumference experiencing indentation by a curved tip in the center. The strip geometry is difficult to prepare but allows for easier analysis due to the linear resulting stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, with bending stress becoming relevant in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurement, but the breaking force was found to have a strong tip size dependence due to stress concentration at the apex of the tip. Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m respectively. Poisson's ratio measurements in 2D materials are generally straightforward. To get a value, a 2D sheet is placed under stress and displacement responses are measured, or an MD calculation is run. The unique structures found in 2D materials have been found to result in auxetic behavior in phosphorene and graphene and a Poisson's ratio of zero in triangular lattice borophene. Shear modulus measurements of graphene have been extracted by measuring a resonance frequency shift in a double paddle oscillator experiment as well as with MD simulations. Fracture toughness of 2D materials in Mode I (KIC) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real-time. MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I. In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. Most 2D materials were found to undergo brittle fracture. Applications The major expectation held amongst researchers is that, given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics. Biological applications Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating the unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials. Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge. Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality. 2D nanomaterials are highly diverse in terms of their mechanical, chemical, and optical properties, as well as in size, shape, biocompatibility, and degradability. 
These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery, imaging, tissue engineering, biosensors, and gas sensors among others. However, their low-dimension nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interactions on a small scale. As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. Additionally, their exceptional surface area to volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels, even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing. Moreover, the thinness of these molecules allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT). Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. The newness of this class of materials means that even the relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues. Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials. See also Monolayer Two-dimensional semiconductor Transition metal dichalcogenide monolayers References External links "What Are 2D Materials, and Why Do They Interest Scientists?" in Columbia News (March 6, 2024) "Twenty years of 2D materials" in Nature Physics (January 16, 2024) Additional reading Condensed matter physics Semiconductors Monolayers
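Returning to the nanoindentation analysis in the Mechanical characterization section: extracting pretension and stiffness from the "linear and cubic portions" of a force-displacement curve is, for the circular-membrane geometry, a linear least-squares fit of F = a·δ + b·δ³. The sketch below uses one commonly used point-load membrane model, F ≈ σ0·π·δ + E2D·(q³/a²)·δ³ with q a Poisson's-ratio-dependent constant; the model form, the synthetic data, and all parameter values are illustrative assumptions, not results from the text.

```python
import numpy as np

# Membrane-indentation model: F = sigma0*pi*delta + E2d*(q**3 / a**2)*delta**3
a = 0.5e-6                    # membrane radius in meters (hypothetical)
nu = 0.165                    # Poisson's ratio (hypothetical)
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu ** 2)

# Synthetic "measured" data generated from assumed ground-truth values.
sigma0_true, e2d_true = 0.3, 340.0          # 2D pretension and modulus, N/m
delta = np.linspace(1e-9, 80e-9, 60)        # indentation depth (m)
force = sigma0_true * np.pi * delta + e2d_true * (q ** 3 / a ** 2) * delta ** 3
force += np.random.default_rng(0).normal(0.0, 1e-9, delta.size)  # noise

# Linear least squares in the two unknown coefficients of delta and delta^3:
design = np.column_stack([delta, delta ** 3])
(a_lin, b_cub), *_ = np.linalg.lstsq(design, force, rcond=None)

sigma0_fit = a_lin / np.pi            # pretension recovered from linear term
e2d_fit = b_cub * a ** 2 / q ** 3     # 2D modulus recovered from cubic term
print(f"pretension ~ {sigma0_fit:.2f} N/m, 2D modulus ~ {e2d_fit:.0f} N/m")
```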
Single-layer materials
Physics,Chemistry,Materials_science,Engineering
5,696
61,293,075
https://en.wikipedia.org/wiki/List%20of%20galaxies%20with%20richest%20globular%20cluster%20systems
This is a list of galaxies with the richest known globular cluster systems. As of 2019, the galaxy NGC 6166 has the richest globular cluster system, with 39,000 globular clusters. Other galaxies with rich globular cluster systems are NGC 4874, NGC 4889, NGC 3311 and Messier 87. For comparison, the Milky Way has a poor globular cluster system, with only 150–180 globular clusters. References Galaxies
List of galaxies with richest globular cluster systems
Astronomy
98
9,194,453
https://en.wikipedia.org/wiki/Danish%20oil
Danish oil is a wood finishing oil, often made of tung oil or polymerized linseed oil. Because there is no defined formulation, its composition varies among manufacturers. Danish oil is a hard drying oil, meaning it can polymerize into a solid form when it reacts with oxygen in the atmosphere. It can provide a hard-wearing, often water-resistant satin finish, or serve as a primer on bare wood before applying paint or varnish. It is a "long oil" finish, a mixture of oil and varnish, typically around one-third varnish and the rest oil. Uses When applied in coats over wood, Danish oil cures to a hard satin finish that resists liquid well. As the finished coating is not glossy or slippery, it is a suitable finish for items such as food utensils or tool handles, giving some additional water resistance and leaving a darker finish on the wood. Special dyed grades are available if wood staining is also needed. Application Compared to varnish, it is simple to apply: usually a course of three coats by brush or cloth, with any excess wiped off shortly after application. The finish is left to dry for around 4–24 hours between coats, depending on the mixture being used and the wood being treated. Danish oil provides a coverage of approximately 12.5 m2/L (600 sq. ft./gallon). Spontaneous combustion Rags used for Danish oil, like those used for linseed oil, carry some risk of spontaneous combustion and of starting fires through exothermic oxidation, so it is best to dry rags flat before disposing of them, or else soak them in water. See also Tung oil References Varnishes Oils Painting materials Vegetable oils Wood finishing materials
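As a worked example of the coverage figure quoted above (12.5 m2/L), the amount of oil needed scales with the area and the number of coats. A trivial sketch; the tabletop area is a made-up input:

```python
def litres_needed(area_m2, coats=3, coverage_m2_per_litre=12.5):
    """Estimate the oil required to apply `coats` coats over `area_m2`."""
    return area_m2 * coats / coverage_m2_per_litre

# e.g. a 4 m^2 tabletop with the usual course of three coats:
print(f"{litres_needed(4.0):.2f} L")   # ~0.96 L
```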
Danish oil
Chemistry
354
47,048,061
https://en.wikipedia.org/wiki/Her%20Story%20%28video%20game%29
Her Story is an interactive film video game written and directed by Sam Barlow. It was released on 24 June 2015 for iOS, OS X, and Windows, and the following year for Android. In the game, the player searches and sorts through a database of video clips from fictional police interviews, and uses the clips to solve the case of a missing man. The police interviews focus on the man's wife, Hannah Smith, portrayed by British musician Viva Seifert. The game is Barlow's first project since his departure from Climax Studios, after which he became independent. He wanted to develop a game that was dependent on the narrative, and avoided working on the game until he was settled on an idea that was possible to execute. Barlow eventually decided to create a police procedural game, and incorporate live action footage. He conducted research for the game by watching existing police interviews. Upon doing so, he discovered recurring themes in the suspects' answers, and decided to incorporate ambiguity into the investigation in the game. Her Story was acclaimed by many reviewers, with praise particularly directed at the narrative, unconventional gameplay mechanics, and Seifert's performance. The game has sold over 100,000 copies, and earned multiple year-end accolades, including nominations for Game of the Year awards from several gaming publications. In August 2019, a spiritual sequel titled Telling Lies was released. Gameplay Her Story is an interactive movie game, focusing on a series of seven fictional police interviews from 1994. As the game begins, the player is presented with an old desktop, which contains several files and programs. Among the programs are instructional text files, which explain the game's mechanics. One of the programs automatically opened on the desktop is the "L.O.G.I.C. Database", which allows the player to search and sort video clips within the database, of which there are 271. The video clips are police interviews with Hannah Smith, a British woman. The interviews are unable to be watched in their entirety, forcing the player to view short clips. In the interviews, Hannah answers unknown questions from an off-screen detective, prompting the player to decipher the context of the answers. Hannah's answers are transcribed, and the player finds clips by searching the database for words from the transcriptions, attempting to solve the case by piecing together information. As the player selects clips, they can enter user tags, which are then available as searchable terms. One of the files on the desktop is a database checker, which allows the player to review the number of clips that have been viewed; as a clip is viewed, the red box in the database checker changes to green. The desktop also features the minigame Mirror Game, based on the strategy board game Reversi. Plot The interview tapes feature a woman who introduces herself as Hannah Smith (Viva Seifert), whose husband, Simon, has gone missing, and is later found murdered. Hannah admits that she and Simon had argued, but has an alibi placing her in Glasgow when Simon disappeared. As more pieces of the interviews are discovered, it is claimed that "Hannah" is actually two women: Hannah and Eve, identical twins separated at birth by the midwife, Florence. Florence, whose husband died during the war, desperately wanted to have kids, but did not believe in remarrying, so she faked the death of one of the twins to claim one for herself. 
Florence deliberately kept Eve indoors as much as possible and the twins were unaware of each other's existence until years later. When Florence died, Eve secretly moved in with Hannah at which point they decided to act as a single person, keeping a common diary and a set of rules defining their actions as "Hannah". Hannah's parents were oblivious, and assumed that Eve was Hannah's imaginary friend. Hannah eventually began dating Simon, whom she had met at a glazier where they both worked. Despite their rules to share equally, Hannah slept with Simon and became pregnant. She gradually became possessive of Simon and forbade Eve from interacting with him. Because of the pregnancy, Hannah and Simon were married and moved in together while Eve moved out to her own apartment and began wearing a wig. Hannah miscarried in her eighth month and believed she was infertile afterward. Some time later, Simon encountered Eve in a bar she was singing at. Smitten by her resemblance, the two began an affair. Eve became pregnant but never told Simon and hid the identity of the father from Hannah. On their birthday, after Simon gave Hannah a handmade mirror as a gift, Hannah revealed to him the existence of Eve and her pregnancy. From his reaction, Hannah realised that Simon was the father of Eve's child. After kicking Simon out of the house, she argued with Eve over the affair, causing the latter to leave and drive to Glasgow. When Simon returned, Hannah pretended to be Eve by wearing her wig. Thinking she was Eve, Simon gifted her a similar handmade mirror and professed his desire to be with Eve rather than Hannah. Hannah became furious and revealed her identity. She later claims that, as they fought, she shattered the mirror and inadvertently cut Simon's throat with a shard of it while trying to fend him off. When Eve returned, she found Hannah sitting next to Simon's lifeless body. The two agreed that Eve's baby was the priority so they hid Simon's body and used Eve's trip to Glasgow as an alibi for the time of his disappearance. At the end of the final interview, Eve says that Hannah is "gone ... and she's never coming back" but mockingly asks "can you arrest someone who doesn't exist?". She then requests a lawyer, and says that her comments are just "stories". It is not entirely clear if Eve's story of being an identical twin is true, an intentional fabrication meant to confuse the police, or a case of dissociative identity disorder, with pieces of evidence in-game lending credence to each theory. As the player uncovers enough of the story, a chat window appears asking if they are finished. Upon answering affirmatively, it is revealed that the player is Sarah, Eve's daughter. The chat asks Sarah if she understands her mother's actions, and asks to meet her outside. Development Her Story was developed by Sam Barlow, who previously worked on games such as Silent Hill: Origins (2007) and Silent Hill: Shattered Memories (2009) at Climax Studios. Barlow had conceived the idea of a police procedural game while working at Climax Studios, but decided to become independent to create the game, in order to develop a game that is "deep on story". He became frustrated by publishers rejecting game pitches for being "too kitchen sink [realism]" in favor of video game tropes like a "cyborg assassin from the future", and found that becoming independent allowed him to create his own game of the sort. He also wished to become independent after playing games like Year Walk (2013) and 80 Days (2014). 
Barlow avoided development until he had an idea that was possible to execute. "I could probably quite easily have gone and made an exploration horror game ... but I kind of knew that there would be big compromises there because of budget," he said. Barlow spent his savings to work on the game, allowing him a year of development time. He followed through with the concept of Her Story, as it focused on an "intimate setting, dialogue and character interaction", which he found was often dismissed in larger titles. Barlow felt particularly inspired to develop Her Story after seeing the continuous support of his 1999 game Aisle. When referring to how Her Story challenges typical game conventions, Barlow compares it to the Dogme 95 filmmaking movement, and Alfred Hitchcock's 1948 film Rope. Her Story was approved through Steam Greenlight, and was crowdfunded by Indie Fund. It was released on 24 June 2015 for iOS, OS X, and Windows. Barlow wanted to launch the game on all platforms simultaneously, as he was unsure where the audience would be. "If I'd just gone for just one I'd have lost a lot of the potential audience," he commented. Barlow found that playing Her Story on mobile devices is a "'sofa' experience". He also noted that it felt "natural" for it to be released on mobile devices, as they are regularly used to watch videos and search the internet; similar tasks are used as gameplay mechanics in Her Story. The iPhone's smaller resolution of 640×480, as opposed to 800×600, led to Barlow's doubts about a release on the platform, but he was influenced to release it upon receiving positive feedback through testing. As development neared completion, the game underwent testing, which allowed Barlow to "balance some aspects" and "polish items together". An Android version was released on 29 June 2016. Her Story runs on the Unity game engine. Gameplay design Barlow's immediate idea was to create a game involving police interviews, but he "didn't know exactly what that meant". He then conceived the idea to involve real video footage, and the ability to access the footage through a database interface; he described the interface as being "part Apple II, part Windows 3.1 and part Windows 98". The interface design was inspired by Barlow's appreciation of the police procedural genre, commenting that "the conceit of making the computer itself a prop in the game was so neat". He also compared the searching mechanic to the Google search engine, and wanted to "run with the idea" that the player is "essentially Googling". The game's concept was inspired by the TV series Homicide: Life on the Street (1993–1999), which Barlow found depicted police interviews as a "gladiatorial arena for detectives". Barlow intentionally made the game's opening screen "slightly too long", to immediately notify the player of the slow pacing that would follow. Inspiration to work on Her Story stemmed from Barlow's disappointment with other detective games: he felt that L.A. Noire (2011) never allowed him to feel like "the awesome detective who was having to read things and follow up threads of investigation", and he called the Ace Attorney series (2001–present) "rigid". When Barlow began development on Her Story, he added more typical game aspects, but the game mechanics became more minimalist as development progressed. The initial plan for the game was for the player to work towards a definitive resolution, ultimately solving the crime. 
However, when Barlow tested the concept on pre-existing interview transcripts of convicted murderer Christopher Porco, he began to discover themes surfacing within the interviews, particularly relating to the concept of money, which was ultimately a large factor in Porco's trial. He took this concept of recurring themes and threads, and decided to "move beyond the clearly scripted stuff" when developing Her Story. Barlow felt that the story's appeal was the ambiguity of the investigation, comparing Her Story to the podcast Serial (2014–present), which he listened to late in development. He found that the attraction of Serial was the lack of a definitive solution, noting that "people lean towards certain interpretations ... what makes it interesting is the extent to which it lives on in your imagination". Story and characters Barlow decided to feature live action footage in the game after becoming frustrated with his previous projects, particularly with the technical challenge of translating an actor's performance into a game engine. Barlow set out to work with an actor on Her Story, having enjoyed the process while working at Climax Studios, albeit with a larger budget. He contacted Viva Seifert, whom he had intermittently worked with on Legacy of Kain: Dead Sun for a year, before its cancellation. He felt that Seifert is "very good at picking up a line and intuitively pulling a lot of the subtext into her performance", which led him to believe that she was "perfect" for the role in Her Story. When Barlow asked Seifert to audition, he sent her a 300-page script, which he managed to reduce to 80 pages, by altering font size, as well as some dialogue; she accepted the role. Seifert began to feel pressure midway through filming, when she realised that "the whole game is hinging" on her performance. She described the shoot as "intense" and "rather exhausting", and felt as if she was "subtly being scrutinised" by Barlow, which helped her performance. Barlow also felt that the intensity helped Seifert's performance, taking cues from director Alfred Hitchcock, who would upset his actors in order to achieve the greatest performance. Seifert felt that there were small nuances in her performance that may have "added some twists and turns" for the player that Barlow had not anticipated. The game's seven police interviews were filmed roughly in chronological order over five days, in a process that Barlow called "natural". Like Seifert though, Barlow thought the shoot was intense, remarking "at the end of the shoot, it was just a huge relief it was all over and we hadn't forgotten to record or anything." Barlow travelled to Seifert's home county of Cornwall to film. He felt that finding the locations for the interrogation rooms was the simplest part of production, because "everywhere has crappy looking rooms", with footage being recorded in a council building in Truro. When filming was complete, Barlow wanted to give the impression that the videos had been recorded in 1994, but found digital filters were unable to capture this time frame appropriately. Instead, he recorded the footage through two VHS players to create imperfections in the video before digitising the video into the game. Barlow played the part of the detectives during filming, asking scripted questions to Seifert's character, but is never featured in Her Story. When watching police interviews for research, Barlow found himself empathising with the interviewee, which inspired him to exclude the detective from the game. 
He stated that the interviews typically feature "double betrayal", in which the detectives are "pretending to be the best friend". Barlow felt that removing the detective from the game empowers Seifert's character, allowing the player to empathise. When conducting research for Her Story, Barlow looked at the case regarding the murder of Travis Alexander, which made him consider the manner that female murder suspects are treated in interrogations, stating that they "tend to be fetishised, more readily turned into archetypes". This was further proved to Barlow when studying the interviews of Casey Anthony and Amanda Knox; he found that media commentary often ignored the evidence of the investigation, instead focusing on the expressions of the suspects during the interviews. Barlow conducted further research by studying texts about psychology, and the use of language. After conceiving the game's main mechanics, Barlow began developing the story, conducting research and "letting [the story] take on a life of its own". To develop the story, Barlow placed the script into a spreadsheet, which became so large it often crashed his laptop upon opening it. He mapped out every character involved in the investigation, including their backstories and agendas. He spent about half of development creating detailed documents charting the story's characters and events. He also determined the dates on which the police interviews would take place, and what the suspect was doing in the interim. Once he had determined the game's concept more precisely, Barlow ensured the script contained "layers of intrigue", in order to interest the player to finish the game. Barlow often replaced words of the script with synonyms, to ensure that some clips were not associated with irrelevant words. When writing the script, Barlow generally avoided supernatural themes, but realised that it would involve a "slight dreamlike surreal edge". Working on the script, he often found that he was "very much in the moment, writing from inside the characters' heads". He found it difficult to create a new idea for the story, as detective fiction has been explored many times before. Audio When searching for music to use in Her Story, Barlow looked for songs that sounded "slightly out-of-time". He ultimately used eight tracks from musician Chris Zabriskie, and found that his music invoked nostalgia, and had a "modern edge". He felt that the music "highlights the gap between the 'fake computer world'" and the game. The "emotional intensity" of the clips also influences the music changes in Her Story. Barlow also intended to feature a song for Seifert to sing in some of the clips that fit within the game. He settled on the murder ballad "The Twa Sisters", which he felt would trigger the mythical elements of the game. Seifert and Barlow both altered the ballad, to fit the game. Barlow intended for the sound design to be "all about authenticity". He used an old keyboard to provide sound effects for the computer, using stereo panning for the keys to have the correct 3D position in playback. Sequel Telling Lies was Barlow's spiritual sequel to Her Story; rather than focusing on one central character, it features live-action footage from the video conversations of four characters (played by Logan Marshall-Green, Alexandra Shipp, Kerry Bishé, and Angela Sarafyan), and requires the player to piece together events by searching the video clips to determine why these characters were under surveillance. 
The game was published by Annapurna Interactive on Windows and macOS systems on 23 August 2019. Reception Critical reception Her Story received "universal acclaim" for the iOS version and "generally favorable" reviews for the Windows version, according to review aggregator Metacritic. Praise was directed at the narrative, gameplay mechanics, and Seifert's performance. IGN's Brian Albert called Her Story "the most unique game I've played in years", and Steven Burns of VideoGamer.com named it "one of the year's best and most interesting games". Adam Smith of Rock, Paper, Shotgun remarked that it "might be the best FMV game ever made"; Michael Thomsen of The Washington Post declared it "a beautiful amalgam of the cinema and video game formats". Critics lauded the game's narrative. Edge considered it "a superlatively told work of crime fiction." Kimberley Wallace of Game Informer wrote that the "fragmented" delivery of the story "works to its benefit". She appreciated the subtlety of the narrative, and the ambiguity surrounding the ending. Polygon's Megan Farokhmanesh noted that Her Story "nails the dark, voyeuristic nature of true crime". Chris Schilling of The Daily Telegraph was impressed by the coherence of the narrative, "even when presented out of order". Eurogamer's Simon Parkin found the effects of the narrative to be similar to well-received HBO thrillers, particularly in terms of audience attention. Stephanie Bendixsen of Good Game was disappointed that large plot points were revealed early in the game, but attributed this to the uniqueness of each player's experience. Seifert's performance in the game received praise. GameSpot's Justin Clark felt that the performance "anchored" the game. Chris Kohler of Wired similarly described Seifert's performance as "so captivating that I couldn't imagine this game working any other way". Katie Smith of Adventure Gamers wrote that Seifert is convincing in the role, particularly with small details such as body language, but was startled by the lack of emotion. Game Informer's Wallace echoed similar remarks, noting that Seifert "nailed the role". Rock, Paper, Shotgun's Smith wrote that "the whole thing might collapse" without Seifert's "convincing" performance. IGN's Albert named the acting "believable", stating that Seifert's performance is "appropriately both grounded and absurd". Joe Donnelly of Digital Spy wrote that Seifert's performance has the potential to inspire similar games, and Andy Kelly of PC Gamer called the performance "understated, realistic, and complex". Burns of VideoGamer.com felt generally impressed by Seifert's performance, but noted some "occasional bad acting". Rich Stanton of The Guardian wrote that "Seifert's delivery is usually matter-of-fact and emotionally convincing". Polygon named Hannah among the best video game characters of the 2010s, dubbing Seifert's performance "superb". The unconventional gameplay mechanics also received positive remarks from critics. Destructoid's Laura Kate Dale felt that the game's pacing and structure assisted the narrative, and Wallace of Game Informer found that making a connection between key points in the narrative was entertaining. Burns of VideoGamer.com praised the game's ability to make the player realise their own biases, and challenge their "sense of self". 
Albert of IGN felt that the searching tool was "gratifying", and positively contributes to the pacing of the game, while The Washington Post's Thomsen wrote that the database mechanic created "contemplative gaps between scenes", allowing for "poignance and power" within the narrative. Edge thought that by having game mechanics which require the player to deduce the story through investigation and intuition, Her Story was one of few games "that truly deliver on the foundational fantasy of detective work." Bendixsen of Good Game described the desktop as "appropriately retro", noting that she was "drawn in immediately". The game sold over 100,000 copies by 10 August 2015; about 60,000 copies were sold on Windows, with the remaining 40,000 sold on iOS. Barlow stated that the game's instant popularity surprised him, as he had instead expected the game to slowly spread by word of mouth "and maybe over six months it would pay for itself." Accolades Her Story has received multiple nominations and awards from gaming publications. It won Game of the Year from Polygon, as well as Game of the Month from Rock, Paper, Shotgun and GameSpot. It received the Breakthrough Award at the 33rd Golden Joystick Awards, Debut Game and Game Innovation at the 12th British Academy Games Awards, the award for Most Original game from PC Gamer, and the Seumas McNally Grand Prize at the Independent Games Festival Awards. At The Game Awards 2015, Her Story won Best Narrative, and Seifert won Best Performance for her role in the game; she also won the Great White Way Award for Best Acting in a Game at the 5th Annual New York Game Awards. Her Story won Best Emotional Mobile & Handheld at the Emotional Games Awards 2016, Mobile Game of the Year at the SXSW Gaming Awards, Mobile & Handheld at the British Academy Games Awards, and awards for excellence in story and innovation at the International Mobile Gaming Awards, while The Guardian named it the best iOS game of 2015. In 2015, Edge ranked Her Story 94th in its list of the greatest video games of all time. References External links 2015 video games Android (operating system) games BAFTA winners (video games) British Academy Games Award for Debut Game winners British Academy Games Award for Technical Achievement winners Detective video games Full motion video based games The Game Awards winners Games financed by Indie Fund IndieCade winners Interactive movie video games IOS games MacOS games Seumas McNally Grand Prize winners Video games about computing Video games developed in the United Kingdom Video games directed by Sam Barlow Video games featuring female protagonists Video games featuring non-playable protagonists Video games set in 1994 Video games set in Glasgow Video games set in Hampshire Video games set in Portsmouth Windows games
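The transcript-search mechanic described in the Gameplay section amounts to an inverted index mapping words to clips. The toy sketch below is a guess at the general shape only: the clip transcripts are invented, and the result cap is a hypothetical stand-in for however the game actually limits its result lists.

```python
from collections import defaultdict

clips = {  # clip id -> transcribed answer (invented examples)
    1: "we argued that morning before he left",
    2: "I was in Glasgow when Simon disappeared",
    3: "the mirror was a birthday gift",
}

# Build the inverted index: each word points to the clips containing it.
index = defaultdict(set)
for clip_id, text in clips.items():
    for word in text.lower().split():
        index[word].add(clip_id)

def search(term, max_results=5):
    """Return ids of clips whose transcript contains `term`, capped."""
    return sorted(index.get(term.lower(), set()))[:max_results]

print(search("mirror"))   # -> [3]
print(search("glasgow"))  # -> [2]
```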
Her Story (video game)
Technology
4,848
14,766,388
https://en.wikipedia.org/wiki/MORF4L1
Mortality factor 4-like protein 1 is a protein that in humans is encoded by the MORF4L1 gene. Interactions MORF4L1 has been shown to interact with MYST1, Retinoblastoma protein and MRFAP1. References Further reading
MORF4L1
Chemistry
57
2,122,484
https://en.wikipedia.org/wiki/Iodate
An iodate is the polyatomic anion with the formula IO3−. It is the most common form of iodine in nature, as it comprises the major iodine-containing ores. Iodate salts are often colorless. They are the salts of iodic acid. Structure Iodate is pyramidal in structure. The O–I–O angles range from 97° to 105°, somewhat smaller than the O–Cl–O angles in chlorate. Reactions Redox Iodate is one of several oxyanions of iodine, and has an oxidation number of +5. It participates in several redox reactions, such as the iodine clock reaction. Iodate shows no tendency to disproportionate to periodate and iodide, in contrast to the situation for chlorate. Iodate is reduced by sulfite: IO3− + 3 SO32− → I− + 3 SO42− Iodate oxidizes iodide: IO3− + 5 I− + 6 H+ → 3 I2 + 3 H2O Similarly, chlorate oxidizes iodide to iodate: ClO3− + I− → IO3− + Cl− Iodate is also obtained by reducing a periodate with a sulfide. The byproduct of the reaction is a sulfoxide. Acid-base Iodate is unusual in that it forms a strong hydrogen bond with its parent acid: IO3− + HIO3 → H(IO3)2− The anion is referred to as biiodate. Principal compounds Calcium iodate, Ca(IO3)2, is the principal ore of iodine. It is also used as a nutritional supplement for cattle. Potassium iodate, KIO3, like potassium iodide, has been issued as a prophylaxis against radioiodine absorption in some countries. It is also one of the iodine compounds used to make iodized salt. Potassium hydrogen iodate (or potassium biiodate), KH(IO3)2, is a double salt of potassium iodate and iodic acid, as well as an acid itself. When some oxygen is replaced by fluorine, fluoroiodates are produced. Natural occurrence Minerals containing iodate are found in the caliche deposits of Chile. The most important iodate minerals are lautarite and brüggenite, but copper-bearing iodates such as salesite are also known. Natural waters contain iodine in the form of iodide and iodate, their ratio being dependent on redox conditions and pH. Iodate is the second most abundant form in water. It is mostly associated with alkaline waters and oxidizing conditions. References Halates
Iodate
Chemistry
500
17,020,599
https://en.wikipedia.org/wiki/Product%20market
In economics, the product market is the marketplace where final goods or services are sold to households and the foreign sector. Focusing on the sale of finished goods, it does not include trading in raw or other intermediate materials. Product market regulation is a term for the placing of restrictions upon the operation of the product market. According to an OECD ranking in 1998, English-speaking and Nordic countries had the least-regulated product markets in the OECD. The least-regulated product markets were to be found in: United Kingdom Australia United States Canada New Zealand Denmark Ireland According to the OECD, indicators for product market regulation include price controls, foreign ownership barriers, and tariffs, among other things. See also Factor market (economics) Product marketing References External links Goods (economics)
Product market
Physics
158
15,387,298
https://en.wikipedia.org/wiki/Wolter%20telescope
A Wolter telescope is a telescope for X-rays that only uses grazing incidence optics – mirrors that reflect X-rays at very shallow angles. Problems with conventional telescope designs Conventional telescope designs require reflection or refraction in a manner that does not work well for X-rays. Visible light optical systems use either lenses or mirrors aligned for nearly normal incidence – that is, the light waves travel nearly perpendicular to the reflecting or refracting surface. Conventional mirror telescopes work poorly with X-rays, since X-rays that strike mirror surfaces nearly perpendicularly are either transmitted or absorbed – not reflected. Lenses for visible light are made of transparent materials with an index of refraction substantially different from 1, but all known X-ray-transparent materials have index of refraction essentially the same as 1, so a long series of X-ray lenses, known as compound refractive lenses, are required in order to achieve focusing without significant attenuation. X-ray mirror telescope design X-ray mirrors can be built, but only if the angle from the plane of reflection is very low (typically 10 arc-minutes to 2 degrees). These are called glancing (or grazing) incidence mirrors. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror. These are called Wolter telescopes of type I, II, and III. Each has different advantages and disadvantages. Wolter's key innovation was that by using two mirrors it is possible to create a telescope with a usably wide field of view. In contrast, a grazing incidence telescope with just one parabolic mirror could focus X-rays, but only very close to the centre of the field of view. The rest of the image would suffer from extreme coma. See also List of telescope types Nuclear Spectroscopic Telescope Array (NuSTAR) (2012+) Swift Gamma-Ray Burst Mission Contains a Wolter Type-I X-ray telescope (2004+) Chandra X-ray Observatory Orbiting observatory using a Wolter X-ray telescope. (1999+) XMM-Newton Orbiting X-ray observatory using a Wolter Type-I X-ray telescope. (1999+) ROSAT Orbiting X-ray observatory (1990-1999) eROSITA Orbiting X-ray observatory using Wolter Type-I X-ray telescope on board Spektr-RG (SRG) (2019+) ART-XC Orbiting X-ray observatory using Wolter Type-I X-ray telescope on board Spektr-RG (SRG)(2019+) ATHENA (2031+) Neutron microscope Hans Wolter References X-ray instrumentation X-ray telescopes
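The need for grazing incidence can be made quantitative: with the X-ray refractive index written as n = 1 − δ, total external reflection occurs only below a critical graze angle θc ≈ √(2δ). The snippet below just evaluates that standard relation; the value of δ is a hypothetical placeholder of typical magnitude, not a figure from the article.

```python
import math

def critical_graze_angle_deg(delta):
    """Critical grazing angle for total external reflection, with n = 1 - delta."""
    return math.degrees(math.sqrt(2.0 * delta))

# Hypothetical refractive-index decrement of order 1e-5:
theta = critical_graze_angle_deg(1e-5)
print(f"~{theta:.2f} degrees ({theta * 60:.0f} arc-minutes)")
# ~0.26 degrees, i.e. within the 10 arc-minute to 2 degree range quoted above.
```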
Wolter telescope
Technology,Engineering
541
420,231
https://en.wikipedia.org/wiki/Objective%20%28optics%29
In optical engineering, an objective is an optical element that gathers light from an object being observed and focuses the light rays from it to produce a real image of the object. Objectives can be a single lens or mirror, or combinations of several optical elements. They are used in microscopes, binoculars, telescopes, cameras, slide projectors, CD players and many other optical instruments. Objectives are also called object lenses, object glasses, or objective glasses. Microscope objectives The objective lens of a microscope is the one at the bottom near the sample. At its simplest, it is a very high-powered magnifying glass, with very short focal length. This is brought very close to the specimen being examined so that the light from the specimen comes to a focus inside the microscope tube. The objective itself is usually a cylinder containing one or more lenses that are typically made of glass; its function is to collect light from the sample. Magnification One of the most important properties of microscope objectives is their magnification. The magnification typically ranges from 4× to 100×. It is combined with the magnification of the eyepiece to determine the overall magnification of the microscope; a 4× objective with a 10× eyepiece produces an image that is 40 times the size of the object. A typical microscope has three or four objective lenses with different magnifications, screwed into a circular "nosepiece" which may be rotated to select the required lens. These lenses are often color coded for easier use. The least powerful lens is called the scanning objective lens, and is typically a 4× objective. The second lens is referred to as the small objective lens and is typically a 10× lens. The most powerful lens out of the three is referred to as the large objective lens and is typically 40–100×. Numerical aperture Numerical aperture for microscope lenses typically ranges from 0.10 to 1.25, corresponding to focal lengths of about 40 mm to 2 mm, respectively. Mechanical tube length Historically, microscopes were nearly universally designed with a finite mechanical tube length, which is the distance the light traveled in the microscope from the objective to the eyepiece. The Royal Microscopical Society standard is 160 millimeters, whereas Leitz often used 170 millimeters. 180 millimeter tube length objectives are also fairly common. Using an objective and microscope that were designed for different tube lengths will result in spherical aberration. Instead of finite tube lengths, modern microscopes are often designed to use infinity correction instead, a technique in microscopy whereby the light coming out of the objective lens is focused at infinity. This is denoted on the objective with the infinity symbol (∞). Cover thickness Particularly in biological applications, samples are usually observed under a glass cover slip, which introduces distortions to the image. Objectives which are designed to be used with such cover slips will correct for these distortions, and typically have the thickness of the cover slip they are designed to work with written on the side of the objective (typically 0.17 mm). In contrast, so called "metallurgical" objectives are designed for reflected light and do not use glass cover slips. The distinction between objectives designed for use with or without cover slides is important for high numerical aperture (high magnification) lenses, but makes little difference for low magnification objectives. 
Lens design Basic glass lenses will typically result in significant and unacceptable chromatic aberration. Therefore, most objectives have some kind of correction to allow multiple colors to focus at the same point. The easiest correction is an achromatic lens, which uses a combination of crown glass and flint glass to bring two colors into focus. Achromatic objectives are a typical standard design. In addition to oxide glasses, fluorite lenses are often used in specialty applications. These fluorite or semi-apochromat objectives deal with color better than achromatic objectives. To reduce aberration even further, more complex designs such as apochromat and superachromat objectives are also used. All these types of objectives will exhibit some field curvature: while the center of the image will be in focus, the edges will be slightly blurry. When this aberration is corrected, the objective is called a "plan" objective, and has a flat image across the field of view. Working distance The working distance (sometimes abbreviated WD) is the distance between the sample and the objective. As magnification increases, working distance generally shrinks. When space is needed, special long working distance objectives can be used. Immersion lenses Some microscopes use an oil-immersion or water-immersion lens, which can have magnification greater than 100, and numerical aperture greater than 1. These objectives are specially designed for use with refractive index matching oil or water, which must fill the gap between the front element and the object. These lenses give greater resolution at high magnification. Numerical apertures as high as 1.6 can be achieved with oil immersion. Mounting threads The traditional screw thread used to attach the objective to the microscope was standardized by the Royal Microscopical Society in 1858. It was based on the British Standard Whitworth, with a 0.8 inch diameter and 36 threads per inch. This "RMS thread" or "society thread" is still in common use today. Alternatively, some objective manufacturers use designs based on ISO metric screw threads. Photography and imaging Camera lenses (usually referred to as "photographic objectives" instead of simply "objectives") need to cover a large focal plane so are made up of a number of optical lens elements to correct optical aberrations. Image projectors (such as video, movie, and slide projectors) use objective lenses that simply reverse the function of a camera lens, with lenses designed to cover a large image plane and project it at a distance onto another surface. Telescopes In a telescope the objective is the lens at the front end of a refracting telescope (such as binoculars or telescopic sights) or the image-forming primary mirror of a reflecting or catadioptric telescope. A telescope's light-gathering power and angular resolution are both directly related to the diameter (or "aperture") of its objective lens or mirror. The larger the objective, the brighter the objects will appear and the more detail it can resolve. See also List of telescope parts and construction Etendue References External links Lenses Microscope components Microscopy
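Two of the quantities above combine in simple ways: total magnification is the product of objective and eyepiece powers, and the numerical aperture sets the diffraction-limited resolution via the Abbe criterion d = λ/(2·NA), a standard relation not stated in the article. A small sketch:

```python
def total_magnification(objective_power, eyepiece_power):
    """e.g. a 4x objective with a 10x eyepiece gives 40x overall."""
    return objective_power * eyepiece_power

def abbe_resolution_nm(wavelength_nm, numerical_aperture):
    """Diffraction-limited resolution d = lambda / (2 * NA), Abbe criterion."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(total_magnification(4, 10))                   # 40
print(f"{abbe_resolution_nm(550, 1.25):.0f} nm")    # ~220 nm for green light at NA 1.25
```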
Objective (optics)
Chemistry
1,304
11,650,517
https://en.wikipedia.org/wiki/Guilty%20pleasure
A guilty pleasure is something, such as an activity or a piece of media, that one enjoys despite understanding that it is not generally held in high regard or is seen as unusual. For example, a person may secretly enjoy a film while acknowledging that it is poorly made or generally regarded unfavorably. The term can also refer to a taste for foods that are widely considered advisable to avoid, especially for health reasons. For example, coffee, alcoholic beverages, smoking and eating a small piece of chocolate after dinner are considered by many to be guilty pleasures. See also Guilt Peer pressure Shame References External links Conformity Guilt Morality Popular psychology Social influence
Guilty pleasure
Biology
135
73,785,999
https://en.wikipedia.org/wiki/Phacidiopycnis%20washingtonensis
Phacidiopycnis washingtonensis is a species of fungus in the family Phacidiaceae, first described by C.L. Xiao & J.D. Rogers in 2005. It is a weak orchard pathogen and a cause of rubbery rot, also known as speck rot, in postharvest apples. First reported in northern Germany, the rot affects several apple varieties, including the commercially important Jonagold and Elstar. Losses caused by P. washingtonensis during storage are usually below 1% but can reach 5–10% of apples. P. washingtonensis is a weak canker pathogen of apple trees; while commercial trees in orchards do not seem to be at risk, crabapple pollinators can be susceptible. The fungus causes small black dots (fruiting bodies) to form on infected twigs and tree branches. Fruiting bodies contain millions of spores, which serve as the source of fruit infection. Speck rot in postharvest apples is characterized by an initial light brown skin discoloration that progresses to a more blackish discoloration, with a firm, rubbery texture. References External links Fungal plant pathogens and diseases Apple tree diseases Leotiomycetes Fungi described in 2005 Fungus species
Phacidiopycnis washingtonensis
Biology
250
11,420,936
https://en.wikipedia.org/wiki/HgcE%20RNA
The HgcE RNA (also known as Pf3 RNA) gene encodes a non-coding RNA that was identified computationally and experimentally verified in AT-rich hyperthermophiles. The genes in the screen were named hgcA through hgcG ("high GC"). HgcE has since been renamed Pf3 and identified as an H/ACA snoRNA that is suggested to target 23S rRNA for pseudouridylation. This RNA contains two K-turn motifs. It was later identified as the Pab105 H/ACA snoRNA, with rRNA targets. See also HgcC family RNA HgcF RNA HgcG RNA SscA RNA References External links Non-coding RNA
HgcE RNA
Chemistry
156
578,666
https://en.wikipedia.org/wiki/Frequency%20counter
A frequency counter is an electronic instrument, or component of one, that is used for measuring frequency. Frequency counters usually measure the number of cycles of oscillation or pulses per second in a periodic electronic signal. Such an instrument is sometimes called a cymometer, particularly one of Chinese manufacture. Operating principle Most frequency counters work by using a counter, which accumulates the number of events occurring within a specific period of time. After a preset period known as the gate time (1 second, for example), the value in the counter is transferred to a display, and the counter is reset to zero. If the event being measured repeats itself with sufficient stability and the frequency is considerably lower than that of the clock oscillator being used, the resolution of the measurement can be greatly improved by measuring the time required for a whole number of cycles, rather than counting the number of entire cycles observed for a pre-set duration (often referred to as the reciprocal technique). The internal oscillator, which provides the time signals, is called the timebase, and must be calibrated very accurately. If the event to be counted is already in electronic form, simple interfacing with the instrument is all that is required. More complex signals may need some conditioning to make them suitable for counting. Most general-purpose frequency counters will include some form of amplifier, filtering, and shaping circuitry at the input. DSP technology, sensitivity control and hysteresis are other techniques to improve performance. Other types of periodic events that are not inherently electronic in nature will need to be converted using some form of transducer. For example, a mechanical event could be arranged to interrupt a light beam, and the counter made to count the resulting pulses. Frequency counters designed for radio frequencies (RF) are also common and operate on the same principles as lower frequency counters. Often they have a greater counting range before they overflow. For very high (microwave) frequencies, many designs use a high-speed prescaler to bring the signal frequency down to a point where normal digital circuitry can operate. The display on such instruments takes the prescaling factor into account, so the correct value is still shown. Microwave frequency counters can currently measure frequencies up to almost 56 GHz. Above these frequencies, the signal to be measured is combined in a mixer with the signal from a local oscillator, producing a signal at the difference frequency, which is low enough to be measured directly. Accuracy and resolution The accuracy of a frequency counter is strongly dependent on the stability of its timebase. The timebase oscillator is sensitive to mechanical shock, interference, temperature changes and drift due to ageing, any of which can offset its rate; a frequency reading referenced to a disturbed timebase will appear higher or lower than the actual value. Highly accurate circuits are used to generate timebases for instrumentation purposes, usually using a quartz crystal oscillator within a sealed temperature-controlled chamber, known as an oven-controlled crystal oscillator or crystal oven. For higher accuracy measurements, an external frequency reference tied to a very high stability oscillator, such as a GPS-disciplined rubidium oscillator, may be used. Where the frequency does not need to be known to such a high degree of accuracy, simpler oscillators can be used.
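To make the two counting strategies above concrete, here is a minimal simulation assuming an ideal 10 MHz reference timebase; the function names and parameter values are illustrative, not taken from any real instrument.

# Minimal sketch of the two measurement strategies described above,
# assuming an ideal reference timebase. Names and values are illustrative.

REF_CLOCK_HZ = 10_000_000  # assumed 10 MHz timebase

def direct_count(signal_hz: float, gate_time_s: float) -> float:
    """Count whole input cycles during a fixed gate time (+/-1 count error)."""
    counts = int(signal_hz * gate_time_s)  # truncation models the +/-1 count
    return counts / gate_time_s

def reciprocal_count(signal_hz: float, n_cycles: int) -> float:
    """Time a whole number of input cycles against the reference clock."""
    true_duration_s = n_cycles / signal_hz
    ref_ticks = int(true_duration_s * REF_CLOCK_HZ)  # +/-1 tick of the reference
    return n_cycles / (ref_ticks / REF_CLOCK_HZ)

f = 123.456  # a low-frequency input, where the difference is most visible
print(direct_count(f, gate_time_s=1.0))   # ~123 Hz: only ~3 digits of resolution
print(reciprocal_count(f, n_cycles=100))  # ~123.456 Hz: resolution set by the
                                          # 10 MHz reference, not the gate time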
It is also possible to measure frequency using the same techniques in software in an embedded system. A central processing unit (CPU), for example, can be arranged to measure its own frequency of operation, provided it has some reference timebase to compare with. Accuracy is often limited by the available resolution of the measurement. The resolution of a single count is generally proportional to the timebase oscillator frequency and the gate time. Improved resolution can be obtained by several techniques such as oversampling and averaging. Additionally, accuracy can be significantly degraded by jitter on the signal being measured; this error too can be reduced by oversampling and averaging. I/O Interfaces I/O interfaces allow the user to send information to the frequency counter and receive information from it. Commonly used interfaces include RS-232, USB, GPIB and Ethernet. Besides sending measurement results, a counter can notify users when user-defined measurement limits are exceeded. Common to many counters are the SCPI commands used to control them. A new development is built-in LAN-based control via Ethernet, complete with GUIs. This allows one computer to control one or several instruments and eliminates the need to write SCPI commands. See also Frequency meter References External links Agilent's AN200: Fundamentals of electronic frequency counters LCD Frequency Counter How to build your own Frequency Counter Digital electronics Counting instruments Electronic test equipment
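As an illustration of the remote-control interfaces mentioned above, the fragment below queries a counter over LAN using the PyVISA library and generic SCPI strings; the resource address is hypothetical, and the exact command set of any particular counter is an assumption to be checked against the instrument's manual.

# Hypothetical sketch of remote control over Ethernet using PyVISA and
# generic SCPI commands; the address and command strings must be verified
# against the actual instrument's documentation.
import pyvisa

rm = pyvisa.ResourceManager()
counter = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # assumed address

print(counter.query("*IDN?"))               # standard IEEE 488.2 identification
freq = float(counter.query("MEAS:FREQ?"))   # generic SCPI measurement query
print(f"Measured frequency: {freq} Hz")

counter.close()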
Frequency counter
Mathematics,Technology,Engineering
972
62,773,982
https://en.wikipedia.org/wiki/Radcliffe%20wave
The Radcliffe wave is a neighbouring coherent gaseous structure in the Milky Way, dotted with a related high concentration of interconnected stellar nurseries. It stretches about 8,800 light-years. The structure runs along the trajectory of the Milky Way's arms. It lies at its closest (the Taurus Molecular Cloud) around 400 light-years from the Sun and at its farthest (the Cygnus X star complex) about 5,000 light-years away, always within the Local Arm (Orion Arm) itself, spanning about 40% of its length and on average 20% of its width. Its discovery was announced in January 2020, and its proximity surprised astronomers. Formation Scientists do not know how the undulation of dust and gas formed. It has been suggested that it could be the result of a much smaller galaxy colliding with the Milky Way, leaving behind "ripples", or that it could be related to dark matter. Inside the dense clouds, gas can be so compressed that new stars are born. It has been suggested that this may be where the Sun originated. Many of the star-forming regions found in the Radcliffe wave were previously thought to be part of a similar-sized but somewhat heliocentric ring which contained the Solar System, the "Gould Belt". It is now understood that this nearest discrete concentration of sparse interstellar matter instead forms a massive wave. Discovery The wave was discovered by an international team of astronomers including Catherine Zucker and João Alves. It was announced by co-author Alyssa A. Goodman at the 235th meeting of the American Astronomical Society, held at Honolulu, and published in the journal Nature on 7 January 2020. The discovery was made using data collected by the European Space Agency's Gaia space observatory. The wave was invisible in 2D; revealing its pattern required new 3D techniques for mapping interstellar matter, using Glue (software). The proximity of the wave surprised astronomers. It is named after the Radcliffe Institute for Advanced Study in Cambridge, Massachusetts, the place of study of the team. Structure and movement The Radcliffe wave contains four of the five Gould Belt clouds: Orion molecular cloud complex Perseus molecular cloud Taurus molecular cloud Cepheus OB2 The cloud not within its scope is the Rho Ophiuchi Cloud complex, part of a linear structure parallel to the Radcliffe wave. Other structures in the wave, further from the local star system, are Canis Major OB1, the North America Nebula and Cygnus X. The mass of this structure is on the scale of . It has a length of 8,800 light-years (2,700 parsecs) and an amplitude of 520 light-years (160 parsecs). The Radcliffe wave occupies about 20% of the width and 40% of the length of the Local Arm (Orion Arm). The latter has a more dispersed interstellar medium than the wave and contains further large star-forming regions such as Monoceros OB1, the California Nebula, Cepheus Far, and Rho Ophiuchi. A 2024 paper announced the discovery that the Radcliffe wave is oscillating in the form of a traveling wave. See also Antlia 2, another giant ripple across the Milky Way's disc found in data from the Gaia space telescope List of nearby stellar associations and moving groups Great Rift (astronomy) Serpens-Aquila Rift References Further reading External links Interactive map of the Radcliffe wave on the sky The Radcliffe Wave informational site created by Harvard University Star formation Stellar astronomy Star-forming regions Milky Way Astronomical objects discovered in 2020
Radcliffe wave
Astronomy
723
38,109,230
https://en.wikipedia.org/wiki/Peter%20Buchanan%20%28architect%29
Peter Laurence Alexander Cockburn Buchanan (16 October 1942 – 28 August 2023) was a British architect, urbanist, writer, critic, lecturer and exhibition curator. Buchanan is best known for his series of critical essays for The Big Rethink published by The Architectural Review and for his books on architecture. Life and career After schooling in Zimbabwe, Buchanan studied architecture at the University of Cape Town, and after completing his degree in 1968 he worked with architects in Cape Town, first with Gabriel Fagan and then until 1971 with Revel Fox. Subsequently, he worked as an architect and urban designer in Africa, Europe and the Middle East. In 1979 Buchanan took up work with The Architects' Journal and with The Architectural Review, and in 1982 became The Architectural Review's deputy editor. From 1992 he worked as a freelancer. Buchanan curated the travelling exhibitions Renzo Piano Building Workshop: Selected Projects and Ten Shades of Green for the Architectural League of New York. His books included works stemming from these exhibitions: the five volumes of Renzo Piano Building Workshop: Complete Works and Ten Shades of Green. Buchanan also worked as a consultant on urban design projects and publications. At the end of 2011, The Architectural Review launched "The Big Rethink", a year-long campaign of essays and events with, at its core, a series of monthly essays by Peter Buchanan. The essays were intended to involve architects in the challenges posed by the global economic and environmental crises. Architects were encouraged to re-evaluate the role of their profession, and to change architecture and design practice in order to meet the challenges posed by the crises while improving the quality of life. Buchanan taught summer schools and master classes, lectured in a wide range of places including universities, and published in journals in many countries. Peter Buchanan died from lung cancer on 23 August 2023, at the age of 80. Publications "The Big Rethink" Series of The Architectural Review (AR): The Big Rethink: Spiral Dynamics and Culture, 23 November 2012 The Big Rethink: Rethinking Architectural Education, 28 September 2012 The Big Rethink: Lessons from Peter Zumthor and other living masters, 28 August 2012 (with a review of the work of Herman Hertzberger, Renzo Piano, Emilio Ambasz and Peter Zumthor) The Big Rethink: Place and Aliveness: Pattern, play and the planet, 24 July 2012 The Big Rethink: Learning from Four Modern Masters, 28 May 2012 The Big Rethink: Transcend And include The Past, 24 April 2012 (with a discussion of the work of Christopher Alexander) The Big Rethink: The Purposes of Architecture, 27 March 2012 The Big Rethink: Integral Theory, 29 February 2012 (on architecture viewed from the point of view of the integral theory of Ken Wilber) The Big Rethink: Farewell to modernism − and modernity too, 30 January 2012 The Big Rethink: Towards a Complete Architecture, 21 December 2011 Other publications: Ushering in a third industrial revolution, AR, 31 January 2012 1987 January: 'Corb87: Master of a misunderstood modernism', AR, 1 September 2010 (on the occasion of the centenary of Le Corbusier's birth) 1983 July: 'High-Tech', AR, 9 August 2010 Books: Peter Buchanan. Norman Foster: Free University of Berlin: The Philological Library, Prestel, 2011, Peter Buchanan, Paul Finch, Felix Mara, Hattie Hartman, Peter Blundell Jones (and James Pallister (ed.)): The London 2012 Velodrome, Hopkins Architects, Expedition Engineering, BDSP, Grant Associates. 
The Architects' Journal, Emap Inform, 2011, Anthony Sargent, Peter Buchanan: The Sage Gateshead: Foster + Partners, Prestel, 2010, Peter Buchanan, Michele Alassio: Emilio Ambasz Casa de Retiro Espiritual, Skira, 2005, Peter Buchanan, Frances Dunkels (and with prefaces by R.G.W. Anderson and Norman Foster): The Great Court and The British Museum, British Museum Press, 2000, Norman Foster, Martin Pawley, Helmut Engel, Peter Buchanan: Der neue Reichstag (The new Reichstag), Brockhaus in der Wissenmedia, 1999, P. Buchanan: Renzo Piano Building Workshop: Complete Works, Volumes 1 to 5, Phaidon Press P. Buchanan: Ten Shades of Green, WW Norton Josep Martorell, Peter Buchanan: UIA Barcelona 96 Competitions: Three Areas in Barcelona, Actar Coac Assn of Catalan Arc, 1997, P. Buchanan: Vazquez Consuegra, Gustavo Gili, 1993, Oriol Bohigas, Peter Buchanan, Vittorio Magnago Lampugnani: Barcelona, arquitectura y ciudad, 1980–1992. (English-language translation: Barcelona, city and architecture, 1980–1992, ) References Sources Peter Buchanan, Fundación Arquitectura y Sociedad Speakers 2012: Peter Buchanan, South Africa, World Architecture Festival External links The Big Rethink, The Architectural Review (list) 1942 births 2023 deaths 20th-century South African architects Architecture writers Malawian architects Malawian essayists Urban theorists
Peter Buchanan (architect)
Engineering
1,077
171,087
https://en.wikipedia.org/wiki/Normal%20science
Normal science, identified and elaborated on by Thomas Samuel Kuhn in The Structure of Scientific Revolutions, is the regular work of scientists theorizing, observing, and experimenting within a settled paradigm or explanatory framework. Regarding science as puzzle-solving, Kuhn explained normal science as slowly accumulating detail in accord with established broad theory, without questioning or challenging the underlying assumptions of that theory. The route to normal science Kuhn stressed that historically, the route to normal science could be a difficult one. Prior to the formation of a shared paradigm or research consensus, would-be scientists were reduced to the accumulation of random facts and unverified observations, in the manner recorded by Pliny the Elder or Francis Bacon, while simultaneously beginning the foundations of their field from scratch through a plethora of competing theories. Arguably at least the social sciences remain at such a pre-paradigmatic level today. Normal science at work Kuhn considered that the bulk of scientific work was that done by the 'normal' scientist, as they engaged with the threefold task of articulating the paradigm, precisely evaluating key paradigmatic facts, and testing those new points at which the theoretical paradigm is open to empirical appraisal. Paradigms are central to Kuhn's conception of normal science. Scientists derive rules from paradigms, which also guide research by providing a framework for action that encompasses all the values, techniques, and theories shared by the members of a scientific community. Paradigms gain recognition from more successfully solving acute problems than their competitors. Normal science aims to improve the match between a paradigm's predictions and the facts of interest to a paradigm. It does not aim to discover new phenomena. According to Kuhn, normal science encompasses three classes of scientific problems. The first class of scientific problems is the determination of significant fact, such as the position and magnitude of stars in different galaxies. When astronomers use special telescopes to verify Copernican predictions, they engage the second class: the matching of facts with theory, an attempt to demonstrate agreement between the two. Improving the value of the gravitational constant is an example of articulating a paradigm theory, which is the third class of scientific problems. The breakdown of consensus The normal scientist presumes that all values, techniques, and theories falling within the expectations of the prevailing paradigm are accurate. Anomalies represent challenges to be puzzled out and solved within the prevailing paradigm. Only if an anomaly or series of anomalies resists successful deciphering long enough and for enough members of the scientific community will the paradigm itself gradually come under challenge during what Kuhn deems a crisis of normal science. If the paradigm is unsalvageable, it will be subjected to a paradigm shift. Kuhn lays out the progression of normal science that culminates in scientific discovery at the time of a paradigm shift: first, one must become aware of an anomaly in nature that the prevailing paradigm cannot explain. Then, one must conduct an extended exploration of this anomaly. The crisis only ends when one discards the old paradigm and successfully maps the original anomaly onto a new paradigm. The scientific community embraces a new set of expectations and theories that govern the work of normal science. 
Kuhn calls such discoveries scientific revolutions. Successive paradigms replace each other and are necessarily incompatible with each other. In this way, however, according to Kuhn, normal science possesses a built-in mechanism that ensures the relaxation of the restrictions that previously bound research, whenever the paradigm from which they derive ceases to function effectively. Kuhn's framework restricts the permissibility of paradigm falsification to moments of scientific discovery. Criticism Kuhn's normal science is characterized by upheaval over cycles of puzzle-solving and scientific revolution, as opposed to cumulative improvement. In Kuhn's historicism, moving from one paradigm to the next completely changes the universe of scientific assumptions. Imre Lakatos has accused Kuhn of falling back on irrationalism to explain scientific progress. Lakatos likens Kuhnian scientific change to a mystical or religious conversion ungoverned by reason. With the aim of presenting scientific revolutions as rational progress, Lakatos provided an alternative framework of scientific inquiry in his paper Falsification and the Methodology of Scientific Research Programmes. His model of the research programme preserves cumulative progress in science where Kuhn's model of successive irreconcilable paradigms in normal science does not. Lakatos' basic unit of analysis is not a singular theory or paradigm, but rather the entire research programme that contains the relevant series of testable theories. Each theory within a research programme shares the same common assumptions and is supported by a belt of more modest auxiliary hypotheses that serve to explain away potential threats to the theory's core assumptions. Lakatos evaluates problem shifts, changes to auxiliary hypotheses, by their ability to produce new facts, better predictions, or additional explanations. Lakatos' conception of a scientific revolution involves the replacement of degenerative research programmes by progressive research programmes. Rival programmes persist as minority views. Lakatos is also concerned that Kuhn's position may result in the controversial position of relativism, for Kuhn accepts multiple conceptions of the world under different paradigms. Although the developmental process he describes in science is characterized by an increasingly detailed and refined understanding of nature, Kuhn does not conceive of science as a process of evolution towards any goal or telos. He has noted his own sparing use of the word truth in his writing. An additional consequence of Kuhn's relativism, which poses a problem for the philosophy of science, is his blurred demarcation between science and non-science. Unlike Karl Popper's deductive method of falsification, under Kuhn, scientific discoveries that do not fit the established paradigm do not immediately falsify the paradigm. They are treated as anomalies within the paradigm that warrant further research, until a scientific revolution refutes the entire paradigm. See also References Further reading W. O. Hagstrom, The Scientific Community (1965) External links Paradigms and normal science Philosophy of science Science and technology studies
Normal science
Technology
1,256
54,501,684
https://en.wikipedia.org/wiki/Reciprocals%20of%20primes
The reciprocals of prime numbers have been of interest to mathematicians for various reasons. They do not have a finite sum, as Leonhard Euler proved in 1737. Like all rational numbers, the reciprocals of primes have repeating decimal representations. In his later years, George Salmon (1819–1904) concerned himself with the repeating periods of these decimal representations of reciprocals of primes. Contemporaneously, William Shanks (1812–1882) calculated numerous reciprocals of primes and their repeating periods, and published two papers "On Periods in the Reciprocals of Primes" in 1873 and 1874. In 1874 he also published a table of primes, and the periods of their reciprocals, up to 20,000 (with help from, and "communicated by", the Rev. George Salmon), and pointed out the errors in previous tables by three other authors. Rules for calculating the periods of repeating decimals from rational fractions were given by James Whitbread Lee Glaisher in 1878. For a prime p other than 2 or 5, the period of the decimal expansion of its reciprocal 1/p divides p − 1. The sequence of recurrence periods of the reciprocals of primes appears in the 1973 Handbook of Integer Sequences. List of reciprocals of primes In the original table, full reptend primes are italicised and unique primes are highlighted. Full reptend primes A full reptend prime, full repetend prime, proper prime or long prime in base b is an odd prime number p such that the Fermat quotient (b^(p−1) − 1)/p (where p does not divide b) gives a cyclic number with p − 1 digits. Therefore, the base b expansion of 1/p repeats the digits of the corresponding cyclic number infinitely. Unique primes A prime p (where p ≠ 2, 5 when working in base 10) is called unique if there is no other prime q such that the period length of the decimal expansion of its reciprocal, 1/p, is equal to the period length of the reciprocal of q, 1/q. For example, 3 is the only prime with period 1, 11 is the only prime with period 2, 37 is the only prime with period 3, and 101 is the only prime with period 4, so they are unique primes. The next larger unique prime is 9091, with period 10, though the next larger period length is 9 (its prime being 333667). Unique primes were described by Samuel Yates in 1980. A prime number p is unique if and only if there exists an n such that Φn(10)/gcd(Φn(10), n) is a power of p, where Φn denotes the nth cyclotomic polynomial, here evaluated at 10. The value of n is then the period of the decimal expansion of 1/p. At present, more than fifty decimal unique primes or probable primes are known. However, there are only twenty-three unique primes below 10^100. The decimal unique primes are 3, 11, 37, 101, 9091, 9901, 333667, 909091, ... . References External links Prime numbers Rational numbers
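The period of 1/p is the multiplicative order of 10 modulo p, which is easy to compute directly. The following sketch is our own illustration, with hypothetical function names; it reproduces the periods quoted above and flags full reptend primes, where the period equals p − 1.

# Illustrative sketch: decimal period of 1/p as the multiplicative order
# of 10 modulo p, and a check for full reptend primes (period == p - 1).

def decimal_period(p: int) -> int:
    """Length of the repeating block of 1/p, for a prime p not dividing 10."""
    order, power = 1, 10 % p
    while power != 1:
        power = (power * 10) % p
        order += 1
    return order

for p in (3, 7, 11, 37, 101, 9091):
    period = decimal_period(p)
    tag = " (full reptend)" if period == p - 1 else ""
    print(f"1/{p} has period {period}{tag}")
# Expected: 3 -> 1, 7 -> 6 (full reptend), 11 -> 2, 37 -> 3, 101 -> 4,
# and 9091 -> 10, matching the unique-prime periods quoted above.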
Reciprocals of primes
Mathematics
612
44,496,652
https://en.wikipedia.org/wiki/Stained%20glass%20in%20Liverpool%20Cathedral
The stained glass in Liverpool Cathedral all dates from the 20th century. The designs were planned by a committee working in conjunction with the architect of the cathedral, Giles Gilbert Scott, with the intention of forming an integrated scheme throughout the cathedral. A number of stained glass designers were involved in the scheme, but the major contributors came from James Powell and Sons (Whitefriars Glass), in particular J. W. Brown, James Hogan, and Carl Edwards. The subjects portrayed in the windows are numerous and diverse. They include scenes and characters from the Old and New Testaments, evangelists, church fathers, saints, and laymen, some famous, others more humble. The windows in the Lady Chapel celebrate the part that women have played in Christianity. The designs in the windows at the ends of the cathedral are based on canticles, the east window on the Te Deum laudamus, and the west window on the Benedicite. The earlier designs are dark, but the later windows are much brighter and more colourful. Much of the glass was damaged by bombing in the Second World War. The windows replacing them were based on the originals, but often using simpler and more colourful designs. History The foundation stone of Liverpool Cathedral was laid on 19 July 1904, and it was completed in 1979. Giles Gilbert Scott won the competition to design the cathedral, and a Stained Glass Committee under the chairmanship of Sir Frederick Radcliffe was established to organise the design of the stained glass in the windows. The architect worked with the committee initially to decide on "the main lines on which the design of the window should be based and the extent to which is to be of clear glass or coloured". The committee then decided on the subjects to be depicted and, in discussion with the stained glass artist, agreed on the details of the design; Scott was concerned from the outset that "the windows should not detract from the architecture". The committee continued to work during the construction of the cathedral under a series of chairmen, whose discussions were often very detailed. The oldest windows in the cathedral are dark in colour, but with changes in manufacturing techniques from the 1930s, the later windows are much brighter and more colourful. Description Lady Chapel The Lady Chapel was the earliest part of the cathedral to be built. There was a competition in 1907 to design the windows, which was won by James Powell and Sons, who commissioned J. W. Brown as designer. Brown had worked for Powell's until 1886 and then worked freelance, but from 1891 he was "the firm's preferred designer for prestigious projects". As the chapel is dedicated to St Mary, they are based on the role that women have played in the history of Christianity. Running through all the windows is a scroll containing the words of the Magnificat. On the north side are holy women from the British Isles, and on the south side are mainly saints commemorated in the Prayer Book. The Lady Chapel was damaged by bombing on 6 September 1940, and all the glass had to be replaced. The work was undertaken by James Hogan, who used simplified adaptations of the original designs. Following Hogan's death in 1948 the work was continued by Carl Edwards; the resulting windows are much brighter than the originals. The windows at the rear of the chapel and on the staircase were donated by the Girls' Friendly Society, and were designed by Brown. 
Known as the "Noble Women" windows, they depict women who have made major contributions to society, including Elizabeth Fry, Grace Darling, and Kitty Wilkinson. Ambulatory and Chapter House The four windows in the ambulatory are the only designs in the cathedral by Burlison and Grylls, each depicting two saints from a nation of the British Isles. On the steps leading to the Chapter House is the only window in the cathedral by C. E. Kempe and Company. It commemorates the Woodward family, who were local corn merchants between 1803 and 1915, and includes biblical references to corn and harvest. The Chapter House was donated by local Freemasons as a memorial to their members lost in the First World War. The windows were made by Morris & Co. and designed by Henry Dearle, reflecting the interests and traditions of the Freemasons. The windows were damaged in the Second World War and repaired by James Powell and Sons. East window The east window, designed by Brown, dominates the east end of the cathedral, rising above the reredos, and is based on the theme of the Te Deum laudamus. At the top of the window is the risen Christ, and around and below are members of the heavenly choir. Under this are four lancet windows, each representing one of the communities praising God. The left window represents 'the glorious company of the apostles', with Saint Raphael at the top. Below are fourteen figures: the twelve apostles, excluding Judas Iscariot but including Saint Matthias, with Saint Paul and Saint Barnabas. The next window commemorates 'the goodly fellowship of the prophets'. At the top is Saint Michael, with fifteen figures below. These include Isaiah, Elijah, John the Baptist, Saint Athanasius, Saint Augustine, John Wycliffe, Thomas Cranmer, and John Wesley. The third window represents 'the noble army of martyrs', with Archangel Gabriel at the top. Below are fifteen Christian martyrs, starting with Saint Stephen. Underneath are Zechariah and the Holy Innocents, Saint Alban, Saint Oswald, and Saint Boniface. At the bottom are figures representing martyrs from Madagascar, Africa, Melanesia, and China. The lancet window on the right commemorates 'the holy church throughout all the world', with an angel, possibly Uriel, at the top. Underneath are various representations: King Alfred as a warrior, Dante as a poet, Fra Angelico as a painter, the musician J. S. Bach, the scientist Isaac Newton, and the physician Thomas Linacre. Other figures commemorate law, commerce, scholarship, and architecture. Also included are Christopher Columbus and Francis Drake. Choir aisles There are four main windows in the choir aisles, two on each side, and they are concerned with the four Gospels. The windows on the north side are original, but those on the south side were destroyed by bombing and were renewed. In the renewal, the central mullion of these windows was widened, and the design of the glass was simplified and made more vibrant. Each window, known by its predominant colour, shows the author of the gospel at the top with his symbol. Below are figures linked with the subject matter of the gospel. The windows on the north side are by Brown; the left window, the Sapphire window, represents Saint Matthew and shows a depiction of the Nativity on one side, and the Epiphany on the other. The Gold window commemorates Saint Luke and shows the Feeding of the Five Thousand, and the Raising of Jairus' daughter. The windows on the south side are by Hogan. 
The Ruby window represents Saint John and includes biblical scenes together with the Old Testament figures of Daniel, Ezekiel, Jonah, and Job. Saint Mark is in the Emerald window, with scenes of the Baptism of Jesus and the Transfiguration. Also included are the disciples Saint Simon and Saint Andrew, and the Old Testament figures Noah, Zechariah, Enoch, and Malachi. At the east ends of the aisles are rose windows by Brown. The window in the north aisle relates to "journeys across the sea and undertaken in faith", namely Moses crossing the Red Sea, Saint Paul's journey to Rome, Saint Columba planting a cross on Iona, and missionaries of the Melanesian Mission landing in the Solomon Islands. The images in the rose window in the south aisle show instances of God's power being demonstrated through water, namely Noah holding a model of the ark, Jesus calming the disciples in a storm, Jesus walking on water, and Saint Paul after his shipwreck in Malta. Central space The windows on the north and south sides of the central space were designed by Hogan; each includes three tall lancet windows topped by a rose window. The area of glass in each window is , the sill is above the level of the floor, and the top of the rose window is above floor level. The north window shows figures and themes from the Old Testament, with Moses holding the Ten Commandments in the rose window. Below, the figures include Adam and Eve, Noah, Solomon, prophets, and important characters from Israelite history. The south window depicts characters and scenes from the New Testament. The Holy Trinity is depicted in the rose window, below which are depictions of events including the Crucifixion and the Ascension, together with a variety of saints. Transepts The War Memorial Chapel forming the northeast transept has as its themes the aftermath of the First World War, sacrifice and the risen life. The design of its window was started by Brown and completed by Hogan. It shows suffering and death, including a depiction of the Crucifixion. The original window by Brown was destroyed by bombing; the window replacing it shows Christ with his arms outstretched in welcome at the top. Below are scenes of acts of compassion, including figures such as Saint Francis. The southwest transept forms the baptistry, and its window by Herbert Hendrie of Whitefriars depicts salvation, particularly through water and healing. The window in the northwest transept has the theme of the Church and the State. Nave aisles The six windows in the nave aisles deal with historical subjects, all but one designed by Carl Edwards. The exception is the west window on the south side, designed by William Wilson. This is the Bishops' Window, and includes Nicholas Ridley, Hugh Latimer, and William Temple. The middle window is the Parsons' Window, and depicts notable clergymen including Thomas Arnold (with a rugby ball), Revd Peter Green, and Revd W. Farquhar Hook. The Layman's Window includes tradesmen who worked on building the cathedral, members of the committees responsible, and a depiction of Giles Gilbert Scott. The Musicians' Window contains composers, performers, and conductors who have played a part in the development of Anglican church music. The Hymnologists' Window includes hymn writers such as C. F. Alexander and Cecil Spring Rice. Finally there is the Scholars' Window, with theologians, philosophers, and biblical scholars. In the corner is the Very Revd Frederick Dwelly, the first dean of the cathedral. 
West window Following Scott's death in 1960 it was decided to change the design of the west end of the cathedral, which had consisted of a small rose window and an elaborate porch. Frederick Thomas and Roger Pinkney, who had both worked with Scott, produced a simplified design that gave the opportunity for a large west window. Created by Carl Edwards and based on the theme of the Benedicite, the window consists of a round-headed window at the top, and three tall lancet windows below. It covers an area of , each lancet window being more than high. Revd Noel Vincent, the former canon treasurer of the cathedral, states that the top part of the window represents "the risen Christ in glory looking down ... in compassion on the world", and the images beneath depict "all creation united in peace". Notes References Citations Sources External links Cathedral floor plan (PDF file) Lists of stained glass works Glass architecture Windows Stained glass
Stained glass in Liverpool Cathedral
Materials_science,Engineering
2,323
11,761,825
https://en.wikipedia.org/wiki/Jaroslav%20Josef%20Pol%C3%ADvka
Jaroslav Josef Polivka (20 April 1886 – 9 February 1960) was a Czech structural engineer who collaborated with Frank Lloyd Wright between 1946 and 1959. Polivka, also known as J. J. Polivka, was born in Prague in 1886. He received his undergraduate degree in structural engineering at the College of Technology in Prague in 1909. He then studied at the Federal Polytechnic Institute in Zurich, Switzerland and at the Prague Institute of Technology, where he earned a doctoral degree in 1917. After serving in the First World War, he opened his own architectural and engineering office in Prague and developed his skills in stress analysis of reinforced concrete, pre-stressed reinforced concrete and steel structures. Polivka became an expert in photo-elastic stress analysis, a technique that examines small-scale transparent models in polarized light. In Prague Polivka worked together with the avant-garde Czech architect Josef Havlíček on the Habich Building (1927–28) and Chicago Building (1927–28). Polivka designed the structural frame of the Czech Pavilion at the Paris International Exhibition of 1937, collaborating with the renowned Czech architect Jaromír Krejcar and Czech engineer René Wiesner. Two years later, he worked with Czech architect Kamil Roškot to design another Czech Pavilion at the 1939 New York World's Fair. In 1939 Polivka immigrated to the United States and took a position as research associate and lecturer at the University of California, Berkeley. In 1941, he and Victor di Suvero co-invented a structural design technique that received a patent for improvements in structures. Polivka and his son Milos translated Eduardo Torroja's book Philosophy of Structures into English, published in 1958. In 1946 Polivka began to work with Frank Lloyd Wright, collaborating on several major projects until Wright's death in 1959. For Wright's projects Polivka performed stress analyses and investigations of specific building materials. They worked on a total of seven projects, two of which were built: the Johnson Wax Research Tower, 1946–1951, at Racine, Wisconsin, and the Guggenheim Museum, 1946–1959, in New York City, for which Polivka managed to design out the gallery ramp perimeter columns initially required. Their other well-known design proposal was the reinforced concrete Butterfly Bridge (proposed at a world-record span of 1,000 ft) at the Southern Crossing of San Francisco Bay (1949–52). Polivka performed the photoelastic analysis for the Podolsko Bridge, an arch bridge that spans the Vltava between Podolsko and Temešvár in Písek District, Czech Republic. At the time of its completion in 1943, it was the longest arch bridge in Czechoslovakia. He died in Berkeley, California. References "Contractor Meets Close Design Tolerances in Building Long-Span Concrete Arch Bridge" J. J. Polivka, Civil Engineering, ASCE American Society of Civil Engineers January 1949 External links Polivka archives at University at Buffalo, The State University of New York University of California Berkeley Structural engineers Engineers from Prague 1886 births 1960 deaths Frank Lloyd Wright ETH Zurich alumni UC Berkeley College of Engineering faculty Czechoslovak engineers Expatriates from Austria-Hungary in Switzerland Czechoslovak emigrants to the United States
Jaroslav Josef Polívka
Engineering
662
51,807,728
https://en.wikipedia.org/wiki/Western%20blot%20normalization
Normalization of Western blot data is an analytical step that is performed to compare the relative abundance of a specific protein across the lanes of a blot or gel under diverse experimental treatments, or across tissues or developmental stages. The overall goal of normalization is to minimize the effects of experimental variation, such as inconsistent sample preparation, unequal sample loading across gel lanes, or uneven protein transfer, which can compromise the conclusions that can be obtained from Western blot data. Currently, there are two methods for normalizing Western blot data: (i) housekeeping protein normalization and (ii) total protein normalization. Procedure Normalization occurs directly on either the gel or the blotting membrane. First, the stained gel or blot is imaged, a rectangle is drawn around the target protein in each lane, and the signal intensity inside the rectangle is measured. The signal intensity obtained can then be normalized with respect to the signal intensity of the internal loading control detected on the same gel or blot. When using protein stains, the membrane may be incubated with the chosen stain before or after immunodetection, depending on the type of stain. Housekeeping protein controls Housekeeping genes and proteins, including β-Actin, GAPDH, HPRT1, and RPLP1, are often used as internal controls in western blots because they are thought to be expressed constitutively, at the same levels, across experiments. However, recent studies have shown that expression of housekeeping proteins (HKPs) can change across different cell types and biological conditions. Therefore, scientific publishers and funding agencies now require that normalization controls be validated in advance for each experiment to ensure reproducibility and accuracy of the results. Fluorescent antibodies When using fluorescent antibodies to image proteins in western blots, normalization requires that the user define the upper and lower limits of quantitation and characterize the linear relationship between signal intensity and the amount of sample loaded for each antigen. Both the target protein and the normalization control need to fluoresce within the dynamic range of detection. Many HKPs are expressed at high levels and are preferred for use with highly expressed target proteins. Proteins expressed at lower levels are difficult to detect on the same blot. Fluorescent antibodies are commercially available, and fully characterized antibodies are recommended to ensure consistency of results. When fluorescent detection is not utilized, the loading control protein and the protein of interest must differ considerably in molecular weight so they are adequately separated by gel electrophoresis for accurate analysis. Membrane stripping Membranes need to be stripped and re-probed using a new set of detection antibodies when detecting multiple protein targets on the same blot. Ineffective stripping could result in a weak signal from the target protein. To prevent loss of the antigen, only three stripping incubations are recommended per membrane. It can be difficult to completely eliminate the signal from highly abundant proteins, so it is recommended to detect weakly expressed proteins first. Exogenous spike-in controls Since HKP levels can be inconsistent between tissues, scientists can control for the protein of interest by spiking in a pure, exogenous protein of a known concentration within the linear range of the antibody. 
Compared to HKP, a wider variety of proteins are available for spike-in controls. Total protein normalization In total protein normalization (TPN), the abundance of the target protein is normalized to the total amount of protein in each lane. Because TPN is not dependent on a single loading control, validation of controls and stripping/reprobing of blots for detection of HKPs is not necessary. This can improve precision (down to 0.1 μg of total protein per lane), cost-effectiveness, and data reliability. Fluorescent stains and stain-free gels require special equipment to visualize the proteins on the gel/blot. Stains may not cover the blot evenly; more stain might collect towards the edges of the blot than in the center. Non-uniformity in the image can result in inaccurate normalization. Pre-antibody stains Anionic dyes such as Ponceau S and Coomassie brilliant blue, and fluorescent dyes like Sypro Ruby and Deep Purple, are used before antibodies are added because they do not affect downstream immunodetection. Ponceau S is a negatively charged reversible dye that stains proteins a reddish pink color and is removed easily by washing in water. The intensity of Ponceau S staining decreases quickly over time, so documentation should be conducted rapidly. A linear range of up to 140 μg is reported for Ponceau S with poor reproducibility due to its highly time-dependent staining intensity and low signal-to-noise ratio. Fluorescent dyes like Sypro Ruby have a broad linear range and are more sensitive than anionic dyes. They are permanent, photostable stains that can be visualized with a standard UV or blue-light transilluminator or a laser scan. Membranes can then be documented either on film or digitally using a charge-coupled device camera. Sypro Ruby blot staining is time-intensive and tends to saturate above 50 μg of protein per lane. Post-antibody stains Amido black is a commonly used permanent post-antibody anionic stain that is more sensitive than Ponceau S. This stain is applied after immunodetection. Stain-free technology Stain-free technology employs an in-gel chemistry for imaging. This chemical reaction does not affect protein transfer or downstream antibody binding. Also, it does not involve staining/destaining steps, and the intensity of the bands remain constant over time. Stain-free technology cannot detect proteins that do not contain tryptophan residues. A minimum of two tryptophans is needed to enable detection. The linear range for stain-free normalization is up to 80 μg of protein per lane for 18-well and up to 100 μg per lane for 12-well Criterion mid-sized gels. This range is compatible with typical protein loads in quantitative western blots and enables loading control calculations over a wide protein-loading range. A more efficient stain-free method has also recently become available. When using high protein loads, stain-free technology has demonstrated greater success than stains. References External links V3 Stain-free Workflow for a Practical, Convenient, and Reliable Total Protein Loading Control in Western Blotting Molecular biology techniques
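The arithmetic behind both normalization strategies above is the same ratio computation. The sketch below is our own illustration: the function names and the densitometry values are made up, and it normalizes a target band either to a housekeeping-protein band or to total lane protein, then expresses each lane relative to a control lane.

# Illustrative sketch of the ratio arithmetic behind both normalization
# strategies; band intensities here are made-up densitometry values.

def normalize(target: list[float], loading_control: list[float]) -> list[float]:
    """Divide each lane's target signal by that lane's loading-control signal."""
    return [t / c for t, c in zip(target, loading_control)]

def fold_change(normalized: list[float], control_lane: int = 0) -> list[float]:
    """Express every lane relative to a chosen control lane."""
    ref = normalized[control_lane]
    return [x / ref for x in normalized]

target_band   = [1200.0, 2600.0, 2500.0]   # target protein, lanes 1-3
hkp_band      = [800.0, 1650.0, 810.0]     # e.g. a validated housekeeping control
total_protein = [9000.0, 18500.0, 9100.0]  # total-protein (TPN) signal per lane

print(fold_change(normalize(target_band, hkp_band)))       # HKP normalization
print(fold_change(normalize(target_band, total_protein)))  # total protein normalization
# Lane 2 was loaded with roughly twice as much sample; both methods correct
# for this, leaving lane 3 as the only lane with a genuine ~2-fold increase.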
Western blot normalization
Chemistry,Biology
1,322
30,125,638
https://en.wikipedia.org/wiki/Cubic%20metre
The cubic metre (in Commonwealth English and international spelling as used by the International Bureau of Weights and Measures) or cubic meter (in American English) is the unit of volume in the International System of Units (SI). Its symbol is m3. It is the volume of a cube with edges one metre in length. An alternative name, which allowed a different usage with metric prefixes, was the stère, still sometimes used for dry measure (for instance, in reference to wood). Another alternative name, no longer widely used, was the kilolitre. Conversions 1 cubic metre = 1,000 litres (exactly) ≈ 35.3 cubic feet ≈ 1.31 cubic yards ≈ 6.29 oil barrels ≈ 220 imperial gallons ≈ 264 US fluid gallons. A cubic metre of pure water at the temperature of maximum density (3.98 °C) and standard atmospheric pressure (101.325 kPa) has a mass of 1,000 kg, or one tonne. At 0 °C, the freezing point of water, a cubic metre of water has slightly less mass, 999.972 kilograms. A cubic metre is sometimes abbreviated in various ways when superscript characters or markup cannot be used (e.g. in some typewritten documents and postings in Usenet newsgroups). The "cubic metre" symbol is encoded by Unicode as a dedicated code point. Multiples and submultiples Multiples Cubic decametre the volume of a cube of side length one decametre (10 m) equal to a megalitre 1 dam3 = 1,000 m3 = 1 ML Cubic hectometre the volume of a cube of side length one hectometre (100 m) equal to a gigalitre in civil engineering abbreviated MCM for million cubic metres 1 hm3 = 1,000,000 m3 = 1 GL Cubic kilometre the volume of a cube of side length one kilometre (1,000 m) equal to a teralitre 1 km3 = 1,000,000,000 m3 = 1 TL (810713.19 acre-feet; 0.239913 cubic miles) Submultiples Cubic decimetre the volume of a cube of side length one decimetre (0.1 m) equal to a litre 1 dm3 = 0.001 m3 = 1 L (also known as DCM (= Deci Cubic Meter) in rubber compound processing) Cubic centimetre the volume of a cube of side length one centimetre (0.01 m) equal to a millilitre 1 cm3 = 10^−6 m3 = 1 mL Cubic millimetre the volume of a cube of side length one millimetre (0.001 m) equal to a microlitre 1 mm3 = 10^−9 m3 = 1 μL See also Standard cubic foot References Orders of magnitude (volume) Units of volume SI derived units
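The conversions above are straightforward multiplications; the snippet below is an illustrative sketch of ours (the constant and function names are assumptions) that converts a volume in cubic metres to the other units listed.

# Illustrative unit conversions for the cubic metre; factor names are ours.
CONVERSIONS_PER_M3 = {
    "litres": 1000.0,            # exact
    "cubic feet": 35.3147,       # approximate
    "cubic yards": 1.30795,      # approximate
    "oil barrels": 6.28981,      # approximate
    "imperial gallons": 219.969, # approximate
    "US fluid gallons": 264.172, # approximate
}

def convert_cubic_metres(volume_m3: float) -> dict[str, float]:
    """Return the volume expressed in each of the listed units."""
    return {unit: volume_m3 * factor for unit, factor in CONVERSIONS_PER_M3.items()}

for unit, value in convert_cubic_metres(1.0).items():
    print(f"1 m^3 ~ {value:g} {unit}")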
Cubic metre
Mathematics
603
4,240,854
https://en.wikipedia.org/wiki/Bipolaron
In physics, a bipolaron is a type of quasiparticle consisting of two polarons. In organic chemistry, it is a molecule or a part of a macromolecular chain containing two positive charges in a conjugated system. Bipolarons in physics In physics, a bipolaron is a bound pair of two polarons. An electron in a material may cause a distortion in the underlying lattice. The combination of electron and distortion (which may also be understood as a cloud of phonons) is known as a polaron (in part because the interaction between electron and lattice is via a polarization). When two polarons are close together, they can lower their energy by sharing the same distortions, which leads to an effective attraction between the polarons. If the interaction is sufficiently large, that attraction leads to a bound bipolaron. For strong attraction, bipolarons may be small. Small bipolarons have integer spin and thus share some of the properties of bosons. If many bipolarons form without coming too close, they might be able to form a Bose–Einstein condensate. This has led to a suggestion that bipolarons could be a possible mechanism for high-temperature superconductivity. For example, they can lead to a very direct interpretation of the isotope effect. Recently, bipolarons were predicted theoretically in a Bose–Einstein condensate: two polarons exchange sound waves and attract each other, forming a bound state when the coupling strength between the individual polarons and the condensate is strong in comparison with the interactions of the host gas. Bipolarons in organic chemistry In organic chemistry, a bipolaron is a molecule or part of a macromolecular chain containing two positive charges in a conjugated system. The charges can be located at the centre of the chain or at its termini. Bipolarons and polarons are encountered in doped conducting polymers such as polythiophene. It is possible to synthesize and isolate bipolaron model compounds for X-ray diffraction studies. The diamagnetic bis(triaryl)amine dication 2 in scheme 1 is prepared from the neutral precursor 1 in dichloromethane by reaction with 4 equivalents of antimony pentachloride. Two resonance structures exist for the dication. Structure 2a is a (singlet) diradical and 2b is the closed-shell quinoid. The experimental bond lengths for the central vinylidene group in 2 are 141 pm and 137 pm, compared to 144 pm and 134 pm for the precursor 1, implying some contribution from the quinoid structure. On the other hand, when a thiophene unit is added to the core in the structure depicted in scheme 2, these bond lengths are identical (around 138 pm), making it a true hybrid. See also Quinonoid zwitterions References Ions Quasiparticles
Bipolaron
Physics,Materials_science
598
47,301,819
https://en.wikipedia.org/wiki/Cray%20Urika-GD
The Cray Urika-GD is a graph discovery appliance: a computer system that finds and analyzes relationships and patterns in the data collected by a supercomputer. The Cray Urika-GD generates graphs based on large amounts of data, often from multiple sources, and makes useful connections among those data. Many organizations now have vast stores of information like this—called "big data"—that they can analyze and use to improve their operations, products or services. One example of the appliance in use is a healthcare organization that uses it to find, among its 13 million patient records, information that doctors can use to develop treatment plans. By categorizing records based on illness, age, treatment, and outcome, the appliance can provide insights for treating other patients. "Big data" is also being tapped in professional sports. In 2014, Cray revealed that a Major League Baseball team was using a Urika-GD appliance to graph and analyze its own performance statistics. References External links "Global Supercomputer Leader Cray Inc. Awarded $80 million by King Abdullah University of Science and Technology (KAUST)." Dataconomy. 18 November 2014. "The Evolution of Data Analytics." Infographic. Eileen McNulty (22 May 2014). "Understanding Big Data: The Seven V's." Dataconomy. Cray products
Cray Urika-GD
Technology
291
33,928,494
https://en.wikipedia.org/wiki/Dietary%20diversity
Dietary diversity is the variety, or the number of different food groups, that people eat over a given period. Researchers often use the terms "dietary diversity" and "dietary variety" interchangeably; however, some researchers distinguish the two, defining dietary diversity as the number of different food groups consumed and dietary variety as the number of actual food items consumed. The "Nutritional Diversity" study (or "Biodiverse Food Study, Panama"), conducted by permaculturist and athlete Brandon Eisler and his team, suggests that a diet drawing on more than 60 different species of naturally grown local foods constitutes a complete "evolutionary" diet for optimal performance and health, and further argues that demand for this model could help address planetary and ecological health concerns, including climate change. Dietary diversity is related to nutrient intake and is an indicator of dietary quality. Moreover, dietary diversity is associated with health outcomes such as overweight and increased mortality. Dietary diversity is influenced by various determinants such as physical and mental health, economic status, and the food environment. References Eating behaviors of humans
Dietary diversity
Biology
228
72,901,566
https://en.wikipedia.org/wiki/TIBER
TIBER (Threat Intelligence-Based Ethical Red Teaming) is a standard for red teaming developed by the European Central Bank. It can be adopted by member states of the European Union. See also ENISA References External links European Central Bank Computer security standards
TIBER
Technology,Engineering
50
693,935
https://en.wikipedia.org/wiki/Trinomial%20expansion
In mathematics, a trinomial expansion is the expansion of a power of a sum of three terms into monomials. The expansion is given by $(a+b+c)^n = \sum_{i+j+k=n} \binom{n}{i,j,k}\, a^i b^j c^k$, where $n$ is a nonnegative integer and the sum is taken over all combinations of nonnegative indices $i$, $j$ and $k$ such that $i + j + k = n$. The trinomial coefficients are given by $\binom{n}{i,j,k} = \frac{n!}{i!\,j!\,k!}$. This formula is a special case of the multinomial formula for $m = 3$. The coefficients can be defined with a generalization of Pascal's triangle to three dimensions, called Pascal's pyramid or Pascal's tetrahedron. Derivation The trinomial expansion can be calculated by applying the binomial expansion twice, setting $d = b + c$, which leads to $(a+b+c)^n = (a+d)^n = \sum_{r=0}^{n} \binom{n}{r}\, a^{n-r} d^r = \sum_{r=0}^{n} \binom{n}{r}\, a^{n-r} \sum_{s=0}^{r} \binom{r}{s}\, b^{r-s} c^s$. Above, the resulting $d^r = (b+c)^r$ in the second line is evaluated by the second application of the binomial expansion, introducing another summation over the index $s$. The product of the two binomial coefficients is simplified by shortening $r!$, giving $\binom{n}{r}\binom{r}{s} = \frac{n!}{(n-r)!\,(r-s)!\,s!}$, and comparing the index combinations here with the ones in the exponents, they can be relabelled to $i = n - r$, $j = r - s$, $k = s$, which provides the expression given in the first paragraph. Properties The number of terms of an expanded trinomial is the triangular number $t_{n+1} = \frac{(n+1)(n+2)}{2}$, where $n$ is the exponent to which the trinomial is raised. Example An example of a trinomial expansion with $n = 2$ is: $(a+b+c)^2 = a^2 + b^2 + c^2 + 2ab + 2ac + 2bc$. See also Binomial expansion Pascal's pyramid Multinomial coefficient Trinomial triangle References Factorial and binomial topics
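As a numerical check of the formulas above, the short sketch below computes trinomial coefficients from factorials and verifies the n = 2 example and the term count; it is an illustration of ours rather than part of the article, and the function name is hypothetical.

# Illustrative check of the trinomial coefficient formula n!/(i! j! k!).
from math import factorial

def trinomial(n: int, i: int, j: int, k: int) -> int:
    """Coefficient of a^i b^j c^k in (a + b + c)^n, with i + j + k = n."""
    assert i + j + k == n and min(i, j, k) >= 0
    return factorial(n) // (factorial(i) * factorial(j) * factorial(k))

n = 2
terms = {(i, j, n - i - j): trinomial(n, i, j, n - i - j)
         for i in range(n + 1) for j in range(n + 1 - i)}
print(terms)       # coefficients 1, 2, 2, 1, 2, 1 for the six monomials
print(len(terms))  # 6 terms, the triangular number (n+1)(n+2)/2
assert sum(terms.values()) == 3 ** n  # setting a = b = c = 1 gives 3^n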
Trinomial expansion
Mathematics
285
74,007
https://en.wikipedia.org/wiki/Technology%20assessment
Technology assessment (TA) is a practical process of determining the value of a new or emerging technology in and of itself or against existing technologies. It is a means of assessing and rating a new technology from the time it is first developed to the time it is potentially accepted by the public and authorities for further use. In essence, TA could be defined as "a form of policy research that examines short- and long-term consequences (for example, societal, economic, ethical, legal) of the application of technology." General description TA is the study and evaluation of new technologies. It is a way of trying to forecast and prepare for upcoming technological advancements and their repercussions for society, and then to make decisions based on those judgments. It is based on the conviction that new developments within, and discoveries by, the scientific community are relevant for the world at large rather than just for the scientific experts themselves, and that technological progress can never be free of ethical implications. Technology assessment was initially practiced in the 1960s in the United States, where it focused on analyzing the significance of "supersonic transportation, pollution of the environment and ethics of genetic screening." Also, technology assessment recognizes the fact that scientists normally are not trained ethicists themselves and accordingly ought to be very careful when passing ethical judgement on their own, or their colleagues', new findings, projects, or work in progress. TA is a very broad phenomenon which also includes aspects such as "diffusion of technology (and technology transfer), factors leading to rapid acceptance of new technology, and the role of technology and society." Technology assessment assumes a global perspective and is future-oriented, not anti-technological. TA considers its task as an interdisciplinary approach to solving already existing problems and preventing potential damage caused by the uncritical application and the commercialization of new technologies. Therefore, any results of technology assessment studies must be published, and particular consideration must be given to communication with political decision-makers. An important problem concerning technology assessment is the so-called Collingridge dilemma: on the one hand, impacts of new technologies cannot be easily predicted until the technology is extensively developed and widely used; on the other hand, control or change of a technology is difficult once it is widely used. The dilemma emphasizes that technologies in their early stages are unpredictable with regard to their implications, yet become tough to regulate or control once they have been widely accepted by society. Shaping or directing a technology in the desired direction becomes difficult for authorities at that point. Several approaches have been put in place to tackle this dilemma, a common one being "anticipation." In this approach, authorities and assessors either anticipate the ethical impacts of a technology through "technomoral scenarios", which risk being too speculative to be reliable, or ethically regulate technological developments as they unfold through "sociotechnical experiments", which discards anticipation of the future implications.
Technology assessments, which are a form of cost–benefit analysis, are a medium for decision makers to evaluate and analyze solutions with regard to the particular technology being assessed, and to choose the best possible option that is cost-effective and meets regulatory and budgetary requirements. However, they are difficult if not impossible to carry out in an objective manner, since subjective decisions and value judgments have to be made regarding a number of complex issues such as (a) the boundaries of the analysis (i.e., what costs are internalized and externalized), (b) the selection of appropriate indicators of potential positive and negative consequences of the new technology, (c) the monetization of non-market values, and (d) a wide range of ethical perspectives. Consequently, most technology assessments are neither objective nor value-neutral exercises but instead are greatly influenced and biased by the values of the most powerful stakeholders, which are in many cases the developers and proponents (i.e., corporations and governments) of the new technologies under consideration. In the most extreme view, as expressed by Ian Barbour in Technology, Environment, and Human Values, technology assessment is "a one-sided apology for contemporary technology by people with a stake in its continuation." Overall, technology assessment is a very broad field which reaches beyond technology and industrial phenomena. It handles the assessment of effects, consequences, and risks of a technology, but is also a forecasting function, looking into the projection of opportunities and skill development as an input into strategic planning. Some of the major fields of TA are: information technology, hydrogen technologies, nuclear technology, molecular nanotechnology, pharmacology, organ transplants, gene technology, artificial intelligence, the Internet and many more. Forms and concepts of technology assessment The following concepts of TA are those that are most visible and practiced. There are, however, a number of further TA forms that are only proposed as concepts in the literature or are the label used by a particular TA institution. Parliamentary TA (PTA): TA activities of various kinds whose addressee is a parliament. PTA may be performed directly by members of those parliaments (e.g. in France and Finland), on their behalf by related TA institutions (such as in the UK, in Germany and Denmark), or by organisations not directly linked to a parliament (such as in the Netherlands and Switzerland). Expert TA (often also referred to as the classical TA or traditional TA concept): TA activities carried out by (a team of) TA and technical experts. Input from stakeholders and other actors is included only via written statements, documents and interviews, not as in participatory TA. Participatory TA (pTA): TA activities which actively, systematically and methodologically involve various kinds of social actors as assessors and discussants, such as different kinds of civil society organisations and representatives of the state systems, but characteristically also individual stakeholders and citizens (lay persons), technical scientists and technical experts. Standard pTA methods include consensus conferences, focus groups, scenario workshops etc. Sometimes pTA is further divided into expert-stakeholder pTA and public pTA (including lay persons).
The participatory assessment makes room for the inclusion of laypeople and establishes the value of varied points of view, interests and knowledge. It underscores the need for decision makers and actors to draw on a varied set of mindsets and perspectives to make a combined, informed and rational decision. Constructive TA (CTA): This concept of TA, developed in the Netherlands but also applied and discussed elsewhere, attempts to broaden the design of new technology through feedback of TA activities into the actual construction of technology. Contrary to other forms of TA, CTA is not directed toward influencing regulatory practices by assessing the impacts of technology. Instead, CTA wants to address social issues around technology by influencing design practices. It aims to "mobilize insights on co-evolutionary dynamics of science, technology and society for anticipating and assessing technologies, rather than being predominantly concerned with assessing societal impacts of a quasi-given technology." This assessment establishes the value of involving users in the development and innovation process, encouraging the development and adaptation of new technology in their daily life. Discursive TA or Argumentative TA: This type of TA wants to deepen the political and normative debate about science, technology and society. It is inspired by ethics, policy discourse analysis and the sociology of expectations in science and technology. This mode of TA aims to clarify and bring under public and political scrutiny the normative assumptions and visions that drive the actors who are socially shaping science and technology. This assessment can be used as a tool to analyse and evaluate the background of every reaction or perception that arises for each technology; often some of the reactions assessors receive are not related to science or technology at all. One way of analyzing actors and their reactions is by "studying prospective users' everyday-life practices in their own right, and in naturalistic settings." Accordingly, argumentative TA not only addresses the side effects of technological change, but deals with both the broader impacts of science and technology and the fundamental normative question of why developing a certain technology is legitimate and desirable. Technology assessment institutions around the world Many TA institutions are members of the European Parliamentary Technology Assessment (EPTA) network; some work for the STOA panel of the European Parliament and formed the European Technology Assessment Group (ETAG). Centre for Technology Assessment (TA-SWISS), Bern, Switzerland.
Department of Science, Technology and Policy Studies, University of Twente Institute of Technology Assessment (ITA) of the Austrian Academy of Sciences, Vienna Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Germany (former) Office of Technology Assessment (OTA) The Danish Board of Technology Foundation, Copenhagen Norwegian Board of Technology, Oslo Oficina de Ciencia y Tecnología del Congreso (OficinaC), Spain Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST), Paris Parliamentary Office of Science and Technology (POST), London Rathenau Institute, The Hague Science and Technology Options Assessment (STOA) panel of the European Parliament, Brussels Science and Technology Policy Research (SPRU), Sussex Technology centre CAS (TC CAS), Prague, Czech Republic See also History of science and technology Horizon scanning Scientific lacuna Technology Technology dynamics Technology forecasting Technology readiness level Technology transfer References External links Scientific Technology Options Assessment (STOA), European Parliament European Technology Assessment Group for STOA Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT), Germany Office of Technology Assessment at the German Parliament (TAB) TA-SWISS Centre for Technology Assessment Institute of Technology Assessment (ITA), Austrian Academy of Sciences, Vienna, Austria The Danish Board of Technology Rathenau Institute The Norwegian Board of Technology Technology forecasting Design for X Technology transfer Technology systems
Technology assessment
Technology,Engineering
2,017
24,503,693
https://en.wikipedia.org/wiki/Journal%20of%20Cellular%20Biochemistry
The Journal of Cellular Biochemistry publishes descriptions of original research in which complex cellular, pathogenic, clinical, or animal model systems are studied by biochemical, molecular, genetic, epigenetic, or quantitative ultrastructural approaches. History The journal was previously called the Journal of Supramolecular Structure (1972–1980) and the Journal of Supramolecular Structure and Cellular Biochemistry (1981). Abstracting and indexing The Journal of Cellular Biochemistry is indexed and/or abstracted in the following databases: BIOBASE, Biochemistry and Biophysics Citation Index, Biological Abstracts, BIOSIS Previews, CAB Abstracts, Cambridge Scientific Abstracts, Chemical Abstracts Service/SciFinder, CSA Biological Sciences Database, Current Awareness in Biological Sciences, Current Contents/Life Sciences, EMBASE, EORTC Database, Index Medicus/MEDLINE/PubMed, Reference Update, Science Citation Index, and Scopus. References Biochemistry journals Molecular and cellular biology journals Academic journals established in 1972
Journal of Cellular Biochemistry
Chemistry
199
34,528,578
https://en.wikipedia.org/wiki/Totally%20drug-resistant%20tuberculosis
Totally drug-resistant tuberculosis (TDR-TB) is a generic term for tuberculosis strains that are resistant to a wider range of drugs than strains classified as extensively drug-resistant tuberculosis. Extensively drug-resistant tuberculosis is tuberculosis that is resistant to isoniazid and rifampicin, any fluoroquinolone, and any of the three second-line injectable TB drugs (amikacin, capreomycin, and kanamycin). TDR-TB has been identified in three countries: India, Iran, and Italy. The term was first presented in 2006, in reports showing TB that was resistant to many second-line drugs and possibly all the medicines used to treat the disease. Lack of testing made it unclear which drugs the TDR-TB strains were resistant to. The emergence of TDR-TB has been documented in four major publications. However, the term is not recognized by the World Health Organization, because "totally drug resistant" has not been clearly defined for tuberculosis, and certain strains cannot properly be deemed resistant due to a lack of in vitro testing. TDR-TB has resulted from further mutations within the bacterial genome that confer resistance, beyond those seen in XDR- and MDR-TB. Development of resistance is associated with poor management of cases. As of 2011, drug susceptibility testing was done in less than 5% of TB cases globally. Without testing to determine drug resistance profiles, MDR- or XDR-TB patients may develop resistance to additional drugs and can continue to spread the disease to others. TDR-TB is relatively poorly documented, as many countries do not test patient samples against a broad enough range of drugs to diagnose such a comprehensive array of resistance. The United Nations' Special Programme for Research and Training in Tropical Diseases has set up the TDR Tuberculosis Specimen Bank to archive specimens of TDR-TB. Cases of TDR-TB have also been reported in the United States. The first was a young man from Peru named Oswaldo Juarez, who had come to the United States for school and to study the English language. About a year later he was voluntarily admitted to A.G. Holley State Hospital, where he was treated with unconventional drugs, not usually used for TB, in extremely high doses. He stayed in that hospital for over nineteen months and left cured of TB. See also Extensively drug-resistant tuberculosis (XDR-TB) Multidrug-resistant tuberculosis (MDR-TB) References External links MDRIpred Tuberculosis Antibiotic-resistant bacteria
Totally drug-resistant tuberculosis
Biology
550
5,067,954
https://en.wikipedia.org/wiki/Z%20Carinae
Z Carinae and z Carinae are designations referring to stars in the constellation Carina. The Bayer designation z Carinae (z Car) is shared by two stars in the constellation Carina: HD 96566 (z1 Carinae), the brighter of the pair, often referred to as simply z Carinae V371 Carinae (z2 Carinae) They are separated by 0.53° on the sky. The variable star designation Z Carinae is used by the star HD 88946, a 10.20m Mira variable. Carina (constellation) Carinae, z
Z Carinae
Astronomy
123
616,196
https://en.wikipedia.org/wiki/Quanta%20Computer
Quanta Computer Incorporated is a Taiwan-based manufacturer of notebook computers and other electronic hardware. Its customers include Apple Inc., Dell, Hewlett-Packard Inc., Acer Inc., Alienware, Amazon.com, Cisco, Fujitsu, Gericom, Lenovo, LG, Maxdata, Microsoft, MPC, BlackBerry Ltd, Sharp Corporation, Siemens AG, Sony, Sun Microsystems, Toshiba, Valve, Verizon Wireless, and Vizio. Quanta has extended its businesses into enterprise network systems, home entertainment, mobile communication, automotive electronics, and digital home markets. The company also designs, manufactures and markets GPS systems, including handheld GPS, in-car GPS, Bluetooth GPS and GPS with other positioning technologies. Quanta Computer was announced as the original design manufacturer (ODM) for the XO-1 by the One Laptop per Child project on December 13, 2005, and took an order for one million laptops as of February 16, 2007. In October 2008, it was announced that Acer would phase out Quanta from the production chain, and instead outsource manufacturing of 15 million Aspire One netbooks to Compal Electronics. In 2011, Quanta designed servers in conjunction with Facebook as part of the Open Compute Project. It was estimated that Quanta had a 31% worldwide market share of notebook computers in the first quarter of 2008. History The firm was founded in 1988 by Barry Lam, a Shanghai-born businessman who grew up in Hong Kong and received his education in Taiwan, with a starting capital of less than $900,000. A first notebook prototype was completed in November 1988, with factory production beginning in 1990. Throughout the 1990s, Quanta established contracts with Apple Computers and Gateway, among others, opening an after-sales office in California in 1991 and another one in Augsburg, Germany in 1994. In 1996, Quanta signed a contract with Dell, making that firm Quanta's largest customer at the time. In 2014, Quanta ranked 409th on Fortune's Global 500 list. Its strongest showing was in 2016, when it ranked 326th. In 2020, Quanta dropped to rank 377. Products Apple Watch Apple MacBook Air Apple MacBook Pro ThinkPad Z60m Subsidiaries Subsidiaries of Quanta Computer include: Quanta Cloud Technology Inc - provider of data center hardware. FaceVsion Technology Inc - telecommunications, webcam, and electronic products. CloudCast Technology Inc - information software and data processing - liquidated in February 2017. TWDT Precision Co., Ltd. (TWDT) - 55% ownership, which was sold in June 2016. RoyalTek International - In January 2006, RoyalTek became a member of Quanta Inc., allowing Quanta to create a top-down integration of technology and manufacturing, with manufacturing factories in Taiwan and Shanghai. Techman Robot Inc. Techman Robot Inc. is a cobot manufacturer founded by Quanta in 2016. It is based in Taoyuan's Hwa Ya Technology Park. It is the world's second-largest manufacturer of collaborative robots after Universal Robots. Major facilities Shanghai, China (QSMC) Built in December 2000, this was the first mainland China plant established by Quanta Computer; it focuses on OEM and ODM production and currently employs nearly 30,000 people. Huangjian Tang, Quanta's Chairman for China, manages seven major plants, F1 to F7, two large warehouses, H1 and H2, and the Q-BUS Research and Development facility. Chongqing, China (QCMC) Constructed in April 2010, this was the third plant built by Quanta Computer in China.
Court case In a case decided in 2008, LG Electronics sued Quanta Computer for patent infringement over Quanta's combining of licensed Intel components with non-Intel components. The Supreme Court of the United States ruled that LG, which had a patent-licensing deal with Intel, did not have the right to sue, because Intel's authorized sale of the components to Quanta exhausted LG's patent rights. See also List of companies of Taiwan References External links Quanta market share Computer hardware companies Computer systems companies Companies based in Taoyuan City Manufacturing companies established in 1988 Technology companies established in 1988 Taiwanese companies established in 1988 Electronics manufacturing companies
Quanta Computer
Technology
887
3,945,314
https://en.wikipedia.org/wiki/Cahill%20cycle
The Cahill cycle, also known as the alanine cycle or glucose-alanine cycle, is the series of reactions in which amino groups and carbons from muscle are transported to the liver. It is quite similar to the Cori cycle in the cycling of nutrients between skeletal muscle and the liver. When muscles degrade amino acids for energy needs, the resulting nitrogen is transaminated to pyruvate to form alanine. This is performed by the enzyme alanine transaminase (ALT), which converts L-glutamate and pyruvate into α-ketoglutarate and L-alanine. The resulting L-alanine is shuttled to the liver, where the nitrogen enters the urea cycle and the pyruvate is used to make glucose. The Cahill cycle is less productive than the Cori cycle, which uses lactate, since a byproduct of energy production from alanine is the production of urea. Removal of the urea is energy-dependent, requiring four "high-energy" phosphate bonds (3 ATP hydrolyzed to 2 ADP and one AMP), so the net ATP produced is less than in the Cori cycle. However, unlike in the Cori cycle, NADH is conserved because lactate is not formed, allowing it to be oxidized via the electron transport chain. Studies have demonstrated a clinical relevance of the Cahill cycle in the development of new treatments for liver-associated diseases and cancers. Reactions Because skeletal muscle is unable to use the urea cycle to safely dispose of ammonium ions generated in the breakdown of branched-chain amino acids, it must get rid of them in a different way. To do so, the ammonium is combined with free α-ketoglutarate via a transamination reaction in the cell, yielding glutamate and the corresponding α-keto acid. Alanine aminotransferase (ALT), also known as glutamic-pyruvic transaminase (GPT), then converts glutamate back into α-ketoglutarate, this time transferring the ammonium to pyruvate resulting from glycolysis and forming free alanine. The alanine acts as a shuttle: it leaves the cell, enters the blood stream and travels to hepatocytes in the liver, where essentially this entire process is reversed. Alanine undergoes a transamination reaction with free α-ketoglutarate to yield glutamate, which is then deaminated to form pyruvate and, ultimately, free ammonium ion. Hepatocytes are capable of metabolizing the toxic ammonium via the urea cycle, thus disposing of it safely. Having successfully rid the muscle cells of the ammonium ion, the cycle then provides the energy-deprived skeletal muscle cells with glucose. Pyruvate formed from the deamination of glutamate in the hepatocytes undergoes gluconeogenesis to form glucose, which can then enter the bloodstream and be shuttled to skeletal muscle tissue, providing it with the energy source it needs. Function The Cahill cycle ultimately serves as a method of ridding muscle tissue of the toxic ammonium ion, as well as indirectly providing glucose to energy-deprived muscle tissue. During long periods of fasting, skeletal muscle can be degraded for use as an energy source to supplement the glucose produced from the breakdown of glycogen. The breakdown of branched-chain amino acids yields a carbon skeleton used for energy purposes, as well as free ammonium ions. However, the cycle's presence and physiological significance in non-mammalian land vertebrates is unclear. For example, although some fish use alanine as a nitrogen carrier, the cycle is unlikely to take place in them due to a slower glucose turnover rate and lower release of alanine from exercising muscle tissue.
The alanine cycle also serves other purposes, such as the recycling of carbon skeletons in skeletal muscle and the liver, and participation in the transport of ammonium to the liver and its conversion into urea. Studies have demonstrated that the glucose-alanine cycle may play a direct role in the regulation of hepatic (liver) mitochondrial oxidation, particularly during periods of extended fasting. Hepatic mitochondrial oxidation is a key process in the metabolism of glucose and fatty acids, involving the citric acid cycle and oxidative phosphorylation for the generation of ATP. Understanding the factors that influence hepatic mitochondrial oxidation is of great interest due to its role in mediating diseases such as non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), and type 2 diabetes. An active area of research is attempting to exploit the regulatory role of hepatic mitochondrial oxidation to develop both targeted and non-targeted therapeutics for such diseases, and the glucose-alanine cycle may be one of the key factors. A study performed on both rodents and humans showed that decreased alanine turnover during a 60-hour period of fasting correlated with a notable reduction in hepatic mitochondrial oxidation, as compared to subjects who underwent a 12-hour overnight fast. The rate of oxidative activity was quantified primarily by monitoring rates of citrate synthase flux (VCS), citrate synthase being a critical enzyme in the process of mitochondrial oxidation. To confirm whether the glucose-alanine cycle has a causal relationship with the observed effect, a secondary group of patients, subjected to the same fasting conditions, was subsequently injected with a dose of L-alanine. Post-infusion, the 60-hour fasted patients showed a marked increase in hepatic mitochondrial oxidation, confirming the relationship. The glucose-alanine cycle may also be of significant clinical relevance in oncological (cancer) pathogenesis. A 2020 study explored the role of the glucose-alanine cycle in the metabolic reprogramming of hepatocellular carcinoma (HCC). HCC is the most common form of liver cancer and the third most common cause of cancer-related deaths worldwide. The search for alternative treatment options remains an active area of research, as currently available therapeutics (surgery, radiotherapy, chemotherapy) generally have severe side effects and/or low success rates with HCC. One common characteristic of many novel alternative and/or supplementary treatments is the targeting of the cellular metabolism of cancer cells, due to their general hyper-metabolic state, which favors rapid growth and proliferation. In conjunction with consuming glucose at a much more rapid rate than healthy cells, cancer cells rely heavily on amino acid metabolism to satisfy their avid nutritional needs. The researchers involved in this study speculated that exogenous alanine, processed via the glucose-alanine cycle, is one of the alternative energy sources for HCC cells in a nutrient-deficient environment, and that this dependency can be harnessed for targeted therapy. To demonstrate this experimentally, HCC cells were cultured in vitro in a nutrient-poor medium and then supplied with alanine. The alanine supplementation was enough to promote HCC cell growth under those conditions, a phenomenon called metabolic reprogramming.
Next, they performed a series of overexpression and loss-of-function experiments and determined that glutamic-pyruvic transaminase 1 (GPT1) is the GPT isoform primarily involved in alanine turnover in HCC cells, consistent with previous findings that GPT1 tends to be found in the liver. They proceeded by treating the metabolically reprogrammed HCC cells with berberine, a naturally occurring inhibitor of GPT1; the observed effect was to curb ATP production and subsequently the growth of the alanine-supplied cancer cells. Their study demonstrated that components of the glucose-alanine cycle, particularly GPT1, may be a good choice as a target for alternative HCC therapies, and that berberine, as a plant-derived selective GPT1 inhibitor, has potential for use in one of these novel medicines. The concept of alanine as an alternative fuel for cancer cells was similarly demonstrated in other studies performed on pancreatic cancer cells. References External links Diagram at Colorado.edu at indstate.edu Carbohydrate metabolism Metabolic pathways de:Cori-Zyklus#Glukose-Alanin-Zyklus
Cahill cycle
Chemistry
1,716
22,932,607
https://en.wikipedia.org/wiki/Trading%20turret
A trading turret or dealer board is a specialized telephony key system that is generally used by financial traders on their trading desks. Trading progressed from floor trading through phone trading to electronic trading during the latter half of the twentieth century, with phone trading dominating during the 1980s and 1990s. Although most trading volume is now done via electronic trading platforms, some phone trading persists, and trading turrets are common on the trading desks of investment banks. Voice trading turrets Trading turrets, unlike typical phone systems, have a number of features, functions and capabilities specifically designed for the needs of financial traders. Trading turrets enable users to visualize and prioritize incoming call activity from customers or counter-parties and to call these same people instantaneously by pushing a single button to access dedicated point-to-point telephone lines (commonly called ringdown circuits). In addition, many traders have dozens or hundreds of dedicated speed-dial buttons and large-distribution hoot-n-holler or squawk box circuits which allow immediate mass dissemination or exchange of information with other traders within their organization or with customers and counter-parties. Because of these requirements, many turrets have multiple handsets and multi-channel speaker units; generally these are shared by teams (for example: equities, fixed income, foreign exchange) or in some cases globally across whole trading organizations. Unlike standard Private Branch Exchange (PBX) telephone systems designed for general office users, trading turret system architecture has historically relied on highly distributed switching architectures that enable parallel processing of calls and ensure a "non-blocking, non-contended" state, where there is always a greater number of trunks (paths in/out of the system) than users, as well as fault tolerance, which ensures that any one component failure cannot affect all users or lines. As processing power has increased and switching technologies have matured, voice trading systems are evolving from digital time-division multiplexing (TDM) system architectures to Internet Protocol (IP) server-based architectures. IP technologies have transformed communications for traders by enabling converged, multimedia communications that include, in addition to traditional voice calls, presence-based communications such as unified communications and messaging, instant messaging (IM), chat and audio/video conferencing. Some modern trading turret models are optimised to integrate with PBX platforms. By natively registering on Cisco Unified Communications Manager (CUCM), for example, office users and turret users can collaborate more tightly while reducing total cost of ownership. While some trading turret systems also include intercom functions, it is common for financial services firms to use an independent intercom system alongside their trading turret systems. See also Electronic trading platform Dedicated line Stock market data systems Straight-through processing (STP) Trading system Trading room References External links A Bankers Guide to Trading Turrets, Peter Redshaw, 12 September 2013. Gartner. Computer telephony integration Financial markets Share trading Telephone exchange equipment
Trading turret
Technology
591
26,536,158
https://en.wikipedia.org/wiki/Cooperative%20coevolution
Cooperative coevolution (CC) is an evolutionary computation method. It divides a large problem into subcomponents and solves them independently in order to solve the large problem. The subcomponents are also called species. The subcomponents are implemented as subpopulations, and the only interaction between subpopulations is in the cooperative evaluation of each individual of the subpopulations. The general CC framework is nature-inspired: the individuals of a particular group of species mate amongst themselves, while mating between different species is not feasible. The cooperative evaluation of each individual in a subpopulation is done by concatenating the current individual with the best individuals from the rest of the subpopulations, as described by M. Potter. The cooperative coevolution framework has been applied to real-world problems such as pedestrian detection systems, large-scale function optimization and neural network training. It has also been further extended into another method, called constructive cooperative coevolution. Pseudocode
i := 0
for each subproblem S do
    initialise a subpopulation Pop0(S)
    calculate fitness of each member in Pop0(S)
while termination criteria not satisfied do
    i := i + 1
    for each subproblem S do
        select Popi(S) from Popi-1(S)
        apply genetic operators to Popi(S)
        calculate fitness of each member in Popi(S)
See also Constructive cooperative coevolution Genetic algorithms Differential evolution Metaheuristic References Evolutionary computation
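To make the loop above concrete, here is a minimal sketch in Python of cooperative coevolution for minimising a separable function; the two-subcomponent split, population size, generation count and mutation scheme are illustrative choices, not part of the general framework.

import random

# Minimise f(x) = sum(x_i^2) by splitting the 4 variables into 2 subcomponents.
DIMS, SPLIT, POP, GENS = 4, 2, 20, 50

def fitness(full_vector):
    return sum(v * v for v in full_vector)

# One subpopulation per subcomponent; each individual is a partial vector.
pops = [[[random.uniform(-5, 5) for _ in range(SPLIT)] for _ in range(POP)]
        for _ in range(DIMS // SPLIT)]
best = [p[0][:] for p in pops]  # best-known individual per subcomponent

def evaluate(s, indiv):
    # Cooperative evaluation: concatenate this individual with the best
    # individuals from all other subpopulations (Potter's scheme).
    collab = [indiv if k == s else best[k] for k in range(len(pops))]
    return fitness([v for part in collab for v in part])

for _ in range(GENS):
    for s, pop in enumerate(pops):
        # Mutate each individual; keep the child if it cooperates better.
        for n, indiv in enumerate(pop):
            child = [v + random.gauss(0, 0.3) for v in indiv]
            if evaluate(s, child) < evaluate(s, indiv):
                pop[n] = child
        best[s] = min(pop, key=lambda ind: evaluate(s, ind))[:]

print("best full vector:", [v for part in best for v in part])

Note that the subpopulations never exchange genetic material; they interact only through the shared evaluation, exactly as the pseudocode prescribes.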
Cooperative coevolution
Biology
321
916,366
https://en.wikipedia.org/wiki/Keratin%2021
Keratin 21 is a type I cytokeratin that has been characterized using an immunologically specific fusion protein. It has not been found in humans, only in Rattus norvegicus, and is first detectable after 18–19 days of gestation. References Keratins Mammalian proteins
Keratin 21
Chemistry
61
60,380,362
https://en.wikipedia.org/wiki/Rolf%20Prince
Rudolf George Herman Prince (2 August 1928 – 3 July 2017), commonly known as Rolf Prince, was a noted chemical engineering academic specializing in distillation and mass transfer. Life Prince was born into a Jewish family in Chemnitz, Germany, on 2 August 1928. He and his mother moved to Italy in 1936, to Ireland in 1939 and to New Zealand in 1940, and he became a naturalised New Zealand citizen in 1946. He was educated at Christchurch Boys' High School in Christchurch, then studied chemical engineering and chemistry at Canterbury University College of the University of New Zealand, graduating in 1949. He then took a PhD at the University of Sydney, Australia, becoming a lecturer there. In 1953 he moved to the UK as a process engineer with The Distillers Company. From 1958, Prince pursued an academic career, starting as a lecturer at the University of Canterbury, New Zealand, then in 1960 as a senior lecturer at the University of Sydney, and in 1965 as a professor at the University of Queensland, where he established a new department of chemical engineering. From 1969 to 1994 he was professor and head of the department of chemical engineering at the University of Sydney, remaining there until his retirement in 1998. Prince died in Sydney on 3 July 2017. Family He married Laurel Williamson (19 November 1926 – 7 April 2018), whom he met while a student. They had three children. Honours Officer of the Order of Australia (AO) Peter Nicol Russell Memorial Medal from Engineers Australia President, Institution of Chemical Engineers 1986–7 Fellow, Australian Academy of Technology and Engineering His portrait was painted by Robert Hannaford and won an Archibald Prize in 1998 References 1928 births 2017 deaths Australian chemical engineers People from Chemnitz University of Canterbury alumni Officers of the Order of Australia People educated at Christchurch Boys' High School Academic staff of the University of Canterbury Academic staff of the University of Sydney Academic staff of the University of Queensland Jewish emigrants from Nazi Germany to New Zealand Naturalised citizens of New Zealand German emigrants to Australia Chemical engineering academics
Rolf Prince
Chemistry
404