Dataset columns: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (160 classes), token_count (int64, 3–51.8k).
10,899,167
https://en.wikipedia.org/wiki/Toshihide%20Maskawa
Toshihide Maskawa (1940 – 23 July 2021) was a Japanese theoretical physicist known for his work on CP violation who was awarded one quarter of the 2008 Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature." Early life and education Maskawa was born in Nagoya, Japan. After World War II ended, the Maskawa family operated as a sugar wholesaler. A native of Aichi Prefecture, Toshihide Maskawa graduated from Nagoya University in 1962 and received a Ph.D. degree in particle physics from the same university in 1967. His doctoral advisor was the physicist Shoichi Sakata. From an early age Maskawa enjoyed trivia and also studied mathematics, chemistry, linguistics and a wide range of books. In high school, he loved novels, especially detective and mystery stories and the novels of Ryūnosuke Akutagawa. Career At Kyoto University in the early 1970s, he collaborated with Makoto Kobayashi on explaining broken symmetry (CP violation) within the Standard Model of particle physics. Maskawa and Kobayashi's theory required that there be at least three generations of quarks, a prediction that was confirmed experimentally four years later by the discovery of the bottom quark. Maskawa and Kobayashi's 1973 article, "CP Violation in the Renormalizable Theory of Weak Interaction", is the fourth most cited high energy physics paper of all time as of 2010. The Cabibbo–Kobayashi–Maskawa matrix, which defines the mixing parameters between quarks, was the result of this work. Kobayashi and Maskawa were jointly awarded half of the 2008 Nobel Prize in Physics for this work, with the other half going to Yoichiro Nambu. Maskawa was director of the Yukawa Institute for Theoretical Physics from 1997 to 2003. He was special professor and director general of the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe at Nagoya University, director of the Maskawa Institute for Science and Culture at Kyoto Sangyo University and professor emeritus at Kyoto University. Nobel lecture On 8 December 2008, after telling the audience "Sorry, I cannot speak English", Maskawa delivered his Nobel lecture, "What Did CP Violation Tell Us?", in Japanese at Stockholm University. The audience followed the subtitles on the screen behind him. Personal life Maskawa married Akiko Takahashi in 1967. The couple had two children, Kazuki and Tokifuji. Death Maskawa died of oral cancer at his home in Kyoto on 23 July 2021, the same day as the opening ceremony of the Tokyo Summer Olympic Games, at the age of 81. His death was unrelated to COVID-19 infection. He was cremated in October 2021 after a private funeral.
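For background, the Cabibbo–Kobayashi–Maskawa matrix mentioned above is usually written as follows (standard textbook form in LaTeX, not taken from the article): the weak-interaction eigenstates of the down-type quarks are related to the mass eigenstates by
\begin{pmatrix} d' \\ s' \\ b' \end{pmatrix} =
\begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix}
\begin{pmatrix} d \\ s \\ b \end{pmatrix}.
With three quark generations this unitary matrix retains one irreducible complex phase, which is what permits CP violation; with only two generations no such phase survives, which is why the Kobayashi–Maskawa argument predicted at least a third generation.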
Professional record July 1967 – Research Associate of the Faculty of Science, Nagoya University May 1970 – Research Associate of the Faculty of Science, Kyoto University April 1976 – Associate Professor of the Institute for Nuclear Study, University of Tokyo April 1980 – Professor of the Research Institute for Fundamental Physics (present Yukawa Institute for Theoretical Physics), Kyoto University November 1990 – Professor of the Faculty of Science, Kyoto University 1995 – Councilor, Kyoto University 1997 January – Professor of Yukawa Institute for Theoretical Physics, Kyoto University April – Director of Yukawa Institute for Theoretical Physics, Kyoto University 2003 April – Professor Emeritus of Kyoto University April – Professor of Kyoto Sangyo University (till May 2009) October 2004 – Director of the Research Institute, Kyoto Sangyo University October 2007 – Distinguished Invited University Professor of Nagoya University 2009 February – Trustee of Kyoto Sangyo University March – University Professor of Nagoya University June – Head of Maskawa Juku and Professor, Kyoto Sangyo University (till March 2019) 2010 April – Director of the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI) at Nagoya University December – Member of the Japan Academy 2018 April – Director Emeritus of KMI at Nagoya University April 2019 – Professor Emeritus of Kyoto Sangyo University Recognition 1979 – Nishina Memorial Prize 1985 – Sakurai Prize 1985 – Japan Academy Prize 1995 – Chunichi Culture Award 1995 – Asahi Prize 2007 – High Energy and Particle Physics Prize by European Physical Society 2008 – Nobel Prize in Physics 2008 – Order of Culture 2010 – Member of Japan Academy Political proposition In 2013, Maskawa and chemistry Nobel laureate Hideki Shirakawa issued a statement against the Japanese State Secrecy Law." The following is Maskawa's main political proposition: Support for Article 9 of the Japanese Constitution Criticizing Japanese politician visits to the Yasukuni Shrine Support for selective couple surname system See also Progress of Theoretical Physics List of Japanese Nobel laureates List of Nobel laureates affiliated with Kyoto University References External links Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya University 1940 births 2021 deaths Scientists from Nagoya Japanese Nobel laureates Japanese physicists Nobel laureates in Physics Recipients of the Order of Culture Japanese theoretical physicists J. J. Sakurai Prize for Theoretical Particle Physics recipients Particle physicists Nagoya University alumni Academic staff of Nagoya University Academic staff of the University of Tokyo
Toshihide Maskawa
Physics
1,022
24,008,343
https://en.wikipedia.org/wiki/C4H7NO
The molecular formula C4H7NO (molar mass: 85.10 g/mol) may refer to: Acetone cyanohydrin (ACH) Methacrylamide 2-Pyrrolidone N-Vinylacetamide (NVA) Molecular formulas
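As a quick check of the quoted molar mass, a minimal Python sketch using standard atomic weights (C 12.011, H 1.008, N 14.007, O 15.999):
# Molar mass of C4H7NO from standard atomic weights (g/mol)
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 4, "H": 7, "N": 1, "O": 1}
molar_mass = sum(n * atomic_weight[el] for el, n in formula.items())
print(round(molar_mass, 2))  # 85.11, consistent with the quoted 85.10 g/mol up to rounding of atomic weights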
C4H7NO
Physics,Chemistry
75
51,239,611
https://en.wikipedia.org/wiki/Techno-authoritarianism
Techno-authoritarianism, also known as IT-backed authoritarianism, digital authoritarianism or digital dictatorship, refers to the state use of information technology in order to control or manipulate both foreign and domestic populations. Tactics of digital authoritarianism may include mass surveillance including through biometrics such as facial recognition, internet firewalls and censorship, internet blackouts, disinformation campaigns, and digital social credit systems. Although some institutions assert that this term should only be used to refer to authoritarian governments, others argue that the tools of digital authoritarianism are being adopted and implemented by governments with "authoritarian tendencies", including democracies. Most notably, China and Russia have been accused by the Brookings Institution of leveraging the Internet and information technology to repress opposition domestically while undermining democracies abroad. Definition IT-backed authoritarianism refers to an authoritarian regime using cutting-edge information technology in order to penetrate, control and shape the behavior of actors within society and the economy. According to reports and articles on China's practice, the basis of digital authoritarianism is an advanced, all-encompassing and largely real-time surveillance system, which merges government-run systems and databases (e.g. traffic monitoring, financial credit rating, education system, health sector etc.) with company surveillance systems (e.g. of shopping preferences, activities on social media platforms etc.). IT-backed authoritarianism institutionalizes the data transfer between companies and governmental agencies, providing the government with full and regular access to data collected by companies. The authoritarian government remains the only entity with unlimited access to the collected data. IT-backed authoritarianism thus increases the authority of the regime vis-à-vis national and multinational companies as well as vis-à-vis other decentralized or subnational political forces and interest groups. The collected data is utilized by the authoritarian regime to analyze and influence the behavior of a country’s citizens, companies and other institutions. It does so with the help of algorithms based on the principles and norms of the authoritarian regime, automatically calculating credit scores for every individual and institution. In contrast to financial credit ratings, these “social credit scores” are based on the full range of collected surveillance data, including financial as well as non-financial information. IT-backed authoritarianism only allows full participation in a country’s economy and society for those who have a good credit score and thus respect the rules and norms of the respective authoritarian regime. Behavior deviating from these norms incurs automatic punishment through a bad credit score, which leads to economic or social disadvantages (worse loan conditions, lower job opportunities, no participation in public procurement etc.). Severe violation or non-compliance can lead to exclusion from any economic activities on the respective market or (for individuals) to exclusion from public services. Examples China China has been viewed as the cutting edge and the enabler of digital authoritarianism. With its Great Firewall and state-controlled Internet, it has deployed high-tech repression against Uyghurs in Xinjiang and exported surveillance and monitoring systems to 18 countries as of 2019.
According to Freedom House, the China model of digital authoritarianism through Internet control against those who are critical of the CCP features legislations of censorship, surveillance using artificial intelligence (AI) and facial recognition, manipulation or removal of online content, cyberattacks and spear phishing, suspension and revocation of social media accounts, detention and arrests, and forced disappearance and torture, among other means. A report by Carnegie Endowment for International Peace also highlights similar digital repression techniques. In 2013, The Diplomat reported that the Chinese hackers behind the malware attacks on Falun Gong supporters in China, the Philippines, and Vietnam were the same ones responsible for attacks against foreign military powers, targeting email accounts and stealing Microsoft Outlook login information and email contents. The 2022 analysis by The New York Times of over 100,000 Chinese government bidding documents revealed a range of surveillance and data collection practices, from personal biometrics to behavioral data, which are fed into AI systems. China utilizes these data capabilities not only to enhance governmental and infrastructural efficiency but also to monitor and suppress dissent among its population, particularly in Xinjiang, where the government targets the Uyghur community under the guise of counterterrorism and public security. Russia The Russian model of digital authoritarianism relies on strict laws of digital expression and the technology to enforce them. Since 2012, as part of a broader crackdown on civil society, the Russian Parliament has adopted numerous laws curtailing speech and expression. Hallmarks of Russian digital authoritarianism include: The surveillance of all Internet traffic through the System for Operative Investigative Activities (SORM) and the Semantic Archive; Restrictive laws on the freedom of speech and expression, including the blacklisting of hundreds of thousands of websites, and punishment including fines and jail time for activities including slander, "insulting religious feelings," and "acts of extremism". Infrastructure regulations including requirements for Internet service providers (ISPs) to install deep packet inspection equipment under the 2019 Sovereign Internet Law. Myanmar Since the coup d'état in February 2021, the military junta blocked all but 1,200 websites and imposed Internet shutdowns, with pro-military dominating the content on the remaining accessible websites. In May 2021, Reuters reported that telecom and Internet service providers had been secretly ordered to install spyware allowing the military to "listen in on calls, view text messages and web traffic including emails, and track the locations of users without the assistance of the telecom and internet firms." In February 2022, Norwegian service provider Telenor was forced to sell its operation to a local company aligned with the military junta. The military junta also sought to criminalize virtual private networks (VPNs), imposed mandatory registration of devices, and increased surveillance on both social media platforms and via telecom companies. In July 2022, the military executed activist Kyaw Min Yu, after arresting him in November 2021 for prodemocracy social media posts criticizing the coup. 
Africa A study by the African Digital Rights Network (ADRN) revealed that governments in ten African countries—South Africa, Cameroon, Zimbabwe, Uganda, Nigeria, Zambia, Sudan, Kenya, Ethiopia, and Egypt—have employed various forms of digital authoritarianism. The most common tactics include digital surveillance, disinformation, Internet shutdowns, censorship legislation, and arrests for anti-government speech. The researchers highlighted the growing trend of complete Internet or mobile system shutdowns. Additionally, all ten countries utilized Internet surveillance, mobile intercept technologies, or artificial intelligence to monitor targeted individuals using specific keywords. References Authoritarianism Mass surveillance Government by algorithm
Techno-authoritarianism
Engineering
1,356
47,024,473
https://en.wikipedia.org/wiki/Kumdang-2
Kumdang-2 is a drug developed in North Korea and promoted there as a cure for AIDS, Ebola, MERS, and tuberculosis. According to the website Minjok Tongshin, a version of the drug was originally produced in 1996. The name means "golden shower" in Korean. It is manufactured by the Pugang Pharmaceutical Company. According to the Korean Central News Agency, the drug's ingredients include ginseng, small amounts of rare earth metals, and trace amounts of gold and platinum. According to the KCNA, it can also cure cancer, morning sickness, and "harm from the use of computers". The drug was promoted during the deadly bird flu outbreaks in 2006 and 2013. Chemical analysis According to the chemical analysis by the South Korean National Forensic Service, Kumdang-2 turned out to be mostly made of the anesthetic procaine. See also Koryo medicine Neo-Viagra-Y.R. Royal Blood-Fresh Tetrodocain References Healthcare in North Korea Drugs
Kumdang-2
Chemistry
206
605,727
https://en.wikipedia.org/wiki/Kronecker%27s%20theorem
In mathematics, Kronecker's theorem is a theorem about diophantine approximation, introduced by Leopold Kronecker in 1884. Kronecker's approximation theorem was first proved by L. Kronecker at the end of the 19th century; since the later half of the 20th century it has been understood in terms of the n-torus and the Mahler measure. In terms of physical systems, it has the consequence that planets in circular orbits moving uniformly around a star will, over time, assume all alignments, unless there is an exact dependency between their orbital periods. Statement Kronecker's theorem is a result in diophantine approximations applying to several real numbers xi, for 1 ≤ i ≤ n, that generalises Dirichlet's approximation theorem to multiple variables. The classical Kronecker approximation theorem is formulated as follows. Given real n-tuples α = (α1, ..., αn) and β = (β1, ..., βn), the condition that for every ε > 0 there exist integers q and p1, ..., pn with |qαi − pi − βi| < ε for all i holds if and only if for any integers r1, ..., rn such that r1α1 + ... + rnαn is an integer, the number r1β1 + ... + rnβn is also an integer. In plainer language, the first condition states that the tuple β can be approximated arbitrarily well by linear combinations of the αi s (with integer coefficients) and integer vectors. For the case of n = 1, Kronecker's approximation theorem can be stated as follows: for any ε > 0, any irrational α and any real β, there exist integers p and q with q > 0, such that |αq − p − β| < ε. Relation to tori In the case of N numbers, taken as a single N-tuple and point P of the torus T = RN/ZN, the closure of the subgroup <P> generated by P will be finite, or some torus T′ contained in T. The original Kronecker's theorem (Leopold Kronecker, 1884) stated that the necessary condition for T′ = T, which is that the numbers xi together with 1 should be linearly independent over the rational numbers, is also sufficient. Here it is easy to see that if some linear combination of the xi and 1 with non-zero rational number coefficients is zero, then the coefficients may be taken as integers, and a character χ of the group T other than the trivial character takes the value 1 on P. By Pontryagin duality we have T′ contained in the kernel of χ, and therefore not equal to T. In fact a thorough use of Pontryagin duality here shows that the whole Kronecker theorem describes the closure of <P> as the intersection of the kernels of the χ with χ(P) = 1. This gives an (antitone) Galois connection between monogenic closed subgroups of T (those with a single generator, in the topological sense), and sets of characters with kernel containing a given point. Not all closed subgroups occur as monogenic; for example, a subgroup that has a torus of dimension ≥ 1 as connected component of the identity element, and that is not connected, cannot be such a subgroup. The theorem leaves open the question of how well (uniformly) the multiples mP of P fill up the closure. In the one-dimensional case, the distribution is uniform by the equidistribution theorem. See also Weyl's criterion Dirichlet's approximation theorem References Diophantine approximation Topological groups
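A minimal numerical illustration of the n = 1 case, as a Python sketch; the choice of α = √2 and the target β = 0.123 are arbitrary assumptions for the demo:
import math

alpha = math.sqrt(2)   # an irrational number
beta = 0.123           # arbitrary real target
eps = 1e-4

# Search for integers q > 0 and p with |q*alpha - p - beta| < eps.
# Kronecker's theorem guarantees such q and p exist because alpha is irrational.
q = 1
while True:
    p = round(q * alpha - beta)
    if abs(q * alpha - p - beta) < eps:
        print(q, p, q * alpha - p)   # q*alpha - p lies within eps of beta
        break
    q += 1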
Kronecker's theorem
Mathematics
652
41,163,767
https://en.wikipedia.org/wiki/Fast%20automatic%20restoration
Fast automatic restoration (FASTAR) is an automated fast response system developed and deployed by American Telephone & Telegraph (AT&T) in 1992 for the centralized restoration of its digital transport network. FASTAR automatically reroutes circuits over a spare protection capacity when a fiber-optic cable failure is detected, hence increasing service availability and reducing the impact of the outages in the network. Similar in operation is real-time restoration (RTR), developed and deployed by MCI and used in the MCI network to minimize the effects of a fiber cut. Restoration techniques It is a recovery technique used in computer networks and telecommunication networks such as mesh optical networks, where the backup path (the alternate path that affected traffic takes after a failure condition) and backup channel are computed in real time after the occurrence of a failure. This technique can be broadly classified into two: centralized restoration and distributed restoration. Centralized restoration techniques This technique uses a central controller which has access to complete up-to-date and accurate information about the network, the available resources, resources used, the physical topology of the network, the service demands etc. When failure is detected in any part of the network through some failure detection, identification and notification scheme, the central controller calculates a new re-route path around the failure based on the information in its database about the current state of the network. After this new route (backup path) is calculated, the central controller sends out commands to all the affected digital cross-connects to make appropriate reconfigurations to their switching elements in order to implement this new path. FASTAR and RTR restoration systems are examples of systems that use this restoration technique. Distributed restoration techniques In this restoration technique, no central controller is used, hence no up-to-date database of the state of the network is needed. In this scheme, all nodes in the network use local controllers that have only local information about how a particular node is connected to its neighboring nodes, available and spare capacity on the links used to connect to neighbors, and the state of their switching elements. When a failure occurs in any part of the network, the local controllers handle the computation and re-routing of the affected traffic. An example of an approach where this technique is used is the Self-Healing Networks (SHN). Recovery architecture evolution As the transport networks gradually developed from digital cross connect system (DCS)-based mesh networks, to SONET ring networks, and to optical mesh networks over the years, so did the recovery architecture used therein. The recovery architectures used for the different transport networks are: DCS-based mesh networks restoration of DS3 facilities, Add-Drop Multiplexer (ADM)-based ring protection of SONET ring networks, and finally Optical Cross Connect (OXC)-based mixed protection and restoration of optical mesh networks DCS-based mesh restoration The first restoration architecture which was used in the 1980s is the DCS-based mesh restoration of DS3 facilities. This architecture used a centralized restoration technique: every restoration event was coordinated from the network operation center (NOC). This restoration architecture is path-based and failure dependent, and is used after a fault occurs, for fault detection and isolation. 
This architecture is capacity-efficient due to the use of stub release but has a slow failure recovery time (the time it takes to reestablish traffic continuity after a failure by rerouting the signals on diverse facilities), on the order of minutes. ADM-based ring protection This architecture was implemented in the 1990s with the introduction of the SONET/SDH networks, and employed the distributed protection technique. It uses either path-based (UPSR) or span-based (BLSR) protection, and its recovery path is precomputed before the occurrence of a failure. ADM-based ring protection is capacity-inefficient, unlike the DCS-based mesh restoration, but has a faster recovery time (50 ms). OXC-based protection of optical mesh networks This recovery architecture is used in the protection of optical mesh networks, which were introduced in the early 2000s. This protection architecture has a recovery time between tens and hundreds of milliseconds, which is a significant improvement over the recovery time supported in DCS-based mesh restoration, but unlike the DCS-based mesh restoration, its recovery path is predetermined and pre-provisioned. This architecture also has the capacity efficiency seen in the preceding mesh restoration architecture (DCS-based). FASTAR architecture FASTAR uses the DCS-based mesh restoration architecture. This architecture consists of nodal equipment, central control equipment, and a data communication network interconnecting the nodes to the central controller. The figure on the right explains the architecture of FASTAR and how the different building blocks interact. Central equipment The central processor, called the Restoration and Provisioning Integrated Design (RAPID) and located at the NOC, is responsible for receiving and analyzing alarm reports generated in the event of a fiber failure. It also handles alternate (backup) route computation, re-routing of the affected traffic from the primary path to the already computed backup path, and path assurance tests, and enables the roll-back of traffic to the original path after the failure is repaired. RAPID maintains up-to-date information about the state of the network and the available spare capacity. The Central Access and Display system (CADS) provides a craft interface for RAPID and other related restoration management systems. The Traffic Maintenance and Administration System (TMAS) enables RAPID to perform and control the protection switch lock-out process on protection channels being used for restoration, by sending commands to the Line Terminating Equipment (LTE). Nodal equipment The Restoration Network Controllers (RNCs) are located at each central office (CO) in the fiber optic network. The alarms generated by the affected digital access and cross-connect systems (DACSs) or by the LTE are sent to the RNC, where they are aged to determine whether they result from a transient, correlated, and finally sent to RAPID via the data communication network. The LTE, which is either an FT Series G digital transmission system or an add-drop multiplexer (ADM), reports any fiber failure between LTEs to the RNC and also provides RAPID with immediate access to the backup channels for re-routing of traffic or path assurance tests. The Restoration Test Equipment (RTE) provides RAPID with the means to perform continuity tests used in path assurance. The DACS is responsible for reporting fiber failures and node failures that occur within the office to the RNC.
In addition, the DACS enables automatic restoration by providing the central processor access to remotely perform cross-connects at the DS-3 level. Data communication network The data communication network is used to connect the nodal equipment to the central controller. To achieve the needed availability of this network, full redundancy is used in the form of two totally diverse terrestrial and satellite-based networks. In the event of a major restoration process, one of these networks can support the communication burden in the absence of the other. Restoration using FASTAR FASTAR operates at the DS-3 level; it does not restore individual smaller demands. FASTAR restores 90 to 95 percent of the affected DS-3 demand within two to three minutes. When a fiber-optic cut occurs between the output of one DACS and the input of another, each RNC collects alarms from the affected LTEs. The RNC ages these alarms and sends them to RAPID. RAPID determines the amount of spare capacity available after this failure, identifies the DS-3 demands affected, finds the restoration route for each affected demand in sequential order of priority, and sends a command to the appropriate DACSs to implement the re-route, thus establishing a restoration. In the figure on the right, a route exists between node A and node Q via nodes C, F, K, and L. In the event of a fiber-optic cable failure between nodes F and K, the LTE (FT Series G or the ADM) in these two offices detects and sends alarm reports for this failure to their respective RNCs. Both RNCs age the alarms and send these reports to RAPID, located at the NOC. RAPID initiates a time window to ensure that all related alarms are collected from the RNCs of the affected nodes and from the RNC of any other office whose traffic uses the failed F to K fiber optic cable. When this window times out, RAPID performs route computation to establish a new backup path for the traffic between node A and node Q. Here it creates a new route through C, F, G, J, K, and L. This route computation is also done sequentially in order of priority for all the traffic between any two nodes in the network that use the same failed fiber-optic cable. Once the backup path for all the traffic going through nodes F and K has been computed, RAPID ensures that there is continuity or connectivity along the established backup paths by sending a command to the RNCs located at A and Q, both of which in turn use the test signal generated by their respective RTE to check for continuity in the link. When the connectivity of this backup path has been verified, the traffic between nodes A and Q is transferred to this backup path by commanding the DACS IIIs to make the appropriate cross connections. RAPID performs a service verification test to verify that the service transfer was successful. If this test returns a positive result, then the service transfer was successful; otherwise the service transfer was unsuccessful and needs to be repeated. This service or traffic transfer process is performed for all the traffic going through the affected fiber optic cable F–K. FASTAR restores as much of the affected traffic demand as the available protection capacity will allow. Restoring networks with SRLGs using FASTAR Shared Risk Link Groups (SRLGs) refer to situations where links that connect two distinct nodes or offices in a network share a common conduit. In that configuration, links in the group have a shared risk: if one link fails, other links in the group may fail too.
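As a rough illustration of the centralized recomputation step just described, a toy Python sketch; node names follow the example in the text, and the breadth-first search merely stands in for whatever routing algorithm RAPID actually used:
from collections import deque

# Topology from the example: primary route A-C-F-K-L-Q, with spare links
# F-G, G-J, J-K available for restoration. The F-K span has failed.
links = {("A","C"), ("C","F"), ("F","K"), ("K","L"), ("L","Q"),
         ("F","G"), ("G","J"), ("J","K")}
failed = {("F","K")}

def neighbors(node, usable):
    # Links are undirected, so check both orientations.
    for a, b in usable:
        if a == node: yield b
        if b == node: yield a

def restoration_route(src, dst):
    usable = {l for l in links if l not in failed}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], usable):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # demand cannot be restored with the remaining spare capacity

print(restoration_route("A", "Q"))  # ['A', 'C', 'F', 'G', 'J', 'K', 'L', 'Q'], matching the example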
The majority of networks in use today contain SRLGs, since in most cases the only access into a building or across a bridge is through a single conduit. To restore the traffic on a link between two offices or nodes that shares the same SRLG with other links in the event of a conduit cut, at least one of these two offices must be FASTAR-compliant. For example, a cut in SRLG1 would be restorable using FASTAR if FASTAR were implemented in office A, even though B and C were not yet FASTAR-compliant. But given a failure in SRLG2, the DS-3 traffic on link 3 would be restored by FASTAR via a newly re-computed backup path, while the DS-3 traffic on link 2 would not be restored, as FASTAR is not implemented in either office B or C. To restore all three links in the event of failure of both SRLGs, FASTAR is implemented in offices A and C. A failure in SRLG1 would then cause FASTAR to automatically re-route the traffic on links 1 and 3 via two re-computed backup paths. Also, if at another time a failure of SRLG2 is detected, it is reported to RAPID and the traffic through links 2 and 3 is re-routed through new backup paths. FASTAR network management FASTAR network management is used to integrate and analyze the different data and alarms supplied by the various system elements that make up the FASTAR architecture for centralized display, and to troubleshoot and isolate problems through fault management analysis so that corrective action can be taken. FASTAR network management cuts across three tiers. The first (lowest) tier consists of all the elements that constitute the FASTAR architecture, and all the interconnecting links between them. The second tier consists of Element Management Systems (EMSs), which are computerized operations systems (OSs) used to manage the elements that are in the first tier. The different EMSs are collectively called FASTAR Element Management Systems (FASTEMS). The two major FASTEMS are the DACS Element Management Systems (DEMS) and the RNC Element Management Systems (RNC-EMS). DEMS is designed to assist the NOC with management of the DACSs. In the event of a change in the status of the network due to a fiber failure, RAPID forwards this status change to DEMS, which triggers DEMS to isolate the problem. The RNC-EMS monitors the RNCs directly via the data communication network and indirectly monitors the RTE, LTE, and DACS III, and their links to the RNC, via agents residing in the RNC. It consists of two components: the manager and the agent. The manager software daemon (NMd) runs on the RNC-EMS machine and is responsible for polling the RNCs. Every RNC is polled twice, once over each of the data communication networks. The agent software daemon (NAd) runs on every RNC as part of the application software. It accesses the RNC application log to respond to manager queries, and has the ability to send autonomous alarms to the manager. The third (highest) tier comprises only the CADS workstation and provides centralized access to the network manager via the lower two tiers. See also Access network Availability Cross-connect Link protection Network node Optical fiber Telecommunications Wireless mesh networks References Further reading Network architecture Fiber-optic communications Mesh networking
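A small sketch of the shared-risk bookkeeping described above, in Python; the link endpoints and conduit membership are inferred from the worked example in the text, so treat them as hypothetical:
# Map each conduit (shared risk group) to the links routed through it,
# and record the offices terminating each link.
srlg = {"SRLG1": ["link1", "link3"], "SRLG2": ["link2", "link3"]}
link_endpoints = {"link1": ("A", "B"), "link2": ("B", "C"), "link3": ("A", "C")}
fastar_compliant_offices = {"A", "C"}

def restorable_after_cut(conduit):
    # A link in the cut conduit can be restored by FASTAR only if at least
    # one of its terminating offices is FASTAR-compliant.
    return [link for link in srlg[conduit]
            if any(office in fastar_compliant_offices
                   for office in link_endpoints[link])]

print(restorable_after_cut("SRLG2"))  # ['link2', 'link3'] once both A and C are compliant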
Fast automatic restoration
Technology,Engineering
2,763
56,011,855
https://en.wikipedia.org/wiki/Multispecies%20coalescent%20process
The multispecies coalescent process is a stochastic process model that describes the genealogical relationships for a sample of DNA sequences taken from several species. It represents the application of coalescent theory to the case of multiple species. The multispecies coalescent results in cases where the relationships among species for an individual gene (the gene tree) can differ from the broader history of the species (the species tree). It has important implications for the theory and practice of phylogenetics and for understanding genome evolution. A gene tree is a binary graph that describes the evolutionary relationships between a sample of sequences for a non-recombining locus. A species tree describes the evolutionary relationships between a set of species, assuming tree-like evolution. However, several processes can lead to discordance between gene trees and species trees. The multispecies coalescent model provides a framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. The process is also called the censored coalescent. Besides species tree estimation, the multispecies coalescent model also provides a framework for using genomic data to address a number of biological problems, such as estimation of species divergence times, population sizes of ancestral species, species delimitation, and inference of cross-species gene flow. Gene tree-species tree congruence If we consider a rooted three-taxon tree, the simplest non-trivial phylogenetic tree, there are three different tree topologies but four possible gene trees. The existence of four distinct gene trees despite the smaller number of topologies reflects the fact that there are topologically identical gene trees that differ in their coalescent times. In the type 1 tree the alleles in species A and B coalesce after the speciation event that separated the A-B lineage from the C lineage. In the type 2 tree the alleles in species A and B coalesce before the speciation event that separated the A-B lineage from the C lineage (in other words, the type 2 tree is a deep coalescence tree). The type 1 and type 2 gene trees are both congruent with the species tree. The other two gene trees differ from the species tree; the two discordant gene trees are also deep coalescence trees. The distribution of times to coalescence is actually continuous for all of these trees. In other words, the exact coalescent time for any two loci with the same gene tree may differ. However, it is convenient to break up the trees based on whether the coalescence occurred before or after the earliest speciation event. Given the internal branch length in coalescent units it is straightforward to calculate the probability of each gene tree. For diploid organisms the branch length in coalescent units is the number of generations between the speciation events divided by twice the effective population size. Since all three of the deep coalescence trees are equiprobable and two of those deep coalescence trees are discordant, it is easy to see that the probability that a rooted three-taxon gene tree will be congruent with the species tree is P = 1 − (2/3)e^(−T), where the branch length in coalescent units (T) can also be written in an alternative form: the number of generations (t) divided by twice the effective population size (Ne), i.e. T = t/(2Ne). Pamilo and Nei also derived the probability of congruence for rooted trees of four and five taxa as well as a general upper bound on the probability of congruence for larger trees.
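A minimal sketch of the congruence probability just given, in Python; the example values of t and Ne are arbitrary:
import math

def prob_congruent(t_generations, Ne):
    """Probability that a rooted three-taxon gene tree matches the species tree,
    for an internal branch of t generations in a diploid population of effective
    size Ne: P = 1 - (2/3) * exp(-T), with T = t / (2 * Ne)."""
    T = t_generations / (2.0 * Ne)
    return 1.0 - (2.0 / 3.0) * math.exp(-T)

print(prob_congruent(t_generations=100_000, Ne=10_000))  # ~0.996, long branch, discordance rare
print(prob_congruent(t_generations=10_000, Ne=10_000))   # ~0.596, short branch, deep coalescence common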
Rosenberg followed up with equations used for the complete set of topologies (although the large number of distinct phylogenetic trees that become possible as the number of taxa increases makes these equations impractical unless the number of taxa is very limited). The phenomenon of hemiplasy is a natural extension of the basic idea underlying gene tree-species tree discordance. If we consider the distribution of some character that disagrees with the species tree, it might reflect homoplasy (multiple independent origins of the character or a single origin followed by multiple losses) or it could reflect hemiplasy (a single origin of the trait that is associated with a gene tree that disagrees with the species tree). The phenomenon called incomplete lineage sorting (often abbreviated ILS in the scientific literature) is linked to this phenomenon. If we examine the illustration of hemiplasy using a rooted four-taxon tree (see image to the right), the lineage between the common ancestor of taxa A, B, and C and the common ancestor of taxa A and B must be polymorphic for the allele with the derived trait (e.g., a transposable element insertion) and the allele with the ancestral trait. The concept of incomplete lineage sorting ultimately reflects the persistence of polymorphisms across one or more speciation events. Mathematical description of the multispecies coalescent The probability density of the gene trees under the multispecies coalescent model is discussed along with its use for parameter estimation using multi-locus sequence data. Assumptions In the basic multispecies coalescent model, the species phylogeny is assumed to be known. Complete isolation after species divergence, with no migration, hybridization, or introgression, is also assumed. We assume no recombination so that all the sites within the locus share the same gene tree (topology and coalescent times). However, the basic model can be extended in different ways to accommodate migration or introgression, population size changes, and recombination. Data and model parameters The model and implementation of this method can be applied to any species tree. As an example, the species tree of the great apes: humans (H), chimpanzees (C), gorillas (G) and orangutans (O) is considered. The topology of the species tree, (((H, C), G), O), is assumed known and fixed in the analysis (Figure 1). Let X = {X1, ..., XL} be the entire data set, where Xi represents the sequence alignment at locus i, with i = 1, ..., L for a total of L loci. The population size of a current species is considered only if more than one individual is sampled from that species at some loci. The parameters in the model for the example of Figure 1 include the three divergence times τHC, τHCG and τHCGO and population size parameters θH for humans; θC for chimpanzees; and θHC, θHCG and θHCGO for the three ancestral species. The divergence times (τ's) are measured by the expected number of mutations per site from the ancestral node in the species tree to the present time (Figure 1 of Rannala and Yang, 2003). Therefore, the parameters are Θ = {τHC, τHCG, τHCGO, θH, θC, θHC, θHCG, θHCGO}. Distribution of gene genealogies The joint distribution of the gene tree topologies and coalescent times is derived directly in this section. Two sequences from different species can coalesce only in a population that is ancestral to the two species. For example, sequences H and G can coalesce in populations HCG or HCGO, but not in populations H or HC. The coalescent processes in different populations are different.
For each population, the genealogy is traced backward in time, until the end of the population at time τ, and the number of lineages entering the population (m) and the number of lineages leaving it (n) are recorded; for population H, for example, these counts are mH and nH (Table 1). This process is called a censored coalescent process because the coalescent process for one population may be terminated before all lineages that entered the population have coalesced. If this happens, the genealogy within the population consists of disconnected subtrees or lineages. With one time unit defined as the time taken to accumulate one mutation per site, any two lineages coalesce at the rate 2/θ. The waiting time tj until the next coalescent event, which reduces the number of lineages from j to j − 1, has exponential density f(tj) = (j(j − 1)/θ) exp{−(j(j − 1)/θ) tj}. If n > 1, there is a probability that no coalescent event occurs between the last one and the end of the population at time τ, i.e. during the remaining time interval of length Δt. This probability is exp{−(n(n − 1)/θ)Δt}, and is 1 if n = 1. (Note: one should recall that the probability of no events over a time interval of length Δt for a Poisson process with rate λ is exp{−λΔt}. Here the coalescent rate when there are j lineages is j(j − 1)/θ.) In addition, to derive the probability of a particular gene tree topology in the population, if a coalescent event occurs in a sample of j lineages, the probability that a particular pair of lineages coalesces is 2/(j(j − 1)). Multiplying these probabilities together gives the joint probability distribution of the gene tree topology in the population and its coalescent times. The probability of the gene tree and coalescent times for a locus is the product of such probabilities across all the populations. Therefore, for the gene genealogy of Figure 1, the density is obtained as this product over the populations through which its lineages pass. Likelihood-based inference The gene genealogy Gi at each locus is represented by the tree topology and the coalescent times. Given the species tree and the parameters Θ on it, the probability distribution of the gene trees G = {G1, ..., GL} is specified by the coalescent process as f(G | Θ) = ∏i f(Gi | Θ), where f(Gi | Θ) is the probability density for the gene tree at locus i, and the product is taken because we assume that the gene trees are independent given the parameters. The probability of the data given the gene tree and coalescent times (and thus branch lengths) at a locus, f(Xi | Gi), is Felsenstein's phylogenetic likelihood. Due to the assumption of independent evolution across the loci, f(X | G) = ∏i f(Xi | Gi). The likelihood function, or the probability of the sequence data given the parameters, is then an average over the unobserved gene trees, f(X | Θ) = ∏i ∫ f(Xi | Gi) f(Gi | Θ) dGi, where the integration represents summation over all possible gene tree topologies and, for each possible topology at each locus, integration over the coalescent times. This is in general intractable except for very small species trees. In Bayesian inference, we assign a prior on the parameters, f(Θ), and then the posterior is given as f(Θ | X) ∝ f(Θ) ∏i ∫ f(Xi | Gi) f(Gi | Θ) dGi, where again the integration represents summation over all possible gene tree topologies and integration over the coalescent times. In practice this integration over the gene trees is achieved through a Markov chain Monte Carlo algorithm, which samples from the joint conditional distribution of the parameters and the gene trees, f(Θ, G | X). The above assumes that the species tree is fixed. In species-tree estimation, the species tree (S) changes as well, so that the joint conditional distribution (from which the MCMC samples) is proportional to f(S) f(Θ | S) ∏i f(Xi | Gi) f(Gi | S, Θ), where f(S) is the prior on species trees. As a major departure from two-step summary methods, full-likelihood methods average over the gene trees.
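A toy simulation of the censored coalescent within a single population, as a Python sketch; the values of θ and τ are arbitrary, and time is measured in expected mutations per site as in the text:
import random

def censored_coalescent(n_in, theta, tau):
    """Simulate coalescent waiting times for n_in lineages entering a population
    that ends tau time units later (looking backward in time). Returns the
    coalescence times measured from entry and the number of lineages leaving."""
    times, t, j = [], 0.0, n_in
    while j > 1:
        rate = j * (j - 1) / theta     # total coalescent rate with j lineages
        t += random.expovariate(rate)  # exponential waiting time to the next event
        if t > tau:                    # censored: the population ends first
            break
        times.append(t)
        j -= 1
    return times, j

random.seed(1)
print(censored_coalescent(n_in=3, theta=0.004, tau=0.002))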
This means that they make use of information in the branch lengths (coalescent times) on the gene trees and accommodate their uncertainties (due to limited sequence length in the alignments) at the same time. It also explains why full-likelihood methods are computationally much more demanding than two-step summary methods. Markov chain Monte Carlo under the multispecies coalescent The integration or summation over the gene trees in the definition of the likelihood function above is virtually impossible to compute except for very small species trees with only two or three species. Full-likelihood or full-data methods, based on calculation of the likelihood function on sequence alignments, have thus mostly relied on Markov chain Monte Carlo algorithms. MCMC algorithms under the multispecies coalescent model are similar to those used in Bayesian phylogenetics but are distinctly more complex, mainly due to the fact that the gene trees at multiple loci and the species tree have to be compatible: sequence divergence has to be older than species divergence. As a result, changing the species tree while the gene trees are fixed (or changing a gene tree while the species tree is fixed) leads to inefficient algorithms with poor mixing properties. Considerable efforts have been made to design smart algorithms that change the species tree and gene trees in a coordinated manner, as in the rubber-band algorithm for changing species divergence times, and the coordinated NNI, SPR and NodeSlider moves. Consider for example the case of two species (A and B) and two sequences at each locus, with a sequence divergence time ti at locus i. We have ti > τAB for all i, where τAB is the species divergence time. When we want to change the species divergence time τAB within the constraint of the current ti, we may have very little room for change, as τAB may be virtually identical to the smallest of the ti. The rubber-band algorithm changes τAB without consideration of the ti, and then modifies the ti deterministically, in the same way that marks on a rubber band move when the rubber band is held at a fixed point and pulled towards one end. In general, the rubber-band move guarantees that the ages of nodes in the gene trees are modified so that they remain compatible with the modified species divergence time. Full likelihood methods tend to reach their limit when the data consist of a few hundred loci, even though more than 10,000 loci have been analyzed in a few published studies. Extensions The basic multispecies coalescent model can be extended in a number of ways to accommodate major factors of the biological process of reproduction and drift. For example, incorporating continuous-time migration leads to the MSC+M (for MSC with migration) model, also known as the isolation-with-migration or IM models. Incorporating episodic hybridization/introgression leads to the MSC with introgression (MSci) or multispecies-network-coalescent (MSNC) model. Impact on phylogenetic estimation The multispecies coalescent has profound implications for the theory and practice of molecular phylogenetics. Since individual gene trees can differ from the species tree, one cannot estimate the tree for a single locus and assume that the gene tree corresponds to the species tree. In fact, one can be virtually certain that any individual gene tree will differ from the species tree for at least some relationships when any reasonable number of taxa are considered.
However, gene tree-species tree discordance has an impact on the theory and practice of species tree estimation that goes beyond the simple observation that one cannot use a single gene tree to estimate the species tree, because there is a part of parameter space where the most frequent gene tree is incongruent with the species tree. This part of parameter space is called the anomaly zone, and any discordant gene trees that are expected to arise more often than the gene tree that matches the species tree are called anomalous gene trees. The existence of the anomaly zone implies that one cannot simply estimate a large number of gene trees and assume the gene tree recovered the largest number of times is the species tree. Of course, estimating the species tree by a "democratic vote" of gene trees would only work for a limited number of taxa outside of the anomaly zone given the extremely large number of phylogenetic trees that are possible. However, the existence of anomalous gene trees also means that simple methods for combining gene trees, like the majority rule extended ("greedy") consensus method or the matrix representation with parsimony (MRP) supertree approach, will not be consistent estimators of the species tree (i.e., they will be misleading). Simply generating the majority-rule consensus tree for the gene trees, where groups that are present in at least 50% of gene trees are retained, will not be misleading as long as a sufficient number of gene trees are used. However, this ability of the majority-rule consensus tree for a set of gene trees to avoid incorrect clades comes at the cost of having unresolved groups. Simulations have shown that there are parts of species tree parameter space where maximum likelihood estimation of phylogeny yields an incorrect tree with increasing probability as the amount of data analyzed increases. This is important because the "concatenation approach," where multiple sequence alignments from different loci are concatenated to form a single large supermatrix alignment that is then used for maximum likelihood (or Bayesian MCMC) analysis, is both easy to implement and commonly used in empirical studies. This represents a case of model misspecification because the concatenation approach implicitly assumes that all gene trees have the same topology. Indeed, it has now been proven that analyses of data generated under the multispecies coalescent using maximum likelihood analysis of concatenated data are not guaranteed to converge on the true species tree as the number of loci used for the analysis increases (i.e., maximum likelihood concatenation is statistically inconsistent). Software for inference under the multispecies coalescent There are two basic approaches for phylogenetic estimation in the multispecies coalescent framework: 1) full-likelihood or full-data methods, which operate on multilocus sequence alignments directly, including both maximum likelihood and Bayesian methods, and 2) summary methods, which use a summary of the original sequence data, including the two-step methods that use estimated gene trees as summary input and SVDQuartets, which uses site pattern counts pooled over loci as summary input. References Statistical genetics Statistical inference Population genetics Phylogenetics
Multispecies coalescent process
Biology
3,450
28,734,956
https://en.wikipedia.org/wiki/Spherical%20shell
In geometry, a spherical shell is a generalization of an annulus to three dimensions. It is the region of a ball between two concentric spheres of differing radii. Volume The volume of a spherical shell is the difference between the enclosed volume of the outer sphere and the enclosed volume of the inner sphere: V = (4/3)πR³ − (4/3)πr³ = (4π/3)(R³ − r³), where r is the radius of the inner sphere and R is the radius of the outer sphere. Approximation An approximation for the volume of a thin spherical shell is the surface area of the inner sphere multiplied by the thickness t of the shell: V ≈ 4πr²t, when t is very small compared to r (t ≪ r). The total surface area of the spherical shell is 4π(r² + R²). See also Spherical pressure vessel Ball Solid torus Bubble Sphere References Elementary geometry Geometric shapes Spherical geometry
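A small Python check of the exact volume against the thin-shell approximation; the radii and thickness are arbitrary example values:
import math

def shell_volume(r, R):
    """Exact volume between concentric spheres of radii r < R."""
    return 4.0 / 3.0 * math.pi * (R**3 - r**3)

def thin_shell_volume(r, t):
    """Thin-shell approximation: inner surface area times thickness."""
    return 4.0 * math.pi * r**2 * t

r, t = 10.0, 0.01
print(shell_volume(r, r + t))    # ~12.578
print(thin_shell_volume(r, t))   # ~12.566, close because t << r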
Spherical shell
Physics,Mathematics
146
31,182,936
https://en.wikipedia.org/wiki/SoftAP
SoftAP is an abbreviated term for "software enabled access point". Such access points use software to turn a computer that has not been specifically built to be a router into a wireless access point. The term is often used interchangeably with "virtual router". History on Windows The first SoftAP software was shipped by Ralink with their Wi-Fi cards for Windows XP. It enabled a Wi-Fi card to act as a wireless access point. While a card was acting as a wireless access point, it could not continue to stay connected as a client, so any Internet access had to come from another device, such as an Ethernet device. Following Ralink's card innovation, a number of other Wi-Fi vendors, including Edimax, later released SoftAP software for their devices. Neither Ralink nor Edimax updated their software to work with Windows Vista, due to its new driver model, bringing an effective end to this software category until the release of Windows 7 in 2009. Windows gained the ability to natively create software access points with the release of Windows 7 through a virtual Wi-Fi adapter, allowing a network card to function both as a station and as an access point. This functionality was only exposed through the netsh CLI, indicating that it was unfinished. Upon the release of Windows 10, a user interface to create hotspots was added. Purpose SoftAP is a common method of configuring Wi-Fi products without a display or input device, such as a Wi-Fi enabled appliance, home security camera, smart home product or any other IoT device. The process typically involves these steps: The headless device turns on a SoftAP Wi-Fi hotspot. The user downloads a product-specific app on a smartphone, and the app then either uses the underlying Android or iOS operating system to connect to the SoftAP hotspot, or directs the user to connect the phone manually. The app then asks the user for the user's private Wi-Fi network name (SSID) and passkey. The app sends the SSID and passkey to the headless device over the SoftAP connection. The headless device then falls off the SoftAP network and joins the user's private Wi-Fi network. This process can work well, but there are three core problems. First, the process often requires the user to manually connect to the SoftAP network, which can be confusing for mainstream users. Second, if the user enters the passkey incorrectly, or if the phone gets disconnected from the SoftAP network for any reason, it is difficult for the app and device to smoothly recover, so the user is often left having to factory reset the device and start over. Third, different phones (hardware and OS versions) handle SoftAP differently, so the user experience varies dramatically, especially with the wide variety of Android hardware and software. Because of these complexities, many companies making Wi-Fi connected products are now adding BLE, ZipKey, data-over-sound, or another technology to facilitate a better out-of-box experience for users. Platform support Various operating system platforms support SoftAP, including: Linux Windows 7 Windows 8 Windows 10 Windows 11 Android Windows Phone 7.5 Windows Phone 8 Windows 10 Mobile iOS Mac OS Commercial vendors Connectify Edimax Ralink See also Wi-Fi Direct Connectify References Network access
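For illustration, the Windows 7-era hosted-network feature mentioned above is driven from the netsh command line roughly as follows; the SSID and key shown are placeholders, and exact behavior varies by adapter and driver:
rem Allow and configure the hosted network (SSID and key are placeholders)
netsh wlan set hostednetwork mode=allow ssid=ExampleHotspot key=ExamplePassphrase
rem Start broadcasting, inspect status, and stop when finished
netsh wlan start hostednetwork
netsh wlan show hostednetwork
netsh wlan stop hostednetwork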
SoftAP
Engineering
694
1,364,342
https://en.wikipedia.org/wiki/Strategies%20for%20engineered%20negligible%20senescence
Strategies for engineered negligible senescence (SENS) is a range of proposed regenerative medical therapies, either planned or currently in development, for the periodic repair of all age-related damage to human tissue. These therapies have the ultimate aim of maintaining a state of negligible senescence in patients and postponing age-associated disease. SENS was first defined by British biogerontologist Aubrey de Grey. Many mainstream scientists believe that it is a fringe theory. De Grey later highlighted similarities and differences of SENS to subsequent categorization systems of the biology of aging, such as the highly influential Hallmarks of Aging published in 2013. While some biogerontologists support the SENS program, others contend that the ultimate goals of de Grey's programme are too speculative given the current state of technology. The 31-member Research Advisory Board of de Grey's SENS Research Foundation have signed an endorsement of the plausibility of the SENS approach. Framework The term "negligible senescence" was first used in the early 1990s by professor Caleb Finch to describe organisms such as lobsters and hydras, which do not show symptoms of aging. The term "engineered negligible senescence" first appeared in print in Aubrey de Grey's 1999 book The Mitochondrial Free Radical Theory of Aging. De Grey defined SENS as a "goal-directed rather than curiosity-driven" approach to the science of aging, and "an effort to expand regenerative medicine into the territory of aging". The ultimate objective of SENS is the eventual elimination of age-related diseases and infirmity by repeatedly reducing the state of senescence in the organism. The SENS project consists in implementing a series of periodic medical interventions designed to repair, prevent or render irrelevant all the types of molecular and cellular damage that cause age-related pathology and degeneration, in order to avoid debilitation and death from age-related causes. Strategies As described by SENS, the following table details major ailments and the program's proposed preventative strategies: Scientific reception While some fields mentioned as branches of SENS are supported by the medical research community, e.g., stem cell research, anti-Alzheimers research and oncogenomics, the SENS programme as a whole has been a highly controversial proposal. Many of its critics argued in 2005 that the SENS agenda was fanciful and that the complicated biomedical phenomena involved in aging contain too many unknowns for SENS to be fully implementable in the foreseeable future. Cancer may deserve special attention as an aging-associated disease, but the SENS claim that nuclear DNA damage only matters for aging because of cancer has been challenged in other literature, as well as by material studying the DNA damage theory of aging. More recently, biogerontologist Marios Kyriazis has criticised the clinical applicability of SENS by claiming that such therapies, even if developed in the laboratory, would be practically unusable by the general public. De Grey responded to one such criticism. 
2005 EMBO Reports statement In November 2005, 28 biogerontologists published a statement of criticism in EMBO Reports, "Science fact and the SENS agenda: what can we reasonably expect from ageing research?," arguing "each one of the specific proposals that comprise the SENS agenda is, at our present stage of ignorance, exceptionally optimistic," and that some of the specific proposals "will take decades of hard work [to be medically integrated], if [they] ever prove to be useful." The researchers argue that while there is "a rationale for thinking that we might eventually learn how to postpone human illnesses to an important degree," increased basic research, rather than the goal-directed approach of SENS, is currently the scientifically appropriate goal. Technology Review contest In February 2005, the MIT Technology Review published an article by Sherwin Nuland, a Clinical Professor of Surgery at Yale University and the author of How We Die, that drew a skeptical portrait of SENS, at the time de Grey was a computer associate in the Flybase Facility of the Department of Genetics at the University of Cambridge. While Nuland praised de Grey's intellect and rhetoric, he criticized the SENS framework both for oversimplifying "enormously complex biological problems" and for promising relatively near-at-hand solutions to those unsolved problems. During June 2005, David Gobel, CEO and co-founder of the Methuselah Foundation with de Grey, offered Technology Review $20,000 to fund a prize competition to publicly clarify the viability of the SENS approach. In July 2005, Jason Pontin announced a $20,000 prize, funded 50/50 by Methuselah Foundation and MIT Technology Review. The contest was open to any molecular biologist, with a record of publication in biogerontology, who could prove that the alleged benefits of SENS were "so wrong that it is unworthy of learned debate." Technology Review received five submissions to its challenge. In March 2006, Technology Review announced that it had chosen a panel of judges for the Challenge: Rodney Brooks, Anita Goel, Nathan Myhrvold, Vikram Sheel Kumar, and Craig Venter. Three of the five submissions met the terms of the prize competition. They were published by Technology Review on June 9, 2006. On July 11, 2006, Technology Review published the results of the SENS Challenge. In the end, no one won the $20,000 prize. The judges felt that no submission met the criterion of the challenge and discredited SENS, although they unanimously agreed that one submission, by Preston Estep and his colleagues, was the most eloquent. Craig Venter succinctly expressed the prevailing opinion: "Estep et al. ... have not demonstrated that SENS is unworthy of discussion, but the proponents of SENS have not made a compelling case for it." Summarizing the judges' deliberations, Pontin wrote in 2006 that SENS is "highly speculative" and that many of its proposals could not be reproduced with current scientific technology. Myhrvold described SENS as belonging to a kind of "antechamber of science" where they wait until technology and scientific knowledge advance to the point where it can be tested. Estep and his coauthors challenged the result of the contest by saying both that the judges had ruled "outside their area of expertise" and had failed to consider de Grey's frequent misrepresentations of the scientific literature. 
SENS Research Foundation The SENS Research Foundation is a non-profit organization co-founded by Michael Kope, Aubrey de Grey, Jeff Hall, Sarah Marr and Kevin Perrott, which is based in California, United States. Its activities include SENS-based research programs and public relations work for the acceptance of and interest in related research. See also References Further reading Fringe science 1999 introductions Senescence Immortality Regenerative biomedicine Biomedical engineering Life extension Gene therapy
Strategies for engineered negligible senescence
Chemistry,Engineering,Biology
1,444
66,690,547
https://en.wikipedia.org/wiki/Bio-like%20structure
Bio-like structures were claimed to be a form of synthetic life obtained by the Soviet microbiologist V. O. Kalinenko in distilled water, as well as on an agar gel, under the influence of an electric field. However, these entities are most likely non-living inorganic structures. Description The original idea behind the experiments was a consequence of Kalinenko's observations made during an expedition studying microorganisms that live at the bottom of sea and ocean waters. It was assumed that test samples were not contaminated, since the experiments were carried out in sterile laboratory conditions and the formed structures did not resemble microorganisms currently known to science. Kalinenko described the process as the creation of life forms from inanimate matter—water, air, and electricity. The structures were reported to have various amoeba-like shapes, resembling discs, cigars and caudate rockets. They seemingly possessed the basic characteristics of living organisms in that they could move, grow and multiply, and cell "nuclei" were observed, which, similarly to naturally occurring nuclei, contained "chromosomes". Some "amoebas" turned out to be "predators" that could envelop and then digest their "victims". Moreover, Kalinenko argued the structures exhibited enzymatic activity by dissolving calcite and magnesite crystals; therefore, it might be concluded that they were not minerals themselves. Kalinenko did not give the structures a formal name; instead, he referred to them as "bio-like structures", "biostructures" and "artificial cells". As evidence to support his claims, Kalinenko presented numerous photographs showing the various stages of the formation and development of "biostructures". Kalinenko termed the process of the synthesis of these structures as "energobiosis" and claimed that the method he described might also be used to synthesize protein-based cells. Scientific reviews The structures have been described by Kalinenko as "living" microorganisms; however, his statements have been called into question. A review study carried out in a laboratory concluded that these structures lack proteins, amino acids, nucleic acids (and their precursors, purine and pyrimidine bases), and adenosine triphosphate; that is, they "contain no important compounds necessary for all known organisms" and therefore "cannot be regarded as living structures". Although it has been actually found that some of the structures produced by Kalinenko slightly resembled bacteria or amoebas in their shape and form, the group of workers did not detect any microorganisms in the samples. However, a possibility that the described process may have something to do with the origin of life on Earth has not been completely ruled out. Scientists have not explicitly stated what "biostructures" truly are, but it has been suggested that they probably consist of inorganic salts. Kalinenko's striking declaration that "biostructures" are living units met with bewilderment and skepticism among biologists of the USSR Academy of Sciences. It has been implied that the conclusions Kalinenko drew resulted from his lack of a critical attitude towards the data, as it seemed impossible to create a cell under the described conditions; there were also those who were more sharply critical of him. 
According to NASA researchers, "presently known scientific principles of biology and biochemistry" cannot allow inorganic entities to be considered alive, and "the postulated existence" of such entities "has not been proved" as no confirmatory reports by other scientists exist. See also Jeewanu Protocell References Origin of life
Bio-like structure
Biology
751
48,545,257
https://en.wikipedia.org/wiki/Menahem%20Max%20Schiffer
Menahem Max Schiffer (24 September 1911, Berlin – 11 November 1997) was a German-born American mathematician who worked in complex analysis, partial differential equations, and mathematical physics. Biography Menahem Max Schiffer studied physics from 1930 at the University of Bonn and then at the Humboldt University of Berlin with Max von Laue, Erwin Schrödinger, Walter Nernst, Erhard Schmidt, Issai Schur and Ludwig Bieberbach. In Berlin he worked closely with Issai Schur. In 1934, after being forced by the Nazis to leave the academic world, he immigrated to Mandatory Palestine. On the basis of his prior mathematical publications, Schiffer received a master's degree from the Hebrew University of Jerusalem. In 1938, he received his doctorate under the supervision of Michael Fekete. In his dissertation on Conformal representation and univalent functions, he introduced the "Schiffer variation", a method for handling geometric problems in complex analysis. Schiffer married Fanya Rabinivics Schiffer in 1937. His daughter, Dinah S. Singer, is an experimental immunologist. Academic career In September 1952, he began to teach at Stanford University, along with George Pólya, Charles Loewner, Stefan Bergman, and Gábor Szegő. With Paul Garabedian, Schiffer worked on the Bieberbach conjecture, proving the special case n = 4 in 1955. He was a speaker at the International Congress of Mathematicians (ICM) in 1950 at Cambridge, Massachusetts, and was a plenary speaker at the ICM in 1958 at Edinburgh with the plenary address Extremum Problems and Variational Methods in Conformal Mapping. In 1970 he was elected to the United States National Academy of Sciences. He retired from Stanford University as professor emeritus in 1977. In 1981, Schiffer became a founding member of the World Cultural Council. Selected publications with Leon Bowden: The role of mathematics in science, Mathematical Association of America 1984 with Stefan Bergman: Kernel functions and elliptic differential equations in mathematical physics, Academic Press 1953 with Donald Spencer: Functionals of finite Riemann Surfaces, Princeton 1954 with Ronald Adler, Maurice Bazin: Introduction to General Relativity, McGraw Hill 1965, xvi + 451 pp., illus. References External links 1911 births 1997 deaths 20th-century American mathematicians Founding members of the World Cultural Council Mathematical analysts Jewish German scientists Hebrew University of Jerusalem alumni Stanford University Department of Mathematics faculty Members of the United States National Academy of Sciences Emigrants from Nazi Germany Immigrants to the United States Humboldt University of Berlin alumni
Menahem Max Schiffer
Mathematics
524
62,696,900
https://en.wikipedia.org/wiki/Protective%20colloid
A protective colloid is a lyophilic colloid that, when present in small quantities, keeps lyophobic colloids from precipitating under the coagulating action of electrolytes. Need for protective colloids When a small amount of hydrophilic colloid is added to hydrophobic colloids, it may coagulate the latter. This is due to neutralisation of the charge on the hydrophobic colloidal particles. However, the addition of a large amount of hydrophilic colloid increases the stability of the hydrophobic colloidal system. This is due to adsorption: when lyophilic sols are added to lyophobic sols, depending on their sizes, either the lyophobic sol is adsorbed on the surface of the lyophilic sol or the lyophilic sol is adsorbed on the surface of the lyophobic sol. The layer of the protective colloid prevents direct collision between the hydrophobic colloidal particles and thus prevents coagulation. Examples Lyophilic sols like starch and gelatin act as protective colloids. Measurement of protective action For comparative study, Zsigmondy introduced a scale of protective action for different protective colloids in terms of the gold number. The gold number is the weight in milligrams of a protective colloid which checks the coagulation of 10 ml of a given gold sol on adding 1 ml of 10% sodium chloride. Thus, the smaller the gold number, the greater the protective action. Gold numbers of some materials: gelatin 0.005–0.01, albumin 0.1, acacia 0.1–0.2, sodium oleate 1–5, tragacanth 2. References Colloids
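As a tiny worked illustration of the gold-number scale just described, the snippet below ranks the listed materials by protective action, using the inverse of the gold number (relative to gelatin) as the comparison; where the article gives a range, the midpoint is used as an assumption.

```python
# Ranking protective colloids by gold number: the smaller the gold number,
# the greater the protective action.
gold_numbers_mg = {
    "gelatin": 0.0075,       # midpoint of 0.005-0.01
    "albumin": 0.1,
    "acacia": 0.15,          # midpoint of 0.1-0.2
    "tragacanth": 2.0,
    "sodium oleate": 3.0,    # midpoint of 1-5
}

reference = gold_numbers_mg["gelatin"]
for name, gn in sorted(gold_numbers_mg.items(), key=lambda kv: kv[1]):
    relative_action = reference / gn   # protective action relative to gelatin
    print(f"{name:14s} gold number {gn:6.4f} mg   relative protective action {relative_action:.3f}")
```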
Protective colloid
Physics,Chemistry,Materials_science
358
36,443,313
https://en.wikipedia.org/wiki/Clavulina%20wisoli
Clavulina wisoli is a species of coral fungus in the family Clavulinaceae. Officially described in 2003, it is found in Africa. References External links Fungi described in 2003 Fungi of Africa wisoli Taxa named by Ron Petersen Fungus species
Clavulina wisoli
Biology
54
33,629
https://en.wikipedia.org/wiki/Weak%20interaction
In nuclear physics and particle physics, the weak interaction, also called the weak force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms: The weak interaction participates in nuclear fission and nuclear fusion. The theory describing its behaviour and effects is sometimes called quantum flavordynamics (QFD); however, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. Background The Standard Model of particle physics provides a uniform framework for understanding electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles. In the weak interaction, fermions can exchange three types of force carriers, namely the W⁺, W⁻, and Z bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. In fact, the force is termed weak because its field strength over any set distance is typically several orders of magnitude less than that of the electromagnetic force, which itself is further orders of magnitude less than the strong nuclear force. The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, but far more rarely, the only interaction to break charge–parity symmetry. Quarks, which make up composite particles like neutrons and protons, come in six "flavours" – up, down, charm, strange, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. The weak interaction is important in the fusion of hydrogen into helium in a star. This is because it can convert a proton (hydrogen) into a neutron to form deuterium, which is important for the continuation of nuclear fusion to form helium. The accumulation of neutrons facilitates the buildup of heavy nuclei in a star. Most fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium luminescence, and in the related field of betavoltaics (but not similar to radium luminescence). The electroweak force is believed to have separated into the electromagnetic and weak forces during the quark epoch of the early universe. History In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range.
In the mid-1950s, Chen-Ning Yang and Tsung-Dao Lee first suggested that the handedness of the spins of particles in the weak interaction might violate the conservation of parity symmetry. In 1957, Chien Shiung Wu and collaborators confirmed the symmetry violation. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force. The existence of the W and Z bosons was not directly confirmed until 1983. Properties The electrically charged weak interaction is unique in a number of respects: It is the only interaction that can change the flavour of quarks and leptons (i.e., of changing one type of quark into another). It is the only interaction that violates P, or parity symmetry. It is also the only one that violates charge–parity (CP) symmetry. Both the electrically charged and the electrically neutral interactions are mediated (propagated) by force carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism. Due to their large mass (approximately 90 GeV/c²) these carrier particles, called the W and Z bosons, are short-lived with a lifetime of under 10⁻²⁴ seconds. The weak interaction has a coupling constant (an indicator of how frequently interactions occur) between 10⁻⁷ and 10⁻⁶, compared to the electromagnetic coupling constant of about 10⁻² and the strong interaction coupling constant of about 1; consequently the weak interaction is "weak" in terms of intensity. The weak interaction has a very short effective range (around 10⁻¹⁷ to 10⁻¹⁶ m (0.01 to 0.1 fm)). At distances around 10⁻¹⁸ meters (0.001 fm), the weak interaction has an intensity of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3 × 10⁻¹⁷ m, the weak interaction becomes 10,000 times weaker. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states, nor does it involve binding energy – something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. Its most noticeable effect is due to its first unique feature: The charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon) and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permit flavour changing, so this can only proceed by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and charm quark, respectively) would also be conserved across all interactions. All mesons are unstable because of weak decay. In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W⁻ boson, which then decays into an electron and an electron antineutrino. Another example is electron capture – a common variant of radioactive decay – wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted.
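The range figures quoted above can be checked against the boson masses with the usual Yukawa estimate, in which a force carried by a boson of mass m reaches roughly its reduced Compton wavelength ħ/(mc). The sketch below is only that order-of-magnitude estimate, using standard measured values for the constants; it is not a calculation taken from the article itself.

```python
# Order-of-magnitude range of the weak force from the W and Z masses,
# assuming the Yukawa estimate range ~ hbar*c / (m*c^2).
HBAR_C_MEV_FM = 197.327    # hbar*c in MeV*fm
M_W_MEV = 80_379.0         # W boson mass in MeV/c^2 (measured value)
M_Z_MEV = 91_188.0         # Z boson mass in MeV/c^2 (measured value)

for name, mass_mev in [("W", M_W_MEV), ("Z", M_Z_MEV)]:
    range_fm = HBAR_C_MEV_FM / mass_mev    # range in femtometres
    range_m = range_fm * 1e-15             # range in metres
    print(f"{name} boson: ~{range_fm:.4f} fm = {range_m:.1e} m")

# Both come out near 2-3 x 10^-18 m (a few thousandths of a femtometre),
# consistent with the 0.001-0.1 fm scale quoted in the text.
```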
Due to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a life of only about 10⁻¹⁶ seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10⁻⁸ seconds, or a hundred million times longer than a neutral pion. A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes. Weak isospin and weak hypercharge All particles have a property called weak isospin (symbol T3), which serves as an additive quantum number that restricts how the particle can interact with the W± of the weak force. Weak isospin plays the same role in the weak interaction with the W± as electric charge does in electromagnetism, and color charge in the strong interaction; a different number with a similar name, weak charge, discussed below, is used for interactions with the Z⁰. All left-handed fermions have a weak isospin value of either +1/2 or −1/2; all right-handed fermions have 0 isospin. For example, the up quark has T3 = +1/2 and the down quark has T3 = −1/2. A quark never decays through the weak interaction into a quark of the same T3: Quarks with a T3 of +1/2 only decay into quarks with a T3 of −1/2 and conversely. In any given strong, electromagnetic, or weak interaction, weak isospin is conserved: The sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π⁺ with a weak isospin of +1 normally decays into a νμ (with T3 = +1/2) and a μ⁺ (as a right-handed antiparticle, T3 = +1/2). For the development of the electroweak theory, another property, weak hypercharge, was invented, defined as Y_W = 2(Q − T3), where Y_W is the weak hypercharge of a particle with electrical charge Q (in elementary charge units) and weak isospin T3. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin-1/2 particles have a non-zero weak hypercharge. Interaction types There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the "neutral-current interaction" because the weakly interacting fermions form a current with total electric charge of zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. This naming convention is often misunderstood to label the electric charge of the W and Z bosons, however the naming convention predates the concept of the mediator bosons, and clearly (at least in name) labels the charge of the current (formed from the fermions), not necessarily the bosons.
Charged-current interaction In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W⁺ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron νe, muon νμ, or tau ντ) is the same as the type of lepton in the interaction, for example: μ⁻ + W⁺ → νμ. Similarly, a down-type quark (d, s, or b, with a charge of −1/3) can be converted into an up-type quark (u, c, or t, with a charge of +2/3), by emitting a W⁻ boson or by absorbing a W⁺ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W⁺ boson, or absorb a W⁻ boson, and thereby be converted into a down-type quark, for example: c → s + W⁺. The W boson is unstable so will rapidly decay, with a very short lifetime. For example: W⁻ → e⁻ + ν̄e. Decay of a W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W⁻ boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual W⁻ boson can only carry sufficient energy to produce an electron and an electron-antineutrino – the two lowest-possible masses among its prospective decay products. At the quark level, the process can be represented as: d → u + e⁻ + ν̄e. Neutral-current interaction In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z⁰ boson. For example: e⁻ → e⁻ + Z⁰. Like the W bosons, the Z⁰ boson also decays rapidly, for example: Z⁰ → e⁻ + e⁺. Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and weak isospin, the neutral-current interaction can cause any two fermions in the standard model to deflect: Either particles or anti-particles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs. The quantum number weak charge (Q_W) serves the same role in the neutral current interaction with the Z⁰ that electric charge (Q, with no subscript) does in the electromagnetic interaction: It quantifies the vector part of the interaction. Its value is given by: Q_W = 2T3 − 4Q sin²θ_W. Since the weak mixing angle θ_W ≈ 29°, the factor 4 sin²θ_W ≈ 0.94, with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence Q_W ≈ 2T3 − Q, since 4 sin²θ_W ≈ 1, and for all fermions involved in the weak interaction T3 = ±1/2. The weak charge of charged leptons is then close to zero, so these mostly interact with the Z⁰ boson through the axial coupling. Electroweak theory The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W⁺, W⁻, Z⁰, the three carriers of the weak interaction), and the photon (γ, the massless gauge boson that carries the electromagnetic interaction).
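As a small numerical illustration of the quantum numbers introduced above, the sketch below tabulates weak isospin, weak hypercharge Y_W = 2(Q − T3) and the approximate weak charge Q_W = 2T3 − 4Q sin²θ_W for the left-handed first-generation fermions. The value sin²θ_W ≈ 0.231 is a representative measured figure; the script is only a worked example of those relations, not material from the original article.

```python
# Weak isospin T3, weak hypercharge Y_W and weak charge Q_W for the
# left-handed first-generation fermions, using the relations quoted above.
SIN2_THETA_W = 0.231   # representative measured value of sin^2(theta_W)

# (name, electric charge Q in units of e, weak isospin T3 of the left-handed state)
fermions = [
    ("electron neutrino", 0.0, +0.5),
    ("electron",         -1.0, -0.5),
    ("up quark",          2.0 / 3.0, +0.5),
    ("down quark",       -1.0 / 3.0, -0.5),
]

for name, q, t3 in fermions:
    y_w = 2 * (q - t3)                        # weak hypercharge
    q_w = 2 * t3 - 4 * q * SIN2_THETA_W       # weak (vector) charge
    print(f"{name:17s}  Q = {q:+.3f}  T3 = {t3:+.1f}  Y_W = {y_w:+.3f}  Q_W = {q_w:+.3f}")

# The electron's Q_W of about -0.08 shows why charged leptons couple to the
# Z boson mostly through the axial part of the interaction.
```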
According to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless scalar bosons forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak vector bosons, each similar to the photon. However, at low energies, this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those "extra" three Higgs bosons become incorporated into the three weak bosons, which then acquire mass through the Higgs mechanism. These three composite bosons are the W⁺, W⁻, and Z⁰ bosons actually observed in the weak interaction. The fourth electroweak gauge boson is the photon (γ) of electromagnetism, which does not couple to any of the Higgs fields and so remains massless. This theory has made a number of predictions, including a prediction of the masses of the W and Z bosons before their discovery and detection in 1983. On 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c², whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist. In a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking. Violation of symmetry The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V − A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V − A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction. However, this theory allowed a compound symmetry CP to be conserved.
CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis. See also Weakless universe – the postulate that weak interactions are not anthropically necessary Gravity Strong interaction Electromagnetism Footnotes References Sources Technical For general readers External links Harry Cheung, The Weak Force @Fermilab Fundamental Forces @Hyperphysics, Georgia State University. Brian Koberlein, What is the weak force? Weak interaction Fundamental interactions
Weak interaction
Physics
3,865
610,897
https://en.wikipedia.org/wiki/French%20drain
A French drain (also known by other names including trench drain, blind drain, rubble drain, and rock drain) is a trench filled with gravel or rock, or both, with or without a perforated pipe that redirects surface water and groundwater away from an area. The perforated pipe is called a weeping tile (also called a drain tile or perimeter tile). When the pipe is draining, it "weeps", or exudes liquids. It was named during a time period when drainpipes were made from terracotta tiles. French drains are primarily used to prevent ground and surface water from penetrating or damaging building foundations and as an alternative to open ditches or storm sewers for streets and highways. Alternatively, French drains may be used to distribute water, such as a septic drain field at the outlet of a typical septic tank sewage treatment system. French drains are also used behind retaining walls to relieve ground water pressure. History The earliest forms of French drains were simple ditches that were pitched from a high area to a lower one and filled with gravel. These may have been invented in France, but Henry Flagg French (1813–1885) of Concord, Massachusetts, a lawyer and Assistant U.S. Treasury Secretary, described and popularized them in Farm Drainage (1859). French's own drains were made of sections of ordinary roofing tile that were laid with a gap in between the sections to admit water. Later, specialized drain tiles were designed with perforations. To prevent clogging, the size of the gravel varied from coarse in the center to fine on the outside and was selected contingent on the gradation of the surrounding soil. The sizes of particles were critical to prevent the surrounding soil from washing into the pores, i. e., voids between the particles of gravel and thereby clogging the drain. The later development of geotextiles greatly simplified this technique. Subsurface drainage systems have been used for centuries. They have many forms that are similar in design and function to the traditional French drain. Structure Ditches are dug manually or by a trencher. An inclination of 1 in 100 to 1 in 200 is typical. Lining the bottom of the ditch with clay or plastic pipe increases the volume of water that can flow through the drain. Modern French drain systems are made of perforated pipe, for example weeping tile surrounded by sand or gravel, and geotextile or landscaping textile. Landscaping textiles prevent migration of the drainage material and prevent soil and roots from entering and clogging the pipe. The perforated pipe provides a minor subterranean volume of storage for water, yet the prime purpose is drainage of the area along the full length of the pipe via its perforations and to discharge any surplus water at its terminus. The direction of percolation depends on the relative conditions within and without the pipe. Variants Variations of French drains include: Curtain drain This form comprises a perforated pipe surrounded by gravel. It is similar to the traditional French drain, the gravel or aggregate material of which extends to the surface of the ground and is uncovered to permit collection of water, except that a curtain drain does not extend to the surface and instead is covered by soil, in which turf grass or other vegetation may be planted, so that the drain is concealed. Filter drain This form drains groundwater. 
Collector drain This form combines drainage of groundwater and interception of surface water or run-off water, and may connect into the underground pipes so as to rapidly divert surface water; it preferably has a cleanable filter to avoid migration of surface debris to the subterranean area that would clog the pipes. Interceptor drain Dispersal drain This form distributes waste water that a septic tank emits. Fin drain This form comprises a subterranean perforated pipe from which extends perpendicularly upward along its length a thin vertical section, denominated the "fin", of aggregate material for drainage to the pipe. The length is . This form is less expensive to build than a traditional French drain. A French drain can end, i.e., open at a downhill slope, dry well, or rain garden where plants absorb and hold the drained water. This is useful if city water systems or other wastewater areas are unavailable. Depending on the expected level and volume of rainwater or runoff, French drains can be widened or fitted with two or three underground drainpipes. Multiple pipes also provide for redundancy, in case one pipe becomes overfilled or clogged by a rupture or defect in the piping. A pipe might become overfilled if it is on a side of the drain which receives a much larger volume of water, such as one pipe being closer to an uphill slope, or closer to a roofline that drips near the French drain. When a pipe becomes overfilled, water can seep sideways into a parallel pipe, as a form of load-balancing, so that neither pipe becomes slowed by air bubbles, as might happen in a full pipe with no upper air space. Filters are made from permeable materials, typically non-woven fabric, and may include sand and gravel, placed around the drainage pipe or envelope to restrict migration of fines from the surrounding soils. Envelopes are the gravel, stone, or rock surrounding the pipe. These are permeable materials placed around pipe or drainage products to improve flow conditions in the area immediately around the drain and for improving bedding and structural backfill conditions. Installation French drains are often installed around a home foundation in two ways: buried around the external side of the foundation wall, or installed underneath the basement floor on the inside perimeter of the basement. In most homes, an external French drain or drain tile is installed around the foundation walls before the soil is backfilled. It is laid on the bottom of the excavated area, and a layer of stone is laid on top. A filter fabric is often laid on top of the stone to keep fine sediments and particles from entering. Once the drain is installed, the area is backfilled, and the system is left alone until it clogs. Other uses French drains can be used in farmers' fields for the tile drainage of waterlogged fields. Such fields are called "tiled". Weeping tiles can be used anywhere that soil needs to be drained. Weeping tiles are used for the opposite reason in septic drain fields for septic tanks. Clarified sewage from the septic tank is fed into weeping tiles buried shallowly in the drain field. The weeping tile spreads the liquid throughout the drain field. Legislation In the US, municipalities may require permits for building drainage systems, as federal law requires water sent to storm drains to be free of specific contaminants and sediment. In the UK, local authorities may have specific requirements for the outfall of a French drain into a ditch or watercourse.
See also References External links Non-residential French drains are regulated in the U.S. – US EPA How to Install French Drains () What are French drains? Why are they called French drains? Drainage Environmental engineering Foundations (buildings and structures) Hydraulic structures Sewerage Stormwater management Water streams
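As a rough illustration of how the gradient described in the Structure section translates into fall and flow capacity, the sketch below applies Manning's equation to a full-flowing pipe. Manning's equation is a standard pipe-flow formula not mentioned in the article itself, and the 100 mm diameter and roughness coefficient for smooth plastic pipe are assumed values for the example only, not design guidance.

```python
# Fall and approximate full-pipe capacity of a French drain pipe at the
# 1-in-100 and 1-in-200 gradients mentioned above, using Manning's equation.
import math

def full_pipe_flow_m3s(diameter_m: float, slope: float, n: float = 0.011) -> float:
    """Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S) for a full circular pipe."""
    area = math.pi * diameter_m ** 2 / 4      # flow cross-section
    hydraulic_radius = diameter_m / 4         # area / wetted perimeter for a full pipe
    return (1.0 / n) * area * hydraulic_radius ** (2 / 3) * math.sqrt(slope)

for slope in (1 / 100, 1 / 200):
    q_litres_s = full_pipe_flow_m3s(0.10, slope) * 1000   # assumed 100 mm pipe
    fall_mm_per_10m = slope * 10 * 1000
    print(f"slope 1:{round(1 / slope)}  fall {fall_mm_per_10m:.0f} mm per 10 m  "
          f"capacity ~{q_litres_s:.1f} L/s")
```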
French drain
Chemistry,Engineering,Environmental_science
1,446
76,358,418
https://en.wikipedia.org/wiki/Antoing%20cement%20kiln
The Antoing cement kiln is in the Belgian province of Hainaut. The facility is next to the Scheldt River in the Tournai region. It was built in 1929 under CBR, which was later taken over by Heidelberg Materials. History and ownership In 1910 the limited company (société anonyme) Cimenteries et Briqueteries Réunies Bonne-Espérance et Loën (abbreviated CBR) was formed by the merger of the two companies: La Bonne Esperance and Fabrique de Ciment Portland Artificiel de Loën. In 1929 CBR took over the Société Carrières et Fours à Chaux et à Ciment du Coucou in Antoing. Around 1975, crushing plants were installed on the Antoing site. In 1981, the Société Cimescaut (Société Générale des Ciments Portland de l'Escaut) became part of CBR together with its blue limestone quarry. During the same year the Antoing plant switched from the wet process to the dry process. In 1983, CBR started building a new clinker factory in Antoing. In 1986, a cement kiln was installed at the Antoing plant. In 1993 CBR was taken over by HeidelbergCement. Heidelberg Materials Heidelberg Materials is a listed building materials company based in Heidelberg, Germany. The company has been operating since 1873 and originated from a brewing family. Heidelberg is the current world leader in producing construction aggregates (293.7 million tonnes/year), 2nd largest in cement production (126.5 million tonnes/year) and 3rd largest producer of ready-mixed concrete (45 million m³/year). The Heidelberg Group operates in more than 50 countries and has 57,000 employees worldwide working at 3,000 production sites. Since 1993, they have also been active in Belgium through the acquisition of the company CBR. Because CBR was already a large multinational company, Heidelberg was able to grow strongly internationally. It owns several sites in Belgium including Antoing and Lixhe. Kiln Process At the Antoing site of Heidelberg Materials, there is one cement kiln that operates according to the dry process with a capacity of 3250 tons of clinker per day. The kiln process consists of 3 zones: the extraction, dosing and mixing of raw materials (steps 1 to 3), the kiln (steps 4 to 7), and the crushing of the resulting product (steps 8 to 10). The mining of raw materials is done in the quarry next to the cement kiln site. After mining, the raw materials are broken down, mixed and crushed into "raw powder". The crushing of the raw material is done in 2 ball mills with a capacity of 130 tons/h. The crushing process also produces a lot of noise, which can reach Lp = 140 dB, louder than a jet engine. The powder produced consists of 3 essential cement components: limestone (CaCO3), silica (SiO2) and alumina (Al2O3). Before the powder goes to the kiln, the composition of the 3 components is carefully monitored to ensure the right ratio. The powder can also sometimes be mixed with iron oxide, blast furnace slag from the steel industry or with fly ash from coal combustion. After composing the raw powder, it is taken to the furnace. The furnace consists of 2 processes: the tower where the calcination process (step 5) takes place and the kiln with the sintering process (step 6). The powder is first brought into the top of the 88 m high tower where it is heated up to 900 °C and where the calcination of limestone starts to produce lime (CaCO3 → CaO + CO2). During this process, a lot of CO2 (68% of total emissions) is released. After the calcination process, the materials go to the sintering process in the kiln, which is located at the bottom of the tower.
The kiln consists of a 67 m long tube, 3.9 m in diameter, with a metal shell on the outside and a lining of fireproof brick on the inside. It is supported at 3 points and is placed at a slight incline of 2.5%. The kiln slides back-and-forth and rotates at a speed of 4.2 rotations per minute. In the sintering process, the powder is further heated to 1450 °C to activate the silica (SiO2) and allow it to combine with the lime and alumina (and any iron oxide, blast furnace slag or fly ash) to form the clinker minerals. The result is clinker granules with a size of 3–25 mm, which are first cooled rapidly to 150 °C before going into storage (step 7). After the kiln and the manufacturing of the clinker granules, they are stored in one large silo of 55,000 tons. The granules are then transported by ship to other Heidelberg Materials sites in Ghent, Rotterdam and IJmuiden, where they are crushed (steps 8 to 10). CO2 emissions During the processes in the kiln, CO2 is emitted into the atmosphere. 68% of these total emissions are released by the chemical process of calcination of limestone into lime (CaCO3 → CaO + CO2). The other 32% is released during the fossil fuel burning process used to reach the temperature of 1800 °C. The kiln must remain constantly active to ensure that the kiln does not expand unevenly in different zones and to prevent kiln deformation. A backup burner is provided for this purpose, as well as a backup for the backup. Shutting down the kiln is possible but takes 5 days, so this is only done once a year during the maintenance period. With a production of 3250 tons of clinker per day, or 1.15 million tons of clinker per year, and an emission of 0.717 tons of CO2 per tonne of clinker, this corresponds to an annual emission of 842.55 ktons of CO2 in Antoing. This means that 33 million trees, or 65.9 thousand hectares of forest (comparable to 85% of the land surface of New York City), would need to be planted to absorb these emissions. To put this further into perspective, a comparison can be made between the CO2 emissions of Antoing and the total CO2 emissions in Belgium. Compared with Belgian industry as a whole, which emitted 16,863 kton of CO2 in 2022, Antoing's cement kiln has a 4.9% share. Of the total CO2 emissions from cement production in Belgium, 2,443 kton in 2022, Antoing contributes 34.5%. To reduce environmental impact, fossil fuel is often replaced by industrial wastes as an alternative fuel; however, not all fuels are suitable, for example those containing sulfates, alkanes and sulfites. The less these chemicals are used as fuel, the less of these pollutants is emitted. In Antoing, coal is used as the main component for fuel. The rest (65%) comes from alternative fuels such as fluff, animal meal and dried sludges, which are fed to the main burner together with pure O2 injection. The preheater is fed with fluff, saw dust, 3D plastic, dried sludges and animal meal. While these alternative fuels reduce CO2 emissions by reducing the proportion of coal, a critical look should also be taken at the alternative fuels used. For example, burning dried sludge releases a lot of methane (CH4), which is also a harmful greenhouse gas and should also be given attention. Another technique Heidelberg Materials uses to reduce impact is to use cRCP, which is old concrete that has been broken down back to its three basic components: aggregates, sand and powder.
The extracted powder can then be reactivated by reacting with CO2. This can serve as a replacement powder for fly ash or blast furnace slag, but cannot completely replace cement. This technology ensures that the CO2 that was released to make the cement for the old concrete is not lost, thus recycling the CO2 that was once emitted, so to speak. Future plans To further reduce environmental impact in the future, Heidelberg Materials says they are working on the Anthemis project (Antoing Heidelberg Emissions Integrated Solutions), which is a CO2 capturing project and should be in place by 2029. For Antoing, they want to aim for a 97% reduction of emissions, which amounts to 800000 ton CO2 per year. This will require an investment of 450 million euros. The process would work according to 3 main steps: first, the emissions are cooled down to 50 °C, then CO2 is unbound from the total emissions and finally this unbound CO2 is compressed and prepared for transport. This captured CO2 would then be transported to harbors where it will be taken by ship to Øygarden. Once it arrives, the CO2 would be further compressed and pumped into pipelines that take it 110 km offshore where it can be stored under the seabed under a depth of 2.6 km. Concrete made with this CO2 captured cement will then be given an evoZero logo. Adjoning quarry The Carrière Cimescaut limestone quarry is one of the three quarries present in Antoing and is an opencast mine where mainly blue limestone is extracted. Carrière Cimescaut has been owned by CBR since 1982 and is operated by Sagrex. Together with adjacent quarries: Carrière Lemay (Sagrex) and Carrière du Milieu (Holcim), the quarry covers an area of 240 hectares, comparable to 240 football fields. Carrière Lemay will in the future be united with Carrière Cimescaut to form one large limestone quarry. Demolition of the separation wall between these 2 quarries has already begun. Annually, Carrière Cimescaut mines 2.3 million tons of limestone for making cement and aggregates (asphalt, concrete applications, etc.). From this, about 900,000 tons of clinker is produced and destined for the cement plants in Ghent, Rotterdam and IJmuiden. The remaining limestone is used as granulates. The entire production is transported via the Scheldt River. The quarry consists of 8 banks with a height of 10-20m. The limestone is mined by controlled explosions. Each blast uses about 5 tons of dynamite and yields about 20,000 tons of limestone. After mining, the limestone is ground with a cone crusher and divided into poor and rich limestone. A conveyor belt transports the limestone to the cement plant's storage silos. The rock in the Antoing region dates from the Triassic and Cretaceous geological eras and was formed between about 245 – 145 million years ago. It consists mainly of dark gray to black, clayey limestones divided into horizontal banks of 20 to 80 cm. This is called the formation of Antoing. 4 layers are distinguishable in this formation, from top to bottom: Member of Warchin Member of Gaurain-Ramecroix Calonne superior Calonne inferieure References Kilns Antoing Buildings and structures in Hainaut (province) Buildings and structures completed in 1929 1929 establishments in Belgium Cement
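To make the emission figures quoted in the CO2 emissions section easier to reproduce, the short sketch below recomputes them from the stated production rate and emission factor, together with the standard stoichiometry of the calcination reaction. The 365 operating days are an assumption (the article's 1.15 Mt/year implies slightly fewer), so the result only approximately matches the quoted 842.55 kt.

```python
# Back-of-the-envelope check of the Antoing CO2 figures.
M_CACO3 = 100.09   # g/mol, calcium carbonate
M_CO2 = 44.01      # g/mol, carbon dioxide

# CaCO3 -> CaO + CO2: tonnes of CO2 released per tonne of limestone calcined.
co2_per_tonne_limestone = M_CO2 / M_CACO3
print(f"calcination releases ~{co2_per_tonne_limestone:.2f} t CO2 per t CaCO3")

clinker_per_day_t = 3250    # t clinker/day, from the article
emission_factor = 0.717     # t CO2 per t clinker, from the article
operating_days = 365        # assumed

annual_clinker_mt = clinker_per_day_t * operating_days / 1e6
annual_co2_kt = clinker_per_day_t * operating_days * emission_factor / 1000
print(f"annual clinker: {annual_clinker_mt:.2f} Mt")
print(f"annual CO2:     {annual_co2_kt:.0f} kt (article quotes 842.55 kt)")
```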
Antoing cement kiln
Chemistry,Engineering
2,351
75,035,919
https://en.wikipedia.org/wiki/Saber%20Astronautics
Saber Astronautics is a space operations and software company based in Sydney and Adelaide in Australia, and Boulder, Colorado in the United States. Company Structure Saber Astronautics consists of two separately incorporated entities – Saber Astronautics, LLC in the US and Saber Astronautics Australia Pty Ltd in Australia. The Australian business has a headquarters in Chippendale, New South Wales. The US business has a headquarters in Boulder, Colorado. History Saber Astronautics was first established in 2007 as a research and development startup in Boulder, Colorado, and in 2008, in Sydney, New South Wales, with a mission to reduce barriers to spaceflight and the "democratisation of space". The company initially developed machine learning diagnostics software for satellites in 2010 with a focus area on system of systems, and conducted spacecraft design on contract. The machine learning diagnostics tools were demonstrated on the NASA Advanced Composition Explorer space weather satellite in 2012. Saber entered the space operations software market in 2012 with the Predictive Interactive Ground-station Interface (PIGI). PIGI was first used in live operations in 2014 for the California-based cubesat startup Southern Stars' SkyCube mission. In 2013 Saber was accepted into NASA's Flight Opportunities Program for an Electrodynamic Deorbit Tether to deorbit cubesats. The tether completed parabolic flight tests in 2015 and 2016. Since 2017 the company has supplied the United States Space Force with the Space Cockpit software. It provided civilian space operators as part of the JCO Pacific in 2020. Between 2020 and 2022 the company led the Sprint Advanced Concepts Training (SACT) wargames for the Pacific timezone. In 2023 the New Zealand Defence Force was selected to take over the lead. In 2020 the company won a grant from the Australian Space Agency's Space Infrastructure Fund (SIF) to construct an operations center in Adelaide. In 2022, the company opened this new office, operations centre and concurrent design facility on Lot 14 in Adelaide in proximity to the Australian Space Agency (ASA). Notable achievements In 2010 the company, in partnership with the 4 Pines Brewing Company, worked on developing the first space-drinkable beer. The project was funded by a crowdsourcing campaign that failed to reach its funding goal. The project was continued through internal funding, and has conducted parabolic flight tests every few years. The 'Nitro Stout' beer is sold by 4 Pines in a regular bottle. In 2016 the company founder, Dr Jason Held, was assigned to lead a taskforce to advocate for an Australian space agency. The taskforce delivered its report in late 2017. In 2018, Held was assigned to the government-appointed "Expert Reference Group" to design the Australian Space Agency, which was founded in 2019. He remains a member of the ASA Space Industry Leaders Forum, which informs the agency on issues relevant to the industry and civil space sector. Since 2020 the company has been a partner of the US Joint Task Force-Space Defense Commercial Operations Cell (JCO) Pacific. In 2022 the company won a contract to conduct the launch tracking and in-orbit operations for the AST SpaceMobile BlueWalker 3 satellite. This satellite is a prototype for space-to-ground mobile phone service.
Products Software Predictive Interactive Ground-station Interface (PIGI) - satellite command and control Space Cockpit - defence focused satellite command and control Terrestrial and Astronomical Rapid Observation Toolkit (TAROT) - a space domain awareness visualisation Mission Management Board (MMB) - a space operator focused chat board Services Commercial spaceflight operations Space domain awareness services and advice Facilities Responsive Space Operations Centre, Adelaide Responsive Space Operations Centre, Colorado Springs References External links Australian companies established in 2008 Space technology Private spaceflight companies Aerospace companies of Australia Companies based in Sydney
Saber Astronautics
Astronomy
766
2,290,281
https://en.wikipedia.org/wiki/Ghost%20net
Ghost nets are fishing nets that have been abandoned, lost, or otherwise discarded in the ocean, lakes, and rivers. These nets, often nearly invisible in the dim light, can be left tangled on a rocky reef or drifting in the open sea. They can entangle fish, dolphins, sea turtles, sharks, dugongs, crocodiles, seabirds, crabs, and other creatures, including the occasional human diver. Acting as designed, the nets restrict movement, causing starvation, laceration and infection, and suffocation in those that need to return to the surface to breathe. It's estimated that around 48 million tons (48,000 kt) of lost fishing gear is generated each year, not including those that were abandoned or discarded and these may linger in the oceans for a considerable time before breaking-up. Description Some commercial fishermen use gillnets. These are suspended in the sea by flotation buoys, such as glass floats, along one edge. In this way they can form a vertical wall hundreds of metres long, where any fish within a certain size range can be caught. Normally these nets are collected by fishermen and the catch removed. If this is not done, the net can continue to catch fish until the weight of the catch exceeds the buoyancy of the floats. The net then sinks, and the fish are devoured by bottom-dwelling crustaceans and other fish. Then the floats pull the net up again and the cycle continues. Given the high-quality synthetics that are used today, the destruction can continue for a long time. The problem is not just nets but ghost gear in general; old-fashioned crab traps, without the required "rot-out panel", also sit on the bottom, where they become self-baiting traps that can continue to trap marine life for years. Even balled-up fishing line can be deadly for a variety of creatures, including birds and marine mammals. Over time the nets become more and more tangled. In general, fish are less likely to be trapped in gear that has been down a long time. Fishermen sometimes abandon worn-out nets because it is often the easiest way to get rid of them. The French government offered a reward for ghost nets handed in to local coastguards along sections of the Normandy coast between 1980 and 1981. The project was abandoned when people vandalized nets to claim rewards, without retrieving anything at all from the shoreline or ocean. In September 2015, the Global Ghost Gear Initiative (GGGI) was created by the World Animal Protection to give a unique and stronger voice to the cause. The term ALDFG means "abandoned, lost and discarded fishing gear". Environmental impact From 2000 to 2012, the National Marine Fisheries Service reported an average of 11 large whales entangled in ghost nets every year along the US west coast. From 2002 to 2010, 870 nets were recovered in Washington (state) with over 32,000 marine animals trapped inside. Ghost gear is estimated to account for 10% (640,000 tonnes) of all marine litter. An estimated 46% of the Great Pacific Garbage Patch consists of fishing related plastics. Fishing nets account for about 1% of the total mass of all marine macroplastics larger than , and plastic fishing gear overall constitutes over two-thirds of the total mass. According to the SeaDoc Society, each ghost net kills $20,000 worth of Dungeness crab over 10 years. The Virginia Institute of Marine Science calculated that ghost crab pots capture 1.25 million blue crabs each year in the Chesapeake Bay alone. 
In May 2016, the Australian Fisheries Management Authority (AFMA) recovered 10 tonnes of abandoned nets within the Australian Exclusive Economic Zone and Torres Strait protected zone perimeters. One protected turtle was rescued. The northern Australian olive ridley sea turtle Lepidochelys olivacea, is a genetically distinct variation of the olive ridley sea turtle. Ghost nets pose a threat to the continued existence of the northern Australian variety. Without further action to preserve the northern Australian olive ridley sea turtle, the population could face extinction. Researchers in Brazil used social media to estimate how ghost nets have negatively affected the Brazilian marine biota. Footage of ghost nets found on Google and YouTube were obtained and analyzed to arrive at the results of the study. They found that ghost nets have an adverse effect on several marine species, including large marine animals, such as the Bryde's whale and Guiana dolphin. Solutions Alternative materials and practice Unlike synthetic fishing nets, biodegradable fishing nets decompose naturally under water after a certain period of time. Coconut fibre (coir) fishing nets are commercially made and are hence a practical solution that can be taken by fishermen. Technology systems for marking and tracking fishing gear, including GPS tracking, are being trialled to promote greater accountability and transparency. Collection and recycling Legalizing gear retrievals and establishing waste management systems is required to manage and mitigate abandoned, lost, and discarded fishing gear at-sea. The company Net-works worked out a solution to turn discarded fishing nets into carpet tiles. Between 2008 and 2015, the US Fishing for Energy initiative collected 2.8 million pounds of fishing gear, and in partnership with Reworld turned this into enough electricity to power 182 homes for one year by incineration. One retrieval initiative in Southwest Nova Scotia in Canada conducted 60 retrieval trips, searched ~1523 square kilometers of the seafloor and removed 7064 kg of abandoned, lost, and discarded fishing gear (ALDFG) (comprising 66% lobster traps and 22% dragger cable). Lost traps continued to capture target and non-target species. A total of 15 different species were released from retrieved ALDFG, including 239 lobsters (67% were market-sized) and seven groundfish (including five species-at-risk). The commercial losses from ALDFG in Southwest Nova Scotia were estimated at $175,000 CAD annually. In 2009 world-renowned Dutch technical diver Pascal van Erp started to recover abandoned ghost fishing gear entangled on North Sea wrecks. He soon inspired others. Organised teams of volunteer technical divers recovered tons of ghost fishing gear off the Netherlands coastline. The loop was then closed - after a season's diving 22 tons of fishing gear was sent to the Aquafil Group for recycling back into new Nylon 6 material. In 2012 Pascal van Erp formally founded the not-for-profit Ghost Fishing organisation. In 2020 the Ghost Fishing Foundation rebranded as the Ghost Diving Foundation. A plan to protect UK seas from ghost fishing was backed by the European Parliament Fisheries Committee in 2018. Mr. Flack, who led the committee, said: "Abandoned fishing nets are polluting our seas, wasting fishing stocks and indiscriminately killing whales, sea lions or even dolphins. The tragedy of ghost fishing must end". 
Net amnesty schemes such as Fishing for Litter create incentives for the collection and responsible disposal of end of life fishing gear. These schemes address the root cause for many net abandonments, which is the financial cost of their disposal. Fishing nets are often made from extremely high quality plastics to ensure suitable strength, which makes them desirable for recycling. Initiatives like Healthy Seas are connecting environmental cleanup projects to manufacturers to re-use these materials. Recycled waste nets can be made into yarn and consumer products, such as swimwear. In Australia, the Carpentaria Ghost Nets Program has collaborated with indigenous communities to increase awareness of ghost nets and to foster long term solutions. The program has trained indigenous northern Australians in scouting for ghost nets and in removing ghost nets and other plastic pollution. See also Drift netting Monofilament fishing line#Environmental impact The Derelict Crab Trap Program Plastic pollution General: Marine debris List of environmental issues Notes 1 References Macfadyen G, Huntington T and Cappell R (2009) Abandoned, lost or otherwise discarded fishing gear FAO: Fisheries and Aquaculture, Technical paper 523. Rome. External links Film on Ghost nets in the Indian Ocean Ghost nets in the Indian Ocean Ghost Diving - International cleanup projects Ghost Net Project Carpentaria Ghost Net Programme Team Hunts Deadly 'Ghost Nets' in the Pacific Tracking Down Ghost Nets Ghost nets kill sea turtles Ghost nets hurting marine environment: UN report Environmental impact of fishing Nets (devices) Water pollution
Ghost net
Chemistry,Environmental_science
1,676
12,176,463
https://en.wikipedia.org/wiki/Tumbala%20climbing%20rat
The Tumbala climbing rat (Tylomys tumbalensis) is a species of rodent in the family Cricetidae. It is found in Mexico, where it is known only from one locality in Tumbalá, Chiapas. The species is threatened by deforestation. References Musser, G. G. and M. D. Carleton. 2005. Superfamily Muroidea. pp. 894–1531 in Mammal Species of the World a Taxonomic and Geographic Reference. D. E. Wilson and D. M. Reeder eds. Johns Hopkins University Press, Baltimore. Tylomys EDGE species Mammals described in 1901 Taxonomy articles created by Polbot
Tumbala climbing rat
Biology
138
1,835,519
https://en.wikipedia.org/wiki/Caihua
Caihua (), or "colour painting", is the traditional Chinese decorative painting or polychrome used for architecture and one of the most notable and important features of historical Chinese architecture. It held a significant artistic and practical role within the development of East-Asian architecture, as Caihua served not only decoration but also protection of the predominantly wooden architecture from various seasonal elements and hid the imperfections of the wood itself. The use of different colours or paintings would be according to the particular building functions and local regional customs, as well as historical periods. The choice of colours and symbology are based on traditional Chinese philosophies of the Five Elements and other ritualistic principles. The Caihua is often separated into three layer structures; timber or lacquer layer, plaster layer, and pigment layer. History The origins of Caihua can be traced back to the Zhou dynasty, as the Zuo Zhuan and Guliang Zhuan detailed: The Rites of Zhou similarly records a ritualistic usage of motifs and colour, based on each respective aspects' corresponding symbolic value. Gallery See also Hexi Caihua Ancient Chinese wooden architecture Chinese architecture Yingzao Fashi Architecture of the Song Dynasty Dancheong References External links http://www.gd.gov.cn/zjgd/lnwh/fywh/ctms/content/post_108920.html Chinese art Architectural styles Architectural elements Chinese painting Architecture in China Chinese architectural history
Caihua
Technology,Engineering
299
7,490,775
https://en.wikipedia.org/wiki/Shaping%20codes
In digital communications, shaping codes are a method of encoding that changes the distribution of signals to improve efficiency. Description Typical digital communication systems use M-ary Quadrature Amplitude Modulation (M-QAM) to communicate through an analog channel (specifically a communication channel with Gaussian noise). For higher bit rates (M), the minimum signal-to-noise ratio (SNR) required by a QAM system with error-correcting codes is about 1.53 dB higher than the minimum SNR required for a Gaussian source (>30% more transmitter power), as given by the Shannon–Hartley theorem C = B log2(1 + S/N), where C is the channel capacity in bits per second; B is the bandwidth of the channel in hertz; S is the total signal power over the bandwidth and N is the total noise power over the bandwidth. S/N is the signal-to-noise ratio of the communication signal to the Gaussian noise interference expressed as a straight power ratio (not as decibels). This 1.53 dB difference is called the shaping gap. Typically, digital systems will encode bits with uniform probability to maximize the entropy. Shaping codes act as a buffer between the digital source and the modulator of the communication system: they receive uniformly distributed data and convert it to a Gaussian-like distribution before presenting it to the modulator. Shaping codes are helpful in reducing transmit power, and thus reduce the cost of the power amplifier and the interference caused to other users in the vicinity. Application Some of the methods used for shaping are described in the trellis shaping paper by Dr. G. D. Forney Jr. Shell mapping is used in V.34 modems to get a shaping gain of 0.8 dB. All the shaping schemes in the literature try to reduce the transmitted signal power. In the future this may find application in wireless networks, where interference from other nodes is becoming a major issue. References Trellis shaping for modulation systems Digital audio Data transmission Error detection and correction Information theory Telecommunication theory
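The 1.53 dB shaping gap quoted above can be checked numerically: for equal entropy, a uniformly distributed signal needs pi*e/6 times the average power of a Gaussian-distributed one, and 10*log10(pi*e/6) is approximately 1.53 dB. The short Python sketch below is illustrative only and is not part of the original article; the 1 MHz bandwidth and 20 dB SNR in the capacity example are arbitrary assumed values.

import math

# Ultimate shaping gap: ratio of average power of a uniform distribution to a
# Gaussian distribution of the same entropy, expressed in decibels (~1.53 dB).
shaping_gap_db = 10 * math.log10(math.pi * math.e / 6)
print(f"Ultimate shaping gap: {shaping_gap_db:.2f} dB")

# Shannon-Hartley capacity C = B * log2(1 + S/N), with S/N as a plain power ratio.
def capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example channel: 1 MHz bandwidth at 20 dB SNR (~6.66 Mbit/s).
print(f"Capacity: {capacity_bps(1e6, 20.0) / 1e6:.2f} Mbit/s")

Running the sketch prints a shaping gap of about 1.53 dB, matching the figure cited in the description above.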
Shaping codes
Mathematics,Technology,Engineering
391
27,606,794
https://en.wikipedia.org/wiki/Chemerin%20peptide
Chemerin peptides are short peptides (on the order of 9 amino acids) that are produced from the carboxyl terminus of the chemokine chemerin. They display the same activities as chemerin, although with higher efficacy and potency. A particular synthetic chemerin-derived peptide, termed C15, was developed at Oxford University. It showed anti-inflammatory activities. Intraperitoneal administration of C15 (0.32 ng/kg) to mice before zymosan challenge conferred significant protection against zymosan-induced peritonitis, suppressing neutrophil (63%) and monocyte (62%) recruitment with a concomitant reduction in proinflammatory mediator expression. C15 was found to promote phagocytosis and efferocytosis in peritoneal macrophages at picomolar concentrations. C15 enhanced macrophage clearance of microbial particles and apoptotic cells by 360% in vitro. References See also Chemerin CMKLR1 Chemokine Efferocytosis
Chemerin peptide
Chemistry,Biology
234
57,489,094
https://en.wikipedia.org/wiki/Abr%2C%20rhogef%20and%20gtpase%20activating%20protein
Active breakpoint cluster region-related protein is a protein that in humans is encoded by the ABR gene. Function The ABR (activator of RhoGEF and GTPase) gene has 13 reported alternatively spliced transcript variants. The gene is ubiquitously expressed across 23 human tissues, including the heart and brain. The protein encoded by ABR shares homology with the protein encoded by the breakpoint cluster region (BCR) gene located on chromosome 22 and has been shown to share similar protein functions. Additionally, the protein encoded by this gene contains a GTPase-activating protein domain, a domain found in members of the Rho family of GTP-binding proteins. ABR is an inhibitor of Ras-related C3 botulinum toxin substrate 1 (RAC1), a protein found to influence cell growth and cell motility and to maintain adhesion to neighboring epithelial cells. Recent papers suggest ABR has tumor-suppressor properties in leukemia because of its role as a RAC1 inhibitor, and it is being researched as a potential therapy in leukemia patients. Other studies suggest ABR plays an important role in vestibular morphogenesis. References Further reading
Abr, rhogef and gtpase activating protein
Chemistry
252
43,881,656
https://en.wikipedia.org/wiki/Book%20of%20Nut
The Book of Nut (original title: The Fundamentals of the Course of the Stars) is a collection of ancient Egyptian astronomical texts focusing on mythological subjects, cycles of the stars of the decans, and the movements of the moon, sun, and planets on sundials. It is titled for the sky goddess Nut who is depicted in some copies of the text arching over the earth. She is supported by the god of the air Shu. Texts in the Book of Nut include material from different periods of Egyptian history. The original name of the book, The Fundamentals of the Course of the Stars, was discovered by Alexandra von Lieven in one of the manuscript fragments and published in 2007. One of the major themes of the Book of Nut is the concept of sunrise as the mythological rebirth. Texts There are nine different copies of the book and they have various dates. Three copies are found on monuments and six more are found in the papyri of the second century AD found in the temple library in ancient Tebtunis, a town in the southern Faiyum Oasis. These include texts both in hieratic and demotic; some parts are written in hieroglyphs as well. Three texts of the Book of Nut are preserved on monuments: the tomb of Ramses IV, The Cenotaph of Seti I at the Osireion in Abydos, and the tomb of the noblewoman Mutirdis (TT410) of the 26th Dynasty. These monumental copies are written in hieroglyphs. Currently, the Tebtunis textual material is scattered all over the world due to its complex excavation and acquisition history. There are several thousand fragments of unpublished papyri held by various museums that are being evaluated by scholars. The most highly prized of the manuscripts are the demotic Carlsberg papyri 1, and 1a, because of their completeness. They were written by the same scribe. Other manuscripts are mostly fragmentary. There are substantial differences among all of these copies, indicating that the textual tradition of the Book of Nut was still very much alive even in the second century AD. History of scholarship The early Egyptologists gave a lot of attention to the astronomical parts of the Book of Nut. First available for modern research was the material from the tomb of Ramses IV, which included the astronomical painting of Nut and the list of the decans. The text was first used by Jean-François Champollion and Ippolito Rosellini, later copied by Heinrich Brugsch. A new edition was issued in 1990 by Erik Hornung. In 1933, The Cenotaph of Seti I at the Osireion in Abydos was discovered. This was important because this version represents the oldest text. Adriaan de Buck's translation of the cryptographic sections of the Book of Nut significantly advanced the studies. In 1977, Jan Assmann published another relevant text from the tomb of the noblewoman Mutirdis, dating to the 26th Dynasty. Some important new material has been published since 2007. Dates of composition Most likely the Book of Nut text evolved over a long period of time going back before the time of Seti I. The astronomical data included in the decan list below the body of Nut point to the 12th Dynasty, the time of Sesostris III. There are two different decan lists that cannot be reconciled, so one of them must be secondary. According to von Lieven, the Middle Kingdom data is secondary, and she suggests that the earlier list goes back to the Old Kingdom, the first of the three major divisions of dynasties. 
See also Egyptian calendar Notes Bibliography External links Early Egyptian Constellations - The decan stars, by Gary David Thompson The Papyrus Carlsberg Collection - Inventory of Published Papyri Ancient Egyptian religion Ancient Egyptian texts Ancient astronomy Astronomical catalogues Astronomy in Egypt Egyptian calendar
Book of Nut
Astronomy
784
1,930,917
https://en.wikipedia.org/wiki/Free%20clinic
A free clinic or walk in clinic is a health care facility in the United States offering services to economically disadvantaged individuals for free or at a nominal cost. The need for such a clinic arises in societies where there is no universal healthcare, and therefore a social safety net has arisen in its place. Core staff members may hold full-time paid positions, however, most of the staff a patient will encounter are volunteers drawn from the local medical community. Free clinics are non-profit facilities, funded by government or private donors, that provide primary care, preventive healthcare, and additional health services to the medically underserved. Many free clinics are made possible through the service of volunteers, the donation of goods, and community support, because many free clinics receive little government funding. Regardless of health insurance coverage, all individuals can receive health services from free clinics. However, said services are intended for persons with limited incomes, no health insurance, and/or who do not qualify for Medicaid and Medicare. Also included are underinsured individuals; meaning those who have only limited medical coverage (such as catastrophic care coverage, but not regular coverage), or who have insurance, but their policies include high medical deductibles that they are unable to afford. To offset costs, some clinics charge a nominal fee to those whose income is deemed sufficient to pay a fee. Clinics often use the term "underinsured" to describe the working poor. Most free clinics provide treatment for routine illness or injuries; and long-term chronic conditions such as high blood pressure, diabetes, asthma and high cholesterol. Many also provide a limited range of medical testing, prescription drug assistance, women's health care, and dental care. Free clinics do not function as emergency care providers, and most do not handle employment related injuries. Few, if any, free clinics offer care for chronic pain as that would require them to dispense narcotics. For a free clinic such care is almost always cost-prohibitive. Handling narcotics requires a high level of physical security for the staff and building along with more paperwork and government regulation compared to what other prescription medications require. History At the turn of the century, healthcare in the United States became privatized despite many efforts by President Roosevelt and others to establish national health insurance, causing the healthcare system to neglect the lower classes. Starting in the 1950s, there have been incremental reforms to offset healthcare market failures and to better deliver healthcare services to low-income and underserved populations, including Medicaid and Medicare. Since then, there have still been increasing inequality and issues in coverage, access, cost, and quality of healthcare in America. In the United States, free clinics are a way to address this inequality and lack of universal healthcare, and as part of a health safety net. The modern concept of a free clinic originated in 1950 in Detroit and was named the St. Frances Cabrini Clinic. However, the first documented free clinic is considered to be the Haight Ashbury Free Medical Clinic in California which was started by Dr. David Smith in 1967. These clinics coined the phrase, "health care is a right not a privilege" and they served vulnerable veteran populations after the Vietnam war, many of whom struggled with drug abuse. 
The Haight Ashbury Free Clinic revolutionized the practice of handling substance abuse issues by holding national conferences and working directly with the Food and Drug Administration and other government agencies to create comprehensive policies and to destigmatize mental health conditions related to drug abuse. From there free clinics spread to other California cities and then across the United States, such as the Berkeley Free Clinic. Many free clinics were originally started in the 1960s and 1970s to provide drug treatments. Each one offered a unique set of services, reflecting the particular needs and resources of the local community. Some were established to provide medical services in the inner cities, while others opened in the suburbs, and many student-run free clinics have emerged that serve the underserved as well as provide a medical training site for students in the health professions. From 1968 through the 1970s, the Black Panther Party established several Peoples' Free Medical Clinics as part of their efforts to counter systemic discrimination against Black people in hospitals and private medical practices. The Peoples' Free Medical Clinics served as an advocating body as well. These clinics helped integrate the sector of health care into political and social spheres within the United States. Their efforts played a key role in social reform in health care that ultimately led to the passage of the Medicare and Medicaid Act of 1965. In 2001 the National Association of Free and Charitable Clinics (NAFC) was founded in Washington, D.C. to advocate for the issues and concerns of free and charitable clinics. Free clinics are defined by the NAFC as "safety-net health care organizations that utilize a volunteer/staff model to provide a range of medical, dental, pharmacy, vision and/or behavioral health services to economically disadvantaged individuals. Such clinics are 501(c)3 tax-exempt organizations, or operate as a program component or affiliate of a 501(c)(3) organization." In time, various state and regional organizations were formed, including the Free Clinics of the Great Lakes Region, the Texas Association of Charitable Clinics (TXACC), the North Carolina Association of Free Clinics, the Ohio Association of Free Clinics and the Virginia Association of Free and Charitable Clinics (est. 1993). In 2005 Empowering Community Healthcare Outreach (ECHO) was established to assist churches and other community organizations in starting and running free and charitable clinics. In 2010, the Patient Protection and Affordable Care Act (ACA) was passed as a reform that aimed to make health insurance more accessible to low- and middle-class families. Specifically, it subsidized low-income populations' purchase of individual coverage. It also incentivized employers to provide coverage to low-income employees, and made it mandatory for states to expand Medicaid to include non-disabled and young people with incomes below 138% of the federal poverty line. Studies show the ACA has been successful in redressing inequality in access to healthcare. In 2015, there was a "4.2 percentage point increase in full-year insurance for the poor and 5.3 point increase for the near-poor". However, the implementation of the ACA proved to be more challenging, as some states chose not to enforce it. Additionally, the ACA does not support undocumented immigrants, which means that health care outside of the free clinic remains relatively inaccessible to those who are undocumented.
The ACA also does not reach homeless populations. Barriers include this population's low healthcare literacy, the requirements of residency verification, their difficulty accessing and applying for social services, their limited access to transportation, and the fact that many healthcare facilities do not accept Medicaid. It is not yet clear if and how the Trump administration will influence healthcare reform, specifically access to healthcare for the most vulnerable. Trump often speaks out against the ACA; some scholars worry that President Trump's 2017 executive order, which eliminated cost-sharing reductions in the ACA, will result in an overall decrease in the number of people who can access affordable healthcare, thus emphasizing the need for free clinics. Patient demographics Of the 41 million uninsured people in the United States, the 355 officially registered free clinics in the country are only able to provide services to about 650,000 of them. On average, free clinics have annual budgets of $458,028 and have 5,989 annual patient visits. In another survey of three free clinics, 82% of patients reported that they began using a free clinic because they are uninsured, and 59% were referred by friends or family. A similar study found that 65% were unemployed, with students making up 17%. There also seems to be little correlation between education or employment status and insurance coverage among free clinic patients. Free clinic patients are mainly low-income, uninsured, female, immigrants, or minorities. About 75% of free clinic patients are between the ages of 18 and 64 years old. According to another study, 70% of all patients 20 years and older make less than US$10,000 a year. In a 1992-1997 survey of the Charlottesville Free Clinic, the patient body consisted largely of a low-income working class that reflected the demographics of the Charlottesville area. Most of the patients reported that without the free clinic, they would either seek the emergency room or do nothing at all if they got sick. There has been a shift over the years from patients seeking urgent care to patients seeking treatment for chronic illnesses. Combined, these factors suggest that free clinics will require additional resources in order to meet the rising demands of their patient population. In a study of the Miami Rescue Mission Clinic in Florida, the most common conditions were mental health, circulatory system, and musculoskeletal system disorders. The most common of the mental health disorders were depressive disorders and anxiety disorders. Throughout multiple studies of patient demographics in metropolitan settings, there was a higher than national average prevalence of mental health disorders, obesity, diabetes, and smoking among free clinic patients. Operation and services Some free clinics specialize in providing primary care (acute care), while others focus on long-term chronic health issues, and many do both. Most free clinics start out seeing patients only one or two days per week, and then expand as they find additional volunteers. Because they rely on volunteers, most are only open a few hours per day, primarily in the late afternoon and early evening. Some free clinics are faith-based, meaning they are sponsored by and affiliated with a specific church or religious denomination, or they are interfaith and draw support from several different denominations or religions. Free clinics rely on donations for financial support.
The amount of money they take in through donations to a large degree determines how many patients they are able to see. Because they are unlikely to have the resources to see everyone who might need their help, they usually limit who they are willing to see to just those from their own community and the surrounding areas, and especially in chronic care will only see patients from within a limited set of medical conditions. Free clinics function as health care safety nets for patients who cannot afford or access other forms of healthcare. They provide essential services regardless of the patient's ability to pay. Hospital emergency rooms are required by federal law to treat everyone regardless of their ability to pay, so people who lack the means to pay for care often seek treatment in emergency rooms for minor ailments. These hospitals function as safety net hospitals. Treating people in the ER is expensive, though, and it ties up resources designed for emergencies. When a community has a free clinic, hospitals can steer patients with simple concerns to the free clinic instead of the emergency room. Free clinics can save hospital emergency rooms thousands of dollars. A $1 investment in a free clinic can save $36 in healthcare costs at another center. For this reason, most hospitals are supportive of free clinics. Hospitals are a primary source for equipment and supplies for free clinics. When they upgrade equipment, they will often donate the older equipment to the local free clinic. In addition some hospitals supply most or all of a local clinics day-to-day medical supplies, and some do lab work free of cost as well. Social workers Social workers support the work of medical practitioners in free clinics. They work alongside medical staff in the clinical setting and help staff address social factors that may be affecting patient health. Their role is to bring a personalized touch to patient care alongside the medical/health perspective. Caseworkers and volunteers are more able to connect with the patients on a personal level. More and more free clinics, such as the Texas Free Clinic, emphasize the importance of simply talking to the patients and creating a one-on-one relationship with them, as research has shown this has a positive impact on the patients’ mental health. This ensures a more comprehensive approach to patient care which has been deemed necessary by many health studies conducted on the work of free clinics. Since free clinics serve as a resource for marginalized groups, it is essential that their practices provide comprehensive care that reflects the concerns of their patients. The presence of a social worker in a free health clinic would bring someone along who has been trained to understand how situational factors could affect a patient's health. With a social worker present, patients would have more representation and acknowledgement regarding their experiences. Numerous studies indicate that the integration of social workers into free clinics remains a work in progress. Even when free clinics offer the services of a social worker, a significant number of patients do not avail themselves of these resources. This can be attributed to a lack of awareness about the services provided or reluctance to seek assistance. Research findings suggest that a primary factor contributing to the underutilization of social work services is the insufficient emphasis placed on them compared to acute medical care within clinic settings. 
To optimize the effectiveness of this model, it is crucial to establish a balanced presence of both social workers and medical staff within the clinical setting, with equal recognition of their importance to overall health. Medical malpractice liability Free clinics can be granted medical malpractice coverage through the Federal Tort Claims Act (FTCA). FTCA coverage includes health care professionals who are acting as volunteers. In addition it covers officers, board members, clinic employees, and individual contractors. Medical malpractice coverage does not occur automatically; each organization must be "deemed" eligible by the US Department of Health and Human Services. To be eligible, the clinic must be an IRS-recognized nonprofit that does not accept payments from insurance companies, the government, or other organizations for the services it performs. It also must not charge patients for services. It may receive donations from anyone and any organization; the stipulation is that it may not receive financial reimbursement for services rendered, which by definition a free clinic does not. The Volunteer Protection Act of 1997 provides immunity from tort claims such as negligence, bodily injury, and pain and suffering that might be filed against the volunteers of nonprofit organizations. Thus, volunteers working on behalf of a nonprofit free clinic are covered under the Volunteer Protection Act from most liability claims. Individual states may offer additional legal protections to free clinics, usually under the state's Good Samaritan laws. Free clinics must still carry general liability insurance, to cover non-medical liability, such as slips and falls in the parking lot. Prescription and Medical assistance programs Some pharmaceutical companies offer assistance programs for the drugs they manufacture. These programs allow those who are unable to pay for their medications to receive prescription drugs for free or at a greatly reduced cost. Many free clinics work to qualify patients on behalf of these programs. In some cases the clinic receives and then distributes the medications itself; in others it verifies that the patient is eligible for the program, and the medication is then shipped to the patient, or the patient receives the medication from a local pharmacy. Some free clinics' sole mission is to help those who do not have prescription drug coverage, and cannot afford their medications, to enroll in prescription assistance programs. Such clinics are known as "clinics without walls" because they dispense with the need to have their own building, exam rooms, or clinical equipment. More generally, there are also medical assistance programs being offered. For example, at a free clinic in central Texas, there is a heavy emphasis on partnering with local mental health programs in the area. Additionally, statistics showed that patients were able to engage and socialize with the facilitators working with such programs. Individuals within the clinic were able to learn from the presentations and educational workshops the assistance programs offered. Dentistry Some free clinics are able to assist with dental problems. This is handled either at the clinic itself, if the clinic has its own dental facilities and a dentist, or it is facilitated through a partnership with one or more local dentists who are willing to take referred patients for free.
For example, a clinic might have ten local dentists who will each accept two patients per month, so this allows the clinic to treat a total of twenty dental patients each month. Some clinics use a referral system to handle other forms of specialized medical care. Student-run Free Clinics Student-run clinics (SRCs) are an increasingly prevalent part of U.S. medical school curricula, and they are designed to improve health-care delivery to underserved populations. The vast majority of these clinics are free of charge, and they have been shown to result in high patient satisfaction. The preventive medicine interventions offered at these clinics have been shown to have significant health and economic impacts. Free clinics allow student volunteers to become more socially aware and culturally competent in their medical experience. Medical schools sometimes do not address social determinants of health or treatment of underserved populations, and medical students can use free clinic volunteering to learn about these issues. At free clinics, medical student volunteers learn to listen to the full history of their patients and treat them as a whole rather than a list of symptoms. Medical students also get invaluable hands-on experience that complements their classroom learning. Through direct patient interactions, students develop clinical skills, enhance their ability to take comprehensive patient histories, and learn to approach patient care from a holistic perspective. Additionally, working in SRCs exposes students to the realities of healthcare delivery in under-served communities, which allows them to foster a deeper understanding of the social determinants of health and to develop cultural competency. Medical students balance the power dynamic between the patient and provider, acting as a patient advocate. Furthermore, students who are exposed to SRCs are more likely than their peers to continue to work with underserved populations after graduation. An example of a student-run free clinic that addresses the social determinants of health is one at the University of Washington, called Students in the Community (SITC). This clinic is the only student-run clinic to be run out of a transitional housing facility for the homeless. This clinic model speaks to the embrace by student-run clinics of the increasingly prevalent holistic approach to healthcare, one that considers the social determinants of health, such as housing, as shown through its housing-first model. The Society of Student-Run Free Clinics (SSRFC) hosts a national inter-professional platform for student-run clinics. This allows the sharing of ideas, collaboration on research, and information about funding resources, and encourages the expansion of existing clinics as well as the cultivation of new ones. The SSRFC faculty network works to facilitate collaboration between student-run free clinics. Effectiveness There are several proposed advantages to free clinics. They tend to be located in communities where there is a great need for health care. Free clinics are more flexible in structure than established medical institutions. They are also much less expensive, hence the title "free clinic." Due to their small size, their organization tends to be more egalitarian and less hierarchical, which allows for more direct exchange of information across the clinic. Unlike regular practices, they also attempt to do more than just provide healthcare.
Some were created as political acts meant to advocate for socialized medicine and society. Evidence shows that patients served by SRCs experience improved access to care, better management of chronic conditions, and higher satisfaction with their healthcare experience. Furthermore, research conducted within SRCs contributes to the body of knowledge on effective strategies for addressing healthcare disparities and advancing health equity. By leveraging research findings, SRCs can advocate for resources, inform best practices, and continuously improve their services to better meet the needs of under-served populations. However, they do come with their own set of problems. Many free clinics lack funding and do not have enough volunteers. This can contribute to limited operating hours and can harm free clinics' ability to provide long-term, sustainable service. For instance, they are a solution aimed towards serving tens of millions of uninsured Americans, but they function solely on the spirit of altruism. Volunteers must be willing to be available during irregular hours of the day and provide professional-level care, all without the possibility of financial reimbursement. Additionally, the ability of free clinics to provide long-term, sustainable service and maintain continuity of care for patients is questionable, considering the instability of funding and providers. One proposition towards overcoming these challenges involves the creation of a national foundation that officially assists and connects free clinics, allowing them to evolve as necessary. In a national-level survey of patients and providers at free clinics, 97% of patients were satisfied with their care, and a further 77% preferred it over their prior care. 86% of patients relied on the clinic for primary care, and 80% of patients relied on them for pharmacy services. When asked what they would do if the free clinic did not exist, 47% would look for another free clinic, 24% would not seek care, 21% would not seek care due to costs, and 23% would use the emergency room. These responses suggest that free clinic care not only satisfies patients but also fulfills their healthcare needs. Location and Space Free clinics are usually located near the people they are trying to serve. In most cases they are located near other nonprofits that serve the same target community, such as food banks, Head Start, Goodwill Industries, the Salvation Army and public housing. Because free clinics often refer people to other medical facilities for lab work, dentistry, and other services, they may also be found in the same area of town as those medical facilities. Some clinics have working agreements with the other facilities that are willing to assist with the clinic's mission. Being close to the other medical facilities makes it easier for patients to get from one to the other. Contrary to a common assumption, currently existing free clinics were not necessarily established to respond to an increase in the number of individuals who cannot afford healthcare in a given community. The prevalence of free clinics in certain areas is due to the availability of financial and human resources. For example, being close to teaching hospitals, universities, and medical facilities makes it easier to find medically trained volunteers.
Furthermore, the lack of Federally Qualified Health Centers (FQHC) and other safety-net providers within a certain area often becomes the perceived need that motivates community leaders to establish a free clinic. Most free clinics start out using donated space; others start by renting or leasing space. In time and with enough community support, many go on to acquire their own buildings. Donated space may be an entire building, or it might be a couple of rooms within a church, hospital, or another business. Because the clinic will house confidential medical records, prescription medications, and must remain as clean as possible, donated space is usually set aside for the sole use of the clinic even when the clinic is closed. The National Association of Free & Charitable Clinics maintains a database of 1,200 free and charitable clinics. See also Community health centers in the United States Formulary (pharmacy) Health equity National Association of Free and Charitable Clinics Nonprofit organization Rural health clinic Volunteers in Medicine Marvin Ronning Barbara McInnis References External links Empowering Community Healthcare Outreach (ECHO) Safety Net Center/AmeriCares U.S. Medical Assistance Program Clinics in the United States Free goods and services Private aid programs
Free clinic
Biology
4,720
3,690,682
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20effect
The Eötvös effect is the change in measured Earth's gravity caused by the change in centrifugal acceleration resulting from eastbound or westbound velocity. When moving eastbound, the object's angular velocity is increased (in addition to Earth's rotation), and thus the centrifugal force also increases, causing a perceived reduction in gravitational force. Discovery In the early 1900s, a German team from the Geodetic Institute of Potsdam carried out gravity measurements on moving ships in the Atlantic, Indian, and Pacific oceans. While studying their results, the Hungarian nobleman and physicist Baron Roland von Eötvös (Loránd Eötvös) noticed that the readings were lower when the boat moved eastwards, higher when it moved westward. He identified this as primarily a consequence of Earth's rotation. In 1908, new measurements were made in the Black Sea on two ships, one moving eastward and one westward. The results substantiated Eötvös' claim. Formulation Geodesists use the following formula to correct for velocity relative to Earth during a gravimetric run: a_r = 2Ωu cos(ϕ) + (u² + v²)/R. Here, a_r is the relative acceleration, Ω is the rotation rate of the Earth, u is the velocity in longitudinal direction (east–west), ϕ is the latitude where the measurements are taken, v is the velocity in latitudinal direction (north–south), and R is the radius of the Earth. The first term in the formula, 2Ωu cos(ϕ), corresponds to the Eötvös effect. The second term is a refinement that under normal circumstances is much smaller than the Eötvös effect. Physical explanation The most common design for a gravimeter for field work is a spring-based design: a spring that suspends an internal weight. The suspending force provided by the spring counteracts the gravitational force. A well-manufactured spring has the property that the amount of force that the spring exerts is proportional to the extension of the spring from its equilibrium position (Hooke's law). The stronger the effective gravity at a particular location, the more the spring is extended; the spring extends to a length at which the internal weight is sustained. Also, the moving parts of the gravimeter will be dampened, to make it less susceptible to outside influences such as vibration. For the calculations it will be assumed that the internal weight has a mass of 10,000 grams. It will be assumed that for surveying a method of transportation is used that gives good speed while moving very smoothly: an airship. Let the cruising velocity of the airship be u (10 m/s in the equatorial example below, 25 m/s in the example at 60 degrees latitude). Motion along the equator To calculate what it takes for the internal weight of a gravimeter to be neutrally suspended when it is stationary with respect to the Earth, the Earth's rotation must be taken into account. At the equator, the velocity of Earth's surface is about 465 m/s. The amount of centripetal force required to cause an object to move along a circular path with a radius of 6378 kilometres (the Earth's equatorial radius), at 465 m/s, is about 0.034 newtons per kilogram of mass. For a 10,000-gram internal weight, that amounts to about 0.34 newtons. The amount of suspension force required is the mass of the internal weight (multiplied by the acceleration of gravity) minus those 0.34 newtons. In other words: any object co-rotating with the Earth at the equator has its measured weight reduced by 0.34 percent, thanks to the Earth's rotation. When cruising at 10 m/s due East, the total velocity becomes 465 + 10 = 475 m/s, which requires a centripetal force of about 0.0354 newtons per kilogram.
Cruising at 10 m/s due West, the net velocity is 465 − 10 = 455 m/s, requiring about 0.0325 newtons per kilogram. So if the internal weight is neutrally suspended while cruising due East, after reversing course it will not be neutrally suspended any more: the apparent mass of the 10,000-gram internal weight will increase by about 3 grams, and the spring of the gravimeter must extend some more to accommodate this greater weight. In high-performance meteorological models, this effect needs to be taken into account on a terrestrial scale. Air masses with significant velocity with respect to the Earth have a tendency to migrate to another altitude, and when the accuracy demands are strict this must be taken into account. Derivation of the formula for simplified case The following is a derivation of the formula for motion along the Equator. A convenient coordinate system in this situation is the inertial coordinate system that is co-moving with the center of mass of the Earth. Then the following is valid: objects that are at rest on the surface of the Earth, co-rotating with the Earth, are circling the Earth's axis, so they are in centripetal acceleration with respect to that inertial coordinate system. What is sought is the difference in centripetal acceleration of the surveying airship between being stationary with respect to the Earth and having a velocity with respect to the Earth. The following derivation is exclusively for motion in east–west or west–east direction. Notation: a_u is the total centripetal acceleration when moving along the surface of the Earth; a_s is the centripetal acceleration when stationary with respect to the Earth; Ω is the angular velocity of the Earth (one revolution per sidereal day); ω is the angular velocity of the airship relative to the angular velocity of the Earth; Ω + ω is the total angular velocity of the airship; u = ωR is the airship's velocity (velocity relative to the Earth); R is the Earth's radius. The difference between the two centripetal accelerations is then a_u − a_s = (Ω + ω)²R − Ω²R = 2ΩωR + ω²R = 2Ωu + u²/R. It can readily be seen that this expression for motion along the equator is the special case of the more general formula given above for any latitude, with cos(ϕ) = 1 and no north–south velocity (v = 0); along the equator the co-rotation velocity of the Earth's surface itself is ΩR ≈ 465 m/s. The second term represents the required centripetal acceleration for the airship to follow the curvature of the Earth. It is independent of both the Earth's rotation and the direction of motion. For example, when an aeroplane carrying gravimetric reading instruments cruises over one of the poles at constant altitude, the aeroplane's trajectory follows the curvature of the Earth. The first term in the formula is zero then, due to the cosine of the angle being zero, and the second term then represents the centripetal acceleration to follow the curvature of the Earth's surface. Explanation of the cosine in the first term The mathematical derivation for the Eötvös effect for motion along the Equator explains the factor 2 in the first term of the Eötvös correction formula. What remains to be explained is the cosine factor. Because of its rotation, the Earth is not spherical in shape; there is an equatorial bulge. The force of gravity is directed towards the center of the Earth. The normal force is perpendicular to the local surface. On the poles and on the equator the force of gravity and the normal force are exactly in opposite directions. At every other latitude the two are not exactly opposite, so there is a resultant force that acts towards the Earth's axis. At every latitude there is precisely the amount of centripetal force that is necessary to maintain an even thickness of the atmospheric layer. (The solid Earth is ductile.
Whenever the shape of the solid Earth is not entirely in equilibrium with its rate of rotation, then the shear stress deforms the solid Earth over a period of millions of years until the shear stress is resolved.) Again the example of an airship is convenient for discussing the forces that are at work. When the airship has a velocity relative to the Earth in the longitudinal (east–west) direction, then the weight of the airship is not the same as when the airship is stationary with respect to the Earth. If an airship has an eastward velocity, then the airship is in a sense "speeding". The situation is comparable to a racecar on a banked circuit with an extremely slippery road surface. If the racecar is going too fast then the car will drift wide. For an airship in flight that means a reduction of the weight, compared to the weight when stationary with respect to the Earth. If the airship has a westward velocity then the situation is like that of a racecar on a banked circuit going too slow: on a slippery surface the car will slump down. For an airship that means an increase of the weight. The first term of the Eötvös effect is proportional to the component of the required centripetal force perpendicular to the local Earth surface, and is thus described by a cosine law: the closer to the Equator, the stronger the effect. Motion along 60 degrees latitude The same gravimeter is used again; its internal weight has a mass of 10,000 grams. Calculating the weight reduction when stationary with respect to the Earth: an object located at 60 degrees latitude, co-moving with the Earth, is following a circular trajectory with a radius of about 3190 kilometres and a velocity of about 233 m/s. That circular trajectory requires a centripetal force of about 0.017 newton for every kilogram of mass; 0.17 newton for the 10,000-gram internal weight. At 60 degrees latitude, the component that is perpendicular to the local surface (the local vertical) is half the total force. Hence, at 60 degrees latitude, any object co-moving with the Earth has its weight reduced by about 0.08 percent, thanks to the Earth's rotation. Calculating the Eötvös effect: when the airship is cruising at 25 m/s towards the east, the total velocity becomes 233 + 25 = 258 m/s, which for the 10,000-gram weight requires a centripetal force of about 0.208 newton; the local vertical component is about 0.104 newton. Cruising at 25 m/s towards the west, the total velocity becomes 233 − 25 = 208 m/s, requiring about 0.135 newton; the local vertical component is about 0.068 newton. Hence at 60 degrees latitude the difference before and after the U-turn of the 10,000-gram internal weight is a difference of about 4 grams in measured weight, as illustrated in the numerical sketch below. (Strictly speaking, weight is a force and is measured in newtons, not in grams.) The diagrams also show the component in the direction parallel to the local surface. In meteorology and in oceanography, it is customary to refer to the effects of the component parallel to the local surface as the Coriolis effect. References The Coriolis effect PDF-file. 870 KB 17 pages. A general discussion by the meteorologist Anders Persson of various aspects of geophysics, covering the Coriolis effect as it is taken into account in Meteorology and Oceanography, the Eötvös effect, the Foucault pendulum, and Taylor columns. External links In 1915 Eötvös constructed a tabletop device that demonstrates the Eötvös effect. The device is among other instruments on display in a small museum that is dedicated to the work and the life of Eötvös.
Larger picture of the tabletop device from the museum website. Geodesy Topography
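The correction formula and the two worked examples above lend themselves to a quick numerical check. The Python sketch below is not part of the original article: the numerical constants (Earth's rotation rate, equatorial radius, and g = 9.81 m/s² used to express a force difference as an equivalent mass in grams) are standard values assumed for illustration, and the function names are hypothetical.

import math

OMEGA = 7.292e-5      # Earth's rotation rate in rad/s (one revolution per sidereal day)
R = 6_378_000.0       # Earth's equatorial radius in metres
G = 9.81              # m/s^2, used only to express a force difference as equivalent grams
MASS = 10.0           # kg: the 10,000-gram internal weight used in the examples above

def eotvos_acceleration(u: float, v: float, latitude_deg: float) -> float:
    """Relative acceleration a_r = 2*Omega*u*cos(phi) + (u^2 + v^2)/R, in m/s^2."""
    phi = math.radians(latitude_deg)
    return 2 * OMEGA * u * math.cos(phi) + (u ** 2 + v ** 2) / R

def east_west_weight_difference_grams(speed: float, latitude_deg: float) -> float:
    """Apparent weight change of the internal weight between eastbound and westbound runs."""
    delta_a = (eotvos_acceleration(speed, 0.0, latitude_deg)
               - eotvos_acceleration(-speed, 0.0, latitude_deg))
    return MASS * delta_a / G * 1000.0

# Equator, 10 m/s eastbound vs. westbound: about 3 g, as in the first worked example.
print(f"Equator, 10 m/s: {east_west_weight_difference_grams(10, 0):.1f} g")

# 60 degrees latitude, 25 m/s eastbound vs. westbound: about 3.7 g (the article rounds to 4 g).
print(f"60 deg, 25 m/s: {east_west_weight_difference_grams(25, 60):.1f} g")

Running the sketch reproduces the roughly 3-gram difference at the equator and the roughly 4-gram difference at 60 degrees latitude quoted in the worked examples; the (u² + v²)/R term cancels in the east-west comparison because it does not depend on the direction of travel.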
Eötvös effect
Mathematics
2,265
51,992,495
https://en.wikipedia.org/wiki/Asgardia
Asgardia, also known as the Space Kingdom of Asgardia and Asgardia the Space Nation, is a "virtual nation" formed by a group of people who have launched a satellite into Earth orbit. They refer to themselves as "Asgardians" and they have given their satellite the name Asgardia-1. They have declared sovereignty over the space occupied by and contained within Asgardia-1. The Asgardians have adopted a constitution and they intend to access outer space free of the control of existing nations and establish a permanent settlement on the Moon by 2043. Igor Ashurbeyli, the founder of the Asgardia Independent Research Center, proposed the establishment of Asgardia on 12 October 2016. The Constitution of the Space Kingdom of Asgardia was adopted on 18 June 2017 and it became effective on 9 September 2017. Asgardia's administrative center is located in Vienna, Austria. The Cygnus spacecraft that carried Asgardia-1 into space released Asgardia-1 and two other satellites on 12 November 2017. The Space Kingdom of Asgardia has claimed that it is now "the first nation to have all of its territory in space." Legal scholars doubt that Asgardia-1 can be regarded as a sovereign territory, and Asgardia has not yet attained the goal of being recognised as a nation state. Etymology Asgardia is taken from the name of one of the Nine Worlds in the Norse religion: Asgard. Home to the Æsir tribe of gods, Asgard is derived from Old Norse áss, "god", and garðr, "enclosure"; from Indo-European roots ansu-, "spirit, demon" (see cognate ahura; also asura), and gher-, "grasp, enclose" (see cognates garden and yard), essentially meaning "garden of gods." History Asgardia Independent Research Center The Asgardia Independent Research Center (AIRC), formerly the Aerospace International Research Center, was founded by Igor Ashurbeyli in 2013. In 2014, the AIRC began the publication of an international space journal, ROOM, of which Ashurbeyli is the editor-in-chief. On February 5, 2016, Ashurbeyli was awarded the UNESCO Medal for contributions to the development of nanoscience and nanotechnologies during a ceremony held at UNESCO headquarters, Paris. AIRC AAS is the only institute in Austria whose activity is fully dedicated to the research of the Solar System, extraterrestrial life and the Earth using space technology and satellite techniques. Since 2013, the AIRC staff constructed, developed and prepared for launch over 30 instruments and participated in the experiments of 15 space missions, for example: ESA's Mars Express mission, Rosetta (mission to a comet), Venus Express, BepiColombo (mission to Mercury) and CNES' DEMETER and TARANIS missions. AIRC has also collaborated with NASA on the IBEX mission. In 2015, the AIRC established a close collaboration with Asgardia. Founding On 12 October 2016, Ashurbeyli announced in a press conference in Paris, France, "the birth of the new space nation Asgardia." The ultimate aim of the project is to create a new nation that allows access to outer space free of the control of existing nations. The current space law framework, the Outer Space Treaty, requires governments to authorise and supervise all space activities, including the activities of non-governmental entities such as commercial and non-profit organisations; by attempting to create a nation, those behind Asgardia hope to avoid the tight restrictions that the current system imposes. It officially calls itself the "Space Kingdom of Asgardia."
"Asgardia" was chosen as a reference to Asgard, one of the nine worlds of Norse mythology; the world that was inhabited by the gods. People were invited to register for citizenship in 2016, with the aim of Asgardia then applying to the United Nations for recognition as a nation state. In less than two days, there were over 100,000 applications; within three weeks, there were 500,000. After tougher verification requirements were introduced, this declined, and stood at around 210,000 in June, 2017. There is no intention to actually move these members into space. Asgardia intends to apply for membership of the UN. The Constitution of the Space Kingdom of Asgardia was adopted on 18 June 2017 and it became effective on 9 September 2017. The Cygnus spacecraft that carried Asgardia-1 into space released Asgardia-1 and two other satellites on 12 November 2017. The Space Kingdom of Asgardia has claimed that it is now "the first nation to have all of its territory in space." Legal scholars doubt that Asgardia-1 can be regarded as a sovereign territory and Asgardia has not yet attained the goal of being recognised as a nation-state. As of March 2019, Asgardia says that it has more than 290,000 citizens and more than 1,040,000 followers around the world. Governance The Constitution of Asgardia divides Governance of Asgardia into three branches: (1) a legislative branch named the "Parliament," (2) an executive branch named the "Government," and (3) a judicial branch named the "Court." Parliament The Parliament is composed of 150 nonpartisan members and each member is referred to as a "Member of Parliament" (MP). The Members of Parliament elect one Member to the office of "Chairman of the Parliament." The Members of Parliament also appoint the "Chairman of the Government." The Parliament has 12 permanent committees; the Chairman of Parliament of Asgardia is Mr. Lembit Öpik. Executive branch The Head of Nation is the most senior official of the executive branch (i.e., the Government). The Head of Nation is elected to a 5-year term of office. The Head of Nation may dissolve the Parliament and may then order the holding of parliamentary elections. The Head of Nation may initiate legislative proposals and may veto acts adopted by the Parliament. The Head of Nation may issue decrees that must be obeyed by governmental bodies and by the citizens of Asgardia. The Head of Nation is Igor Ashurbeyli. The Chairman of the Government supervises 12 Ministers. Each Minister supervises the operation of one Government Ministry. Each of the permanent committees of Parliament monitors the operation of one Government Ministry. The Parliament may invite Ministers to attend meetings of the Parliament. Judicial branch The judicial branch includes a "Supreme Justice," who supervises the operation of four judicial panels: (1) a "constitutional" panel, (2) a "civil" panel, (3) an "administrative" panel, and (4) a "criminal" panel. The Supreme Justice is appointed by the "Head of Nation." The "Justices" who serve on the judicial panels are appointed by the Parliament. Asgardia's Supreme Justice is Zhao Yun. Zhao, head of the Department of Law at The University of Hong Kong, was appointed as Asgardia's Supreme Justice on 24 June 2018 during the first parliamentary session in Vienna, where he was introduced to the elected Members of Parliament. Mayoral elections The mayoral elections took place in the period between 1 August – 9 September 2018. 
Based on the results of the first stage of the mayoral elections of Asgardia, offices were taken by the mayors of 44 cities from 12 October 2018. From 12 October 2018, the Head of Nation directed that elections of the mayors of Asgardia continue until the Parliament passes the Bill "On Mayors of Asgardia". Until the Parliament has passed the Bill "On Mayors of Asgardia," elected mayors will report to the Head of the Nation of Asgardia. Key people Head of Nation – Igor Ashurbeyli; Chairman of Parliament – Lembit Öpik; Head of the Government – Lena De Winne; Supreme Justice – Zhao Yun. Space activity Asgardia intends to launch a series of satellites into Earth orbit. Its first satellite was successfully launched by Orbital ATK on 12 November 2017 as part of an International Space Station resupply mission. It was a two-unit CubeSat, manufactured and deployed into orbit by NanoRacks, and was named Asgardia-1. The overall goal of the mission was to demonstrate the long-term storage of data on a solid-state storage device operating in low Earth orbit. The spacecraft had a 512 gigabyte solid-state storage device. The data stored in this device was to be periodically checked for data integrity and function. Before the launch, the data storage device was loaded with things like family photos supplied by the first 1,500,000 members of Asgardia. After the spacecraft reached orbit, data could be uploaded or downloaded using the Globalstar satellite network. Asgardia-1 was boosted to space and then deployed by US companies on a NASA-funded mission, so the satellite falls under US jurisdiction. Asgardia intends to partner with a non-signatory to the Outer Space Treaty (OST), perhaps an African state such as Ethiopia or Kenya, in the hopes of circumventing the OST's restriction on nation-states claiming territory in outer space. The satellite was expected to have a lifetime of 5 years before its orbit decayed and it burned up on reentry. On 12 September 2022, Asgardia-1 reentered the atmosphere. A continuously updated map showing the location of Asgardia-1 in its orbit was hosted by NearSpace Launch, Inc.; the satellite (NORAD satellite identification number 43049) is also being tracked by Satflare. Often described as a billionaire, Ashurbeyli has said that he is currently solely responsible for funding Asgardia, and that members will not be funding the planned first satellite launch. Although the cost has not been made publicly available, NanoRacks have said that similar projects cost $700,000. The project intends to move to crowdfunding to finance itself. Sa'id Mosteshar, of the London Institute of Space Policy and Law, says this suggests that Asgardia lacks a credible business plan. A company, Asgardia AG, has been incorporated, and members can buy shares in it. Asgardia wants to enable its founders' companies to use Asgardia's satellite network for their own services and business activities. These are to be settled via the cryptocurrency Solar and the reserve currency Lunar. Eventually, Asgardia hopes to have a colony in orbit. This will be expensive: the International Space Station cost $100bn to build, and flights to it cost over $40m per launch. Asgardia has been compared to the troubled Mars One project, which aims to establish a permanent colony on Mars, although Asgardia's organisers point out that setting up a small nation in orbit will be a lot easier than colonising distant Mars. Other proposed goals for the future include shielding the Earth from asteroids and coronal mass ejections, and a Moon base.
Legal status Historical There has been at least one previous attempt to set up an independent nation in space. The Nation of Celestial Space, also known as Celestia, was formed in 1949 by James Mangan and claimed all of space. He banned atmospheric nuclear testing and issued protests to the major powers at their encroachment on his territory, but was ignored by both the powers and the UN. However, modern communications mean that Asgardia has a better ability to organise its claim and perhaps raise funds for the satellite that would give it a physical presence in outer space. Recognition and territorial claims Both UN General Assembly Resolution 1962 (XVIII) and the Outer Space Treaty (OST) of 1967 have established all of outer space as an international commons by describing it as the "province of all mankind" and, as a fundamental principle of space law, declaring that space, including Moon and other astronomical objects, is not subject to any national sovereignty claim. Article VI of the Outer Space Treaty vests the responsibility for activities in space to States Parties, regardless of whether they are carried out by governments or non-governmental entities. Article VIII stipulates that the State Party to the Treaty that launches a space object shall retain jurisdiction and control over that object. According to Sa'id Mosteshar of the London Institute of Space Policy and Law: "The Outer Space Treaty... accepted by everybody says very clearly that no part of outer space can be appropriated by any state." Without self-governing territory in space where citizens are present, Mosteshar suggested that the prospect any country would recognise Asgardia was slim. Ram Jakhu, the director of McGill University's Institute of Air and Space Law, and Asgardia's legal expert, believes that Asgardia will be able to fulfil three of the four elements that the UN requires when considering if an entity is a state: citizens; a government; and territory, being an inhabited spacecraft. In that situation, Jakhu considers that fulfilling the fourth element, gaining recognition by the UN member states, will be achievable, and Asgardia will then be able to apply for UN membership. The Security Council would then have to assess the application, as well as obtain approval from two-thirds of the members of the General Assembly. Joanne Gabrynowicz, an expert in space law and a professor at the Beijing Institute of Technology's School of Law, believes that Asgardia will have trouble attaining recognition as a nation. She says there are a "number of entities on Earth whose status as an independent nation have been a matter of dispute for a long time. It is reasonable to expect that the status an unpopulated object that is not on Earth will be disputed." Christopher Newman, an expert in space law at the UK's University of Sunderland, highlights that Asgardia is trying to achieve a "complete re-visitation of the current space-law framework," anticipating that the project will face significant obstacles with getting UN recognition and dealing with liability issues. The Outer Space Treaty requires the country that sends a mission into space to be responsible for the mission, including any damage it might cause. Data security As Asgardia is involved in the storing of private data, there could be legal and ethical issues. For the moment, as the Asgardian satellite is being deployed to orbit by US companies, it will fall under US jurisdiction and data stored on the satellite will be subject to US privacy laws. 
Economy The ideological component of Asgardia's economy is based on two pillars. The first: in Asgardia, citizens must become owners of the monetary system; the government is simply a middleman, broker and guarantor of monetary transactions. The second: in Asgardia, every citizen must be a participant in the distribution of the Nation's profit. 'Residents' are required to pay a 100 euro or 110 US dollar fee. Legal currency The Head of Nation charged his Administration with holding a contest on the main national Earth currencies in order to determine the initial rate of the Solar, the cryptocurrency to be used by Asgardians. He also charged the Government with introducing a Bill on the National Currency of Asgardia to Parliament. The second Digital Parliamentary Session (the third Parliamentary session) of Asgardia, which took place on 10–12 January 2019, approved the Act of National Currency and Basic Principles of Economic and Financial System of Asgardia. Parliament voted in favour of tasking the Government with drafting legislation regarding the economic system and the national currency of Asgardia by the next Parliamentary session. The financial component of Asgardia's economy is based upon its two national currencies. First, the Solar: because the sun shines for all on Earth, the Solar is to become a universal payment currency, convertible on the exchanges not just into the hard currencies of earthly nations but also into legitimate cryptocurrencies. Second, the Lunar, which will be an exclusive currency just for the citizens of Asgardia. The Lunar will be an internal financial and monetary asset that confirms the citizenship of Asgardia. Like any asset, it is subject to exchange, sale, loan, gifting, inheritance and more. It is also listed on the exchanges. In January 2019, Asgardia chose the basket of currencies by a vote. Using the results of this vote, the Ministry of Finance and its counterpart, the parliamentary Finance Committee, will analyse and examine how the Solar may be freely exchanged against those currencies in open markets and at what future exchange rates. The following 12 currencies have been selected: US Dollar; Euro; British Pound; Japanese Yen; Canadian Dollar; Swiss Franc; Hong Kong Dollar; Mexican Peso; Australian Dollar; Singapore Dollar; Norwegian Krone; Swedish Krona. Economic forum On 26–28 October 2018, the First Economic Forum of Asgardia was held in Nice, France. The Forum was attended by representatives from the professional community, including economists, finance professionals, and specialists in the development of currency systems, cryptocurrencies and investment tools, from Austria, Belgium, Denmark, India, Germany, the Netherlands, Russia, South Africa, Turkey, the United States, the UK and other countries. The speakers presented proposed models of Asgardia's financial system and economy, models of its monetary system, and issues involved in creating a balanced financial and economic system for Asgardia. A Memorandum giving a general overview and outlining the next steps in developing Asgardia's economic system was adopted at the Forum. Among other things, it was decided to present Asgardia's two-currency model at the World Economic Forum in Davos in January 2019 and to develop draft legislation on the national currencies of Asgardia for introduction to the Parliament of Asgardia. On 22–25 January 2019, the Asgardian delegation attended the World Economic Forum in Davos, Switzerland.
The Asgardian representatives participated in two sessions, one economic and one cultural, at the Caspian Week Conference 2019. The Caspian Week Conference is a meeting of global leaders, visionaries and experts held within the Davos Forum. The 2019 edition was the third such conference since it was first held in 2017. References External links Space advocacy organizations Space colonization Organizations established in 2016 2017 in outer space
Asgardia
Astronomy
3,747
5,726,598
https://en.wikipedia.org/wiki/Versatile%20Toroidal%20Facility
The Versatile Toroidal Facility (VTF) is a research group within the Physics Research Division of the MIT Plasma Science and Fusion Center at the Massachusetts Institute of Technology. The VTF is a laboratory focused on studying the phenomenon of magnetic reconnection. For this purpose the group has a small tokamak designed to observe rarefied plasmas with probes. These probes measure electric and magnetic field behavior as well as various plasma characteristics in order to better understand the poorly understood processes involved in magnetic reconnection. The VTF is a fundamental physics research group, and its research has wide-ranging and immediate impact on our understanding of such plasma-related subjects as solar flares, the aurora borealis, magnetic confinement fusion, and magnetohydrodynamic theory in general. The VTF was built and originally led by Dr. Marcel Gaudreau, and prior to its retirement, was led by Dr. Miklos Porkolab and Dr. Jan Egedal, all MIT faculty at the time. External links PSFC homepage VTF homepage Massachusetts Institute of Technology Plasma physics facilities Tokamaks
Versatile Toroidal Facility
Physics
228
15,161,433
https://en.wikipedia.org/wiki/Cat%27s%20paw%20%28tool%29
A cat's paw or cat's claw is a metal hand tool used for extracting nails, typically from wood, using leverage. A standard tool in carpentry, it has a sharp V-shaped tip on one or both ends, which is driven into the wood by a hammer to capture the nailhead. Essentially, it is a smaller, more ergonomic, purpose-designed crowbar. Historically, the cat's paw had a single, significantly rounder, more cup-shaped extracting head, giving it its name. Today, the norm is to have two much narrower and more pointed heads offset 90 degrees (in plane) from one another, allowing the bar to be pressed fully down when using the tip on the long end without damaging the surface the free end contacts. By the physics of its design, the tip on the short end has substantially more leverage, but it is not always convenient to set with a hammer. Tool stock is typically hexagonal, though it may be round or rectangular; rectangular stock is sometimes flattened on its long end to create a combination pry bar/nail extractor. Terms for each type used by popular retail outlets include "claw bar" when it has a claw on each end, and "moulding bar" if one end is flat. The cat's paw is well suited to demolition work, able to remove nails from wood, synthetic wood, and concrete, but because it tears up the surface around the nailhead it is used only with care in finish work. History Prior to the advances of the Industrial Revolution, nails were individually hand-made by blacksmiths. As a result, they were generally far more valuable than the wood they were driven into. In North America wood was so abundant that it was commonplace to burn an old structure down in order to salvage the nails from it. As a result, nail pullers were designed to preserve the condition of the nail for reuse, resulting in a slide hammer type design, still in specialty use today. With machine-made nails capped with distinct heads, a cat's paw shaped puller appeared; however, the distinctly rounded shape of the extracting head - which gave the tool its name - is so broad at its V-shaped notch opening that driving it into a board to capture a nailhead does significant damage to the wood. To minimize this, and to improve the performance of the tool all round, a Japanese-style very narrow, very pointed head has been adopted in recent years. In addition to greater penetration (with both better nailhead grip and less collateral damage), its design offers greater leverage created by a longer fulcrum end set closer to 90 degrees, and it typically features extractors on both ends. Alternative tools As old-growth wood has become much more valuable than the nails that hold it in place, there has been a move toward designs that take out nails with even less damage. New designs have been introduced, including the Nail Jack and Nail Hunter nail pullers, which take a pliers-like approach to the old cat's paw design. These tools contain their own built-in fulcrum, but can also be struck with a hammer to drive the tips of the tool into the wood with very little damage, allowing them to dig out nails that have been driven into wood at or below the surface. The Nail Hunter nail pulling design has very precise tips that actually come completely together at the ends, for removing finish nails. The pneumatic-powered Nail Kicker allows large numbers of old nails to be efficiently pulled.
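The leverage comparison in the opening paragraph can be made concrete with a simple moment-of-force estimate. The following sketch is illustrative only: the bar lengths and forces are invented example values, not measurements of any real tool, and friction and bending are ignored.

# Illustrative ideal-lever calculation for a nail puller (example values only).
def extraction_force(applied_force_n, handle_length_m, claw_length_m):
    """Force delivered to the nail by an ideal lever pivoting on its fulcrum."""
    return applied_force_n * handle_length_m / claw_length_m

# Hypothetical dimensions: 25 cm of bar beyond the fulcrum on the long end,
# 3 cm of claw between fulcrum and nail; 100 N of hand force.
print(extraction_force(100, 0.25, 0.03))    # about 833 N of pull on the nail
# The short-end tip sits even closer to its fulcrum, say 1.5 cm, so the same
# hand force yields roughly twice the pull, matching the leverage claim above.
print(extraction_force(100, 0.25, 0.015))   # about 1667 N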
See also Denailer — power tool used for removing large numbers of nails from used lumber References Reader's Digest, "Cat's Paw Nail Puller," http://www.rd.com/cats-paw-nail-puller/article12922.html Retrieved June 5, 2009. Nail Pullers with Patent Reference, Raymond P. Fredrich, AuthorHouse 2006 Mechanical hand tools Woodworking hand tools Nail (fastener)
Cat's paw (tool)
Physics
812
57,674,142
https://en.wikipedia.org/wiki/Photothermal%20time
Photothermal time (PTT) is the product of growing degree-days (GDD) and day length (hours) for each day. PTT = GDD × DL It can be used to quantify the environment, as well as the timing of developmental stages of plants. References Product certification Measurement Ecology
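As a minimal illustration of the formula above, the sketch below accumulates PTT over a few days. The base temperature and the daily values are invented example numbers, and the simple averaging method used for GDD is only one of several in use.

# Photothermal time: sum of (growing degree-days x day length) over a period.
def gdd(t_max_c, t_min_c, t_base_c=10.0):
    """Growing degree-days for one day, using the simple averaging method."""
    return max((t_max_c + t_min_c) / 2.0 - t_base_c, 0.0)

def photothermal_time(days):
    """days: iterable of (t_max_C, t_min_C, day_length_hours) tuples."""
    return sum(gdd(t_max, t_min) * day_length for t_max, t_min, day_length in days)

# Example: three hypothetical days.
sample = [(24.0, 12.0, 13.5), (26.0, 14.0, 13.6), (22.0, 11.0, 13.7)]
print(photothermal_time(sample))  # degree-day-hours accumulated over the period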
Photothermal time
Physics,Mathematics,Biology
63
2,007,385
https://en.wikipedia.org/wiki/BRLESC
The BRLESC I (Ballistic Research Laboratories Electronic Scientific Computer) was one of the last of the first-generation electronic computers. It was built by the United States Army's Ballistic Research Laboratory (BRL) at Aberdeen Proving Ground with assistance from the National Bureau of Standards (now the National Institute of Standards and Technology), and was designed to take over the computational workload of EDVAC and ORDVAC, which themselves were successors of ENIAC. It began operation in 1962. The Ballistic Research Laboratory became a part of the U.S. Army Research Laboratory in 1992. BRLESC was designed primarily for scientific and military tasks requiring high precision and high computational speed, such as ballistics problems, army logistical problems, and weapons systems evaluations. It contained 1727 vacuum tubes and 853 transistors and had a memory of 4096 72-bit words. BRLESC employed punched cards, magnetic tape, and a magnetic drum as input-output devices, which could be operated simultaneously. It was capable of five million (bitwise) operations per second. A fixed-point addition took 5 microseconds, a floating-point addition took 5 to 10 microseconds, a multiplication (fixed- or floating-point) took 25 microseconds, and a division (fixed- or floating-point) took 65 microseconds. (These times include the memory access time, which was 4-5 microseconds.) It was the fastest computer in the world until the CDC 6600 was introduced in 1964. BRLESC and its predecessor, ORDVAC, used their own unique notation for hexadecimal numbers. Instead of the sequence A B C D E F universally used today, the digits 10 to 15 were represented by the letters K S N J F L, corresponding to the teletypewriter characters on five-track paper tape. The mnemonic phrase "King Size Numbers Just For Laughs" was used to remember the letter sequence. BRLESC II, using integrated circuits, became operational in November 1967; it was designed to be 200 times faster than ORDVAC. References External links D.K. ARMY ORDNANCE "HISTORICAL MONOGRAPH, ELECTRONIC COMPUTERS WITHIN THE ORDNANCE CORPS" BRLESC (different source) History of Computing at BRL BRL 1964 report, see page 36 One-of-a-kind computers United States Army equipment Vacuum tube computers
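A small sketch of the notation described above: digit values 10 to 15 are written K, S, N, J, F, L instead of A to F. The code is illustrative only and is not drawn from any BRLESC software.

# BRLESC/ORDVAC hexadecimal notation: digits 10-15 are written K S N J F L.
_BRLESC_DIGITS = "0123456789KSNJFL"

def to_brlesc_hex(value):
    """Render a non-negative integer in BRLESC-style hexadecimal."""
    if value == 0:
        return "0"
    digits = []
    while value:
        value, remainder = divmod(value, 16)
        digits.append(_BRLESC_DIGITS[remainder])
    return "".join(reversed(digits))

def from_brlesc_hex(text):
    """Parse a BRLESC-style hexadecimal string back into an integer."""
    return sum(_BRLESC_DIGITS.index(ch) * 16 ** i
               for i, ch in enumerate(reversed(text)))

print(to_brlesc_hex(43981))     # 43981 is ABCD in ordinary hex, so "KSNJ" here
print(from_brlesc_hex("KSNJ"))  # 43981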
BRLESC
Technology
493
6,200,915
https://en.wikipedia.org/wiki/History%20of%20Sony
The history of Sony, a Japanese multinational conglomerate, dates back to 1946. Founding In September 1945, after the end of World War II, Masaru Ibuka started a radio repair shop in the bomb-damaged Shirokiya department store building in the Nihonbashi district of Tokyo. The next year, he was joined by his wartime research colleague, Akio Morita, and on 7 May 1946, they founded a company called Tokyo Tsushin Kogyo K.K. (Tokyo Telecommunications Engineering Corporation). The company produced Japan's first tape recorder, called the Type-G. In the early 1950s, Ibuka traveled to the United States, looking for a market for the company's tape recorder, and heard about Bell Labs' invention of the transistor. He convinced Bell to license the transistor technology to his Japanese company. Bell Labs agreed to do so while recommending that Ibuka produce hearing aids using the transistor, then a popular application for the technology, suggesting that it would be difficult to apply the technology to radio. While most Japanese companies were researching the transistor for its military applications, Ibuka and Morita looked to apply it to communications. Although the American companies Regency Electronics and Texas Instruments built the first transistor radio as a joint venture in 1954, it was Ibuka's company that made transistor radios commercially successful for the first time. As an innovator In August 1955, Tokyo Tsushin Kogyo released the Sony TR-55, Japan's first commercially produced transistor radio. They followed up in December of the same year by releasing the TR-72, a product that won favor both within Japan and in export markets, including Canada, Australia, the Netherlands and Germany. Featuring six transistors, push-pull output and greatly improved sound quality, the TR-72 continued to be a popular seller into the early sixties. In May 1956, the company released the TR-6, which featured an innovative slim design and sound quality capable of rivaling portable tube radios. The following year, 1957, Tokyo Tsushin Kogyo came out with the TR-63 model, then the smallest (112 × 71 × 32 mm) transistor radio in commercial production. It was a worldwide commercial success. The company marketed the radio as "pocketable", a Japanese-style English word the company came up with to highlight its portability and pocket size. The word soon featured in English dictionaries. University of Arizona professor Michael Brian Schiffer, PhD, says, "Sony was not first, but its transistor radio was the most successful. The TR-63 of 1957 cracked open the U.S. market and launched the new industry of consumer microelectronics." By the mid-1950s, American teens had begun buying portable transistor radios in huge numbers, helping to propel the fledgling industry from an estimated 100,000 units in 1955 to 5,000,000 units by the end of 1968. The popularity of transistor radios, which fostered privacy and individualism, changed forever the way people listen to radio and music. Sony also launched the world's first integrated circuit radio, the ICR-100, in 1967. Following its first success in the American consumer market, Tokyo Tsushin Kogyo changed its name to Sony in 1958, as people outside Japan struggled to pronounce the original name. Sony established Sony Corporation of America, the company's first subsidiary in America, in 1960. In the same year, Sony made another innovation by releasing the world's first non-projection all-transistor portable television, the Sony TV8-301.
In 1961, Sony launched the world's first compact transistor VTR, the PV-100. In 1968, Sony launched the legendary color television set, the Trinitron. The Trinitron was the reason Sony remained the world's largest TV manufacturer in terms of annual revenue until 2006. In 1969, Sony launched the Sony TC-50, a compact cassette recorder. NASA equipped every astronaut with the device from Apollo 7 onwards. Astronauts were required to use the recorder to log their missions, but they also listened to music by inserting and playing pre-recorded tapes. Masaru Ibuka also enjoyed listening to classical music on such recorders, prefiguring the birth of the Walkman. In October of the same year, Sony released a prototype of the world's first commercial videocassette recorder. This led to the official launch of the VP-1100 two years later. Sony received an Emmy in 1973 for developing the Trinitron, the first Emmy awarded for an electronics product. In 1975, Sony launched the Betamax, which fought and lost the videotape format war. The Walkman, the first portable stereo cassette player, was launched in 1979. Sony's launch of the world's first Compact Disc player, the CDP-101, alongside the Compact Disc (CD) itself, a new data storage format co-developed with Philips, is often considered a starting point of the Digital Revolution. In 1981, Sony introduced the 3.5-inch floppy disk format, which soon became the de facto standard. Sony also produced the first color video camera using a CCD, the XC-1. The Sony Mavica, released in 1981, was a prototype of the world's first commercial electronic still camera. Sony played a significant role in the tech industry in the second half of the 20th century alongside Hewlett-Packard and IBM. Steve Jobs, fascinated by the company's innovative products, culture and work environment, was a big fan of Sony and regarded the company as being in a league of its own, apart from other comparable competitors. In 1991, Sony released the first commercial lithium-ion battery jointly with Asahi Kasei and remained the leader in the rechargeable battery industry until a massive defective-battery scandal occurred in 2006. Sony introduced the Memory Stick, a flash memory storage format, in 1998, a year before the announcement of the SD card. The format is considered yet another failed attempt by the company to have a proprietary format universally adopted; the list of such attempts also includes Betamax, the MiniDisc (MD) and the Universal Media Disc. In June 2006, Sony released the Blu-ray Disc format, a high-definition optical disc format developed by the company in association with the Blu-ray Disc Association, of which Sony is a member alongside Philips, Panasonic and LG Electronics. Beyond an electronics powerhouse Sony played a major role in the development of Japan as a powerful exporter during the late 20th century. From the late 1980s to the early 2000s, it aggressively expanded into a variety of businesses, from film (Sony Pictures Entertainment) and insurance (Sony Life) to banking (Sony Bank), internet service provision (So-net) and gaming (Sony Interactive Entertainment). It also beefed up the music business it had operated in Japan, CBS/Sony Records, and turned it into Sony Music Entertainment, a multinational music label group. Part of its motivation for expansion was the pursuit of "convergence," linking film, music, and digital electronics via the Internet.
However, this strategy ultimately failed, merely damaging Sony's balance sheet and making the company's business structure highly complex. Crisis and challenges Howard Stringer, the first non-Japanese CEO of Sony, helped to reinvigorate the company's struggling media businesses, encouraging blockbusters such as Spider-Man while cutting 9,000 jobs. Despite modest successes, the company continued to struggle from the mid-2000s and started to lose its leading position in the tech industry, becoming known for stagnancy and a fading brand name. Sony's headquarters moved from Shinagawa, Tokyo to Minato, Tokyo around the end of 2006. References Sony
History of Sony
Technology
1,632
64,540,037
https://en.wikipedia.org/wiki/Mirror%20symmetry%20conjecture
In mathematics, mirror symmetry is a conjectural relationship between certain Calabi–Yau manifolds and a constructed "mirror manifold". The conjecture allows one to relate the number of rational curves on a Calabi-Yau manifold (encoded as Gromov–Witten invariants) to integrals from a family of varieties (encoded as period integrals on a variation of Hodge structures). In short, this means there is a relation between the number of genus $g$ algebraic curves of degree $d$ on a Calabi-Yau variety $X$ and integrals on a dual variety $\check{X}$. These relations were originally discovered by Candelas, de la Ossa, Green, and Parkes in a paper studying a generic quintic threefold in $\mathbb{P}^4$ as the variety $X$ and a construction from the quintic Dwork family giving $\check{X}$. Shortly after, Sheldon Katz wrote a summary paper outlining part of their construction and conjecturing what the rigorous mathematical interpretation could be. Constructing the mirror of a quintic threefold Originally, the construction of mirror manifolds was discovered through an ad-hoc procedure. Essentially, to a generic quintic threefold $X \subset \mathbb{P}^4$ there should be associated a one-parameter family of Calabi-Yau manifolds $X_\psi$ which has multiple singularities. After blowing up these singularities, they are resolved and a new Calabi-Yau manifold $\check{X}$ is obtained, which has a flipped Hodge diamond. In particular, there are isomorphisms $H^{1,1}(\check{X}) \cong H^{2,1}(X)$ and $H^{2,1}(\check{X}) \cong H^{1,1}(X)$; most importantly, the string theory (the A-model of $X$) for states in $H^{1,1}(X)$ is interchanged with the string theory (the B-model of $\check{X}$) having states in $H^{2,1}(\check{X})$. The string theory in the A-model only depends upon the Kähler or symplectic structure on $X$, while the B-model only depends upon the complex structure on $\check{X}$. Here we outline the original construction of mirror manifolds, and consider the string-theoretic background and conjecture with the mirror manifolds in a later section of this article. Complex moduli Recall that a generic quintic threefold $X$ in $\mathbb{P}^4$ is defined by a homogeneous polynomial of degree $5$. This polynomial is equivalently described as a global section of the line bundle $\mathcal{O}_{\mathbb{P}^4}(5)$. Notice the vector space of global sections has dimension $\dim \Gamma(\mathbb{P}^4, \mathcal{O}_{\mathbb{P}^4}(5)) = 126$, but there are two equivalences of these polynomials. First, polynomials related by scaling by the algebraic torus $\mathbb{G}_m$ (non-zero scalars of the base field) give equivalent spaces. Second, projective equivalence is given by the automorphism group of $\mathbb{P}^4$, namely $\mathrm{PGL}(5)$, which is $24$-dimensional. This gives a $101$-dimensional parameter space $U_{\mathrm{smooth}}$, since $126 - 24 - 1 = 101$, which can be constructed using Geometric invariant theory. The set $U_{\mathrm{smooth}}$ corresponds to the equivalence classes of polynomials which define smooth Calabi-Yau quintic threefolds in $\mathbb{P}^4$, giving a moduli space of Calabi-Yau quintics. Now, using Serre duality and the fact that each Calabi-Yau manifold has trivial canonical bundle $\omega_X$, the space of deformations has an isomorphism $H^1(X, T_X) \cong H^1(X, \Omega_X^2)$ with the $(2,1)$-part of the Hodge structure on $H^3(X)$. Using the Lefschetz hyperplane theorem, the only non-trivial cohomology group is $H^3(X)$, since the others are isomorphic to the cohomology of $\mathbb{P}^4$. Using the Euler characteristic and the Euler class, which is the top Chern class, the dimension of this group is $204$. This is because $\chi(X) = -200$ and $\chi(X) = 2h^{0,0}(X) + 2h^{1,1}(X) - \dim H^3(X) = 4 - \dim H^3(X)$, hence $\dim H^3(X) = 204$. Using the Hodge structure we can find the dimensions of each of the components. First, because $X$ is Calabi-Yau, $h^{3,0}(X) = h^{0,3}(X) = 1$, so the remaining classes are split equally, giving the Hodge numbers $h^{2,1}(X) = h^{1,2}(X) = 101$, hence giving the dimension of the moduli space of Calabi-Yau quintics. Because of the Bogomolev-Tian-Todorov theorem, all such deformations are unobstructed, so the smooth space $U_{\mathrm{smooth}}$ is in fact the moduli space of quintic threefolds.
The whole point of this construction is to show how the complex parameters in this moduli space are converted into Kähler parameters of the mirror manifold. Mirror manifold There is a distinguished family of Calabi-Yau manifolds called the Dwork family. It is the projective family over the complex plane . Now, notice there is only a single dimension of complex deformations of this family, coming from having varying values. This is important because the Hodge diamond of the mirror manifold has The family has symmetry group acting by Notice the projectivity of is the reason for the condition The associated quotient variety has a crepant resolution given by blowing up the singularities giving a new Calabi-Yau manifold with parameters in . This is the mirror manifold and has where each Hodge number is . Ideas from string theory In string theory there is a class of models called non-linear sigma models which study families of maps where is a genus algebraic curve and is Calabi-Yau. These curves are called world-sheets and represent the birth and death of a particle as a closed string. Since a string could split over time into two strings, or more, and eventually these strings will come together and collapse at the end of the lifetime of the particle, an algebraic curve mathematically represents this string lifetime. For simplicity, only genus 0 curves were considered originally, and many of the results popularized in mathematics focused only on this case. Also, in physics terminology, these theories are heterotic string theories because they have supersymmetry that comes in a pair, so really there are four supersymmetries. This is important because it implies there is a pair of operators acting on the Hilbert space of states, but only defined up to a sign. This ambiguity is what originally suggested to physicists there should exist a pair of Calabi-Yau manifolds which have dual string theories, one's that exchange this ambiguity between one another. The space has a complex structure, which is an integrable almost-complex structure , and because it is a Kähler manifold it necessarily has a symplectic structure called the Kähler form which can be complexified to a complexified Kähler form which is a closed -form, hence its cohomology class is in The main idea behind the Mirror Symmetry conjectures is to study the deformations, or moduli, of the complex structure and the complexified symplectic structure in a way that makes these two dual to each other. In particular, from a physics perspective, the super conformal field theory of a Calabi-Yau manifold should be equivalent to the dual super conformal field theory of the mirror manifold . Here conformal means conformal equivalence which is the same as an equivalence class of complex structures on the curve . There are two variants of the non-linear sigma models called the A-model and the B-model which consider the pairs and and their moduli. A-model Correlation functions from String theory Given a Calabi-Yau manifold with complexified Kähler class the nonlinear sigma model of the string theory should contain the three generations of particles, plus the electromagnetic, weak, and strong forces. In order to understand how these forces interact, a three-point function called the Yukawa coupling is introduced which acts as the correlation function for states in . Note this space is the eigenspace of an operator on the Hilbert space of states for the string theory. 
This three point function is "computed" as using Feynman path-integral techniques where the are the naive number of rational curves with homology class , and . Defining these instanton numbers is the subject matter of Gromov–Witten theory. Note that in the definition of this correlation function, it only depends on the Kahler class. This inspired some mathematicians to study hypothetical moduli spaces of Kahler structures on a manifold. Mathematical interpretation of A-model correlation functions In the A-model the corresponding moduli space are the moduli of pseudoholomorphic curves or the Kontsevich moduli spaces These moduli spaces can be equipped with a virtual fundamental class or which is represented as the vanishing locus of a section of a sheaf called the Obstruction sheaf over the moduli space. This section comes from the differential equation which can be viewed as a perturbation of the map . It can also be viewed as the Poincaré dual of the Euler class of if it is a Vector bundle. With the original construction, the A-model considered was on a generic quintic threefold in . B-model Correlation functions from String theory For the same Calabi-Yau manifold in the A-model subsection, there is a dual superconformal field theory which has states in the eigenspace of the operator . Its three-point correlation function is defined as where is a holomorphic 3-form on and for an infinitesimal deformation (since is the tangent space of the moduli space of Calabi-Yau manifolds containing , by the Kodaira–Spencer map and the Bogomolev-Tian-Todorov theorem) there is the Gauss-Manin connection taking a class to a class, hence can be integrated on . Note that this correlation function only depends on the complex structure of . Another formulation of Gauss-Manin connection The action of the cohomology classes on the can also be understood as a cohomological variant of the interior product. Locally, the class corresponds to a Cech cocycle for some nice enough cover giving a section . Then, the insertion product gives an element which can be glued back into an element of . This is because on the overlaps giving hence it defines a 1-cocycle. Repeating this process gives a 3-cocycle which is equal to . This is because locally the Gauss-Manin connection acts as the interior product. Mathematical interpretation of B-model correlation functions Mathematically, the B-model is a variation of hodge structures which was originally given by the construction from the Dwork family. Mirror conjecture Relating these two models of string theory by resolving the ambiguity of sign for the operators led physicists to the following conjecture: for a Calabi-Yau manifold there should exist a mirror Calabi-Yau manifold such that there exists a mirror isomorphism giving the compatibility of the associated A-model and B-model. This means given and such that under the mirror map, there is the equality of correlation functions This is significant because it relates the number of degree genus curves on a quintic threefold in (so ) to integrals in a variation of Hodge structures. Moreover, these integrals are actually computable! 
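For the quintic threefold the predicted equality of correlation functions takes a well-known concrete form. The display below is the standard statement of that prediction in commonly used conventions; it is included as an illustration and is not transcribed from the text above.

% A-model Yukawa coupling of the quintic threefold X, written in terms of the
% complexified Kahler parameter t with q = e^{2 pi i t}:
\langle H, H, H \rangle = 5 + \sum_{d \ge 1} n_d \, d^{3} \, \frac{q^{d}}{1 - q^{d}}, \qquad q = e^{2\pi i t},
% where n_d is the genus-zero instanton number, the naive count of rational
% curves of degree d on X.  Mirror symmetry equates this series with a B-model
% period computation on the mirror family, predicting n_1 = 2875,
% n_2 = 609250, n_3 = 317206375, and so on.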
See also Cotangent complex Homotopy associative algebra Kuranishi structure Mirror symmetry (string theory) Moduli of algebraic curves Kontsevich moduli space External links https://ocw.mit.edu/courses/mathematics/18-969-topics-in-geometry-mirror-symmetry-spring-2009/lecture-notes/ References Books/Notes Mirror Symmetry - Clay Mathematics Institute ebook Mirror Symmetry and Algebraic Geometry - Cox, Katz On the work of Givental relative to mirror symmetry First proofs Equivariant Gromov - Witten Invariants - Givental's original proof for projective complete intersections The mirror formula for quintic threefolds Rational curves on hypersurfaces (after A. Givental) - an explanation of Givental's proof Mirror Principle I - Lian, Liu, Yau's proof closing gaps in Givental's proof. His proof required the undeveloped theory of Floer homology Dual Polyhedra and Mirror Symmetry for Calabi-Yau Hypersurfaces in Toric Varieties - first general construction of mirror varieties for Calabi-Yau's in toric varieties Mirror symmetry for abelian varieties Derived geometry in Mirror symmetry Notes on supersymmetric and holomorphic field theories in dimensions 2 and 4 Research Mirror symmetry: from categories to curve counts - relation between homological mirror symmetry and classical mirror symmetry Intrinsic mirror symmetry and punctured Gromov-Witten invariants Homological mirror symmetry Categorical Mirror Symmetry: The Elliptic Curve An Introduction to Homological Mirror Symmetry and the Case of Elliptic Curves Homological mirror symmetry for the genus two curve Homological mirror symmetry for the quintic 3-fold Homological Mirror Symmetry for Calabi-Yau hypersurfaces in projective space Speculations on homological mirror symmetry for hypersurfaces in Mathematical physics Conjectures String theory Algebraic geometry
Mirror symmetry conjecture
Physics,Astronomy,Mathematics
2,481
6,165,182
https://en.wikipedia.org/wiki/Lyman-alpha%20blob
In astronomy, a Lyman-alpha blob (LAB) is a huge concentration of gas emitting the Lyman-alpha emission line. LABs are some of the largest known individual objects in the Universe. Some of these gaseous structures are more than 400,000 light-years across. So far they have only been found in the high-redshift universe because of the ultraviolet nature of the Lyman-alpha emission line. Since Earth's atmosphere is very effective at filtering out UV photons, the Lyman-alpha photons must be redshifted in order to be transmitted through the atmosphere. The most famous Lyman-alpha blobs were discovered in 2000 by Steidel et al. Matsuda et al., using the Subaru Telescope of the National Astronomical Observatory of Japan, extended the search for LABs and found over 30 new LABs in the original field of Steidel et al., although they were all smaller than the originals. These LABs form a structure which is more than 200 million light-years in extent. It is currently unknown whether LABs trace overdensities of galaxies in the high-redshift universe (as high-redshift radio galaxies, which also have extended Lyman-alpha halos, do, for example), which mechanism produces the Lyman-alpha emission line, or how the LABs are connected to the surrounding galaxies. Lyman-alpha blobs may hold valuable clues to determine how galaxies are formed. The most massive Lyman-alpha blobs have been discovered by Tristan Friedrich et al. (2021), Steidel et al. (2000), Francis et al. (2001), Matsuda et al. (2004), Dey et al. (2005), Nilsson et al. (2006), and Smith & Jarvis et al. (2007). Examples Himiko LAB-1 EQ J221734.0+001701, the SSA22 Protocluster Ton 618, a hyperluminous quasar powering a Lyman-alpha blob; it also possesses one of the most massive black holes known. See also Damped Lyman-alpha system Galaxy filament Green bean galaxy Lyman-alpha forest Lyman-alpha emitter Lyman break galaxy Newfound Blob (disambiguation) Notes Astronomical spectroscopy Intergalactic media Large-scale structure of the cosmos Articles containing video clips
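The redshift requirement mentioned above can be illustrated with a one-line calculation. The atmospheric cutoff used below, roughly 310 nm, is an assumed round number for where ground-based ultraviolet transmission becomes practical, not a precise physical constant.

# Rest-frame Lyman-alpha lies at 121.567 nm; it is observed at (1 + z) times that.
LYMAN_ALPHA_NM = 121.567

def observed_wavelength_nm(z):
    """Observed wavelength of Lyman-alpha emitted at redshift z."""
    return LYMAN_ALPHA_NM * (1.0 + z)

def min_redshift_for_cutoff(cutoff_nm=310.0):
    """Smallest redshift shifting Lyman-alpha past an assumed atmospheric cutoff."""
    return cutoff_nm / LYMAN_ALPHA_NM - 1.0

print(observed_wavelength_nm(3.1))   # about 498 nm, well into the visible band
print(min_redshift_for_cutoff())     # about 1.55 for the assumed 310 nm cutoff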
Lyman-alpha blob
Physics,Chemistry,Astronomy
486
33,520,674
https://en.wikipedia.org/wiki/Software-defined%20networking
Software-defined networking (SDN) is an approach to network management that uses abstraction to enable dynamic and programmatically efficient network configuration to create grouping and segmentation while improving network performance and monitoring in a manner more akin to cloud computing than to traditional network management. SDN is meant to improve the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). The control plane consists of one or more controllers, which are considered the brains of the SDN network, where the whole intelligence is incorporated. However, centralization has certain drawbacks related to security, scalability and elasticity. SDN was commonly associated with the OpenFlow protocol for remote communication with network plane elements to determine the path of network packets across network switches since OpenFlow's emergence in 2011. However, since 2012, proprietary systems have also used the term. These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform. SD-WAN applies similar technology to a wide area network (WAN). History The history of SDN principles can be traced back to the separation of the control and data plane first used in public switched telephone networks. This provided a manner of simplifying provisioning and management years before the architecture was used in data networks. The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and data forwarding functions in a proposed interface standard published in 2004 named Forwarding and Control Element Separation (ForCES). The ForCES Working Group also proposed a companion SoftRouter architecture. Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP services protocol and a path computation element (PCE)-based architecture. These early attempts failed to gain traction. One reason is that many in the Internet community viewed separating control from data to be risky, especially given the potential for failure in the control plane. Another reason is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition. The use of open-source software in these separated architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow, and an API for OpenFlow was first created in 2008. In that same year, NOX, an operating system for networks, was created. SDN research included emulators such as vSDNEmul, EstiNet, and Mininet. Work on OpenFlow continued at Stanford, including with the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. In academic settings, there were several research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as those based on Quanta Computer whiteboxes starting in about 2009. Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, codeveloped with NTT and Google. A notable deployment was Google's B4 in 2012. Later, Google announced the first OpenFlow/Onix deployments in is datacenters. 
Another large deployment exists at China Mobile. The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow. At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest-path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device and removing manual provisioning from service delivery. Concept SDN architectures decouple network control (control plane) and forwarding (data plane) functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. The OpenFlow protocol can be used in SDN technologies. The SDN architecture is: Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions. Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs. Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch. Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software. Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols. New network architecture The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture may be ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. Some of the key computing trends driving the need for a new network paradigm include: Changing traffic patterns Within the enterprise data center, traffic patterns have changed significantly. In contrast to client-server applications where the bulk of the communication occurs between one client and one server, today's applications access different databases and servers, creating a flurry of east-west machine-to-machine traffic before returning data to the end user device in the classic north-south traffic pattern. At the same time, users are changing network traffic patterns as they push for access to corporate content and applications from any type of device, connecting from anywhere, at any time. Finally, many enterprise data center managers are deploying a utility computing model, which may include a private cloud, public cloud, or some mix of both, resulting in additional traffic across the wide-area network. The consumerization of IT Users are increasingly employing mobile personal devices such as smartphones, tablets, and notebooks to access the corporate network. 
IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates. The rise of cloud services Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Many enterprise businesses want the agility to access applications, infrastructure and other IT resources on demand and discretely. IT planning for cloud services must be performed in an environment of increased security, compliance and auditing requirements, along with business reorganizations, consolidations and mergers that can rapidly change assumptions. Providing self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage and network resources, ideally from a common viewpoint and with a common suite of tools. Big data means more bandwidth Handling today's big data requires massive parallel processing on thousands of servers, all of which need direct connections to each other. The rise of these large data sets is fueling a constant demand for additional network capacity in the data center. Operators of hyperscale data center networks face the daunting task of scaling the network to previously unimaginable size, maintaining any-to-any connectivity within a limited budget. Energy use in large data centers As Internet of things, cloud computing and SaaS emerged, the need for larger data centers has increased the energy consumption of those facilities. Many researchers have improved SDN's energy efficiency applying existing routing techniques to dynamically adjust the network data plane to save energy. Also techniques to improve control plane energy efficiency are being researched. Architectural components The following list defines and explains the SDN architectural components: SDN application SDN applications are programs that communicate their network requirements and desired network behavior to the SDN controller via a northbound interface (NBI). In addition, they may consume an abstracted view of the network for their internal decision-making purposes. An SDN Application consists of SDN application logic and one or more NBI drivers. SDN applications may themselves expose another layer of abstracted network control, thus offering one or more higher-level NBIs through respective NBI agents. SDN Controller The SDN Controller is a logically centralized entity in charge of (i) translating the requirements from the SDN Application layer down to the SDN Datapaths and (ii) providing the SDN Applications with an abstract view of the network (which may include statistics and events). An SDN Controller consists of one or more NBI Agents, the SDN Control Logic, and the Control to Data-Plane Interface (CDPI) driver. Definition as a logically centralized entity neither prescribes nor precludes implementation details such as the federation of multiple controllers, the hierarchical connection of controllers, communication interfaces between controllers, nor virtualization or slicing of network resources. SDN Datapath The SDN Datapath is a logical network device that exposes visibility and uncontested control over its advertised forwarding and data processing capabilities. The logical representation may encompass all or a subset of the physical substrate resources. An SDN Datapath comprises a CDPI agent and a set of one or more traffic forwarding engines and zero or more traffic processing functions. 
These engines and functions may include simple forwarding between the datapath's external interfaces or internal traffic processing or termination functions. One or more SDN Datapaths may be contained in a single (physical) network element—an integrated physical combination of communications resources, managed as a unit. An SDN Datapath may also be defined across multiple physical network elements. This logical definition neither prescribes nor precludes implementation details such as the logical to physical mapping, management of shared physical resources, virtualization or slicing of the SDN Datapath, interoperability with non-SDN networking, nor the data processing functionality, which can include OSI layer 4-7 functions. SDN Control to Data-Plane Interface (CDPI) The SDN CDPI is the interface defined between an SDN Controller and an SDN Datapath, which provides at least (i) programmatic control of all forwarding operations, (ii) capabilities advertisement, (iii) statistics reporting, and (iv) event notification. One value of SDN lies in the expectation that the CDPI is implemented in an open, vendor-neutral and interoperable way. SDN Northbound Interfaces (NBI) SDN NBIs are interfaces between SDN Applications and SDN Controllers and typically provide abstract network views and enable direct expression of network behavior and requirements. This may occur at any level of abstraction (latitude) and across different sets of functionality (longitude). One value of SDN lies in the expectation that these interfaces are implemented in an open, vendor-neutral and interoperable way. SDN Control Plane Centralized - Hierarchical - Distributed The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions, distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications. Controller Placement A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, especially in the context of large networks. Other objectives that have been considered involve control path reliability, fault tolerance, and application requirements. SDN Data Plane In SDN, the data plane is responsible for processing data-carrying packets using a set of rules specified by the control plane. The data plane may be implemented in physical hardware switches or in software implementations, such as Open vSwitch. The memory capacity of hardware switches may limit the number of rules that can be stored where as software implementations may have higher capacity. 
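As a toy illustration of the rule-based data plane described above, the sketch below models a flow table as an ordered list of match-action entries. It is a simplified teaching model under assumed names, not the OpenFlow wire protocol or any real switch implementation.

# Minimal match-action flow table model (illustrative only).
class FlowTable:
    def __init__(self):
        self.entries = []  # (match_dict, action) pairs in priority order

    def install(self, match, action):
        """Rule installation, as a controller would do over the CDPI."""
        self.entries.append((match, action))

    def forward(self, packet):
        """Return the action of the first entry whose fields all match the packet."""
        for match, action in self.entries:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "send_to_controller"  # table miss: reactive mode asks the controller

table = FlowTable()
table.install({"dst_ip": "10.0.0.2"}, "output:port2")
print(table.forward({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}))  # output:port2
print(table.forward({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9"}))  # send_to_controller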
The location of the SDN data plane and agent can be used to classify SDN implementations: Hardware Switch-based SDNs: This approach implements the data plane processing inside a physical device. OpenFlow switches may use TCAM tables to route packet sequences (flows). These switches may use an ASIC for their implementation. Software Switch-Based SDNs: Some physical switches may implement SDN support using software on the device, such as Open vSwitch, to populate flow tables and to act as the SDN agent when communicating with the controller. Hypervisors may likewise use software implementations to support SDN protocols in the virtual switches used to support their virtual machines. Host-Based SDNs: Rather than deploying the data plane and SDN agent in network infrastructure, host-based SDNs deploy the SDN agent inside the operating system of the communicating endpoints. Such implementations can provide additional context about the application, user, and activity associated with network flows. To achieve the same traffic engineering capabilities as switch-based SDNs, host-based SDNs may require the use of carefully designed VLAN and spanning tree assignments. Flow table entries may be populated in a proactive, reactive, or hybrid fashion. In the proactive mode, the controller populates flow table entries in advance for all traffic matches possible for the switch. This mode can be compared with typical routing table entries today, where all static entries are installed ahead of time. Following this, no request is sent to the controller, since all incoming flows will find a matching entry. A major advantage of proactive mode is that all packets are forwarded at line rate (assuming all flow table entries fit in TCAM) and no delay is added. In the reactive mode, entries are populated on demand: if a packet arrives without a corresponding match rule in the flow table, the SDN agent sends a request to the controller for further instructions. The controller examines the SDN agent's request and provides instructions, installing a rule in the flow table for the corresponding packet if necessary. The hybrid mode uses the low-latency proactive forwarding mode for a portion of traffic while relying on the flexibility of reactive mode processing for the remaining traffic. Applications SDMN Software-defined mobile networking (SDMN) is an approach to the design of mobile networks where all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and radio access network. It is proposed as an extension of the SDN paradigm to incorporate mobile network specific functionalities. Since 3GPP Rel.14, a Control User Plane Separation was introduced in the Mobile Core Network architectures with the PFCP protocol. SD-WAN An SD-WAN is a WAN managed using the principles of software-defined networking. The main driver of SD-WAN is to lower WAN costs using more affordable and commercially available leased lines, as an alternative to or partial replacement of more expensive MPLS lines. Control and management is administered separately from the hardware, with central controllers allowing for easier configuration and administration. SD-LAN An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management and quality of service.
SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller. Security using the SDN paradigm SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network, and its capacity to reprogram the data plane at any time. While the security of the SDN architecture itself remains an open question that has already been studied several times in the research community, the following paragraphs focus only on the security applications made possible or revisited using SDN. Several research works on SDN have already investigated security applications built upon the SDN controller, with different aims in mind. Distributed Denial of Service (DDoS) detection and mitigation, as well as botnet and worm propagation, are some concrete use-cases of such applications: basically, the idea consists in periodically collecting network statistics from the forwarding plane of the network in a standardized manner (e.g. using OpenFlow), and then applying classification algorithms to those statistics in order to detect any network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it. Another kind of security application leverages the SDN controller by implementing moving target defense (MTD) algorithms. MTD algorithms are typically used to make any attack on a given system or network more difficult than usual by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not a trivial task since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties should be hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the mapping between virtual and real IPs then performed by the controller. Another application can simulate some fake opened/closed/filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g. scanning) performed by an attacker. Additional value regarding security in SDN-enabled networks can also be gained using FlowVisor and FlowChecker. The former tries to use a single hardware forwarding plane to share multiple separated logical networks. Following this approach, the same hardware resources can be used for production and development purposes as well as for separating monitoring, configuration and internet traffic, where each scenario can have its own logical topology, which is called a slice. In conjunction with this approach, FlowChecker performs validation of new OpenFlow rules that are deployed by users using their own slice. SDN controller applications are mostly deployed in large-scale scenarios, which requires comprehensive checks for possible programming errors. A system to do this, called NICE, was described in 2012. Introducing an overarching security architecture requires a comprehensive and protracted approach to SDN. Since it was introduced, designers have been looking at possible ways to secure SDN that do not compromise scalability. One proposed architecture is SN-SECA (SDN+NFV Security Architecture).
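The virtual-IP idea described above can be sketched as follows. This is a hypothetical toy mapping a controller might maintain and push down as translation rules; it is not code from any real SDN controller or moving target defense product.

# Toy moving-target-defense sketch: periodically re-map virtual IPs to real hosts.
import random

REAL_HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]        # example addresses
VIRTUAL_POOL = [f"172.16.0.{i}" for i in range(10, 250)]    # example virtual range

def reassign_virtual_ips(real_hosts, virtual_pool, rng=random):
    """Return a fresh virtual-to-real mapping to be installed as flow rules."""
    chosen = rng.sample(virtual_pool, len(real_hosts))
    return dict(zip(chosen, real_hosts))

mapping = reassign_virtual_ips(REAL_HOSTS, VIRTUAL_POOL)
for virtual_ip, real_ip in mapping.items():
    # In a real deployment the controller would install translation rules here.
    print(virtual_ip, "->", real_ip)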
Group Data Delivery Using SDN Distributed applications that run across datacenters usually replicate data for the purposes of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD). SDN switches can be used for RGDD via installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has provided support for group tables since version 1.1, which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion/load status to improve performance. For example, MCTCP is a scheme for delivery to many nodes inside datacenters that relies on the regular and structured topologies of datacenter networks, while DCCast and QuickCast are approaches for fast and efficient data and content replication across datacenters over private WANs. Relationship to NFV Network Function Virtualization, or NFV for short, is a concept that complements SDN; however, NFV is not dependent on SDN or SDN concepts. NFV separates software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run software versions of network services that previously were hardware-based. These software-based services that run in an NFV environment are called virtual network functions (VNFs). SDN-NFV hybrid programs have been proposed to provide highly efficient, elastic and scalable capabilities; NFV aims at accelerating service innovation and provisioning using standard IT virtualization technologies. SDN provides agility in controlling generic forwarding devices, such as routers and switches, by using SDN controllers. NFV, on the other hand, provides agility for network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of VNFs, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems. Relationship to DPI Deep packet inspection (DPI) provides the network with application-awareness, while SDN provides applications with network-awareness. Although SDN will radically change generic network architectures, it must work with traditional network architectures to offer high interoperability. The new SDN-based network architecture should take into account all the capabilities that are currently provided by separate devices or software other than the main forwarding devices (routers and switches), such as DPI and security appliances. Quality of Experience (QoE) estimation using SDN When using an SDN-based model for transmitting multimedia traffic, an important aspect to take into account is QoE estimation.
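The forwarding trees for group data delivery described earlier in this passage can be sketched as follows. The topology, switch names and rule format below are invented for illustration: the sketch computes a shortest-path tree from a source switch with breadth-first search and derives, per switch, the output ports toward the receivers, which is conceptually what an OpenFlow group entry of type "all" would replicate a packet to.

```python
from collections import deque, defaultdict

# Hypothetical switch-level topology: switch -> {neighbor: output_port}
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s4": 2},
    "s3": {"s1": 1, "s5": 2},
    "s4": {"s2": 1},
    "s5": {"s3": 1},
}

def forwarding_tree(source, receivers):
    """BFS shortest-path tree from source; returns switch -> set of out-ports."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in TOPOLOGY[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    out_ports = defaultdict(set)
    for receiver in receivers:
        node = receiver
        while parent[node] is not None:          # walk back toward the source
            prev = parent[node]
            out_ports[prev].add(TOPOLOGY[prev][node])
            node = prev
    return dict(out_ports)

if __name__ == "__main__":
    tree = forwarding_tree("s1", ["s4", "s5"])
    for switch, ports in sorted(tree.items()):
        # A controller would install one group entry per switch replicating
        # the packet to all of these ports (OpenFlow group type "all").
        print(f"{switch}: replicate to ports {sorted(ports)}")
```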
To estimate the QoE, the system must first be able to classify the traffic; it is then recommended that the system be able to resolve critical problems on its own by analyzing that traffic. See also Active networking Frenetic (programming language) IEEE 802.1aq Intel Data Plane Development Kit (DPDK) List of SDN controller software Network functions virtualization Network virtualization ONOS OpenDaylight Project SD-WAN Software-defined data center Software-defined mobile network Software-defined protection Virtual Distributed Ethernet References Configuration management Network architecture
Software-defined networking
Engineering
4,720
1,668,287
https://en.wikipedia.org/wiki/Joseph%20LeConte
Joseph Le Conte (alternative spelling: Joseph LeConte) (February 26, 1823 – July 6, 1901) was a physician, geologist, professor at the University of California, Berkeley, early California conservationist, and eugenicist. Early life Of Huguenot descent, he was born in Liberty County, Georgia, to Louis Le Conte, patriarch of the noted LeConte family, and Ann Quarterman. He was educated at Franklin College in Athens, Georgia (now the Franklin College of Arts and Sciences at the University of Georgia), where he was a member of the Phi Kappa Literary Society. After graduation in 1841, he studied medicine and received his degree at the New York College of Physicians and Surgeons in 1845. (In 1844 he travelled with his cousin John Lawrence LeConte for over one thousand miles along the Upper Mississippi River in a birchbark canoe.) After practising for three or four years in Macon, Georgia, he entered Harvard University and studied natural history under Louis Agassiz. An excursion made with Professors J. Hall and Agassiz to the Helderberg mountains of New York developed a keen interest in geology. Career After graduating from Harvard, Le Conte in 1851 accompanied Agassiz on an expedition to study the Florida Reef. On his return he became professor of natural science at Oglethorpe University, which was located in Midway, Georgia, at the time, and from December 1852 until 1856 professor of natural history and geology at Franklin College (the sole college at the University of Georgia at that time). From 1857 to 1869 he was a professor of chemistry and geology at South Carolina College, which is now the University of South Carolina. On January 14, 1846, he married Caroline Nisbet, a niece of Eugenius A. Nisbet. The Le Conte(s) had four children grow to adulthood: Emma Florence Le Conte, Sarah Elizabeth Le Conte, Caroline Eaton Le Conte, and Joseph Nisbet Le Conte. During the Civil War Le Conte continued to teach in South Carolina. He also produced medicine and was involved in research and development operations of the Confederate Nitre and Mining Bureau, associated with the Confederate Secret Service. In his autobiography he wrote that he found Reconstruction intolerable. He referred to "a carpet-bag governor, scalawag officials, and a negro legislature controlled by rascals" and stated that the "sudden enfranchisement of the negro without qualification was the greatest political crime ever perpetrated by any people". Discouraged by unsettled postwar conditions at the University of South Carolina, in 1868 he accepted an offer of a professorship at the newly established University of California. In September 1869, he moved west to Berkeley, California. His older brother John had come to California in April 1869, also to join the faculty of the new university as a professor of physics. Joseph was appointed the first professor of geology and natural history and botany at the university, a post which he held until his death. He was elected as a member of the American Philosophical Society in 1873. He published a series of papers on monocular and binocular vision, and also on psychology. His chief contributions, however, related to geology. He described the fissure-eruptions in western America, discoursed on earth-crust movements and their causes and on the great features of the Earth's surface. As separate works he published Elements of Geology (1878, 5th edition 1889); Religion and Science (1874); and Evolution and its Relation to Religious Thought (1888). 
This last work anticipates in structure and argument Teilhard de Chardin's "Phenomenon of Man" (1955). LeConte endorsed theistic evolution. Legacy In 1874, he was nominated to the National Academy of Sciences. He was president of the American Association for the Advancement of Science in 1892, and of the Geological Society of America in 1896. Le Conte is also noted for his exploration and preservation of the Sierra Nevada of California, United States. He first visited Yosemite Valley in 1870, where he became friends with John Muir and started exploring the Sierra. He became concerned that resource exploitation (such as sheepherding) would ruin the Sierra, so he co-founded the Sierra Club with Muir and others in 1892. He was a director of the Sierra Club from 1892 through 1898. His son, Joseph Nisbet LeConte, was also a noted professor and Sierra Club member. He died of a heart attack in Yosemite Valley, California, on July 6, 1901, right before the Sierra Club's first High Trip. The Sierra Club built the LeConte Memorial Lodge in his honor in 1904. The Le Conte Glacier, Le Conte Canyon, Le Conte Divide, Le Conte Falls, Le Conte Mountain and Mount Le Conte were named after him. LeConte Hall, which houses the Department of History at the University of Georgia, was named for him and his brother. LeConte College, which houses the Department of Mathematics and Statistics near the Horseshoe at the University of South Carolina, Le Conte Middle School in Hollywood, and Le Conte Avenue in Berkeley also honor the two brothers. LeConte, along with other founders of the Sierra Club, was an advocate of white supremacy and a supporter of the eugenics movement in the United States. The elementary school at 2241 Russell Street in Berkeley was named for Joseph LeConte from 1892 until 2018, when it was renamed due to concerns regarding his views on race. Another building at UC Berkeley was also renamed, as announced on July 7, 2020, due to the LeConte brothers' support of white supremacy and their vigorous writings in that regard. See also Neo-Lamarckism References External links LeConte Memorial Lodge in Yosemite Valley, a national historic landmark National Academy of Sciences Biographical Memoir 1823 births 1901 deaths Sierra Club directors American geologists Sierra Nevada (United States) Harvard University alumni University of California, Berkeley faculty University of Georgia alumni University of Georgia faculty Oglethorpe University faculty University of South Carolina faculty History of the Sierra Nevada (United States) People from Macon, Georgia People from Liberty County, Georgia Burials at Mountain View Cemetery (Oakland, California) Theistic evolutionists Presidents of the Geological Society of America American eugenicists American white supremacists
Joseph LeConte
Biology
1,291
2,041,176
https://en.wikipedia.org/wiki/Simple%20shear
Simple shear is a deformation in which parallel planes in a material remain parallel and maintain a constant distance, while translating relative to each other. In fluid mechanics In fluid mechanics, simple shear is a special case of deformation where only one component of velocity vectors has a non-zero value: And the gradient of velocity is constant and perpendicular to the velocity itself: , where is the shear rate and: The displacement gradient tensor Γ for this deformation has only one nonzero term: Simple shear with the rate is the combination of pure shear strain with the rate of and rotation with the rate of : The mathematical model representing simple shear is a shear mapping restricted to the physical limits. It is an elementary linear transformation represented by a matrix. The model may represent laminar flow velocity at varying depths of a long channel with constant cross-section. Limited shear deformation is also used in vibration control, for instance base isolation of buildings for limiting earthquake damage. In solid mechanics In solid mechanics, a simple shear deformation is defined as an isochoric plane deformation in which there are a set of line elements with a given reference orientation that do not change length and orientation during the deformation. This deformation is differentiated from a pure shear by virtue of the presence of a rigid rotation of the material. When rubber deforms under simple shear, its stress-strain behavior is approximately linear. A rod under torsion is a practical example for a body under simple shear. If e1 is the fixed reference orientation in which line elements do not deform during the deformation and e1 − e2 is the plane of deformation, then the deformation gradient in simple shear can be expressed as We can also write the deformation gradient as Simple shear stress–strain relation In linear elasticity, shear stress, denoted , is related to shear strain, denoted , by the following equation: where is the shear modulus of the material, given by Here is Young's modulus and is Poisson's ratio. Combining gives See also Deformation (mechanics) Infinitesimal strain theory Finite strain theory Pure shear References Fluid mechanics Continuum mechanics
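The formulas referred to in the passage above did not survive in this copy of the text. As an assumption based on the surrounding description, and not a verbatim restoration of the original article, the standard expressions for simple shear are:

```latex
% Simple shear in fluid mechanics: one non-zero velocity component, with
% shear rate \dot{\gamma}; convention (\nabla v)_{ij} = \partial v_i / \partial x_j.
\mathbf{v} = (\dot{\gamma}\, y,\ 0,\ 0), \qquad
\nabla \mathbf{v} =
\begin{pmatrix} 0 & \dot{\gamma} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

% Simple shear in solid mechanics: deformation gradient for a shear of amount
% \gamma in the e_1--e_2 plane, i.e. F = \mathbf{1} + \gamma\, e_1 \otimes e_2.
\mathbf{F} =
\begin{pmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

% Linear-elastic stress--strain relation quoted in the text:
\tau = G\,\gamma, \qquad G = \frac{E}{2(1+\nu)}, \qquad
\tau = \frac{E}{2(1+\nu)}\,\gamma
```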
Simple shear
Physics,Engineering
419
2,102,092
https://en.wikipedia.org/wiki/Gumstix
Gumstix is an American multinational corporation headquartered in Redwood City, California. It develops and manufactures small system boards comparable in size to a stick of gum. In 2003, when it was first fully functional, it used ARM architecture system on a chip (SoC) and an operating system based on Linux 2.6 kernel. It has an online tool called Geppetto that allows users to design their own boards. In August 2013 it started a crowd-funding service to allow a group of users that want to get a custom design manufactured to share the setup costs. See also Arduino Embedded system Raspberry Pi Stick PC References External links Gumstix users wiki Gumstix mailing list archives on nabble Embedded Linux Linux-based devices Computer companies of the United States Computer hardware companies Companies based in Redwood City, California Network computer (brand) Motherboard form factors Motherboard companies Privately held companies based in California Single-board computers
Gumstix
Technology
190
27,300,394
https://en.wikipedia.org/wiki/V%20Hydrae
V Hydrae (V Hya) is a carbon star in the constellation Hydra. To date perhaps uniquely in our galaxy it has plasma ejections/eruptions on a grand scale that could be caused by its near, unseen companion. Variability In the 1870s, Benjamin Apthorp Gould suspected that the star is variable, based on observations with opera glasses. In May 1888, Seth Carlo Chandler confirmed that the star is variable, citing observations from 1797 through 1879, and he derived a period of 535 days, which is very close to the currently accepted value. Later that year, Chandler included the star with its variable star designation, V Hydrae, in his Catalogue of Variable Stars. V Hydrae is a semiregular variable star of type SRa, sometimes considered to be a Mira variable. It pulsates with a period of 530 days and a brightness range of 1-2 magnitudes, but also shows deep fades at intervals of about 17.5 years when it may drop below magnitude 12. Evolutionary stage V Hydrae is a late carbon star, an asymptotic giant branch (AGB) star that has dredged up sufficient material from its interior to have more carbon in its atmosphere than oxygen. The rate of mass loss from V Hydrae indicates that it is almost at the end of the AGB stage and about to lose its atmosphere completely and form a planetary nebula. It is sometimes considered to be a post-AGB object. Companions V Hydrae has a visible binary companion 46" distant. It is a magnitude 11.5 K0 giant. V Hydrae also has an unseen companion inferred by its ultraviolet excess and radial-velocity monitoring. It has been suggested that the steep drops in brightness every 17 years or so are caused by obscuration by a cloud associated with the companion passing in front of the giant star. A study in 2024 used astrometry and radial velocity measurements and constrained the orbital parameters of the companion, as well as its mass, being 36% larger than the mass of the primary and equivalent to 2.6 times the mass of the Sun. Bullets V Hydrae has high-speed outflows of material collimated into jets, and also a disk of material around the star. Since the star itself is considered to be at the end of the Asymptotic Giant Branch (AGB) phase of evolution and starting to generate a planetary nebula, the mechanism for the ejection of this material can give key insights to the formation of planetary nebulae. Microwave spectra of rotational transitions of carbon monoxide show that portions of the envelope, probably the jets, are moving away from the star at 200 km/sec. This is far faster than the ~15 km/sec stellar wind that is typically seen around AGB stars. References Hydra (constellation) Carbon stars Hydrae, V Asymptotic-giant-branch stars Semiregular variable stars 053085 Durchmusterung objects
V Hydrae
Astronomy
602
1,423,924
https://en.wikipedia.org/wiki/Voree%20plates
The Voree plates, also called The Record of Rajah Manchou of Vorito, or the Voree Record, were a set of three tiny metal plates allegedly discovered by James J. Strang, a leader of the Latter Day Saint movement, in Voree, Wisconsin, United States, in 1845. Purportedly the final testament of an ancient American ruler named "Rajah Manchou of Vorito", Strang asserted that this discovery vindicated his claims to be the true successor of Joseph Smith, founder of the Latter Day Saint movement—as opposed to Brigham Young, whom most Latter Day Saints accepted as Smith's successor in 1844. The plates also lent credence to Strang's claim that Voree, not the Salt Lake Valley, was to be the new "gathering place" of the Latter Day Saints. His purported translation of this text is accepted as scripture by his church and some other bodies descending from it, but not by any other Latter Day Saint organization. Unlike the golden plates used by Smith to produce the Book of Mormon, the existence of Strang's plates was verified by independent, non-Mormon witnesses, including Christopher Latham Sholes, inventor of the first practical typewriter. Strang was accused of having fabricated the plates from a brass tea kettle, a claim which he and his partisans vigorously denied. The plates disappeared around 1900 and their current whereabouts are unknown. Discovery According to Latter Day Saint beliefs, many ancient inhabitants of the Americas engraved records on metal plates. Joseph Smith, the movement's founding prophet, claimed that he translated the Book of Mormon from a set of golden plates which he was shown the location of by an angel named Moroni. Following Smith's murder in 1844, a number of claimants came forward to lead his nascent church, including Strang. As a recent convert to Mormonism, Strang did not possess the name recognition among rank-and-file Latter Day Saints enjoyed by Brigham Young and Sidney Rigdon, the two principal contenders for church leadership. Hence, Strang faced an uphill battle in his quest to be recognized as the heir to Smith's prophetic mantle. To advance his cause, Strang asserted that unlike Rigdon and Young, he had hard evidence of his prophetic calling. One of Smith's titles had been "Prophet, Seer, Revelator, and Translator", and Strang wished to substantiate his claim to succession by following in Smith's footsteps. So, while Young and Rigdon never offered their followers any newly revealed ancient records, Strang announced on January 17, 1845, that God had promised to lead him to a hitherto-undiscovered chronicle of a long-lost American people. This, said Strang, would prove that he was Smith's true successor. Strang next testified that on September 1, 1845, an angel of God appeared to him and showed him the location of "the record of my people in whose possession thou dwellest." Accordingly, he went on September 13 to the indicated site, located in Voree, Wisconsin, south of the White River on what is now referred to as the "Hill of Promise." Strang led four witnesses to a large oak on the hillside, inviting them to examine the ground around the tree carefully before digging for the plates. All four later testified that they could discern no evidence of digging or other disturbance of the ground. After removing the tree, Strang's companions dug down approximately three feet, where they discovered three small brass plates in a case of baked clay. 
Strang subsequently claimed to have deciphered this record, which he said was authored by an ancient Native American named "Rajah Manchou of Vorito." Appearance and dimensions The Voree plates measured approximately 2.5 inches long, and between 1.25 and 1.5 inches wide. According to one anonymous witness, they were "about the thickness of a piece of tin, fastened together in one corner by a ring passing through them." A second witness described them as being "thickly covered with ancient characters of curious workmanship." Stephen Post, the brother of Strangite apostle Warren Post, visited Strang in 1850 and examined the plates for himself, noting that "they were not polished very smooth before engraving, by appearance." Unlike his brother, Stephen Post had trouble believing Strang's account of the plates' origin and discovery: "With all the faith & confidence that I could exercise," he later wrote, "all that I could realize was that Strang made the plates himself, or at least that it was possible that he made them." Post equally observed that the brass used in the Voree plates seemed indistinguishable to him from the French brass used in ordinary tea kettles. Four of the six sides of the Voree plates contained text written in an unknown script. The fifth side contained a map showing the area where the plates were found, while side one contained engravings of the "all-seeing eye" and a man holding a scepter, with the sun, moon, and stars beneath him. These images were said to represent God, the president of the church, his two counselors, the high council of the church, the apostles, and the seventies. Translation and testimony Strang's published translation of the Voree plates reads as follows: My people are no more. The mighty are fallen, and the young slain in battle. Their bones bleached on the plain by the noonday shadow. The houses are leveled to the dust, and in the moat are the walls. They shall be inhabited. I have in the burial served them, and their bones in the Death-shade, towards the sun's rising, are covered. They sleep with the mighty dead, and they rest with their fathers. They have fallen in transgression and are not, but the elect and faithful there shall dwell. The word hath revealed it. God hath sworn to give an inheritance to his people where transgressors perished. The word of God came to me while I mourned in the Death-shade, saying, I will avenge me on the destroyer. He shall be driven out. Other strangers shall inhabit thy land. I an ensign there will set up. The escaped of my people there shall dwell when the flock disown the Shepherd and build not on the Rock. The forerunner men shall kill, but a mighty prophet there shall dwell. I will be his strength, and he shall bring forth thy record. Record my words, and bury it in the Hill of Promise. Strang published facsimiles of the Voree plates, and his translations of them, in his church's newspaper, The Voree Herald. Many Latter Day Saints found this new discovery compelling and believed it to be a sure sign that Strang was indeed Smith's true successor. For those who remained skeptical or merely curious, Mormon or non-Mormon, Strang readily produced the plates themselves for personal examination. One such investigator was the non-Mormon Christopher Sholes, inventor of the first practical typewriter, and a later inspiration to Thomas Edison. As the editor of the Southport Telegraph, Sholes called upon Strang and perused his discovery. 
Sholes offered no opinion on the plates, but described Strang as "honest and earnest" and opined that his followers ranked "among the most honest and intelligent men in the neighborhood." The Church of Jesus Christ of Latter-day Saints and the Community of Christ, the two largest factions of the Latter Day Saint movement, both reject Strang's claims to prophetic leadership and the purported authenticity of the Voree plates. Possible additional text According to one source, there may have been text translated from the Voree plates that Strang never made public. Strang himself had indicated more than once that he had released to his followers only "part of the record of Rajah Manchou of Vorito." H. V. Reed, who had visited Strang and read his translation, published a possible addition to Strang's text in the Chicago Illustrated Journal in January 1873: It shall come to pass in the latter days, that my people shall hear my voice, and the truth shall speak from the earth, and my people shall hear, and shall come and build the Temple of the Lord. My prophet, unto whom I send my word, shall lead them, and guide them in the ways of peace and salvation. In Voree the name of the Mighty One shall be heard, and the nations shall obey my law, and hear the words of my servant, whom I shall raise up unto them in the latter days. No official determination has been made by the Strangite church as to whether these words should be considered part of Strang's translation or not. Allegations of forgery Some members of competing restoration churches have insisted that the Voree plates were forged by Strang. Isaac Scott, an ex-Strangite, wrote to Joseph Smith III alleging that he learned from Caleb Barnes, Strang's former law partner, that he and Strang had fabricated the plates from a tea kettle belonging to Strang's father-in-law as part of a land speculation scheme. According to Scott, Barnes and Strang "made the 'plates' out of Ben [Perce]'s old kettle and engraved them with an old saw file, and ... when completed they put acid on them to corrode them and give them an ancient appearance; and that to deposit them under the tree, where they were found, they took a large auger ... which Ben [Perce] owned, put a fork handle on the auger and with it bored a long slanting hole under a tree on 'The Hill of Promise,' as they called it, laying the earth in a trail on a cloth as taken out, then put the 'plates' in, tamping in all the earth again, leaving no trace of their work visible.” Wingfield W. Watson, a high priest in the Strangite sect who knew Strang, vigorously challenged these allegations in an 1889 publication entitled The Prophetic Controversy #3. Among other things, Watson points out that the theory advanced fails to explain how the 12"x12"x3" stone covering block was placed above the case containing the plates. Strang was assassinated in 1856. The Voree plates remained with his family until they disappeared sometime around 1900. Their current whereabouts are unknown. Script used on the plates The Voree plates are written in an unknown alphabet. Strang authored a personal diary during his youth, parts of which were written in a secret code which was not deciphered until over one hundred years later by his grandson. Comparison of the script used in the coded portions of Strang's diary and the script used on the Voree plates shows remarkable similarities between the two. Keith Thompson alleges that the text on the plates matches Strang's published translation. 
Although he did not identify the values of specific characters, he claimed to have shown how words such as "and", "in", and "are" appear in multiple places. According to a Strangite website, Derek J. Masson, a non-Mormon scholar, reportedly argued in an unpublished 1977 paper that Strang's translation was sound. This same site alleges that a second scholar, Robert Madison, concluded in 1990 that the text on the plates appears to represent a genuine, albeit unknown, language, and that Strang's translation appeared to be "a superb (if poetic) rendition of that text into English." Independent scholarly assessment of Masson's and Madison's conclusions does not exist. See also Reformed Egyptian List of plates (Latter Day Saint movement) Notes External links The Voree Plates, Strangite site containing a large reproduction of the Voree plates, with translation and other information. James Strang as Translator, Strangite site on the plates. Giants of Burlington, non-Strangite website seeking to link the Voree plates to the world of psychic and UFO phenomena. 1845 archaeological discoveries Church of Jesus Christ of Latter Day Saints (Strangite) Latter Day Saint texts Latter Day Saint terms 1845 in Christianity 1845 in Wisconsin Territory Mormonism-related controversies Lost objects
Voree plates
Physics
2,517
10,744,611
https://en.wikipedia.org/wiki/Oecologia
Oecologia is an international peer-reviewed English-language journal published by Springer since 1968 (some articles were published in German or French until 1976). The journal publishes original research in a range of topics related to plant and animal ecology. Oecologia has an international focus and presents original papers, methods, reviews and special topics. Papers focus on population ecology, plant-animal interactions, ecosystem ecology, community ecology, global change ecology, conservation ecology, behavioral ecology and physiological ecology. Oecologia had an impact factor of 3.298 (2021) and is ranked 37 out of 136 in the subject category "ecology". Editorial Board As of December 2022, the journal has six editors in chief: Carlos L. Ballaré (plant-microbe/plant-animal interactions), University of Buenos Aires, Argentina Nina Farwig (terrestrial invertebrate ecology), University of Marburg, Germany Indrikis Krams (terrestrial vertebrate ecology), University of Latvia, Latvia Russell K. Monson (plant physiological/ecosystem ecology), University of Colorado Boulder, US Melinda Smith (plant population/community ecology), Colorado State University, US Joel Trexler (aquatic ecology), Florida State University, US References External links Ecology journals English-language journals Publications with year of establishment missing Journals published between 13 and 25 times per year
Oecologia
Environmental_science
279
3,718,113
https://en.wikipedia.org/wiki/Graphic%20violence
Graphic violence refers to the depiction of especially vivid, explicit, brutal and realistic acts of violence in visual media such as film, television, and video games. It may be real, simulated live action, or animated. Graphic in this context is a synonym for explicit, referring to the clear and unabashed nature of the violence portrayed; such material is intended for viewing by mature audiences. Subterms Below are terms that are categorized as, or related to, graphic violence. Gore Gore is imagery depicting blood or gruesome injury. On the internet, the term is used as a catch-all for footage capturing real incidents of extreme bodily destruction, such as mutilation, work accidents, and zoosadism. Sometimes the term "medical gore" is used to refer to particularly graphic real-life medical imagery, such as intense surgical procedures. The term is often considered a synonym for "graphic violence", but some people and organizations distinguish between the two. One example is Adobe Inc., which separates the terms "gore" and "graphic violence" for its publication service. Another example is the news site The Verge, which separated the terms "gore" and "violence" when reporting the closure of LiveLeak, a website that was often used to host gore videos before its closure. The sharing of gore videos is illegal in many jurisdictions. Hurtcore Hurtcore, a portmanteau of the words "hardcore" and "hurt", is a name given to a particularly extreme form of child pornography, usually involving degrading violence, bodily harm, and murder relating to child sexual abuse. Graphic footage and documentary Some documentary films or photographs contain graphic violence; examples include war and crime footage. Unlike gore content, sharing graphic documentary footage is generally legal, although its publication has caused debate and complaints. Media Graphic violence generally consists of any clear and uncensored depiction of various violent acts. Commonly included depictions include murder, assault with a deadly weapon, dismemberment, accidents which result in death or severe injury, suicide, and torture. In all cases, it is the explicitness of the violence and the injury inflicted which results in it being labeled "graphic". In fictional depictions, appropriately realistic elements are usually included to heighten the sense of realism (e.g. blood effects, prop weapons, CGI). In order to qualify for the "graphic" designation, the violence depicted must generally be of a particularly unmitigated and unshielded nature; an example would be a video of a man being shot, bleeding from the wound, and crumpling to the ground. Graphic violence arouses strong emotions, ranging from titillation and excitement to utter revulsion and even terror, depending on the mindset of the viewer and the manner in which it is presented. A certain degree of graphic violence has become de rigueur in the adult "action" genre, and it is presented in an amount and manner carefully calibrated to excite the emotions of the target demographic without inducing disgust or revulsion. Even more extreme and grotesque acts of graphic violence (generally revolving around mutilation) are often used in the horror genre in order to inspire even stronger emotions of fear and shock (which the viewing demographic would presumably be seeking). It is a highly controversial topic. Many believe that exposure to graphic violence leads to desensitization to committing acts of violence in person.
It has led to censorship in extreme cases, and regulation in others. One notable case was the creation of the US Entertainment Software Rating Board in 1994. Many nations now require varying degrees of approval from television, movie, and software rating boards before a work can be released to the public. On the other hand, some critics claim that watching violent media content can be cathartic, providing "acceptable outlets for anti-social impulses". Film Graphic violence is used frequently in horror, action, and crime films. Several of these films have been banned from certain countries for their violent content. Snuff films take horror to its furthest extreme as torture and murder are not simulated. Violence in films is not an old topic, recently a study presented in an annual American Academy of Pediatrics conference showed that the "good guys" in superhero movies were on average more violent than the villains, potentially sending a strongly negative message to young viewers. News media News media on television and online video frequently cover violent acts. The coverage may be preceded with a warning, stating that the footage may be disturbing to some viewers. Sometimes graphic images are censored, by blurring or blocking a portion of the image, cutting the violent portions out of an image sequence or by removing certain portions of film footage from viewing. Music videos Graphic and gory violence has started appearing in music videos in recent times, an example being the controversial music video for the song "Rock DJ" by British rock vocalist Robbie Williams, which features self-mutilation. Another example of a music video containing strong violence is the music video for the song "Hurricane" by American rock band Thirty Seconds to Mars and "Happiness in Slavery" by American industrial rock group Nine Inch Nails. The music video for "Forced Gender Reassignment" by American deathgrind band Cattle Decapitation displays such intense graphic violence that it is not hosted by many popular video hosting sites like YouTube and Dailymotion and is only hosted by Vimeo. Video games Violent content has been a central part of video game controversy. Because violence in video games is interactive and not passive, critics such as Dave Grossman and Jack Thompson argue that violence in games hardens children to unethical acts, calling first-person shooter games "murder simulators", although no conclusive evidence has supported this belief. An example is the display of "gibs" (short for giblets), little bits or giant chunks of internal organs, flesh, and bone, when a character is killed. Internet On the internet, several sites dedicated to recordings of real graphic violence, referred to as "gore", exist, such as Bestgore.com and Goregrish.com. Furthermore, many content-aggregator sites such as Reddit or imageboards and 4chan have their own subsites which are dedicated to or allow that kind of content. Some of those sites also require that gore material to be marked as it, often by the internet slang "NSFL" (shorthand for "not safe for life"). This kind of media might depict reality footage of war, car crashes and other accidents, decapitations, suicide, terrorism, murder, or executions. See also Violence in art Influence of mass media Research on the effects of violence in mass media Motion picture content rating system Pornographic film Snuff film Splatter film Television content rating system Video game content rating system References Film and video terminology Violence Obscenity controversies
Graphic violence
Biology
1,404
77,978,272
https://en.wikipedia.org/wiki/Tert-Butyl%20methacrylate
tert-Butyl methacrylate is an organic compound with the formula . A colorless solid, it is a common monomer for the preparation of methacrylate polymers. It is employed in other kinds of polymerizations. See also Butyl methacrylate References Methacrylate esters Monomers Commodity chemicals Tert-Butyl esters
Tert-Butyl methacrylate
Chemistry,Materials_science
80
71,483,614
https://en.wikipedia.org/wiki/Staphidine
Staphidine is a bis-diterpene alkaloid of the atisane type, found in the tissues of Delphinium staphisagria in the larkspur family (Ranunculaceae) along with staphimine and staphinine. Similar alkaloids are found in the genus Aconitum in that family, as well as Spiraea in the Rosaceae family. References Alkaloids Heterocyclic compounds with 7 or more rings Spiro compounds Nitrogen heterocycles Oxygen heterocycles
Staphidine
Chemistry
117
148,555
https://en.wikipedia.org/wiki/Spin%20glass
In condensed matter physics, a spin glass is a magnetic state characterized by randomness as well as cooperative behavior in the freezing of spins at a temperature called the "freezing temperature", Tf. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. A spin glass, when contrasted with a ferromagnet, is a "disordered" magnetic state in which the spins are aligned randomly, without a regular pattern, and the couplings too are random. A spin glass should not be confused with a "spin-on glass"; the latter is a thin film, usually based on SiO2, which is applied via spin coating. The term "glass" comes from an analogy between the magnetic disorder in a spin glass and the positional disorder of a conventional, chemical glass, e.g., a window glass. In window glass or any amorphous solid the atomic bond structure is highly irregular; in contrast, a crystal has a uniform pattern of atomic bonds. In ferromagnetic solids, magnetic spins all align in the same direction; this is analogous to a crystal's lattice-based structure. The individual atomic bonds in a spin glass are a mixture of roughly equal numbers of ferromagnetic bonds (where neighbors have the same orientation) and antiferromagnetic bonds (where neighbors have exactly the opposite orientation: north and south poles are flipped 180 degrees). These patterns of aligned and misaligned atomic magnets create what are known as frustrated interactions: distortions in the geometry of atomic bonds compared to what would be seen in a regular, fully aligned solid. They may also create situations where more than one geometric arrangement of atoms is stable. There are two main aspects of spin glass. On the physical side, spin glasses are real materials with distinctive properties, a review of which was published in 1982. On the mathematical side, simple statistical mechanics models, inspired by real spin glasses, are widely studied and applied. Spin glasses and the complex internal structures that arise within them are termed "metastable" because they are "stuck" in stable configurations other than the lowest-energy configuration (which would be aligned and ferromagnetic). The mathematical complexity of these structures makes them difficult but fruitful to study experimentally or in simulations, with applications to physics, chemistry, materials science and artificial neural networks in computer science. Magnetic behavior It is the time dependence which distinguishes spin glasses from other magnetic systems. Above the spin glass transition temperature, Tc, the spin glass exhibits typical magnetic behaviour (such as paramagnetism). If a magnetic field is applied as the sample is cooled to the transition temperature, magnetization of the sample increases as described by the Curie law. Upon reaching Tc, the sample becomes a spin glass, and further cooling results in little change in magnetization. This is referred to as the field-cooled magnetization. When the external magnetic field is removed, the magnetization of the spin glass falls rapidly to a lower value known as the remanent magnetization. The magnetization then decays slowly as it approaches zero (or some small fraction of the original value; which of the two remains unknown). This decay is non-exponential, and no simple function can fit the curve of magnetization versus time adequately. This slow decay is particular to spin glasses. Experimental measurements on the order of days have shown continual changes above the noise level of instrumentation.
Spin glasses differ from ferromagnetic materials by the fact that after the external magnetic field is removed from a ferromagnetic substance, the magnetization remains indefinitely at the remanent value. Paramagnetic materials differ from spin glasses by the fact that, after the external magnetic field is removed, the magnetization rapidly falls to zero, with no remanent magnetization. The decay is rapid and exponential. If the sample is cooled below Tc in the absence of an external magnetic field, and a magnetic field is applied after the transition to the spin glass phase, there is a rapid initial increase to a value called the zero-field-cooled magnetization. A slow upward drift then occurs toward the field-cooled magnetization. Surprisingly, the sum of the two complicated functions of time (the zero-field-cooled and remanent magnetizations) is a constant, namely the field-cooled value, and thus both share identical functional forms with time, at least in the limit of very small external fields. Edwards–Anderson model This is similar to the Ising model. In this model, we have spins arranged on a -dimensional lattice with only nearest neighbor interactions. This model can be solved exactly for the critical temperatures and a glassy phase is observed to exist at low temperatures. The Hamiltonian for this spin system is given by: where refers to the Pauli spin matrix for the spin-half particle at lattice point , and the sum over refers to summing over neighboring lattice points and . A negative value of denotes an antiferromagnetic type interaction between spins at points and . The sum runs over all nearest neighbor positions on a lattice, of any dimension. The variables representing the magnetic nature of the spin-spin interactions are called bond or link variables. In order to determine the partition function for this system, one needs to average the free energy where , over all possible values of . The distribution of values of is taken to be a Gaussian with a mean and a variance : Solving for the free energy using the replica method, below a certain temperature, a new magnetic phase called the spin glass phase (or glassy phase) of the system is found to exist which is characterized by a vanishing magnetization along with a non-vanishing value of the two point correlation function between spins at the same lattice point but at two different replicas: where are replica indices. The order parameter for the ferromagnetic to spin glass phase transition is therefore , and that for paramagnetic to spin glass is again . Hence the new set of order parameters describing the three magnetic phases consists of both and . Under the assumption of replica symmetry, the mean-field free energy is given by the expression: Sherrington–Kirkpatrick model In addition to unusual experimental properties, spin glasses are the subject of extensive theoretical and computational investigations. A substantial part of early theoretical work on spin glasses dealt with a form of mean-field theory based on a set of replicas of the partition function of the system. An important, exactly solvable model of a spin glass was introduced by David Sherrington and Scott Kirkpatrick in 1975. It is an Ising model with long range frustrated ferro- as well as antiferromagnetic couplings. It corresponds to a mean-field approximation of spin glasses describing the slow dynamics of the magnetization and the complex non-ergodic equilibrium state. 
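Before continuing with the Sherrington–Kirkpatrick model, note that the Edwards–Anderson formulas referenced above are missing from this copy of the text. The standard forms, given here as a reconstruction and not as a restoration of the original article, are:

```latex
% Edwards–Anderson Hamiltonian: Ising spins S_i = \pm 1 on a d-dimensional
% lattice with random nearest-neighbour couplings J_{ij}.
H = -\sum_{\langle ij \rangle} J_{ij}\, S_i S_j

% Couplings drawn independently from a Gaussian of mean J_0 and variance J^2:
P(J_{ij}) = \frac{1}{\sqrt{2\pi J^2}}
            \exp\!\left[ -\frac{(J_{ij}-J_0)^2}{2 J^2} \right]

% Edwards–Anderson order parameter: overlap of the same site in replicas a and b.
q^{ab} = \frac{1}{N} \sum_{i=1}^{N} \langle S_i^{a} S_i^{b} \rangle
```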
Unlike the Edwards–Anderson (EA) model, although only two-spin interactions are considered in this system, the range of each interaction can be potentially infinite (of the order of the size of the lattice). Therefore, any two spins can be linked with a ferromagnetic or an antiferromagnetic bond, and the distribution of these is given exactly as in the case of the Edwards–Anderson model. The Hamiltonian for the SK model is very similar to that of the EA model, where the symbols have the same meanings as in the EA model. The equilibrium solution of the model, after some initial attempts by Sherrington, Kirkpatrick and others, was found by Giorgio Parisi in 1979 with the replica method. The subsequent work of interpretation of the Parisi solution—by M. Mezard, G. Parisi, M.A. Virasoro and many others—revealed the complex nature of a glassy low temperature phase characterized by ergodicity breaking, ultrametricity and non-selfaverageness. Further developments led to the creation of the cavity method, which allowed study of the low temperature phase without replicas. A rigorous proof of the Parisi solution has been provided in the work of Francesco Guerra and Michel Talagrand. Phase diagram When there is a uniform external magnetic field of magnitude h, the energy function acquires an additional field term. Let all couplings be IID samples from a Gaussian distribution of mean 0 and a common variance. In 1979, J.R.L. de Almeida and David Thouless found that, as in the case of the Ising model, the mean-field solution to the SK model becomes unstable in the low-temperature, low-magnetic-field regime. The stability region on the phase diagram of the SK model is determined by two dimensionless parameters. Its phase diagram has two parts, divided by the de Almeida–Thouless curve, the solution set of a system of equations marking where the replica-symmetric solution becomes unstable; in the low-temperature, high-magnetic-field limit the curve approaches an asymptotic form. Infinite-range model This is also called the "p-spin model". The infinite-range model is a generalization of the Sherrington–Kirkpatrick model where not only two-spin interactions but p-spin interactions are considered, where p ≤ N and N is the total number of spins. Unlike the Edwards–Anderson model, but similar to the SK model, the interaction range is infinite. The Hamiltonian for this model has a form analogous to that of the SK model, where the couplings have similar meanings as in the EA model. The p → ∞ limit of this model is known as the random energy model. In this limit, the probability of the spin glass existing in a particular state depends only on the energy of that state and not on the individual spin configurations in it. A Gaussian distribution of magnetic bonds across the lattice is usually assumed to solve this model; any other distribution is expected to give the same result, as a consequence of the central limit theorem. The order parameters for this system are given by the magnetization and the two-point spin correlation between spins at the same site in two different replicas, which are the same as for the SK model. This infinite-range model can be solved explicitly for the free energy in terms of these order parameters, under the assumption of replica symmetry as well as one-step replica symmetry breaking. Non-ergodic behavior and applications A thermodynamic system is ergodic when, given any (equilibrium) instance of the system, it eventually visits every other possible (equilibrium) state (of the same energy).
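The Sherrington–Kirkpatrick and p-spin Hamiltonians discussed above are likewise missing from this copy; their standard definitions, stated here as an assumption rather than a quotation of the original article, are:

```latex
% Sherrington–Kirkpatrick Hamiltonian with a uniform external field h:
H = -\sum_{1 \le i < j \le N} J_{ij}\, S_i S_j \;-\; h \sum_{i=1}^{N} S_i,
\qquad S_i = \pm 1, \qquad J_{ij} \sim \mathcal{N}\!\left(0, \tfrac{J^2}{N}\right)

% Infinite-range (p-spin) generalisation with random p-spin couplings:
H_p = -\sum_{i_1 < i_2 < \cdots < i_p} J_{i_1 i_2 \cdots i_p}\,
      S_{i_1} S_{i_2} \cdots S_{i_p}
```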
One characteristic of spin glass systems is that, below the freezing temperature , instances are trapped in a "non-ergodic" set of states: the system may fluctuate between several states, but cannot transition to other states of equivalent energy. Intuitively, one can say that the system cannot escape from deep minima of the hierarchically disordered energy landscape; the distances between minima are given by an ultrametric, with tall energy barriers between minima. The participation ratio counts the number of states that are accessible from a given instance, that is, the number of states that participate in the ground state. The ergodic aspect of spin glass was instrumental in the awarding of half the 2021 Nobel Prize in Physics to Giorgio Parisi. For physical systems, such as dilute manganese in copper, the freezing temperature is typically as low as 30 kelvins (−240 °C), and so the spin-glass magnetism appears to be practically without applications in daily life. The non-ergodic states and rugged energy landscapes are, however, quite useful in understanding the behavior of certain neural networks, including Hopfield networks, as well as many problems in computer science optimization and genetics. Spin-glass without structural disorder Elemental crystalline neodymium is paramagnetic at room temperature and becomes an antiferromagnet with incommensurate order upon cooling below 19.9 K. Below this transition temperature it exhibits a complex set of magnetic phases that have long spin relaxation times and spin-glass behavior that does not rely on structural disorder. History A detailed account of the history of spin glasses from the early 1960s to the late 1980s can be found in a series of popular articles by Philip W. Anderson in Physics Today. Discovery In 1930s, material scientists discovered the Kondo effect, where the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. It was later understood that the Kondo effect occurs when a nonmagnetic metal contains a very small fraction of magnetic atoms (i.e., at high dilution). Unusual behavior was observed in iron-in-gold alloy (AuFe) and manganese-in-copper alloy (CuMn) at around 1 to 10 atom percent. Cannella and Mydosh observed in 1972 that AuFe had an unexpected cusplike peak in the a.c. susceptibility at a well defined temperature, which would later be termed spin glass freezing temperature. It was also called "mictomagnet" (micto- is Greek for "mixed"). The term arose from the observation that these materials often contain a mix of ferromagnetic () and antiferromagnetic () interactions, leading to their disordered magnetic structure. This term fell out of favor as the theoretical understanding of spin glasses evolved, recognizing that the magnetic frustration arises not just from a simple mixture of ferro- and antiferromagnetic interactions, but from their randomness and frustration in the system. Sherrington–Kirkpatrick model Sherrington and Kirkpatrick proposed the SK model in 1975, and solved it by the replica method. They discovered that at low temperatures, its entropy becomes negative, which they thought was because the replica method is a heuristic method that does not apply at low temperatures. It was then discovered that the replica method was correct, but the problem lies in that the low-temperature broken symmetry in the SK model cannot be purely characterized by the Edwards-Anderson order parameter. 
Instead, further order parameters are necessary, which leads to the replica symmetry breaking ansatz of Giorgio Parisi. In the full replica symmetry breaking ansatz, infinitely many order parameters are required to characterize a stable solution. Applications The formalism of replica mean-field theory has also been applied in the study of neural networks, where it has enabled calculations of properties such as the storage capacity of simple neural network architectures without requiring a training algorithm (such as backpropagation) to be designed or implemented. More realistic spin glass models with short-range frustrated interactions and disorder, like the Gaussian model where the couplings between neighboring spins follow a Gaussian distribution, have been studied extensively as well, especially using Monte Carlo simulations. These models display spin glass phases bordered by sharp phase transitions. Besides its relevance in condensed matter physics, spin glass theory has acquired a strongly interdisciplinary character, with applications to neural network theory, computer science, theoretical biology, econophysics, etc. Spin glass models have also been adapted to the folding funnel model of protein folding. See also Amorphous magnet Antiferromagnetic interaction Cavity method Crystal structure Geometrical frustration Orientational glass Phase transition Quenched disorder Random energy model Replica trick Solid-state physics Spin ice Notes References Literature Expositions Popular exposition, with a minimal amount of mathematics. A practical tutorial introduction. 1st 15 chapters of 2008 draft version, available at www.stat.ucla.edu Textbook that focuses on the cavity method and the applications to computer science, especially constraint satisfaction problems. Introduction focused on computer science applications, including neural networks. Focuses on the experimentally measurable properties of spin glasses (such as copper-manganese alloy). Covers mean field theory, experimental data, and numerical simulations. Early exposition containing the pre-1990 breakthroughs, such as the replica trick. Approach via statistical field theory. Compendium of rigorously provable results. Primary sources Papercore Summary http://papercore.org/Sherrington1975 Papercore Summary http://papercore.org/Parisi1980 External links Statistics of frequency of the term "Spin glass" in arxiv.org Magnetic ordering Mathematical physics
Spin glass
Physics,Chemistry,Materials_science,Mathematics,Engineering
3,289
10,252,066
https://en.wikipedia.org/wiki/Choquet%20integral
A Choquet integral is a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. It was initially used in statistical mechanics and potential theory, but found its way into decision theory in the 1980s, where it is used as a way of measuring the expected utility of an uncertain event. It is applied specifically to membership functions and capacities. In imprecise probability theory, the Choquet integral is also used to calculate the lower expectation induced by a 2-monotone lower probability, or the upper expectation induced by a 2-alternating upper probability. Using the Choquet integral to denote the expected utility of belief functions measured with capacities is a way to reconcile the Ellsberg paradox and the Allais paradox. Definition The following notation is used: – a set. – a collection of subsets of . – a function. – a monotone set function. Assume that is measurable with respect to , that is Then the Choquet integral of with respect to is defined by: where the integrals on the right-hand side are the usual Riemann integral (the integrands are integrable because they are monotone in ). Properties In general the Choquet integral does not satisfy additivity. More specifically, if is not a probability measure, it may hold that for some functions and . The Choquet integral does satisfy the following properties. Monotonicity If then Positive homogeneity For all it holds that Comonotone additivity If are comonotone functions, that is, if for all it holds that . which can be thought of as and rising and falling together then Subadditivity If is 2-alternating, then Superadditivity If is 2-monotone, then Alternative representation Let denote a cumulative distribution function such that is integrable. Then this following formula is often referred to as Choquet Integral: where . choose to get , choose to get Applications The Choquet integral was applied in image processing, video processing and computer vision. In behavioral decision theory, Amos Tversky and Daniel Kahneman use the Choquet integral and related methods in their formulation of cumulative prospect theory. See also Nonlinear expectation Superadditivity Subadditivity Notes Further reading Expected utility Functional analysis Definitions of mathematical integration
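For a finite set, the Choquet integral reduces to a sorted sum, which makes the definition above concrete. In the sketch below, the function values and the capacity are invented purely for illustration; the algorithm itself is the standard discrete form of the definition for a non-negative function f and a monotone set function nu.

```python
def choquet_integral(f, nu, elements):
    """Discrete Choquet integral of a non-negative function f with respect to
    a monotone set function nu (a capacity) over a finite set of elements.

    Uses the standard sorted-sum form:
        C = sum_i (f(x_(i)) - f(x_(i-1))) * nu({x_(i), ..., x_(n)})
    where f(x_(1)) <= ... <= f(x_(n)) and f(x_(0)) := 0.
    """
    ordered = sorted(elements, key=f)
    total, previous = 0.0, 0.0
    for i, x in enumerate(ordered):
        upper_set = frozenset(ordered[i:])       # level set {f >= f(x)}
        total += (f(x) - previous) * nu(upper_set)
        previous = f(x)
    return total

if __name__ == "__main__":
    elements = ["a", "b", "c"]
    values = {"a": 0.2, "b": 0.5, "c": 0.9}      # illustrative function values
    f = values.__getitem__

    # Illustrative 2-monotone (convex) capacity: nu(A) = (|A| / 3) ** 2
    nu = lambda subset: (len(subset) / 3) ** 2

    print(choquet_integral(f, nu, elements))
    # With an additive nu (a probability measure), the Choquet integral
    # coincides with the ordinary expectation.
```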
Choquet integral
Mathematics
462
76,216,842
https://en.wikipedia.org/wiki/Data%20physicalization
A data physicalization (or simply physicalization) is a physical artefact whose geometry or material properties encode data. Its main goals are to engage people and to communicate data using computer-supported physical data representations. History Before the invention of computers and digital devices, the application of data physicalization already existed in ancient artifacts as a medium to represent abstract information. One example is the Blombos ocher plaque, which is estimated to be 70,000–80,000 years old. The geometric and iconographic shapes engraved at the surface of the artifact demonstrated the cognitive complexity of ancient humans. Moreover, since such representations were deliberately made and crafted, the evidence suggests that the geometric presentation of information was an established practice in that society. Although researchers still cannot decipher the specific type of information encoded in the artifact, there are several proposed interpretations. For example, the potential functions of the artifact have been divided into four categories: "numerical", "functional", "cognitive", and "social". Later, around 35,000 BC, another artifact, the Lebombo bone, emerged, and its encoded information is easier to read. There are around 29 distinct notches carved on the baboon fibula. It is estimated that the number of notches is closely related to the number of lunar cycles. Moreover, this early counting system was also regarded as the birth of calculation. Right before the invention of writing, the clay token system spread across ancient Mesopotamia. When buyers and sellers wanted to make a trade, they prepared a set of tokens and sealed them inside a clay envelope after impressing the shapes on its surface. Such physical tokens were widely used in trading, administrative documents, and agricultural settlement. Moreover, the token system is evidence of an early counting system. Each shape corresponds to a physical meaning such as the representation of "sheep", forming a one-to-one mapping relationship. The significance of the token system is that it uses physical shape to encode numerical information, and it is regarded as the precursor of the early writing system. The logical reason is that a two-dimensional symbol would record the same information as the impression created by the clay token. From 3000 BCE to the 17th century, a more complex visual encoding, the quipu, was developed and widely used in Andean South America. Knotted strings unrelated to quipu have also been used to record information by the ancient Chinese, Tibetans and Japanese. The ancient Inca empire used it for military and taxation purposes. The base-10 logical-numerical system can record information based on the relative distance of knots, the color of the knots, and the type of knots. Due to the material (cotton) of quipus, very few of them survive. By analyzing those remaining artifacts, Erland Nordenskiöld proposed that the quipu was the only writing system used by the Inca, and that the information encoding technique is sophisticated and distinctive. The idea of data physicalization became popular from the 17th century onward, when architects and engineers widely used such methods in civil engineering and city management. For example, from 1663 to 1867, plan-relief models were used to visualize French territorial structure and important military sites such as citadels and walled cities. One of the functions of the plan-relief model was therefore to plan defense or offense. 
It is worth noting that the model can be categorized as a military technology and it did not encode any abstract information. The tradition of using tangible models to represent buildings and architecture still remains today. One of the contemporary examples of data physicalization is the Galton board, designed by Francis Galton, who promoted the concept of regression toward the mean. The Galton board, a very useful tool for approximating the Gaussian law of errors, consists of evenly spaced nails and vertical slats at the bottom of the board. After a large number of marbles are released, they settle at the bottom, forming the contour of a bell curve. Most marbles agglomerate at the center (smaller deviation), with few at the edges of the board. In 1935, three different electricity companies (e.g. Pacific Gas and Electric Company, Commonwealth Edison Company) created an electricity data physicalization model to visualize the power consumption of their customers so that the companies could better forecast the upcoming power demand. The model has one short axis and one long axis. The short axis indicates the day, whereas the long axis spans the whole year. Viewers can see when customers consume the most electricity during the day and how the consumption changes across different seasons. The model was built manually by cutting wooden sheets and stacking all the pieces together. Researchers began to realize that data physicalization models can not only help agents manage and plan certain tasks, but can also greatly simplify very complex problems by letting users manipulate data in the real world. Therefore, from an epistemic perspective, physical manipulation enables users to uncover hidden patterns that cannot be easily detected. Max Perutz received the Nobel Prize in Chemistry in 1962 for his distinguished work in discovering the structure of globular proteins. When a narrow X-ray beam passes through the haemoglobin molecule, the diffraction pattern can reveal the inner structure of the atomic arrangement. One of Perutz's works within this research involved creating a physicalized haemoglobin molecule, which enabled him to manipulate and inspect the structure in a tangible way. In his book Semiology of Graphics, Bertin designed a matrix visualization device called Domino, which lets users manipulate row and column data. The combination of rows and columns can be considered a two-dimensional data space. Bertin also defined which variables can be reordered and which cannot; for example, time can be considered a one-directional variable and should be kept in its natural order. Compared with the aforementioned work, this model emphasized the visual-thinking aspect of data physicalization and supports a variety of data types such as maps, matrices, and timelines. By adjusting the data entries, an analyst can find patterns inside the datasets and repeatedly use Domino on different datasets. More recent physicalization examples include using LEGO bricks to keep track of project progress; a simulation of the Galton board discussed earlier is sketched below. For example, people have used LEGO to record their thesis-writing progress. Users can use the LEGO board to set concrete steps leading up to publication, such as data analysis, data collection, and development. Another application involves using LEGO in bug tracking. For software engineers, keeping track of issues in the code base is a crucial task, and LEGO simplifies this process by physicalizing the issues. 
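The behaviour of the Galton board described above is easy to reproduce in a few lines of code: each marble makes an independent left or right choice at every row of nails, so the slot counts follow a binomial distribution that approximates a bell curve. The simulation below is a minimal sketch; the number of rows and marbles is arbitrary.

```python
import random
from collections import Counter

def galton_board(rows=12, marbles=10_000, seed=42):
    """Simulate a Galton board: each marble bounces left or right at every
    row of nails, so its final slot index is binomially distributed and the
    histogram approximates a bell curve. Row and marble counts are illustrative."""
    rng = random.Random(seed)
    return Counter(sum(rng.random() < 0.5 for _ in range(rows)) for _ in range(marbles))

if __name__ == "__main__":
    for slot, count in sorted(galton_board().items()):
        print(f"{slot:2d} | {'#' * (count // 50)}")  # most marbles pile up near the centre
```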
A specific application of data physicalization involves building tactile maps for visually impaired people. Past examples include using microcapsule paper to build tactile maps. With the help of digital fabrication tools such as laser cutters, researchers in the Fab Lab at RWTH Aachen University have produced relief-based tactile maps to support visually impaired users. Some tangible user interface researchers have combined TUIs with tactile maps to provide dynamic rendering and enhance collaboration among visually impaired people (e.g. FluxMarkers). References physicalization Visualization (research)
Data physicalization
Engineering
1,453
8,240,558
https://en.wikipedia.org/wiki/Line%E2%80%93line%20intersection
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. Formulas A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see . Given two points on each line First we consider the intersection of two lines and in two-dimensional space, with line being defined by two distinct points and , and line being defined by two distinct points and . The intersection of line and can be defined using determinants. The determinants can be written out as: When the two lines are parallel or coincident, the denominator is zero. Given two points on each line segment The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines and in terms of first degree Bézier parameters: (where and are real numbers). The intersection point of the lines is found with one of the following values of or , where and with There will be an intersection if and . The intersection point falls within the first line segment if , and it falls within the second line segment if . These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. Given two line equations The and coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equations and where and are the slopes (gradients) of the lines and where and are the -intercepts of the lines. At the point where the two lines intersect (if they do), both coordinates will be the same, hence the following equality: We can rearrange this expression in order to extract the value of , and so, To find the coordinate, all we need to do is substitute the value of into either one of the two line equations, for example, into the first: Hence, the point of intersection is Note that if then the two lines are parallel and they do not intersect, unless as well, in which case the lines are coincident and they intersect at every point. 
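The determinant and first-degree (Bézier) parameter formulas referred to above translate directly into code. The sketch below is a minimal illustration of the segment-to-segment case: it computes the parameters t and u, checks that both lie in [0, 1], and returns the intersection point. The function name and tolerance are illustrative choices.

```python
def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection of segment p1-p2 with segment p3-p4 (2D points as (x, y) tuples).

    Implements the first-degree (Bezier) parameter form described above: t
    parametrises the first segment and u the second, and both must lie in
    [0, 1] for the segments (rather than the infinite lines) to intersect.
    Returns the intersection point, or None if there is none.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < eps:
        return None  # parallel or coincident lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None  # the infinite lines cross outside the segments

if __name__ == "__main__":
    print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```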
Using homogeneous coordinates By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple . The mapping from 3D to 2D coordinates is . We can convert 2D points to homogeneous coordinates by defining them as . Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as and . We can represent these two lines in line coordinates as and . The intersection of two lines is then simply given by If , the lines do not intersect. More than two lines The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the -line intersection problem are as follows. In two dimensions In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the th equation () as and stack these equations into matrix form as where the th row of the matrix is , is the 2 × 1 vector , and the th element of the column vector is . If has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix is also 2, there exists a solution of the matrix equation and thus an intersection point of the lines. The intersection point, if it exists, is given by where is the Moore–Penrose generalized inverse of (which has the form shown because has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other. In three dimensions The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form Thus a set of lines can be represented by equations in the 3-dimensional coordinate vector : where now is and is . As before there is a unique intersection point if and only if has full column rank and the augmented matrix does not, and the unique intersection if it exists is given by Nearest points to skew lines In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. In two dimensions In the two-dimensional case, first, represent line as a point on the line and a unit normal vector , perpendicular to that line. That is, if and are points on line 1, then let and let which is the unit vector along the line, rotated by a right angle. 
The distance from a point to the line is given by And so the squared distance from a point to a line is The sum of squared distances to many lines is the cost function: This can be rearranged: To find the minimum, we differentiate with respect to and set the result equal to the zero vector: so and so In more than two dimensions While is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing a seminorm on the distance between and another point giving the distance to the line. In any number of dimensions, if is a unit vector along the th line, then becomes where is the identity matrix, and so General derivation In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin and a unit direction vector . The square of the distance from a point to one of the lines is given from Pythagoras: where is the projection of on line . The sum of distances to the square to all lines is To minimize this expression, we differentiate it with respect to . which results in where is the identity matrix. This is a matrix , with solution , where is the pseudo-inverse of . Non-Euclidean geometry In spherical geometry, any two great circles intersect. In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line. See also Line segment intersection Line intersection in projective space Distance between two parallel lines Distance from a point to a line Line–plane intersection Parallel postulate Triangulation (computer vision) References External links Distance between Lines and Segments with their Closest Point of Approach, applicable to two, three, or more dimensions. Euclidean geometry Linear algebra Geometric algorithms Geometric intersection
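The least-squares construction described above, summing the projectors I − nnᵀ over all lines and solving the resulting normal equations with a pseudo-inverse, can be written in a few lines of NumPy. The sketch below is illustrative only; the example lines and function name are arbitrary.

```python
import numpy as np

def nearest_point_to_lines(origins, directions):
    """Least-squares point closest to a set of lines in any dimension.

    Each line i is given by an origin a_i and a direction n_i; following the
    derivation above, we solve (sum_i (I - n_i n_i^T)) p = sum_i (I - n_i n_i^T) a_i
    using a pseudo-inverse. Inputs are arrays of shape (m, dim); the example
    data below are illustrative.
    """
    origins = np.array(origins, dtype=float)
    dirs = np.array(directions, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # normalise direction vectors

    dim = origins.shape[1]
    S = np.zeros((dim, dim))
    c = np.zeros(dim)
    for a, n in zip(origins, dirs):
        P = np.eye(dim) - np.outer(n, n)  # projector onto the subspace orthogonal to the line
        S += P
        c += P @ a
    return np.linalg.pinv(S) @ c  # pseudo-inverse handles rank-deficient cases

if __name__ == "__main__":
    # Two skew lines in 3D: the x-axis and a line parallel to the y-axis through (0, 0, 1).
    p = nearest_point_to_lines([[0, 0, 0], [0, 0, 1]], [[1, 0, 0], [0, 1, 0]])
    print(p)  # approximately [0, 0, 0.5], midway between the two lines
```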
Line–line intersection
Mathematics
1,689
14,446,113
https://en.wikipedia.org/wiki/GPR115
Probable G-protein coupled receptor 115 is a protein that in humans is encoded by the GPR115 gene. References Further reading G protein-coupled receptors
GPR115
Chemistry
33
6,571,624
https://en.wikipedia.org/wiki/Composition%20H-6
Composition H-6 is a melt-cast military aluminized high explosive. H-6 was developed in the United States. The chemical composition of H-6 is specified as follows: 45.1 ± 0.3% RDX ; 29.2 ± 3.0% TNT; 21.0 ± 3.0% powdered aluminium; 4.7 ± 1.0% paraffin wax as a phlegmatizing agent; 0.47 ± 0.1% calcium chloride as a desiccant to prevent moisture from reacting with Al. Comp H-6 is used in a number of military applications, specifically as an explosive main fill, in munitions including aerial bombs such as the general purpose Mark 80 bombs in use with the USMC and US Navy (while USAF Mark 80s use a tritonal main fill); and underwater munitions (e.g. naval mines, depth charges and torpedoes) where it has generally replaced torpex, being less shock-sensitive and having more stable storage characteristics. It is approximately 1.35 times more powerful than pure TNT. Properties Density: 1.73—1.74 g/cm3 Velocity of detonation: 7,367 m/s Manufacture Australia is one of the largest manufacturers of Composition H-6, originally being made at St Marys Munitions Filling Factory MFF New South Wales, for the main fill of Mark 82 and 84 general purpose bombs. Manufacturing has since moved to the Mulwala Propellant Facility, NSW. See also Torpex Tritonal Minol Amatol Composition C References Explosives Trinitrotoluene
Composition H-6
Chemistry
332
53,561,725
https://en.wikipedia.org/wiki/Eurypelmella
Eurypelmella is a nomen dubium (doubtful name) for a genus of spiders in the family Theraphosidae. It has been regarded as a synonym for Schizopelma, but this was disputed in 2016. References Theraphosidae Historically recognized spider taxa Nomina dubia
Eurypelmella
Biology
64
23,505,398
https://en.wikipedia.org/wiki/Antonio%20Pe%C3%B1a%20D%C3%ADaz
Antonio Peña Díaz (born in 1936) is a Mexican biochemist who received the Carlos J. Finlay Prize for Microbiology (UNESCO, 2003) and chaired both the Mexican Academy of Sciences (1992–93) and the Mexican Society of Biochemistry (1981–83). Peña Díaz holds a bachelor's degree in Medicine and both a master's and a doctorate degree from the National Autonomous University of Mexico (UNAM). He is currently an emeritus professor of the Institute for Cellular Physiology of the same university and has worked as a visiting scholar at the University of Rochester. Selected works ("Biochemistry", 1979) ("The Membranes of the Cell", 1986) ("Energy and Life: Bioenergetics", with Georges Dreyfus Cortés, 1990) ("How Does a Cell Work: Cellular Physiology", 1995) ("What is Metabolism?", 2001) Notes and references Mexican biochemists Members of the Mexican Academy of Sciences National Autonomous University of Mexico alumni Academic staff of the National Autonomous University of Mexico University of Rochester faculty People from Durango 1936 births Living people 21st-century Mexican scientists 20th-century Mexican scientists
Antonio Peña Díaz
Chemistry
234
627,071
https://en.wikipedia.org/wiki/Computer-aided%20software%20engineering
Computer-aided software engineering (CASE) is a domain of software tools used to design and implement applications. CASE tools are similar to and are partly inspired by computer-aided design (CAD) tools used for designing hardware products. CASE tools are intended to help develop high-quality, defect-free, and maintainable software. CASE software was often associated with methods for the development of information systems together with automated tools that could be used in the software development process.<ref>P. Loucopoulos and V. Karakostas (1995). System Requirements Engineerinuality software which will perform effectively.</ref> History The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development. His Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool although it predated the term. Another major thread emerged as a logical extension to the data dictionary of a database. By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data that gave rise to the earlier versions of CASE. The next entrant into the market was Excelerator from Index Technology in Cambridge, Mass. While DesignAid ran on Convergent Technologies and later Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While, at the time of launch, and for several years, the IBM platform did not support networking or a centralized database as did the Convergent Technologies or Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator were a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instrument's CA Gen and Andersen Consulting's FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP). CASE tools were at their peak in the early 1990s. According to the PC Magazine of January 1990, over 100 companies were offering nearly 200 different CASE tools. At the time IBM had proposed AD/Cycle, which was an alliance of software vendors centered on IBM's Software repository using IBM DB2 in mainframe and OS/2:The application development tools can be from several sources: from IBM, from vendors, and from the customers themselves. IBM has entered into relationships with Bachman Information Systems, Index Technology Corporation, and Knowledgeware wherein selected products from these vendors will be marketed through an IBM complementary marketing program to provide offerings that will help to achieve complete life-cycle coverage''. With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening the market for the mainstream CASE tools of today. 
Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools. Most of the various tool vendors added some support for object-oriented methods and tools. In addition new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation. Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse tool sets and methods were consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modelling Language (UML) is currently widely accepted as the industry standard for object-oriented modeling. CASE software Tools CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories: Business and analysis modeling: Graphical modeling tools. E.g., E/R modeling, object modeling, etc. Development: Design and construction phases of the life-cycle. Debugging environments. E.g., IISE LKO. Verification and validation: Analyze code and specifications for correctness, performance, etc. Configuration management: Control the check-in and check-out of repository objects and files. E.g., SCCS, IISE. Metrics and measurement: Analyze code for complexity, modularity (e.g., no "go to's"), performance, etc. Project management: Manage project plans, task assignments, scheduling. Another common way to distinguish CASE tools is the distinction between Upper CASE and Lower CASE. Upper CASE Tools support business and analysis modeling. They support traditional diagrammatic languages such as ER diagrams, Data flow diagram, Structure charts, Decision Trees, Decision tables, etc. Lower CASE Tools support development activities, such as physical design, debugging, construction, testing, component integration, maintenance, and reverse engineering. All other activities span the entire life-cycle and apply equally to upper and lower CASE. Workbenches Workbenches integrate two or more CASE tools and support specific software-process activities. Hence they achieve: A homogeneous and consistent interface (presentation integration) Seamless integration of tools and toolchains (control and data integration) An example workbench is Microsoft's Visual Basic programming environment. It incorporates several development tools: a GUI builder, a smart code editor, debugger, etc. Most commercial CASE products tended to be such workbenches that seamlessly integrated two or more tools. Workbenches also can be classified in the same manner as tools; as focusing on Analysis, Development, Verification, etc. as well as being focused on the upper case, lower case, or processes such as configuration management that span the complete life-cycle. Environments An environment is a collection of CASE tools or workbenches that attempts to support the complete software process. This contrasts with tools that focus on one specific task or a specific part of the life-cycle. CASE environments are classified by Fuggetta as follows: Toolkits: Loosely coupled collections of tools. These typically build on operating system workbenches such as the Unix Programmer's Workbench or the VMS VAX set. 
They typically perform integration via piping or some other basic mechanism to share data and pass control. The strength of easy integration is also one of the drawbacks. Simple passing of parameters via technologies such as shell scripting can't provide the kind of sophisticated integration that a common repository database can. Fourth generation: These environments are also known as 4GL standing for fourth generation language environments due to the fact that the early environments were designed around specific languages such as Visual Basic. They were the first environments to provide deep integration of multiple tools. Typically these environments were focused on specific types of applications. For example, user-interface driven applications that did standard atomic transactions to a relational database. Examples are Informix 4GL, and Focus. Language-centered: Environments based on a single often object-oriented language such as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from Parcplace. In these environments all the operating system resources were objects in the object-oriented language. This provides powerful debugging and graphical opportunities but the code developed is mostly limited to the specific language. For this reason, these environments were mostly a niche within CASE. Their use was mostly for prototyping and R&D projects. A common core idea for these environments was the model–view–controller user interface that facilitated keeping multiple presentations of the same design consistent with the underlying model. The MVC architecture was adopted by the other types of CASE environments as well as many of the applications that were built with them. Integrated: These environments are an example of what most IT people tend to think of first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These environments attempt to cover the complete life-cycle from analysis to maintenance and provide an integrated database repository for storing all artifacts of the software process. The integrated software repository was the defining feature for these kinds of tools. They provided multiple different design models as well as support for code in heterogenous languages. One of the main goals for these types of environments was "round trip engineering": being able to make changes at the design level and have those automatically be reflected in the code and vice versa. These environments were also typically associated with a particular methodology for software development. For example, the FOUNDATION CASE suite from Andersen was closely tied to the Andersen Method/1 methodology. Process-centered: This is the most ambitious type of integration. These environments attempt to not just formally specify the analysis and design objects of the software process but the actual process itself and to use that formal process to control and guide software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia. These environments were by definition tied to some methodology since the software process itself is part of the environment and can control many aspects of tool invocation. In practice, the distinction between workbenches and environments was flexible. Visual Basic for example was a programming workbench but was also considered a 4GL environment by many. 
The features that distinguished workbenches from environments were deep integration via a shared repository or common language and some kind of methodology (integrated and process-centered environments) or domain (4GL) specificity. Major CASE risk factors Some of the most significant risk factors for organizations adopting CASE technology include: Inadequate standardization: Organizations usually have to tailor and adopt methodologies and tools to their specific requirements. Doing so may require significant effort to integrate both divergent technologies as well as divergent methods. For example, before the adoption of the UML standard the diagram conventions and methods for designing object-oriented models were vastly different among followers of Jacobsen, Booch, and Rumbaugh. Unrealistic expectations: The proponents of CASE technology—especially vendors marketing expensive tool sets—often hype expectations that the new approach will be a silver bullet that solves all problems. In reality no such technology can do that and if organizations approach CASE with unrealistic expectations they will inevitably be disappointed. Inadequate training: As with any new technology, CASE requires time to train people in how to use the tools and to get up to speed with them. CASE projects can fail if practitioners are not given adequate time for training or if the first project attempted with the new technology is itself highly mission critical and fraught with risk. Inadequate process control: CASE provides significant new capabilities to utilize new types of tools in innovative ways. Without the proper process guidance and controls these new capabilities can cause significant new problems as well. See also Data modeling Domain-specific modeling Method engineering Model-driven architecture Modeling language Rapid application development Automatic programming Test automation Build automation References Data management
Computer-aided software engineering
Technology
2,334
2,404,069
https://en.wikipedia.org/wiki/Leucism
Leucism () is a wide variety of conditions that result in partial loss of pigmentation in an animal—causing white, pale, or patchy coloration of the skin, hair, feathers, scales, or cuticles, but not the eyes. It is occasionally spelled leukism. Some genetic conditions that result in a "leucistic" appearance include piebaldism, Waardenburg syndrome, vitiligo, Chédiak–Higashi syndrome, flavism, isabellinism, xanthochromism, axanthism, amelanism, and melanophilin mutations. Pale patches of skin, feathers, or fur (often referred to as "depigmentation") can also result from injury. Details Leucism is often used to describe the phenotype that results from defects in pigment cell differentiation and/or migration from the neural crest to skin, hair, or feathers during development. This results in either the entire surface (if all pigment cells fail to develop) or patches of body surface (if only a subset are defective) having a lack of cells that can make pigment. Since all pigment cell-types differentiate from the same multipotent precursor cell-type, leucism can cause the reduction in all types of pigment. This is in contrast to albinism, for which leucism is often mistaken. Albinism results in the reduction of melanin production only, though the melanocyte (or melanophore) is still present. Thus in species that have other pigment cell-types, for example xanthophores, albinos are not entirely white, but instead display a pale yellow color. More common than a complete absence of pigment cells is localized or incomplete hypopigmentation, resulting in irregular patches of white on an animal that otherwise has normal coloring and patterning. This partial leucism is known as a "pied" or "piebald" effect; and the ratio of white to normal-colured skin can vary considerably not only between generations, but between different offspring from the same parents, and even between members of the same litter. This is notable in horses, cows, cats, dogs, the urban crow and the ball python but is also found in many other species. Due to the lack of melanin production in both the retinal pigmented epithelium (RPE) and iris, those affected by albinism sometimes have pink pupil due to the underlying blood vessels showing through. However, this is not always the case and many albino animals do not have pink pupils. The common belief that all albinos have pink pupils results in many albinos being incorrectly labeled as 'leucistic'. The neural crest disorders that cause leucism do not result in pink pupils and therefore most leucistic animals have normally colored eyes. This is because the melanocytes of the RPE do not derive from the neural crest. Instead, an out-pouching of the neural tube generates the optic cup that, in turn, forms the retina. As these cells are from an independent developmental origin, they are typically unaffected by the genetic cause of leucism. Notable examples Platypus (Ornithorhynchus anatinus) – in 2021, a leucistic example was found in the Gwydir River, near Armidale, New South Wales, Australia. The Anchorage White Raven - a leucistic common raven known for its "trickster" behavior, spotted in 2023 and 2024 in Anchorage, Alaska. Genetics Genes that, when mutated, can cause leucism include c-kit, mitf and EDNRB. Etymology The terms leucistic and leucism are derived from the stem leuc- + -ism, from Latin leuco- in turn derived from Greek leukos meaning white. 
Gallery See also Albinism Albino and white squirrels Amelanism Dyschromia Erythrism Heterochromia iridum Melanism Piebaldism Vitiligo Xanthochromism References External links Animal coat colors Articles containing video clips Bird colours Dermatologic terminology Disturbances of pigmentation Genetic disorders with no OMIM
Leucism
Biology
877
45,372,257
https://en.wikipedia.org/wiki/Geoxyle
A geoxyle is a plant in which an enlarged, woody structure occurs beneath the surface of the ground. Such plants have developed independently in various plant lineages, mostly evolving in the Pliocene and subsequently diverging within the last two million years. In contrast to their close relatives, these plants have developed in areas with both high rainfall and a high frequency of fires. They are sometimes known as underground trees, and the areas where they grow as underground forests. The geoxylic growth forms of woody subshrubs is characterised by massive lignotubers or underground woody axes from which emerge aerial shoots which may be ephemeral. These growth forms are found in savannahs in southern Africa. It is thought they developed in tandem with the spread of savannahs which resulted in an increase in tall grasses which are easily flammable during the long dry season associated with the savannah climate. Some well-known examples of geoxyles are the sand apple (Parinari capensis), the plough-breaker (Erythrina zeyheri), the red wings (Combretum platypetalum) and the wild grape (Lannea edulis). Others are Ancylobothrys petersiana, Diospyros galpinii, Elephantorrhiza elephantina, Erythrina resupinata, Eugenia albanensis, Eugenia capensis, Maytenus nemorosa, Pachystigma venosum and Salacia kraussii. Their occurrence is influenced by environmental disturbances and climate seasonality, while soil fertility impacts functional types and their diversity. References Forest ecology Plant life-forms
Geoxyle
Biology
343
23,280,884
https://en.wikipedia.org/wiki/Blowing%20engine
A blowing engine is a large stationary steam engine or internal combustion engine directly coupled to air pumping cylinders. They deliver a very large quantity of air at a pressure lower than an air compressor, but greater than a centrifugal fan. Blowing engines are majorly used to provide the air blast for furnaces, blast furnaces and other forms of smelter. Waterwheel engines The very first blowing engines were the blowing houses: bellows, driven by waterwheels. Smelters are most economically located near the source of their ore, which may not have suitable water power available nearby. There is also the risk of drought interrupting the water supply, or of expanding demand for the furnace outstripping the available water capacity. These restrictions led to the very earliest form of steam engine used for power generation rather than pumping, the water-returning engine. With this engine, a steam pump was used to raise water that in turn drove a waterwheel and thus the machinery. Water from the wheel was then returned by the pump. These early steam engines were only suitable for pumping water, and could not be connected directly to the machinery. The first practical examples of these engines were installed in 1742 at Coalbrookdale and as improvements to the Carron Ironworks on the Clyde in 1765. Beam blowing engines Early steam prime movers were beam engines, firstly of the non-rotative (i.e. solely reciprocating) and later the rotative type (i.e. driving a flywheel). Both of these were used as blowing engines, usually by coupling an air cylinder to the far end of the beam from the steam cylinder. Joshua Field describes an 1821 trip to Foster, Rastrick & Co. of Stourbridge, where he observed eight large beam engines, one of 30 hp working a blowing cylinder of 5 feet diameter and 6 feet stroke. Where the later beam engines drove flywheels, this was useful for providing a more even action to the engine. The air cylinder was still driven by the beam alone and the flywheel was used solely as a flywheel, not driving an output shaft. A well-known surviving example of this type are the paired beam engines "David & Sampson", now preserved at Blists Hill open-air museum, Ironbridge Gorge. These are a pair of single-cylinder condensing beam engines, each driving an air cylinder by their own beam, but sharing a single flywheel between them. They are notable for their decorative Doric arches. The engines had a long working life: 50 years of primary service from 1851 providing the blast for the Priors Lee furnaces of the Lilleshall Company, then a further 50 years until the plant's closure as reserve engines, still being worked occasionally. Semi-rotative blowing engines The large vertical blowing engine illustrated at the top was built in the 1890s by E. P. Allis Co. of Milwaukee (later to form part of Allis-Chalmers). The steam cylinder (lower) is diameter, the air cylinder (upper) and both with a stroke of . The steam cylinder has Reynolds-Corliss valve gear, driven via a bevel-driven auxiliary shaft beneath, at right-angles to the crankshaft. This also means that the Corliss' wrist plate is at right-angles to the flywheel, rather than parallel as is usual. Edwin Reynolds was the designer of the Allis company and in 1876 had developed an improved version of the Corliss valvegear, with improved trip gear capable of working at higher speeds. The air valves are also driven by eccentrics from this same shaft. 
Like the beam engines, the main force of the piston is transmitted to the air cylinder by a purely reciprocating action and the flywheels exist to smooth the action of the engine. To permit adjustment, the steam piston rod only goes as far as the crosshead. Above this are twinned rods to the air piston. The flywheel shaft is mounted below the steam piston, the paired connecting rods driving downwards and backwards to make this a return connecting rod engine. Internal combustion blowing engines In the late 1800s, internal combustion gas engines were developed to burn gasses produced from blast furnaces, eliminating the need for fuel for steam boilers and increasing efficiency. Bethlehem Steel was one such company to employ this technology. Huge, usually single-cylinder horizontal engines burned blast furnace gas. SA John Cockerill of Belgium and Körting of Hannover were both noted makers of such engines. There are some efforts underway to restore a few of these engines. A few firms still manufacture and install multi cylinder internal combustion engines to burn waste gasses today. Replacement by rotary blowers As blast furnaces re-equipped after World War II, the favoured power source was either the diesel engine or the electric motor. These both had a rotary output, which worked well with contemporary developments in centrifugal fans capable of handling the huge volumes of air. Although the reciprocating steam blowing engine continued where it was already in use, they were rarely installed after the war. These older plants began to close in the 1950s and numbers were drastically reduced throughout the West during the 1970s. Blowing engines of this form are now rare. Surviving examples today Examples of both a beam blowing engine and a vertical engine may be seen at the Blists Hill open-air museum, Ironbridge Gorge. The beam engines "David & Sampson" are scheduled monuments. An 1817 beam blowing engine by Boulton & Watt, formerly used at the Netherton ironworks of M W Grazebrook, now decorates Dartmouth Circus, a traffic island at the start of the A38(M) motorway in Birmingham (see picture above, location: ). References Stationary steam engines Beam engines Steam engines Blast furnaces Steelmaking
Blowing engine
Chemistry
1,169
58,229,662
https://en.wikipedia.org/wiki/Paessler%20PRTG
PRTG (Paessler Router Traffic Grapher) is network monitoring software developed by Paessler GmbH. It monitors system conditions like bandwidth usage or uptime and collects statistics from miscellaneous hosts such as switches, routers, servers, and other devices and applications. It was initially released on May 29, 2003 by the German company Paessler GmbH, which was founded by Dirk Paessler in 2001. The software is available in three versions: a classic standalone solution (PRTG Network Monitor), one for large and distributed networks (PRTG Enterprise Monitor) and a SaaS version (PRTG Hosted Monitor). Specifications The software has an auto-discovery mode that scans predefined areas of an enterprise network and creates a device list from this data. Further information on the detected devices can be retrieved using communication protocols like ICMP, SNMP, WMI, NetFlow, jFlow, sFlow, DCOM or the RESTful API. Sensors The software is based on sensors, e.g. HTTP, SMTP/POP3 (e-mail) application sensors and hardware-specific sensors for switches, routers and servers. The software has over 200 different predefined sensors that retrieve statistics from the monitored instances, e.g. response times, processor, memory, database information, temperature or system status. Web interface and desktop client The software can be operated via an AJAX-based web interface which is suitable for both real-time troubleshooting and data exchange with non-technical staff via maps (dashboards) and user-defined reports. An additional administration interface in the form of a desktop application for Windows, Linux, and macOS is available. Notifications, reports and price model In addition to the usual communication channels such as email and SMS, notification is also provided via an app for iOS or Android. The software also provides customizable reports, and its price model is based on sensors. See also Comparison of network monitoring systems References Literature Andrés, Steven, Brian Kenyon, and Erik Pack Birkholz. Security Sage's guide to hardening the network infrastructure. Elsevier, 2004. Elsayed, Abdellatief, and Nashwa Abdelbaki. "Performance evaluation and comparison of the top market virtualization hypervisors." Computer Engineering & Systems (ICCES), 2013 8th International Conference on. IEEE, 2013. System administration Network management Port scanners Network analyzers Windows software
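The RESTful API mentioned above can be polled to pull monitoring data into scripts. The sketch below only illustrates the general pattern: the endpoint path and parameter names (table.json, content, columns, username, passhash), as well as the server URL and credentials, are assumptions made for this example and should be checked against the API documentation of the actual PRTG installation.

```python
import json
import urllib.parse
import urllib.request

# Sketch of pulling a sensor table from a PRTG server over its HTTP API.
# The endpoint path and parameter names below are assumptions for
# illustration; consult the API reference of your own installation.
BASE_URL = "https://prtg.example.com"                            # hypothetical server
CREDENTIALS = {"username": "monitor", "passhash": "0000000000"}  # placeholder values

def list_sensors():
    params = {"content": "sensors", "columns": "objid,sensor,status", "output": "json"}
    params.update(CREDENTIALS)
    url = f"{BASE_URL}/api/table.json?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    for row in data.get("sensors", []):
        print(row.get("objid"), row.get("sensor"), row.get("status"))

if __name__ == "__main__":
    list_sensors()
```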
Paessler PRTG
Technology,Engineering
502
529,817
https://en.wikipedia.org/wiki/Punnett%20square
The Punnett square is a square diagram that is used to predict the genotypes of a particular cross or breeding experiment. It is named after Reginald C. Punnett, who devised the approach in 1905. The diagram is used by biologists to determine the probability of an offspring having a particular genotype. The Punnett square is a tabular summary of possible combinations of maternal alleles with paternal alleles. These tables can be used to examine the genotypical outcome probabilities of the offspring of a single trait (allele), or when crossing multiple traits from the parents. The Punnett square is a visual representation of Mendelian inheritance, a fundamental concept in genetics discovered by Gregor Mendel. For multiple traits, using the "forked-line method" is typically much easier than the Punnett square. Phenotypes may be predicted with at least better-than-chance accuracy using a Punnett square, but the phenotype that may appear in the presence of a given genotype can in some instances be influenced by many other factors, as when polygenic inheritance and/or epigenetics are at work. Zygosity Zygosity refers to the grade of similarity between the alleles that determine one specific trait in an organism. In its simplest form, a pair of alleles can be either homozygous or heterozygous. Homozygosity, with homo relating to same while zygous pertains to a zygote, is seen when a combination of either two dominant or two recessive alleles code for the same trait. Recessive are always lowercase letters. For example, using 'A' as the representative character for each allele, a homozygous dominant pair's genotype would be depicted as 'AA', while homozygous recessive is shown as 'aa'. Heterozygosity, with hetero associated with different, can only be 'Aa' (the capital letter is always presented first by convention). The phenotype of a homozygous dominant pair is 'A', or dominant, while the opposite is true for homozygous recessive. Heterozygous pairs always have a dominant phenotype. To a lesser degree, hemizygosity and nullizygosity can also be seen in gene pairs. Monohybrid cross "Mono-" means "one"; this cross indicates that the examination of a single trait. This could mean (for example) eye color. Each genetic locus is always represented by two letters. So in the case of eye color, say "B = Brown eyes" and "b = green eyes". In this example, both parents have the genotype Bb. For the example of eye color, this would mean they both have brown eyes. They can produce gametes that contain either the B or the b allele. (It is conventional in genetics to use capital letters to indicate dominant alleles and lower-case letters to indicate recessive alleles.) The probability of an individual offspring's having the genotype BB is 25%, Bb is 50%, and bb is 25%. The ratio of the phenotypes is 3:1, typical for a monohybrid cross. When assessing phenotype from this, "3" of the offspring have "Brown" eyes and only one offspring has "green" eyes. (3 are "B?" and 1 is "bb") The way in which the B and b alleles interact with each other to affect the appearance of the offspring depends on how the gene products (proteins) interact (see Mendelian inheritance). This can include lethal effects and epistasis (where one allele masks another, regardless of dominant or recessive status). Dihybrid cross More complicated crosses can be made by looking at two or more genes. 
The Punnett square works, however, only if the genes are independent of each other, which means that having a particular allele of gene "A" does not alter the probability of possessing an allele of gene "B". This is equivalent to stating that the genes are not linked, so that the two genes do not tend to sort together during meiosis. The following example illustrates a dihybrid cross between two double-heterozygote pea plants. R represents the dominant allele for shape (round), while r represents the recessive allele (wrinkled). A represents the dominant allele for color (yellow), while a represents the recessive allele (green). If each plant has the genotype RrAa, and since the alleles for shape and color genes are independent, then they can produce four types of gametes with all possible combinations: RA, Ra, rA, and ra. Since dominant traits mask recessive traits (assuming no epistasis), there are nine combinations that have the phenotype round yellow, three that are round green, three that are wrinkled yellow, and one that is wrinkled green. The ratio 9:3:3:1 is the expected outcome when crossing two double-heterozygous parents with unlinked genes. Any other ratio indicates that something else has occurred (such as lethal alleles, epistasis, linked genes, etc.). Forked-line method The forked-line method (also known as the tree method and the branching system) can also solve dihybrid and multi-hybrid crosses. A problem is converted to a series of monohybrid crosses, and the results are combined in a tree. However, a tree produces the same result as a Punnett square in less time and with more clarity. The example below assesses another double-heterozygote cross using RrYy x RrYy. As stated above, the phenotypic ratio is expected to be 9:3:3:1 if crossing unlinked genes from two double-heterozygotes. The genotypic ratio was obtained in the diagram below, this diagram will have more branches than if only analyzing for phenotypic ratio. See also Mendelian inheritance Karnaugh map, a similar diagram used for Boolean algebra simplification References Further reading External links Online Punnett Square Calculator Online Punnett Square Calculator, monohybrid and dihybrid, autosomal and sex-linked Classical genetics Genetics education Genetics techniques History of genetics
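The monohybrid and dihybrid ratios discussed above can be verified by enumerating the gametes and filling the square programmatically. The sketch below is a minimal illustration under the usual assumptions (unlinked genes, complete dominance, upper-case letters for dominant alleles); the function names are arbitrary.

```python
from collections import Counter
from itertools import product

def gametes(genotype):
    """All gametes for a genotype written as one allele pair per gene, e.g. ['Rr', 'Aa']."""
    return [''.join(combo) for combo in product(*genotype)]

def punnett(parent1, parent2):
    """Counter of offspring genotypes, one entry per cell of the Punnett square."""
    square = Counter()
    for g1, g2 in product(gametes(parent1), gametes(parent2)):
        # Sort each allele pair so 'rR' and 'Rr' count as the same genotype;
        # plain sorting works because dominant (upper-case) letters sort first.
        genotype = ' '.join(''.join(sorted(a + b)) for a, b in zip(g1, g2))
        square[genotype] += 1
    return square

def phenotype(genotype):
    """A gene shows the dominant phenotype if at least one of its alleles is upper case."""
    return ' '.join('dominant' if any(c.isupper() for c in pair) else 'recessive'
                    for pair in genotype.split())

def phenotype_ratios(square):
    ratios = Counter()
    for genotype, count in square.items():
        ratios[phenotype(genotype)] += count
    return ratios

if __name__ == "__main__":
    print(phenotype_ratios(punnett(['Bb'], ['Bb'])))          # 3 dominant : 1 recessive
    print(phenotype_ratios(punnett(['Rr', 'Aa'], ['Rr', 'Aa'])))  # 9 : 3 : 3 : 1 pattern
```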
Punnett square
Engineering,Biology
1,338
12,765,380
https://en.wikipedia.org/wiki/Satellite%20formation%20flying
Satellite formation flying is the coordination of multiple satellites to accomplish the objective of one larger, usually more expensive, satellite. Coordinating smaller satellites has many benefits over single satellites, including simpler designs, faster build times, cheaper replacement (creating higher redundancy), unprecedented resolution, and the ability to view research targets from multiple angles or at multiple times. These qualities make them ideal for astronomy, communications, meteorology, and environmental uses. Types of formations Depending on the application, there are three formations possible: trailing, cluster, and constellation. Trailing formations are formed by multiple satellites orbiting on the same path. Each one follows the previous one separated by a specific time interval to either view a target at different times, or obtain varied viewing angles of the target. Trailing satellites are especially suited for meteorological and environmental applications such as viewing the progress of a fire, cloud formations, and making 3D views of hurricanes. Notable pairs are the GRACE and GRACE Follow-On satellites, Landsat 7 with EO-1, the "A-train" consisting of CALIPSO and CloudSat (among others), and Terra with Aqua. Cluster formations are formed by satellites in a dense (relatively tightly spaced) arrangement, such as twin satellites. These arrangements are best for high-resolution interferometry and making maps of Earth. TechSat-21 was a suggested satellite model capable of operating in clusters. The proposed Laser Interferometer Space Antenna would require high-precision cluster flying. Constellation formations (satellite constellations) can be considered to be a number of satellites with coordinated ground coverage, operating together under shared control, synchronised so that they overlap well in coverage and complement rather than interfere with other satellites' coverage. Usually, these formations are made up of numerous small satellites. A micro satellite weighs under 100 kg and a nano satellite weighs under 10 kg. Magnetospheric Constellation, for instance, would be composed of 100 micro satellites. This technology has become more viable thanks to the development of autonomous flying. With an onboard computer and such an algorithm, satellites may autonomously position themselves into a formation. Previously, ground control would have to adjust each satellite to maintain formations. Now, satellites may arrive at and maintain formations with faster response times and have the ability to change the formation for varied resolution of observations. Also, satellites may be launched from different spacecraft and rendezvous on a particular path. This advance was made possible by Dave Folta, John Bristow, and Dave Quinn at NASA’s Goddard Space Flight Center (GSFC). See also Satellite constellation Fractionated spacecraft Magnetospheric Multiscale Mission References External links BNSC "Formation Flying" Cranfield School of Engineering – Space Research Center Asia Times Sep 2, 2009 Satellites flying in formation over Asia Satellites Earth observation satellites Wireless communication systems Satellites by type
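As a worked example of the trailing formation described above, the along-track separation of two satellites on the same circular orbit is simply the orbital speed multiplied by the time offset between them. The sketch below assumes a circular low Earth orbit; the altitude and delay used are illustrative, not values from any particular mission.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def along_track_separation(altitude_m, delay_s):
    """Along-track distance between two trailing satellites on the same circular
    orbit, separated by a fixed time delay. The altitude and delay passed in the
    example below are illustrative."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)          # circular orbital speed
    return v * delay_s

if __name__ == "__main__":
    # e.g. a 700 km orbit with a 60 s delay gives roughly 450 km of separation
    print(f"{along_track_separation(700e3, 60) / 1000:.0f} km")
```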
Satellite formation flying
Astronomy,Technology
563
11,512,166
https://en.wikipedia.org/wiki/Schiffnerula%20cannabis
Schiffnerula cannabis is a plant pathogen infecting hemp. It is one of some 88 species of fungi known to attack Cannabis. The black mildew of Cannabis usually causes problems that are not extreme, in contrast to gray mold, caused by Botrytis cinerea, which has the potential to wipe out a crop in merely a week. Host and symptoms A host species for Schiffnerula cannabis has yet to be named. However, a variety of signs are present with Schiffnerula cannabis. As a black mildew, it produces flat, gray-black, film-like areas throughout the leaf and colonies of conidia on the surface of leaves, and ascomata and asci are present on hosts as Schiffnerula cannabis runs its course. Disease cycle Infection typically takes place at leaves, soft stems, and petioles via airborne inoculum from conidia. Closely related to sooty moulds, black mildews produce hyphopodia which function similarly to haustoria. There are two types of hyphopodia, capitate and mucronate; capitate hyphopodia are lobed appressoria where haustoria are formed, while mucronate hyphopodia are single-celled structures that point away from the leaf. These do not form haustoria, nor is their function yet known. Black mildews and sooty moulds also develop on insect secretions or glands of plants which produce nectar. Over time, colonies develop ascomata as their sexual form of reproduction and conidiophores as their asexual form of reproduction. Within the asci are eight ascospores, and the ascomata are circular and contain 1 to 3 asci. Dispersal can also take place through water-transported ascospores. Environment Schiffnerula cannabis is found in Nepal, though other Schiffnerula are abundant in tropical and subtropical environments. These environments help facilitate fungal growth. With plenty of free water present, in addition to a food source and oxygen, the black fungus can easily thrive in the aforementioned environment. Moisture aside, the temperature of these environments also contributes to the ease of fungal growth. Evidently, the higher temperatures provided in a tropical setting clearly favor Schiffnerula’s development, as its place of origin resides within Nepal. Thorough examination of colonies in the field shows an abundance of ant, thrip, and other small insect activity. Thus, insects are likely a vector of the black mildew. References Enigmatic Dothideomycetes taxa Fungi described in 1995 Fungal plant pathogens and diseases Hemp diseases Taxa named by Stanley Hughes Fungus species
Schiffnerula cannabis
Biology
599
15,417,513
https://en.wikipedia.org/wiki/KIAA1109
Uncharacterized protein KIAA1109 is a protein that in humans is encoded by the KIAA1109 gene. This protein has a function that is not yet understood. KIAA1109 has 3 aliases: FSA (fragile site-associated) protein, MGC110967 and DKFZp781P0474. An ortholog of KIAA1109 has been found in Drosophila, where it is responsible for regulating synaptic vesicle recycling and certain brain-related functions. Drosophila exhibiting mutated KIAA1109 were unable to walk or stand upright for long periods and exhibited seizures, reminiscent of severe neurological defects. This led to it being given the name "Tweek", after a character in South Park. Gene Location KIAA1109 is found on the long arm of chromosome 4 (4q27), with the genomic sequence starting at 118,818,167 bp and ending at 119,010,362 bp. Gene Neighborhood The gene neighborhood of KIAA1109 involves 4 other genes. KIAA1109 is a part of the KIAA1109/Tenr/IL2/IL21 gene region. This region consists of the three genes to the right of KIAA1109: ADAD1, IL2 and IL21. Another gene located in the neighborhood of KIAA1109 is TRPC3. This gene is to the left of KIAA1109, on the opposite side of the genes described above. Expression According to data on NCBI's EST Abundance Profile page for KIAA1109, the gene is expressed in many different tissues in humans. Human expression is seen most predominantly in parathyroid, muscle, ear, eye, mammary gland, lymph node and thymus, in addition to 27 other tissues. KIAA1109 is also expressed in various disease states, including 12 different tumors as well as bladder carcinoma, chondrosarcoma, glioma, leukemia, lymphoma, non-neoplasia and retinoblastoma tissues. KIAA1109 is expressed in all stages of development from embryoid body to adult, except in infants. No expression of the gene is seen during the infant stage of development. Promoter According to Genomatix's ElDorado program, the promoter region of KIAA1109 is predicted to be 601 base pairs in length. The promoter region starts 500 base pairs upstream of the 5' UTR of the KIAA1109 mRNA transcript and contains part of this 5' UTR. Homology KIAA1109 is conserved throughout many species. Orthologs have been found in many mammals and other vertebrates. More distant homologs have been identified in animals such as insects. See the mRNA and protein conservation sections below for more details. No human paralogs for KIAA1109 have been identified. mRNA Splice Variants KIAA1109 has 13 mRNA splice variants and 6 unspliced variants. Variant A is the longest and most commonly occurring variant of the gene and is the subject of this article. KIAA1109 variant A is made up of 84 exons and is 15,592 base pairs in length. The accession number for this nucleotide is NM_015312.3. Conservation The mRNA sequence of KIAA1109 is highly conserved throughout mammals. The mRNA sequence identity to mammals is no less than 81.9% (in platypus) and ranges up to 99.5% (in chimpanzees). Birds also show fairly high conservation, with mRNA sequence identities around 78% in zebra finches. The table below shows information on the mRNA orthologs. Protein General Properties KIAA1109 protein is 5005 amino acids in length and has a predicted molecular weight of 555519.38 daltons. The isoelectric point of KIAA1109 protein is predicted to be 6.12. Composition The amino acid composition of KIAA1109 protein showed amino acid frequencies within 1.5% of that of normal human proteins for all but Alanine, Serine and Threonine. 
Alanine has a lower frequency in KIAA1109 than in a normal human protein, while Serine and Threonine both have a higher frequency in KIAA1109 than in the average human protein. Conservation The amino acid sequence of KIAA1109 is highly conserved throughout mammals. The protein identity ranges from 93.2% in opossum to 99.8% in chimpanzees, and protein similarity is no less than 97% in all mammals included. Birds continue to show fairly high conservation, with protein identities around 90% and protein similarities around 96%. While conservation is still high, the lower numbers may be due to small truncations on either the 5' or 3' ends of these sequences. In more distant species such as the zebrafish, and then the red flour beetle and carpenter ant, the conservation drops. In the insects, the protein identities are down to around 34%. Conserved Domains An NCBI conserved domains search identified two domains in KIAA1109. The first is the fragile site-associated C-terminus, which is said to be linked to celiac disease susceptibility according to genome-wide association studies and may also be associated with polycystic kidney disease. The second conserved region identified by NCBI in KIAA1109 is an uncharacterized conserved protein (DUF2246), whose function is unknown and which is conserved in various species from humans to worms. Post-Translational Modifications KIAA1109 is predicted to undergo various types of post-translational modifications, including glycation, N-glycosylation, O-GlcNAc, O-glycosylation, sulfonation and phosphorylation. Subcellular Localization KIAA1109 contains one transmembrane domain from amino acids 26–46. No signal peptides, mitochondrial targeting sequences or chloroplast peptides were predicted for the protein, which is therefore not predicted to localize to the secretory pathway, mitochondria or chloroplast. Interacting Proteins MADH2 and Beta-catenin were both found to have a physical interaction with the protein, as detected by display technology by Miyamoto-Sato et al. 2010. References Further reading Uncharacterized proteins
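Properties such as the molecular weight, isoelectric point and amino acid composition quoted above are typically computed directly from the amino acid sequence. A minimal Python sketch using Biopython's ProtParam module is shown below; the short sequence is a made-up placeholder rather than the actual 5005-residue KIAA1109 sequence, so the printed numbers are illustrative only:

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence (hypothetical); substitute the full KIAA1109 protein sequence.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
analysis = ProteinAnalysis(seq)

print("length:", len(seq), "residues")
print("molecular weight (Da):", round(analysis.molecular_weight(), 2))
print("isoelectric point:", round(analysis.isoelectric_point(), 2))

# Fractions of alanine, serine and threonine, the residues noted above as atypical.
percents = analysis.get_amino_acids_percent()
for aa in ("A", "S", "T"):
    print(aa, "fraction:", round(percents[aa], 3))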
KIAA1109
Biology
1,327
16,003,024
https://en.wikipedia.org/wiki/Maxus%20%28rocket%29
Maxus is a sounding rocket that is used in the MAXUS microgravity rocket programme, a joint venture between Swedish Space Corporation and EADS Astrium Space Transportation, used by ESA. It is launched from Esrange Space Center in Sweden and provides access to microgravity for up to 14 minutes. Technical characteristics Overall length: 15.5 m Overall mass: 12 400 kg Payload mass: approx. 800 kg Max. velocity: 3500 m/s Max. acceleration: 15 g Propellant mass: 10 042 kg Motor burn time: 63 s Microgravity: up to 14 minutes Apogee: > 700 km Thrust (max. in vacuum): 500 kN Missions See also Texus Maser Rexus/Bexus Esrange References Sounding rockets of Sweden
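The quoted figures are mutually consistent: a vehicle coasting to an apogee above 700 km spends roughly 12 to 14 minutes above the altitudes where aerodynamic drag is negligible. A rough Python check under simplifying assumptions (non-rotating spherical Earth, no drag, a purely vertical coast, and a 100 km threshold for the microgravity phase, all assumed here rather than taken from the programme documentation):

import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m

def coast_time_above(apogee_km, threshold_km, dt=0.5):
    # Integrate a vertical ballistic coast from the threshold altitude up to
    # apogee and back, with the starting speed fixed by energy conservation.
    r0 = R_EARTH + threshold_km * 1e3
    ra = R_EARTH + apogee_km * 1e3
    v = math.sqrt(2.0 * MU * (1.0 / r0 - 1.0 / ra))   # upward speed at the threshold
    r, t = r0, 0.0
    while r >= r0:
        v += -MU / (r * r) * dt   # gravitational deceleration
        r += v * dt
        t += dt
    return t

t = coast_time_above(700.0, 100.0)
print("time above 100 km: %.1f minutes" % (t / 60.0))   # prints on the order of 12-13 minutes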
Maxus (rocket)
Astronomy
161
5,257,635
https://en.wikipedia.org/wiki/NICMOSlook
NICMOSlook is a computer program to analyze spectral data obtained with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) on board the Hubble Space Telescope (HST). The program originated as Calnic C, which was designed at the Space Telescope European Coordinating Facility (ST-ECF) and programmed in IDL. NICMOSlook is available from their website. One of the capabilities of NICMOS is its grism mode for slitless spectrometry at low resolution. Typically, a direct image is taken in conjunction with grism images for wavelength calibration. NICMOSlook is a highly specialized interactive tool to extract one-dimensional spectra (flux versus wavelength) from such data. NICMOSlook is most commonly used for small amounts of data when users prefer to have full control of all the parameters for individual spectrum extraction, or for cases where Calnic C did not extract the spectra satisfactorily. Unlike Calnic C, NICMOSlook requires the user to determine the best way to find an object and provides a number of different ways to accomplish this. Similarly, the user decides whether to use a weighting appropriate for point sources or weighting by the size of the object for the extraction of the spectra. Calnic C Calnic C is a non-interactive counterpart, a program that performs a subset of NICMOSlook's functions with a "pipelined" approach. Maintenance of Calnic C stopped in 2000 and the program is now considered to be obsolete. External links ST-ECF page for NICMOSlook Hubble Space Telescope Astronomy software
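Conceptually, extracting a one-dimensional spectrum from a slitless grism image amounts to summing the flux across the dispersion trace and attaching a wavelength to each column using the object's position in the direct image. The Python sketch below is a deliberately simplified illustration of that idea, not NICMOSlook's actual algorithm; the array shape, trace position and linear dispersion coefficients are all assumed values:

import numpy as np

def extract_spectrum(grism_image, y_trace, half_width, x_ref, wave0, dispersion):
    # Boxcar extraction: sum flux in a window of rows around the spectral trace,
    # then assign wavelengths from a linear dispersion solution anchored at the
    # object's x position in the direct image.
    window = grism_image[y_trace - half_width : y_trace + half_width + 1, :]
    flux = window.sum(axis=0)
    columns = np.arange(grism_image.shape[1])
    wavelengths = wave0 + dispersion * (columns - x_ref)
    return wavelengths, flux

# Toy data: noisy background plus a faint horizontal trace (assumed values).
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(128, 256))
image[64, :] += 50.0

waves, flux = extract_spectrum(image, y_trace=64, half_width=3, x_ref=100,
                               wave0=1.1, dispersion=0.008)   # microns, microns/pixel
print(waves[:3], flux[:3])

A real extraction would also apply background subtraction, flat-fielding and the point-source or object-size weighting that the article mentions.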
NICMOSlook
Astronomy
331
2,773,183
https://en.wikipedia.org/wiki/Overseas%20experience
Overseas Experience (OE) is a New Zealand term for an extended overseas working period or holiday. It is sometimes referred to as "The big OE", in reference to the extended duration of the travel: typically at least one year, and often extended far longer. It is however generally expected that the person returns after a few years with work and life experience and a wider outlook. This is considered important to career development, especially among professionals. From the 1950s, OEs were often centred on London, and were described as "going home", a "working holiday", or an "overseas trip", until the term OE was popularised by New Zealand cartoonist and columnist Tom Scott in the mid 1970s. Description The term OE is part of the New Zealand vernacular, to the extent that official government literature, including the Inland Revenue Department website, name it as a sub-category. The abbreviation OE is sometimes spoken as if it is a formal qualification — as in "do you have your OE?" — because before 2004 the top secondary school qualification was UE ("University Entrance"). The phrase also indicates that the trip is considered to be an important milestone in one's career development, especially for professional and employees of large multinationals, and is often asked for during job interviews. A typical OE is mostly or entirely self-funded and involves working overseas. The typical OE traveller is in their 20s. Variations are common. Many people spend a year or so teaching English overseas, especially in Japan or South Korea. Enough Māori take OEs for there to be a permanent Māori culture group (Ngāti Rānana) in London. The European OE usually includes travel within Europe and, often, a pilgrimage to the Gallipoli battle site. In recent years, Asian destinations including South Korea, Singapore, Japan and China have become increasingly popular. London's historical popularity as an OE destination was described by historian James Belich as "recolonisation". He said New Zealand developed very strong cultural ties to the United Kingdom, and tended to see London as the centre of the universe. During the 19th and early 20th centuries many white colonials in the British Empire viewed Britain as "Home" even if they had never been there. London, as the capital of Britain and the Empire, was especially attractive. From the late 1880s to the early 1900s 10,000 Australians and New Zealanders traveled to Britain each year, with the number doubling between the World Wars. British immigration law until the 1970s allowed Australians and New Zealanders to live and work in Britain as British citizens. The continuation of the trend may be residual recolonialism, but in addition most New Zealanders have friends and often relatives in London, and its favourable working holiday scheme, proximity to the rest of Europe, and the fact that it is English-speaking also make Britain a desirable destination. In 2003 then Prime Minister Helen Clark described the OE to Britain as "an important tradition for many New Zealanders". Working holiday schemes New Zealand has reciprocal working holiday schemes with a large number of countries. These allow young people from those countries to apply for a working holiday visa for New Zealand, but also for New Zealand citizens to apply for a working holiday visa to be able to work and live in those countries for (usually) up to a year. In order to be eligible, travellers need to be between 18 and 30 (sometimes 35) years of age. 
Some countries (like the USA) require applications to be made through organisations like IEP, CCUSA, Camp Leaders, or Camp Canada. The full current list of countries covered is: Argentina Austria Belgium Brazil Canada Chile China Croatia Czech Republic Denmark Estonia Finland France Germany Hong Kong Hungary Ireland Israel Italy Japan Korea Latvia Lithuania Luxembourg Malaysia Malta Mexico Netherlands Norway Peru Philippines Poland Portugal Singapore Slovakia Slovenia Spain Sweden Taiwan Thailand Turkey United Kingdom United States Uruguay Vietnam See also TNT Magazine Gap year References External links OE Travel Blogs 1970s neologisms Culture of New Zealand Types of travel Transport culture New Zealand diaspora
Overseas experience
Physics
817
12,074,089
https://en.wikipedia.org/wiki/Komatsu%20930E
The Komatsu 930E is an off-highway, ultra class, rigid frame, two-axle, diesel/AC electric powertrain haul truck designed and manufactured by Komatsu in Peoria, Illinois, United States. Although the 930E is neither Komatsu's largest nor highest payload capacity haul truck, Komatsu considers the 930E to be the flagship of their haul truck product line. The 930E is the best-selling ultra class haul truck in the world. As of September 2016, Komatsu has sold 1,900 units of the 930E. The current model, the 930E-5, offers a payload capacity of up to . Public Debut The 930E was introduced in Morenci, Arizona in May 1995 with a payload capacity of up to . Innovations The 930E was the first two-axle, six-tire haul truck to be offered with a payload capacity in excess of , making it the world's first regular production "ultra class" haul truck. The 930E remained the world's largest, highest payload capacity haul truck until the September 1998 debut of the payload capacity Caterpillar 797. Prior to the introduction of the 930E, diesel/electric haul trucks employed AC from an electric alternator, which was rectified to power the DC traction motors at the rear wheels. The 930E was the first haul truck to employ two AC electric traction motors. The diesel/electric AC powertrain is more efficient, offers better operating characteristics and is more cost-effective than a comparable DC powertrain. Product Improvements In 1996, the 930E-2 debuted, offering an increased payload capacity by adding larger Bridgestone 50-80R57 radial tires. In 2000, at MINExpo International, Komatsu debuted the 930E-2SE featuring a Komatsu SSDA18V170 V-18, twin-turbocharged, diesel engine developed by Industrial Power Alliance, a joint venture between Komatsu and Cummins. This is the same engine that powers the larger Komatsu 960E-1 and allows the 930E-2SE to operate at elevations up to without deration. On December 15, 2003, Komatsu introduced the 930E-3, powered by a Komatsu SSDA16V160 V-16 engine and a GDY106 traction motor on each side of the rear axle. The current models are the 930E-4 with a Komatsu SSDA16V160 V-16 diesel engine and the 930E-4SE with a Komatsu SSDA18V170 V-18 diesel engine. Assembly All Komatsu electric drive haul trucks, including the 930E, are manufactured at Komatsu America Corp's Peoria Manufacturing Operation located at 2300 NE Adams Street in Peoria, Illinois, USA. Standing in the Komatsu model lineup The 930E was the largest, highest capacity haul truck in Komatsu's model lineup prior to the May 27, 2008 introduction of the , payload capacity 960E-1. The payload capacity 930E-4 and 930E-4SE are now the second highest payload capacity haul trucks in Komatsu's lineup, although the 930E-4SE uses the same Komatsu SSDA18V170 V-18 engine as the 960E-1. Specifications See also Haul truck Komatsu Limited References External links Komatsu 930E-4 Product Brochure (PDF) Komatsu 930E-4 Website - Komatsu America Corp. Komatsu 930E-4SE Product Brochure (PDF) Komatsu 930E-4SE Website - Komatsu America Corp. Komatsu 930E-3 Product Brochure (PDF) Komatsu 930E-2 Product Brochure (PDF) Haul trucks Hybrid trucks Komatsu vehicles Vehicles introduced in 1995
Komatsu 930E
Engineering
810
3,689,569
https://en.wikipedia.org/wiki/Ives%E2%80%93Stilwell%20experiment
In physics, the Ives–Stilwell experiment tested the contribution of relativistic time dilation to the Doppler shift of light. The result was in agreement with the formula for the transverse Doppler effect and was the first direct, quantitative confirmation of the time dilation factor. Since then many Ives–Stilwell type experiments have been performed with increased precision. Together with the Michelson–Morley and Kennedy–Thorndike experiments it forms one of the fundamental tests of special relativity theory. Other tests confirming the relativistic Doppler effect are the Mössbauer rotor experiment and modern Ives–Stilwell experiments. Both time dilation and the relativistic Doppler effect were predicted by Albert Einstein in his seminal 1905 paper. Einstein subsequently (1907) suggested an experiment based on the measurement of the relative frequencies of light perceived as arriving from "canal rays" (positive ion beams created by certain types of gas-discharge tubes) in motion with respect to the observer, and he calculated the additional Doppler shift due to time dilation. This effect was later called "transverse Doppler effect" (TDE), since such experiments were initially imagined to be conducted at right angles with respect to the moving source, in order to avoid the influence of the longitudinal Doppler shift. Eventually, Herbert E. Ives and G. R. Stilwell (referring to time dilation as following from the theory of Lorentz and Larmor) gave up the idea of measuring this effect at right angles. They used rays in longitudinal direction and found a way to separate the much smaller TDE from the much bigger longitudinal Doppler effect. The experiment was performed in 1938 and was reprised several times. Similar experiments were conducted several times with increased precision, for example, by Otting (1939), Mandelberg et al. (1962), Hasselkamp et al. (1979), and Botermann et al. Experiments with "canal rays" Experimental challenges Initial attempts to measure the second order transverse Doppler effect in canal rays completely failed. For example, Stark's 1906 measurements showed systematic errors ten times the predicted effect. The maximum speed achievable in early gas-discharge tubes was about , which implied a transverse Doppler shift of only about 1.25×10−5. The small TDE achievable was considerably less than the width of the emission lines, which were relatively diffuse due to the Doppler line-broadening resulting from non-uniformity of ion speeds. By the 1930s, improvements in canal-ray tubes allowed for considerable sharpening of the emission lines. Even with these improvements, however, performing the experiment as usually imagined (with the observation being made at right angles to the beam) would be extremely difficult since small errors in the angle of observation would result in line-shifts of magnitude comparable to the magnitude of the anticipated effect. To avoid the issues associated with observing the beam at right angles, Ives and Stilwell used a small mirror within the canal ray tube (See and ) to observe the beam simultaneously in two directions both with and against the motions of the particles. The TDE would manifest itself as a shift of the center of gravity of the simultaneously red- and blue-shifted spectral lines. 
Theory In 1937, Ives performed a detailed analysis of the spectral shifts to be expected of particle beams observed at different angles following a "test theory" which was consistent with the Michelson-Morley experiment (MMX) and the Kennedy-Thorndike experiment (KTX), but which differed from special relativity (and the mathematically equivalent theory of Lorentz and Lamor) in including a parameter whose value can not be determined by MMX and KTX alone. Various values of would correspond to various combinations of length contraction, width expansion, and time dilation, where would be the value predicted by special relativity. Ives proposed the optical experiment described in this article to determine the precise value of We will not present Ives' 1937 analysis, but instead will compare the predictions of special relativity against the predictions of "classical" aether theory with the apparatus stationary in the hypothetical aether, even though the classical aether had already long been ruled out by MMX and KTX. Classical analysis In the classical Doppler effect, the wavelength of light observed by a stationary observer of light emitted by a source moving at speed away from or towards the observer is given by where The top sign is used if the source is receding, and the bottom sign if it is approaching the observer. We note that the magnitude of the wavelength shift for the source moving away from the observer exactly equals the magnitude of the wavelength shift for the source moving towards the observer The average of the observed wavelengths for a source moving away from the observer and the source moving towards the observer at the same speed exactly equals the wavelength of the source. Relativistic analysis In the relativistic longitudinal Doppler effect, the observed wavelength with source and observer moving away from each other at speed is given by where The signs will be reversed with the source and observer moving towards each other. In the Ives and Stilwell experiment, the direct view of the particle beam will be blueshifted, while the reflected view of the particle beam will be redshifted. The first few terms of the Taylor series expansion for the direct view of the particle beam is given by while the first few terms of the Taylor series expansion for the reflected view of the particle beam is given by The even power terms have the same sign for both views, meaning that both the direct and reflected rays will show an increase in wavelength over that predicted by the classical Doppler analysis. The average of the direct and reflected wavelengths is given by where is the Lorentz factor. Special relativity therefore predicts that the center of gravity of Doppler-shifted emission lines emitted by a source moving towards an observer and its reflected image moving away from the observer will be offset from unshifted emission lines by an amount equal to the transverse Doppler effect. The experiment of 1938 In the experiment, Ives and Stilwell used hydrogen discharge tubes as the source of canal rays which consisted primarily of positive H2+ and H3+ ions. (Free H+ ions were present in too small an amount to be usable, since they quickly combined with H2 molecules to form H3+ ions.) These ions, after being accelerated to high speed in the canal ray tube, would interact with molecules of the fill gas (which sometimes included other gases than H2) to release excited atomic hydrogen atoms whose velocities were determined by the charge-to-mass ratios of the parent H2+ and H3+ ions. 
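For reference, the classical and relativistic Doppler relations underlying the analyses above can be restated compactly (these are textbook results rather than expressions taken verbatim from Ives' paper, with \beta = v/c, \lambda_0 the emitted wavelength, and \gamma the Lorentz factor; the upper signs apply to a receding source):

\lambda_{\text{classical}} = \lambda_0 \left(1 \pm \beta\right)

\lambda_{\text{relativistic}} = \lambda_0 \sqrt{\frac{1 \pm \beta}{1 \mp \beta}} \approx \lambda_0 \left(1 \pm \beta + \tfrac{1}{2}\beta^2 \pm \dots\right)

\frac{\lambda_{\text{receding}} + \lambda_{\text{approaching}}}{2} = \frac{\lambda_0}{\sqrt{1-\beta^2}} = \gamma \lambda_0 \approx \lambda_0 \left(1 + \tfrac{1}{2}\beta^2\right)

In the classical case the average of the red- and blue-shifted lines equals \lambda_0, so any shift of the centre of gravity away from the unshifted line, of order \tfrac{1}{2}\beta^2 \lambda_0, is the transverse Doppler (time-dilation) signature that Ives and Stilwell set out to measure.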
The excited atomic hydrogen atoms emitted bright emission lines. For their paper, Ives and Stilwell focused on the blue-green line of the Balmer series. shows an example of the results that they obtained, with an undisplaced emission line in the center, and lines from Doppler-shifted atomic hydrogen released from H2+ and H3+ ions at three different voltages on either side of the center line. The particle velocities, as measured by the first-order Doppler displacements, were consistently within 1% of the values computed by the theoretical relationship where e is the charge on the hydrogen atom, E is the voltage between the electrode plates, and M is the mass of the observed particle. The asymmetry of the Doppler-shifted lines with respect to the undisplaced central emission line is not evident to casual inspection, but requires extreme precision of measurement with careful attention to sources of systematic error. In their optical arrangement, illustrated in , the first order (classical Doppler) displacement of emissions from H2+ ions at 20,000 volts was about . The expected second order shift of the center of gravity of direct and reflected views of the emissions was only about which corresponded to , requiring measurement accuracies of several tenths of a micron. Initial measurements of the displacements were very erratic. The source of the unsystematic errors in measurement of the center of gravity of the displaced lines was found to be due to the complex molecular absorption spectrum of the fill gas. An emission line, passing adjacent to a molecular absorption line of the fill gas, would be differentially absorbed on one side or the other of its nominal center, and the measurement of its wavelength would thus be disturbed. illustrates the issue. illustrates an undisplaced emission line. illustrates the molecular absorption spectrum of the fill gas, obtained by photographing the spectrum of the arc behind the electrode of the canal ray tube (see ). illustrates an undisplaced emission line surrounded by displaced emission lines from H2+ and H3+. At the particular voltage chosen, the lines from H2+ are clear of the molecular absorption lines (see arrows), but the lines from H3+ are not. As a result of this issue, the number of voltages available yielding direct and reflected lines in clear spaces was relatively limited. Ives and Stilwell compared their results against theoretical expectation using several approaches. compares theoretical versus measured center-of-gravity shifts plotted against the emission lines' first-order Doppler shifts The advantage of this method over the other method presented in their paper (plotting center-of-gravity shifts against the computed velocity, based on voltage) is that it was independent of any errors of voltage measurement and did not require any assumptions of the voltage-velocity relationship. In terms of Ives' 1937 test theory, the close agreement between the observed center-of-gravity displacements versus theoretical expectation support which corresponds to length contraction by the Lorentz factor in the direction of motion, no length changes at right angles to the motion, and time dilation by the Lorentz factor. The results therefore validated a key prediction of the theory of relativity, although it might be noted that Ives himself preferred to interpret the results in terms of the obsolescent theory of Lorentz and Lamor. The experiment of 1941 In the 1938 experiment, the maximum TDE was limited to 0.047 Å. 
The chief difficulty that Ives and Stilwell encountered in attempts to achieve larger shifts was that when they raised the electric potential between the accelerating electrodes to above 20,000 volts, breakdown and sparking would occur that could lead to destruction of the tube. This difficulty was overcome by using multiple electrodes. Using a four-electrode version of the canal ray tube with three gaps, a total potential difference of 43,000 volts could be achieved. A voltage drop of 5,000 volts was used across the first gap, while the remaining voltage drop was distributed between the second and third gaps. With this tube, a highest shift of 0.11 Å was achieved for H2+ ions. Other aspects of the experiment were also improved. Careful tests showed that the "undisplaced" particles yielding the central line actually acquired a small velocity imparted to them in the same direction of motion as the moving particles (no more than about 750 meters per second). Under normal circumstances, this would be of no consequence, since this effect would only result in a slight apparent broadening of the direct and reflected images of the central line. But if the mirror were tarnished, the central line might be expected to shift slightly, since the redshifted reflected view of the emission line would contribute less to the measured wavelength than the blueshifted direct view. Other controls were performed to address various objections of critics of the original experiment. The net result of all of this attention to detail was the complete verification of Ives and Stilwell's 1938 results and the extension of these results to higher speeds. Mössbauer rotor experiments Relativistic Doppler effect A more precise confirmation of the relativistic Doppler effect was achieved by the Mössbauer rotor experiments. From a source in the middle of a rotating disk, gamma rays are sent to an absorber at the rim (in some variations this scheme was reversed), and a stationary counter was placed beyond the absorber. According to relativity, the characteristic resonance absorption frequency of the moving absorber at the rim should decrease due to time dilation, so the transmission of gamma rays through the absorber increases, which is subsequently measured by the stationary counter beyond the absorber. This effect was actually observed using the Mössbauer effect. The maximal deviation from time dilation was 10−5, thus the precision was much higher than that (10−2) of the Ives–Stilwell experiments. Such experiments were performed by Hay et al. (1960), Champeney et al. (1963, 1965), and Kündig (1963). Isotropy of the speed of light Mössbauer rotor experiments were also used to measure a possible anisotropy of the speed of light. That is, a possible aether wind should exert a disturbing influence on the absorption frequency. However, as in all other aether drift experiments (Michelson–Morley experiment), the result was negative, putting an upper limit to aether drift of 2.0 cm/s. Experiments of that kind were performed by Champeney and Moon (1961), Champeney et al. (1963), Turner and Hill (1964), and Preikschat supervised by Isaak (1968). Modern experiments Fast moving clocks A considerably higher precision has been achieved in modern variations of Ives–Stilwell experiments. In heavy-ion storage rings, as the TSR at the MPIK or ESR at the GSI Helmholtz Centre for Heavy Ion Research, the Doppler shift of lithium ions traveling at high speed is evaluated by using saturated spectroscopy or optical–optical double resonance. 
Due to their frequencies emitted, these ions can be considered as optical atomic clocks of high precision. Using the framework of Mansouri–Sexl a possible deviation from special relativity can be quantified by with as frequency of the laser beam propagating anti-parallel to the ion beam and as frequency of the laser beam propagating parallel to the ion beam. and are the transition frequencies of the transitions in rest. with as ion velocity and as speed of light. In the case of saturation spectroscopy the formula changes to with as the transition frequency in rest. In the case that special relativity is valid is equal to zero. Slow moving clocks Meanwhile, the measurement of time dilation at everyday speeds has been accomplished as well. Chou et al. (2010) created two clocks each holding a single 27Al+ ion in a Paul trap. In one clock, the Al+ ion was accompanied by a 9Be+ ion as a "logic" ion, while in the other, it was accompanied by a 25Mg+ ion. The two clocks were situated in separate laboratories and connected with a 75 m long, phase-stabilized optical fiber for exchange of clock signals. These optical atomic clocks emitted frequencies in the petahertz (1 PHz = 1015 Hz) range and had frequency uncertainties in the 10−17 range. With these clocks, it was possible to measure a frequency shift due to time dilation of ~10−16 at speeds below 36 km/h (< 10 m/s, the speed of a fast runner) by comparing the rates of moving and resting aluminum ions. It was also possible to detect gravitational time dilation from a difference in elevation between the two clocks of 33 cm. See also Experimental testing of time dilation Tests of special relativity References Further reading Doppler effects Tests of special relativity 1938 in science
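A quick numerical cross-check of the magnitudes quoted for the canal-ray experiments can be made from the accelerating voltage alone. The Python sketch below assumes the simple non-relativistic energy relation eE = (1/2)Mv^2 for a singly charged H2+ ion and the H-beta line at 4861 Å; it is illustrative only and ignores the experimental corrections Ives and Stilwell applied:

import math

E_CHARGE = 1.602e-19      # C
M_H2 = 2.0 * 1.67e-27     # kg, approximate mass of an H2+ ion
C = 3.0e8                 # m/s
LAMBDA_HBETA = 4861.0     # angstroms

def doppler_shifts(voltage):
    # Ion speed from eE = 1/2 M v^2, then the first-order (longitudinal) and
    # second-order (centre-of-gravity, time-dilation) shifts of the H-beta line.
    v = math.sqrt(2.0 * E_CHARGE * voltage / M_H2)
    beta = v / C
    return LAMBDA_HBETA * beta, LAMBDA_HBETA * beta ** 2 / 2.0

for volts in (20000.0, 43000.0):
    d1, d2 = doppler_shifts(volts)
    print("%d V: first order %.1f A, second order %.3f A" % (volts, d1, d2))

The second-order values, roughly 0.05 Å at 20,000 volts and 0.11 Å at 43,000 volts, are of the same size as the 0.047 Å and 0.11 Å figures quoted above for the 1938 and 1941 experiments.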
Ives–Stilwell experiment
Physics
3,173
59,284,197
https://en.wikipedia.org/wiki/Polyethylene%20wax
Polyethylene wax can be used as a dispersant, slip agent, resin additive, and mold release agent. As an oxidised product, OPEW is authorized in the EU as E number reference E914, only for the surface treatment of some fruits. There are a variety of methods for producing polyethylene wax. Polyethylene wax can be made by direct polymerization of ethylene under special conditions that control the molecular weight and chain branching of the final polymer. Another method involves thermal and/or mechanical decomposition of high molecular weight polyethylene resin to create lower molecular weight fractions. A third method involves separation of the low molecular weight fraction from a production stream of high molecular weight polymer. These last two methods produce very low molecular weight fractions that should be removed to avoid a product with a low flash point, which can result in flammability, migration, equipment build-up, fouling and other safety and processing issues. Volatiles in these unrefined waxes can also account for significant yield loss during processing. Polyethylene wax may also be used as a lubricant additive in PVC pipes. References E-number additives Waxes
Polyethylene wax
Physics
241
69,235,243
https://en.wikipedia.org/wiki/Daimler%20M836%20engine
The Daimler-Mercedes M836 engine is a naturally-aspirated and supercharged, 3.9-liter to 4.0-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz, in partnership with Daimler; between 1924 and 1929. M836 engine The six-cylinder in-line 3920 cc engine featured an overhead camshaft which at the time was an unusual feature, with “bevel linkage”. However, it was the switchable supercharger (”Kompressor”), adopted from the company's racing cars, that attracted most of the attention. With the device switched off maximum claimed output was of at 3,100 rpm: with the supercharger operating, maximum output rose to . The top speed listed was 105 km/h (65 mph) or 112 km/h (70 mph) according to which of the two offered final drive ratios was fitted. Applications Mercedes 15/70/100 PS References Mercedes-Benz engines Straight-six engines Engines by model Gasoline engines by model
Daimler M836 engine
Technology
223
77,538,436
https://en.wikipedia.org/wiki/Combustion%20efficiency
Combustion efficiency refers to the effectiveness of the burning process in converting fuel into heat energy. It is measured by the proportion of fuel that is efficiently burned and converted into useful heat, while minimizing the emissions of pollutants. Specifically, it may refer to: fuel efficiency engine efficiency depending on whether the level of efficiency is determined by the fuel itself or the combustion chamber or engine. References Energy conversion Combustion engineering
Combustion efficiency
Engineering
84
35,961,512
https://en.wikipedia.org/wiki/Jeffrey%20Karp
Jeffrey Karp (born 1975) is a Canadian biomedical engineer working as a Professor of Medicine at Harvard Medical School, Brigham and Women's Hospital, and the principal faculty at the Harvard Stem Cell Institute and Affiliate Faculty at the Massachusetts Institute of Technology through the Harvard–MIT Division of Health Sciences and Technology. He is also an affiliate faculty at the Broad Institute. Education Karp was born and raised in Peterborough, Ontario. He graduated from McGill University in 1999 with a degree in chemical engineering. While at McGill he also co-founded the McGill Engineering Code of Ethics "The Blueprint". He received a Ph.D. in Chemical and Biomedical Engineering from the University of Toronto in 2004. From 2004 until 2006, Karp was a postdoctoral fellow in Robert Langer's laboratory at MIT in the Harvard–MIT Program of Health Sciences and Technology; Karp had applied for the position but Langer had no funds to pay him, so Karp secured funding from the Natural Sciences and Engineering Research Council and Langer accepted him. Research In the Langer lab, Karp was inspired by another lab's publication in Nature that described adhesives based on the way that gecko's feet stick to surfaces; he and Langer applied for funding from NSF to make medical adhesives based on geckos, and received the funding. In 2007 he received an appointment and his own lab at Brigham and Women’s Hospital, at a location near MIT; he retained his association with the Harvard–MIT HST program. He has made mentoring high school students, undergraduates, PhD students, and post docs a priority, and in 2008 he won the Outstanding Undergraduate Student Mentor Award at MIT, and in 2010 he won the Thomas A. McMahon Mentoring Award from the HST program. In 2013 the company Gecko Biomedical was founded based on the gecko adhesive work, as well as subsequent work done in the Karp lab based on secretions of sandcastle worms. In 2014 the company Skintifique was formed to commercialize a barrier cream Karp had invented to prevent skin reactions in people with nickel allergy. In 2015 the company Frequency Therapeutics was founded to create treatments for hearing loss based on work done by Langer and Karp inspired by the ability of some amphibians and birds to regrow hair cells that have been damaged. In 2016 the company Alivio Therapeutics was founded based on work by Langer and Karp on a hydrogel to deliver drugs, intended to stick to tissue and only release the drug in response to inflammation. Recognition Technology Review listed him in 2008 as one of the top innovators under the age of 35 (TR35). Karp received the 2011 Young Investigator award from the Society for Biomaterials, and also in 2011, the Boston Business Journal profiled him as a Champion in Health Care Innovation. References External links TEDmed talk American bioengineers Stem cell researchers Harvard Medical School faculty McGill University Faculty of Engineering alumni University of Toronto alumni People from Peterborough, Ontario Living people Fellows of the American Institute for Medical and Biological Engineering 1976 births Massachusetts Institute of Technology alumni
Jeffrey Karp
Biology
634
36,149,934
https://en.wikipedia.org/wiki/Ventus%20%28wireless%20company%29
Ventus is an American company founded in 1999 and headquartered in Norwalk, Connecticut that provides secure private line wireless services, and manufactures cellular wireless hardware. Services Ventus provides managed networks for cellular wireless networking and fixed line services including PCI-DSS (Payment Card Industry) compliant data transport, integration services, data encryption, and integrated network administration and monitoring systems. Ventus' IT services are delivered via products developed by the company's network hardware technologies division. Ventus Technologies Ventus Technologies designs and manufactures cellular routers and other wireless hardware for machine-to-machine and enterprise wireless applications. Ventus' hardware includes dual-carrier, modular, multi-interface embedded wireless 4G LTE/3G routers and multi-band cellular antennas. See also List of companies based in Norwalk, Connecticut References 1999 establishments in New York (state) Companies based in Norwalk, Connecticut American companies established in 1999 Computer companies established in 1999 Manufacturing companies based in Connecticut Networking companies of the United States Networking hardware companies Computer companies of the United States Computer hardware companies
Ventus (wireless company)
Technology
214
57,691,958
https://en.wikipedia.org/wiki/NGC%201259
NGC 1259 is a lenticular galaxy located about 243 million light-years away in the constellation Perseus. The galaxy was discovered by astronomer Guillaume Bigourdan on October 21, 1884 and is a member of the Perseus Cluster. A type Ia supernova designated as SN 2008L was discovered in NGC 1259 on January 14, 2008. See also List of NGC objects (1001–2000) NGC 1260 References External links Perseus Cluster Perseus (constellation) Lenticular galaxies 1259 12208 Astronomical objects discovered in 1884 Discoveries by Guillaume Bigourdan
NGC 1259
Astronomy
121
1,046,155
https://en.wikipedia.org/wiki/Projection-valued%20measure
In mathematics, particularly in functional analysis, a projection-valued measure (or spectral measure) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. A projection-valued measure (PVM) is formally similar to a real-valued measure, except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space. Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state. Definition Let denote a separable complex Hilbert space and a measurable space consisting of a set and a Borel σ-algebra on . A projection-valued measure is a map from to the set of bounded self-adjoint operators on satisfying the following properties: is an orthogonal projection for all and , where is the empty set and the identity operator. If in are disjoint, then for all , for all The second and fourth property show that if and are disjoint, i.e., , the images and are orthogonal to each other. Let and its orthogonal complement denote the image and kernel, respectively, of . If is a closed subspace of then can be wrtitten as the orthogonal decomposition and is the unique identity operator on satisfying all four properties. For every and the projection-valued measure forms a complex-valued measure on defined as with total variation at most . It reduces to a real-valued measure when and a probability measure when is a unit vector. Example Let be a -finite measure space and, for all , let be defined as i.e., as multiplication by the indicator function on L2(X). Then defines a projection-valued measure. For example, if , , and there is then the associated complex measure which takes a measurable function and gives the integral Extensions of projection-valued measures If is a projection-valued measure on a measurable space (X, M), then the map extends to a linear map on the vector space of step functions on X. In fact, it is easy to check that this map is a ring homomorphism. This map extends in a canonical way to all bounded complex-valued measurable functions on X, and we have the following. The theorem is also correct for unbounded measurable functions but then will be an unbounded linear operator on the Hilbert space . This allows to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. That is, if is a measurable function, then a unique measure exists such that Spectral theorem Let be a separable complex Hilbert space, be a bounded self-adjoint operator and the spectrum of . Then the spectral theorem says that there exists a unique projection-valued measure , defined on a Borel subset , such that where the integral extends to an unbounded function when the spectrum of is unbounded. 
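In standard notation, the defining properties, the associated complex measure, and the spectral theorem stated above read as follows (a conventional textbook formulation, with \pi the projection-valued measure on the measurable space (X, M) taking values in the orthogonal projections on the Hilbert space H):

\pi(\varnothing) = 0, \qquad \pi(X) = \mathrm{id}_H, \qquad \pi(A \cap B) = \pi(A)\,\pi(B), \qquad \pi\Big(\bigcup_i A_i\Big)\varphi = \sum_i \pi(A_i)\varphi \ \ \text{for disjoint } A_i \in M .

\mu_{\varphi,\psi}(A) = \langle \pi(A)\varphi, \psi \rangle \quad \text{defines a complex measure with total variation at most } \|\varphi\|\,\|\psi\| .

A = \int_{\sigma(A)} \lambda \, d\pi(\lambda) \quad \text{for a bounded self-adjoint operator } A \text{ with spectrum } \sigma(A).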
Direct integrals First we provide a general example of projection-valued measure based on direct integrals. Suppose (X, M, μ) is a measure space and let {Hx}x ∈ X be a μ-measurable family of separable Hilbert spaces. For every E ∈ M, let (E) be the operator of multiplication by 1E on the Hilbert space Then is a projection-valued measure on (X, M). Suppose , ρ are projection-valued measures on (X, M) with values in the projections of H, K. , ρ are unitarily equivalent if and only if there is a unitary operator U:H → K such that for every E ∈ M. Theorem. If (X, M) is a standard Borel space, then for every projection-valued measure on (X, M) taking values in the projections of a separable Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces {Hx}x ∈ X , such that is unitarily equivalent to multiplication by 1E on the Hilbert space The measure class of μ and the measure equivalence class of the multiplicity function x → dim Hx completely characterize the projection-valued measure up to unitary equivalence. A projection-valued measure is homogeneous of multiplicity n if and only if the multiplicity function has constant value n. Clearly, Theorem. Any projection-valued measure taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures: where and Application in quantum mechanics In quantum mechanics, given a projection-valued measure of a measurable space to the space of continuous endomorphisms upon a Hilbert space , the projective space of the Hilbert space is interpreted as the set of possible (normalizable) states of a quantum system, the measurable space is the value space for some quantum property of the system (an "observable"), the projection-valued measure expresses the probability that the observable takes on various values. A common choice for is the real line, but it may also be (for position or momentum in three dimensions ), a discrete set (for angular momentum, energy of a bound state, etc.), the 2-point set "true" and "false" for the truth-value of an arbitrary proposition about . Let be a measurable subset of and a normalized vector quantum state in , so that its Hilbert norm is unitary, . The probability that the observable takes its value in , given the system in state , is We can parse this in two ways. First, for each fixed , the projection is a self-adjoint operator on whose 1-eigenspace are the states for which the value of the observable always lies in , and whose 0-eigenspace are the states for which the value of the observable never lies in . Second, for each fixed normalized vector state , the association is a probability measure on making the values of the observable into a random variable. A measurement that can be performed by a projection-valued measure is called a projective measurement. If is the real number line, there exists, associated to , a self-adjoint operator defined on by which reduces to if the support of is a discrete subset of . The above operator is called the observable associated with the spectral measure. Generalizations The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal partition of unity. This generalization is motivated by applications to quantum information theory. 
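The measurement statistics described in the quantum-mechanics section above take a compact form in the same notation (standard quantum-mechanics conventions, with \varphi a normalized state and \pi the projection-valued measure of the observable):

P_\pi(E)(\varphi) = \langle \varphi, \pi(E)\varphi \rangle = \|\pi(E)\varphi\|^2, \qquad E \in M,

A = \int_{\mathbb{R}} \lambda \, d\pi(\lambda) \quad \text{(the self-adjoint observable associated with } \pi \text{ when } X = \mathbb{R}).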
See also Spectral theorem Spectral theory of compact operators Spectral theory of normal C*-algebras Notes References * Mackey, G. W., The Theory of Unitary Group Representations, The University of Chicago Press, 1976 G. Teschl, Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators, https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/, American Mathematical Society, 2009. Varadarajan, V. S., Geometry of Quantum Theory V2, Springer Verlag, 1970. Linear algebra Measures (measure theory) Spectral theory
Projection-valued measure
Physics,Mathematics
1,625
67,685,023
https://en.wikipedia.org/wiki/Norzivirales
Norzivirales is an order of viruses, which infect prokaryotes. Most of these bacteriophages were discovered by metagenomics. Taxonomy Norzivirales contains the following four families: Atkinsviridae Duinviridae Fiersviridae Solspiviridae References Virus orders
Norzivirales
Biology
65
67,432,182
https://en.wikipedia.org/wiki/Ticoa
Ticoa is an extinct genus originally assigned to the Cycadales from the Early Cretaceous of Argentina, Chile, and Antarctica. Other authors view this genus as a member of the polyphyletic "seed ferns". Taxonomy The genus was erected by Sergio Archangelsky based on material from the Anfiteatro de Ticó formation. The genus first comprised two species, T. harrisii and T. magnipinnulata, with T. lamellata being described from the Bajo Grande locality in Patagonia, Argentina. The species T. magallanica was described from the Springhill formation in Chile. T. jeffersonii was described from Hope Bay in Antarctica. T. lanceolata was described much later from the Anfiteatro de Ticó formation . Description Ticoa includes large, bipinnate or tripinnate leaves with pecopteroid pinnules and a robust rachis. The cuticle, either hypostomatous or amphistomatous, presents large stomata sunken in a pit formed by multiple subsidiary and encircling cells. References Pteridospermatophyta Controversial plant taxa
Ticoa
Biology
243
12,084,606
https://en.wikipedia.org/wiki/Yalda%20Night
Yaldā Night ( or Chelle Night (also Chellah Night, ) is an ancient festival in Iran, Afghanistan, Azerbaijan, Uzbekistan, Tajikistan, Turkmenistan that is celebrated on the winter solstice. This corresponds to the night of December 20/21 (±1) in the Gregorian calendar, and to the night between the last day of the ninth month (Azar) and the first day of the tenth month (Dey) of the Iranian solar calendar.The longest and darkest night of the year is a time when friends and family gather together to eat, drink and read poetry (especially Hafez) and Shahnameh until well after midnight. Fruits and nuts are eaten and pomegranates and watermelons are particularly significant. The red colour in these fruits symbolizes the crimson hues of dawn and the glow of life. The poems of Divan-e Hafez, which can be found in the bookcases of most Iranian families, are read or recited on various occasions such as this festival and Nowruz. Shab-e Yalda was officially added to UNESCO Intangible Cultural Heritage Lists in December 2022. Names The longest and darkest night of the year marks "the night opening the initial forty-day period of the three-month winter", from which the name Chelleh, "fortieth", derives. There are all together three 40-day periods, one in summer, and two in winter. The two winter periods are known as the "great Chelleh" period ( to , 40 full days), followed/overlapped by the "small Chelleh" period ( to , 20 days + 20 nights = 40 nights and days). Shab-e Chelleh is the night opening the "big Chelleh" period, that is the night between the last day of autumn and the first day of winter. The other name of the festival, 'Yaldā', is ultimately borrowing from Syriac-speaking Christians. According to Dehkhoda, "Yalda is a Syriac word meaning birthday, and because people have adapted Yalda night with the nativity of Messiah, it's called the name; however, the celebration of Christmas (Noël) established on December 25, is set as the birthday of Jesus. Yalda is the beginning of winter and the last night of autumn, and it is the longest night of the year". In the first century, significant numbers of Eastern Christians were settled in Parthian and Sasanian territories, where they had received protection from religious persecution. Through them, Iranians (i.e. Parthians, Persians etc.) came in contact with Christian religious observances, including, it seems, Nestorian Christian Yalda, which in Syriac (a Middle Aramaic dialect) literally means "birth" but in a religious context was also the Syriac Christian proper name for Christmas, and which—because it fell nine months after Annunciation—was celebrated on eve of the winter solstice. The Christian festival's name passed to the non-Christian neighbors and although it is not clear when and where the Syriac term was borrowed into Persian, gradually 'Shab-e Yalda' and 'Shab-e Chelleh' became synonymous and the two are used interchangeably. History Yalda Night was one of the holy nights in ancient Iran and included in the official calendar of the Iranian Achaemenid Empire from at least 502 BCE under Darius I. Many of its modern festivities and customs remain unchanged from this period. Ancient peoples such as the Aryans and Indo-Europeans were well attuned to natural phenomena such as the changing of seasons, as their daily activities were dictated by the availability of sunlight, while their crops were impacted by climate and weather. 
They found that the shortest days are the last days of autumn and the first night of winter, and that immediately after, the days gradually become longer and the nights shorter. As such, the winter solstice, as the longest night, was called "The night of sun’s birth (Mehr)" and considered to mark the beginning of the year. The Iranian calendar The Iranian (Persian) calendar was founded and framed by Hakim Omar Khayyam. The history of Persian calendars initially points back to the time when the region of modern-day Persia celebrated their new years according to the Zoroastrian calendar. As Zoroastrianism was then the main religion in the region, their years consisted of "Exactly 365 days, distributed among twelve months of 30 days each plus five special month-less days, known popularly as the ‘stolen ones’, or, in religious parlance, as the ‘five Gatha Days'". Before the creation of the Solar Hijri calendar, the Jalali calendar was put in place through the order of Sulṭān Jalāl al-Dīn Malikshāh-i Saljūqī in the 5th c. A.H. According to the Biographical Encyclopedia of Astronomers, “After the death of Yazdigird III (the last king of the Sassanid dynasty), the Yazdigirdī Calendar, as a solar one, gradually lost its position, and the Hijrī Calendar replaced it”. Yalda Night is celebrated on the winter solstice, the longest and darkest night of the year. Customs and traditions In Zoroastrian tradition the longest and darkest night of the year was a particularly inauspicious day, and the practices of what is now known as "Shab-e Chelleh/Yalda" were originally customs intended to protect people from evil (see dews) during that long night, at which time the evil forces of Ahriman were imagined to be at their peak. People were advised to stay awake most of the night, lest misfortune should befall them, and people would then gather in the safety of groups of friends and relatives, share the last remaining fruits from the summer, and find ways to pass the long night together in good company. The next day (i.e. the first day of Dae month) was then a day of celebration, and (at least in the 10th century, as recorded by Al-Biruni), the festival of the first day of Dae month was known as Ḵorram-ruz (joyful day) or Navad-ruz (ninety days [left to Nowruz]). Although the religious significance of the long dark night has been lost, the old traditions of staying up late in the company of friends and family have been retained in Iranian culture to the present day. References to other older festivals held around the winter solstice are known from both Middle Persian texts as well as texts of the early Islamic period. In the 10th century, Al-Biruni mentions the mid-year festival (Maidyarem Gahanbar) that ran from . This festival is generally assumed to have been originally on the winter solstice, and which gradually shifted through the introduction of intercalation. Al-Biruni also records an Adar Jashan festival of fire celebrated on the intersection of Adar day (9th) of Adar month (9th), which is the last autumn month. This was probably the same as the fire festival called Shahrevaragan (Shahrivar day of Shahrivar month), which marked the beginning of winter in Tokarestan. In 1979, journalist Hashem Razi theorized that Mehregan the day-name festival of Mithra that in pre-Islamic times was celebrated on the autumn equinox and is today still celebrated in the autumn had in early Islamic times shifted to the winter solstice. 
Razi based his hypothesis on the fact that some of the poetry of the early Islamic era refers to Mihragan in connection with snow and cold. Razi's theory has a significant following on the Internet, but while Razi's documentation has been academically accepted, the conclusions he draws from it have not. Sufism's Chella, which is a 40-day period of retreat and fasting, is also unrelated to the winter solstice festival. Food plays a central role in the present-day form of the celebrations. In most parts of Iran the extended family come together and enjoy a fine dinner. A wide variety of fruits and sweetmeats specifically prepared or kept for this night are served. Foods common to the celebration include watermelon, pomegranate, nuts, and dried fruit. These items and more are commonly placed on a korsi, which people sit around. In some areas it is the custom that forty varieties of edibles should be served during the ceremony of the night of Chelleh. Light-hearted superstitions run high on the night of Chelleh. These superstitions, however, are primarily associated with consumption. For instance, it is believed that consuming watermelons on the night of Chelleh will ensure the health and well-being of the individual during the months of summer by protecting him from falling victim to excessive heat or disease produced by hot humors. In Khorasan, there is a belief that whoever eats carrots, pears, pomegranates, and green olives will be protected against the harmful bite of insects, especially scorpions. Eating garlic on this night protects one against pains in the joints. In Khorasan, one popular custom was, and still is, the preparation of Kafbikh, a kind of traditional Iranian sweet made especially in the cities of Gonabad and Birjand for Yalda. After dinner the older individuals entertain the others by telling them tales and anecdotes. Another favorite and prevalent pastime of the night of Chelleh is fāl-e Ḥāfeẓ, which is divination using the Dīvān of Hafez (i.e. bibliomancy). It is believed that one should not divine by the Dīvān of Hafez more than three times, however, or the poet may get angry. Activities common to the festival include staying up past midnight, conversation, drinking, reading poems out loud, telling stories and jokes, and, for some, dancing. Prior to the invention and prevalence of electricity, decorating and lighting the house and yard with candles was also part of the tradition, but few have continued this tradition. Another tradition is giving dried fruits, nuts, and gifts to family and friends, and especially to the bride, wrapped in tulle and tied with ribbon (similar to wedding and shower "party favors"); in Khorasan, giving a gift to the bride was obligatory. Gallery See also Dongzhi Festival Hanukkah Mehregan Nowruz Tirgan Footnotes References Group 1 Group 2 External links Article about Yalda night on Irpersiatour website Festivals in Iran December observances Persian culture Observances set by the Solar Hijri calendar Winter events in Iran Winter solstice Intangible Cultural Heritage of Iran
Yalda Night
Astronomy
2,245
49,537,551
https://en.wikipedia.org/wiki/DuPont%20Fabros%20Technology
DuPont Fabros Technology, Inc. (DFT) was a real estate investment trust that invested in carrier-neutral data centers and provided colocation and peering services. In 2017, the company was acquired by Digital Realty. Operations As of December 31, 2016, the company owned 11 operating data center facilities comprising over 3.3 million net rentable square feet. Eight of the properties were in Northern Virginia, two were in Elk Grove Village, Illinois, and one was in Santa Clara, California. The company leased space to companies, on a wholesale level, in which such companies rented space to build their own data centers. The company had 32 customers and derived 92% of its revenue from its 15 largest customers. The company's largest customers included Microsoft (25.4% of revenue), Facebook (20.2% of revenue), Rackspace (9.0% of revenue), and Yahoo! (6.0% of revenue). History The company was co-founded by Lammot J. du Pont, an analyst for JPMorgan Chase and Hossein Fateh, a real estate developer in the Washington metropolitan area. The company sought to acquire data centers that belonged to defunct internet service providers. In 2004, the company's predecessor acquired 5 data centers from Savvis for $52 million in a leaseback transaction. In 2005, the company's predecessor acquired a 230,000 square foot data center from AOL for $58.5 million. On March 2, 2007, the company was incorporated as a real estate investment trust. In October 2007, the company became a public company via an initial public offering that raised $640 million, the 7th largest initial public offering of a real estate investment trust at that time. In early 2008, the company halted construction projects due to a lack of financing. In 2009, the company was named as the fastest growing company in the Washington metro area by American City Business Journals. In February 2011, Mohammed Mark Amin resigned from the board of directors and was replaced by John T. Roberts Jr. In 2012, Hossein Fateh, the chief executive officer of the company, forgone his $450,000 salary in exchange for use of the company jet. In 2012, the company reported that the volume of leasing was the largest in company history. In May 2012, Mohammed Mark Amin, formerly a director of the company, was accused by the U.S. Securities and Exchange Commission of making a $618,000 profit as a result of insider trading in the company's securities. In September 2014, the company opened a new data center in Ashburn, Virginia. In February 2015, Christopher P. Eldredge was named chief executive officer of the company. In March 2015, the company won the Brill Award For Data Center Design issued by Uptime Institute. In March 2016, the company acquired a 46.7 acre parcel of land in Hillsboro, Oregon for $11.2 million. In June 2016, the company sold a 38-acre data center in New Jersey to Quality Technology Services for $125 million. In October 2016, the company acquired the former printing plant of the Toronto Star for C$54.25 million, with plans to convert it to a data center. In May 2017, the company acquired a 56.5-acre undeveloped site in Mesa, Arizona with plans to construct a data center campus. In September 2017, the company was acquired by Digital Realty. References Data centers American companies established in 2007 Real estate investment trusts of the United States Companies listed on the New York Stock Exchange Technology companies established in 2007
DuPont Fabros Technology
Technology
733
67,299
https://en.wikipedia.org/wiki/Allan%20Wells
Allan Wipper Wells (born 3 May 1952) is a British former track and field sprinter who became the 100 metres Olympic champion at the 1980 Summer Olympics in Moscow. In 1981, he was both the IAAF Golden Sprints and IAAF World Cup gold medallist. He is also a three-time European Cup gold medallist. He was a multiple medallist for Scotland at the Commonwealth Games, winning two golds at the 1978 Commonwealth Games and completing a 100 metres/200 metres sprint double at the 1982 Commonwealth Games. Wells also recorded the fastest British 100/200 times in 1978, 1979, 1980, 1981, 1982, 1983 and 100 m in 1984. Biography Early years and long jump Born in Edinburgh, Wells was educated at Fernieside Primary School and then Liberton High School. He left school at age 15 to begin an engineering apprenticeship. He was initially a triple jumper and long jumper, and was the Scottish indoor long jump champion in 1974. Commonwealth and European sprint titles He began concentrating on sprint events in 1976. In 1977 he won the Amateur Athletic Association (AAA) Indoor 60 metres title, and won his first of seven outdoor Scottish sprint titles. In the 1978 season his times and victories continued to improve. He set a new British record at Gateshead 10.29, beating Don Quarrie and James Sanford, and also won the UK 100/200 Championships. At the Commonwealth Games in Edmonton, Alberta, Canada, he won the gold medal in the 200 m and silver in the 100 m. He also won the 4 × 100 m running the second leg with Drew McMaster, David Jenkins and Cameron Sharp running the other three legs. This success continued in 1979, when he won the European Cup 200 metres in Turin, Italy, beating the new world record holder Pietro Mennea on his home ground; he also finished 3rd in the 100 metres. 1980 – Olympic success and the showdown in Koblenz At the start of the 1980 season, Wells won the AAA's 100 metres, then went to the Côte d'Azur to finish preparing for the 1980 Moscow Olympic Games. He never used starting blocks, until a rule change forced him to do so for the Moscow Olympics. Prior to the Olympics he was put under pressure by Margaret Thatcher in the boycott of the games led by the Americans. He responded by declining all media requests. His Olympic participation was threatened by chronic back pain that struck him shortly before the games began. Each day he underwent four exhausting treatment sessions that left him too tired to train. Instead when not undergoing treatment he spent his time relaxing. In Moscow, Wells qualified for the final, with a new British record 10.11 s, where he faced pre-race favourite Silvio Leonard of Cuba. Wells finished with an extreme lean which allowed his head and shoulder to cross the finish line before Leonard's chest in a photo finish; both men were given a final time of 10.25 s. Wells became the oldest Olympic 100 m champion at that time at the age of 28 years 83 days. The 200 m final was another close affair. Wells won the silver medal behind Pietro Mennea, who beat him by 0.02 s; again he set a British record of 20.21 s. He went on to break a third British record, 38.62 s, with the sprint relay team that finished fourth in the final. In a later interview Wells said the two issues he faced prior to the games were inadvertently key factors in his success. He said in an interview to The Scotsman, "When we got to Moscow, [my wife and coach] Margot and I decided that I'd do six starts and see how it went. 
The fourth and fifth were full-out as if I was competing and I asked Margot what she thought: she said they were the best she'd ever seen me do. The rest had done me a lot of good, I was really fresh and committed, and those starts gave me the psychological edge over everyone else, which was key because the Olympics is all about your mental aptitude. You're at your fastest when you're relaxed and flowing (Wells' 10.11secs to qualify for the 100m final remains the Scottish record) rather than having to be aggressive." Following the Moscow Olympics, there was some suggestion that Wells's gold medal had been devalued by the boycott of the games. Wells accepted an invitation to take on the best USA sprinters of the day, among others, the ASV Weltklasse track meeting in Cologne in West Germany. Less than two weeks after the Moscow gold, he won the final in Cologne in a time of 10.19s, beating Americans Stanley Floyd (10.21), Mel Lattany (10.25), Carl Lewis (10.30) and Harvey Glance (10.31). Lattany went straight over to Wells after crossing the line to say, "For what it's worth, Allan, You're the Olympic champion and you would have been Olympic champion no matter who you ran against in Moscow." At the end of 1980, Wells was awarded Scottish Sports Personality of the Year. 1981 World Cup win In 1981, after a tour of Australia and New Zealand, Wells won the European Cup 100 metres, beating East German Frank Emmelmann. Wells also finished 2nd in the 200 m. He then won the "IAAF Golden sprints" in Berlin, which was the most prominent sprint meeting in the world that year. Although finishing second to the Frenchman Hermann Panzo by 0.01 secs in the 100, Wells won the 200 beating the top four American sprinters Mel Lattany, Jeff Phillips, Stanley Floyd, Steve Williams as well as Canada's Ben Johnson in the 100/200, 10.15/20.15 (200 wind assist) for Wells to win the event in an aggregate 30.30. Wells won the 100 metres at the IAAF World cup in Rome, beating Carl Lewis; Wells then finished 2nd in the world cup 200 in 20.53. Afterwards, he beat Mel Lattany and Stanley Floyd again, when he won a 200 in 20.26 in the Memorial Van Damme meeting in Brussels, Belgium. Later sprinting career In 1982, in Brisbane, Queensland, Australia, Wells won two more Commonwealth Games titles in the 100 m, a wind assisted 10.02. and then the 200 m, and a bronze medal in the relay. He shared the 200 m title with Mike McFarlane of England in 20.43 in a rare dead heat. In 1983, he won his third European Cup title by winning the 200 metres in 20.72, beating his old adversary Pietro Mennea in London, and again took 2nd in the 100 m. He then finished 4th in both the 100/200 sprint finals at the IAAF World Championships in Helsinki. At age 32, he reached the 100 m semi-finals at the 1984 Los Angeles Olympics, and was a member of the relay team that finished 7th in the final. Wells missed most of 1985 with injury. He was not selected for the Commonwealth Games in Edinburgh in 1986, as he had failed to compete at the Scottish trials. However, on 5 August at Gateshead, he beat both Ben Johnson and Atlee Mahorn, the respective Commonwealth 100 m and 200 m champions. Wells gained additional attention at Gateshead for being the first to be seen sporting the now common Lycra running shorts. The sight of these led to him being dubbed Wilson of the Wizard (a comic book character). Wells was consequently selected for Stuttgart in the European championships, coming fifth in both the 100 m and 200 m finals. 
He also had a victory against Linford Christie at Crystal Palace at the end of 1986 in 100m at 10.31. One of his last victories was winning the Inverness Highland Games 100/200 double in 1987. In 1987 his best time was 10.28 and he qualified for the Rome World Championships but he was injured. Although his later career was plagued by repeated back injuries, he still won a career total of 18 medals at major championships before retiring in his mid-30s. He and Don Quarrie and Pietro Mennea set a trend for sprinters in their mid thirties to compete longer in the late Eighties. After competitive retirement Since 1982 Wells has lived in Surrey, with his wife Margot. After retirement, he was a coach for the British bobsleigh team. Margot was also a Scottish 100/100 hurdles champion. They are now based in Guildford, Surrey where she is a fitness consultant, and Allan is a retired systems engineer. Allan coached the Bank of Scotland specialist sprint squad alongside another former Scottish sprinter, Ian Mackie. Wells's personal best for the 100 metres is 10.11, and for the 200 metres is 20.21, run at the Moscow 1980 games, and both are still Scottish records. He also ran a wind-assisted (+5.9 m/s) 10.02 in Brisbane, 1982 (still the track record as of August 2024 which he shares with Rohan Browning of Sydney, Australia from April 2023), and (+3.7 m/s) 20.11 in Edinburgh, 1980. In June 2015, a BBC documentary (Panorama: Catch Me If You Can) uncovered allegations by Wells' former teammate of historical doping by the 1980 Olympic 100m champion, beginning in 1977. Wells denied the allegations. As of August 2024, Wells holds two track records for 200 metres, both of which had wind-assistance. They are Turin (20.29, 1979, +2.2 m/s) and Venice (20.26, 1981, +8.5 m/s). Honours and awards Wells was appointed Member of the Order of the British Empire (MBE) in the 1982 Birthday Honours for services to athletics. He was also inducted alongside Eric Liddell and Wyndham Halswelle (two other former Scottish Athletic Olympic Champions) into the Scottish Sports Hall of Fame. Wells was the first baton holder for the Queen's Baton Relay for the 2014 Commonwealth Games, carrying the baton from Buckingham Palace in London in October 2013. In July 2014, Wells received, along with his wife Margot, an Honorary Doctorate of Science from Edinburgh Napier University. 
References External links NO EASY WAY: Allan Wells, One Man's Olympics at National Library of Scotland's Moving Image Archive 1952 births Living people Scottish male sprinters British male sprinters Scottish male long jumpers Athletes from Edinburgh Scottish Olympic competitors Olympic athletes for Great Britain Olympic gold medallists for Great Britain Olympic silver medallists for Great Britain Athletes (track and field) at the 1980 Summer Olympics Athletes (track and field) at the 1984 Summer Olympics Commonwealth Games gold medallists for Scotland Commonwealth Games silver medallists for Scotland Commonwealth Games bronze medallists for Scotland Commonwealth Games medallists in athletics Athletes (track and field) at the 1978 Commonwealth Games Athletes (track and field) at the 1982 Commonwealth Games World Athletics Championships athletes for Great Britain Members of the Order of the British Empire Systems engineers Scottish engineers People educated at Liberton High School Medalists at the 1980 Summer Olympics Olympic gold medalists in athletics (track and field) Olympic silver medalists in athletics (track and field) Medallists at the 1978 Commonwealth Games Medallists at the 1982 Commonwealth Games
Allan Wells
Engineering
2,285
2,781,170
https://en.wikipedia.org/wiki/Omega%20Andromedae
Omega Andromedae (ω And, ω Andromedae) is the Bayer designation for a slowly co-rotating binary star system in the northern constellation of Andromeda. Parallax measurements made during the Gaia mission make this system to be approximately from Earth. Its apparent visual magnitude is +4.83, which makes it bright enough to be seen with the naked eye. The primary component has a stellar classification of F5 IVe. The IV luminosity class indicates that it is probably a subgiant star that is in the process of evolving away from the main sequence as the supply of hydrogen at its core depletes. However, Abt (1985) gives a classification of F3 V, suggesting it is an F-type main-sequence star. The measured angular diameter of the primary star is . At the system's estimated distance this yields a size of about 2.2 times that of the Sun. It is emitting about seven times solar luminosity from its outer atmosphere at an effective temperature of . This heat gives it the yellow-white-hued glow of an F-type star. In 2008, the companion star was resolved using adaptive optics at the Lick Observatory. Later observations showed the magnitude difference between the two stars is 3.65 ± 0.03 and they are separated by 0.669 arcsecond. Abt (1985) lists the class as F5 V. References External links Image ω Andromedae 008799 Andromedae, Omega Andromedae, 48 Andromeda (constellation) F-type subgiants 006813 0417 Durchmusterung objects Binary stars F-type main-sequence stars
Omega Andromedae
Astronomy
352
8,088,986
https://en.wikipedia.org/wiki/Complex%20conjugate%20line
In complex geometry, the complex conjugate line of a straight line is the line that it becomes by taking the complex conjugate of each point on this line. This is the same as taking the complex conjugates of the coefficients of the line. So if the equation of a line is ax + by + c = 0, with complex coefficients a, b and c, then the equation of its conjugate is a̅x + b̅y + c̅ = 0, where the bar denotes complex conjugation. The conjugate of a real line is the line itself. The intersection point of two conjugated lines is always real. References Complex numbers
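An illustrative Python sketch of this definition (the coefficients and point below are examples, not drawn from the sources above): it conjugates the coefficients of a line with complex coefficients and checks that the line's intersection with its conjugate is a real point, assuming the affine form ax + by + c = 0.

# Illustrative only: a complex line, its conjugate, and their real intersection point.
import numpy as np

a, b, c = 1 + 2j, 2 - 1j, -3 + 0.5j                 # line l : a*x + b*y + c = 0
ac, bc, cc = np.conj(a), np.conj(b), np.conj(c)     # conjugate line l* : conj(a)*x + conj(b)*y + conj(c) = 0

# Solve the 2x2 linear system for the intersection of l and l*.
A = np.array([[a, b], [ac, bc]])
rhs = np.array([-c, -cc])
x, y = np.linalg.solve(A, rhs)
print(x, y)                                         # imaginary parts are ~0, so the intersection is real
assert abs(x.imag) < 1e-12 and abs(y.imag) < 1e-12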
Complex conjugate line
Mathematics
101
542,343
https://en.wikipedia.org/wiki/Square%20pyramidal%20number
In mathematics, a pyramid number, or square pyramidal number, is a natural number that counts the stacked spheres in a pyramid with a square base. The study of these numbers goes back to Archimedes and Fibonacci. They are part of a broader topic of figurate numbers representing the numbers of points forming regular patterns within different shapes. As well as counting spheres in a pyramid, these numbers can be described algebraically as a sum of the first positive square numbers, or as the values of a cubic polynomial. They can be used to solve several other counting problems, including counting squares in a square grid and counting acute triangles formed from the vertices of an odd regular polygon. They equal the sums of consecutive tetrahedral numbers, and are one-fourth of a larger tetrahedral number. The sum of two consecutive square pyramidal numbers is an octahedral number. History The pyramidal numbers were one of the few types of three-dimensional figurate numbers studied in Greek mathematics, in works by Nicomachus, Theon of Smyrna, and Iamblichus. Formulas for summing consecutive squares to give a cubic polynomial, whose values are the square pyramidal numbers, are given by Archimedes, who used this sum as a lemma as part of a study of the volume of a cone, and by Fibonacci, as part of a more general solution to the problem of finding formulas for sums of progressions of squares. The square pyramidal numbers were also one of the families of figurate numbers studied by Japanese mathematicians of the wasan period, who named them "kirei saijō suida" (with modern kanji, 奇零 再乗 蓑深). The same problem, formulated as one of counting the cannonballs in a square pyramid, was posed by Walter Raleigh to mathematician Thomas Harriot in the late 1500s, while both were on a sea voyage. The cannonball problem, asking whether there are any square pyramidal numbers that are also square numbers other than 1 and 4900, is said to have developed out of this exchange. Édouard Lucas found the 4900-ball pyramid with a square number of balls, and in making the cannonball problem more widely known, suggested that it was the only nontrivial solution. After incomplete proofs by Lucas and Claude-Séraphin Moret-Blanc, the first complete proof that no other such numbers exist was given by G. N. Watson in 1918. Formula If spheres are packed into square pyramids whose number of layers is 1, 2, 3, etc., then the square pyramidal numbers giving the numbers of spheres in each pyramid are: These numbers can be calculated algebraically, as follows. If a pyramid of spheres is decomposed into its square layers with a square number of spheres in each, then the total number of spheres can be counted as the sum of the number of spheres in each square, and this summation can be solved to give a cubic polynomial, which can be written in several equivalent ways: This equation for a sum of squares is a special case of Faulhaber's formula for sums of powers, and may be proved by mathematical induction. More generally, figurate numbers count the numbers of geometric points arranged in regular patterns within certain shapes. The centers of the spheres in a pyramid of spheres form one of these patterns, but for many other types of figurate numbers it does not make sense to think of the points as being centers of spheres. In modern mathematics, related problems of counting points in integer polyhedra are formalized by the Ehrhart polynomials. 
These differ from figurate numbers in that, for Ehrhart polynomials, the points are always arranged in an integer lattice rather than having an arrangement that is more carefully fitted to the shape in question, and the shape they fit into is a polyhedron with lattice points as its vertices. Specifically, the Ehrhart polynomial of an integer polyhedron is a polynomial that counts the integer points in a copy of that is expanded by multiplying all its coordinates by the number . The usual symmetric form of a square pyramid, with a unit square as its base, is not an integer polyhedron, because the topmost point of the pyramid, its apex, is not an integer point. Instead, the Ehrhart polynomial can be applied to an asymmetric square pyramid with a unit square base and an apex that can be any integer point one unit above the base plane. For this choice of , the Ehrhart polynomial of a pyramid is . Geometric enumeration As well as counting spheres in a pyramid, these numbers can be used to solve several other counting problems. For example, a common mathematical puzzle involves counting the squares in a large by square grid. This count can be derived as follows: The number of squares in the grid is . The number of squares in the grid is . These can be counted by counting all of the possible upper-left corners of squares. The number of squares in the grid is . These can be counted by counting all of the possible upper-left corners of squares. It follows that the number of squares in an square grid is: That is, the solution to the puzzle is given by the -th square pyramidal number. The number of rectangles in a square grid is given by the squared triangular numbers. The square pyramidal number also counts the acute triangles formed from the vertices of a -sided regular polygon. For instance, an equilateral triangle contains only one acute triangle (itself), a regular pentagon has five acute golden triangles within it, a regular heptagon has 14 acute triangles of two shapes, etc. More abstractly, when permutations of the rows or columns of a matrix are considered as equivalent, the number of matrices with non-negative integer coefficients summing to , for odd values of , is a square pyramidal number. Relations to other figurate numbers The cannonball problem asks for the sizes of pyramids of cannonballs that can also be spread out to form a square array, or equivalently, which numbers are both square and square pyramidal. Besides 1, there is only one other number that has this property: 4900, which is both the 70th square number and the 24th square pyramidal number. The square pyramidal numbers can be expressed as sums of binomial coefficients: The binomial coefficients occurring in this representation are tetrahedral numbers, and this formula expresses a square pyramidal number as the sum of two tetrahedral numbers in the same way as square numbers split into two consecutive triangular numbers. If a tetrahedron is reflected across one of its faces, the two copies form a triangular bipyramid. The square pyramidal numbers are also the figurate numbers of the triangular bipyramids, and this formula can be interpreted as an equality between the square pyramidal numbers and the triangular bipyramidal numbers. Analogously, reflecting a square pyramid across its base produces an octahedron, from which it follows that each octahedral number is the sum of two consecutive square pyramidal numbers. 
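The closed form, the grid-counting claim, and the cannonball solution discussed above can all be checked with a few lines of Python; this is an illustrative sketch using the standard formulas, not material from the sources cited in the article.

from math import isqrt

def pyramidal(n):                                    # n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

# The closed form agrees with the direct sum of the first n squares.
for n in range(1, 200):
    assert pyramidal(n) == sum(k * k for k in range(1, n + 1))

# Counting k-by-k squares in an n-by-n grid by their upper-left corners gives P(n).
def count_squares(n):
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

assert all(count_squares(n) == pyramidal(n) for n in range(1, 60))
print(count_squares(8))                              # 204 squares on an 8-by-8 board

# Cannonball problem: pyramidal numbers that are also perfect squares.
print([n for n in range(1, 10_000) if isqrt(pyramidal(n)) ** 2 == pyramidal(n)])  # [1, 24]; P(24) = 4900 = 70**2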
Square pyramidal numbers are also related to tetrahedral numbers in a different way: the points from four copies of the same square pyramid can be rearranged to form a single tetrahedron with twice as many points along each edge. That is, To see this, arrange each square pyramid so that each layer is directly above the previous layer, e.g. the heights are 4321 3321 2221 1111 Four of these can then be joined by the height pillar to make an even square pyramid, with layers . Each layer is the sum of consecutive triangular numbers, i.e. , which, when totalled, sum to the tetrahedral number. Other properties The alternating series of unit fractions with the square pyramidal numbers as denominators is closely related to the Leibniz formula for , although it converges faster. In approximation theory, the sequences of odd numbers, sums of odd numbers (square numbers), sums of square numbers (square pyramidal numbers), etc., form the coefficients in a method for converting Chebyshev approximations into polynomials. References External links Figurate numbers Pyramids Articles containing video clips
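The two identities stated above — four copies of a pyramid rearranging into a tetrahedron with twice the edge length, and each octahedral number being the sum of two consecutive square pyramidal numbers — can be verified numerically. The sketch below uses the standard formulas for tetrahedral and octahedral numbers; it is illustrative and not taken from the article's sources.

def pyramidal(n):                        # square pyramidal number
    return n * (n + 1) * (2 * n + 1) // 6

def tetrahedral(n):                      # n-th tetrahedral number
    return n * (n + 1) * (n + 2) // 6

def octahedral(n):                       # n-th octahedral number
    return n * (2 * n * n + 1) // 3

for n in range(1, 100):
    assert 4 * pyramidal(n) == tetrahedral(2 * n)
    assert octahedral(n) == pyramidal(n) + pyramidal(n - 1)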
Square pyramidal number
Mathematics
1,695
51,295,302
https://en.wikipedia.org/wiki/Lateral%20movement%20%28cybersecurity%29
Lateral movement refers to the techniques that cyber attackers, or threat actors, use to move progressively through a network as they search for the key data and assets that are the ultimate target of their attack campaigns. While increasingly sophisticated attack sequences, planned much like a heist, have helped threat actors evade detection better than in the past, cyber defenders have also learned to turn lateral movement against attackers, using its telltale patterns to locate intruders and respond to an attack more effectively. Lateral movement is one of the 14 tactic categories of the ATT&CK framework of tactics, techniques, and procedures. References Cybercrime
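As an illustration of the defensive use described above, the sketch below flags accounts that authenticate to an unusually large number of distinct hosts within a short window, a common lateral-movement heuristic. It is a hypothetical example, not part of the ATT&CK framework or the article; the event format, field names, and threshold are invented for illustration.

# Hypothetical detection heuristic: many distinct destination hosts per account per hour.
from collections import defaultdict

# Each event: (hour_bucket, account, destination_host) -- the format is illustrative only.
events = [
    (14, "svc_backup", "db01"), (14, "svc_backup", "db02"),
    (14, "svc_backup", "web03"), (14, "svc_backup", "file04"),
    (14, "alice", "web03"),
]

THRESHOLD = 3  # arbitrary cutoff chosen for this example
seen = defaultdict(set)
for hour, account, host in events:
    seen[(hour, account)].add(host)

for (hour, account), hosts in seen.items():
    if len(hosts) >= THRESHOLD:
        print(f"possible lateral movement: {account} reached {len(hosts)} hosts in hour {hour}")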
Lateral movement (cybersecurity)
Technology
130
13,942,689
https://en.wikipedia.org/wiki/Isopropyl%20palmitate
Isopropyl palmitate is the ester of isopropyl alcohol and palmitic acid. It is an emollient, moisturizer, thickening agent, and anti-static agent. The chemical formula is CH3(CH2)14COOCH(CH3)2. References Cosmetics chemicals Isopropyl esters Lipids Palmitate esters
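Expanding the condensed formula above gives C19H38O2, from which the molar mass can be estimated in a few lines of Python. The atomic masses are rounded standard values and the result (about 298.5 g/mol) is an estimate computed here, not a figure quoted from the article.

# Approximate molar mass of isopropyl palmitate, C19H38O2.
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol, rounded
composition = {"C": 19, "H": 38, "O": 2}

molar_mass = sum(atomic_mass[el] * n for el, n in composition.items())
print(f"{molar_mass:.1f} g/mol")                        # roughly 298.5 g/mol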
Isopropyl palmitate
Chemistry
78
54,427,559
https://en.wikipedia.org/wiki/VoIP%20vulnerabilities
VoIP vulnerabilities are weaknesses in the VoIP protocol or its implementations that expose users to privacy violations and other problems. VoIP is a group of technologies that enable voice calls online. VoIP contains similar vulnerabilities to those of other internet use. Risks are not usually mentioned to potential customers. VoIP provides no specific protections against fraud and illicit practices. Vulnerabilities Eavesdropping Unencrypted connections are vulnerable to security breaches. Hackers can eavesdrop on conversations and extract valuable data. Network attacks Attacks on the user's network or internet provider can disrupt or destroy the connection. Since VoIP requires an internet connection, direct attacks on the internet connection or provider can be effective. Such attacks target office telephony. Mobile applications that do not rely on an internet connection to make calls are immune to such attacks. Default security settings VoIP phones are smart devices that need to be configured. In some cases, Chinese manufacturers are using default passwords that lead to vulnerabilities. VoIP over Wi-Fi While VoIP is relatively secure, it still needs an internet connection, which is often a Wi-Fi network, making VoIP subject to Wi-Fi vulnerabilities. Packet loss Since VoIP runs over an internet connection, whether wired, Wi-Fi, or 4G, it is susceptible to packet loss, which affects the ability to make and receive calls or makes calls hard to hear. The susceptibility is due to the real-time nature of the communication. Packet loss is the biggest reason for VoIP support calls. SIP ALG When VoIP was first set up, a setting called SIP ALG was added to routers to prevent VoIP packets from being modified. However, on more modern VoIP systems, the SIP ALG router setting causes routing issues with VoIP packets, causing calls to drop. Routers are usually shipped with SIP ALG turned on. Exploits Spam VoIP is subject to spam called SPIT (Spam over Internet Telephony). Using the extensions provided by VoIP PBX capabilities, the spammer can harass their target from different numbers. The process can be automated and can fill the target's voice mail with notifications. The spammer can make calls often enough to block the target from receiving important calls. Phishing VoIP users can change their Caller ID if they have admin rights on the VoIP server. Anyone who resells VoIP or manages their own VoIP server can allocate any phone number as an outgoing number. This is commonly done for genuine reasons when a customer is porting a number, so they can use their number on a new platform while the port takes place. But it can also be used maliciously to mask any number (a.k.a. Caller ID spoofing), allowing a caller to pose as a relative or colleague in order to extract information, money, or benefits from the target. See also Comparison of VoIP software INVITE of Death List of VoIP companies Mobile communications over IP - Mobile VoIP Voice over WLAN - VoIP over a WiFi network References Internet security
VoIP vulnerabilities
Technology
650
3,621,246
https://en.wikipedia.org/wiki/Flooded%20engine
A flooded engine is an internal combustion engine that has been fed an excessively rich air-fuel mixture that cannot be ignited. This is caused by the mixture exceeding the upper explosive limit for the particular fuel. An engine in this condition will not start until the excessively rich mixture has been cleared. It is also possible for an engine to stall from a running state due to this condition. Condition Engine flooding was a common problem with carbureted cars, but newer fuel-injected ones are immune to the problem when operating within normal tolerances. Flooding usually occurs during starting, especially under cold conditions or because the accelerator has been pumped. It can also occur during hot starting; high temperatures may cause fuel in the carburetor float chamber to evaporate into the inlet manifold, causing the air/fuel mixture to exceed the upper explosive limit. High temperature fuel may also result in a vapor lock, which is unrelated to flooding but has a similar symptom. Flooding can also occur if the choke has been over applied or has malfunctioned. A severe form of engine flooding occurs when excessive liquid fuel enters the combustion chamber. This reduces the dead volume of the combustion chamber and thus places a heavy load on the starter motor, such that it fails to turn the engine. Damage (due to excessive compression and even dilution of the lubricating oil with fuel) can also occur. This condition is known as the engine "flooding out." Possible causes of too much liquid fuel in the engine include a defective carburetor float that is not closing the fuel inlet needle valve, or debris caught in the needle valve preventing it from sealing. Liquids inside an internal combustion engine are extremely detrimental because of the low compressibility of liquids. Although not the most common cause, a severely flooded engine could result in a hydrolock. A hydrolock occurs when a liquid fills a combustion chamber to the point that it is impossible to turn the crankshaft without a catastrophic failure of the engine or one of its vital components. The conventional remedy for a flooded carbureted engine is to steadily hold the throttle full open (full power position) while continuing to crank the engine. This permits the maximum flow of air through the engine, flushing the overly rich fuel mixture out of the exhaust. If the exhaust system is hot enough to autoignite, an after-fire may result; this can be seen as a flame discharging through the exhaust system. On a fuel-injected engine, ignoring the throttle (no fuel) while starting permits electronic logic systems to produce the correct fuel mixture, often based on exhaust gases. Some fuel injection computers interpret "pumping" the throttle to indicate a flooded engine, and alter the fuel-air mixture accordingly. In a carbureted engine equipped with an accelerator pump (which advances fuel flow to match air ingestion under rapid throttle acceleration), "pumping" the throttle will force excess fuel into the engine, further flooding it. In worst cases, the excess fuel can foul spark plugs, sometimes necessitating their cleaning or replacement before the engine will start. This is most likely to occur on a carbureted engine in cold weather, after a running engine has been shut off briefly before being restarted. Doing so can cause the choke valve to configure the mixture for a cold engine start, despite higher actual temperatures, resulting in an overly rich mixture and flooded engine. 
See also Vapor lock References Engine problems
Flooded engine
Technology
698
171,728
https://en.wikipedia.org/wiki/Drag%20equation
In fluid dynamics, the drag equation is a formula used to calculate the force of drag experienced by an object due to movement through a fully enclosing fluid. The equation is: where is the drag force, which is by definition the force component in the direction of the flow velocity, is the mass density of the fluid, is the flow velocity relative to the object, is the reference area, and is the drag coefficient – a dimensionless coefficient related to the object's geometry and taking into account both skin friction and form drag. If the fluid is a liquid, depends on the Reynolds number; if the fluid is a gas, depends on both the Reynolds number and the Mach number. The equation is attributed to Lord Rayleigh, who originally used L2 in place of A (with L being some linear dimension). The reference area A is typically defined as the area of the orthographic projection of the object on a plane perpendicular to the direction of motion. For non-hollow objects with simple shape, such as a sphere, this is exactly the same as the maximal cross sectional area. For other objects (for instance, a rolling tube or the body of a cyclist), A may be significantly larger than the area of any cross section along any plane perpendicular to the direction of motion. Airfoils use the square of the chord length as the reference area; since airfoil chords are usually defined with a length of 1, the reference area is also 1. Aircraft use the wing area (or rotor-blade area) as the reference area, which makes for an easy comparison to lift. Airships and bodies of revolution use the volumetric coefficient of drag, in which the reference area is the square of the cube root of the airship's volume. Sometimes different reference areas are given for the same object in which case a drag coefficient corresponding to each of these different areas must be given. For sharp-cornered bluff bodies, like square cylinders and plates held transverse to the flow direction, this equation is applicable with the drag coefficient as a constant value when the Reynolds number is greater than 1000. For smooth bodies, like a cylinder, the drag coefficient may vary significantly until Reynolds numbers up to 107 (ten million). Discussion The equation is easier understood for the idealized situation where all of the fluid impinges on the reference area and comes to a complete stop, building up stagnation pressure over the whole area. No real object exactly corresponds to this behavior. is the ratio of drag for any real object to that of the ideal object. In practice a rough un-streamlined body (a bluff body) will have a around 1, more or less. Smoother objects can have much lower values of . The equation is precise – it simply provides the definition of (drag coefficient), which varies with the Reynolds number and is found by experiment. Of particular importance is the dependence on flow velocity, meaning that fluid drag increases with the square of flow velocity. When flow velocity is doubled, for example, not only does the fluid strike with twice the flow velocity, but twice the mass of fluid strikes per second. Therefore, the change of momentum per time, i.e. the force experienced, is multiplied by four. This is in contrast with solid-on-solid dynamic friction, which generally has very little velocity dependence. Relation with dynamic pressure The drag force can also be specified as where PD is the pressure exerted by the fluid on area A. 
Here the pressure PD is referred to as dynamic pressure due to the kinetic energy of the fluid experiencing relative flow velocity u. This is defined in similar form as the kinetic energy equation: Derivation The drag equation may be derived to within a multiplicative constant by the method of dimensional analysis. If a moving fluid meets an object, it exerts a force on the object. Suppose that the fluid is a liquid, and the variables involved – under some conditions – are the: speed u, fluid density ρ, kinematic viscosity ν of the fluid, size of the body, expressed in terms of its wetted area A, and drag force Fd. Using the algorithm of the Buckingham π theorem, these five variables can be reduced to two dimensionless groups: drag coefficient cd and Reynolds number Re. That this is so becomes apparent when the drag force Fd is expressed as part of a function of the other variables in the problem: This rather odd form of expression is used because it does not assume a one-to-one relationship. Here, fa is some (as-yet-unknown) function that takes five arguments. Now the right-hand side is zero in any system of units; so it should be possible to express the relationship described by fa in terms of only dimensionless groups. There are many ways of combining the five arguments of fa to form dimensionless groups, but the Buckingham π theorem states that there will be two such groups. The most appropriate are the Reynolds number, given by and the drag coefficient, given by Thus the function of five variables may be replaced by another function of only two variables: where fb is some function of two arguments. The original law is then reduced to a law involving only these two numbers. Because the only unknown in the above equation is the drag force Fd, it is possible to express it as Thus the force is simply ρ A u2 times some (as-yet-unknown) function fc of the Reynolds number Re – a considerably simpler system than the original five-argument function given above. Dimensional analysis thus makes a very complex problem (trying to determine the behavior of a function of five variables) a much simpler one: the determination of the drag as a function of only one variable, the Reynolds number. If the fluid is a gas, certain properties of the gas influence the drag and those properties must also be taken into account. Those properties are conventionally considered to be the absolute temperature of the gas, and the ratio of its specific heats. These two properties determine the speed of sound in the gas at its given temperature. The Buckingham pi theorem then leads to a third dimensionless group, the ratio of the relative velocity to the speed of sound, which is known as the Mach number. Consequently when a body is moving relative to a gas, the drag coefficient varies with the Mach number and the Reynolds number. The analysis also gives other information for free, so to speak. The analysis shows that, other things being equal, the drag force will be proportional to the density of the fluid. This kind of information often proves to be extremely valuable, especially in the early stages of a research project. Air viscosity in a rotating sphere Air viscosity in a rotating sphere has a coefficient, similar to the drag coefficient in the drag equation. 
Experimental methods To empirically determine the Reynolds number dependence, instead of experimenting on a large body with fast-flowing fluids (such as real-size airplanes in wind tunnels), one may just as well experiment using a small model in a flow of higher velocity because these two systems deliver similitude by having the same Reynolds number. If the same Reynolds number and Mach number cannot be achieved just by using a flow of higher velocity it may be advantageous to use a fluid of greater density or lower viscosity. See also Aerodynamic drag Angle of attack Morison equation Newton's sine-square law of air resistance Stall (flight) Terminal velocity References External links Drag (physics) Equations of fluid dynamics Aircraft wing design
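In its standard form the drag equation reads F_D = ½ ρ u² C_D A, and the dynamic pressure is q = ½ ρ u². A short Python example with illustrative values (not drawn from the article) shows the quadratic velocity dependence discussed above: doubling the flow velocity quadruples the drag force.

def drag_force(rho, u, cd, area):
    """Drag equation: F_D = 0.5 * rho * u**2 * C_D * A."""
    return 0.5 * rho * u**2 * cd * area

rho = 1.225       # air density in kg/m^3 at sea level (illustrative)
cd = 1.0          # rough bluff body, as noted in the article
area = 0.5        # reference area in m^2 (illustrative)

f1 = drag_force(rho, 10.0, cd, area)   # drag at 10 m/s
f2 = drag_force(rho, 20.0, cd, area)   # drag at 20 m/s
print(f1, f2, f2 / f1)                 # the ratio is 4: force scales with u**2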
Drag equation
Physics,Chemistry
1,506
1,429,083
https://en.wikipedia.org/wiki/Comparison%20of%20file%20transfer%20protocols
This article lists communication protocols that are designed for file transfer over a telecommunications network. Protocols for shared file systems—such as 9P and the Network File System—are beyond the scope of this article, as are file synchronization protocols. Protocols for packet-switched networks A packet-switched network transmits data that is divided into units called packets. A packet comprises a header (which describes the packet) and a payload (the data). The Internet is a packet-switched network, and most of the protocols in this list are designed for its protocol stack, the IP protocol suite. They use one of two transport layer protocols: the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). In the tables below, the "Transport" column indicates which protocol(s) the transfer protocol uses at the transport layer. Some protocols designed to transmit data over UDP also use a TCP port for oversight. The "Server port" column indicates the port from which the server transmits data. In the case of FTP, this port differs from the listening port. Some protocols—including FTP, FTP Secure, FASP, and Tsunami—listen on a "control port" or "command port", at which they receive commands from the client. Similarly, the encryption scheme indicated in the "Encryption" column applies to transmitted data only, and not to the authentication system. Overview Features The "Managed" column indicates whether the protocol is designed for managed file transfer (MFT). MFT protocols prioritise secure transmission in industrial applications that require such features as auditable transaction records, monitoring, and end-to-end data security. Such protocols may be preferred for electronic data interchange. Ports In the table below, the data port is the network port or range of ports through which the protocol transmits file data. The control port is the port used for the dialogue of commands and status updates between client and server. The column "Assigned by IANA" indicates whether the port is listed in the Service Name and Transport Protocol Port Number Registry, which is curated by the Internet Assigned Numbers Authority (IANA). IANA devotes each port number in the registry to a specific service with a specific transport protocol. The table below lists the transport protocol in the "Transport" column. Serial protocols The following protocols were designed for serial communication, mostly for the RS-232 standard. They are used for uploading and downloading computer files via modem or serial cable (e.g., by null modem or direct cable connection). UUCP is one protocol that can operate with either RS-232 or the Transmission Control Protocol as its transport. The Kermit protocol can operate over any computer-to-computer transport: direct serial, modem, or network (notably TCP/IP, including on connections secured by SSL, SSH, or Kerberos). OBject EXchange is a protocol for binary object wireless transfer via the Bluetooth standard. Bluetooth was conceived as a wireless replacement for RS-232. Overview Features See also Notes References Further reading Lists of software Lists of network protocols Network software comparisons
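Many of the registered ports referred to above are recorded in the local services database that mirrors the IANA registry. A small sketch using Python's standard library looks a few of them up; the exact results depend on the system's services file, so this is illustrative rather than authoritative.

# Look up registered ports for some file transfer protocols via the system services database.
import socket

for service, proto in [("ftp", "tcp"), ("ftp-data", "tcp"), ("tftp", "udp"), ("ssh", "tcp")]:
    try:
        print(service, proto, socket.getservbyname(service, proto))
    except OSError:
        print(service, proto, "not listed on this system")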
Comparison of file transfer protocols
Technology
641
25,912,923
https://en.wikipedia.org/wiki/Psilocybe%20natalensis
Psilocybe natalensis is a species of psilocybin mushroom in the family Hymenogastraceae. It is found in South Africa. The specific epithet refers to its type locality in Natal. The species was described as new to science in 1995 by Jochen Gartz, Derek Reid, Michael Smith, and Albert Eicker. It is very closely related to Psilocybe cubensis, and differs in its habitat preference, less persistent annulus and genetic sequence. See also List of psilocybin mushrooms List of Psilocybe species References External links Psychoactive fungi natalensis Psychedelic tryptamine carriers Fungi of Africa Taxa named by Derek Reid Fungi described in 1995 Fungus species
Psilocybe natalensis
Biology
145
67,779,776
https://en.wikipedia.org/wiki/Sergei%20Viktorovich%20Bochkarev
Sergei (or Sergey) Viktorovich Bochkarev (or Bočkarev) (Сергей Викторович Бочкарёв, born July 24, 1941, in Kuybyshev now renamed Samara) is a Soviet and Russian mathematician. Education and career He received in 1964 his undergraduate degree from Moscow Institute of Physics and Technology and in 1969 his Russian Candidate of Sciences degree (PhD) from Moscow State University. His dissertation о рядах Фурье по системе Хаара (On Fourier series in the Haar system) was supervised by Pyotr Lavrentyevich Ulyanov. From Moscow State University, Bochkarev received in 1974 his Russian Doctor of Science degree (habilitation). Since 1971 he has worked at the Steklov Institute of Mathematics, where he holds the title of leading scientific researcher in the Department of Function Theory. His research deals with harmonic analysis, BMO spaces, Hardy spaces, functional analysis, construction of orthogonal bases in various function spaces, and exponential sums. In 1977 he was awarded the Salem Prize. In 1978 he was an Invited Speaker with talk Метод усреднения в теории ортогональных рядов (The averaging method in the theory of orthogonal bases) at the International Congress of Mathematicians in Helsinki. Selected publications On a problem of Zygmund, Mathematics of the USSR-Izvestia, vol. 7, no. 3, 1973, p. 629 Existence of a basis in the space of functions analytic in the disk, and some properties of Franklin's system, Math. USSR Sbornik, vol. 24, 1974, pp. 1–16 The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Tr. MIAN SSSR, vol. 146, 1978, pp. 3–87 The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Proc. Steklov Inst. Math., vol. 146, 1980, pp. 1–92 Everywhere divergent Fourier series with respect to the Walsh system and with respect to multiplicative systems, Russian Math. Surveys, vol. 59, 2004, pp. 103–124 Multiplicative Inequalities for the L1 Norm: Applications in Analysis and Number Theory, Proc. Steklov Inst. Math., vol. 255, 2006, pp. 49–64 A Generalization of Kolmogorov's Theorem to Biorthogonal Systems, Proc. Steklov Inst. Math., vol. 260, 2008, pp. 37–49 References External links 1941 births Living people Moscow Institute of Physics and Technology alumni Moscow State University alumni Soviet mathematicians 20th-century Russian mathematicians 21st-century Russian mathematicians Mathematical analysts
Sergei Viktorovich Bochkarev
Mathematics
616
56,410,801
https://en.wikipedia.org/wiki/Yui%20%28behavior%29
Yui (Japanese/Okinawan: 結, ゆい) is a system of collaborative work in small settlements and autonomous units. It consists of mutual aid and cooperation among a village's residents for tasks that require a great deal of time, money, and effort. Though the loan word has crept into standard Japanese, the cultural concept is more particular to Okinawan life. Nevertheless, traditional informal fire brigades in other parts of Japan have been considered a type of Yui as labor on demand, in addition to the more ubiquitous agricultural collectives. Yui Maaru 「ゆいまーる」 and Ii Maaru 「いーまーる」 mean "mutual assistance", given equally and in turn. No reward is expected. They are rooted in Okinawan concepts and are not limited to mutual farm work; the practice also extends to the construction of houses and graveyards, and is therefore not entirely agriculture-based. Such informal groups are called Yui-gumi 「結い組」. They consist of relatives, friends, neighborhood residents, and so on. As modernization progresses and agriculture dies out, the practice is becoming less common and more monetary. However, unlike the volunteer "moles" of Mexico (Topos de Tlatelolco), who rescue trapped people in emergencies such as earthquakes, these informal groups are generalists rather than specialists in a particular task. The practice may be compared and contrasted with similar customs in other societies, such as the pumasi (품앗이) culture of Korea. Modern associations Notably, the local railway in Okinawa is named after the concept and practice. Organizational behavior Okinawan culture
Yui (behavior)
Biology
313
1,220,976
https://en.wikipedia.org/wiki/Reclaimer
A reclaimer is a large machine used in bulk material handling applications. A reclaimer's function is to recover bulk material such as ores and cereals from a stockpile. A stacker is used to stack the material. Reclaimers are volumetric machines and are rated in m3/h (cubic meters per hour) for capacity, which is often converted to t/h (tonnes per hour) based on the average bulk density of the material being reclaimed. Reclaimers normally travel on a rail between stockpiles in the stockyard. A bucket wheel reclaimer can typically move in three directions: horizontally along the rail; vertically by "luffing" its boom and rotationally by slewing its boom. Reclaimers are generally electrically powered by means of a trailing cable. Reclaimer types Bucket wheel reclaimers use "bucket wheels" to remove material from the pile they are reclaiming. Scraper reclaimers use a series of scrapers on a chain to reclaim the material. The reclaimer structure can be of a number of types, including portal and bridge. Reclaimers are named based on their type, for example, "Bridge reclaimer." Portal and bridge reclaimers can both use either bucket wheels or scrapers to reclaim the product. Bridge type reclaimers blend the stacked product as it is reclaimed. Whenever material is laid down during any reclaiming process, it creates a pile. Blending bed stacker reclaimers form such piles in a circular fashion. They do this by taking reclaimed material and passing it through a conveyor system that rotates around the center of the pile to create a circle. This allows the pile to be evenly spread out during the reclaiming process and allows for the oldest material in the pile to be reclaimed before the newer material. During this process, a harrow tool is used to cut through the reclaimed material so that the material can be combined. Some Blending bed reclaimers are equipped with rakes to ensure that no material gets stuck in the machine. These rakes are made with various materials and sizes based on the climate in which the reclaimer operates. In below freezing temperatures, a harder material is used to create a rake with modified edges, which allow for any ice or debris to be broken up before piling. Cantilever chain reclaimers are designed to use longer booms. Cantilever chain reclaimers use a truss system that is connected to a liner and then to a chain, this chain is bolted onto the elevation chute and fixed to the reclaimer. The angle of this boom is then set by a cable-winch system and is supported using a cable system. With the cable, the boom can be lowered slightly during each reclaiming cycle. This chain system creates a push and pull effect that allows for any loose material to be collected and moved to the edge of the reclaimed pile. After the loose material is collected it is lifted and moved for further processing. Reclaimer applications A reclaimer is used principally in reclaiming processes. These processes can have low, medium, and high material flow rates. Reclaimers are made up of a bucket-wheel, a counterweight boom, and a rocker; they also use a conveyor system to move any material reclaimed from the boom to its specific pile. These machines can be assembled differently based on the required reclaiming load rate and boom length. These changes are made in order to accommodate for the associated fluctuations in flow rates and load patterns. In the event of high material flow rates, a combination of a boom and bucket wheel is used. 
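The volumetric-to-mass capacity conversion mentioned above is simple arithmetic: t/h = m³/h × bulk density in t/m³. A small sketch with illustrative figures (the machine rating and the bulk density are examples, not values from the article):

def mass_capacity(volumetric_m3_per_h, bulk_density_t_per_m3):
    """Convert a reclaimer's volumetric rating to tonnes per hour."""
    return volumetric_m3_per_h * bulk_density_t_per_m3

# Illustrative only: a 5000 m3/h machine handling ore with a bulk density of roughly 2.4 t/m3.
print(mass_capacity(5000, 2.4))   # 12000 t/h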
Control systems Stackers and Reclaimers were originally manually controlled machines with no remote control. Modern machines are typically fully automated with their parameters (for stacking or reclaiming) remotely set. Some older reclaimers may still be manually controlled, as reclaiming is more difficult to automate than stacking because the automatic detection of pile edges is complicated by different environmental conditions and different bulk materials. See also Stacker Bucket-wheel excavator References Bulk material handling Engineering vehicles Mining equipment
Reclaimer
Engineering
812
66,037,756
https://en.wikipedia.org/wiki/Bmon
bmon is a free and open-source monitoring and debugging tool to monitor bandwidth and capture and display networking-related statistics. It features various output methods including an interactive curses user interface and programmable text output for scripting. bmon allows the user to see: Network bandwidth real-time visualization Total amount of transmitted data CRC errors Collisions ICMPv6 traffic packets References Linux software Free software programmed in C Software using the BSD license Software using the MIT license Debuggers Network software
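A typical invocation, as remembered from the bmon man page rather than stated in the article (the flag names and the "ascii" output module should be verified against the installed version): -p selects the interfaces to monitor and -o selects the output method, for example the interactive curses view or plain text output for scripting. Wrapped in Python for consistency with the other examples:

# Run bmon non-interactively with text output; flags are assumed from memory, check `man bmon`.
import subprocess

subprocess.run(["bmon", "-p", "eth0", "-o", "ascii"])   # runs until interrupted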
Bmon
Engineering
104
18,971,743
https://en.wikipedia.org/wiki/Green%20PR
Green PR is a sub-field of public relations that communicates an organization's corporate social responsibility or environmentally friendly practices to the public. The goal is to produce increased brand awareness and improve the organization's reputation. Tactics include placing news articles, winning awards, communicating with environmental groups and distributing publications. The term is derived from the "green movement", an ideology which seeks to minimize the effect of human activity on the environment. In this sector of public relations, much of the work focuses on combating the misinformation spread by Big Oil and natural gas companies. Many of these companies, like ExxonMobil and BP, have disseminated content that implies they’ve become the leaders in sustainable policy. Practitioners must educate their publics and rebuild relationships that are mutually beneficial for the company and the planet. Importance Environmentalism has become increasingly popular among consumers and media. A nine-country survey found that 85% of consumers around the world are willing to change their consumption habits to make tomorrow’s world a better place, and over half (55%) would help a brand “promote” a product if a good cause were behind it. The study also found that, when choosing between two brands of the same quality and price, social purpose affected consumers’ decision the most (41%), ahead of design and innovation (32%) and loyalty to the brand (26%). According to PR Week: "The significance of corporate America embracing the green movement cannot be denied. Some still think it's a fad, but all signs point to the contrary - a sustained commitment to sustainability, either for economic efficiencies or reach out to a public whose goals and values are changing." As a management tool, public relations gives practitioners the ability to disseminate information and change perception and awareness. A role of this practice is to bolster organizations within their community, using trade tools like the press release and lobbyists to persuade legislative bodies to enact changes in policy that fortify the goals of a socially responsible corporation. In principle, sustainability seeks to dismantle existing infrastructures and replace them with clean green technologies. In doing so, the spread of information and the recalibration of traditional practice are vital for a successful campaign. Communication in this field is also underscored by the need to make decisions at the company level, as these practices are not often federally implemented. As far back as 2008, the majority of companies surveyed by the Economist Intelligence Unit cited corporate social responsibility as "a necessary cost of doing business" as well as a potential tactic that can create distinguishing marketability. Now that these issues cannot be ignored in the marketplace, companies must align their manufacturing, sourcing, and sales strategies to curry favor with a consumer base that continually grows more anxious about the state of the environment. See also Greenwash Corporate Social Responsibility References Further reading Environmental communication Public relations terminology
Green PR
Environmental_science
585
1,086,083
https://en.wikipedia.org/wiki/X-ray%20absorption%20fine%20structure
X-ray absorption fine structure (XAFS) is a specific structure observed in X-ray absorption spectroscopy (XAS). By analyzing the XAFS, information can be acquired on the local structure and on the unoccupied local electronic states. Atomic spectra The atomic X-ray absorption spectrum (XAS) of a core-level in an absorbing atom is separated into states in the discrete part of the spectrum called "bounds final states" or "Rydberg states" below the ionization potential (IP) and "states in the continuum" part of the spectrum above the ionization potential due to excitations of the photoelectron in the vacuum. Above the IP the absorption cross section attenuates gradually with the X-ray energy. Following early experimental and theoretical works in the thirties, in the sixties using synchrotron radiation at the National Bureau of Standards it was established that the broad asymmetric absorption peaks are due to Fano resonances above the atomic ionization potential where the final states are many body quasi-bound states (i.e., a doubly excited atom) degenerate with the continuum. Spectra of molecules and condensed matter The XAS spectra of condensed matter are usually divided in three energy regions: Edge region The edge region usually extends in a range of few eV around the absorption edge. The spectral features in the edge region i) in good metals are excitations to final delocalized states above the Fermi level; ii) in insulators are core excitons below the ionization potential; iii) in molecules are electronic transitions to the first unoccupied molecular levels above the chemical potential in the initial states which are shifted into the discrete part of the core absorption spectrum by the Coulomb interaction with the core hole. Multi-electron excitations and configuration interaction between many body final states dominate the edge region in strongly correlated metals and insulators. For many years the edge region was referred to as the “Kossel structure” but now it is known as "absorption edge region" since the Kossel structure refers only to unoccupied molecular final states which is a correct description only for few particular cases: molecules and strongly disordered systems. X-ray Absorption Near Edge Structure The XANES energy region extends between the edge region and the EXAFS region over a 50-100 eV energy range around the core level x-ray absorption threshold. Before 1980 the XANES region was wrongly assigned to different final states: a) unoccupied total density of states, or b) unoccupied molecular orbitals (kossel structure) or c) unoccupied atomic orbitals or d) low energy EXAFS oscillations. In the seventies, using synchrotron radiation in Frascati and Stanford synchrotron sources, it was experimentally shown that the features in this energy region are due to multiple scattering resonances of the photoelectron in a nanocluster of variable size. Antonio Bianconi in 1980 invented the acronym XANES to indicate the spectral region dominated by multiple scattering resonances of the photoelectron in the soft x-ray range and in the hard X-ray range. In the XANES energy range the kinetic energy of the photoelectron in the final state is between few eV and 50-100 eV. In this regime the photoelectron has a strong scattering amplitude by neighboring atoms in molecules and condensed matter, its wavelength is larger than interatomic distances, its mean free path could be smaller than one nanometer and finally the lifetime of the excited state is in the order of femtoseconds. 
The XANES spectral features are described by the full multiple scattering theory proposed in the early 1970s. Therefore, the key step in XANES interpretation is the determination of the size of the atomic cluster of neighboring atoms in which the final states are confined, which can range from 0.2 nm to 2 nm in different systems. This energy region was later (in 1982) also named near-edge X-ray absorption fine structure (NEXAFS), which is synonymous with XANES. For more than 20 years the interpretation of XANES was the object of discussion, but there is now agreement that the final states are "multiple scattering resonances" and that many-body final states play an important role.
Intermediate region There is an intermediate region between the XANES and EXAFS regions where low n-body distribution functions play a key role.
Extended X-ray absorption fine structure The oscillatory structure extending for hundreds of electron volts past the edges was called the "Kronig structure" after Ralph Kronig, who assigned this structure in the high-energy range (i.e., for photoelectron kinetic energies larger than about 100 eV, in the weak scattering regime) to the single scattering of the excited photoelectron by neighbouring atoms in molecules and condensed matter. This regime was named EXAFS in 1971 by Sayers, Stern and Lytle, and it developed only after the use of intense synchrotron radiation sources.
Applications of X-ray absorption spectroscopy X-ray absorption edge spectroscopy corresponds to the transition from a core level to an unoccupied orbital or band and mainly reflects the unoccupied electronic states. EXAFS, resulting from the interference in the single scattering process of the photoelectron scattered by surrounding atoms, provides information on the local structure (a standard parameterization of these oscillations is sketched after the reference list below). Information on the geometry of the local structure is provided by the analysis of the multiple scattering peaks in the XANES spectra. The XAFS acronym was later introduced to indicate the sum of the XANES and EXAFS spectra.
See also SEXAFS EXAFS XANES References External links M. Newville, Fundamentals of XAFS S. Bare, XANES measurements and interpretation B. Ravel, A practical introduction to multiple scattering X-ray absorption spectroscopy
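As an illustrative supplement to the EXAFS section above (not part of the original article; the notation follows common practice rather than a specific source), the single-scattering EXAFS oscillations are usually parameterized, for coordination shells j of N_j atoms at distance R_j, as
\chi(k) = \sum_j \frac{N_j S_0^2 |f_j(k)|}{k R_j^2} \sin\!\big(2 k R_j + \delta_j(k)\big)\, e^{-2 k^2 \sigma_j^2}\, e^{-2 R_j/\lambda(k)}
where f_j(k) and \delta_j(k) are the scattering amplitude and phase shift of the neighbouring atoms, \sigma_j^2 is the mean-square disorder in R_j, S_0^2 is an amplitude reduction factor, and \lambda(k) is the photoelectron mean free path; fitting these parameters to measured spectra is what yields the local structural information described above.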
X-ray absorption fine structure
Chemistry,Materials_science,Engineering
1,248
2,695,651
https://en.wikipedia.org/wiki/Browser%20sniffing
Browser sniffing (also known as browser detection) is a set of techniques used in websites and web applications in order to determine the web browser a visitor is using, and to serve browser-appropriate content to the visitor. It is also used to detect mobile browsers and send them mobile-optimized websites. This practice is sometimes used to circumvent incompatibilities between browsers due to misinterpretation of HTML, Cascading Style Sheets (CSS), or the Document Object Model (DOM). While the World Wide Web Consortium maintains up-to-date central versions of some of the most important Web standards in the form of recommendations, in practice no software developer has designed a browser which adheres exactly to these standards; implementation of other standards and protocols, such as SVG and XMLHttpRequest, varies as well. As a result, different browsers display the same page differently, and browser sniffing was developed to detect the web browser and help ensure consistent display of content.
Sniffer methods
Client-side sniffing Web pages can use programming languages such as JavaScript, which are interpreted by the user agent, with results sent to the web server. For example:
var isIEBrowser = false;
if (window.ActiveXObject) {
  isIEBrowser = true;
}
// Or, shorter:
var isIE = (window.ActiveXObject !== undefined);
This code is run by the client computer, and the results are used by other code to make the necessary adjustments on the client side. In this example, the client computer is asked to determine whether the browser can use a feature called ActiveX. Since this feature was proprietary to Microsoft, a positive result indicates that the client may be running Microsoft's Internet Explorer. However, this is no longer a reliable indicator since Microsoft's open-source release of the ActiveX code, which means the feature can be used by other browsers as well.
Standard browser detection method The web server communicates with the client using a communication protocol known as HTTP, or Hypertext Transfer Protocol, which specifies that the client send the server information about the browser being used to view the website in a User-Agent header.
Server-side sniffing Extensive browser-sniffing techniques enable persistent user tracking even if users try to stay anonymous. See device fingerprint for more details on browser fingerprinting.
Issues and standards Many websites use browser sniffing to determine whether a visitor's browser is unable to use certain features (such as JavaScript, DHTML, ActiveX, or cascading style sheets), and display an error page if a certain browser is not used. However, it is virtually impossible to account for the tremendous variety of browsers available to users. Generally, a web designer using browser sniffing to determine what kind of page to present will test for the three or four most popular browsers and provide content tailored to each of these. If a user is employing a user agent not tested for, there is no guarantee that a usable page will be served; thus, the user may be forced either to change browsers or to avoid the page. The World Wide Web Consortium, which sets standards for the construction of web pages, recommends that web sites be designed in accordance with its standards and be arranged to "fail gracefully" when presented to a browser which cannot deal with a particular standard. Browser sniffing also increases the maintenance effort needed: websites that treat some browsers differently should provide an alternative version for the other browsers.
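The standard detection method described above can be sketched on the server as well. The following is a minimal, illustrative example only (it is not from the original article; it assumes a Node.js environment and uses only the built-in http module, and the substring checks and port number are arbitrary choices):
const http = require('http');

http.createServer((req, res) => {
  // Header names are lower-cased by Node's http module.
  const ua = req.headers['user-agent'] || '';
  // Naive substring checks on the User-Agent string.
  let family = 'unknown';
  if (ua.includes('Firefox/')) family = 'Firefox';
  else if (ua.includes('Edg/')) family = 'Edge';
  else if (ua.includes('Chrome/')) family = 'Chrome';
  else if (ua.includes('Safari/')) family = 'Safari';
  res.end('Detected browser family: ' + family);
}).listen(8080);
The order of the checks matters because Chromium-based browsers typically include both "Chrome" and "Safari" tokens, and Edge additionally includes "Edg", in their User-Agent strings; this fragility is exactly the maintenance problem discussed below.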
Use of user agent strings is error-prone because the developer must check for the appropriate part of the string, such as "Gecko" instead of "Firefox". The developer must also ensure that future versions are supported. Furthermore, some browsers allow the user agent string to be changed, making the technique unreliable. See also Computer programming HTTP Web browser Feature detection (web development) (a synonym of "browser sniffing" in some contexts) Browser fingerprint Document Object Model User agent Web standards Content sniffing References Web browsers Web development
Browser sniffing
Engineering
814
71,277,483
https://en.wikipedia.org/wiki/Ludwig%20Pohl
Ludwig Maria Pohl (28 September 1932 – 24 October 2020) was an organic chemist who was instrumental in developing new liquid crystal substance classes and compounds which made liquid crystal displays (LCDs) widely used. His team at Merck KGaA, Darmstadt, developed liquid crystal mixtures optimized for various applications. Over the years, the Merck Group became a leading supplier of liquid crystal compounds worldwide.
Education and career Pohl was born and raised in Liebau, Lower Silesia. After World War II, his family moved out of the region, which had become part of Poland, to northern Germany. Starting in 1954, he studied chemistry at the Technische Universität Hannover and at the University of Würzburg. In 1962, he obtained his PhD in physical chemistry at the University of Hanover. For the following two years, he worked as an assistant at this university. He then moved to Würzburg again and worked there as a postdoc until 1966. That year, he accepted a research position at Merck KGaA in Darmstadt, where he worked on the analysis of the structure of pharmaceutical drugs.
Liquid crystals On a trip to the United States in 1968, Pohl became aware of the potential of liquid crystals for display applications, which were still largely unknown at the time. Initially, liquid crystals were not considered to be a business opportunity for the Merck Group, and Pohl and his colleagues repeatedly had to overcome internal resistance. They sought outside financing for their research, which was granted by German federal agencies. Pohl's persistence paid off after years of effort to find better-suited liquid crystals, when he, Rudolf Eidenschink and colleagues successfully synthesized and tested the new class of cyanophenylcyclohexanes based on 4-pentylphenol. Together with other developments in this field, this enabled profitable industrial production, and the Merck Group became the leading supplier of liquid crystal substances for various types of LCDs. Later, the Merck Group bought patents from former competitors and attracted senior professionals, such as George William Gray in 1990, to work with Pohl's team.
Honors 2014: Ludwig Pohl was inducted into the Hall of Fame of German Research, which honored his unwavering scientific curiosity and recognized his life's work with a special distinction. He received this honor together with Stefan Hell, who a few weeks later received the 2014 Nobel Prize in Chemistry.
Publications Ludwig Pohl: Publications after 2006. researchgate.net. Retrieved 12 July 2022. Ludwig Pohl, D. Demus, G. Pelzl, Heino Finkelmann, Karl Hiltrop: Liquid Crystals (English), Stegemeyer Steinkopff, 2012, ISBN 3662083957
Patents Pohl was named as inventor or co-inventor of over 100 patents.
Private life Pohl was married to Hannelore Pohl; they had two daughters and a son, who is a professor at Stanford University.
References Further reading David Dunmur & Tim Sluckin (2011): Soap, Science, and Flat-screen TVs: a history of liquid crystals. Oxford University Press Hirohisa Kawamoto (2002): The history of liquid-crystal displays. Proceedings of the IEEE, Vol. 90, No. 4, April 2002 1932 births 2020 deaths Liquid crystals German organic chemists People from Lubawka People from the Province of Lower Silesia
Ludwig Pohl
Chemistry
688
33,149,795
https://en.wikipedia.org/wiki/Aharonov%E2%80%93Casher%20effect
The Aharonov–Casher effect is a quantum mechanical phenomenon predicted in 1984 by Yakir Aharonov and Aharon Casher, in which a traveling magnetic dipole is affected by an electric field. It is dual to the Aharonov–Bohm effect, in which the quantum phase of a charged particle depends upon which side of a magnetic flux tube it passes. In the Aharonov–Casher effect, the particle has a magnetic moment and the tubes are charged instead. It was observed in a gravitational neutron interferometer in 1989 and later by fluxon interference of magnetic vortices in Josephson junctions. It has also been seen with electrons and atoms. In both effects the particle acquires a phase shift \(\varphi\) while traveling along some path P. In the Aharonov–Bohm effect it is
\[\varphi_{AB} = \frac{q}{\hbar} \int_P \mathbf{A} \cdot d\mathbf{x},\]
while for the Aharonov–Casher effect it is
\[\varphi_{AC} = \frac{1}{\hbar c^2} \int_P (\mathbf{E} \times \boldsymbol{\mu}) \cdot d\mathbf{x},\]
where \(q\) is its charge, \(\mathbf{A}\) is the vector potential, \(\mathbf{E}\) is the electric field, and \(\boldsymbol{\mu}\) is the magnetic moment. The effects have been observed together. References Bibliography See also Duality (electricity and magnetism) Quantum mechanics Physical phenomena
Aharonov–Casher effect
Physics
216
2,428,490
https://en.wikipedia.org/wiki/Cache%20poisoning
Cache poisoning refers to a computer security vulnerability where invalid entries can be placed into a cache, which are then assumed to be valid when later used. Two common varieties are DNS cache poisoning and ARP cache poisoning. Web cache poisoning involves the poisoning of shared web caches; it has also led to security issues in programming languages (in 2021, for example, affecting all Python versions current at the time and prompting expedited security updates). Attacks on other, more specific, caches also exist. References Computer security exploits Cache (computing)
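As a conceptual sketch only (this is not from the article and does not describe any specific product), the following JavaScript fragment shows the basic mistake behind web cache poisoning: a cache keyed on the URL alone, even though the generated response also depends on a request header that is not part of the cache key. The first client to request a URL can then plant a response that every later client receives.
// Hypothetical in-memory cache keyed only by URL.
const cache = new Map();

function handleRequest(url, headers, renderPage) {
  if (cache.has(url)) {
    return cache.get(url); // later visitors get whatever was stored first
  }
  // Vulnerability: the response varies with an "unkeyed" header value,
  // so a crafted x-forwarded-host (for example) ends up in the cached copy.
  const page = renderPage(url, headers['x-forwarded-host']);
  cache.set(url, page);
  return page;
}
Real deployments avoid this by including every input that influences the response in the cache key, or by refusing to cache such responses at all.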
Cache poisoning
Technology
104
9,553,082
https://en.wikipedia.org/wiki/Fernseh
The Fernseh AG television company was registered in Berlin on July 3, 1929, by John Logie Baird, Robert Bosch, Zeiss Ikon and D.S. Loewe as partners, with an initial capital of 100,000 Reichsmark. John Baird owned Baird Television Ltd. in London, Zeiss Ikon was a camera company in Dresden, D.S. Loewe owned a company in Berlin, and Robert Bosch owned Robert Bosch GmbH in Stuttgart. Fernseh AG researched and manufactured television equipment.
Etymology The company name "Fernseh AG" is a compound of Fernsehen 'television' and Aktiengesellschaft (AG) 'joint-stock company'. The company was mainly known by its German abbreviation "FESE". See the "See also" section on this page for other uses.
Early years In 1929 Fernseh AG's original board of directors included Emanuel Goldberg, Oliver George Hutchinson (for Baird), David Ludwig Loewe, and Erich Carl Rassbach (for Bosch), with Eberhard Falkenstein doing the legal work. Carl Zeiss's company worked alongside the early Bosch company. Much of the early work was in the area of research and development. Along with early TV sets (DE-6, E1, DE10), Fernseh AG built the first "remote truck"/OB van, an "intermediate-film" mobile television camera, in August 1932. This was a film camera whose film was developed in the truck; a telecine then transmitted the signal almost "live".
Fernseh GmbH In 1939 Robert Bosch GmbH took complete ownership of Fernseh AG when Zeiss Ikon AG sold its share of Fernseh AG. In 1952 Fernseh moved to Darmstadt, Germany, and expanded its broadcast product line. In 1967 Fernseh, by then commonly called "Bosch Fernseh", introduced color TV products. Fernseh offered a full line of video and film equipment: professional video cameras, VTRs and telecine devices. On August 27, 1967, the first color TV program in Germany aired, with a live broadcast from a Bosch Fernseh outside broadcast (OB) van. The networks ZDF, NDR and WDR each acquired a new color OB van from Bosch Fernseh to begin broadcasting in color.
Fernsehanlagen GmbH In 1972 Robert Bosch renamed its TV division Fernsehanlagen GmbH ("Fernseh facilities"). The company supplied almost all the studio equipment for the 1972 Summer Olympics in Munich. The Darmstadt headquarters had over 2,000 employees in 1972. In 1972 Fernseh also started to manufacture SECAM TV studio equipment for Moscow.
Fernseh Inc. In October 1979 Bell and Howell's TeleMation Inc. division, located in Salt Lake City, Utah, entered a joint venture with the Fernseh Division of Robert Bosch GmbH. The new joint venture was called Fernseh Inc., Bosch Fernseh Division, located in Darmstadt, Germany. In April 1982 Bosch fully acquired Fernseh Inc., renaming it "Robert Bosch Corporation, Fernseh Division". In 1986 Bosch entered into a new joint venture with Philips Broadcast in Breda, Netherlands. This new company was called Broadcast Television Systems, or BTS Inc. Philips had been in the broadcast market for many years with a line of PC- and LDK-series Norelco professional video cameras and other video products. In 1995 Philips Electronics North America Corp. fully acquired BTS Inc., renaming it Philips Broadcast–Philips Digital Video Systems. Philips sold many of the Spirit DataCines. In March 2001 this Philips division was sold to Thomson SA; the division was called Thomson Multimedia. In 2002, the French electronics giant Thomson SA also acquired the Grass Valley Group from a private investor that had acquired it three years earlier from Tektronix in Beaverton, Oregon, USA.
The name of this division of Thomson was shortened to Grass Valley. The Fernseh factory in Darmstadt, near the Darmstadt Train Station and the European Space Operations Centre, was moved a short distance to Weiterstadt, Germany. (Later, Grass Valley was sold to Belden on February 6, 2014; Belden also owned Miranda.) The Thomson Film Division, located in Weiterstadt and including the product line of Spirit DataCine 4K, Bones Workstation, Scanity real-time film scanner and LUTher 3D Color Space converter, was sold to Parter Capital Group. The sale was made public on Sept. 9, 2008 and completed on Dec. 1, 2008. The new headquarters was still in Weiterstadt, the former Bosch Fernseh — BTS factory. Parter Capital Group continued to have worldwide offices to support products from Weiterstadt, Germany. The new name of the company was Digital Film Technology (DFT). DFT later became part of a new company, Precision Mechatronics GmbH, in Weiterstadt, Germany. On October 1, 2012 Precision Mechatronics and DFT were acquired by Prasad Group, part of Prasad Studios (2012–2024). In 2013 DFT moved from Weiterstadt to Arheilgen-Darmstadt, Germany.
Products
Home Television sets (later moved to the Blaupunkt Division) (1930– )
Home Radios (later moved to the Blaupunkt Division) (1930– )
Vacuum tube
Tube tester
Early mechanical camera for mechanical television
1938, "Universal mechanical scanner"
Intermediate film system for remote truck (1936)
"Farvimeter", a universal electrical testing device (1947)
"Farvigraph", a universal oscilloscope (1949)
Slide scanner for station ID and test patterns (1955)
Filmgeber film chain F16LP15, analog
Fernseh theater TV system, 1935
TV transmitter, 1944
Master control, B&W video switcher
Sound recorder — player, Diaabtaster DAT15
Fernseh B & W film chain OMY
Color film chain - analog BCM-40 and B&W BM-20
2 inch Quadruplex videotape, 1966–70s
B & W cameras such as Videokamera s/w K11, VK9, HA
M series monitors
TV oscilloscope
Video-signal generators
Television standards conversion (1970s), analog Model NC 56 P 40, with Plumbicon tube camera inside; transcoder to convert PAL to SECAM and SECAM to PAL (1972)
BCR, pre-BCN VTR
BCN series 1 inch type B videotape (1979–1989), analog VTR
KC series color professional video camera: KCU-40, KCR, KCK-40, KCK-R-40, KCP-40, KCP-60, KCA, KCF-1, KCM-125, KCA, KCN92, KCN (1967–1990)
Color film chain, with KCU-40 camera
MC series color and B&W video monitors: MC-37, MC-50, MCH 51, MH 21
OB van - TV remote trucks - and terminal rack equipment
RME series mixers — switcher - vision mixer, analog
FDL-60 Telecine - the world's first CCD telecine (1979–1989)
FRP-60 Color Corrector - color grading (1983–1989)
FDL-90 Telecine (1989–1993) (now under BTS)
Noise/grain reducers: FDGR, DNR7, MNR9, MNR10, MNR11, VS4, Scream, Scream 4k
KCA-110 ENG camera
KCF-1 ENG camera (later Quartercam, not sold)
CCIR 601 products: CD7, DC7, 4X4 Booster, Test Gen., Encoders, Decoders.
DD series CCIR 601-D1 mixers - vision mixer: DD5, DD10, DD20, DD30
DCR series D1 VTR: DCR-100, DCR-300, DCR-500
BCH 1000 HDTV 1" VTR
KCH 1000 HDTV camera (RMH 1000)
FLH 1000 Telecine, the world's first HDTV CCD telecine (1994–1996)
Quadra 4:4:4 Telecine (1993–1998)
D6 HDTV VTR, uncompressed HDTV VTR (VooDoo) - (Gigabit Data Recorder) (2000–2006) (now under Philips)
Spirit DataCine motion picture film scanner and HDTV telecine SDC-2000 (1996–2006), also: SDC2001, SDC2002
Phantom Transfer Engine software for Spirit DataCine telecine, for virtual telecine (1998– )
Shadow HDTV Telecine STE (2000–2006)
VDC-2000
Specter Virtual telecine (1999–2002)
Specter FS Virtual telecine (2002–2006)
Spirit DataCine 4k datacine - telecine (2004–2014), also Spirit 2k/Spirit-HD (now under Thomson-Grass Valley)
Bones Linux-based software for Spirit DataCine telecine, transfer engine software (2005–2014)
Bones Dailies (2008–2014)
LUTher 3D LUT Color Space (2005–2013)
Flexxity (2011–2014)
Scanity film scanner (2009– ) (now under DFT)
Phantom 2: Linux-based transfer engine software and workstation for Spirit DataCine (2014– ) (now under DFT)
Polar HQ, a 9.3K native scanner, came out in 2023.
Offices Past and current offices in the cities of acquisitions (see History):
Cergy, France (Thomson World Headquarters)
Salt Lake City, Utah, United States - from TeleMation Inc - Bell and Howell
Beaverton, Oregon, United States - from Tektronix
Nevada City, California, United States — from Grass Valley Group
Breda, Netherlands - from Philips - Norelco
Weiterstadt - Darmstadt, Germany - from Bosch Fernseh (DFT); in 2013 DFT moved from Weiterstadt to Arheilgen-Darmstadt, Germany.
See also Hans Walz Post-production Video camera
Fernseh prefix:
Fernsehturm Berlin Television Tower.
Fernsehen, German word for "television".
Fernseh sprechstellen, German videotelephony.
Fernsehturm Stuttgart, telecommunications tower in Stuttgart.
Fernsehsender Paul Nipkow, the first public television station in the world.
Fernsehturm (disambiguation), German word for television tower.
Fernsehen der DDR, state television broadcaster in East Germany.
Fernsehturm Heidelberg, Heidelberg transmission tower.
Fernsehturm Dresden-Wachwitz, TV tower in Dresden.
Fernsehserien, German for TV series comprising several episodes.
ZDF Fernsehgarten ("ZDF Television Garden"), a German entertainment show.
Deutscher Fernseh-Rundfunk, early German television broadcasting.
Fernsehproduktion, a television production.
Fernsehnorm, TV standard.
Fernsehpitaval, crime TV show from 1958 to 1978 on GDR television.
References and notes External links History of Bosch Fernseh Bosch.com History Film Products Early Fernseh TV set Early Camera BCN Pictures Fernseh mechanical TVs Deutsches Fernsehmuseum - German TV Museum Mass media companies of Germany Cameras Video storage Film production Television technology Video hardware Technicolor SA
Fernseh
Technology,Engineering
2,318