| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
52,349,136 | https://en.wikipedia.org/wiki/Mykola%20Yankovsky | Mykola Andriyovych Yankovsky (born 12 August 1944 in Pokotilovo, Ukrainian SSR) is a former Ukrainian businessman who has shaped Ukraine’s chemical production landscape and worked to make it more environmentally friendly. Candidate of Economic Sciences (1998), Professor, Head of the Department of Advanced Technologies in Management of Donetsk State Academy of Management (since 1999); Academician of AINU (1992), AENU (1995). Member of the Academy of Russian Entrepreneurs (1997). Professor Emeritus of the Ukrainian State University of Chemical Technology. Hero of Ukraine (2003).
Biography
Mykola Yankovsky was born on 12 August 1944 in Pokotilovo village, Novoarkhangelsk district, Kirovohrad Oblast. His wife Galina (born 1949) is a housewife; they have daughters Irina (born 1967) and Tatiana (born 1973) and a son, Igor (born 1974).
Yankovsky is now retired, spending time with his family and friends. He has lived a private life for some years, but during his career as an entrepreneur, Yankovsky has impacted Ukraine’s chemical production landscape and brought in inventions and business practices that were largely unknown when the Soviet Union came to an end. He transformed Donetsk’s industry to meet accepted social and ecological standards.
Education and publication
Mykola Yankovsky studied at the Dneprodzerzhinsk Crafts School from 1961 to 1963 and then at the Dnepropetrovsk Chemical Technology Institute (1963–1969), qualifying as a chemical engineer-technologist with a specialization in "Technology of inorganic substances and mineral fertilizers". His PhD thesis was "Model of foreign economic activity of chemical enterprises", and his doctoral dissertation was "Management of enterprise competitiveness in world markets" (Institute of Industrial Economics of NASU, 2005).
Yankovsky is a Doctor of Economics, Professor, and, since 1999, Head of the Department of Advanced Management Technologies at Donetsk State University of Management. He has been a full member of the Academy of Engineering Science of Ukraine since 1992, as well as an honorary professor of the Ukrainian State University of Chemical Technology. He is an honorary doctor of the Odesa State Academy of Technical Regulation and Quality and a member of the International Academy of Standardization.
Yankovsky is an author of more than 150 scientific works and academic publications, including more than 10 monographs and textbooks published in Ukraine and abroad. His monographs include: “Model of the foreign economic activity of chemical industry enterprises” (1997), “Forecasting the development of a large industrial complex: theory and practice” (1999), “Improving the efficiency of foreign economic activity of a large industrial complex” (2000), and “Innovative and classical theories of catastrophes and economic crises” (2010).
In total, Yankovsky has published more than 100 papers and manuals on production management, the chemical industry, the international economy and bulk chemicals technology. Some of his monographs and manuals on chemical science are highly regarded, recommended by professors, and used for teaching in higher and secondary specialized educational institutions of Ukraine.
Career
In 1982, Yankovsky became the director at the Dneprodzerzhinsk production plant. Two years later in 1984, he took on the responsibility as the director at “Azot”. From 1986 to 1987 he was deputy director at Pershamaiski machine factory. In 1987, he joined Gorlovka ON “Stirol” as chief engineer and took on the position of general director in 1988. He led the construction from scratch of two of the ten currently existing Ukrainian factories specialized in the production of ammonia. Another three of the ten currently existing plants were modernized under the supervision and active involvement of Mykola Yankovsky.
In 1995, Yankovsky and his son Igor took an active part in the development of the Trading House of the corporate group "Stirol" (Gorlovka, Donetsk Region). The company became the largest chemical complex in Ukraine, specializing in the production of polymers, fertilizers, ammonia and pharmaceuticals, and one of the major chemical producers in Eastern Europe. Yankovsky gained ownership of Stirol by the end of the 1990s.
In accordance with internationally recognized standards, Big Four international audit firms have audited OJSC “Stirol” for many years, which allowed it to become one of the first Ukrainian joint-stock companies to issue €125 million in Eurobonds aimed at modernizing production.
Under Yankovsky’s leadership, OJSC Stirol became the most valuable brand in the Ukrainian chemical industry. His efforts were recognized with the honorary title of “Hero of Ukraine”, awarded to him in 2003 for his extraordinary personal contribution to the development of the Ukrainian chemical industry and the production of internationally competitive products.
OJSC Stirol was also one of the first companies in Eastern Europe and the CIS countries to produce granulated carbamide, and the first in Europe to use a closed water-supply cycle for its production processes, operating without river water and with a complete cessation of runoff into the environment.
Prior to the sale of OJSC Stirol, the company prepared for an IPO, though this failed due to uncertainties in the natural gas markets.
The company was highly successful and in 2010 the majority of Stirol shares were sold to Ostchem Holdings. According to the Kyiv Post, Yankovsky was among the wealthiest Ukrainians in 2010.
Yankovsky has also held the following positions: Chairman of the Union of Donbass Chemists; Chairman of the Board of Directors of Gorlovka; Member of the Exporters Council under the Cabinet of Ministers of Ukraine (since February 1999); Member of the Committee on State Prizes of Ukraine in Science and Technology (since March 1997). He was a trustee of the presidential candidate Viktor Yanukovych (ballot paper No. 48, 2004–2005).
As Igor Sharov notes in his book, Mykola Andriyovych was a fervent supporter of the privatization of large industrial enterprises. He said: "Our 'failing' enterprises should be privatized, at least for a hryvnia. As soon as possible." He considered every privatization case to be specific and saw nothing wrong with the growth of foreign capital, provided the investors wanted to develop the enterprise's economy. He has also been a member of the board and the representative in Ukraine of the International Organization of Mineral Fertilizer Manufacturers and Traders (IFA).
Social commitments
Yankovsky is a member of Ukraine's national council on philanthropy.
Working conditions
During his time as CEO of OJSC Stirol, Yankovsky reformed the standards under which chemical plants in Ukraine were run. Working conditions and production methods were significantly improved, leading to greater safety for employees and notably increasing efficiency and production quality.
Environmental improvements
Yankovsky further improved working conditions by adopting environmentally friendly production processes and greatly reducing the environmental footprint of OJSC Stirol. He said: “We want to create an environment in which even rare animals would like to live.” As part of this effort, a zoo with over 500 animals and birds was opened on the territory of the production plant.
Church
Mykola Yankovsky supported the diocese of the Ukrainian Orthodox Church, facilitating the construction of temples in Gorlovka and other cities of Ukraine.
Political activity
Yankovsky’s sense of social responsibility and desire to care for his employees led him into the world of politics. From 1994 to 1998, he served as a Deputy of the Donetsk Regional Council. This was followed by four consecutive terms as a People’s Deputy of Ukraine, from 1998 to 2012.
In the 1998 elections, he was elected People's Deputy of the Verkhovna Rada of Ukraine of the III convocation in a single-mandate constituency in the Donetsk region (received 44.9% of the vote).
In the 2002 elections, he was elected People's Deputy of the Verkhovna Rada of Ukraine of the IV convocation in a single-mandate constituency in the Donetsk region (nominated by the electoral bloc "For United Ukraine!", received 34.45% of the vote).
In the 2006 elections, he was elected People's Deputy of the Verkhovna Rada of Ukraine of the V convocation (was No. 17 on the list of the Party of Regions).
In the 2007 election, he was elected People's Deputy of the Verkhovna Rada of Ukraine of the VI convocation (was No. 15 on the list of the Party of Regions).
Family
Yankovsky is married to Galina (born 1949). Together they have three children: Irene (born 1967), Tatiana (born 1973) and Igor (born 1974).
His son Yankovskyi Igor has been prominently involved in his business activities.
Awards and achievements
Hero of Ukraine with the award of the order of the state (May 22, 2003): for outstanding personal services to the Ukrainian state in the development of the chemical industry, production of competitive domestic products.
Order of Prince Yaroslav the Wise (Fifth Class) (November 30, 2012): for significant personal contribution to the socio-economic, scientific, technical, cultural and educational development of the Ukrainian state; significant work achievements and many years of diligent work.
Honorary Mention of the President of Ukraine (December 7, 1994): for his significant personal contribution to organizing the production of new types of high-quality chemical products and their recognition on the world market.
Order of Honor (for the construction of an ammonia production plant in 1979).
Certificate of Honor of the Presidium of the Verkhovna Rada of the USSR (for the construction of the second ammonia production plant in 1985).
Certificate of Honor of the Verkhovna Rada of Ukraine.
Order of the Ukrainian Orthodox Church (2008).
Order of Saint Equal-to-the-Apostles Prince Vladimir (1st degree, 2008).
Order of the Venerable Nestor the Chronicler with moiré ribbon (first degree, for the construction of the Church of the Icon of Christ Not Made by Hands in Gorlovka, 2008).
Order of Saint Apostle Andrew the First Called (2013).
Best Top Manager of Ukraine (2006): “Investgazeta” and “Top 100” have named Yankovsky as the best manager in Ukraine.
References
Ukrainian businesspeople
Ukrainian politicians
1944 births
Living people
Laureates of the Honorary Diploma of the Verkhovna Rada of Ukraine
Recipients of the Honorary Diploma of the Cabinet of Ministers of Ukraine
People in the chemical industry
People from Kirovohrad Oblast | Mykola Yankovsky | Chemistry | 2,136 |
31,779,345 | https://en.wikipedia.org/wiki/SPECvirt | SPECvirt_sc2010 is a computer benchmark that evaluates the performance of a server computer for virtualization. It is available from the Standard Performance Evaluation Corporation (SPEC). It was introduced in July, 2010.
The SPECvirt_sc2010 benchmark measures the maximum number of workloads that a platform can simultaneously run while maintaining specific quality of service metrics. Each workload, called a tile, consists of a specific set of virtual machines.
In addition to generating results that show performance, the benchmark can also be used to generate performance per watt results.
SPEC retired SPECvirt_sc2010 on February 26, 2014; its successor is SPECvirt_sc2013.
See also
Performance per watt
VMmark
References
External links
Official SPECvirt website
Benchmarks (computing)
Virtualization_software | SPECvirt | Technology | 176 |
31,630,482 | https://en.wikipedia.org/wiki/Oliver%20Patterson%20Watts | Oliver Patterson Watts (July 16, 1865 – February 6, 1953) was a professor of chemical engineering and applied electrochemistry at the University of Wisconsin–Madison. Born in Thomaston, Maine, Watts received his bachelor's degree from Bowdoin College in 1889. He received his doctoral degree in 1905; he was the first person to be awarded a Ph.D. in chemical engineering at the University of Wisconsin, where he served as a professor until 1935, after which he was an emeritus professor in the university's college of engineering. Watts is known for his development of the hot nickel plating bath known as the "Watts Bath", which he first described in a paper published in 1915.
References
External links
1865 births
1953 deaths
People from Thomaston, Maine
Scientists from Madison, Wisconsin
American chemical engineers
Electrochemists
Bowdoin College alumni
University of Wisconsin–Madison faculty
University of Wisconsin–Madison College of Engineering alumni | Oliver Patterson Watts | Chemistry | 188 |
30,656,533 | https://en.wikipedia.org/wiki/Hyers%E2%80%93Ulam%E2%80%93Rassias%20stability | The stability problem of functional equations originated from a question of Stanisław Ulam, posed in 1940, concerning the stability of group homomorphisms. In the following year, Donald H. Hyers gave a partial affirmative answer to Ulam's question in the context of Banach spaces in the case of additive mappings, which was the first significant breakthrough and a step toward further solutions in this area. Since then, a large number of papers have been published in connection with various generalizations of Ulam's problem and Hyers's theorem. In 1978, Themistocles M. Rassias succeeded in extending Hyers's theorem to mappings between Banach spaces by considering an unbounded Cauchy difference subject to a continuity condition upon the mapping. He was the first to prove the stability of the linear mapping. This result of Rassias attracted mathematicians worldwide and stimulated investigation of the stability problems of functional equations.
In recognition of the great influence of S. M. Ulam, D. H. Hyers, and Th. M. Rassias on the study of stability problems of functional equations, the stability phenomenon proved by Th. M. Rassias led to the development of what is now known as Hyers–Ulam–Rassias stability of functional equations. For an extensive presentation of the stability of functional equations in the context of Ulam's problem, the interested reader is referred to the books by S.-M. Jung, S. Czerwik, Y.J. Cho, C. Park, Th.M. Rassias and R. Saadati, Y.J. Cho, Th.M. Rassias and R. Saadati, and Pl. Kannappan, as well as to the following papers. In 1950, T. Aoki considered an unbounded Cauchy difference, which was later generalised by Rassias to the linear case. This result is known as Hyers–Ulam–Aoki stability of the additive mapping. Aoki (1950) did not consider continuity of the mapping, whereas Rassias (1978) imposed an additional continuity hypothesis, which yielded a formally stronger conclusion.
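A commonly quoted form of Rassias's 1978 theorem, stated here for orientation (the exact hypotheses and the constant vary slightly between references), is the following: if $f : E \to E'$ is a mapping between Banach spaces satisfying

$$\|f(x+y) - f(x) - f(y)\| \le \theta\left(\|x\|^{p} + \|y\|^{p}\right) \quad \text{for all } x, y \in E,$$

for some $\theta \ge 0$ and $0 \le p < 1$, then there exists a unique additive mapping $T : E \to E'$ such that

$$\|f(x) - T(x)\| \le \frac{2\theta}{2 - 2^{p}}\,\|x\|^{p} \quad \text{for all } x \in E,$$

and if moreover $f(tx)$ is continuous in $t$ for each fixed $x$, then $T$ is linear.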
References
See also
Th. M. Rassias, On the stability of functional equations and a problem of Ulam, Acta Applicandae Mathematicae, 62(1)(2000), 23-130.
P. Gavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184(1994), 431–436.
P. Gavruta and L. Gavruta, A new method for the generalized Hyers–Ulam–Rassias stability, Int. J. Nonlinear Anal. Appl. 1(2010), No. 2, 6 pp.
J. Chung, Hyers-Ulam-Rassias stability of Cauchy equation in the space of Schwartz distributions, J. Math. Anal. Appl. 300(2)(2004), 343 – 350.
T. Miura, S.-E. Takahasi, and G. Hirasawa, Hyers-Ulam-Rassias stability of Jordan homomorphisms on Banach algebras, J. Inequal. Appl. 4(2005), 435–441.
A. Najati and C. Park, Hyers–Ulam-Rassias stability of homomorphisms in quasi-Banach algebras associated to the Pexiderized Cauchy functional equation, J. Math. Anal. Appl. 335(2007), 763–778.
Th. M. Rassias and J. Brzdek (eds.), Functional Equations in Mathematical Analysis, Springer, New York, 2012.
D. Zhang and J. Wang, On the Hyers-Ulam-Rassias stability of Jensen’s equation, Bull. Korean Math. Soc. 46(4)(2009), 645–656.
T. Trif, Hyers-Ulam-Rassias stability of a Jensen type functional equation, J. Math. Anal. Appl. 250(2000), 579–588.
Pl. Kannappan, Functional Equations and Inequalities with Applications, Springer, New York, 2009.
P. K. Sahoo and Pl. Kannappan, Introduction to Functional Equations, CRC Press, Chapman & Hall Book, Florida, 2011.
W. W. Breckner and T. Trif, Convex Functions and Related Functional Equations. Selected Topics, Cluj University Press, Cluj, 2008.
Mathematical analysis
Functional equations
Functional analysis | Hyers–Ulam–Rassias stability | Mathematics | 998 |
47,571,590 | https://en.wikipedia.org/wiki/NGC%20151 | NGC 151 is a mid-sized barred spiral galaxy located in the constellation Cetus.
The galaxy was discovered by English astronomer William Herschel on November 28, 1785. In 1886, Lewis Swift observed the same galaxy and catalogued it as NGC 153, only for it later to be identified as NGC 151.
The galaxy, viewed from almost face on, has several bright, blue, dusty spiral arms filled with active star formation. One noticeable feature of the galaxy is a large gap between the spiral arms.
Two supernovae have been observed in NGC 151. On 22 July 2011, PTF11iqb (type IIn, mag. 17.1) was discovered by the Palomar Transient Factory, and on 24 June 2023, SN 2023lnh (type Ia, mag. 18) was discovered by ATLAS.
See also
List of NGC objects (1–1000)
References
External links
0151
NGC 0151
NGC 0151
002035
Astronomical objects discovered in 1785
Discoveries by William Herschel
-02-02-054 | NGC 151 | Astronomy | 211 |
697,574 | https://en.wikipedia.org/wiki/Be%20File%20System | The Be File System (BFS) is the native file system for the BeOS. In the Linux kernel, it is referred to as "BeFS" to avoid confusion with Boot File System.
BFS was developed by Dominic Giampaolo and Cyril Meurillon over a ten-month period, starting in September 1996, to provide BeOS with a modern 64-bit-capable journaling file system. It is case-sensitive and capable of being used on floppy disks, hard disks and read-only media such as CD-ROMs. However, its use on small removable media is not advised, as the file-system headers consume from 600 KB to 2 MB, rendering floppy disks virtually useless.
Like its predecessor, OFS (Old Be File System, written by Benoit Schillings - formerly BFS), it includes support for extended file attributes (metadata), with indexing and querying characteristics to provide functionality similar to that of a relational database.
Whilst intended as a 64-bit-capable file system, the size of some on-disk structures means that the practical size limit is approximately 2 exabytes. Similarly, the extent-based file allocation reduces the maximum practical file size to approximately 260 gigabytes at best, and to as little as a few blocks in a pathological worst case, depending on the degree of fragmentation.
Its design process, application programming interface, and internal workings are, for the most part, documented in the book Practical File System Design with the Be File System.
Implementations
In addition to the original 1996 BFS used in BeOS, there are several implementations for Linux. In early 1999, Makoto Kato developed a Be File System driver for Linux; however, the driver never reached a completely stable state, so in 2001 Will Dyson developed his own version of the Linux BFS driver.
In 2002, Axel Dörfler and a few other developers created and released a reimplemented BFS called OpenBFS for Haiku (OpenBeOS back then). In January 2004, Robert Szeleney announced that he had developed a fork of this OpenBFS file system for use in his SkyOS operating system. The regular OpenBFS implementation was also ported to Syllable, with which it has been included since version 0.6.5.
See also
Comparison of file systems
AtheOS File System
References
External links
The BeOS file system: an OS geek retrospective, by Andrew Hudson, 2010-06-03, Ars Technica
Disk file systems
BeOS
Haiku (operating system) | Be File System | Technology | 517 |
42,803,144 | https://en.wikipedia.org/wiki/Uncertainties%20in%20building%20design%20and%20building%20energy%20assessment | The detailed design of buildings needs to take into account various external factors, which may be subject to uncertainties. Among these factors are prevailing weather and climate; the properties of the materials used and the standard of workmanship; and the behaviour of occupants of the building. Several studies have indicated that it is the behavioural factors that are the most important among these. Methods have been developed to estimate the extent of variability in these factors and the resulting need to take this variability into account at the design stage.
Sources of uncertainty
Earlier work includes a paper by Gero and Dudnik (1978) presenting a methodology to solve the problem of designing heating, ventilation and air conditioning systems subjected to uncertain demands. Since then, other authors have shown an interest in the uncertainties that are present in building design. Ramallo-González (2013) classified uncertainties in energy building assessment tools in three different groups:
Environmental. Uncertainty in weather prediction under changing climate; and uncertain weather data information due to the use of synthetic weather data files: (1) use of synthetic years that do not represent a real year, and (2) use of a synthetic year that has not been generated from recorded data in the exact location of the project but in the closest weather station.
Workmanship and quality of building elements. Differences between the design and the real building: Conductivity of thermal bridges, conductivity of insulation, value of infiltration (air leakage), or U-values of walls and windows.
Behavioural. All other parameters linked to human behaviour, e.g. opening of doors and windows, use of appliances, occupancy patterns or cooking habits.
Weather and climate
Climate change
Buildings have long life spans: for example, in England and Wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if measured by floor area), and 38.9% of English dwellings in 2007 were built before 1944. This long life span makes buildings likely to operate under climates that may change due to global warming. De Wilde and Coley (2012) showed how important it is to design buildings that take climate change into consideration and that are able to perform well under future weather conditions.
Weather data
The use of synthetic weather data files may introduce further uncertainty. Wang et al. (2005) showed the impact that uncertainties in weather data (among others) may have on energy demand calculations. The deviations in calculated energy use due to variability in the weather data were found to differ between locations, from a range of −0.5% to 3% in San Francisco to a range of −4% to 6% in Washington, D.C. The ranges were calculated using a Typical Meteorological Year (TMY) as the reference.
The spatial resolution of weather data files was the concern covered by Eames et al. (2011). Eames showed how a low spatial resolution of weather data files can be the cause of disparities of up to 40% in the heating demand. The reason is that this uncertainty is not understood as an aleatory parameter but as an epistemic uncertainty that can be solved with the appropriate improvement of the data resources or with specific weather data acquisition for each project.
Building materials and workmanship
A large study was carried out by Leeds Metropolitan University at Stamford Brook in England. This project saw 700 dwellings built to high efficiency standards. The results of this project show a significant gap between the energy used expected before construction and the actual energy use once the house is occupied. The workmanship is analysed in this work. The authors emphasise the importance of thermal bridges that were not considered for the calculations, and that the thermal bridges that have the largest impact on the final energy use are those originated by the internal partitions that separate dwellings. The dwellings that were monitored in use in this study show a large difference between the real energy use and that estimated using the UK Standard Assessment Procedure (SAP), with one of them giving +176% of the expected value when in use.
Hopfe has published several papers concerning uncertainties in building design. A 2007 publication looks into uncertainties of types 2 and 3. In this work the uncertainties are defined as normal distributions. The random parameters are sampled to generate 200 tests that are sent to the simulator (VA114), and the results are analysed to identify the uncertainties with the largest impact on the energy calculations. This work showed that the uncertainty in the value used for infiltration is the factor that is likely to have the largest influence on cooling and heating demands. De Wilde and Tian (2009) agreed with Hopfe on the impact of uncertainties in infiltration upon energy calculations, but also introduced other factors.
The work of Schnieders and Hermelink (2006) showed a substantial variability in the energy demands of low-energy buildings designed under the same (Passivhaus) specification.
Occupant behaviour
Blight and Coley (2012) showed that substantial variability in energy use can be occasioned due to variance in occupant behaviour, including the use of windows and doors. Their paper also demonstrated that their method of modelling occupants’ behaviour accurately reproduces actual behavioural patterns of inhabitants. This modelling method was the one developed by Richardson et al. (2008), using the Time-Use Survey (TUS) of the United Kingdom as a source for real behaviour of occupants, based on the activity of more than 6000 occupants as recorded in 24-hour diaries with a 10-minute resolution. Richardson's paper shows how the tool is able to generate behavioural patterns that correlate with the real data obtained from the TUS.
Multifactorial studies
In the work of Pettersen (1994), uncertainties of group 2 (workmanship and quality of elements) and group 3 (behaviour) of the previous grouping were considered. This work shows how important occupants’ behaviour is on the calculation of the energy demand of a building. Pettersen showed that the total energy use follows a normal distribution with a standard deviation of around 7.6% when the uncertainties due to occupants are considered, and of around 4.0% when considering those generated by the properties of the building elements.
Wang et al. (2005) showed that deviations in energy demand due to local variability in weather data were smaller than the ones due to operational parameters linked with occupants’ behaviour. For those, the ranges were (-29% to 79%) for San Francisco and (-28% to 57%) for Washington D.C. The conclusion of this paper is that occupants will have a larger impact in energy calculations than the variability between synthetically generated weather data files.
Another study, performed by de Wilde and Wei Tian (2009), compared the impact of most of the uncertainties affecting building energy calculations, including uncertainties in weather, the U-value of windows, and other variables related to occupants' behaviour (equipment and lighting), while taking climate change into account. De Wilde and Tian used a two-dimensional Monte Carlo analysis to generate a database obtained from 7280 runs of a building simulator. A sensitivity analysis was applied to this database to obtain the most significant factors in the variability of the energy demand calculations. Standardised regression coefficients and standardised rank regression coefficients were used to compare the impacts of the uncertainties. Their paper compares many of the uncertainties using a good-sized database, providing a realistic comparison across the sampled uncertainties.
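The following is a minimal sketch of the kind of Monte Carlo uncertainty propagation and standardized-regression sensitivity analysis described above. The simple steady-state heat-loss model, the parameter distributions and the numerical values are illustrative assumptions, not figures from the studies cited.

```python
# Illustrative Monte Carlo propagation of input uncertainties through a very
# simple heating-demand model, followed by standardized regression coefficients
# (SRCs) as a sensitivity measure. All numbers are made-up example values.
import numpy as np

rng = np.random.default_rng(0)
N = 5000

# Uncertain inputs, modelled as normal distributions (assumed values)
u_wall = rng.normal(0.30, 0.05, N)   # wall U-value, W/(m^2 K)
ach    = rng.normal(0.60, 0.15, N)   # infiltration, air changes per hour
gains  = rng.normal(4.0, 1.0, N)     # internal gains, W/m^2

floor_area, wall_area, volume = 100.0, 120.0, 250.0   # m^2, m^2, m^3
degree_hours = 70e3                                    # heating degree-hours per year, K*h

# Steady-state heating demand in kWh/year: fabric + ventilation losses minus internal gains
fabric_loss = u_wall * wall_area              # W/K
vent_loss   = 0.33 * ach * volume             # W/K (0.33 Wh/(m^3 K) heat capacity of air)
demand = ((fabric_loss + vent_loss) * degree_hours
          - gains * floor_area * 8760.0) / 1000.0
demand = np.clip(demand, 0.0, None)

# Standardized regression coefficients: regress standardized output on standardized inputs
X = np.column_stack([u_wall, ach, gains])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (demand - demand.mean()) / demand.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

for name, coeff in zip(["U-value", "infiltration", "internal gains"], src):
    print(f"SRC {name:15s} {coeff:+.2f}")
print(f"heating demand: mean {demand.mean():.0f} kWh/a, std {demand.std():.0f} kWh/a")
```

A larger absolute SRC indicates a larger share of the output variance explained by that input, which is how studies of this kind rank the uncertainties.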
See also
ASHRAE
Building energy simulation
References
Building engineering | Uncertainties in building design and building energy assessment | Engineering | 1,526 |
47,489,952 | https://en.wikipedia.org/wiki/Biclique-free%20graph | In graph theory, a branch of mathematics, a t-biclique-free graph is a graph that has no K_{t,t} (complete bipartite graph with t vertices on each side of its bipartition) as a subgraph. A family of graphs is biclique-free if there exists a number t such that the graphs in the family are all t-biclique-free. The biclique-free graph families form one of the most general types of sparse graph family. They arise in incidence problems in discrete geometry, and have also been used in parameterized complexity.
Properties
Sparsity
According to the Kővári–Sós–Turán theorem, every n-vertex t-biclique-free graph has O(n^{2−1/t}) edges, significantly fewer than a dense graph would have. Conversely, if a graph family is defined by forbidden subgraphs or closed under the operation of taking subgraphs, and does not include dense graphs of arbitrarily large size, it must be t-biclique-free for some t, for otherwise it would include large dense complete bipartite graphs.
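One explicit form of the Kővári–Sós–Turán bound (the constant factors vary slightly between sources) is

$$\operatorname{ex}(n, K_{t,t}) \;\le\; \frac{1}{2}\left((t-1)^{1/t}\,(n-t+1)\,n^{1-1/t} + (t-1)\,n\right) \;=\; O\!\left(n^{2-1/t}\right),$$

where $\operatorname{ex}(n, K_{t,t})$ denotes the maximum possible number of edges in an $n$-vertex graph with no $K_{t,t}$ subgraph.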
As a lower bound, it has been conjectured that every maximal t-biclique-free bipartite graph (one to which no more edges can be added without creating a t-biclique) must have at least a certain minimum number of edges, expressed as a function of m and n, the numbers of vertices on each side of its bipartition.
Relation to other types of sparse graph family
A graph with degeneracy d is necessarily (d + 1)-biclique-free. Additionally, any nowhere dense family of graphs is biclique-free. More generally, if there exists an h-vertex graph H that is not a 1-shallow minor of any graph in the family, then the family must be h-biclique-free, because all h-vertex graphs are 1-shallow minors of K_{h,h}.
In this way, the biclique-free graph families unify two of the most general classes of sparse graphs.
Applications
Discrete geometry
In discrete geometry, many types of incidence graph are necessarily biclique-free. As a simple example, the graph of incidences between a finite set of points and lines in the Euclidean plane necessarily has no K_{2,2} subgraph, since two distinct points lie on at most one common line.
Parameterized complexity
Biclique-free graphs have been used in parameterized complexity to develop algorithms that are efficient for sparse graphs with suitably small input parameter values. In particular, finding a dominating set of size k on t-biclique-free graphs is fixed-parameter tractable when parameterized by the combination of k and t, even though there is strong evidence that this is not possible using k alone as a parameter. Similar results hold for many variations of the dominating set problem. It is also possible to test whether one dominating set of size at most k can be converted to another one by a chain of vertex insertions and deletions, preserving the dominating property, with the same parameterization.
References
Extremal graph theory
Graph families | Biclique-free graph | Mathematics | 574 |
44,737,697 | https://en.wikipedia.org/wiki/WaferCatalyst | WaferCatalyst is a Multi-Project Wafer (MPW) consolidation service by King Abdulaziz City for Science and Technology (KACST), Saudi Arabia. WaferCatalyst is a concept-to-silicon service and provides a number of tools for community building in the field of integrated circuit (IC) design. These include multi-project wafer fabrication, multi-layer mask (MLM) services, design support, consultancy services and fabrication support.
History
The Microsystems Infrastructure Development Initiative (MIDI) was launched by the Micro-Sensors Division (MSD) under the National Center of Electronics, Communications and Photonics in early 2012 and was chartered to create and develop an integrated circuit ecosystem in the Kingdom of Saudi Arabia. The service developed as a result of this initiative is called 'WaferCatalyst', which has been chartered to serve Saudi Arabia, the greater Middle East and North Africa region, as well as the global semiconductor design community. It was formally launched on 28 April 2013 by H.H. Prince Dr. Turki bin Saud bin Mohammad Al Saud, Vice President for Research Institutes of the Kingdom of Saudi Arabia.
Programs
WaferCatalyst has a number of programs to enhance the ecosystem in the IC development area. These include support services through its Support and Design Portal, support for process development kits, support for universities in taping out ICs, project titles for undergraduate and postgraduate students, and a partners program.
WaferCatalyst has been working to develop close coordination and partnerships among the various institutions, research and commercial organizations to add value for themselves while also contributing to the development of a virtual cluster ecosystem.
Related links
Multi-project wafer service
External links
WaferCatalyst Website www.wafercat.com
WaferCatalyst Portal http://portal-wafercat.com
King Abdulaziz City for Science and Technology www.kacst.edu.sa
References
2013 establishments in Saudi Arabia
Science and technology in Saudi Arabia
Semiconductor device fabrication | WaferCatalyst | Materials_science | 407 |
31,813,996 | https://en.wikipedia.org/wiki/Basel%20Declaration | The Basel Declaration is a call for greater transparency and communication on the use of animals in research. It is supported by an international scientific non-profit society, the Basel Declaration Society, a forum of scientists established to foster the widest possible dissemination and acceptance of the Declaration, and dialogue with the public and stakeholders.
Summary
The Declaration was issued on 30 November 2010 by over 60 scientists from Switzerland, Germany, the United Kingdom, France and Sweden. The signatories commit to accepting greater responsibility in animal experiments and to intensive cooperation with the public in the form of an open dialogue free of prejudice. At the same time, they demand that animal experiments essential for obtaining research results remain permitted both now and in the future. With their Basel Declaration, researchers are seeking to achieve a more impartial approach to scientific issues by the public and a more trusting and reliable cooperation with national and international decision makers.
The signatories to the Basel Declaration are actively seeking to show that science and animal welfare are not diametrically opposed and to make a constructive contribution to the dialogue taking place in society – for example in the incorporation of the new EU Directive of 22 September 2010 on the protection of animals used for scientific purposes into the national laws.[1] (The revised EU Directive provides for the use of fewer laboratory animals for scientific purposes in the future and better reconciles the needs of research with the protection of animals without making research more difficult. The EU Member States must incorporate the Directive into national law within two years and apply these national regulations as from January 2013.)
Alternatives to animal experiments
“Animal experiments will remain necessary in biomedical research for the foreseeable future, but we are constantly working to refine the methods with animal welfare in mind.”[2] The signatories to the Declaration commit, amongst other things, to the use of animal experiments only when the research concerns fundamentally important knowledge and no alternative methods are available. As part of this commitment, their two-day conference in November 2010 ended with an affirmation of their allegiance to the 3R principle “Reduction, Refinement, Replacement”:
The 3R principle (replace, reduce, refine) has its origins with William M. S. Russell & Rex L. Burch, who published their “Principles of Humane Experimental Technique” in 1959. These principles are regarded internationally as the guideline for avoiding or reducing animal experiments and the suffering of laboratory animals:
Replacement: replacement of animal experiments by methods that do not involve animals
Reduction: reduction in the number of animals in unavoidable animal experiments
Refinement: improvement in experimental procedures, so that unavoidable animal experiments cause the animals as little distress and suffering as possible
Need for improved communication
The participants in the symposium that adopted the Basel Declaration were unanimously agreed that science must not only take a clear stand with regard to the responsible handling of laboratory animals, but also has to show greater transparency toward the public.[3] To make their motivation and methods more comprehensible to the public and the decision makers, the researchers aim to cooperate more closely with politicians, the media and schools in the future and to give greater importance to the communication of science.
Obligation to the public
The authors of the Basel Declaration acknowledge the need for greater public discussion of animal experiment issues, and also of the risks of research approaches and possible misuse of new technological developments. In addition, they declare their intention to communicate not only results and scientific controversies, but also processes and approval procedures in the scientific process, in order to foster a deeper understanding of research.[4] With regard to improving public information on research involving animal experiments, the signatories to the Basel Declaration commit to the following:
We communicate openly and with transparency – also with regard to animal experiments. We proactively address the problems and openly declare that part of our research involves animal experiments.
We grant journalists access to our laboratories.
We invite opinion formers, media people and teachers to enter into a dialogue with researchers on the problem area of basic research.
We endeavor to use a language that is comprehensible to the general public.
We declare our solidarity with all researchers who have to rely on animal experiments. We are united in rejecting unjustified allegations against individuals. We shall jointly and publicly condemn vandalism, threats and other criminal acts.
Animal experiments in basic research
Modern medicine is based on discoveries of basic biological research and their implementation in applied research. The initial signatories to the Basel Declaration see the tendency to restrict animal experiments, especially in the field of basic research, as a major risk. They maintain that no stage of research (neither basic nor applied) should be categorically excluded from the purposes for which animal experiments are deemed permissible. Apart from the difficulty of differentiating the two stages in the field of medical research, applied research is generally inconceivable without basic research. Basic research is not an end in itself, but serves as the basis for further work. Basic and applied research are part of the same continuum in biomedical research, and the assignment of a research project to one part or the other is often rather arbitrary. On the other hand, the categorization of an experiment as basic research does not in itself justify the use of animals. A demonstration that an animal experiment is indispensable is just as necessary as weighing the interests of animal welfare against the expected benefits of the research objective.
Better animal models
Genetically modified animals represent an important instrument of modern biomedical research. In many cases, species higher on the evolutionary scale in animal experiments can be replaced by the use of simpler organisms bred by means of gene technology, such as fruit flies, laboratory worms or fish. This plays a major part in helping to promote the 3R principles of replacement, reduction and refinement of animal experiments. Disease models in genetically modified animals are mainly in rodents, such as mice and rats. However, they cannot adequately depict human physiology in all cases. Research in animal models using mammals, such as even-toed ungulates (especially for animal health) and in very rare cases also monkeys, remains necessary according to participants at the symposium on the Basel Declaration. They see the following advantages in the use of genetically modified organisms in animal experiments:
Possibility of developing tests for therapeutic antibodies that are being increasingly used in modern medical therapy in humans
Production of recombinant products such as anticoagulants or therapeutic antibodies
Research on disease mechanisms in complex organisms (e.g. diabetes)
Research and understanding of the underlying mechanisms and metabolic pathways in human diseases
Fundamental principles for efficient and targeted treatment of diseases such as leukemia, hypertension or obesity
Experiments in non-human primates
The participants at the symposium on the Basel Declaration at the end of November 2010 summarized the outcome of their discussions on the subject of experiments in non-human primates as follows:
1. Research in non-human primates is an essential part of biomedical progress in the 21st century. Research in non-human primates has led to the development of crucial medical treatments, such as vaccines against poliomyelitis and hepatitis (jaundice), as well as to improved drug safety thanks to indispensable contributions to the basic principles of physiology, immunology, infectious diseases, genetics, pharmacology, reproduction biology and neuroscience. We predict an increased need for research using non-human primates in the future, e.g. for personalized medicine and neurodegenerative diseases in an aging society. This continuing need is also reflected in the EU Directive of 2010 (2010 /63/EU) on animals used for scientific purposes, in which it is recognized that research in non-human primates will remain irreplaceable in the foreseeable future.
2. Biomedical research cannot be divided into “basic research” and “applied research”: it is a continuum that includes both basic studies on normal functions and their failure in diseases and also the development of treatments. This fundamental research is indispensable for biomedical progress. Any categorical restriction of research in non-human primates in basic research is shortsighted and not justified by any scientific evidence.
3. Researchers working with non-human primates are committed to the 3R principle on the replacement, reduction and refinement of animal experiments. Research in animals must satisfy the highest ethical standards. Non-human primates are only used when there are no alternatives. We are working constantly and intensively on refining experimental methods and keeping the number of non-human primates used to a minimum. A strong commitment to the 3Rs guarantees the best science and the best welfare of the animals.
4. We are committed to informing the public and providing objective information on research in non-human primates.
External links
Basel Declaration in English https://de.basel-declaration.org/basel-declaration-de/assets/basel_declaration_en.pdf
Basel Declaration homepage http://www.basel-declaration.org
Basler Declaration in Nature http://www.nature.com/news/2010/101206/full/468742a.html
Basel Declaration in Scientific American: http://www.scientificamerican.com/article.cfm?id=basel-declaration-defends-animal
Animal testing | Basel Declaration | Chemistry | 1,872 |
63,417,934 | https://en.wikipedia.org/wiki/Mary%20Leng | Mary Leng is a British philosopher specialising in the philosophy of mathematics and philosophy of science. She is a professor at the University of York.
Career
Leng studied as an undergraduate at Balliol College, University of Oxford and as postgraduate student at the University of Toronto. She worked at the University of Cambridge from 2002 to 2006, and then the University of Liverpool from 2006 to 2011. In 2007, she co-edited a collection called Mathematical Knowledge with Alexander Paseau and Michael Potter, which was published by Oxford University Press, and, in 2010, she published a monograph called Mathematics and Reality, again with Oxford University Press. In Mathematics and Reality, Leng defends mathematical fictionalism. Leng joined the University of York in 2012, where she is now a professor.
References
Living people
Year of birth missing (living people)
British women philosophers
21st-century British philosophers
Philosophers of mathematics
Academics of the University of York
Academics of the University of Liverpool
Academics of the University of Cambridge
University of Toronto alumni
Alumni of Balliol College, Oxford | Mary Leng | Mathematics | 207 |
6,011,769 | https://en.wikipedia.org/wiki/Quantum%20mutual%20information | In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of quantum state. It is the quantum mechanical analog of Shannon mutual information.
Motivation
For simplicity, it will be assumed that all objects in the article are finite-dimensional.
The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables p(x, y), the two marginal distributions are

p(x) = Σy p(x, y) and p(y) = Σx p(x, y).
The classical mutual information I(X:Y) is defined by

I(X:Y) = S(p(x)) + S(p(y)) − S(p(x, y)),

where S(q) denotes the Shannon entropy of the probability distribution q.

One can calculate directly

S(p(x)) = −Σx p(x) log p(x) and S(p(y)) = −Σy p(y) log p(y),

so that

S(p(x)) + S(p(y)) = −Σx,y p(x, y) [log p(x) + log p(y)] = −Σx,y p(x, y) log p(x)p(y).

So the mutual information is

I(X:Y) = Σx,y p(x, y) log [p(x, y) / (p(x) p(y))],

where the logarithm is taken in base 2 to obtain the mutual information in bits. But this is precisely the relative entropy between p(x, y) and p(x)p(y). In other words, if we assume the two variables x and y to be uncorrelated, mutual information is the discrepancy in uncertainty resulting from this (possibly erroneous) assumption.
It follows from the property of relative entropy that I(X:Y) ≥ 0 and equality holds if and only if p(x, y) = p(x)p(y).
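As a quick numerical illustration (the joint distribution below is an arbitrary example, not taken from any source), the two expressions for the classical mutual information can be checked directly:

```python
# Classical mutual information computed two ways:
# I(X:Y) = S(X) + S(Y) - S(X,Y), and as relative entropy D(p(x,y) || p(x)p(y)).
import numpy as np

p_xy = np.array([[0.30, 0.10],   # example joint distribution p(x, y)
                 [0.05, 0.55]])

p_x = p_xy.sum(axis=1)           # marginal p(x)
p_y = p_xy.sum(axis=0)           # marginal p(y)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

I_from_entropies = shannon(p_x) + shannon(p_y) - shannon(p_xy.ravel())
I_from_rel_entropy = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))

print(I_from_entropies, I_from_rel_entropy)   # both ~0.36 bits
```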
Definition
The quantum mechanical counterpart of a classical probability distribution is modeled with a density matrix.
Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then the tensor product of the spaces for the two parts.
Let ρAB be a density matrix acting on states in HAB. The von Neumann entropy of a density matrix, S(ρ) = −Tr(ρ log ρ), is the quantum mechanical analogue of the Shannon entropy.
For a probability distribution p(x,y), the marginal distributions are obtained by integrating away the variables x or y. The corresponding operation for density matrices is the partial trace. So one can assign to ρAB a state on the subsystem A by

ρA = TrB ρAB,

where TrB is the partial trace with respect to system B. This is the reduced state of ρAB on system A. The reduced von Neumann entropy of ρAB with respect to system A is S(ρA); S(ρB) is defined in the same way.

It can now be seen that the definition of quantum mutual information, corresponding to the classical definition, should be

I(A:B) = S(ρA) + S(ρB) − S(ρAB).
Quantum mutual information can be interpreted the same way as in the classical case: it can be shown that

I(A:B) = S(ρAB ‖ ρA ⊗ ρB),

where S(· ‖ ·) denotes the quantum relative entropy. Note that there is an alternative generalization of mutual information to the quantum case. The difference between the two for a given state is called quantum discord, a measure for the quantum correlations of the state in question.
Properties
When the state ρAB is pure (and thus S(ρAB) = 0), the mutual information is twice the entanglement entropy of the state:

I(A:B) = S(ρA) + S(ρB) = 2S(ρA).

A positive quantum mutual information is not necessarily indicative of entanglement, however. A classical mixture of separable states will always have zero entanglement, but can have nonzero QMI, such as the state

ρAB = ½ (|00⟩⟨00| + |11⟩⟨11|),

for which I(A:B) = 1 bit. In this case, the state is merely a classically correlated state.
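A small numerical check of these statements (illustrative code, not tied to any particular source):

```python
# Quantum mutual information I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB)
# for two-qubit states, with the von Neumann entropy computed from eigenvalues.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    # rho is 4x4 for qubits A (first) and B (second); reshape to indices (a, b, a', b')
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == "A" else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho):
    return (von_neumann_entropy(partial_trace(rho, "A"))
            + von_neumann_entropy(partial_trace(rho, "B"))
            - von_neumann_entropy(rho))

ket00 = np.array([1.0, 0.0, 0.0, 0.0])
ket11 = np.array([0.0, 0.0, 0.0, 1.0])

rho_classical = 0.5 * (np.outer(ket00, ket00) + np.outer(ket11, ket11))  # separable, I = 1 bit
bell = (ket00 + ket11) / np.sqrt(2)
rho_bell = np.outer(bell, bell)                                          # pure Bell state, I = 2 bits

print(mutual_information(rho_classical), mutual_information(rho_bell))   # ~1.0, ~2.0
```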
References
Quantum mechanical entropy
Quantum information theory | Quantum mutual information | Physics | 641 |
46,926,261 | https://en.wikipedia.org/wiki/Sub-Doppler%20cooling | Sub-Doppler cooling is a class of laser cooling techniques that reduce the temperature of atoms and molecules below the Doppler cooling limit. In practice, the Doppler cooling limit is set by the natural linewidth of the atomic transition used for cooling. Regardless of the transition used, however, laser cooling processes have a more fundamental limit characterized by the momentum recoil from the emission of a single photon by the particle. This is called the recoil temperature and is usually far below the linewidth-based limit mentioned above. By laser cooling methods that go beyond the two-level approximation of the atom, temperatures below the Doppler limit, approaching the recoil limit, can be achieved.
Optical pumping between the sublevels that make up an atomic state introduces a new mechanism for achieving ultra-low temperatures. The essential feature of sub-Doppler cooling is the non-adiabatic response of the moving atoms to the light field. For a spatially dependent light field, the orientation of moving atoms is adjusted by optical pumping to fit the local conditions of the light field. Yet the moving atoms do not instantly adjust to the light field as they move; their orientation always lags behind the orientation that would exist for stationary atoms, and this lag produces the velocity-dependent differential absorption and hence the cooling. With this cooling process, lower temperatures can be obtained.
Various methods have been used independently or combined in an experimental sequence to achieve sub-Doppler cooling. One method to produce spatially dependent optical pumping is polarization gradient cooling, where the superposition of two counter-propagating laser beams of orthogonal polarizations lead to a light field with polarization varying on the wavelength scale. A specific mechanism within polarization gradient cooling is Sisyphus cooling, where atoms climb "potential hills" created by the interaction of their internal energy states with spatially varying light fields. The light field in optical molasses in three-dimension also has polarization gradient.
Other methods of sub-Doppler cooling include evaporative cooling, free-space Raman cooling, Raman sideband cooling, resolved sideband cooling, electromagnetically induced transparency (EIT) cooling, and the use of a dark magneto-optical trap. These techniques can be used depending on the minimum temperature needed and the specifications of the individual setup. For example, an optical molasses time-of-flight measurement was used to show that sodium could be cooled to temperatures well below its Doppler limit.
Motivations for sub-Doppler cooling include cooling to the motional ground state, a requirement for maintaining fidelity during many quantum computation operations.
Dark magneto-optical trap
A magneto-optical trap (MOT) is commonly used for cooling and trapping a substance by Doppler cooling. In the process of Doppler cooling, red-detuned light is absorbed by atoms from one particular direction and re-emitted in a random direction. If the atoms have more than one hyperfine ground level, their electrons can decay to an alternative ground state that is not addressed by the cooling light; once all the atoms have accumulated in such states, the system cannot cool them further.
To solve this problem, an additional re-pumping beam is applied to return the atoms to the cooling transition and restart the Doppler cooling process. This increases the amount of fluorescence emitted by the atoms, which can be reabsorbed by other atoms and acts as a repulsive force. This effect raises the achievable temperature, so the Doppler limit is easily reached but hard to surpass. When the re-pumping beam has a dark spot or dark lines in its profile, the atoms in the middle of the atomic cloud are not excited by the re-pumping light, which reduces this repulsive force compared with the previous case.
This can help to cool the atoms to a lower temperature than the typical Doppler cooling limit. This is called a dark magneto-optical trap (DMOT).
Limits
The Doppler cooling limit is set by balancing the cooling force against the heating from the random momentum kicks of spontaneous emission. Naively applying the results of the Fokker–Planck equation to the sub-Doppler processes would lead to an arbitrarily low final temperature as the damping coefficient becomes arbitrarily large. A few more considerations are needed. For instance, when a photon is scattered, the momentum change of the atom is assumed to be small relative to its overall momentum, but when the atom slows down to velocities on the order of the recoil velocity ħk/m (where k = 2π/λ is the photon wavenumber), the momentum change becomes significant. Thus at low velocities, spontaneous emission leaves the atom with a residual momentum of around ħk, which sets a minimum velocity scale. The velocity distribution around this scale cannot be well described by the Fokker–Planck equation, and this sets an intuitive lower limit on the temperature.
Furthermore, polarization gradient cooling depends on the ability to localize atoms on a scale smaller than the optical wavelength λ. Due to the uncertainty principle, this localization imposes a minimum momentum spread on the order of ħ/λ, comparable to the photon recoil momentum, which also leads to a limit on how much the atoms can be cooled.
These predictions have been tested by analytical and numerical calculations for a one-dimensional polarization-gradient molasses. It was shown that, in the limit of large detuning, the velocity distribution depends only on a dimensionless parameter: the light shift of the ground state divided by the recoil energy. The minimum kinetic energy was found to be on the order of 40 times the recoil energy.
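A minimal numerical sketch of the temperature scales involved. The formulas are the standard textbook definitions of the Doppler and recoil limits; the sodium numbers (wavelength, linewidth, mass) are approximate and used only for illustration.

```python
# Doppler and recoil temperature scales for laser cooling.
# T_Doppler = hbar * Gamma / (2 * kB);  T_recoil = (hbar * k)^2 / (m * kB), with k = 2*pi/lambda.
# (Conventions for T_recoil differ by factors of 2 between texts.)
from scipy.constants import hbar, k as kB, atomic_mass, pi

wavelength = 589e-9            # m, sodium D line (approximate)
gamma = 2 * pi * 9.8e6         # rad/s, natural linewidth (approximate)
mass = 23 * atomic_mass        # kg

k_photon = 2 * pi / wavelength
T_doppler = hbar * gamma / (2 * kB)               # ~ a few hundred microkelvin
T_recoil = (hbar * k_photon) ** 2 / (mass * kB)   # ~ a few microkelvin

print(f"Doppler limit: {T_doppler * 1e6:.0f} uK")
print(f"Recoil limit : {T_recoil * 1e6:.1f} uK")
```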
References
Atomic, molecular, and optical physics | Sub-Doppler cooling | Physics,Chemistry | 1,120 |
20,511,149 | https://en.wikipedia.org/wiki/Entanglement%20distillation | Entanglement distillation (also called entanglement purification) is the transformation of N copies of an arbitrary entangled state into some number of approximately pure Bell pairs, using only local operations and classical communication. Entanglement distillation can overcome the degenerative influence of noisy quantum channels by transforming previously shared, less-entangled pairs into a smaller number of maximally-entangled pairs.
History
The limits for entanglement dilution and distillation are due to C. H. Bennett, H. Bernstein, S. Popescu, and B. Schumacher, who presented the first distillation protocols for pure states in 1996; entanglement distillation protocols for mixed states were introduced by Bennett, Brassard, Popescu, Schumacher, Smolin and Wootters the same year. Bennett, DiVincenzo, Smolin and Wootters established the connection to quantum error correction in a ground-breaking paper published in August 1996, also in Physical Review, which has stimulated much subsequent research.
Motivation
Suppose that two parties, Alice and Bob, would like to communicate classical information over a noisy quantum channel. Either classical or quantum information can be transmitted over a quantum channel by encoding the information in a quantum state. With this knowledge, Alice encodes the classical information that she intends to send to Bob in a (quantum) product state, as a tensor product of reduced density matrices ρ1 ⊗ ρ2 ⊗ ⋯ ⊗ ρn, where each ρi is diagonal and can only be used once as an input for the particular channel.

The fidelity of the noisy quantum channel is a measure of how closely the output of a quantum channel resembles the input, and is therefore a measure of how well a quantum channel preserves information. If a pure state |ψ⟩ is sent into a quantum channel and emerges as the state represented by density matrix ρ, the fidelity of transmission is defined as F = ⟨ψ|ρ|ψ⟩.

The problem that Alice and Bob now face is that quantum communication over large distances depends upon successful distribution of highly entangled quantum states, and due to unavoidable noise in quantum communication channels, the quality of entangled states generally decreases exponentially with channel length as a function of the fidelity of the channel. Entanglement distillation addresses this problem of maintaining a high degree of entanglement between distributed quantum states by transforming N copies of an arbitrary entangled state into some number of approximately pure Bell pairs, using only local operations and classical communication. The objective is to share strongly correlated qubits between distant parties (Alice and Bob) in order to allow reliable quantum teleportation or quantum cryptography.
Entanglement entropy
Entanglement entropy quantifies entanglement. Several different definitions have been proposed.
Von Neumann Entropy
Von Neumann entropy is a measure of the "quantum uncertainty" or "quantum randomness" associated with a quantum state, analogous to the concept of Shannon entropy in classical information theory. Von Neumann entropy measures how "mixed" or "pure" a quantum state is. Pure states (e.g., completely definite states such as |0⟩) have a von Neumann entropy of 0. In pure states, there is no uncertainty about the system's state. Mixed states (e.g., probabilistic mixtures of pure states) have a positive entropy value, reflecting an inherent uncertainty in the system's state.
For a given quantum system, the von Neumann entropy is defined as

S(ρ) = −Tr(ρ log2 ρ),

where ρ is the density matrix representing the state of the quantum system and Tr denotes the trace operation, summing over the diagonal elements of a matrix.

For a maximally mixed state (where all states are equally probable), the von Neumann entropy is maximal. Von Neumann entropy is invariant under unitary transformations, meaning that if ρ is transformed by a unitary matrix U, then S(UρU†) = S(ρ).
It is widely used in quantum information theory to study entanglement, quantum thermodynamics, and the coherence of quantum systems.
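A minimal numerical sketch of this definition (assuming NumPy and a density matrix supplied as a Hermitian array; the example states are arbitrary):

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]      # treat 0*log(0) as 0
    return float(-np.sum(eigvals * np.log(eigvals)) / np.log(base))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure state -> entropy 0
maximally_mixed = np.eye(2) / 2             # -> entropy 1 bit
print(von_neumann_entropy(pure), von_neumann_entropy(maximally_mixed))
```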
Rényi entanglement entropy
Rényi entropy is a generalization of the various concepts of entropy, depending on a parameter α, which adjusts the sensitivity of the entropy measure to different probabilities.
For a quantum state represented by a density matrix ρ, the Rényi entropy of order α (α ≥ 0, α ≠ 1) is defined as S_α(ρ) = (1/(1 - α)) log Tr(ρ^α),
where Tr(ρ^α) is the trace of ρ raised to the power α.
Rényi entropy is a non-increasing function of α, meaning that higher values of α emphasize the more probable outcomes more heavily, leading to a lower entropy value. Different values of α allow Rényi entropy to highlight different aspects of the probability distribution (or quantum state), with higher α emphasizing high-probability events. Rényi entropy is often used in contexts such as fractal dimensions, signal processing, and statistical mechanics, where a flexible measure of uncertainty or diversity is useful.
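A small sketch of this definition (again assuming NumPy and a density matrix given as a Hermitian array; the example state is arbitrary):

```python
import numpy as np

def renyi_entropy(rho, alpha, base=2):
    """S_alpha(rho) = log(Tr(rho^alpha)) / (1 - alpha), for alpha >= 0, alpha != 1."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(np.log(np.sum(eigvals ** alpha)) / ((1 - alpha) * np.log(base)))

rho = np.diag([0.7, 0.3])
print(renyi_entropy(rho, 0.5), renyi_entropy(rho, 2))   # higher alpha gives lower entropy
```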
As an example, a two-qubit system can be written as a superposition of the four computational basis states |00⟩, |01⟩, |10⟩, and |11⟩, each with an associated complex coefficient c_ij: |ψ⟩ = c_00|00⟩ + c_01|01⟩ + c_10|10⟩ + c_11|11⟩.
As in the case of a single qubit, the probability of measuring a particular computational basis state is the square of the modulus of its amplitude, or associated coefficient, |c_ij|², subject to the normalization condition Σ|c_ij|² = 1. The normalization condition guarantees that the probabilities sum to 1, meaning that upon measurement, one of the states will be observed.
The Bell state |Φ+⟩ = (|00⟩ + |11⟩)/√2 is a particularly important example of a two-qubit state.
Bell states possess the property that measurement outcomes on the two qubits are correlated. As can be seen from the expression above, the two possible measurement outcomes are zero and one, both with probability of 50%. As a result, a measurement of the second qubit always gives the same result as the measurement of the first qubit.
Bell states can be used to quantify entanglement. Let m be the number of high-fidelity copies of a Bell state that can be produced from n copies of a pure state |ψ⟩ using local operations and classical communication (LOCC). In the limit of a large number of copies, the amount of entanglement present in |ψ⟩ can be defined as the limiting ratio m/n, called the distillable entanglement of the state, which gives a quantified measure of the amount of entanglement present in a given system. The process of entanglement distillation aims to saturate this limiting ratio. For pure states, the number of maximally entangled states that can be distilled per copy equals the von Neumann entropy of either party's reduced state, an extension of the concept of classical entropy to quantum systems. Mathematically, for a given density matrix ρ, the von Neumann entropy is S(ρ) = -Tr(ρ log ρ). Entanglement can then be quantified as the entropy of entanglement, which is the von Neumann entropy of either reduced density matrix ρ_A or ρ_B:
E(ψ) = S(ρ_A) = S(ρ_B), which ranges from 0 for a product state to ln 2 for a maximally entangled two-qubit state (if the natural logarithm is replaced by the base-2 logarithm, the maximally entangled state has a value of 1).
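As an illustrative sketch (assuming NumPy; the helper name and the example states are chosen here for illustration), the entropy of entanglement of a two-qubit pure state can be computed from the reduced density matrix obtained by tracing out one party:

```python
import numpy as np

def entanglement_entropy(state, base=2):
    """Entropy of entanglement of a two-qubit pure state (length-4 amplitude vector),
    computed as the von Neumann entropy of Alice's reduced density matrix."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)   # indices: (Alice, Bob)
    rho_a = psi @ psi.conj().T                             # partial trace over Bob
    eigvals = np.linalg.eigvalsh(rho_a)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log(eigvals)) / np.log(base))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0])                # |00>
print(entanglement_entropy(bell), entanglement_entropy(product))   # 1.0 and 0.0
```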
Entanglement concentration
Pure states
Given n particles in the singlet state shared between Alice and Bob, local actions and classical communication will suffice to prepare m arbitrarily good copies of with a yield
Let an entangled state |ψ⟩ have a Schmidt decomposition |ψ⟩ = Σ_x √p(x) |x_A⟩ ⊗ |x_B⟩,
where the coefficients p(x) form a probability distribution, and thus are positive valued and sum to unity. The tensor product of m copies of this state is then a superposition over all sequences x₁ … x_m, with amplitudes √(p(x₁) ⋯ p(x_m)).
Now, omitting all terms that are not part of the typical set (the set of sequences likely to occur with high probability) and renormalizing the resulting state, one obtains an approximation of the original whose fidelity approaches 1 as m grows.
Suppose that Alice and Bob are in possession of m copies of |ψ⟩. Alice can perform a measurement onto the typical-set subspace of her half of the state, converting it with high fidelity. The theorem of typical sequences shows that the probability of a given sequence belonging to the typical set can be made arbitrarily close to 1 for sufficiently large m, and therefore the Schmidt coefficients of the renormalized state will be at most a bounded factor larger. Alice and Bob can now obtain a smaller set of n Bell states by performing LOCC on the state, with which they can overcome the noise of a quantum channel to communicate successfully.
Mixed states
Many techniques have been developed for performing entanglement distillation on mixed states, giving lower bounds on the value of the distillable entanglement for specific classes of states.
One common method involves Alice not using the noisy channel to transmit source states directly but instead preparing a large number of Bell states and sending half of each Bell pair to Bob. The result of transmission through the noisy channel is to create a mixed entangled state ρ, so that Alice and Bob end up sharing many copies of ρ. Alice and Bob then perform entanglement distillation, producing almost perfectly entangled states from the mixed entangled states by performing local unitary operations and measurements on the shared entangled pairs, coordinating their actions through classical messages, and sacrificing some of the entangled pairs to increase the purity of the remaining ones. Alice can now prepare an arbitrary qubit state and teleport it to Bob using the Bell pairs which they share with high fidelity. What Alice and Bob have then effectively accomplished is simulating a noiseless quantum channel using a noisy one, with the aid of local actions and classical communication.
Let M be a general mixed state of two spin-1/2 particles, which could have resulted from the transmission of an initially pure singlet state |Ψ-⟩ = (|01⟩ - |10⟩)/√2
through a noisy channel between Alice and Bob, and which will be used to distill some pure entanglement. The fidelity of M,
F = ⟨Ψ-|M|Ψ-⟩, is a convenient expression of its purity relative to a perfect singlet. Suppose first that M is already a pure state |φ⟩ of two particles. The entanglement of |φ⟩, as already established, is the von Neumann entropy E(φ) = S(ρ_A) = S(ρ_B), where
ρ_A and ρ_B represent the reduced density matrices of either particle. The following protocol is then used:
Performing a random bilateral rotation on each shared pair (choosing a random SU(2) rotation independently for each pair and applying it locally to both members of the pair) transforms the initial general two-spin mixed state M into a rotationally symmetric mixture of the singlet state Ψ- and the three triplet states Ψ+, Φ+ and Φ-, known as a Werner state. The Werner state has the same purity F as the initial mixed state M from which it was derived, due to the singlet's invariance under bilateral rotations.
Each of the two pairs is then acted on by a unilateral rotation (a σ_y rotation applied to one member of the pair), which has the effect of converting them from mostly-singlet Werner states to states with a large component of the triplet Φ+, while the components of the other three Bell states are equal.
The two impure states are then acted on by a bilateral XOR (CNOT), and afterwards the target pair is locally measured along the z axis. The unmeasured source pair is kept if the target pair's spins come out parallel, as would be the case if both inputs were true Φ+ states; it is discarded otherwise.
If the source pair has not been discarded, it is converted back to a predominantly singlet (Ψ-) state by a unilateral rotation, and made rotationally symmetric by a random bilateral rotation.
Repeating the protocol outlined above will distill Werner states whose purity may be chosen to be arbitrarily high from a collection of input mixed states M of purity F > 1/2, but with a yield tending to zero as the target purity approaches 1. By performing another bilateral XOR operation, this time feeding a variable number of source pairs, rather than one, into each target pair prior to measuring it, the yield can be made to approach a positive limit as the output purity approaches 1. This method can then be combined with others to obtain an even higher yield.
Distillation Protocols
BBPSSW Protocol
The BBPSSW protocol is one of the simplest protocols that uses CNOT (Controlled-NOT) gates and measurements to probabilistically increase the entanglement of Bell states (standard maximally entangled two-qubit states). Here’s a step-by-step example:
Setup:
Suppose Alice and Bob share many copies of a noisy Bell state, represented by the Werner-form density matrix ρ = F|Φ+⟩⟨Φ+| + ((1 - F)/3)(|Φ-⟩⟨Φ-| + |Ψ+⟩⟨Ψ+| + |Ψ-⟩⟨Ψ-|), where |Φ-⟩, |Ψ+⟩ and |Ψ-⟩ are the other Bell states: |Φ-⟩ = (|00⟩ - |11⟩)/√2, |Ψ+⟩ = (|01⟩ + |10⟩)/√2, |Ψ-⟩ = (|01⟩ - |10⟩)/√2.
The parameter F represents the fidelity of ρ with respect to the target state |Φ+⟩, and the goal is to increase F closer to 1 through distillation.
Protocol Steps:
CNOT Operation: Alice and Bob each take their halves of two shared pairs and locally apply a CNOT gate, with the qubit from the first pair as the control and the qubit from the second pair as the target (|c⟩|t⟩ → |c⟩|t ⊕ c⟩, where ⊕ is addition modulo 2). This step correlates the two copies.
Measurement and Postselection: Alice and Bob each measure their target qubit in the computational (z) basis, obtaining 0 or 1. If both measure the same outcome (both 0 or both 1), they keep the control qubits and discard the targets; otherwise, they discard both pairs. This postselection step succeeds only with some probability, but it increases the fidelity of the remaining entangled pair.
Example Calculation:
After one round, if the initial fidelity of the pairs was F = 0.6, a successful round increases the fidelity of the surviving pair (to roughly 0.62 for Werner-form input pairs), at the cost of consuming the second pair.
If multiple rounds are performed on the surviving pairs, F can approach 1, producing a near-perfect state (see the sketch below).
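The following sketch iterates the standard fidelity recurrence for BBPSSW acting on Werner-form pairs; it is an illustration under that assumption rather than a general implementation, since the exact gain per round depends on the form of the input state:

```python
def bbpssw_round(F):
    """One successful BBPSSW round applied to two Werner-state pairs of fidelity F.
    Returns the fidelity of the kept pair and the probability that the round succeeds."""
    p_success = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    F_new = (F**2 + ((1 - F) / 3) ** 2) / p_success
    return F_new, p_success

F = 0.7
for _ in range(4):                    # iterate on the surviving pairs
    F, p = bbpssw_round(F)
    print(round(F, 4), round(p, 4))   # fidelity climbs toward 1; yield shrinks
```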
DEJMPS Protocol
The DEJMPS protocol is an optimized version of BBPSSW and works especially well for Bell diagonal states.
Setup:
Assume the initial state is Bell diagonal, of the form ρ = p1|Φ+⟩⟨Φ+| + p2|Φ-⟩⟨Φ-| + p3|Ψ+⟩⟨Ψ+| + p4|Ψ-⟩⟨Ψ-|, where the coefficients pi sum to 1 and we assume p1 is the largest coefficient.
Protocol Steps:
Apply Local Unitaries: Alice and Bob apply unitary operations on their qubits to transform the state into a form in which the dominant coefficient p1 can be maximized. This involves bit and phase flips that permute the Bell states without affecting the target state |Φ+⟩.
CNOT Operations: Similar to the BBPSSW protocol, Alice and Bob each apply a CNOT operation between their pairs.
Basis Measurement: After the CNOT, Alice and Bob each measure their target qubit in the computational basis and keep the control pair only when the outcomes coincide, postselecting on successful outcomes.
Example Calculation:
If the initial fidelity is 0.6, a single round of DEJMPS can increase it more effectively than BBPSSW, potentially pushing the fidelity toward 0.9, depending on the values of the remaining coefficients p2, p3, and p4.
Filtering protocol
Filtering protocols apply local filtering operations to probabilistically enhance entanglement without requiring multiple pairs. This approach is useful when operations are limited, such as in photon-based quantum communication.
Protocol Steps:
Consider a noisy entangled state ρ shared between Alice and Bob.
Local Filtering Operators: Alice and Bob apply local filtering operators F_A and F_B to their respective qubits, so that the unnormalized post-filtering state is (F_A ⊗ F_B) ρ (F_A ⊗ F_B)†.
Normalization and Success Probability: After applying the filters, the resulting state is re-normalized by dividing by its trace. The probability of successful filtering (success probability) is p = Tr[(F_A ⊗ F_B) ρ (F_A ⊗ F_B)†].
Resulting Fidelity: Filtering can increase the fidelity of the kept pairs to 0.8 or higher, but it reduces the probability of obtaining this result due to the probabilistic nature of the filter.
Procrustean method
The Procrustean method of entanglement concentration can be used on as few as one partially entangled pair, is more efficient than the Schmidt projection method when fewer than 5 pairs are available, and requires Alice and Bob to know the bias of the n pairs in advance. The method derives its name from Procrustes because it produces a perfectly entangled state by chopping off the extra probability associated with the larger term in the partial entanglement of the pure states.
Assuming a collection of particles for which the bias is known (that is, it is known which outcome is the more likely one), the Procrustean method may be carried out by keeping all particles which, when passed through a polarization-dependent absorber or a polarization-dependent reflector that absorbs or reflects a fraction of the more likely outcome, are not absorbed or deflected. Therefore, if Alice possesses particles whose spins are biased toward one direction, she can filter out part of the more likely component and be left with particles in a maximally mixed state of spin up and spin down. This treatment corresponds to a POVM (positive operator-valued measure). To obtain a perfectly entangled state of two particles, Alice informs Bob of the result of her generalized measurement, while Bob does not measure his particle at all but instead discards his if Alice discards hers.
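A small numerical sketch of this idea, assuming a partially entangled pure state of the form cos θ|00⟩ + sin θ|11⟩ and modeling Alice's polarization-dependent absorber as a local filter that attenuates the more likely amplitude:

```python
import numpy as np

theta = np.pi / 8                                        # partial entanglement: cos > sin
psi = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)]) # cos(t)|00> + sin(t)|11>

# Alice's local filter attenuates her more likely |0> component
K = np.diag([np.tan(theta), 1.0])                        # acts on Alice's qubit only
filtered = np.kron(K, np.eye(2)) @ psi

p_success = float(filtered @ filtered)                   # probability the pair is kept
filtered /= np.sqrt(p_success)                           # renormalize the kept state

print(np.round(filtered, 3))   # ~ (|00> + |11>)/sqrt(2): maximally entangled
print(round(p_success, 3))     # equals 2*sin(theta)**2
```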
Stabilizer protocol
The purpose of an entanglement distillation protocol is to distill k pure ebits from n noisy ebits, where k ≤ n. The yield of such a protocol is k/n. Two parties can then use the noiseless ebits for quantum communication protocols.
The two parties establish a set of shared noisy ebits in the following way. The sender Alice first prepares n Bell states |Φ+⟩ locally. She sends the second qubit of each pair over a noisy quantum channel to a receiver Bob. Let |Φ+⟩^⊗n be the state rearranged so that all of Alice's qubits are on the left and all of Bob's qubits are on the right. The noisy quantum channel applies a Pauli error from an error set E to the set of qubits sent over the channel. The sender and receiver then share a set of noisy ebits of the form (I ⊗ A)|Φ+⟩^⊗n, where the identity I acts on Alice's qubits and A is some Pauli operator in E acting on Bob's qubits.
A one-way stabilizer entanglement distillation protocol uses a stabilizer code for the distillation procedure. Suppose the stabilizer S of an [n, k] quantum error-correcting code has generators s_1, …, s_{n-k}. The distillation procedure begins with Alice measuring the generators in S. Let {Π_i} be the set of projectors that project onto the orthogonal subspaces corresponding to the generators in S. The measurement projects randomly onto one of these subspaces. Each projector commutes with the noisy operator on Bob's side, so that (Π_i ⊗ I)(I ⊗ A)|Φ+⟩^⊗n = (I ⊗ A)(Π_i ⊗ I)|Φ+⟩^⊗n.
The following important Bell-state matrix identity holds for an arbitrary matrix M: (M ⊗ I)|Φ+⟩ = (I ⊗ M^T)|Φ+⟩.
Then the above expression is equal to (I ⊗ A Π_i^T)|Φ+⟩^⊗n.
Therefore, each of Alice's projectors projects Bob's qubits onto a subspace corresponding to Alice's projected subspace. Alice restores her qubits to the simultaneous +1-eigenspace of the generators in S. She sends her measurement results to Bob. Bob measures the generators in S. Bob combines his measurements with Alice's to determine a syndrome for the error. He performs a recovery operation on his qubits to reverse the error, and restores his qubits to the simultaneous +1-eigenspace of the generators in S. Alice and Bob both perform the decoding unitary corresponding to the stabilizer code to convert their k logical ebits to k physical ebits.
Entanglement-assisted stabilizer code
Luo and Devetak provided a straightforward extension of the above protocol (Luo and Devetak 2007). Their method converts an entanglement-assisted stabilizer code into an entanglement-assisted entanglement distillation protocol.
Luo and Devetak form an entanglement distillation protocol that has entanglement assistance from a few noiseless ebits. The crucial assumption for an entanglement-assisted entanglement distillation protocol is that Alice and Bob possess c noiseless ebits in addition to their n noisy ebits. The total state of the noisy and noiseless ebits is
(I ⊗ A)|Φ+⟩^⊗(n+c), where I is the identity matrix acting on Alice's qubits and the noisy Pauli operator A affects only Bob's first n qubits. Thus the last c ebits are noiseless, and Alice and Bob have to correct for errors on the first n ebits only.
The protocol proceeds exactly as outlined in the previous section. The only difference is that Alice and Bob measure the generators of an entanglement-assisted stabilizer code. Each generator spans n + c qubits, where the last c qubits are noiseless.
We comment on the yield of this entanglement-assisted entanglement distillation protocol. An entanglement-assisted code with parameters [[n, k; c]] has generators that each consist of n + c Pauli entries. These parameters imply that the entanglement distillation protocol produces k ebits. But the protocol consumes c initial noiseless ebits as a catalyst for distillation. Therefore, the net yield of this protocol is (k - c)/n.
Entanglement dilution
The reverse process of entanglement distillation is entanglement dilution, where many copies of the Bell state are converted into less entangled states using LOCC with high fidelity. The aim of the entanglement dilution process, then, is to saturate the inverse of the limiting ratio that defines distillable entanglement.
Applications
Besides its important application in quantum communication, entanglement purification also plays a crucial role in error correction for quantum computation, because it can significantly increase the quality of logic operations between different qubits. The role of entanglement distillation is discussed briefly for the following applications.
Quantum error correction
Entanglement distillation protocols for mixed states can be used as a type of error-correction for quantum communications channels between two parties Alice and Bob, enabling Alice to reliably send mD(p) qubits of information to Bob, where D(p) is the distillable entanglement of p, the state that results when one half of a Bell pair is sent through the noisy channel connecting Alice and Bob.
In some cases, entanglement distillation may work when conventional quantum error-correction techniques fail. Entanglement distillation protocols are known which can produce a non-zero rate of transmission D(p) for channels which do not allow the transmission of quantum information due to the property that entanglement distillation protocols allow classical communication between parties as opposed to conventional error-correction which prohibits it.
Quantum cryptography
The concept of correlated measurement outcomes and entanglement is central to quantum key exchange, and therefore the ability to successfully perform entanglement distillation to obtain maximally entangled states is essential for quantum cryptography.
If an entangled pair of particles is shared between two parties, anyone intercepting either particle will alter the overall system, allowing their presence (and the amount of information they have gained) to be determined so long as the particles are in a maximally entangled state. Also, in order to share a secret key string, Alice and Bob must perform the techniques of privacy amplification and information reconciliation to distill a shared secret key string. Information reconciliation is error-correction over a public channel which reconciles errors between the correlated random classical bit strings shared by Alice and Bob while limiting the knowledge that a possible eavesdropper Eve can have about the shared keys. After information reconciliation is used to reconcile possible errors between the shared keys that Alice and Bob possess and limit the possible information Eve could have gained, the technique of privacy amplification is used to distill a smaller subset of bits maximizing Eve's uncertainty about the key.
Quantum teleportation
In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Quantum teleportation is able to achieve faithful transmission of quantum information by substituting classical communication and prior entanglement for a direct quantum channel. Using teleportation, an arbitrary unknown qubit can be faithfully transmitted via a pair of maximally-entangled qubits shared between sender and receiver, and a 2-bit classical message from the sender to the receiver. Quantum teleportation requires a noiseless quantum channel for sharing perfectly entangled particles, and therefore entanglement distillation satisfies this requirement by providing the noiseless quantum channel and maximally entangled qubits.
See also
Quantum channel
Quantum cryptography
Quantum entanglement
Quantum state
Quantum teleportation
LOCC
Purification theorem
Notes and references
Mark M. Wilde, "From Classical to Quantum Shannon Theory", arXiv:1106.1445.
Quantum information science
Statistical mechanics | Entanglement distillation | Physics | 4,830 |
14,778,022 | https://en.wikipedia.org/wiki/Geography%20of%20food | The geography of food is a field of human geography. It focuses on patterns of food production and consumption on the local to global scale. Tracing these complex patterns helps geographers understand the unequal relationships between developed and developing countries in relation to the innovation, production, transportation, retail and consumption of food. It is also a topic that is becoming increasingly charged in the public eye. The movement to reconnect the 'space' and 'place' in the food system is growing, spearheaded by the research of geographers.
History
Spatial variations in food production and consumption practices have been noted for thousands of years. In fact, Plato commented on the destructive nature of agriculture when he referred to the soil erosion from the mountainsides surrounding Athens, stating "[In previous years] Athens yielded far more abundant produce. In comparison of what then was, there are remaining only the bones of the wasted body; all the richer and softer parts of the soil having fallen away, and the mere skeleton of the land being left". Societies beyond those of ancient Greece have struggled under the pressure to feed expanding populations. The people of Easter Island, the Maya of Central America and most recently the inhabitants of Montana have been experiencing similar difficulties in production due to several interconnecting factors related to land and resource management. These events have been extensively studied by geographers and other interested parties (the study of food has not been confined to a single discipline, and has received attention from a huge range of diverse sources).
Modern geographers initially focused on food as an economic activity, especially in terms of agricultural geography. It was not until recently that geographers have turned their attention to food in a wider sense: "The emergence of an agro-food geography that seeks to examine issues along the food chain or within systems of food provision derives, in part, from the strengthening of political economy approaches in the 1980s".
Overlapping areas of study
Food has received attention from both the physical sciences and the social sciences because it is a bridge between the natural and social worlds. Some of the earliest numerical data about food production come from bureaucratic sources linked to the ancient civilizations of Ancient Egypt and the Roman Empire. Traders have also been influential in documenting food networks. Early Indian merchants and traders mapped the location of trading posts associated with food production nodes.
Food production
Food production was the first element of food to receive extensive attention from geographers in the field of cultural geography, particularly in agricultural geography.
Globally, the production of food is unequal. This is because there are two main components involved in sustenance production that are also distributed irregularly. These components are the environmental capacity of the area, and the human capacity. Environmental capacity is its ability ‘to accommodate a particular activity or rate of an activity without unacceptable impact’. The climate, soil types, and availability of water affect it. Human capacity, in relation to food production, is the size of the population and the amount of agricultural skill within that population. When these two are at ideal levels and partnered with financial capital, the creation of intense agricultural infrastructure is possible, as the Green Revolution clearly portrays.
Simultaneously, the ability of a country to produce food is being severely impacted by a plethora of other factors:
Pests are becoming resistant to pesticides, or pesticides may be killing off useful and necessary insects. Examples occur around the globe. Tanzania experienced a particularly severe infestation of armyworms in 2005; at its peak, there were over 1,000 larvae per square meter. In 2009, Liberia declared a state of emergency when invading African armyworm caterpillars triggered what became a regional food crisis. The caterpillars traveled through 65 towns, and 20,000 people were forced to leave their homes, markets, and farms. Losses like this can cost millions to billions, depending on size and duration, and have severe effects on food security. The FAO has created an international team, the Plant Production and Protection Division, which is attempting to 'reduce reliance on pesticides' and 'demonstrate that pesticide use often can be reduced considerably without affecting yields or farmer profits' in these and other hard-struck areas.
Water stress, desertification, and erosion are leading to loss of arable land. Agricultural practices use the bulk of the Earth's fresh water – up to 70 percent – and those numbers are predicted to rise by 50-100 percent by 2025. Countries are being forced to divert more water than ever before to irrigate their land. Hydroelectric dams and mega-canal projects are becoming the new standard for countries like Egypt that can no longer depend on rainfall or natural flood cycles. These water shortages are also becoming a source of conflict between neighboring nations as they live with increasingly high levels of water scarcity. Policy responses to these events could be implemented in order to strengthen the socio-economic growth, human health, and environmental sustainability of these areas. Combining current water limitations with transitions away from practices such as agroforestry and shifting cultivation makes land susceptible to aeolian erosion by weakening soil composition and exposing larger areas of land to destructive wind. Aeolian erosion largely affects arid areas, reducing air quality, polluting water sources, and limiting the fertility of nearby land.
Climate change is creating more extreme weather patterns, and agricultural practices are estimated to cause 10 to 12 percent of greenhouse gas emissions. Warming will increase the previously mentioned rates of desertification and insect activity, and agricultural zones near the equator may be lost. However, due to the uneven warming that will probably occur, higher latitudes are expected to warm at faster rates than other areas of the globe. Scientists are now presenting the idea that areas in Canada and Siberia may become suitable for farming at the industrial scale, and that those areas will be able to compensate for any farmland that is lost at the equator. Conservative estimates place the shift of traditional crops (maize, grain, potatoes) northward at 50 to 70 kilometers a decade. It is also believed that non-traditional crops (berries, sunflowers, melons) could be established on the southern sides of these countries. Changes in climate may force humans to adapt, adopt new practices, and alter old habits to promote success in the uncertain age of climate change ahead.
Food consumption
Criticism of the industrialized food system, regarding its inability to provide nutritious, ecologically sound, equitable food for the world's population, has increased in recent history. Systems that are currently in place focus on providing relatively cheap food to millions, but often cost the Earth in terms of water and soil degradation, local food insecurity, animal welfare, rising obesity and health-related problems, and declining rural communities. Variations in diet and consumption practices on global and regional scales became the focus of geographers and economists with the vastly expanding population and widely publicized famines of the 1960s, and the food riots of 2007-2008 in 60 different countries. Due in part to these events, differences in the caloric intake of food and the composition of an average diet have been estimated and mapped for many countries since the 1960s.
Canada, USA, and Europe consume the highest amount of calories with an average per capita consumption of around 3400 calories daily. The recommended daily caloric intake for men and women living in these areas is 2500 and 2000 respectively. Studies focused on consumption patterns in these areas lay the blame for increased caloric intake on soft drink and fast food consumption, and decreased physical activity. Many developing countries are beginning to follow the leaders in rising caloric intake as they develop further due to increased availability of these high-impact items. Ballooning weight and associated health problems such as high blood pressure, high cholesterol, heart problems, and diabetes are being recorded in skyrocketing numbers.
Globally, consumption is still extremely uneven, with areas such as Sub-Saharan Africa still having some of the lowest rates of caloric intake per capita, often falling below recommended levels. Much of this is due to lack of access to particular foods, which is a leading reason why much of the undernourished population is located in this region. In the world today, there are over 800 million people who are undernourished. The Democratic Republic of the Congo holds the lowest average, at 1,800 calories daily; however, averages do not represent the range of inequality between the best and worst fed people within a region. Currently, steps are being taken to reduce caloric inequality. In parts of South Africa, the government has implemented a widespread electrification system featuring a free electricity allowance, following a study conducted from 1991 to 2002 that found a positive increase in consumption habits within villages given access to electricity. Access to electricity allowed less time to be spent on menial tasks such as gathering firewood, and more time working on higher-level tasks that could increase income. In fact, villages often exceeded their electricity allowances.
See also
Local food
Food security
Right to food
Food rescue
Food speculation
References
External links
History of Thought Wiki: Geography of Food
Geography all the Way: Food Miles
Food Tank: The Food Think Tank
Food Manufacturing: The Leading Source for Food Manufacturing News
Zurich University of Applied Sciences - Research Group Geography of Food
Human geography
Food science
Agriculture | Geography of food | Environmental_science | 1,867 |
2,860,528 | https://en.wikipedia.org/wiki/Pentagrammic%20prism | In geometry, the pentagrammic prism is one of an infinite set of nonconvex prisms formed by square sides and two regular star polygon caps, in this case two pentagrams.
It is a special case of a right prism with a pentagram as base, which in general has rectangular non-base faces. Topologically it is the same as a convex pentagonal prism.
It is the 78th model in the list of uniform polyhedra, as the first representative of uniform star prisms, along with the pentagrammic antiprism, which is the 79th model.
Geometry
It has 7 faces, 15 edges and 10 vertices. This polyhedron is identified with the indexed name U78 as a uniform polyhedron.
The pentagram face has an ambiguous interior because it is self-intersecting. The central pentagon region can be considered interior or exterior, depending on how the interior is defined. One definition of the interior is the set of points from which a ray crosses the boundary an odd number of times; this makes the central pentagon exterior, as every ray beginning within it crosses two edges.
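As an illustrative sketch (the circumradius and prism height below are arbitrary choices), the vertices and edges can be generated directly and the counts above checked against Euler's formula V - E + F = 2:

```python
import numpy as np

n, step, h = 5, 2, 1.0                 # {5/2} pentagram base, assumed prism height 1
angles = 2 * np.pi * np.arange(n) / n

# 10 vertices: the pentagram's five corners at two heights (circumradius 1)
top = [(np.cos(a), np.sin(a), +h / 2) for a in angles]
bottom = [(np.cos(a), np.sin(a), -h / 2) for a in angles]

# 15 edges: five star edges per cap (vertex k to k+2 mod 5) plus five vertical edges
edges = [(k, (k + step) % n) for k in range(n)]              # top cap
edges += [(n + k, n + (k + step) % n) for k in range(n)]     # bottom cap
edges += [(k, n + k) for k in range(n)]                      # side edges

V, E, F = len(top) + len(bottom), len(edges), 2 + n          # 2 pentagram caps + 5 squares
print(V, E, F, V - E + F)              # 10 15 7 2 (Euler characteristic 2)
```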
Gallery
Pentagrammic dipyramid
In geometry, the pentagrammic dipyramid (or bipyramid) is the first of the infinite set of face-transitive star dipyramids containing a star polygon arrangement of edges. It has 10 intersecting isosceles triangle faces. It is topologically identical to the pentagonal dipyramid.
Each star dipyramid is the dual of a star polygon based uniform prism.
Related polyhedra
There are two pentagrammic trapezohedra (or deltohedra), being dual to the pentagrammic antiprism and pentagrammic crossed antiprism respectively, each having intersecting kite-shaped faces (convex or concave), and a total of 12 vertices:
References
External links
http://www.mathconsult.ch/showroom/unipoly/78.html
http://bulatov.org/polyhedra/uniform/u03.html
Paper model of pentagrammic prism
https://web.archive.org/web/20050313234702/http://www.math.technion.ac.il/~rl/kaleido/data/03.html
https://web.archive.org/web/20060211140715/http://www.ac-noumea.nc/maths/amc/polyhedr/no_conv5_.htm
Paper Model (net) Pentagrammic Prism
Uniform polyhedra
Prismatoid polyhedra | Pentagrammic prism | Physics | 559 |
18,075,548 | https://en.wikipedia.org/wiki/Units%20of%20information | A unit of information is any unit of measure of digital data size. In digital computing, a unit of information is used to describe the capacity of a digital data storage device. In telecommunications, a unit of information is used to describe the throughput of a communication channel. In information theory, a unit of information is used to measure information contained in messages and the entropy of random variables.
Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit except for the smallest unit which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes.
For binary hardware, by far the most common hardware today, the smallest unit is the bit, a portmanteau of binary digit, which represents a value that is one of two possible values; typically shown as 0 and 1. The nibble, 4 bits, represents the value of a single hexadecimal digit. The byte, 8 bits, 2 nibbles, is possibly the most commonly known and used base unit to describe data size. The word is a size that varies by and has a special importance for a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two).
Information theory
In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted logb N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely logc b.
Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log2 8 = 3 bits of information. Other units that have been named include:
Base b = 3: the unit is called the "trit", and is equal to log2 3 (≈ 1.585) bits.
Base b = 10: the unit is called the decimal digit, hartley, ban, decit, or dit, and is equal to log2 10 (≈ 3.322) bits.
Base b = e, the base of natural logarithms: the unit is called a nat, nit, or nepit (from Neperian), and is equal to log2 e (≈ 1.443) bits.
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
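A small sketch of the resulting unit conversions, using the change-of-base relation above (the function name is chosen here for illustration):

```python
import math

def convert_information(amount, from_base, to_base):
    """Convert an amount of information between units defined by logarithm bases,
    e.g. bits (base 2), trits (base 3), nats (base e), hartleys (base 10)."""
    return amount * math.log(from_base) / math.log(to_base)

print(convert_information(1, 3, 2))        # 1 trit    ~= 1.585 bits
print(convert_information(1, 10, 2))       # 1 hartley ~= 3.322 bits
print(convert_information(1, math.e, 2))   # 1 nat     ~= 1.443 bits
```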
Units derived from bit
Several conventional names are used for collections or groups of bits.
Byte
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (28) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
Nibble
A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has.
Word, block, and page
Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72 bits or others.
Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.
Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
Multiplicative prefixes
A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits.
Use of metric prefixes is common, but often inaccurate, since binary storage hardware is organized in capacities that are powers of 2, not powers of 10 as the metric prefixes are. In the context of computing, the metric prefixes are therefore often intended to mean something other than their standard values. For example, a kilobyte is often used to mean 1024 bytes even though the standard meaning of kilo is 1000, and mega, which normally means one million, is often used in computing to mean 2^20 = 1,048,576. The discrepancy between the decimal and binary interpretations grows with each larger prefix.
The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes that accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than being based on powers of 1000, these are based on powers of 1024, which is a power of 2.
The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated.
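A short sketch illustrating the difference between the two conventions (the function name and output formatting are illustrative choices):

```python
def format_size(n_bytes, binary=False):
    """Format a byte count with SI (kB, MB, ...) or IEC (KiB, MiB, ...) prefixes."""
    step = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value, unit = float(n_bytes), "B"
    for u in units:
        if value < step:
            break
        value, unit = value / step, u
    return f"{value:.2f} {unit}"

n = 16 * 10**9                        # a drive marketed as "16 GB" (decimal prefixes)
print(format_size(n))                 # 16.00 GB
print(format_size(n, binary=True))    # 14.90 GiB
```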
Size examples
1 bit: Answer to a yes/no question
1 byte: A number from 0 to 255
90 bytes: Enough to store a typical line of text from a book
512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes).
1024 bytes = 1 KiB: A block size in some older UNIX filesystems
2048 bytes = 2 KiB: A CD-ROM sector
4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size.
4 kB: About one page of text from a novel
120 kB: The text of a typical pocket book
1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth)
3 MB: A three-minute song (133 kbit/s)
650–900 MB – a CD-ROM
1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s
16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024)
32/64/128 GB: Three common sizes of USB flash drives
1 TB: The size of a $30 hard disk (as of early 2024)
6 TB: The size of a $100 hard disk (as of early 2022)
16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive
24 TB: The size of $440 (as of early 2024) "video" hard disk drive
32 TB: Largest hard disk drive (as of mid-2024)
100 TB: Largest commercially available solid-state drive (as of mid-2024)
200 TB: Largest solid-state drive constructed (prediction for mid-2022)
1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-states drives).
1.3 ZB: Prediction of the volume of the whole internet in 2016
Obsolete and unusual units
Some notable unit names that are today obsolete or only used in limited contexts.
1 bit: unibit
2 bits: dibit, crumb, quartic digit, quad, semi-nibble, nyp
3 bits: tribit, triad, triade
4 bits: see nibble
5 bits: pentad, pentade,
6 bits: byte (in early IBM machines using BCD alphamerics), hexad, hexade, sextet
7 bits: heptad, heptade
8 bits: octad, octade
9 bits: nonet, rarely used
10 bits: declet, decle
12 bits: slab
15 bits: parcel (on CDC 6600 and CDC 7600)
16 bits: doublet, wyde, parcel (on Cray-1), chawmp (on a 32-bit machine)
18 bits: chomp, chawmp (on a 36-bit machine)
32 bits: quadlet, tetra
64 bits: octlet, octa
96 bits: bentobox (in ITRON OS)
128 bits: hexlet, paragraph (on Intel x86 processors)
256 bytes: page (on Intel 4004, 8080 and 8086 processors, also many other 8-bit processors – typically much larger on many 16-bit/32-bit processors)
6 trits: tryte
combit, comword
See also
File size
ISO 80000-13 (Quantities and units – Part 13: Information science and technology)
References
External links
Representation of numerical values and SI units in character strings for information interchanges
Bit Calculator – Make conversions between bits, bytes, kilobits, kilobytes, megabits, megabytes, gigabits, gigabytes, terabits, terabytes, petabits, petabytes, exabits, exabytes, zettabits, zettabytes, yottabits, yottabytes.
Paper on standardized units for use in information technology
Data Byte Converter
High Precision Data Unit Converters | Units of information | Mathematics | 2,353 |
50,669,027 | https://en.wikipedia.org/wiki/Tommy%20Bonnesen | Tommy Bonnesen (27 March 1873 – 14 March 1935) was a Danish mathematician, known for Bonnesen's inequality.
Bonnesen studied at the University of Copenhagen, where in 1902 he received his PhD with the thesis Analytiske studier over ikke-euklidisk geometri (Analytic studies of non-Euclidean geometry). He was Professor of Descriptive Geometry at the Polytekniske Læreanstalt.
He did research on convex geometry and wrote a book on this subject with his student Werner Fenchel. Bonnesen was an invited speaker at the ICM in 1924 in Toronto and in 1928 in Bologna.
With Harald Bohr he was for many years the co-editor-in-chief of the Matematisk Tidsskrift of the Danish Mathematical Society.
His younger daughter was the theatrical and cinematic star Beatrice Bonnesen (1906–1979). His elder daughter Merete Bonnesen (1901–1980) was a journalist employed by the newspaper Politiken.
Selected publications
Analytiske Studier over ikke-euklidisk Geometri, Kopenhagen 1902
with Werner Fenchel: Theorie der konvexen Körper, Springer 1934, English translation: Theory of convex bodies, Moscow (Idaho), BCS Associates 1987
Les Problèmes des Isopérimètres et des Isépiphanes, Paris, Gauthier-Villars 1929
Extréma liés, Kopenhagen 1931
Sources
Klaus Voss: Integralgeometrie für Stereologie und Bildrekonstruktion, Springer 2007, p.161
References
1873 births
1935 deaths
Geometers
20th-century Danish mathematicians
University of Copenhagen alumni
Academic staff of the Technical University of Denmark | Tommy Bonnesen | Mathematics | 359 |
51,649,142 | https://en.wikipedia.org/wiki/Urban%20air%20mobility | Urban air mobility (UAM) is the use of small, highly automated aircraft to carry passengers or cargo at lower altitudes in urban and suburban areas which have been developed in response to traffic congestion. It usually refers to existing and emerging technologies such as traditional helicopters, vertical-takeoff-and-landing aircraft (VTOL), electrically propelled vertical-takeoff-and-landing aircraft (eVTOL), and unmanned aerial vehicles (UAVs). These aircraft are characterized by the use of multiple electric-powered rotors or fans for lift and propulsion, along with fly-by-wire systems to control them. Inventors have explored urban air mobility concepts since the early days of powered flight. However, advances in materials, computerized flight controls, batteries and electric motors improved innovation and designs beginning in the late 2010s. Most UAM proponents envision that the aircraft will be owned and operated by professional operators, as with taxis, rather than by private individuals.
Urban air mobility is a subset of a broader advanced air mobility (AAM) concept that includes use cases other than intracity passenger transport; NASA describes advanced air mobility as including small drones, electric aircraft, and automated air traffic management, among other technologies, used to perform a wide variety of missions including cargo and logistics. This framing is also supported by the drone market consulting firm Drone Industry Insights, which also includes vertiports in the definition of AAM and UAM.
History
Pre-history
The development of the earliest predecessors of UAM aircraft began in the early 1900s with early concepts of “flying cars” such as Glenn Curtiss's Autoplane, developed in 1917. Three years later, Henry Ford began prototyping “plane cars” as single-seat aircraft, but halted development after a fatal crash in early tests.
One of the first vertical-takeoff-and-landing aircraft (VTOLs) was the 1924 Berliner No. 5. It recorded its best performance when it reached a height of 4.57 m (15 ft) during a one-minute, thirty-five second flight.
Pitcairn, Cierva, Buhl and other manufacturers developed autogyros prototypes.
The Avrocar was a disk-shaped aircraft designed for military use. Initially funded by the Canadian government, the project was dropped due to costs until the U.S. Army and Air Force took over the development of the Avrocar in 1958. The Avrocar encountered issues with both thrust and stability and the project was eventually canceled in 1961.
Helicopters and air taxi services
Beginning in the early 1950s, air operators offered UAM air taxi services via helicopters in a handful of U.S. cities, including New York, Los Angeles, and San Francisco. In 1964, New York Airways (NYA) and Pan American offered more than 30 flights between John F. Kennedy International Airport and Newark Liberty International Airport with stops in Manhattan such as Wall Street. The average cost for a one-way fare was $4–11.
From 1964 to 1968, PanAm offered regular helicopter connections between midtown Manhattan and John F. Kennedy International Airport, allowing passengers to connect directly to their flights from the New York City Pan American building. The service was halted in 1979 after a crash in 1977 killed four people on the roof and one on the ground below. In the 1980s, Trump Shuttle offered helicopter service between Wall Street and LaGuardia Airport, utilizing Sikorsky S-61 helicopters. The service was discontinued in the 1990s after Trump Shuttle was acquired by US Airways. In 1986, Helijet began as a helicopter airline with routes between Vancouver and Victoria in British Columbia.
BLADE, launched in 2014 in New York City, providing helicopter-based air taxi services. BLADE has since launched similar services in the San Francisco Bay Area and Mumbai.
In 2017 Voom, a subsidiary of aircraft maker Airbus, flew more than 15,000 passengers in São Paulo, Brazil using Airbus helicopters. The Voom UAM demonstration program operated for four years and was shut down in March 2020. In 2019, Uber began to offer Uber Copter in Lower Manhattan New York to John F. Kennedy International Airport. Some cities have encouraged the idea of inexpensive, point-to-point air travel as a way of reducing traffic congestion and moving goods.
VTOLs and eVTOLs
By the mid-2000s, aircraft designers were incorporating technologies pioneered in small drones into new aircraft designs for passengers. These technologies included distributed propulsion (the use of multiple rotors or fans), lithium ion batteries, inexpensive accelerometers, miniaturized navigation systems and carbon-fiber construction.
In 2010, Kitty Hawk Corporation, funded by Google Co-founder Larry Page, began development of the Kitty Hawk Flyer. On October 5, 2011, Marcus Leng, Founder of Opener, piloted the first manned flight of a fixed-wing all electric VTOL aircraft. On October 21, 2011, the co-founder and primary designer of Volocopter, Thomas Senkel, flew the first manned flight of an electric multicopter, the Volocopter VC1 prototype. In 2012, Joby Aviation and NASA partnered to prototype an experimental eVTOL. In 2014, The Leading Edge Asynchronous Propeller Technology (LEAPTech) project was launched as a collaboration of NASA Langley Research Center and NASA Armstrong Flight Research Center along with Empirical Systems Aerospace (ESAero) and Joby Aviation.
Lockheed Martin debuted its optionally piloted helicopter, the Sikorsky Autonomous Research Aircraft (SARA), based on the S-76B, in 2019 in downtown Los Angeles. In 2018, the Wisk Cora eVTOL test flight occurred in Mountain View, CA. That same year, Opener flew the BlackFly, a personal air vehicle, after nine years of development. Joby Aviation tested its tilt-rotor UAM vehicle in flight in March 2021. In June 2021, EHang completed the first pilotless test flight of the AAV EHang 216 in Honshu, China. In the same month, Volocopter demonstrated its first public flight of an electric air taxi in France, along with a remote-controlled flight of its eVTOL, the Volocopter 2X. In July 2021, Joby completed a 150-mile flight of its eVTOL on a single battery charge, flying a 14-mile circuit 11 times for a total flight time of one hour and 17 minutes.
Air mobility is progressing in both manned and unmanned (UAV) directions. In Hamburg, the WiNDroVe project (use of drones in a metropolitan area) was implemented from May 2017 through January 2018. In Ingolstadt, Germany, the Urban Air Mobility project began in June 2018, involving Audi, Airbus, the Carisma Research Center, the Fraunhofer Application Center for Mobility, the THI University of Applied Sciences (within the artificial intelligence research network), and other partners. Envisioned uses included emergency services, transport of blood and organs, traffic monitoring, public safety and passenger transport.
The German, Dutch and Belgian cities Maastricht, Aachen, Hasselt, Heerlen and Liège joined the UAM Initiative of the European Innovation Partnership on Smart Cities and Communities (EIP-SCC). Toulouse, France, is participating in the European Urban Air Mobility Initiative. The project is coordinated by Airbus, the European institutional partner Eurocontrol and EASA (European Aviation Safety Agency).
Implementation
The concept was realized in São Paulo, Brazil, with over 15,000 passengers flown by Voom. There, urban air mobility was provided by helicopters. Helicopter air taxis are already available in Mexico City, Mexico. Fast air connections are still associated with high costs, and cause considerable noise and high energy consumption.
The Voom UAM demonstration program operated for four years, and was shut down in March 2020.
Urban-Air Port is a UK Government-sponsored startup R&D firm developing enhanced helipads, with a prototype at Coventry equipped for eVTOLs, PAVs and drones, built in conjunction with Hyundai.
Aircraft
Personal air vehicles (PAVs) are under development for urban air mobility. These include projects such as the CityAirbus demonstrator, the Lilium Jet or the Volocopter, the EHang 216 and the experimental Boeing Passenger Air Vehicle.
In the concept phase, urban air mobility aircraft with VTOL capabilities are designed to take off and land vertically in a relatively small area, avoiding the need for a runway. The majority of designs are electric and use multiple rotors, which reduces noise (by allowing lower rotational speeds) while providing high system redundancy. Many of them have completed their first flight.
The most common configurations of urban air mobility aircraft are multicopters (such as the Volocopter) or so-called tiltwing convertiplane aircraft (e.g. A³ Vahana). The first type uses only rotors with vertical axis, while the second additionally have propulsion and lift systems for horizontal flight (e.g. pressure propeller and wing).
Power source
In order for UAM aircraft to be most efficient, recharging and refueling must be done as quickly as possible, whether that is swapping batteries, fast recharging batteries, or hydrogen refueling.
Conventional fuel
Conventional fossil fuels are readily available and offer high power density (the amount of power produced per kilogram of fuel). However, traditional piston or turbine engines emit smoke and noise. The heavy mechanical linkages needed to distribute power limit the number and configuration of rotors on an aircraft.
Sustainable or synthetic aviation fuel
Synthetic fuels have the potential to produce nearly CO2-neutral energy while utilizing existing refueling infrastructure, but they pose the same challenges as conventional fuel in terms of noise and mechanical limitations.
Electric
Rechargeable batteries are often used in UAVs and eVTOLs. Emerging eVTOL vehicles are limited by the relatively low energy density (energy per unit weight) of current battery technology, as well as by the lack of infrastructure required for recharging stations.
Hybrid-electric
Hybrid-electric systems use a combination of internal combustion engine (ICE) and electric propulsion system components. Different combinations are possible. These systems can provide combined advantages from different energy sources, but still must be viewed in terms of the overall system's efficiency.
Hydrogen fuel cells
Hydrogen fuel cells generate electricity by circulating hydrogen gas through a catalytic membrane. Small fuel cells can power light drones for three times longer than equivalent batteries. Fuel cells are in development for larger aircraft. Experimental regional aircraft retrofitted with fuel cell-electric propulsion systems have flown in 2023. In January 2023, ZeroAvia flew a Dornier 228 with one original Honeywell TPE 331 turboprop engine on the right wing and a proprietary ZeroAvia hydrogen-electric engine on the left wing. In March 2023, Universal Hydrogen's electric Dash 8-300 made its maiden flight.
Propulsion
Common VTOL and eVTOL configurations include:
Multirotor or multicopter
Multirotor aircraft have small wings, or no wings at all. They use downward-facing propellers or fans to generate the majority of their lift.
Lift-plus-cruise
Lift-plus-cruise aircraft utilize vertically mounted propellers for take-off and landing, but a horizontal propeller and wings for sustained cruise flight.
Ducted fans
Ducted fans are a type of propeller mounted within a duct, which optimizes the thrust from the tips of the blades.
Tiltrotors
Tiltrotor aircraft lift exclusively by rigid propeller and have no other horizontal propulsion type. They generate horizontal thrust by physically tilting the rotors into a horizontal position once airborne.
Tiltwing
Tiltwing aircraft are similar to tiltrotor aircraft, but rather than independently rotating the rotors, the entire wing is rotated.
Flight controls
Flight controls consists of flight control surfaces, cockpit controls, and operating mechanisms to control an aircraft's direction in flight. Honeywell, Pipistrel, Vertical Aerospace, Lilium and other companies are collaborating to create new flight controls for a variety of eVTOL aircraft. Honeywell developed a fly-by-wire computer that controls multiple rotors, a detection and avoidance radar to navigate traffic, and software to track landing zones for repeatable vertical landings.
Fly by wire
Fly-by-wire systems translate a pilot's inputs into commands sent to an aircraft's motors, propeller governors, ailerons, elevators and other moving surfaces. They are essential in multirotor designs because human pilots cannot control multiple propellers without computer assistance. In June 2019, Honeywell introduced a miniaturized computer specifically designed for UAM aircraft.
Software
Advanced autonomous eVTOL fleets require management software to scale to profitable levels. Pilot training is expensive, and pilots themselves take up much of an aircraft's payload, so many manufacturers are designing aircraft that can fly autonomously as automation technology improves. Sikorsky is developing MATRIX technology, while Honeywell has partnered with Pipistrel and other manufacturers to develop automatic landing systems for their respective aircraft. Artificial intelligence (AI) and machine learning are necessary to develop autonomous craft, but pose a complication for certification because they are non-deterministic, i.e. they may behave differently given the same input in the same scenario.
Avionics
Avionics are electronic systems designed for aircraft. Honeywell is developing integrated avionics systems comprising a vehicle management system, autonomous navigation, a fly-by-wire control system, and compact satellite connectivity. The avionics are modular and able to integrate with third-party applications. The architecture can also incorporate simplified vehicle operations, which replaces traditional pilot displays with imagery that is similar to a car GPS system or smartphone app.
Infrastructure
UAM requires infrastructure for vehicles to take off, land, be repaired, recharge or refuel, and park. The size of the physical infrastructure determines the market size, as trips can only be completed between established landing areas. While some components can be integrated into existing aviation and aerospace infrastructure, additional facilities need to be constructed. For large cities it is estimated that there could be 85–100 take-off and landing pads to accommodate a UAM environment.
Vertiports
See main article vertiport
According to the FAA, a vertiport is an identifiable ground or elevated area, possibly with associated equipment and facilities, used for the take-off and landing of tiltrotor aircraft and rotorcraft. The industry has used different terms for the various levels of equipment and sizes of these facilities. Vertipads are simple landing pads designed to be used by one aircraft at a time. Vertiports or vertibases can feature one or more final approach and takeoff (FATO) and touch-down and lift-off (TLOF) areas, as well as several VTOL stands and other aircraft and passenger facilities. Vertihubs are the largest facilities in the UAM environment; they can offer services such as fixed-base operators (FBOs) and maintenance, repair and overhaul (MRO) providers, and would serve concentrated high-traffic regions.
In 2020, Lilium announced their plans to construct a vertiport near Orlando International Airport. Joby has partnered with REEF Technology and Neighborhood Property Group (NPG) to use the rooftops of parking structures as take-off and landing areas.
Helipads
Existing helipads, or helicopter landing pads, can be used to accommodate UAM aircraft. Helipads are insufficient to sustain the industry without construction of additional infrastructure or modification of existing helipads.
Airports
Airports are already being used in limited locations to facilitate on-demand helicopter and eVTOL services. Such airports include John Wayne Airport, John F. Kennedy International Airport, and Portland International Airport.
Air traffic management
Unmanned aircraft systems (UAS) traffic management (UTM) is a specific air traffic management system designed around the unique needs of unmanned and low-altitude aircraft. UTM provides the airspace integrations necessary for safe operation through services such as design of the airspace itself, delineation of air corridors, dynamic geofencing to maintain flight paths, weather avoidance, and route planning without continuous human monitoring. Airspace Link developed AirHub, a system to connect cities, states, drone operators, and the FAA into a single space to map out the safest routes for autonomous drones using publicly available flight data.
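As a rough illustration of the geofencing element of such a service, the sketch below checks whether a position lies inside a polygonal corridor using ray casting. The corridor coordinates and units are hypothetical; a production UTM system would work with geodetic coordinates, altitude limits, and time-varying (dynamic) fence definitions.

```cpp
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Ray-casting point-in-polygon test: count crossings of a horizontal ray from p.
bool insideFence(const std::vector<Pt>& poly, Pt p) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        bool straddles = (poly[i].y > p.y) != (poly[j].y > p.y);
        if (straddles) {
            double xCross = poly[j].x + (poly[i].x - poly[j].x) *
                            (p.y - poly[j].y) / (poly[i].y - poly[j].y);
            if (p.x < xCross) inside = !inside;   // edge crossed to the right of p
        }
    }
    return inside;
}

int main() {
    // Hypothetical rectangular corridor segment, in local metres.
    std::vector<Pt> corridor = {{0, 0}, {1000, 0}, {1000, 200}, {0, 200}};
    std::printf("inside: %d, outside: %d\n",
                insideFence(corridor, {500, 100}), insideFence(corridor, {500, 300}));
    return 0;
}
```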
Regulations
Governments around the world have begun debating changes to their airspace rules to accommodate high numbers of autonomous or semi-autonomous aircraft operating at low altitudes. NASA and EASA have proposed concepts for the requirements of a UAM system. NASA's concept of operations, or ConOps, relies on defined corridors for UAM craft, which must then abide by specific protocols when inside the corridor. EASA's regulatory approach leaves local decisions to “local actors” and instead seeks to certify the aircraft themselves for safety. It developed the VTOL special condition to certify a specific class of aircraft that was previously undefined.
Certifications
Aircraft
Aircraft need to be certified as airworthy, as well as registered with the appropriate governing body. Regulations for UAM aircraft are most similar to helicopter regulations but will need additional provisions for electric and/or autonomous craft. The FAA has established certification bases for individual eVTOL designs and classifies eVTOLs as airplanes that can take off and land vertically. EASA released its special condition VTOL certification to separate VTOLs and eVTOLs from conventional rotorcraft or fixed-wing aircraft. Archer Aviation uses a blend of the FAA Part 23, 27, 33, 35, and 36 requirements to certify its eVTOL. BETA applied for eVTOL certification under Part 23 with the FAA, and its aircraft was the first manned eVTOL to receive military airworthiness approval from the Air Force.
Operations
All VTOL and eVTOL aircraft that carry persons or property for hire must be flown by an appropriately certificated operator. Joby applied for an FAA Part 135 certificate to operate its own aircraft for UAM projects. Lilium partnered with Luxaviation to operate eVTOL jets in Europe.
Pilots
Pilots need to be certified to operate both piloted and remotely operated eVTOLs. Pilots can obtain a commercial pilot license (CPL(H)) or an air transport pilot license (ATPL(H)) for manned craft. CAE is developing training programs utilizing data analytics with complex simulators. CAE and BETA partnered to offer eVTOL pilot and maintenance technician training for ALIA eVTOLs. CAE and Volocopter partnered to develop a pilot training program for Volocopter eVTOLs.
Mechanics
Mechanics also need to be certified, but as this is an emerging industry, there are not yet regulations in place to do so for the relevant aircraft and technologies.
Applications
Applications include commute, law enforcement, air medical, fire, private security, and military.
Public acceptance
Public acceptance of UAM relies on a variety of factors, including but not limited to safety, energy consumption, noise, security, and social equity. Safety risks overlap with most current aircraft risks, including the potential for flights outside of approved airspace, proximity to people and/or buildings, critical system failures or loss of control, and hull loss.
In the case of autonomous or remote-piloted aircraft, cybersecurity becomes a risk as well. The type and volume of noise caused by aircraft and rotorcraft are two leading factors in the public perception of eVTOL craft in UAM applications.
Specific security concerns include the physical security of passengers in the absence of crew members and the cybersecurity of both the craft and the systems governing it. In regard to social equity, the high initial costs of UAM services could prove to be detrimental to public opinion, especially as the affordability of services and technologies is not guaranteed. In the NASA UAM market study, respondents with higher incomes were more likely to take UAM trips.
An EASA survey showed that 83% of respondents had a positive attitude towards UAM, while 71% were ready to try UAM services. Projects underway include Lilium's announced plan to create the first U.S. vertihub in Orlando for its on-demand electric jet service and EHang's UAM pilot program in Seville, Spain.
Training and education
In December 2016, the Vertical Lift Research Centers of Excellence (VLRCOE) announced the new academic teams for its program. The joint effort of the United States Army, United States Navy, and NASA aims to foster direct collaboration between the government and academic institutions. Universities were organized into several teams: Georgia Institute of Technology, Iowa State University, Purdue University, University of Michigan, and Washington University; University of Liverpool, Pennsylvania State University, Embry Riddle Aeronautical University, University of California, Davis, and University of Tennessee, University of Maryland, United States Naval Academy, University of Texas at Arlington, University of Texas at Austin, and Texas A&M University; Technical University of Munich, Roma Tre University, and Technion – Israel Institute of Technology.
Volocopter and CAE partnered to create the first eVTOL pilot training and development program in July 2021.
See also
Lists of aviation topics
List of aviation, avionics, aerospace and aeronautical abbreviations
Index of aviation articles
References
Infrastructure
Proposed transport infrastructure
Technology in society | Urban air mobility | Engineering | 4,307 |
43,721,974 | https://en.wikipedia.org/wiki/E-dense%20semigroup |
In abstract algebra, an E-dense semigroup (also called an E-inversive semigroup) is a semigroup in which every element a has at least one weak inverse x, meaning that xax = x. The notion of weak inverse is (as the name suggests) weaker than the notion of inverse used in a regular semigroup (which requires that axa=a).
The above definition of an E-inversive semigroup S is equivalent to each of the following:
for every element a ∈ S there exists another element b ∈ S such that ab is an idempotent.
for every element a ∈ S there exists another element c ∈ S such that ca is an idempotent.
This explains the name of the notion as the set of idempotents of a semigroup S is typically denoted by E(S).
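A short derivation makes the equivalence explicit (a standard argument, included here only as a worked step):
$$
xax = x \;\Longrightarrow\; (ax)^2 = a(xax) = ax \quad\text{and}\quad (xa)^2 = (xax)a = xa,
$$
so a weak inverse $x$ of $a$ yields the idempotents $ax$ and $xa$. Conversely, if $ab$ is idempotent, then $x = bab$ satisfies
$$
xax = (bab)\,a\,(bab) = b\,(ab)^3 = b\,(ab) = bab = x,
$$
so $x$ is a weak inverse of $a$; the left-sided condition is handled symmetrically.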
The concept of E-inversive semigroup was introduced by Gabriel Thierrin in 1955. Some authors use E-dense to refer only to E-inversive semigroups in which the idempotents commute.
More generally, a subsemigroup T of S is said to be dense in S if, for all x ∈ S, there exists y ∈ S such that both xy ∈ T and yx ∈ T.
A semigroup with zero is said to be an E*-dense semigroup if every element other than the zero has at least one non-zero weak inverse. Semigroups in this class have also been called 0-inversive semigroups.
Examples
Any regular semigroup is E-dense (but not vice versa).
Any eventually regular semigroup is E-dense.
Any periodic semigroup (and in particular, any finite semigroup) is E-dense.
See also
Dense set
E-semigroup
References
Further reading
Mitsch, H. "Introduction to E-inversive semigroups." Semigroups (Braga, 1999), 114–135. World Scientific Publishing Co., Inc., River Edge, NJ, 2000.
Semigroup theory
Algebraic structures | E-dense semigroup | Mathematics | 429 |
1,982,434 | https://en.wikipedia.org/wiki/Positron%E2%80%93Electron%20Tandem%20Ring%20Accelerator | The Positron–Electron Tandem Ring Accelerator (PETRA) is one of the particle accelerators at the German national laboratory DESY in Hamburg, Germany. At the time of its construction, it was the biggest storage ring of its kind and still is DESY's second largest synchrotron after HERA. PETRA's original purpose was research in elementary particle physics. From 1978 to 1986, it was used to study electron–positron collisions with the four experiments JADE, MARK-J, PLUTO and TASSO. The discovery of the gluon, the carrier particle of the strong nuclear force, by the TASSO collaboration in 1979 is counted as one of the biggest successes. PETRA was able to accelerate electrons and positrons to 19 GeV.
Research at PETRA led to an intensified international use of the facilities at DESY. Scientists from China, France, Israel, the Netherlands, Norway, the United Kingdom and the USA participated in the first experiments at PETRA alongside many German colleagues.
PETRA II
In 1990, the facility was taken into operation again under the name PETRA II as a pre-accelerator for protons and electrons/positrons for the new particle accelerator HERA. In March 1995, PETRA II was equipped with undulators to create greater amounts of synchrotron radiation with higher energies, especially in the X-ray part of the spectrum. PETRA II served the Hamburg Synchrotron Radiation Laboratory (HASYLAB) at DESY as a source of high-energy synchrotron radiation in three test experimental areas. In PETRA II, positrons were accelerated to up to 12 GeV.
PETRA III
PETRA III is the third incarnation of the PETRA storage ring; since 2009 it has served a regular user programme as one of the most brilliant storage-ring-based X-ray sources worldwide. The accelerator operates at a particle energy of 6 GeV. There are currently three experimental halls (named after various famous scientists). The largest, the Max von Laue Hall, has a concrete floor over 300 m long that was poured as a single piece in order to limit vibrations. PETRA III delivers hard X-ray beams of very high brilliance to over 40 experimental stations.
See also
Materials oscilloscope
References
Further reading
External links
PETRA III website
DESY website
Particle physics facilities
Gluons
Buildings and structures in Altona, Hamburg
Synchrotron radiation facilities
Particle accelerators | Positron–Electron Tandem Ring Accelerator | Materials_science | 486 |
59,799,789 | https://en.wikipedia.org/wiki/Ogbonnaya%20Onu | Ogbonnaya Onu (1 December 1951 – 11 April 2024) was a Nigerian politician, author and engineer. He was the first civilian governor of Abia State and was the minister of science, technology and innovation of Nigeria from November 2015 until his resignation in 2022. He was the longest serving minister of the ministry.
Biography
Ogbonnaya Onu was born on 1 December 1951 to the family of Eze David Aba Onu in Amata, Uburu, Ohaozara Local Government Area (then in the Eastern Region, later Imo State, then Abia State, and now Ebonyi State), Nigeria. He started his education at Izzi High School in Abakaliki, now the Ebonyi State capital. Here, he obtained grade one with distinction in his West African School Certificate Examination. He also sat for the High School Examination at the College of Immaculate Conception (C.I.C.) Enugu, graduating as the overall best student. He proceeded to the University of Lagos and graduated with a first class degree in chemical engineering in 1976. He went for his doctoral studies at the University of California, Berkeley, and obtained a Doctor of Philosophy degree in chemical engineering in 1980. Onu died on 11 April 2024, at the age of 72.
Career
Teaching career
After his graduation from the University of Lagos, Ogbonnaya Onu became a teacher at St. Augustine's Seminary, Ezzamgbo, Ebonyi State. After the completion of his doctoral studies at the University of California, Berkeley, Onu became a lecturer in the Department of Chemical Engineering at the University of Port Harcourt, and later became the pioneer head of the department. He also served as the acting dean of the Faculty of Engineering and was elected as a member of the Governing Council of the university.
Political career
Ogbonnaya Onu started his political career as an aspirant for a senatorial seat in the old Imo State on the platform of the National Party of Nigeria (NPN). He contested for the position of Governor of Abia State in 1991 under the umbrella of the National Republican Convention and won. He was sworn in as the first executive governor of the state in January 1992. He was the first chairman of the Conference of Nigerian Elected Governors. In 1999, he was the presidential flag bearer for the All People's Party but relinquished the position to Olu Falae after a merger of his party with the Alliance for Democracy; Falae lost to Olusegun Obasanjo of the PDP. He became the national party chairman of the All Nigerian People's Party in 2010. In 2013, he and his party (ANPP) successfully merged with the Action Congress of Nigeria (ACN), Congress for Progressive Change (CPC), Democratic People's Party (DPP) and some members of the All Progressives Grand Alliance (APGA) to form the All Progressives Congress (APC). In November 2015, he was appointed Minister of Science and Technology by President Muhammadu Buhari. On 21 August 2019, he was sworn in again as Minister of Science and Technology by President Muhammadu Buhari.
Awards and achievements
Onu was a certified member of the Council for the Regulation of Engineering in Nigeria, a fellow of the Nigerian Academy of Engineering, and a fellow of the Nigerian Society of Chemical Engineers.
In October 2022, a Nigerian national honour of Commander of the Order of the Niger (CON) was conferred on him by President Muhammadu Buhari.
Controversies
Onu said Nigeria would begin local production of pencils by 2018, which he said would provide 400,000 jobs; in 2019, he acknowledged that production had not yet commenced. In 1999, prior to the presidential election and the alliance between the All People's Party and the Alliance for Democracy, Onu was involved in a conflict over both parties picking Olu Falae as the joint presidential flag bearer.
See also
List of people from Ebonyi State
Federal Ministry of Science, Technology and Innovation
Cabinet of Nigeria
References
1951 births
2024 deaths
Governors of Abia State
University of Lagos alumni
University of California, Berkeley alumni
People from Ebonyi State
Writers from Ebonyi State
Nigerian chemical engineers
Chemical engineering academics
Commanders of the Order of the Niger | Ogbonnaya Onu | Chemistry | 859 |
29,657,021 | https://en.wikipedia.org/wiki/Green%20home | A green home is a type of house designed to be environmentally sustainable. Green homes focus on the efficient use of "energy, water, and building materials". A green home may use sustainably sourced, environmentally friendly, and/or recycled building materials. This includes materials like reclaimed wood, recycled metal, and low VOC (volatile organic compound) paints. Additionally, green homes often prioritize energy efficiency by incorporating features, such as high-performance insulation, energy-efficient appliances, and smart home technologies that monitor and optimize energy usage. Water conservation is another important aspect, with green homes often featuring water-saving fixtures, rainwater harvesting systems, and grey water recycling systems to reduce water waste. It may include sustainable energy sources such as solar or geothermal, and be sited to take maximum advantage of natural features such as sunlight and tree cover to improve energy efficiency.
Elements
No government standards define what constitutes a green remodel, beyond non-profit certification. In general, a green home is a house that is built or remodeled in order to conserve "energy or water; improve indoor air quality; use sustainable, recycled or used materials; and produce less waste in the process." This may include buying more energy-efficient appliances or employing building materials that are more efficient in managing temperature.
A green home often incorporates design elements that maximize natural lighting and ventilation, reducing the need for artificial lighting and HVAC systems. Additionally, sustainable landscaping practices, such as native plantings and rainwater harvesting systems, can further enhance the eco-friendliness of the property. Integration of renewable energy sources like solar panels can also contribute to the overall sustainability of the home, reducing reliance on non-renewable energy sources and decreasing greenhouse gas emissions. In essence, a green home strives to minimize its environmental impact throughout its lifecycle, from construction to daily operation and eventual disposal or repurposing of materials.
History
United States
In the United States, the green building movement began in the 1970s, after the price of oil began to increase sharply. In response, researchers began to look into more energy efficient systems.
Early adopters, however, needed persistence to navigate the incomplete and inconsistent information available about what is now known as green building. In 1999, Richard and Katherine Homan began building Dallas, Texas' first comprehensive green home.
The city did not issue a proclamation recognizing it as the city's first comprehensive green home until 2012.
Many organizations were founded in the 1990s to promote green buildings. Some organizations worked to improve consumer knowledge so that they could have more green homes. The International Code Council and the National Association of Home Builders began working in 2006 to create a "voluntary green home building standard".
The Energy Policy Act was enacted in 2005, which allowed tax reductions for homeowners who could show the use of energy efficient changes to their homes, such as solar panels and other solar-powered devices.
Certifications
Various types of certifications certify a home as a green home.
United States
LEED: The U.S. Green Building Council has a green certification titled Leadership in Energy and Environmental Design. The factors that it considers include "the site location, use of energy and water, incorporation of healthier building and insulation materials, recycling, use of renewable energy, and protection of natural resources". LEED offers a specific certification track for residential buildings, known as LEED for Homes. This program assesses the environmental performance of single-family homes, multifamily buildings, and mixed-use developments, considering factors such as location, water efficiency, materials selection, and indoor air quality.
Model Green Home Building Guidelines: The US National Association of Home Builders independently created its Model Green Home Building Guidelines as a type of certification, along with programs for utilities.
ENERGY STAR: In the United States, the ENERGY STAR program, administered by the Environmental Protection Agency (EPA), certifies homes that meet stringent energy efficiency standards. ENERGY STAR-certified homes are designed to use less energy for heating, cooling, and water heating, resulting in lower utility bills and reduced greenhouse gas emissions.
Passive House: The Passive House Institute and Passive House Institute US (PHIUS+) standards focus on designing and constructing ultra-low energy buildings that require very little energy for heating or cooling. Passive House certification requires rigorous adherence to specific energy performance criteria, including airtightness, high-quality insulation, and mechanical ventilation with heat recovery.
International
Living Building Challenge: This certification program, administered by the International Living Future Institute, goes beyond traditional green building standards by emphasizing regenerative design principles. Buildings certified under the Living Building Challenge must meet strict criteria related to energy, water, materials, equity, and beauty, and must demonstrate net-positive impacts on the environment and community.
BREEAM (Building Research Establishment Environmental Assessment Method): Originating in the United Kingdom, BREEAM is a widely recognized green building certification system used internationally. It assesses the environmental performance of buildings based on criteria such as energy and water use, materials selection, waste management, and ecological impact.
Australia
Green Star: Green Star is an Australian sustainability rating system for buildings and communities, developed by the Green Building Council of Australia. It evaluates the environmental attributes of buildings across categories such as energy efficiency, indoor environmental quality, transport, and innovation.
India
IGBC-Certified Green Home: The Indian Green Building Council (IGBC), part of the Confederation of Indian Industry, was formed in 2001. The council offers a wide array of services, including developing new green building rating programmes, certification services and green building training programs.
Climate Smart Homes
Climate Smart Homes is a housing initiative developed by B.H.R.T. Infra Private Limited. These homes are designed for sustainable living, countering the challenges posed by climate change on housing and residents, such as heatwaves and cold waves, while promoting energy efficiency and environmental sustainability. India's first Climate Smart Home is located in Ludhiana, Punjab, on a 20-by-50-foot east-facing plot, with a total built-up area of 2,500 square feet. The design incorporates Vastu Shastra principles and is specifically tailored to India's Composite Climatic Zone, utilizing government data on weather patterns and temperature trends.
Features
Climate Smart Homes incorporate various design features intended to adapt to India’s diverse climate zones and mitigate the impacts of extreme weather. Key characteristics include:
Climatic Zone Adaptation: Tailored to India’s geographical and climatic diversity, these homes factor in site-specific conditions and anticipated climatic changes.
Thermal Protection: The homes utilize construction materials with low thermal conductivity, featuring walls and roofs with U-values of 0.175 W/m²K and 0.235 W/m²K, respectively, to reduce energy consumption and thermal stress (a worked heat-loss example follows this list).
UV Ray Mitigation: Orientation-specific designs are used to block ultraviolet rays, contributing to healthier indoor environments.
Natural Ventilation and Daylight: Techniques such as cross ventilation, stack ventilation, and strategic window placement enhance airflow and natural lighting, reducing dependence on mechanical systems.
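As a rough, worked illustration of what such U-values imply (the wall area and temperature difference below are assumed figures chosen only for the arithmetic), steady-state conductive heat loss through a building element is $Q = U \cdot A \cdot \Delta T$:
$$
Q_{\text{wall}} = 0.175\,\tfrac{\text{W}}{\text{m}^2\,\text{K}} \times 10\,\text{m}^2 \times 20\,\text{K} = 35\,\text{W},
$$
compared with roughly $300\,\text{W}$ for an uninsulated wall of the same area at an assumed U-value of $1.5\,\tfrac{\text{W}}{\text{m}^2\,\text{K}}$.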
Health and Environmental Benefits
The initiative aims to address both human health and environmental sustainability. Key benefits include:
Improved indoor comfort during extreme temperatures through thermally efficient roofs and walls.
Enhanced natural lighting to support Vitamin D synthesis and reduce reliance on artificial lighting.
Water-efficient systems that conserve resources while promoting sustainability.
Impacts on Health
Climate Smart Homes are designed to mitigate health risks associated with extreme weather. These include:
Reducing heat-related conditions such as dehydration and heatstroke.
Minimizing cardiovascular and respiratory stress during extreme weather.
Alleviating mental health challenges associated with climatic stress through improved living conditions.
Broader Context
The Climate Smart Homes initiative aligns with national programs such as Make in India and Atmanirbhar Bharat, and contributes to India’s Vision 2047 and global climate goals. The project is led by Taniksh Gaur Verma, who has certifications in climate change from the United Nations. The initiative seeks to promote sustainable and resilient housing for all sectors of society.
Global examples
Earthship Biotecture (Taos, New Mexico, USA): Earthships are a unique type of sustainable home pioneered by architect Michael Reynolds. These homes are built using recycled materials such as tires, bottles, and cans, and they utilize passive solar heating, natural ventilation, and rainwater harvesting systems to achieve off-grid living. The thick walls made of rammed earth or tires provide excellent thermal mass, helping to regulate indoor temperatures year-round. Earthships often incorporate greenhouse spaces for food production, further enhancing their self-sufficiency and sustainability.
The Zero Carbon House (Birmingham, UK): The Zero Carbon House, also known as the 'Balsall Heath House,' is an innovative example of sustainable retrofitting. Originally a Victorian terraced house, it was transformed into a zero-carbon dwelling through extensive renovation and the integration of energy-efficient technologies. The house features high levels of insulation, triple-glazed windows, airtight construction, and rooftop solar panels for renewable energy generation. It also incorporates passive design principles to minimize energy demand while maximizing comfort for occupants. The Zero Carbon House serves as a model for reducing carbon emissions in existing urban housing stock.
The Bosco Verticale (Vertical Forest) (Milan, Italy): The Bosco Verticale is a pair of residential towers designed by Stefano Boeri Architects. What makes these buildings unique is their extensive greenery, with thousands of trees and shrubs planted on balconies at every level. The vegetation helps to absorb carbon dioxide, produce oxygen, filter pollutants, and regulate indoor temperatures, reducing the buildings' environmental impact and enhancing urban biodiversity. The Bosco Verticale demonstrates how high-density urban living can be combined with nature to create sustainable and aesthetically pleasing living spaces.
See also
Biophilic design
Blue roof
Building-integrated photovoltaics
Ecohouse
Ecological design
Ecovillage
Energy-plus building
Green roof
Natural building
Passive solar building design
Rainwater harvesting
Rainwater tank
Sustainable architecture
Sustainable city
Sustainable design
Zero-energy building
References
Jones, Sarah. "Sustainable Living: Green Homes and Eco-Friendly Lifestyles." Earth Books, 2019.
Smith, John. "The Green Home Handbook: Eco-Friendly Homes for the Future." Green Press, 2020.
Home
Sustainable building
Low-energy building
Climate smart home | Green home | Engineering | 2,103 |
1,783,512 | https://en.wikipedia.org/wiki/Bonini%27s%20paradox | Bonini's paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).
Statements
In modern discourse, the paradox was articulated by John M. Dutton and William H. Starbuck: "As a model of a complex system becomes more complete, it becomes less understandable. Alternatively, as a model grows more realistic, it also becomes just as difficult to understand as the real-world processes it represents."
This paradox may be used by researchers to explain why complete models of the human brain and thinking processes have not been created and will undoubtedly remain difficult for years to come.
The same paradox was expressed earlier by the philosopher-poet Paul Valéry (1871–1945): "Ce qui est simple est toujours faux. Ce qui ne l’est pas est inutilisable". ("If it's simple, it's always false. If it's not, it's unusable.")
The same topic has also been discussed by Richard Levins in his classic essay "The Strategy of Model Building in Population Biology", stating that complex models have 'too many parameters to measure, leading to analytically insoluble equations that would exceed the capacity of our computers, but the results would have no meaning for us even if they could be solved'.
Related issues
Bonini's paradox can be seen as a case of the map–territory relation: simpler maps are less accurate though more useful representations of the territory. An extreme form is given in the fictional stories Sylvie and Bruno Concluded and "On Exactitude in Science", which imagine a map of a scale of 1:1 (the same size as the territory), which is precise but unusable, illustrating one extreme of Bonini's paradox.
Isaac Asimov's fictional science of "Psychohistory" in his Foundation series also faces this dilemma; Asimov even had one of his psychohistorians discuss the paradox.
See also
References
Eponymous paradoxes
Systems biology | Bonini's paradox | Biology | 429 |
2,667,574 | https://en.wikipedia.org/wiki/MailSlot | A Mailslot is a one-way interprocess communication mechanism, available on the Microsoft Windows operating system, that allows communication between processes both locally and over a network. The use of Mailslots is generally simpler than named pipes or sockets when a relatively small number of relatively short messages are expected to be transmitted, such as for example infrequent state-change messages, or as part of a peer-discovery protocol. The Mailslot mechanism allows for short message broadcasts ("datagrams") to all listening computers across a given network domain.
Features
Mailslots function as a server-client interface. A server can create a Mailslot, and a client can write to it by name. Only the server can read the mailslot; as such, mailslots represent a one-way communication mechanism. A server-client interface could consist of two processes communicating locally or across a network. Remote mailslots operate over the SMB protocol and work across all computers in the same network domain. Mailslots offer no confirmation that a message has been received. Mailslots are generally a good choice when one client process must broadcast a message to multiple server processes.
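The following is a minimal, self-contained sketch of this interface using the documented Win32 calls (CreateMailslot to create the server end, CreateFile/WriteFile to write from a client, ReadFile to read on the server). The mailslot name is a placeholder, and for brevity both ends run in one process; a real client would typically run in another process or on another machine and open the slot as \\computername\mailslot\name.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Server side: only this handle can be read from.
    HANDLE slot = CreateMailslotA("\\\\.\\mailslot\\example_slot",   // placeholder name
                                  0,                        // no maximum message size
                                  MAILSLOT_WAIT_FOREVER,    // block on reads until a message arrives
                                  nullptr);
    if (slot == INVALID_HANDLE_VALUE) return 1;

    // Client side: open the same mailslot by name for writing only.
    HANDLE client = CreateFileA("\\\\.\\mailslot\\example_slot", GENERIC_WRITE,
                                FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                                FILE_ATTRIBUTE_NORMAL, nullptr);
    if (client != INVALID_HANDLE_VALUE) {
        const char msg[] = "state changed";
        DWORD written = 0;
        WriteFile(client, msg, sizeof(msg), &written, nullptr);
        CloseHandle(client);
    }

    // Back on the server: each WriteFile arrives as one discrete message.
    char buf[128] = {};
    DWORD bytesRead = 0;
    if (ReadFile(slot, buf, sizeof(buf), &bytesRead, nullptr))
        std::printf("received %lu bytes: %s\n", bytesRead, buf);
    CloseHandle(slot);
    return 0;
}
```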
Uses
The most widely known use of the Mailslot IPC mechanism is the Windows Messenger service that is part of the Windows NT-line of products, including Windows XP. The Messenger Service, not to be confused with the MSN Messenger internet chat service, is essentially a Mailslot server that waits for a message to arrive. When a message arrives it is displayed in a popup onscreen. The NET SEND command is therefore a type of Mailslot client, because it writes to specified mailslots on a network.
A number of programs also use Mailslots to communicate. Generally these are amateur chat clients and other such programs. Commercial programs usually prefer pipes or sockets.
Mailslots are implemented as files in a mailslot file system (MSFS). Examples of Mailslots include:
MAILSLOT\Messngr - Microsoft NET SEND Protocol
MAILSLOT\Browse - Microsoft Browser Protocol
MAILSLOT\Alerter
MAILSLOT\53cb31a0\UnimodemNotifyTSP
MAILSLOT\HydraLsServer - Microsoft Terminal Services Licensing
MAILSLOT\CheyenneDS - CA BrightStor Discovery Service
External links
Mailslots (MSDN Documentation)
Using Mailslots for Interprocess Communication
Using a Mailslot to read/write data over a network
The beginning of the end of Remote Mailslots
Inter-process communication | MailSlot | Technology | 525 |
46,024 | https://en.wikipedia.org/wiki/Abraham%20Robinson | Abraham Robinson (born Robinsohn; October 6, 1918 – April 11, 1974) was a mathematician who is most widely known for development of nonstandard analysis, a mathematically rigorous system whereby infinitesimal and infinite numbers were reincorporated into modern mathematics. Nearly half of Robinson's papers were in applied mathematics rather than in pure mathematics.
Biography
He was born to a Jewish family with strong Zionist beliefs, in Waldenburg, Germany, which is now Wałbrzych, in Poland. In 1933, he emigrated to the British Mandate of Palestine, where he earned a first degree from the Hebrew University. Robinson was in France when the Nazis invaded during World War II, and escaped by train and on foot, being alternately questioned by French soldiers suspicious of his German passport and asked by them to share his map, which was more detailed than theirs. While in London, he joined the Free French Air Force and contributed to the war effort by teaching himself aerodynamics and becoming an expert on the airfoils used in the wings of fighter planes.
After the war, Robinson worked in London, Toronto, and Jerusalem, but ended up at the University of California, Los Angeles in 1962.
Work in model theory
He became known for his approach of using the methods of mathematical logic to attack problems in analysis and abstract algebra. He "introduced many of the fundamental notions of model theory". Using these methods, he found a way of using formal logic to show that there are self-consistent nonstandard models of the real number system that include infinite and infinitesimal numbers. Others, such as Wilhelmus Luxemburg, showed that the same results could be achieved using ultrafilters, which made Robinson's work more accessible to mathematicians who lacked training in formal logic. Robinson's book Non-standard Analysis was published in 1966. Robinson was strongly interested in the history and philosophy of mathematics, and often remarked that he wanted to get inside the head of Leibniz, the first mathematician to attempt to articulate clearly the concept of infinitesimal numbers.
While at UCLA his colleagues remember him as working hard to accommodate PhD students of all levels of ability by finding them projects of the appropriate difficulty. He was courted by Yale, and after some initial reluctance, he moved there in 1967. In the Spring of 1973 he was a member of the Institute for Advanced Study. He died of pancreatic cancer in 1974.
See also
Notes
Publications
References
External links
Kutateladze S.S., Abraham Robinson, the creator of nonstandard analysis
1918 births
1974 deaths
20th-century American mathematicians
Alumni of the University of London
20th-century German mathematicians
Jewish emigrants from Nazi Germany to Mandatory Palestine
Jews who emigrated to escape Nazism
German emigrants to the United States
People from Wałbrzych
University of California, Los Angeles faculty
Yale University faculty
Brouwer Medalists
Mathematical logicians
Model theorists
Institute for Advanced Study visiting scholars
Yale Sterling Professors | Abraham Robinson | Mathematics | 596 |
18,339,791 | https://en.wikipedia.org/wiki/Akanbe | Akanbe (, and ) is a Japanese facial gesture indicating sarcasm but also used as a taunt, especially by children. It consists of someone pulling down one's lower eyelid to expose the red underside towards someone, often accompanied by the person sticking their tongue out.
The word "akanbe" is also used as an interjection, generally expressing disapproval or displeasure. It can be used as a noun, describing a pest who meddles in other people's affairs.
In addition, akanbe is a technique in image composition for animating the hand gesture of a character in anime, comics, or manga. It involves making a character raise their index finger to their eye and making a V-shaped mouth with their lips. This technique is commonly used by characters in the medium when they are angry, surprised, or in disbelief.
See also
Eyelid pull
References
Culture of Japan
gestures
humour
human eye
Facial expressions | Akanbe | Biology | 189 |
40,890,162 | https://en.wikipedia.org/wiki/Cross-domain%20interoperability | Cross-domain interoperability exists when organizations or systems from different domains interact in information exchange, services, and/or goods to achieve their own or common goals. Interoperability is the method of systems working together (inter-operate). A domain in this instance is a community with its related infrastructure, bound by common purpose and interests, with consistent mutual interactions or rules of engagement that is separable from other communities by social, technical, linguistic, professional, legal or sovereignty related boundaries. The capability of cross-domain interoperability is becoming increasingly important as business and government operations become more global and interdependent. Cross-domain interoperability enables synergy, extends product utility and enables users to be more effective and successful within their own domains and the combined effort.
Cross-domain interoperability is characterized by common understanding and agreements on both sides of a domain boundary that enable individual organizations to tailor or make their products, assets or services interoperable within the larger community. Each participant accepts and enforces use of mutual, domain-wide or worldwide standards and interface protocols. Consequently, cross-domain interfaces may not be under the control of any single element or authority -- unlike an integrated system-of-systems environment where one domain or its authority may control the interfaces to be used between domains.
Two examples of activities that can benefit when information systems are interoperable across domains are disaster response work (such as the 2013 typhoon relief in Philippines) and multi-national peacekeeping missions (such as the Allied Forces support of France during the 2012–2013 conflict in Mali). Another effort where cross-domain interoperability will be critical to overall success is implementation of the U.S. Affordable Care Act, in which federal and state governments, insurance companies and healthcare providers perform their individual functions using a variety of networks and divergent computer platforms – an interoperable environment will enable participants in these different domains to effectively exchange information and perform their essential services, while protecting the privacy and rights of individual patients during the exchange. The healthcare-related community has begun to focus on establishing cross-domain interoperability, but not yet on a large-scale basis.
Cloud computing promotes communication and collaboration, but connecting to the Internet and migrating information to a cloud or group of clouds does not guarantee cross-domain interoperability. Just because the organizations are all connected to the Internet does not mean that cross-domain interoperability automatically happens. Eliminating technological barriers and enabling information sharing and collaboration involves not only designing and building computer programs and environments so they interoperate, but also having cooperative agreements in place regarding management and administrative policies governing issues such as security, user identification, trust and information assurance. Internal policies and government regulations also have an impact and can either promote or impede cross-domain interoperability. To establish cross-domain interoperability, there needs to be a spirit of cooperation among the different participants, and domains must have agreed-to standards, translations and other interface conversions that enable each entity to exchange information and extract the data it needs in order to perform its role and to contribute knowledge that adds value to the overall mission.
A number of organizations, businesses, and institutions work on the technology and policies to make cross-domain interoperability a reality, including National Institute of Standards and Technology, United States Department of Defense, NATO, and Network Centric Operations Industry Consortium (NCOIC). NCOIC has a number of resources for government and industry to foster cross-domain interoperability, including the open process, NCOIC Rapid Response Capability (NRRC™), which was first designed for the National Geospatial-Intelligence Agency.
References
Interoperability | Cross-domain interoperability | Engineering | 743 |
22,304,099 | https://en.wikipedia.org/wiki/Static%20discipline | In a digital circuit or system, static discipline is a guarantee on logical elements that "if inputs meet valid input thresholds, then the system guarantees outputs will meet valid output thresholds", named by Stephen A. Ward and Robert H. Halstead in 1990, but practiced for decades earlier.
The valid output threshold voltages VOH (output high) and VOL (output low), and valid input thresholds VIH (input high) and VIL (input low), satisfy a robustness principle such that
VOL < VIL < VIH < VOH
with sufficient noise margins in the inequalities.
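These margins are usually quantified as noise margins, $NM_L = V_{IL} - V_{OL}$ and $NM_H = V_{OH} - V_{IH}$. For example, with the classic TTL threshold levels (typical datasheet limits, used here only to illustrate the arithmetic):
$$
NM_L = V_{IL} - V_{OL} = 0.8\,\text{V} - 0.4\,\text{V} = 0.4\,\text{V},
\qquad
NM_H = V_{OH} - V_{IH} = 2.4\,\text{V} - 2.0\,\text{V} = 0.4\,\text{V}.
$$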
References
External links
MIT course 6002x section on static discipline
Digital electronics | Static discipline | Technology,Engineering | 142 |
36,579,651 | https://en.wikipedia.org/wiki/Luteal%20support | Luteal support is the administration of medication, generally progesterone, progestins, hCG or GnRH agonists, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. It can be combined with for example in vitro fertilization and ovulation induction.
Progesterone appears to be the best method of providing luteal phase support, with a relatively higher live birth rate than placebo, and a lower risk of ovarian hyperstimulation syndrome (OHSS) than hCG. Addition of other substances such as estrogen or hCG does not seem to improve outcomes.
Progesterone and progestins
The live birth rate is significantly higher with progesterone for luteal support in IVF cycles with or without intracytoplasmic sperm injection (ICSI). Co-treatment with GnRH agonists further improves outcomes, with a live birth rate risk difference (RD) of +16% (95% confidence interval +10 to +22%).
Routes and formulations
There is no evidence of any route of administration of progesterone or progestins being more beneficial than others for luteal support. The main ones are:
Oral administration of progesterone or progestin pills. Oral administration of progestins provides at least similar live birth rate than vaginal progesterone capsules when used for luteal support in embryo transfer, with no evidence of increased risk of miscarriage.
Intravaginal administration of gel, tablets or other inserts, such as endometrin. A weekly vaginal ring is an effective and safe method for intravaginal administration.
Intramuscular administration. Daily intramuscular injections of progesterone-in-oil (PIO) have been the standard route of administration, but are not FDA-approved for use in pregnancy.
Time of initiation
The time for beginning luteal support can be put in relation to various events:
In IVF, generally somewhere between the evening of oocyte retrieval and day 3 after oocyte retrieval, with weak evidence indicating that 2 days after oocyte retrieval may be optimal.
In artificial insemination, luteal support is generally started on the day of insemination, or 1 to 2 days after.
Duration
Luteal support given for a shorter duration than 7 weeks results in an increased risk of miscarriage in women with a dysfunctional corpus luteum (as can be diagnosed by blood tests for endogenous progesterone). In general, however, luteal support can safely be discontinued at the time of a positive pregnancy test (approximately 2 weeks after fertilization).
Other substances tested in luteal phase
The addition of estrogen or hCG as adjunctives to progesterone do not appear to affect outcomes pregnancy rate and live birth rate in IVF. In fact, luteal support with human chorionic gonadotropin (hCG) alone or as a supplement to progesterone has been associated with a higher risk of ovarian hyperstimulation syndrome (OHSS). Low molecular weight heparin as luteal support may improve the live birth rate but has substantial side effects and has no reliable data on long-term effects. Glucocorticoids such as cortisol has limited evidence of efficacy as luteal support.
References
Assisted reproductive technology | Luteal support | Biology | 724 |
12,928,257 | https://en.wikipedia.org/wiki/Detinets | A detinets ( ), dytynets ( ) or detinetz ( ) is an ancient Rus' city-fort or central fortified part of a city, similar to the meaning of kremlin or citadel. The term was used in many regions of Kievan Rus', including Chernihiv, Novgorod, and Kyiv.
Old Russian manuscripts mention detinets in various places of Kievan Rus' from the end of the 11th century. From the 13th to the 14th century, the term was used only in the Russian Pskov–Novgorod region.
The origin of the term is uncertain. Some derive it from the Old East Slavic word deti ("children"), suggesting the fort was used to shelter children and other less able people during a siege. Polish philologist Lucyjan Malinowski derived a similar-sounding Polish word meaning "courtyard" from detinets.
See also
Novgorod Detinets, a fortified complex in Veliky Novgorod, Russia
Dytynets Park, a park in Chernihiv, Ukraine
References
Engineering barrages
Kremlins | Detinets | Engineering | 221 |
3,313,527 | https://en.wikipedia.org/wiki/Geometric%20graph%20theory | Geometric graph theory in the broader sense is a large and amorphous subfield of graph theory, concerned with graphs defined by geometric means. In a stricter sense, geometric graph theory studies combinatorial and geometric properties of geometric graphs, meaning graphs drawn in the Euclidean plane with possibly intersecting straight-line edges, and topological graphs, where the edges are allowed to be arbitrary continuous curves connecting the vertices; thus, it can be described as "the theory of geometric and topological graphs" (Pach 2013). Geometric graphs are also known as spatial networks.
Different types of geometric graphs
A planar straight-line graph is a graph in which the vertices are embedded as points in the Euclidean plane, and the edges are embedded as non-crossing line segments. Fáry's theorem states that any planar graph may be represented as a planar straight line graph. A triangulation is a planar straight line graph to which no more edges may be added, so called because every face is necessarily a triangle; a special case of this is the Delaunay triangulation, a graph defined from a set of points in the plane by connecting two points with an edge whenever there exists a circle containing only those two points.
The 1-skeleton of a polyhedron or polytope is the set of vertices and edges of said polyhedron or polytope. The skeleton of any convex polyhedron is a planar graph, and the skeleton of any k-dimensional convex polytope is a k-connected graph. Conversely, Steinitz's theorem states that any 3-connected planar graph is the skeleton of a convex polyhedron; for this reason, this class of graphs is also known as the polyhedral graphs.
A Euclidean graph is a graph in which the vertices represent points in the plane, and each edge is assigned the length equal to the Euclidean distance between its endpoints. The Euclidean minimum spanning tree is the minimum spanning tree of a Euclidean complete graph. It is also possible to define graphs by conditions on the distances; in particular, a unit distance graph is formed by connecting pairs of points that are a unit distance apart in the plane. The Hadwiger–Nelson problem concerns the chromatic number of these graphs.
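As a small, concrete sketch of the unit distance graph just described (the point set is arbitrary and the floating-point tolerance is an implementation detail), the brute-force construction below connects every pair of points at Euclidean distance exactly one:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Point { double x, y; };

int main() {
    // The four corners of a unit square: the sides are unit-distance edges,
    // the diagonals (length sqrt(2)) are not, so the result is a 4-cycle.
    std::vector<Point> pts = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
    const double eps = 1e-9;    // tolerance for floating-point comparison
    std::vector<std::pair<int, int>> edges;
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            double d = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y);
            if (std::fabs(d - 1.0) < eps)
                edges.push_back({static_cast<int>(i), static_cast<int>(j)});
        }
    for (const auto& e : edges) std::printf("edge %d-%d\n", e.first, e.second);
    return 0;
}
```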
An intersection graph is a graph in which each vertex is associated with a set and in which vertices are connected by edges whenever the corresponding sets have a nonempty intersection. When the sets are geometric objects, the result is a geometric graph. For instance, the intersection graph of line segments in one dimension is an interval graph; the intersection graph of unit disks in the plane is a unit disk graph. The Circle packing theorem states that the intersection graphs of non-crossing circles are exactly the planar graphs. Scheinerman's conjecture (proven in 2009) states that every planar graph can be represented as the intersection graph of line segments in the plane.
A Levi graph of a family of points and lines has a vertex for each of these objects and an edge for every incident point-line pair. The Levi graphs of projective configurations lead to many important symmetric graphs and cages.
The visibility graph of a closed polygon connects each pair of vertices by an edge whenever the line segment connecting the vertices lies entirely in the polygon. It is not known how to test efficiently whether an undirected graph can be represented as a visibility graph.
A partial cube is a graph for which the vertices can be associated with the vertices of a hypercube, in such a way that distance in the graph equals Hamming distance between the corresponding hypercube vertices. Many important families of combinatorial structures, such as the acyclic orientations of a graph or the adjacencies between regions in a hyperplane arrangement, can be represented as partial cube graphs. An important special case of a partial cube is the skeleton of the permutohedron, a graph in which vertices represent permutations of a set of ordered objects and edges represent swaps of objects adjacent in the order. Several other important classes of graphs, including median graphs, have related definitions involving metric embeddings.
A flip graph is a graph formed from the triangulations of a point set, in which each vertex represents a triangulation and two triangulations are connected by an edge if they differ by the replacement of one edge for another. It is also possible to define related flip graphs for partitions into quadrilaterals or pseudotriangles, and for higher-dimensional triangulations. The flip graph of triangulations of a convex polygon forms the skeleton of the associahedron or Stasheff polytope. The flip graph of the regular triangulations of a point set (projections of higher-dimensional convex hulls) can also be represented as a skeleton, of the so-called secondary polytope.
See also
Topological graph theory
Chemical graph
Spatial network
References
External links | Geometric graph theory | Mathematics | 1,001 |
18,022,186 | https://en.wikipedia.org/wiki/SunEdison | SunEdison, Inc. (formerly MEMC Electronic Materials) is a renewable energy company headquartered in the U.S. In addition to developing, building, owning, and operating solar power plants and wind energy plants, it also manufactures high-purity polysilicon, monocrystalline silicon ingots, silicon wafers, solar modules, solar energy systems, and solar module racking systems. Originally a silicon-wafer manufacturer established in 1959 as the Monsanto Electronic Materials Company, the company was sold by Monsanto in 1989.
It is one of the leading solar-power companies worldwide, and with its acquisition of wind-energy company First Wind in 2014, SunEdison became the leading renewable energy development company in the world. In 2015, SunEdison sold off its subsidiary SunEdison Semiconductor, marking the completion of SunEdison's transition from a semiconductor-wafer company to a dedicated renewable-energy corporation.
Following years of major expansion and the announcement of the intent—which eventually fell through—to acquire the residential-rooftop solar company Vivint Solar in 2015, SunEdison's stock plummeted, and its more than $11 billion in debt caused it to file for Chapter 11 bankruptcy protection on April 21, 2016, eventually emerging in December 2017 as a restructured, smaller, private company.
History
Foundation
The establishment of Monsanto Electronic Materials Company (MEMC), a silicon wafer–manufacturing division to serve the emerging electronics industry, was announced on August 6, 1959, as an arm of the U.S.-based multinational corporation Monsanto. In February 1960 MEMC started production of 19mm silicon ingots at its location in St. Peters, Missouri, 30 miles west of Monsanto's headquarters in St. Louis. As one of the first companies to produce semiconductor wafers, MEMC was a pioneer in the field, and some of its innovations became industry standards into the 21st century. MEMC used the Czochralski process (CZ process) of silicon crystal production, and developed the Chemical Mechanical Polishing (CMP) process of wafer finishing. In 1966 MEMC installed its first reactors for the production of epitaxial wafers, and developed zero-dislocation crystal growing, which made large-diameter silicon crystals possible.
Expansion
In the early 1970s, MEMC opened a production plant in Kuala Lumpur, Malaysia, and sent its St. Peters–produced 2.25-inch ingots there for slicing and polishing. In 1979, MEMC became the first company to manufacture 125mm (5-inch) wafers; in 1981 the first to produce 150mm (6-inch) wafers; and, in partnership with IBM, in 1984 the first to produce 200mm (8-inch) wafers. In 1986 MEMC opened its production and R&D facility in Utsunomiya, Japan to serve the Japanese semiconductor market, becoming the first non-Japanese wafer maker with manufacturing and research facilities in Japan.
Change of ownership
MEMC experienced heavy price-pressure from Japanese competition during the mid 1980s. Despite its success and increasing revenues, MEMC had to account for losses for a few years, leading to the decision of Monsanto, which was refocusing on chemicals, agriculture, and biotechnology products, to sell the electronic materials division. In 1989 the German company Hüls AG, the chemicals arm of the German conglomerate VEBA, acquired Monsanto Electronic Materials and combined it with Hüls' previous acquisition Dynamite Nobel Silicon (DNS) to form MEMC Electronic Materials. DNS already operated silicon wafer plants in Merano and Novara, Italy and integrated them within the new MEMC Electronic Materials. Hüls supported the new subsidiary with $50 million for research and development and for manufacturing expansion. In 1991 MEMC developed the first process using granular polysilicon, which provided cost and productivity advantages over "chunk" polysilicon. Four years later MEMC acquired Albemarle Corporation's granular polysilicon production facility in Pasadena, Texas, which had been producing granular polysilicon since 1987.
MEMC's stock began trading on the New York Stock Exchange with an initial public offering in 1995. The IPO raised over $400 million, which went to finance an aggressive growth plan and repay some of the debt to its parent company, and Hüls/VEBA retained a majority interest in the company.
The cyclical downturn in the semiconductor business in the late 1990s hit MEMC hard. In 1998 the company reported a loss of $316 million on revenues of $759 million. In June 2000 VEBA AG, still holding 72% of MEMC, was merged with VIAG to form the new E.ON AG. E.ON wanted to focus on its core business of electric utilities, and assigned Merrill Lynch to sell MEMC. Merrill was unable to find a buyer until MEMC announced that it was on the verge of illiquidity in the middle of 2001. Finally in October 2001 E.ON was able to agree on a deal with the private equity company Texas Pacific Group (TPG), which purchased E.ON's stake in MEMC for a symbolic dollar and offered MEMC $150 million in credit lines. TPG restructured MEMC's debt, increased its stake in the company to 90%, and cut one third of its workforce.
MEMC returned to profitability following the appointment of Nabeel Gareeb, who was CEO of MEMC from April 2002 to November 2008. MEMC's market share rose again and by 2003 it reported positive earnings. MEMC's sales topped $1 billion in 2004, and it was number three in market share. Through a secondary offering, TPG reduced its share of MEMC in 2005 to 34%, and by the end of 2007, to zero. On October 30, 2008, Gareeb stepped down as president and CEO of the company.
Solar market entry
In 2006 MEMC announced its large-scale entry into the burgeoning solar wafer market, via long-term agreements to supply China-based Suntech Power and Taiwan-based Gintech Energy with solar-grade silicon wafers. Similar contracts followed with Germany-based Conergy in 2007, and Taiwan-based Tainergy Tech in 2008. The company cultivated short-term solar wafer customers as well. By 2007, MEMC held approximately 14% of the solar wafer market. Having returned MEMC to profitability and helped it enter the solar market, CEO Nabeel Gareeb resigned in November 2008. Ahmad Chatila was appointed president and CEO in February 2009.
In July 2009 MEMC and Q-Cells, which specialized in construction and operation of photovoltaic plants, formed a joint venture to build Strasskirchen Solar Park, a 50 MW photovoltaic plant in Bavaria, Germany, with MEMC supplying the solar wafers and Q-Cells converting them into solar cells. Each partner invested $100 million in return for 50% ownership of the project. As planned, the plant was sold to an investment firm, Nordcapital, after operations started at the beginning of 2010.
Acquisition of SunEdison
In November 2009 MEMC acquired the privately owned company SunEdison LLC, North America's largest solar energy services provider. Founded by Jigar Shah and Claire Broido Johnson, SunEdison had been developing, financing, building, operating, and monitoring large-scale photovoltaic plants for commercial customers, including many national retail outlets, government agencies, and utilities, since 2003. The company had pioneered solar-as-a-service and the solar power purchase agreement (PPA) for no-money-down customer financing. With the acquisition of SunEdison, MEMC became a developer of solar power projects and North America's largest solar energy services provider. CEO Ahmad Chatila announced that "MEMC will now participate in the actual development of solar power plants and commercialization of clean energy, in addition to supplying the solar and semiconductor industries with our traditional silicon wafer products." SunEdison was purchased for $200 million, 70% in cash and 30% in MEMC stock, plus retention payments, transaction expenses, and the assumption of net debt.
Following its acquisition of SunEdison, MEMC also began to focus on developing and acquiring advanced technologies used in the production of low-cost, high-performance solar panels. It acquired the California-based solar tech company Solaicx in mid 2010. The acquisition included Solaicx's high-volume proprietary "continuous crystal growth" manufacturing technology, which produces low-cost monocrystalline silicon ingots for high-efficiency solar cells.
In February 2011 Samsung Fine Chemicals and MEMC announced a 50/50 joint venture to build a polysilicon production plant in Ulsan, South Korea. The plant was to have an initial capacity of 10,000 metric tons per annum. As of late 2014 the joint venture, called SMP, is 85% owned by SunEdison (50% by SunEdison, Inc. and 35% by SunEdison Semiconductor) and 15% by Samsung, and the plant has a capacity of 13,500 metric tons per annum. By October 2014, the plant began producing the world's first high-pressure fluidized bed reactor (HP-FBR) polysilicon, enabling sizeable reductions in the cost of solar energy.
In 2011 MEMC also extended its solar-energy business. In June 2011, it acquired another North American solar-power project developer, Axio Power. Axio Power, founded in 2007, developed, financed, and constructed large-scale solar projects, and had more than 500 MW of utility-scale photovoltaic power projects in Canada and the western U.S. In July 2011, MEMC established a joint venture with Korea-based Jusung Engineering, to combine its proprietary Solaicx CCZ monocrystalline wafers with Jusung's high-efficiency cell manufacturing equipment to provide low-cost, high-efficiency solar cells. In September 2011, MEMC acquired Fotowatio Renewable Ventures Inc., the U.S. unit of Fotowatio Renewable Ventures, a developer, operator and owner of solar power plants. The FRV purchase added up to 1.4 GW of solar projects in the U.S. to MEMC's portfolio.
In December 2011 MEMC undertook restructuring measures in reaction to a cyclical downturn in its semiconductor business and a slump in the whole supply chain of photovoltaic modules. The company announced a headcount reduction of 1,300 employees (18% of the workforce), plus capacity reduction and productivity increase for polysilicon and wafers.
In 2012 MEMC developed its Silvantis line of multi-crystalline 290-watt solar modules. With 1,000-volt UL certification, the modules could be wired more efficiently, yielding considerable overall energy-production and system savings on solar projects.
Name changes and acquisitions 2013–15
On May 30, 2013, MEMC Electronic Materials changed its name to SunEdison, Inc., and also changed its stock-market ticker from "WFR" to "SUNE", reflecting the company's focus on solar energy.
In May 2014, SunEdison formally separated its electronics-wafer business from its solar-wafer and solar-energy business. SunEdison Semiconductor, Ltd. was spun off in an IPO on the NASDAQ under the ticker "SEMI", with SunEdison, Inc. maintaining a majority stake as the largest shareholder. The IPO generated $94 million, used to fund the company's growth.
In July 2014, SunEdison created a yieldco subsidiary, called TerraForm Power, Inc., with SunEdison, Inc. maintaining a majority stake as the largest shareholder. TerraForm began publicly trading in an IPO under the ticker "TERP". This IPO of the power-generation subsidiary spin-off raised roughly $500 million. SunEdison launched a second yieldco subsidiary, TerraForm Global, in 2015, to manage renewable-energy projects in emerging markets like Brazil, China, and India. This second yieldco trades on the NASDAQ under the ticker "GLBL".
In October 2014 SunEdison announced the development of "zero white space" solar modules, which eliminate wasted space on the solar module surface. That month it also announced the implementation of "high-pressure fluidized bed reactor" (HP-FBR) technology, producing high-purity polysilicon up to 10 times more efficiently and with 90% less energy used than non-FBR technologies.
In November 2014, with its subsidiary TerraForm Power, SunEdison purchased First Wind, one of the largest wind power developers in the United States, for $2.4 billion. The acquisition added wind energy to SunEdison's capacity, and made it the leading renewable energy development company in the world.
In 2015, MIT Technology Review named SunEdison #6, and the top energy company, in its annual "50 Smartest Companies" list. The review characterized SunEdison as "Aggressively expanding its renewable energy products and building a business to provide electricity to the developing world."
In June 2015, SunEdison, Inc. announced its full divestiture from its semiconductor business, the publicly traded company SunEdison Semiconductor. SunEdison Semiconductor was acquired by GlobalWafers Co., Ltd on December 2, 2016 and subsequently had its name changed back to MEMC LLC. The completion of the sell-off finalized SunEdison's transition into a dedicated renewable-energy company.
Bankruptcy
Following years of major expansion and the 2015 announcement of its intent – which eventually fell through – to acquire the residential-rooftop solar company Vivint Solar, SunEdison's stock plummeted and its more than $11 billion in debt pushed it toward bankruptcy in April 2016. It filed for Chapter 11 bankruptcy protection on April 21, 2016. To continue operating and pay staff during the bankruptcy, the company received $300 million in debt financing from its first-lien and second-lien lenders, approved by the bankruptcy court.
When it filed for bankruptcy, the company asked the court for an independent examiner to audit the company's recent financial transactions. SunEdison requested that the examiner's work start immediately and finish within 60 days, and that the maximum budget be $1 million. Reuters noted that, comparatively, the 2015 independent examination in the bankruptcy of Caesars Entertainment Corp. took one year and cost $40 million.
During the summer of 2015, SunEdison was worth almost $10 billion, and in July 2015 shares traded upward of $33.44. On the day of the bankruptcy filing, the company's trading price on the New York Stock Exchange was 34 cents per share.
According to the Wall Street Journal: "SunEdison used a combination of financial engineering and cheap debt to buy up renewable-power projects around the world before the market turned sour last summer and investors soured on its business model." During the three years preceding the bankruptcy filing, SunEdison invested $18 billion in acquisitions. During that time, the company also raised $24 billion in debt and equity.
In March 2016, the U.S. Department of Justice began an investigation into the company regarding its financial practices. Internally, SunEdison's board completed its own investigation, concluding that the company's leaders were "overly optimistic" but did not make "material misstatements" or commit any fraud.
In July 2017, the U.S. Bankruptcy Court approved SunEdison's bankruptcy-exit plan and it eventually emerged from bankruptcy December 29, 2017.
Brookfield Asset Management acquired a 51% ownership share of Terraform Power in October 2017 along with full acquisition of Terraform Global in December 2017. Brookfield fully acquired Terraform Power in July 2020.
SunEdison companies
SunEdison, Inc.
SunEdison's solar materials group produces granular polysilicon, silicon-crystal ingots, and silicon wafers, and specifies the production of solar cells and solar modules. It produces granular polysilicon in purities usable in the solar and semiconductor industries. The granular polysilicon is produced in Pasadena, Texas, and, through a joint venture with Samsung and SunEdison Semiconductor, in Ulsan, Korea.
SunEdison's solar power group plans, designs, develops, finances, underwrites, builds, installs, operates, monitors, and maintains large-scale solar energy and wind energy systems and plants for commercial customers, including numerous national retail outlets, shopping centers, businesses and corporations; government agencies and other public-sector customers; and utilities and other power companies. Through an extensive dealer network, it also provides complete solar systems and services for residential homeowners.
References
External links
SunEdison
TerraForm
Solar energy companies of the United States
Electric power companies of the United States
Renewable energy companies of the United States
Wind power companies of the United States
Photovoltaics manufacturers
Silicon wafer producers
Energy companies established in 1959
Renewable resource companies established in 1959
1959 establishments in Missouri
Energy companies established in 2003
Renewable resource companies established in 2003
2003 establishments in California
Companies that filed for Chapter 11 bankruptcy in 2016
Companies based in San Mateo County, California
Solar energy in California
Technology companies based in the San Francisco Bay Area
Monsanto
Manufacturing companies based in Missouri
Companies based in St. Louis County, Missouri
Energy in the San Francisco Bay Area
American companies established in 2003
American companies established in 1959 | SunEdison | Engineering | 3,645 |
14,769,433 | https://en.wikipedia.org/wiki/SKIL | Ski-like protein is a protein that in humans is encoded by the SKIL gene.
Interactions
SKIL interacts with SKI protein, Mothers against decapentaplegic homolog 3 and Mothers against decapentaplegic homolog 2.
Protein Family
SKIL belongs to the Ski/Sno/Dac family, which also includes the SKI protein, Dachshund, and SKIDA1. Members of the Ski/Sno/Dac family share a domain that is roughly 100 amino acids long.
References
Further reading
Proteins
Genes on human chromosome 3 | SKIL | Chemistry | 113 |
3,661,969 | https://en.wikipedia.org/wiki/Jean-Yves%20B%C3%A9ziau | Jean-Yves Beziau (born January 15, 1965, in Orléans, France) is a Swiss professor of logic at the University of Brazil, Rio de Janeiro, and a researcher of the Brazilian Research Council. He is a permanent member and former president of the Brazilian Academy of Philosophy. Before going to Brazil, he held a Swiss National Science Foundation professorship at the University of Neuchâtel in Switzerland and was a researcher at Stanford University, working with Patrick Suppes.
Career
Béziau works in the field of logic—in particular, paraconsistent logic, the square of opposition and universal logic. He holds a Maîtrise in Philosophy from Pantheon-Sorbonne University, a DEA in Philosophy from Pantheon-Sorbonne University, a PhD in Philosophy from the University of São Paulo, a MSc and a PhD in Logic and Foundations of Computer Science from Paris Diderot University.
Béziau is the editor-in-chief of the journal Logica Universalis and of the South American Journal of Logic—an online, open-access journal—as well as of the Springer book series Studies in Universal Logic. He is also the editor of the College Publications book series Logic PhDs.
He has launched four major international series of events: UNILOG (World Congress and School on Universal Logic), SQUARE (World Congress on the Square of Opposition), WOCOLOR (World Congress on Logic and Religion), LIQ (Logic in Question).
Béziau created the World Logic Day (January 14).
Selected publications
"What is paraconsistent logic?" In D. Batens et al. (eds.), Frontiers of Paraconsistent Logic, Research Studies Press, Baldock, 2000, pp. 95–111.
Handbook of Paraconsistency (ed. with Walter Carnielli and Dov Gabbay). London: College Publication, 2007.
"Semantic computation of truth based on associations already learned" (with Patrick Suppes), Journal of Applied Logic, 2 (2004), pp. 457–467.
"From paraconsistent logic to universal logic", Sorites, 12 (2001), pp. 5–32.
Logica Universalis: Towards a General Theory of Logic (ed.) Basel: Birkhäuser Verlag, 2005, Second Edition 2007.
"Logic is not logic", 'Abstracta' 6 (2010), pp. 73–102.
"The power of the hexagon", Logica Universalis, 6 (2012), pp. 1–43.
"History of truth-values", in D.M.Gabbay and J.Woods (eds) Handbook of the History of Logic, Vol. 11 - Logic: a history of its central concepts, Elsevier, Amsterdam, 2012, pp. 233–305
The Square of Opposition: a General Framework for Cognition (ed. with Gillman Payette). Bern: Peter Lang, 2012.
"The new rising of the square of opposition", in J.-Y.Béziau and D.Jacquette (eds), Around and Beyond the Square of Opposition, Birkhäuser, Basel, 2012, pp. 6–24.
La pointure du symbole (ed.) Paris: Petra, 2014.
"The relativity and universality of logic", Synthese, 192 (2015), pp 1939–1954.
"Logical Autobiography 50", in A.Koslow and A.Buchsbaum (eds), The Road to Universal Logic, Vol.2, Birkhäuser, Basel, 2015, pp. 19–104.
"MANY 1 - A Transversal Imaginative Journey across the Realm of Mathematics", in M.Chakraborty and M.Friend (eds), Special Issue on Mathematical Pluralism of the Journal of Indian Council of Philosophical Research, May 2017, Volume 34, Issue 2, pp 259–287.
"Being aware of rational animals", in G.Dodig-Crnkovic and R.Giovagnoli (eds), Representation and Reality: Humans, Animals and Machines, Springer International Publishing, Cham, 2017, pp. 319–331.
"A Chromatic Hexagon of Psychic Dispositions", in M.Silva (ed), How Colours Matter to Philosophy, Springer International Publishing, Cham, 2017, pp. 273–388.
"An Analogical Hexagon", International Journal of Approximate Reasoning, 94 (2018), pp. 1–17.
"The Pyramid of Meaning", in J.Ceuppens, H.Smessaert, J. van Craenenbroeck and G.Vanden Wyngaerd (eds), A Coat of Many Colours - D60, Brussels, 2018.
"An unexpected feature of classical propositional logic in the Tractatus", in G.Mras, P.Weingartner and B.Ritter (eds), Philosophy of Logic and Mathematics: Proceedings of the 41st International Ludwig Wittgenstein Symposium, De Gruyter, Berlin, Munich, Boston, 2019.
"Is God Paraconsistent?" (with Newton da Costa) in Beyond Faith and Rationality Essays on Logic, Religion and Philosophy, Springer International Publishing, Cham, 2020, pp. 321–333
"Metalogic, Schopenhauer and Universal Logic", in J.Lemanski (ed), Language, Logic, and Mathematics in Schopenhauer, Birkhäuser, Basel, 2020, pp. 207–257.
"The Mystery of the Fifth Logical Notion (Alice in the Wonderful Land of Logical Notions)", Studia Humana, Volume 9:3/4 (2020), pp. 19–36.
References
External links
Jean-Yves Beziau's personal homepage
1965 births
Mathematical logicians
Living people
Logicians
Scientists from Orléans
Paraconsistent logic
Academic staff of the University of Neuchâtel | Jean-Yves Béziau | Mathematics | 1,230 |
1,045,142 | https://en.wikipedia.org/wiki/Debris | Debris is rubble, wreckage, ruins, litter and discarded garbage/refuse/trash, scattered remains of something destroyed, or, as in geology, large rock fragments left by a melting glacier, etc. Depending on context, debris can refer to a number of different things. The first apparent use of the French word in English is in a 1701 description of the army of Prince Rupert upon its retreat from a battle with the army of Oliver Cromwell, in England.
Disaster
In disaster scenarios, tornadoes leave behind large pieces of houses and widespread destruction. Debris is also carried aloft while a tornado is in progress: the winds capture the debris they kick up and spin it within the vortex, whose wind radius is larger than the funnel itself. Tsunamis and hurricanes, such as Hurricane Katrina in 2005 and Hurricane Sandy in 2012, also generate large amounts of debris, and earthquakes can reduce cities to rubble.
Geological
In geology, debris usually applies to the remains of geological activity including landslides, volcanic explosions, avalanches, mudflows or Glacial lake outburst floods (Jökulhlaups) and moraine, lahars, and lava eruptions. Geological debris sometimes moves in a stream called a debris flow. When it accumulates at the base of hillsides, it can be called "talus" or "scree".
In mining, debris called attle usually consists of rock fragments which contain little or no ore.
Marine
Marine debris applies to floating garbage such as bottles, cans, styrofoam, cruise ship waste, offshore oil and gas exploration and production facilities pollution, and fishing paraphernalia from professional and recreational boaters. Marine debris is also called litter or flotsam and jetsam. Objects that can constitute marine debris include used automobile tires, detergent bottles, medical wastes, discarded fishing line and nets, soda cans, and bilge waste solids.
In addition to being unsightly, it can pose a serious threat to marine life, boats, swimmers, divers, and others. For example, each year millions of seabirds, sea turtles, fish, and marine mammals become entangled in marine debris, or ingest plastics which they have mistaken for food. As many as 30,000 northern fur seals per year get caught in abandoned fishing nets and either drown or suffocate. Whales mistake plastic bags for squid, and birds may mistake plastic pellets for fish eggs. At other times, animals accidentally eat the plastic while feeding on natural food.
The largest concentration of marine debris is the Great Pacific Garbage Patch.
Marine debris most commonly originates from land-based sources. Various international agencies are currently working to reduce marine debris levels around the world.
Meteorological
In meteorology, debris usually applies to the remains of human habitation and natural flora after storm related destruction. This debris is also commonly referred to as storm debris. Storm debris commonly consists of roofing material, downed tree limbs, downed signs, downed power lines and poles, and wind-blown garbage. Storm debris can become a serious problem immediately after a storm, in that it often blocks access to individuals and communities that may require emergency services. This material frequently exists in such large quantities that disposing of it becomes a serious issue for a community. In addition, storm debris is often hazardous by its very nature, since, for example, downed power lines annually account for storm-related deaths.
Space
Space debris usually refers to the remains of spacecraft that have either fallen to Earth or are still orbiting Earth. Space debris may also consist of natural components such as chunks of rock and ice. The problem of space debris has grown as various space programs have left legacies of launches, explosions, repairs, and discards in both low Earth orbit and more remote orbits. These orbiting fragments have reached a great enough proportion to constitute a hazard to future space launches of both satellite and crewed vehicles. Various government agencies and international organizations are beginning to track space debris and also research possible solutions to the problem. While many of these items, ranging in size from nuts and bolts to entire satellites and spacecraft, may fall to Earth, other items located in more remote orbits may stay aloft for centuries. The velocity of some of these pieces of space junk have been clocked in excess of 17,000 miles per hour (27,000 km/h). A piece of space debris falling to Earth leaves a fiery trail, just like a meteor.
A debris disk is a circumstellar disk of dust and debris in orbit around a star.
Surgical
In medicine, debris usually refers to biological matter that has accumulated or lodged in surgical instruments and is referred to as surgical debris. The presence of surgical debris can result in cross-infections or nosocomial infections if not removed and the affected surgical instruments or equipment properly disinfected.
War
In the aftermath of a war, large areas of the region of conflict are often strewn with war debris in the form of abandoned or destroyed hardware and vehicles, mines, unexploded ordnance, bullet casings and other fragments of metal.
Much war debris has the potential to be lethal and continues to kill and maim civilian populations for years after the end of a conflict. The risks from war debris may be sufficiently high to prevent or delay the return of refugees. In addition war debris may contain hazardous chemicals or radioactive components that can contaminate the land or poison civilians who come into contact with it. Many Mine clearance agencies are also involved in the clearance of war debris.
Land mines in particular are very dangerous as they can remain active for decades after a conflict, which is why they have been banned by international war regulations.
In November 2006 the Protocol on Explosive Remnants of War
came into effect with 92 countries subscribing to the treaty that requires the parties involved in a conflict to assist with the removal of unexploded ordnance following the end of hostilities.
Some of the countries most affected by war debris are Afghanistan, Angola, Cambodia, Iraq and Laos.
Similarly military debris may be found in and around firing range and military training areas.
Debris can also be used as cover for military purposes, depending on the situation.
Culinary
In South Louisiana's Creole and Cajun cultures, debris (pronounced "DAY-bree") refers to chopped organs such as liver, heart, kidneys, tripe, spleen, brain, lungs and pancreas.
See also
Debris fallout
Woody debris
References
External links
United States Geological Survey: Debris Flows, Mudflows, Jökulhlaups, and Lahars
Matter
Pollution | Debris | Physics | 1,333 |
357,027 | https://en.wikipedia.org/wiki/Automated%20analyser | An automated analyser is a medical laboratory instrument designed to measure various substances and other characteristics in a number of biological samples quickly, with minimal human assistance. These measured properties of blood and other fluids may be useful in the diagnosis of disease.
Photometry is the most common method for testing the amount of a specific analyte in a sample. In this technique, the sample undergoes a reaction to produce a color change. Then, a photometer measures the absorbance of the sample to indirectly measure the concentration of analyte present in the sample. The use of an ion-selective electrode (ISE) is another common analytical method that specifically measures ion concentrations. This typically measures the concentrations of sodium, calcium or potassium present in the sample.
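As an illustration of the underlying arithmetic (not of any particular analyser's software), the colour-based measurement can be reduced to the Beer–Lambert relation, concentration = absorbance / (molar absorptivity × path length); in practice analysers are calibrated against standards of known concentration. The numbers below are assumed example values only.

```python
# Sketch of the absorbance-to-concentration step used in photometric testing.
# epsilon (molar absorptivity) and the cuvette path length are assumed values.
def concentration_from_absorbance(absorbance, epsilon, path_length_cm=1.0):
    """Beer-Lambert law: c = A / (epsilon * l), returning mol/L."""
    return absorbance / (epsilon * path_length_cm)

# Example: an absorbance reading of 0.42 with an assumed epsilon of 6220 L/(mol*cm)
print(concentration_from_absorbance(0.42, 6220.0))  # ~6.8e-05 mol/L
```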
There are various methods of introducing samples into the analyser. Test tubes of samples are often loaded into racks. These racks can be inserted directly into some analysers or, in larger labs, moved along an automated track. More manual methods include inserting tubes directly into circular carousels that rotate to make the sample available. Some analysers require samples to be transferred to sample cups. However, the need to protect the health and safety of laboratory staff has prompted many manufacturers to develop analysers that feature closed tube sampling, preventing workers from direct exposure to samples. Samples can be processed singly, in batches, or continuously.
The automation of laboratory testing does not remove the need for human expertise (results must still be evaluated by medical technologists and other qualified clinical laboratory professionals), but it does ease concerns about error reduction, staffing concerns, and safety.
Routine biochemistry analysers
These are machines that process a large portion of the samples going into a hospital or private medical laboratory. Automation of the testing process has reduced testing time for many analytes from days to minutes. The history of discrete sample analysis for the clinical laboratory began with the introduction of the "Robot Chemist" invented by Hans Baruch and introduced commercially in 1959[1].
The AutoAnalyzer is an early example of an automated chemistry analyzer using a special flow technique named "continuous flow analysis (CFA)", invented in 1957 by Leonard Skeggs, PhD and first made by the Technicon Corporation. The first applications were for clinical (medical) analysis. The AutoAnalyzer profoundly changed the character of the chemical testing laboratory by allowing significant increases in the numbers of samples that could be processed. Samples used in the analyzers include, but are not limited to, blood, serum, plasma, urine, cerebrospinal fluid, and other fluids from within the body. The design based on separating a continuously flowing stream with air bubbles largely reduced slow, clumsy, and error-prone manual methods of analysis. The types of tests include enzyme levels (such as many of the liver function tests), ion levels (e.g. sodium and potassium, and other tell-tale chemicals (such as glucose, serum albumin, or creatinine).
Simple ions are often measured with ion selective electrodes, which let one type of ion through, and measure voltage differences. Enzymes may be measured by the rate they change one coloured substance to another; in these tests, the results for enzymes are given as an activity, not as a concentration of the enzyme. Other tests use colorimetric changes to determine the concentration of the chemical in question. Turbidity may also be measured.
Immuno-based analysers
Antibodies are used by some analysers to detect many substances by immunoassay and other reactions that employ the use of antibody-antigen reactions.
When concentration of these compounds is too low to cause a measurable increase in turbidity when bound to antibody, more specialised methods must be used.
Recent developments include automation for the immunohaematology lab, also known as transfusion medicine.
Hematology analysers
These are used to perform complete blood counts, erythrocyte sedimentation rates (ESRs), or coagulation tests.
Cell counters
Automated cell counters sample the blood, and quantify, classify, and describe cell populations using both electrical and optical techniques.
Electrical analysis involves passing a dilute solution of the blood through an aperture across which an electrical current is flowing. The passage of cells through the current changes the impedance between the terminals (the Coulter principle). A lytic reagent is added to the blood solution to selectively lyse the red cells (RBCs), leaving only white cells (WBCs), and platelets intact. Then the solution is passed through a second detector. This allows the counts of RBCs, WBCs, and platelets to be obtained. The platelet count is easily separated from the WBC count by the smaller impedance spikes they produce in the detector due to their lower cell volumes.
Optical detection may be utilised to gain a differential count of the populations of white cell types. A dilute suspension of cells is passed through a flow cell, which passes cells one at a time through a capillary tube past a laser beam. The reflectance, transmission and scattering of light from each cell is analysed by sophisticated software giving a numerical representation of the likely overall distribution of cell populations.
Some of the latest hematology instruments may report Cell Population Data that consist in Leukocyte morphological information that may be used for flagging Cell abnormalities that trigger the suspect of some diseases.
Reticulocyte counts can now be performed by many analysers, giving an alternative to time-consuming manual counts. Many automated reticulocyte counts, like their manual counterparts, employ the use of a supravital dye such as new methylene blue to stain the red cells containing reticulin prior to counting. Some analysers have a modular slide maker which is able to both produce a blood film of consistent quality and stain the film, which is then reviewed by a medical laboratory professional.
Coagulometers
Automated coagulation machines or Coagulometers measure the ability of blood to clot by performing any of several types of tests including Partial thromboplastin times, Prothrombin times (and the calculated INRs commonly used for therapeutic evaluation), Lupus anticoagulant screens, D dimer assays, and factor assays.
Coagulometers require blood samples that have been drawn in tubes containing sodium citrate as an anticoagulant. These are used because the mechanism behind the anticoagulant effect of sodium citrate is reversible. Depending on the test, different substances can be added to the blood plasma to trigger a clotting reaction. The progress of clotting may be monitored optically by measuring the absorbance of a particular wavelength of light by the sample and how it changes over time.
Other hematology apparatus
Automatic erythrocyte sedimentation rate (ESR) readers, while not strictly analysers, should preferably comply with the CLSI (Clinical and Laboratory Standards Institute) document "Procedures for the Erythrocyte Sedimentation Rate Test" (H02-A5, published 2011) and with the ICSH (International Council for Standardization in Haematology) "ICSH review of the measurement of the erythrocyte sedimentation rate". Both indicate Westergren as the only reference method, explicitly requiring the use of blood diluted with sodium citrate in 200 mm pipettes with a bore of 2.55 mm. After 30 or 60 minutes in a vertical position, with no draughts, vibration or direct sunlight allowed, an optical reader determines how far the red cells have fallen by detecting their level.
Miscellaneous analysers
Some tests and test categories are unique in their mechanism or scope, and require a separate analyser for only a few tests, or even for only one test. Other tests are esoteric in nature—they are performed less frequently than other tests, and are generally more expensive and time-consuming to perform. Even so, the current shortage of qualified clinical laboratory professionals has spurred manufacturers to develop automated systems for even these rarely performed tests.
Analysers that fall into this category include instruments that perform:
DNA labeling and detection
Osmolarity and osmolality measurement
Measurement of glycated haemoglobin (haemoglobin A1C), and
Aliquotting and routing of samples throughout the laboratory
See also
Comprehensive metabolic panel
Medical technologist
Notes
1. Rosenfeld, Louis. Four Centuries of Clinical Chemistry. Gordon and Breach Science Publishers, 1999. pp. 490–492.
References
Laboratory equipment
Measuring instruments
Clinical pathology
Drugs developed by Hoffmann-La Roche
Articles containing video clips | Automated analyser | Technology,Engineering | 1,779 |
11,160,058 | https://en.wikipedia.org/wiki/Welsh%20numerals | The traditional counting system used in the Welsh language is vigesimal, i.e. based on twenties where numbers from 11 to 14 are "x on ten", 16–19 are "x on fifteen" (though 18 is more usually "two nines"); numbers from 21 to 39 are "1–19 on twenty", 40 is "two twenty", 60 is "three twenty", etc.
There is also a decimal counting system, where numbers are "x ten y" unit(s), e.g. thirty-five (35) in decimal is tri deg pump (three ten five) while in vigesimal it is pymtheg ar hugain (fifteen – itself "five-ten" – on twenty).
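The two readings can be illustrated with a short sketch that prints the structural glosses described above. English glosses are used in place of the Welsh words, and only the range 1–39 of the traditional system is handled, so the sketch encodes nothing beyond what this article states.

```python
# Builds the English gloss of the traditional (vigesimal) and decimal readings
# described above, for numbers 1-39. Welsh vocabulary itself is omitted.
UNITS = ["zero", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]

def traditional_gloss(n):
    def ones_to_nineteen(m):
        if m <= 10:
            return UNITS[m]
        if m <= 14:
            return f"{UNITS[m - 10]} on ten"
        if m == 15:
            return "fifteen"
        if m == 18:
            return "two nines"      # the usual irregular form for 18
        return f"{UNITS[m - 15]} on fifteen"
    if n <= 19:
        return ones_to_nineteen(n)
    if n == 20:
        return "twenty"
    return f"{ones_to_nineteen(n - 20)} on twenty"   # 21-39

def decimal_gloss(n):
    tens, units = divmod(n, 10)
    parts = ([f"{UNITS[tens]} ten"] if tens else []) + ([UNITS[units]] if units else [])
    return " ".join(parts)

print(traditional_gloss(35))  # fifteen on twenty
print(decimal_gloss(35))      # three ten five
```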
Numerals
Variation in form
There is some syntactically and phonologically triggered variation in the form of numerals. There are, for example, masculine and feminine forms of the numbers "two" (dau and dwy), "three" (tri and tair) and "four" (pedwar and pedair), which must agree with the grammatical gender of the objects being counted. The numerals for "five", "six" and "hundred" (pump, chwech and cant) also have reduced forms (pum, chwe, can) when they precede the object they are counting. The words for "ten", "twelve", and "fifteen" (deg, deuddeg, pymtheg) have the alternative forms deng, deuddeng, pymtheng used before nasals (which may be the result of mutation) and, occasionally, vowels; these forms are becoming less common. Numerals change as expected according to normal rules of consonant mutation; some also trigger mutation in some following words (see below for details).
Use of the decimal system
The decimal system is widely used, but is rather uncommon for dates and ages. Larger numbers, however, tend to be expressed in this system, e.g. 1,965 is mil, naw cant chwe deg pump. In referring to years, on the other hand, the number of thousands is stated, followed by the individual digits, e.g. 1965 is mil naw chwe pump. This system appears to have broken down for years after 2000, e.g. whereas 1905 is mil naw dim pump, 2005 is dwy fil a phump.
The Welsh decimal counting system was devised by 19th-century Patagonian Welsh businessmen in Argentina for accountancy purposes. It was recommended to teachers for use in the first Welsh language schools in Patagonia by Richard Jones Berwyn in a book published in 1878. The system was later adopted in Wales in the late 1940s with the beginning of Welsh-medium education.
Use with nouns
The singular form of the noun is used with numbers, but for larger numbers an alternative form is permitted, where o ("of") with the plural noun follows the number. Except where using this plural form, the noun is placed directly after the number but before any parts of the number that are added using ar ("on") in the traditional system.
Nouns are also mutated following many numbers. Un triggers the soft mutation of feminine nouns, other than those beginning with "ll" and "rh", but not masculine nouns. Dau and dwy both trigger the soft mutation (ll and rh included). Tri (but not tair) and chwe trigger the aspirate mutation. Several higher numbers (such as pum, saith, wyth, naw and deng) trigger the nasal mutation when used with blynedd ("year(s)"). The part of the number immediately preceding the noun will determine any mutation of the noun. In the plural form with o, the soft mutation is used as is normal after o.
Notes
numerals
Numerals | Welsh numerals | Mathematics | 724 |
1,435,243 | https://en.wikipedia.org/wiki/Cocaethylene | Cocaethylene (ethylbenzoylecgonine) is the ethyl ester of benzoylecgonine. It is structurally similar to cocaine, which is the methyl ester of benzoylecgonine. Cocaethylene is formed by the liver when cocaine and ethanol coexist in the blood. In 1885, cocaethylene was first synthesized (according to edition 13 of the Merck Index), and in 1979, cocaethylene's side effects were discovered.
Metabolic production from cocaine
Cocaethylene is the byproduct of concurrent consumption of alcohol and cocaine as metabolized by the liver. Normally, metabolism of cocaine produces two primarily biologically inactive metabolites—benzoylecgonine and ecgonine methyl ester. The hepatic enzyme carboxylesterase is an important part of cocaine's metabolism because it acts as a catalyst for the hydrolysis of cocaine in the liver, which produces these inactive metabolites. If ethanol is present during the metabolism of cocaine, a portion of the cocaine undergoes transesterification with ethanol, rather than undergoing hydrolysis with water, which results in the production of cocaethylene.
cocaine + H2O → benzoylecgonine + methanol (with liver carboxylesterase 1)
benzoylecgonine + ethanol → cocaethylene + H2O
cocaine + ethanol → cocaethylene + methanol (with liver carboxylesterase 1)
Physiological effects
Cocaethylene is largely considered a recreational drug in and of itself, with stimulant, euphoriant, anorectic, sympathomimetic, and local anesthetic properties. The monoamine neurotransmitters serotonin, norepinephrine, and dopamine play important roles in cocaethylene's action in the brain. Cocaethylene increases the levels of serotonergic, noradrenergic, and dopaminergic neurotransmission in the brain by inhibiting the action of the serotonin transporter, norepinephrine transporter, and dopamine transporter. These pharmacological properties make cocaethylene a serotonin-norepinephrine-dopamine reuptake inhibitor (SNDRI; also known as a "triple reuptake inhibitor").
In most users, cocaethylene produces euphoria and has a longer duration of action than cocaine. Some studies suggest that consuming alcohol in combination with cocaine may be more cardiotoxic than cocaine and "it also carries an 18 to 25 fold increase over cocaine alone in risk of immediate death". Cocaethylene has a higher affinity for the dopamine transporter than does cocaine, but has a lower affinity for the serotonin and norepinephrine transporters.
A 2000 study by Hart et al. on the effects of intravenous cocaethylene in humans found that "cocaethylene has pharmacological properties in common with cocaine, but is less potent," consistent with prior research.
See also
Ethylphenidate
Euphoriants
Methylvanillylecgonine
Local anesthetics
RTI-160
Stimulants
Tropanes
Vin Mariani
Pemberton's French Wine Coca
References
Further reading
Benzoate esters
Carboxylate esters
Tropanes
Euphoriants
Stimulants
Local anesthetics
Sympathomimetics
Cocaine
Serotonin–norepinephrine–dopamine reuptake inhibitors
Human drug metabolites
Recreational drug metabolites | Cocaethylene | Chemistry | 765 |
1,031,810 | https://en.wikipedia.org/wiki/MSISDN | MSISDN is a number uniquely identifying a subscription in a Global System for Mobile communications or a Universal Mobile Telecommunications System mobile network. It is the mapping of the telephone number to the subscriber identity module in a mobile or cellular phone. This abbreviation has several interpretations, the most common one being "Mobile Station International Subscriber Directory Number".
The MSISDN and international mobile subscriber identity (IMSI) are two important numbers used for identifying a mobile subscriber. The IMSI is stored in the SIM (the card inserted into the mobile phone), and uniquely identifies the mobile station, its home wireless network, and the home country of the home wireless network. The MSISDN is used for routing calls to the subscriber. The IMSI is often used as a key in the home location register ("subscriber database") and the MSISDN is the number normally dialed to connect a call to the mobile phone. A SIM has a unique IMSI that does not change, while the MSISDN can change in time, i.e. different MSISDNs can be associated with the SIM.
The MSISDN follows the numbering plan defined in the International Telecommunication Standard Sector recommendation E.164.
Abbreviation
Depending on source or standardization body, the abbreviation MSISDN can be written out in several different ways. These are today the most widespread and common in use.
MSISDN format
The ITU-T recommendation E.164 limits the maximum length of an MSISDN to 15 digits. 1-3 digits are reserved for country code. Prefixes are not included (e.g., 00 prefixes an international MSISDN when dialing from Sweden). Minimum length of the MSISDN is not specified by ITU-T but is instead specified in the national numbering plans by the telecommunications regulator in each country.
In GSM and its variant DCS 1800, MSISDN is built up as
MSISDN = CC + NDC + SN
CC = Country Code
NDC = National Destination Code, identifies one or part of a PLMN
SN = Subscriber Number
In the GSM variant PCS 1900, MSISDN is built up as
MSISDN = CC + NPA + SN
CC = Country Code
NPA = Number Planning Area
SN = Subscriber Number
The country code identifies a country or geographical area, and may be between 1 and 3 digits long. The ITU defines and maintains the list of assigned country codes.
Example
Example Number: +880 15 00121121 (Teletalk Hotline Number)
It has the following structure:
MSISDN = 8801500121121
MSISDN = CCC XX N1N2N3N4N5N6N7N8
(CC = 880, NDC = 15, SN = 00121121)
For further information on the MSISDN format, see the ITU-T specification E.164.
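As an illustration of the CC + NDC + SN split, the following sketch parses the example number above. The small country-code table and the fixed two-digit NDC are assumptions made for this single example; real parsing requires the full ITU-T E.164 country-code list and each country's national numbering plan.

```python
# Illustrative only: split an MSISDN into CC, NDC and SN.
COUNTRY_CODES = {"880", "46", "1"}          # assumed subset of E.164 country codes

def split_msisdn(number, ndc_length=2):
    digits = number.lstrip("+")
    if not digits.isdigit() or len(digits) > 15:   # E.164 limits MSISDNs to 15 digits
        raise ValueError("not a valid E.164 number")
    for size in (3, 2, 1):                          # try the longest country code first
        if digits[:size] in COUNTRY_CODES:
            cc, rest = digits[:size], digits[size:]
            return cc, rest[:ndc_length], rest[ndc_length:]
    raise ValueError("unknown country code")

print(split_msisdn("+8801500121121"))   # ('880', '15', '00121121')
```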
See also
E.164
International Mobile Equipment Identity (IMEI)
International Mobile Subscriber Identity (IMSI)
SIM card
Mobile phone
GSM
HLR
E.214
Mobile identification number
References
External links
http://www.3gpp.org, GSM 03.03 (see section 3.3)
http://www.openmobilealliance.org
http://www.itu.int, E.164, E.212, E.213, E.214
https://web.archive.org/web/20110222090438/http://www.gsmworld.com/
ITU-T recommendations
Telephone numbers
Identifiers | MSISDN | Mathematics | 754 |
44,934,944 | https://en.wikipedia.org/wiki/Rampart%20%28fortification%29 | In fortification architecture, a rampart is a length of embankment or wall forming part of the defensive boundary of a castle, hillfort, settlement or other fortified site. It is usually broad-topped and made of excavated earth and/or masonry.
Types
The composition and design of ramparts varied from the simple mounds of earth and stone, known as dump ramparts, to more complex earth and timber defences (box ramparts and timberlaced ramparts), as well as ramparts with stone revetments. One particular type, common in Central Europe, used earth, stone and timber posts to form a Pfostenschlitzmauer or "post-slot wall". Vitrified ramparts were composed of stone that was subsequently fired, possibly to increase its strength.
Early fortifications
Many types of early fortification, from prehistory through to the Early Middle Ages, employed earth ramparts usually in combination with external ditches to defend the outer perimeter of a fortified site or settlement. Hillforts, ringforts or "raths" and ringworks all made use of ditch and rampart defences, and they are the characteristic feature of circular ramparts. The ramparts could be reinforced and raised in height by the use of palisades. This type of arrangement was a feature of the motte and bailey castle of northern Europe in the early medieval period.
Classical fortifications
During the classical era, societies became sophisticated enough to create tall ramparts of stone or brick, provided with a platform or wall walk for the defenders to hurl missiles from and a parapet to protect them from the missiles thrown by attackers. Well known examples of classical stone ramparts include Hadrian's Wall and the Walls of Constantinople.
Medieval fortifications
After the fall of the Western Roman Empire, there was a return to the widespread use of earthwork ramparts which lasted well into the 11th century, an example is the Norman motte and bailey castle. As castle technology evolved during the Middle Ages and Early Modern times, ramparts continued to form part of the defences, but now they tended to consist of thick walls with crenellated parapets. Fieldworks, however, continued to make use of earth ramparts due to their relatively temporary nature.
Elements of a rampart in a stone castle or town wall from the 11th to 15th centuries included:
Parapet: a low wall on top of the rampart to shelter the defenders.
Crenellation: rectangular gaps or indentations at intervals in the parapet, the gaps being called embrasures or crenels, and the intervening high parts being called merlons.
Loophole or arrowslit: a narrow opening in a parapet or in the main body of the rampart, allowing defenders to shoot out without exposing themselves to the enemy.
Chemin de ronde or wallwalk: a pathway along the top of the rampart but behind the parapet, which served as a fighting platform and a means of communication with other parts of the fortification.
Machicolation: an overhanging projection supported by corbels, the floor of which was pierced with openings so that missiles and hot liquids could be thrown down on attackers.
Brattice: a timber gallery built on top of the rampart and projecting forward from the parapet, to give the defenders a better field of fire.
Artillery fortifications
In response to the introduction of artillery, castle ramparts began to be built with much thicker walling and a lower profile, one of the earliest examples being Ravenscraig Castle in Scotland, which was built in 1460. In the first half of the 16th century, the solid masonry walls began to be replaced by earthen banks, sometimes faced with stone, which were better able to withstand the impact of shot; the earth being obtained from the ditch which was dug in front of the rampart. At the same time, the plan or "trace" of these ramparts began to be formed into angular projections called bastions which allowed the guns mounted on them to create zones of interlocking fire. This bastion system became known as the trace italienne because Italian engineers had been at the forefront of its development, although it was later perfected in northern Europe by engineers such as Van Coehoorn and Vauban and was the dominant style of fortification until the mid-19th century.
Elements of a rampart in an artillery fortification from the 16th to 19th centuries included:
Exterior slope: the front face of the rampart, often faced with stone or brick.
Interior slope: the back of the rampart on the inside of the fortification; sometimes retained with a masonry wall but usually a grassy slope.
Parapet (or breastwork) which protected and concealed the defending soldiers.
Banquette: a continuous step built onto the interior of the parapet, enabling the defenders to shoot over the top with small arms.
Barbette: a raised platform for one or more guns enabling them to fire over the parapet.
Embrasure: an opening in the parapet for guns to fire through.
Terreplein: the top surface or "fighting platform" of the rampart, behind the parapet.
Traverse: an earthen embankment, the same height as the parapet, built across the terreplein to prevent it being swept by enfilade fire.
Casemate: a vaulted chamber built inside the rampart for protected accommodation or storage, but sometimes pierced by an embrasure at the front for a gun to fire through.
Bartizan (also guérite or echauguette): a small turret projecting from the parapet, intended to give a good view to a sentry while remaining protected.
Archaeological significance
As well as the immediate archaeological significance of such ramparts in indicating the development of military tactics and technology, these sites often enclose areas of historical significance that point to the local conditions at the time the fortress was built.
See also
References
Fortification (architectural elements)
Engineering barrages
Castle architecture | Rampart (fortification) | Engineering | 1,192 |
42,013,674 | https://en.wikipedia.org/wiki/Heptagrammic-order%20heptagonal%20tiling | In geometry, the heptagrammic-order heptagonal tiling is a regular star-tiling of the hyperbolic plane. It has the Schläfli symbol {7,7/2}. The vertex figures are heptagrams, {7/2}. The heptagonal faces overlap with density 3.
Related tilings
It has the same vertex arrangement as the regular order-7 triangular tiling, {3,7}. The full set of edges coincides with the edges of a heptakis heptagonal tiling.
It is related to a Kepler-Poinsot polyhedron, the great dodecahedron, {5,5/2}, which is a polyhedron and a density-3 regular star-tiling of the sphere (resembling a regular icosahedron in this state, similarly to this tessellation resembling the order-7 triangular tiling).
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Heptagonal tilings
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Regular tilings
Heptagrammic-order tilings | Heptagrammic-order heptagonal tiling | Physics | 257 |
51,365,432 | https://en.wikipedia.org/wiki/Clovamide | Clovamide is a chemical compound found in cacao. It has only been found in small amounts. It is also found in Trifolium pratense (red clover).
Clovamide can exist as either the cis- or trans- isomer.
In isolated neuroblastoma cells, clovamide has in vitro neuroprotective effects.
See also
Rosmarinic acid
References
Carboxylic acids
Carboxamides
Chocolate | Clovamide | Chemistry | 95 |
6,949,803 | https://en.wikipedia.org/wiki/Concept%20processing | Concept processing is a technology that uses an artificial intelligence engine to provide flexible user interfaces. This technology is used in some electronic medical record (EMR) software applications, as an alternative to the more rigid template-based technology.
Some methods of data entry in electronic medical records
The most widespread methods of data entry into an EMR are templates, voice recognition, transcription, and concept processing.
Templates
The physician selects either a general, symptom-based or diagnosis-based template pre-fabricated for the type of case at that moment, making it specific through use of forms, pick-lists, check-boxes and free-text boxes. This method became predominant especially in emergency medicine during the late 1990s.
Voice recognition
The physician dictates into a computer voice recognition device that enters the data directly into a free-text area of the EMR.
Transcription
The physician dictates the case into a recording device, which is then sent to a transcriptionist for entry into the EMR, usually into free text areas.
Concept processing
Based on artificial intelligence technology and Boolean logic, concept processing attempts to mirror the mind of each physician by recalling elements from past cases that are the same or similar to the case being seen at that moment.
How concept processing works
For every physician, case types follow a bell-shaped frequency distribution. Some cases are so rare that the physician will never have handled them before; the majority, however, are repetitive and sit at the top of this bell-shaped curve.
A concept processor brings forward the closest previous encounter in relation to the one being seen at that moment, putting that case in front of the physician for fine-tuning.
There are only three possibilities: the closest encounter could be identical to the current encounter (not an impossible event), it could be similar to the current note, or it could be a rare new case.
If the closest encounter is identical to the present one, the physician has effectively completed charting. A concept processor will pull through all the related information needed.
If the encounter is similar but not identical, the physician modifies the differences from the closest case using handwriting recognition, voice recognition, or the keyboard. A concept processor then memorizes all the changes, so that when the next encounter falls between two similar cases, the editing is cut in half, then by a quarter for the next case, then by an eighth, and so on. In fact, the more a concept processor is used, the faster and smarter it becomes.
Concept processing can also be used for rare cases. These are usually combinations of SOAP note elements, which in themselves are not rare. If the text of each element is saved for a given type of case, there will be elements available to use with other cases, even though the other cases may not be similar overall.
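A minimal sketch of the general idea – retrieve the physician's most similar past note and offer it as the draft to edit – is shown below. It is illustrative only and does not represent the algorithm of any particular EMR product; the example notes and the word-overlap score are assumptions made for the demonstration.

```python
# Toy similarity-based retrieval of a past note (word-overlap / Jaccard score).
def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def closest_note(past_notes, complaint):
    # Return the stored note most similar to the current presenting complaint.
    return max(past_notes, key=lambda note: similarity(note, complaint), default=None)

past_notes = [
    "sore throat and fever for 3 days, exudate, rapid strep positive, penicillin prescribed",
    "ankle inversion injury, swelling, ottawa rules negative, rest ice compression advised",
]
print(closest_note(past_notes, "fever and sore throat for two days"))
# the strep-throat note is returned as the starting draft to fine-tune
```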
The role of a concept processor is simply to reflect that thinking process accurately in a doctor's own words.
See also
Electronic health record
Electronic medical record
Health informatics
Medical record
Health informatics
Electronic health record software
Electronic health records
Medical software | Concept processing | Technology,Biology | 626 |
1,192,008 | https://en.wikipedia.org/wiki/Sunspot%20number | The Wolf number (also known as the relative sunspot number or Zürich number) is a quantity that measures the number of sunspots and groups of sunspots present on the surface of the Sun. Historically, it was only possible to detect sunspots on the far side of the Sun indirectly, using helioseismology. Since 2006, NASA's STEREO spacecraft have allowed their direct observation.
History
Astronomers have been observing the Sun recording information about sunspots since the advent of the telescope in 1609.
However, the idea of compiling the information about the sunspot number from various observers originates in Rudolf Wolf in 1848 in Zürich, Switzerland. The produced series initially had his name, but now it is more commonly referred to as the international sunspot number series.
The international sunspot number series is still being produced today at the observatory of Brussels. The international number series shows an approximate periodicity of 11 years, the solar cycle, which was first found by Heinrich Schwabe in 1843, thus sometimes it is also referred to as the Schwabe cycle. The periodicity is not constant but varies roughly in the range 9.5 to 11 years. The international sunspot number series extends back to 1700 with annual values while daily values exist only since 1818.
Since 1 July 2015 a revised and updated international sunspot number series has been made available. The biggest difference is an overall increase by a factor of 1.6 to the entire series. Traditionally, a scaling of 0.6 was applied to all sunspot counts after 1893, to compensate for Alfred Wolfer's better equipment, after taking over from Wolf. This scaling has been dropped from the revised series, making modern counts closer to their raw values. Also, counts were reduced slightly after 1947 to compensate for bias introduced by a new counting method adopted that year, in which sunspots are weighted according to their size.
Calculation
The relative sunspot number $R$ is computed using the formula
$R = k\,(10g + s)$
where
$s$ is the number of individual spots,
$g$ is the number of sunspot groups, and
$k$ is a factor that varies with observer and is referred to as the observatory factor or the personal reduction coefficient.
The observatory factor compensates for the differing number of recorded individual sunspots and sunspot groups by different observers. These differences in recorded values occur due to differences in instrumentation, local seeing, personal experience, and other factors between observers. Since Wolf was the primary observer for the relative sunspot number, his observatory factor was 1.
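For example, under the formula above an observer with a personal reduction coefficient of $k = 0.6$ who records 4 sunspot groups containing 23 individual spots would report the following (the counts are made-up illustrative values):

```python
def wolf_number(groups, spots, k=1.0):
    """Relative sunspot number R = k(10g + s)."""
    return k * (10 * groups + spots)

print(wolf_number(groups=4, spots=23, k=0.6))   # 37.8
```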
Smoothed monthly mean
To calculate the 13-month smoothed monthly mean sunspot number, which is commonly used to calculate the minima and maxima of solar cycles, a tapered-boxcar smoothing function is used. For a given month $m$, with a monthly sunspot number of $R_m$, the smoothed monthly mean can be expressed as
$\bar{R}_m = \frac{1}{24}\left(R_{m-6} + R_{m+6} + 2\sum_{n=-5}^{5} R_{m+n}\right)$
where $R_{m+n}$ is the monthly sunspot number $n$ months away from month $m$. The smoothed monthly mean is intended to dampen any sudden jumps in the monthly sunspot number and remove the effects of the 27-day solar rotation period.
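A direct transcription of this tapered-boxcar average – half weight on the months six before and after, full weight on the eleven months in between – might look as follows; the input series is an arbitrary illustrative list, not observational data.

```python
def smoothed_monthly_mean(monthly, m):
    """13-month smoothed mean centred on index m (needs 6 values on each side)."""
    window = monthly[m - 6:m + 7]                     # 13 consecutive monthly values
    return (window[0] + window[-1] + 2 * sum(window[1:-1])) / 24.0

data = [50, 55, 60, 58, 62, 65, 70, 72, 68, 66, 64, 60, 58]
print(smoothed_monthly_mean(data, 6))                # ~62.8
```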
Alternative series
The accuracy of the compilation of the group sunspot number series has been questioned, motivating the development of several alternative series suggesting different behavior of sunspot group activity before the 20th century.
However, indirect indices of solar activity favor the group sunspot number series by Chatzistergos T. et al.
A different index of sunspot activity was introduced in 1998 in the form of the number of groups apparent on the solar disc.
With this index it was made possible to include sunspot data acquired since 1609, being the date of the invention of the telescope.
See also
Solar cycle
Joy's law (astronomy)
References
External links
The Exploratorium's Guide to Sunspots
Solar Influences Data Analysis Center (SIDC) for the Sunspot Index
NASA Solar Physics Sunspot Cycle page and Table of Sunspot Numbers (txt) by month since 1749 CE
Stellar phenomena
Solar phenomena
de:Sonnenfleck#Sonnenflecken-Relativzahl | Sunspot number | Physics | 801 |
32,185,392 | https://en.wikipedia.org/wiki/Alpha%20factor | The α-factor is a dimensionless quantity used to predict the solid–liquid interface type of a material during solidification. It was introduced by physicist Kenneth A. Jackson in 1958. In his model, crystal growth with larger values of α is smooth, whereas crystals growing at smaller α (below the threshold value of 2) have rough surfaces.
Method
According to John E. Gruzleski in his book Microstructure Development During Metalcasting (1996):
α = (η / ν) · L / (k_B T_E)
where L is the latent heat of fusion; k_B is the Boltzmann constant; T_E is the freezing temperature at equilibrium; η is the number of nearest neighbours an atom has in the interface plane; and ν is the number of nearest neighbours in the bulk solid.
As L / (k_B T_E) = ΔS_f / R, where ΔS_f is the molar entropy of fusion of the material, the factor can equivalently be written in terms of ΔS_f.
According to Martin Glicksman in his book Principles of Solidification: An Introduction to Modern Casting and Crystal Growth Concepts (2011):
α = (η / ν) · ΔS_f / R
where R is the universal gas constant. The crystallographic ratio η / ν is the same as before and is always < 1.
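Under the definitions above, a numerical sketch of the two equivalent forms might look as follows (the material values in the example are placeholders, not data from the sources cited):
K_B = 1.380649e-23      # Boltzmann constant, J/K
R_GAS = 8.314462618     # universal gas constant, J/(mol K)

def alpha_jackson(latent_heat_per_atom_J, t_eq_K, eta, nu):
    """alpha = (eta/nu) * L / (k_B * T_E), with L the latent heat per atom."""
    return (eta / nu) * latent_heat_per_atom_J / (K_B * t_eq_K)

def alpha_glicksman(entropy_of_fusion_J_per_molK, eta, nu):
    """alpha = (eta/nu) * dS_f / R, with dS_f the molar entropy of fusion."""
    return (eta / nu) * entropy_of_fusion_J_per_molK / R_GAS

# Placeholder example: dS_f = 10 J/(mol K) and eta/nu = 0.5
print(alpha_glicksman(10.0, eta=3, nu=6))   # ~0.6, below the threshold value of 2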
References
Materials science | Alpha factor | Physics,Materials_science,Engineering | 206 |
58,083,234 | https://en.wikipedia.org/wiki/Exterior%20calculus%20identities | This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry.
Notation
The following summarizes short definitions and notations that are used in this article.
Manifold
, are -dimensional smooth manifolds, where . That is, differentiable manifolds that can be differentiated enough times for the purposes on this page.
, denote one point on each of the manifolds.
The boundary of a manifold is a manifold , which has dimension . An orientation on induces an orientation on .
We usually denote a submanifold by .
Tangent and cotangent bundles
, denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold .
, denote the tangent spaces of , at the points , , respectively. denotes the cotangent space of at the point .
Sections of the tangent bundles, also known as vector fields, are typically denoted as such that at a point we have . Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as such that at a point we have . An alternative notation for is .
Differential k-forms
Differential -forms, which we refer to simply as -forms here, are differential forms defined on . We denote the set of all -forms as . For we usually write , , .
-forms are just scalar functions on . denotes the constant -form equal to everywhere.
Omitted elements of a sequence
When we are given inputs and a -form we denote omission of the th entry by writing
Exterior product
The exterior product is also known as the wedge product. It is denoted by . The exterior product of a -form and an -form produce a -form . It can be written using the set of all permutations of such that as
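For two 1-forms the definition reduces to (α ∧ β)(u, v) = α(u) β(v) − α(v) β(u). The following numpy sketch (an illustration only, representing the 1-forms by their component vectors) checks the alternating property numerically:
import numpy as np

def wedge_1forms(alpha, beta):
    """Return the 2-form (alpha ^ beta) as a function of two vectors."""
    def two_form(u, v):
        return (alpha @ u) * (beta @ v) - (alpha @ v) * (beta @ u)
    return two_form

alpha = np.array([1.0, 2.0, 0.0])   # components of a 1-form on R^3
beta = np.array([0.0, 1.0, 3.0])
u, v = np.array([1.0, 0.0, 1.0]), np.array([2.0, 1.0, 0.0])

ab, ba = wedge_1forms(alpha, beta), wedge_1forms(beta, alpha)
print(ab(u, v), ba(u, v))    # -11.0 11.0: alpha ^ beta = -(beta ^ alpha)
print(ab(u, v) + ab(v, u))   # 0.0: alternating in its vector arguments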
Directional derivative
The directional derivative of a 0-form along a section is a 0-form denoted
Exterior derivative
The exterior derivative is defined for all . We generally omit the subscript when it is clear from the context.
For a -form we have as the -form that gives the directional derivative, i.e., for the section we have , the directional derivative of along .
For ,
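For a 0-form the property d(df) = 0 (listed among the exterior derivative properties below) comes down to the equality of mixed partial derivatives, which can be checked symbolically. This is a small sketch on R^2, not a general implementation of the exterior derivative:
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**2 + sp.sin(x)                     # an arbitrary 0-form on R^2

df = (sp.diff(f, x), sp.diff(f, y))          # components of the 1-form df
ddf = sp.diff(df[1], x) - sp.diff(df[0], y)  # the single component of d(df)
print(sp.simplify(ddf))                      # 0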
Lie bracket
The Lie bracket of sections is defined as the unique section that satisfies
Tangent maps
If is a smooth map, then defines a tangent map from to . It is defined through curves on with derivative such that
Note that is a -form with values in .
Pull-back
If is a smooth map, then the pull-back of a -form is defined such that for any -dimensional submanifold
The pull-back can also be expressed as
Interior product
Also known as the interior derivative, the interior product given a section is a map that effectively substitutes the first input of a -form with . If and then
Metric tensor
Given a nondegenerate bilinear form on each that is continuous on , the manifold becomes a pseudo-Riemannian manifold. We denote the metric tensor , defined pointwise by . We call the signature of the metric. A Riemannian manifold has , whereas Minkowski space has .
Musical isomorphisms
The metric tensor induces duality mappings between vector fields and one-forms: these are the musical isomorphisms flat and sharp . A section corresponds to the unique one-form such that for all sections , we have:
A one-form corresponds to the unique vector field such that for all , we have:
These mappings extend via multilinearity to mappings from -vector fields to -forms and -forms to -vector fields through
Hodge star
For an n-manifold M, the Hodge star operator is a duality mapping taking a -form to an -form .
It can be defined in terms of an oriented frame for , orthonormal with respect to the given metric tensor :
Co-differential operator
The co-differential operator on an dimensional manifold is defined by
The Hodge–Dirac operator, , is a Dirac operator studied in Clifford analysis.
Oriented manifold
An -dimensional orientable manifold is a manifold that can be equipped with a choice of an -form that is continuous and nonzero everywhere on .
Volume form
On an orientable manifold the canonical choice of a volume form given a metric tensor and an orientation is for any basis ordered to match the orientation.
Area form
Given a volume form and a unit normal vector we can also define an area form on the
Bilinear form on k-forms
A generalization of the metric tensor, the symmetric bilinear form between two -forms , is defined pointwise on by
The -bilinear form for the space of -forms is defined by
In the case of a Riemannian manifold, each is an inner product (i.e. is positive-definite).
Lie derivative
We define the Lie derivative through Cartan's magic formula for a given section X as
L_X = i_X ∘ d + d ∘ i_X .
It describes the change of a -form along a flow associated to the section .
Laplace–Beltrami operator
The Laplacian is defined as Δ = dδ + δd.
Important definitions
Definitions on Ωk(M)
is called...
closed if
exact if for some
coclosed if
coexact if for some
harmonic if closed and coclosed
Cohomology
The -th cohomology of a manifold and its exterior derivative operators is given by
Two closed -forms are in the same cohomology class if their difference is an exact form i.e.
A closed surface of genus will have generators which are harmonic.
Dirichlet energy
Given , its Dirichlet energy is
Properties
Exterior derivative properties
∫_M dω = ∫_∂M ω ( Stokes' theorem )
d ∘ d = 0 ( cochain complex )
d(α ∧ β) = dα ∧ β + (−1)^k α ∧ dβ for a k-form α and an l-form β ( Leibniz rule )
df(X) = X f for a 0-form f and a section X ( directional derivative )
for
Exterior product properties
for ( alternating )
( associativity )
for ( compatibility of scalar multiplication )
( distributivity over addition )
for when is odd or . The rank of a -form means the minimum number of monomial terms (exterior products of one-forms) that must be summed to produce .
Pull-back properties
( commutative with )
( distributes over )
( contravariant )
for ( function composition )
Musical isomorphism properties
Interior product properties
( nilpotent )
for ( Leibniz rule )
for
for
for
Hodge star properties
for ( linearity )
for , , and the sign of the metric
( inversion )
for ( commutative with -forms )
for ( Hodge star preserves -form norm )
( Hodge dual of constant function 1 is the volume form )
Co-differential operator properties
( nilpotent )
and ( Hodge adjoint to )
if ( adjoint to )
In general,
for
Lie derivative properties
( commutative with )
( commutative with )
( Leibniz rule )
Exterior calculus identities
if
( bilinear form )
( Jacobi identity )
Dimensions
If
for
for
If is a basis, then a basis of is
Exterior products
Let and be vector fields.
Projection and rejection
( interior product dual to wedge )
for
If , then
is the projection of onto the orthogonal complement of .
is the rejection of , the remainder of the projection.
thus ( projection–rejection decomposition )
Given the boundary with unit normal vector
extracts the tangential component of the boundary.
extracts the normal component of the boundary.
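The same decomposition is easy to see for ordinary vectors. The sketch below splits a vector into components along and orthogonal to a unit vector n, mirroring the tangential/normal split used at a boundary (an illustration only, not the form-level operators themselves):
import numpy as np

n = np.array([0.0, 0.0, 1.0])            # unit normal
u = np.array([2.0, -1.0, 4.0])

normal_part = (u @ n) * n                 # component along n
tangential_part = u - normal_part         # component in the orthogonal complement
print(np.allclose(normal_part + tangential_part, u))   # True
print(tangential_part @ n)                              # 0.0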
Sum expressions
given a positively oriented orthonormal frame .
Hodge decomposition
If , such that
Poincaré lemma
If a boundaryless manifold has trivial cohomology , then any closed is exact. This is the case if M is contractible.
Relations to vector calculus
Identities in Euclidean 3-space
Let Euclidean metric .
We use differential operator
for .
( scalar triple product )
( cross product )
if
( scalar product )
( gradient )
( directional derivative )
( divergence )
( curl )
where is the unit normal vector of and is the area form on .
( divergence theorem )
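One of these correspondences can be checked numerically: identifying 1-forms on Euclidean R^3 with vectors, the Hodge star of the wedge of two 1-forms reproduces the cross product. The sketch below assumes the standard orientation and Euclidean metric:
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
wedge = np.outer(a, b) - np.outer(b, a)                      # components of a ^ b
star = 0.5 * np.einsum('kij,ij->k', levi_civita(), wedge)    # Hodge star of the 2-form
print(np.allclose(star, np.cross(a, b)))                     # True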
Lie derivatives
( -forms )
( -forms )
if ( -forms on -manifolds )
if ( -forms )
References
Calculus
Mathematical identities
Mathematics-related lists
Differential forms
Differential operators
Generalizations of the derivative | Exterior calculus identities | Mathematics,Engineering | 1,627 |
412,703 | https://en.wikipedia.org/wiki/Cosmic%20egg | The cosmic egg, world egg or mundane egg is a mythological motif found in the cosmogonies of many cultures and civilizations, including in Proto-Indo-European mythology. Typically, there is an egg which, upon "hatching", either gives rise to the universe itself or gives rise to a primordial being who, in turn, creates the universe. The egg is sometimes laid on the primordial waters of the Earth. Typically, the upper half of the egg, or its outer shell, becomes the heaven (firmament) and the lower half, or the inner yolk, becomes the Earth. The motif likely stems from simple elements of an egg, including its ability to offer nourishment and give rise to new life, as is reflected by the Latin proverb omne vivum ex ovo ('all life comes from an egg').
The term "cosmic egg" is also used in the modern study of cosmology in the context of emergent Universe scenarios.
Chinese mythology
Various versions of the cosmic egg myth are related to the creator, Pangu. Heaven and earth are said to have originally existed in a formless state, like the egg of a chicken. The egg opens and unfolds after 18,000 years: the light part rose to become heaven and the heavy part sank to become the earth. A version of this myth deriving from the Zhejiang Province holds that Pangu, experiencing discomfort in being contained in a dark and stuffy egg, shatters it into pieces, after which heaven and earth form by the same process (with the addition that parts of the shell then form the sun, moon, and stars).
Dogon mythology
In Dogon mythology from Burkina Faso, the creator-god Amma takes the form of an egg. The egg is divided into four sections representing the four elements: air, fire, water, and earth. This also establishes the four cardinal directions. Failing to create the Earth on her first attempt, Amma plants a seed in herself that forms two placentas, each containing a pair of twins. One twin, Ogo, breaks out and unsuccessfully tries to create a universe. Amma however is able to create the Earth now from a part of Ogo's placenta. Ogo's twin, Nommo, is killed by Amma and parts of the body are scattered across the world to give it order. The parts were then reconstituted to revive Nommo. Nommo creates four spirits that become the ancestors of the Dogon people. These spirits are sent with Nommo into an ark to populate the world.
The creation account proceeds as follows:In the beginning, Amma dogon, alone, was in the shape of an egg: the four collar bones were fused, dividing the egg into air, earth, fire, and water, establishing also the four cardinal directions. Within this cosmic egg was the material and the structure of the universe, and the 266 signs that embraced the essence of all things. The first creation of the world by Amma was, however, a failure. The second creation began when Amma planted a seed within herself, a seed that resulted in the shape of man. But in the process of its gestation, there was a flaw, meaning that the universe would now have within it the possibilities for incompleteness. Now the egg became two placentas, each containing a set of twins, male and female. After sixty years, one of the males, Ogo, broke out of the placenta and attempted to create his own universe, in opposition to that being created by Amma. But he was unable to say the words that would bring such a universe into being. He then descended, as Amma transformed into the earth the fragment of placenta that went with Ogo into the void. Ogo interfered with the creative potential of the earth by having incestuous relations with it. His counterpart, Nommo, a participant in the revolt, was then killed by Amma, the parts of his body cast in all directions, bringing a sense of order to the world. When, five days later, Amma brought the pieces of Nommo's body together, restoring him to life, Nommo became ruler of the universe. He created four spirits, the ancestors of the Dogon people; Amma sent Nommo and the spirits to earth in an ark, and so the earth was restored. Along the way, Nommo uttered the words of Amma, and the sacred words that create were made available to humans. In the meantime, Ogo was transformed by Amma into Yuguru, the Pale Fox, who would always be alone, always be incomplete, eternally in revolt, ever wandering the earth seeking his female soul.
Egyptian mythology
The ancient Egyptians accepted multiple creation myths as valid, including those of the Hermopolitan, Heliopolitan, and Memphite theologies. The cosmic egg myth can be found from Hermopolitus. Although the site, located in Middle Egypt, currently sports a name deriving from the name of the god Hermes, the ancient Egyptians called it Khemnu, or “Eight-Town.” The number eight, in turn, refers to the Ogdoad, a group of eight gods who are the main characters in the Hermopolitan creation myth. Four of these gods are male, and have the heads of frogs, and the other four are female with the heads of serpents. These eight existed in the primordial, chaotic water that pre-existed the rest of creation. At some point these eight gods, in one way or another, bring about the formation of a cosmic egg, although variants of the myth describe the origins of the egg in different ways. In any case, the egg in turn gives rise to the deity who forms the rest of the world as well as the first land to arise out of the primordial waters, called the primeval mound. When the mound appeared, a lotus blossom bloomed to signal the birth of the sun god, after which the formation of the rest of creation could finally proceed.
Greek and Roman mythology
Ideas similar to the cosmic egg myth are mentioned in two different sources from Greek and Roman mythology. One is in the Roman author Marcus Terentius Varro, living in the 1st century BC. According to Varro, heaven and earth can respectively be likened to an egg shell and its yolk. The air, in turn, is represented by the moisture functioning as a form of humidity between the shell and yolk. The second mention is found in the Pseudo-Clementine Recognitions 10:17, although from an oppositional standpoint, insofar as Clement is presented as summarizing a ridiculous cosmological belief found among pagans: according to the description given, there is a primordial chaos which, over time, solidified into an egg. As is with an egg, a creature began to grow inside, until at some point it broke open to produce a human that was both male and female (i.e. androgynous) named Phanetas. When Phanetas appeared, a light shone forth that resulted in "substance, prudence, motion, and coition," and these in turn resulted in the creation of the heavens and the earth. The Recognitions 10:30 presents, then, a second summary of the idea, this time attributed to the cosmogony of Orpheus as described by a "good pagan" named Niceta. This summary, in contrast to the first one, is presented in a serious manner. This myth appears to have had occasional influence, insofar as a manuscript of it is associated with the reappearance of the idea at a library of Saint Gall in a 9th-century commentary on Boethius. Another three appearances occur again in the twelfth century.
Hindu mythology
In one Vedic myth recorded in the Jaiminīya Brāhmaṇa, the earliest phase of the cosmos involves a primordial ocean out of which an egg arose. Once the egg split, it began the process of forming heaven (out of the upper part) and earth (out of the lower part) over the course of one hundred divine years. Another text, the Śatapatha Brāhmaṇa, also has the sequence of a primordial ocean and then an egg, but this time, the god Prajapati emerges from the egg after one year. He creates the cosmos and then the gods and antigods from his speech and breath. The Rigveda speaks of a golden embryo (called the hiraṇyagarbha) which is located on a "high waters" out of which all else develops. Finally, a version of the story appears in the Chāndogya Upaniṣad.
Finnish mythology
In the Kalevala, the national epic of Finland, there is a myth of the world being created from the fragments of an egg. The goddess of the air, Ilmatar, longed to have a son. To achieve this, she and the East Wind make love until she conceives Väinämöinen, the child of the wind. However, she was not able to give birth to her child. A pochard swooped down and impregnated her: as a result, six golden cosmic eggs were birthed or laid, as well as an iron egg. The pochard took these eggs for himself and protected them by sitting on them, but this came with sitting on Ilmatar as well. Upon the movement of the air goddess, they rolled into the sea and the shell broke: the fragments formed heaven, earth, the sun, moon, stars, and (from the iron egg) a thundercloud.
The following is the translation of the part of the text describing the formation of the cosmos from the fragments of the egg, published by William Forsell Kirby in 1906:
In the ooze they were not wasted,
Nor the fragments in the water,
But a wondrous change came o'er them,
And the fragments all grew lovely.
From the cracked egg's lower fragment,
Now the solid earth was fashioned,
From the cracked egg's upper fragment,
Rose the lofty arch of heaven,
From the yolk, the upper portion,
Now became the sun's bright lustre;
From the white, the upper portion,
Rose the moon that shines so brightly;
Whatso in the egg was mottled,
Now became the stars in heaven,
Whatso in the egg was blackish,
In the air as cloudlets floated.
Zoroastrian mythology
In Zoroastrian cosmography, the sky was considered to be spherical with an outer boundary (called a parkān), an idea that likely goes back to Aristotle. The Earth is also spherical and exists within the spherical sky. To help convey this cosmology, a number of ancient writers, including Empedocles, came up with the analogy of an egg: the outer spherical and bounded sky is like the outer shell, whereas the Earth is represented by the inner round yolk within. This analogy, in turn, is found in a number of Zoroastrian texts, including the Selections of Zadspram.
Modern representations
Literature
In 1955 poet and writer Robert Graves published the mythography The Greek Myths, a compendium of Greek mythology normally published in two volumes. Within this work Graves' imaginatively reconstructed "Pelasgian creation myth" features a supreme creatrix, Eurynome, "The Goddess of All Things", who arose naked from Chaos to part sea from sky so that she could dance upon the waves. Catching the north wind at her back and, rubbing it between her hands, she warms the pneuma and spontaneously generates the serpent Ophion, who mates with her. In the form of a dove upon the waves, she lays the Cosmic Egg and bids Ophion to incubate it by coiling seven times around until it splits in two and hatches "all things that exist... sun, moon, planets, stars, the earth with its mountains and rivers, its trees, herbs, and living creatures".
Film
The ending of Stanley Kubrick’s film 2001: A Space Odyssey depicts the rebirth of humanity as a journey from beyond infinity back to earth in the form of a cosmic human embryo (or “Star Child”).
Cosmology
As the concept of a true singularity came under increasing criticism, alternative nonsingular "cosmic egg" (emergent Universe) scenarios started being developed.
In 1913, Vesto Slipher published his observations that light from remote galaxies was redshifted, which was gradually accepted as meaning that all galaxies (except Andromeda) are receding from the Earth.
Alexander Friedmann predicted the same consequence in 1922 from Einstein's equations of general relativity, once the previous ad-hoc cosmological constant was removed from it (which had been inserted to conform to the preconceived eternal, static universe).
Georges Lemaître proposed in 1927 that the cosmos originated from what he called the primeval atom.
Edwin Hubble observationally confirmed Lemaître's findings two years later, in 1929.
In the late 1940s, George Gamow's assistant cosmological researcher Ralph Alpher, proposed the name ylem for the primordial substance that existed between the Big Crunch of the previous universe and the Big Bang of our own universe. Ylem is closely related to the concept of supersymmetry.
See also
Ancient near eastern cosmology
Brahma
Brahman
Brahmanda
Hiranyagarbha
Orphic egg
Phanes
References
Sources
External links
Creation
Creation myths
Eggs in culture
pl:Jajko w kulturze#Symbolika | Cosmic egg | Astronomy | 2,821 |
24,649,771 | https://en.wikipedia.org/wiki/Protocol%20composition%20logic | Protocol Composition Logic is a formal method that can be used for proving security properties of cryptographic protocols that use symmetric-key and public-key cryptography. PCL is designed around a process calculus with actions for various possible protocol steps (e.g. generating random numbers, performing encryption, decryption and digital signature operations as well as sending and receiving messages).
Some problems with the logic have been found, implying that some currently claimed results cannot be proven within the logic.
References
Cryptography | Protocol composition logic | Mathematics,Engineering | 101 |
62,578,813 | https://en.wikipedia.org/wiki/Neumorphism | Neumorphism is a design style used in graphical user interfaces. It is commonly identified by a soft and light look (for which it is sometimes referred to as soft UI) with elements that appear to protrude from or dent into the background rather than float on top of it. It is sometimes considered a medium between skeuomorphism and flat design.
History
The term neumorphism was coined by Jason Kelly in 2019 as a portmanteau of neo and skeuomorphism, emphasizing its role as a semi-revival of skeuomorphism. Many neumorphic design concepts can be traced to Alexander Plyuto, who created a mockup for a banking app showing various elements of neumorphic design. He posted it to the website Dribbble, where it quickly gained attention, reaching 3,000 views.
On November 12, 2020, Apple released macOS Big Sur. The update included graphical designs that featured neumorphism prominently, such as the app icons and use of translucency.
Characteristics and purpose
Neumorphism is a form of minimalism characterized by a soft and light look, often using pastel colors with low contrast. Elements are usually the same color as the background, and are only distinguished by shadows and highlights surrounding the element. This gives the elements the appearance that they are "protruding" from the background, or that they are dented into it.
Designers may like the look and feel of neumorphism because it provides a middle ground between skeuomorphism and flat design. Specifically, it aims to look plausibly realistic, while still looking clean and adhering to minimalism.
Criticism
Neumorphism has received considerable criticism, notably for its poor accessibility, difficulty of implementation, low contrast, and incompatibility with certain brands.
References
Design
Graphical user interfaces
2019 neologisms | Neumorphism | Engineering | 392 |
49,023,532 | https://en.wikipedia.org/wiki/Random-sampling%20mechanism | A random-sampling mechanism (RSM) is a truthful mechanism that uses sampling in order to achieve approximately-optimal gain in prior-free mechanisms and prior-independent mechanisms.
Suppose we want to sell some items in an auction and achieve maximum profit. The crucial difficulty is that we do not know how much each buyer is willing to pay for an item. If we know, at least, that the valuations of the buyers are random variables with some known probability distribution, then we can use a Bayesian-optimal mechanism. But often we do not know the distribution. In this case, random-sampling mechanisms provide an alternative solution.
RSM in large markets
Market-halving scheme
When the market is large, the following general scheme can be used:
The buyers are asked to reveal their valuations.
The buyers are split into two sub-markets, a "left" market and a "right" market, using simple random sampling: each buyer goes to one of the sides by tossing a fair coin.
In each sub-market, an empirical distribution function of the declared valuations is calculated.
The Bayesian-optimal mechanism (Myerson's mechanism) is applied in each sub-market using the empirical distribution calculated from the other sub-market.
This scheme is called "Random-Sampling Empirical Myerson" (RSEM).
The declaration of each buyer has no effect on the price he has to pay; the price is determined by the buyers in the other sub-market. Hence, it is a dominant strategy for the buyers to reveal their true valuation. In other words, this is a truthful mechanism.
Intuitively, by the law of large numbers, if the market is sufficiently large then the empirical distributions are sufficiently similar to the real distributions, so we expect the RSEM to attain near-optimal profit. However, this is not necessarily true in all cases. It has been proved to be true in some special cases.
The simplest case is a digital goods auction. There, step 4 is simple and consists only of calculating the optimal price in each sub-market. The optimal price computed in one sub-market is then applied to the other sub-market, and vice versa. Hence, the mechanism is called "Random-Sampling Optimal Price" (RSOP). This case is simple because it always calculates feasible allocations, i.e., it is always possible to apply the price calculated in one side to the other side. This is not necessarily the case with physical goods.
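A minimal sketch of RSOP for an unlimited-supply digital good might look as follows (searching for the revenue-maximizing price only over the submitted bids is an implementation choice, not part of the mechanism's definition):
import random

def optimal_single_price(bids):
    """Price maximizing price * (number of bids >= price), searched over the bids."""
    best_price, best_revenue = 0.0, 0.0
    for p in bids:
        revenue = p * sum(1 for b in bids if b >= p)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

def rsop_revenue(bids, rng=random):
    left, right = [], []
    for bid in bids:                              # fair coin toss per buyer
        (left if rng.random() < 0.5 else right).append(bid)
    price_for_left = optimal_single_price(right)  # each side is priced by the other
    price_for_right = optimal_single_price(left)
    revenue = price_for_left * sum(1 for b in left if b >= price_for_left)
    revenue += price_for_right * sum(1 for b in right if b >= price_for_right)
    return revenue

print(rsop_revenue([1, 2, 3, 5, 8, 8, 13]))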
Even in a digital goods auction, RSOP does not necessarily converge to the optimal profit. It converges only under the bounded valuations assumption: for each buyer, the valuation of the item is between 1 and , where is some constant. The convergence rate of RSOP to optimality depends on . The convergence rate also depends on the number of possible "offers" considered by the mechanism.
To understand what an "offer" is, consider a digital goods auction in which the valuations of the buyers, in dollars, are known to be bounded in . If the mechanism uses only whole dollar prices, then there are only possible offers.
In general, the optimization problem may involve much more than just a single price. For example, we may want to sell several different digital goods, each of which may have a different price. So instead of a "price", we talk on an "offer". We assume that there is a global set of possible offers. For every offer and agent , is the amount that agent pays when presented with the offer . In the digital-goods example, is the set of possible prices. For every possible price , there is a function such that is either 0 (if ) or (if ).
For every set of agents, the profit of the mechanism from presenting the offer to the agents in is:
and the optimal profit of the mechanism is:
The RSM calculates, for each sub-market , an optimal offer , calculated as follows:
The offer is applied to the buyers in , i.e.: each buyer who said that receives the offered allocation and pays ; each buyer in who said that do not receive and do not pay anything. The offer is applied to the buyers in in a similar way.
Profit-oracle scheme
Profit oracle is another RSM scheme that can be used in large markets. It is useful when we do not have direct access to agents' valuations (e.g. due to privacy reasons). All we can do is run an auction and watch its expected profit. In a single-item auction, where there are bidders, and for each bidder there are at most possible values (selected at random with unknown probabilities), the maximum-revenue auction can be learned using:
calls to the oracle-profit.
RSM in small markets
RSMs were also studied in a worst-case scenario in which the market is small. In such cases, we want to get an absolute, multiplicative approximation factor, that does not depend on the size of the market.
Market-halving, digital goods
The first research in this setting was for a digital goods auction with Single-parameter utility.
For the Random-Sampling Optimal-Price mechanism, several increasingly better approximations have been calculated:
An initial analysis showed that the mechanism profit is at least 1/7600 of the optimal.
A later analysis improved this bound to at least 1/15 of the optimal.
The best known analysis shows that the mechanism profit is at least 1/4.68 of the optimal, and in most cases 1/4 of the optimal, which is tight.
Single-sample, physical goods
When the agents' valuations satisfy some technical regularity condition (called monotone hazard rate), it is possible to attain a constant-factor approximation to the maximum-profit auction using the following mechanism:
Sample a single random agent and query his value (the agents are assumed to have single-parameter utility).
On the other agents, run a VCG auction with reserve-price determined by the sampled agent.
The profit of this mechanism is at least , where is the number of agents. This is 1/8 when there are two agents, and grows towards 1/4 as the number of agents grows. This scheme can be generalized to handle constraints on the subsets of agents that can win simultaneously (e.g., there is only a finite number of items). It can also handle agents with different attributes (e.g. young vs. old bidders).
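A minimal single-item reading of this scheme is sketched below; the general mechanism described above handles feasibility constraints through a full VCG auction, which this sketch does not attempt:
import random

def single_sample_auction(values, rng=random):
    """One sampled agent sets the reserve; the rest face a Vickrey auction with that reserve."""
    values = list(values)
    reserve = values.pop(rng.randrange(len(values)))    # sampled agent is excluded from the sale
    if not values:
        return None, 0.0
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    winner = order[0]
    if values[winner] < reserve:
        return None, 0.0                                # reserve not met, item unsold
    runner_up = values[order[1]] if len(order) > 1 else 0.0
    price = max(reserve, runner_up)                     # second-price rule with reserve
    return winner, price

print(single_sample_auction([3.0, 7.5, 5.2, 9.1], rng=random.Random(0)))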
Sample complexity
The sample complexity of a random-sampling mechanism is the number of agents it needs to sample in order to attain a reasonable approximation of the optimal welfare.
The results in imply several bounds on the sample-complexity of revenue-maximization of single-item auctions:
For a -approximation of the optimal expected revenue, the sample-complexity is - a single sample suffices. This is true even when the bidders are not i.i.d.
For a -approximation of the optimal expected revenue, when the bidders are i.i.d OR when there is an unlimited supply of items (digital goods), the sample-complexity is when the agents' distributions have monotone hazard rate, and when the agents' distributions are regular but do not have monotone-hazard-rate.
The situation becomes more complicated when the agents are not i.i.d (each agent's value is drawn from a different regular distribution) and the goods have limited supply. When the agents come from different distributions, the sample complexity of -approximation of the optimal expected revenue in single-item auctions is:
at most - using a variant of the empirical Myerson auction.
at least (for monotone-hazard-rate regular valuations) and at least (for arbitrary regular valuations).
discuss arbitrary auctions with single-parameter utility agents (not only single-item auctions), and arbitrary auction-mechanisms (not only specific auctions). Based on known results about sample complexity, they show that the number of samples required to approximate the maximum-revenue auction from a given class of auctions is:
where:
the agents' valuations are bounded in ,
the pseudo-VC dimension of the class of auctions is at most ,
the required approximation factor is ,
the required success probability is .
In particular, they consider a class of simple auctions called -level auctions: auctions with reserve prices (a Vickrey auction with a single reserve price is a 1-level auction). They prove that the pseudo-VC-dimension of this class is , which immediately translates to a bound on their generalization error and sample-complexity. They also prove bounds on the representation error of this class of auctions.
Envy
A disadvantage of the random-sampling mechanism is that it is not envy-free. E.g., if the optimal prices in the two sub-markets and are different, then buyers in each sub-market are offered a different price. In other words, there is price discrimination. This is inevitable in the following sense: there is no single-price strategyproof auction that approximates the optimal profit.
See also
Market research
Pricing
Consensus estimate - an alternative approach to prior-free mechanism design.
References
Mechanism design
Sampling techniques | Random-sampling mechanism | Mathematics | 1,846 |
25,685,354 | https://en.wikipedia.org/wiki/Journal%20of%20Circuits%2C%20Systems%2C%20and%20Computers | The Journal of Circuits, Systems and Computers was founded in 1991 and is published eight times annually by World Scientific. It covers a wide range of topics regarding circuits, systems and computers, from basic mathematics to engineering and design.
The editor-in-chief of the journal is Professor Wai-Kai Chen and the five regional editors include Piero Malcovati from the University of Pavia, Emre Salman from Stony Brook University, Masazaku Sengoku from Niigata University, Zoran Stamenkovic from IHP GmbH, and Tongquan Wei from East China Normal University.
Abstracting and indexing
The journal is abstracted and indexed in:
SciSearch
Scopus
ISI Alerting Services
Current Contents/Engineering, Computing & Technology
Mathematical Reviews
Inspec
io-port.net
Compendex
Computer Abstracts
References
English-language journals
Academic journals established in 1991
Electrical and electronic engineering journals
Computer science journals
World Scientific academic journals | Journal of Circuits, Systems, and Computers | Engineering | 191 |
45,494,298 | https://en.wikipedia.org/wiki/Scioptic%20ball | The scioptic ball is a universal joint allowing an optical instrument mounted on a ball to be swiveled to point anywhere in a wide arc. It was inspired by studies of the human eye. It has a number of applications. The scioptic ball may provide a firm anchor for a microscope, camera or telescope allowing it to be swiveled in all directions, for example to follow the course of an eclipse or for drawing panoramic views. Scioptic balls have been used as camera obscuras, projecting images from the outside on walls in darkened rooms. Scioptic balls have been used simply as light sources. It was an early example of a type of wide-angle lens.
History
Daniel Schwenter (1585–1636), professor of mathematics and oriental languages, developed the scioptic ball in 1636. In 1685, Johann Zahn illustrated a large workshop camera obscura for solar observations using the telescope and scioptic ball.
Sources
Scioptic ball example in museum
Scioptic ball example in museum
Optical devices
Microscopes | Scioptic ball | Chemistry,Materials_science,Technology,Engineering | 220 |
34,073,649 | https://en.wikipedia.org/wiki/Approximate%20entropy | In statistics, an approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, consider two series of data:
Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1.
Series B: (0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, ...), which has either a value of 0 or 1, chosen randomly, each with probability 1/2.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have.
Regularity was originally measured by exact regularity statistics, which has mainly centered on various entropy measures.
However, accurate entropy calculation requires vast amounts of data, and the results will be greatly influenced by system noise, therefore it is not practical to apply these methods to experimental data. ApEn was first proposed (under a different name) by A. Cohen and I. Procaccia,
as an approximate algorithm to compute an exact regularity statistic, Kolmogorov–Sinai entropy, and later popularized by Steve M. Pincus. ApEn was initially used to analyze chaotic dynamics and medical data, such as heart rate, and its applications later spread to finance, physiology, human factors engineering, and climate sciences.
Algorithm
A comprehensive step-by-step tutorial with an explanation of the theoretical foundations of Approximate Entropy is available. The algorithm is:
Step 1 Assume a time series of data u(1), u(2), ..., u(N). These are N raw data values from measurements equally spaced in time.
Step 2 Let m be a positive integer, with m ≤ N, which represents the length of a run of data (essentially a window). Let r be a positive real number, which specifies a filtering level.
Step 3 Define the vector x(i) = ( u(i), u(i+1), ..., u(i+m-1) ) for each i where 1 ≤ i ≤ N - m + 1. In other words, x(i) is an m-dimensional vector that contains the run of data starting with u(i). Define the distance between two vectors x(i) and x(j) as the maximum of the distances between their respective components, given by
d[x(i), x(j)] = max_k | u(i+k-1) - u(j+k-1) | for 1 ≤ k ≤ m.
Step 4 Define a count as
C_i^m(r) = (number of j with 1 ≤ j ≤ N - m + 1 such that d[x(i), x(j)] ≤ r) / (N - m + 1)
for each i where 1 ≤ i ≤ N - m + 1. Note that since j takes on all values between 1 and N - m + 1, the match will be counted when j = i (i.e. when the test subsequence, x(j), is matched against itself, x(i)).
Step 5 Define
Φ^m(r) = (N - m + 1)^(-1) Σ_(i=1)^(N-m+1) log( C_i^m(r) )
where log is the natural logarithm, for a fixed m, r, and N as set in Step 2.
Step 6 Define approximate entropy (ApEn) as
ApEn(m, r, N) = Φ^m(r) - Φ^(m+1)(r)
Parameter selection Typically, choose m = 2 or m = 3, whereas r depends greatly on the application.
An implementation on Physionet, which is based on Pincus, uses a slightly different counting convention in Step 4. While a concern for artificially constructed examples, it is usually not a concern in practice.
Example
Consider a sequence of N = 51 samples of heart rate equally spaced in time, with the three values 85, 80, 89 repeating: S_N = {85, 80, 89, 85, 80, 89, ...}.
Note the sequence is periodic with a period of 3. Let's choose m = 2 and r = 3 (the values of m and r can be varied without affecting the result).
Form a sequence of vectors:
Distance is calculated repeatedly as follows. In the first calculation,
which is less than .
In the second calculation, note that , so
which is greater than .
Similarly,
The result is a total of 17 terms such that . These include . In these cases, is
Note in Step 4, for . So the terms such that include , and the total number is 16.
At the end of these calculations, we have
Then we repeat the above steps for . First form a sequence of vectors:
By calculating distances between vector , we find the vectors satisfying the filtering level have the following characteristic:
Therefore,
At the end of these calculations, we have
Finally,
The value of ApEn is very small (on the order of 10⁻⁵), so it implies the sequence is regular and predictable, which is consistent with the observation.
Python implementation
import math
def approx_entropy(time_series, run_length, filter_level) -> float:
"""
Approximate entropy
>>> import random
>>> regularly = [85, 80, 89] * 17
>>> print(f"{approx_entropy(regularly, 2, 3):e}")
1.099654e-05
>>> randomly = [random.choice([85, 80, 89]) for _ in range(17*3)]
>>> 0.8 < approx_entropy(randomly, 2, 3) < 1
True
"""
def _maxdist(x_i, x_j):
return max(abs(ua - va) for ua, va in zip(x_i, x_j))
def _phi(m):
n = time_series_length - m + 1
x = [
[time_series[j] for j in range(i, i + m - 1 + 1)]
for i in range(time_series_length - m + 1)
]
counts = [
sum(1 for x_j in x if _maxdist(x_i, x_j) <= filter_level) / n for x_i in x
]
return sum(math.log(c) for c in counts) / n
time_series_length = len(time_series)
return abs(_phi(run_length + 1) - _phi(run_length))
if __name__ == "__main__":
import doctest
doctest.testmod()
MATLAB implementation
Fast Approximate Entropy from MatLab Central
approximateEntropy
Interpretation
The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations. A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Advantages
The advantages of ApEn include:
Lower computational demand. ApEn can be designed to work for small data samples ( points) and can be applied in real time.
Less effect from noise. If data is noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present in the data.
Limitations
The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of in the calculations. This step might introduce bias in ApEn, which causes ApEn to have two poor properties in practice:
ApEn is heavily dependent on the record length and is uniformly lower than expected for short records.
It lacks relative consistency. That is, if ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested.
Applications
ApEn has been applied to classify electroencephalography (EEG) in psychiatric diseases, such as schizophrenia, epilepsy, and addiction.
See also
Recurrence quantification analysis
Sample entropy
References
Time series
Entropy and information
Articles with example Python (programming language) code | Approximate entropy | Physics,Mathematics | 1,523 |
11,421,478 | https://en.wikipedia.org/wiki/RyeB%20RNA | The SdsR/RyeB RNA is a non-coding RNA that was identified in a large scale screen of E. coli. The exact 5′ and 3′ ends of this RNA are uncertain. This RNA overlaps the SraC/RyeA RNA on the opposite strand suggesting that the two may act in a concerted manner. It is transcribed by general stress factor σs and is most highly expressed in stationary phase. SdsR/RyeB RNA interacts with Hfq.
The homologous sRNA in S. enterica was shown to regulate synthesis of the major porin OmpD. A study using Salmonella identified 20 targets of this sRNA, including the transcriptional regulator CRP, the global DNA-binding factors StpA and HupB, the antibiotic transporter protein TolC, and the RtsA/B two-component system (TCS), and validated their post-transcriptional control by SdsR/RyeB RNA.
See also
RydB RNA
RyhB RNA
RyeE RNA
References
External links
Non-coding RNA | RyeB RNA | Chemistry | 217 |
28,327,688 | https://en.wikipedia.org/wiki/WikiPathways | WikiPathways is a community resource for contributing and maintaining content dedicated to biological pathways. Any registered WikiPathways user can contribute, and anybody can become a registered user. Contributions are monitored by a group of admins, but the bulk of peer review, editorial curation, and maintenance is the responsibility of the user community. WikiPathways was originally built using MediaWiki software, a custom graphical pathway editing tool (PathVisio) and integrated BridgeDb databases covering major gene, protein, and metabolite systems. WikiPathways was founded in 2008 by Thomas Kelder, Alex Pico, Martijn Van Iersel, Kristina Hanspers, Bruce Conklin and Chris Evelo. The current architects are Alex Pico and Martina Summer-Kutmon.
Pathway content
Each article at WikiPathways is dedicated to a particular pathway. Many types of molecular pathways are covered, including metabolic, signaling, regulatory, etc. and the supported species include human, mouse, zebrafish, fruit fly, C. elegans, yeast, rice and arabidopsis, as well as bacteria and plant species. Using a search feature, one can locate a particular pathway by name, by the genes and proteins it contains, or by the text displayed in its description. The pathway collection can also be browsed with combinations of species names and ontology-based categories.
In addition to the pathway diagram, each pathway page also includes a description, bibliography, pathway version history and list of component genes and proteins with linkouts to public resources. For individual pathway nodes, users can access a list of other pathways with that node. Pathway changes can be monitored by displaying previous revisions or by viewing differences between specific revisions. Using the pathway history one can also revert to a previous revision of a pathway.
Pathways can also be tagged with ontology terms from three major BioPortal ontologies (Pathway, Disease and Cell Type).
The pathway content at WikiPathways is freely available for download in several data and image formats. WikiPathways is completely open access and open source. All content is available under Creative Commons 0. All source code for WikiPathways and the PathVisio editor is available under the Apache License, Version 2.0.
Access and integration
In addition to various primary data formats (e.g. GPML, BioPAX, Reactome, KEGG, and RDF), WikiPathways supports a variety of ways to integrate and interact with pathway content. These include directed link-outs, image maps, RSS feeds and deep web services. This enables reuse in projects like COVID19 Disease Map.
WikiPathways content is used to annotate and cross-link Wikipedia articles covering various genes, proteins, metabolites and pathways. Here are a few examples:
Citric acid cycle § Interactive pathway map
Articles that link to Citric acid cycle template
:Category:WikiPathways templates
See also
Reactome
KEGG
GenMAPP
PathVisio
Genenetwork
Cytoscape
BioPAX
References
External links
Biological databases
Molecular biology
Systems biology
Online databases | WikiPathways | Chemistry,Biology | 635 |
20,585,429 | https://en.wikipedia.org/wiki/DVTk | DVTk is an open-source project for testing, validating and diagnosing communication protocols and scenarios in medical environments. It supports DICOM, HL7 and IHE integration profiles.
History
The history of DVTk goes back to 1997. Within the ARC group (Architecture Re-use and Communications) of Philips, the first version of the Validation Test Suite (VTS) was developed. This was a DICOM Validation tool with a command line interface. Based on this, later on, the ADVT (Agfa DICOM Validation Tool) was created. This was the first tool with a graphical user interface that made DICOM validation more pleasant.
Because Philips and Agfa wanted to join forces, their cooperation was formalized in 2001, and the first version of DVT (1.2) was released in 2002.
After major redesign and improvements, DVT 2.1 was released in June 2005. This version was the transition of the application from the Philips-AGFA cooperation project to the open-source community.
In 2006, ICT Automatisering joined the DVTk open-source project as the third participating company next to Philips and Agfa.
In 2007, DVTk started participating in the IHE Gazelle project by supplying an External Validation Service (DICOM Validation Web Service).
In 2008, services for the DVTk open-source project were introduced.
DVTk based DICOM application
DICOM Anonymizer
DICOM Compare
DICOM Editor
DICOM Network Analyzer
DICOM Viewer and Validator
DVT
Modality Emulator
Query Retrieve SCP Emulator
RIS Emulator
Storage SCP Emulator
Storage SCU Emulator
External links
Main page of the DVTk website
Download page of DVTk based DICOM applications
The DVTk library and DVTk based applications hosted on SourceForge
Article "Mastering DICOM with DVTk" on Journal of Digital Imaging
Medical imaging | DVTk | Technology | 408 |
45,254,276 | https://en.wikipedia.org/wiki/GL%20Virginis | GL Virginis, also known as G 12-30, is a star in the constellation of Virgo. It is a faint red dwarf, like more than 70% of the stars located within 10 parsecs of the Solar System; its visual magnitude is 13.898, making it impossible to see with the naked eye.
Located 21.1 light years away, GL Virginis has a spectral type of M4.5V and an effective temperature of approximately 3110 K. Its luminosity (emitted in the visible section of the electromagnetic spectrum) is only one ten-thousandth that of the Sun; however, since a significant fraction of its radiation is emitted as invisible infrared light, its bolometric luminosity increases to 0.5% of that of the Sun. Its mass is 12% that of the Sun and its radius is 16% of the Sun's. It is a fairly rapid rotator: its rotational velocity is at least 17 km/s, which implies that it takes less than half a day to complete a rotation on its axis. The star emits frequent flares, with at least five detected by 2019.
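The rotation-period claim follows from the quoted radius and velocity; a quick back-of-the-envelope check (using the nominal solar radius, a value not stated in the article) might look like this:
import math

R_SUN_KM = 6.957e5                      # nominal solar radius in km
radius_km = 0.16 * R_SUN_KM             # 16% of the Sun's radius
v_min_km_s = 17.0                       # minimum measured rotational velocity

period_hours = 2 * math.pi * radius_km / v_min_km_s / 3600
print(f"{period_hours:.1f} h")          # about 11 h, i.e. less than half a day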
The closest known star system to GL Virginis is Gliese 486, 6.4 light-years away.
References
Virgo (constellation)
M-type main-sequence stars
1156
Virginis, GL
J12185939+1107338 | GL Virginis | Astronomy | 294 |
53,971,317 | https://en.wikipedia.org/wiki/Sexpionage | Sexpionage is the involvement of sexual activity (or the possibility of sexual activity), intimacy, romance, or seduction to conduct espionage. Sex, or the possibility of sex, can function as a distraction, incentive, cover story, or unintended part of any intelligence operation.
In the Soviet Union, female agents assigned to use such tactics were referred to as swallows, while male ones were known as ravens. A commonly known type of sexpionage is a honey trap operation, which is designed to compromise an opponent sexually to elicit information from that person.
Sexpionage is a historically documented phenomenon, though a book review published by a CIA publication in 2008 noted that the three English-language books about it suffered from errors of fact and lack of documentation.
Homosexual entrapment with the NSA
Discrimination and cultural attitudes toward homosexuals have pressured them into spying or not spying for a certain entity, sometimes with drastic consequences. For example, Admiral Bobby Ray Inman, former director of the NSA, decided to not fire openly gay employees in exchange for each employee's written promise not to give in to blackmail and that each gay employee would inform his family, eliminating any further potential for blackmail. This was a serious issue, as two NSA analysts defected to Moscow in 1960 following a purge of homosexuals from the agency.
Soviet and Russian examples
Yakov Agranov, deputy of the NKVD, known as one of main organizers of Soviet political repressions and Stalinist show trials in 1920s and 1930s, was responsible for sex spy operations among creative-class intelligentsia. He used Bolshoi ballerinas, as well as cinema and theater actresses. Agranov created a school named the Lenin Technical School (Ленинская техническая школа). The school was opened in 1931 by Vyacheslav Menzhinsky, who was the head of the Joint State Political Directorate. According to legend, Richard Sorge and Nikolai Kuznetsov studied at a Moscow Sexpionage school.
Kazan, Tatarstan sexpionage school
According to former CIA officer Jason Matthews, the Soviet Union had a sexpionage school called "State School 4" in Kazan, Tatarstan, southeast of Moscow, on the banks of the Volga river. The school trained female agents to be "swallows". This school was depicted in Matthews' 2013 novel Red Sparrow. In 2018, a film of the same name was adapted from it.
Matthews believed the Kazan school has been closed, but that Russia now uses independent contractors as honey traps. Matthews has said, "If a human target with access to classified information went to Moscow [today], he’d probably see a modern-day Swallow at one of the bars of the five-star hotels in Moscow."
Specific examples
In a 2015 lecture, former CIA officer Jonna Mendez explained how Czechoslovakian husband and wife KGB spies Karl and Hana Koecher used sex to infiltrate the CIA and gather top-secret information. One popular Washington, D.C., swinger’s club frequented by the couple counted at least 10 CIA staffers and a United States senator as members.
In 2018, Mendez told The New York Times that an American Marine stationed at the American embassy in Moscow was seduced by a swallow, and subsequently allowed Russian agents onto the property. Mendez said China and other countries also had such programs.
In 1963, the playwright and screenwriter Yuri Krotkov defected to the West. He revealed that he had been told by the KGB to seek out attractive young women who could be used to seduce men. He would recruit actresses while doing film work, promising better film roles, money and clothes.
Trapped targets during the Soviet Union period included:
Sukarno, President of Indonesia;
, French ambassador in the 1950s;
Clayton J. Lonetree, a Marine guarding the US embassy;
Roy Guindon, a Canadian diplomat;
Col. Louis Guibaud, a French military attache who committed suicide;
Jeremy Wolfenden, a homosexual British journalist in Moscow in the early 1960s;
John Watkins, homosexual Canadian ambassador in Moscow in 1954;
Geoffrey Harrison, British ambassador;
U.S. Army Major James R. Holbrook;
William John Vassall, a homosexual British navy clerk;
British MP Anthony Courtney;
The Washington Post reported in 1987 that "most westerners who have spent any length of time in Moscow have their favorite tale of an attempted seduction by a KGB swallow or raven."
Ghislaine Maxwell's father, Robert Maxwell, was engaged with both MI6 and the KGB. Ghislaine Maxwell was later alleged to have been involved with a group of young women who were, perhaps, over the UK age of consent but under the age of consent in the USA. Whether coincidental or not, the interaction between these facts and the timing of legal action against Queen Elizabeth II's son Prince Andrew, close to the Queen's Platinum Jubilee and to the death of her husband, Prince Philip, remains a subject of significant debate.
East German spies
Spies for East Germany were called "Romeos" created by Markus Wolf, the former head of East Germany's foreign intelligence service the Stasi. Around 40 women were prosecuted for espionage in the Federal Republic of Germany.
Notable people and events
Kursk Nightingale – Russia
Nadezhda Plevitskaya, a former opera singer known as the "Kursk Nightingale" before the Russian Civil War, found herself living without her former luxuries following the Bolshevik Revolution. The Cheka recruited Plevitskaya through her lust for money. "Traveling throughout the white-held areas, she entertained the troops at free concerts, at the same time ingratiating herself with anti-Bolshevik leaders who had long admired the 'Kursk Nightingale.' In the process, she began to collect interesting intelligence tidbits from some of the more indiscreet Whites (including those she slept with to pry even more information)." However, the Whites intercepted some of her messages to the Cheka, captured Plevitskaya, and ordered her to be executed by firing squad. Nikolai Skoblin, then a young White cavalry officer and a megalomaniac obsessed with the idea of recreating the "Holy Russia", a mythical land that existed before the time of the Tsars, saw Plevitskaya refuse a blindfold before her execution. Motivated by her beauty and courage, Skoblin rode up, ordered the firing squad not to fire, and released her in his custody. The Cheka then used Plevitskaya to recruit Skoblin, and the two married (with Nadezhda's then-husband understandingly serving as best man at the wedding) and moved to Paris, working for the Cheka among the Russian Exile Movement.
Cynthia – Britain
Amy Thorpe Pack was an American who married a senior British diplomat and began extramarital affairs upon finding her marriage passionless. She volunteered her services to MI6 while living with her husband in Warsaw in 1937. In Warsaw, she seduced a Polish Foreign Ministry Official eliciting from him Poland's plans regarding how to deal with Hitler and Stalin. Following this, she learned from another Polish official that some Polish mathematicians had started cracking the German Enigma Ciphers. Later, in Czechoslovakia, she discovered the German plans to invade Czechoslovakia. After a colorless stint of boredom at a posting in Santiago, Chile, Pack separated from her husband and went to New York City in 1941, when William Stephenson, then an MI6 chief of station, contacted her and asked her to infiltrate embassies in Washington, D.C. Realizing her motivation was a lust for danger and excitement, Stephenson gave her the code name Cynthia, after a long-lost love. Pack then seduced the chief of station for Italian military intelligence and acquired the Italian navy cipher. Beginning in early 1942, Pack posed as a pro-Vichy journalist and got Charles Brousse, the Vichy French embassy's press attaché and a Vichy politician, to fall in love with her and agree to work as an OSS asset. In a near six-hour night burglary operation, Pack and Brousse let an OSS safecracker into the embassy to carry away the Vichy code books for photographing, and at one point Pack undressed to cover for the operation by deceiving a suspicious night guard. After the operation for the Vichy codes, Pack retired from espionage because she fell in love with Brousse.
Commander Courtney Affair – Soviet Union
Commander Anthony Courtney was a "tough and opinionated former naval officer and Member of Parliament who denounced the government of the day and the Foreign Office for softness in permitting Soviet and Iron Curtain diplomats to abuse their privileges for espionage purposes." The Commander spoke fluent Russian, and in 1961 he went to bed with his Intourist guide, Zinaida Grigorievna Volkova, who was in fact a regular KGB seductress, and KGB photographers captured their intimate activity. The KGB tried to blackmail Courtney into ending his Parliamentary tirades, though he refused; and they circulated the pictures to other members of Parliament and business associates. Furthermore, Private Eye, a London satirical journal, obtained the photos and published them. Courtney lost his seat in the following election.
Ambassador Dejean Affair – Soviet Union
Maurice Dejean, the former French ambassador to the Soviet Union and an old friend of President De Gaulle with close connections to him, had a fondness for women. The KGB took advantage of this and set up Dejean first with Lydia Khovanskaya, a divorcee who spoke French, and later Larisa Kronberg-Sobolevskaya, an actress. While Dejean was with Kronberg-Sobolevskaya, her pretend husband returned home, as staged, from a geological expedition in Siberia, and beat Dejean, but allowed him to leave upon Larisa's pleading. Dejean went to a Soviet friend, who unbeknownst to him worked for the KGB, to quiet the affair. The Soviets took no immediate action, preferring to hold the operation in reserve as leverage to keep the French ambassador within their sway. Similar KGB honey traps on Dejean's wife, Marie-Claire, were unsuccessful. President De Gaulle and the French found out about the affair from British intelligence, who in turn learned of it from Yuri Krotkov, a defector. Krotkov defected in 1963 after a French Air Force attaché, Colonel Louis Guibard, shot himself when the KGB showed him pictures they took of his affair with a Russian woman and presented him with the choice of either exposure or collaboration.
Sir Geoffrey and Galya – Soviet Union
Sir Geoffrey Harrison, British Ambassador to Moscow, was the target of a KGB blackmail attempt in 1968, when they placed an attractive maid named Galya in the diplomatic mission. Sir Geoffrey fell for the honey trap, and Galya told him that pictures had been taken and that he would be exposed unless he provided information to the KGB. The scandal broke, but Sir Geoffrey had no action taken against him and he retired on full pension.
KGB break-in at Swedish Embassy in Moscow – Soviet Union
Yuri Nosenko, a Soviet defector to the West, detailed the use of a honey trap when the KGB launched a night operation to raid the Swedish Embassy in Moscow with a twelve-strong crew of safe-pickers and break-in experts. According to Nosenko, a female KGB seductress lured away the embassy's night watchman, and another agent distracted a guard dog by feeding it meat.
Donald Maclean – Soviet Union
Donald Duart Maclean was a British diplomat who spied for the Soviet Union mostly out of love for it, and he never received pay, although he did receive a KGB pension. However, to make sure that Maclean would not so easily double-cross the Soviets, they had Guy Burgess, another British homosexual spying for the Soviets, take photos of Maclean in bed with another man during an orgy.
William Vassall – Soviet Union
William John Vassall was an openly gay man who boasted that men said he had "come to bed eyes", and in 1954, as a clerk in the office of the British naval attaché, Vassall went to Moscow. A Polish clerk from the embassy brought Vassall to a party with much alcohol, and he became involved in homosexual activity. Soon, Vassall had been blackmailed and was stealing classified information for the Soviets.
American use
Former Assistant FBI Director William C. Sullivan, in testimony before the Church Committee on November 1, 1975, stated: "The use of sex is a common practice among intelligence services all over the world. This is a tough, dirty business. We have used that technique against the Soviets. They have used it against us."
Aleksandr Ogorodnik, an official in the Russian Ministry of Foreign Affairs' planning department, was codenamed TRIGON by the Central Intelligence Agency. He dated a Spanish woman who had been recruited by the CIA. In 1973, she persuaded him to supply the CIA with information.
British Undercover Police use
Around the end of 2010 and during 2011, it was disclosed in UK media that a number of undercover police officers had, as part of their 'false persona', entered into intimate relationships with members of targeted groups and in some cases proposed marriage or fathered children with protesters who were unaware that their partner was a police officer acting in an official undercover role. In the majority of publicly reported cases, these police officers were male "ravens".
Chinese use
Beginning with his time as a Dublin, California, city councilor, Eric Swalwell was targeted by a Chinese woman believed to be a clandestine officer of China's Ministry of State Security. The FBI gave Swalwell a "defensive briefing" in 2015, informing him that Christine Fang was a suspected Chinese agent. Fang also engaged in sexual or romantic relationships with two midwestern city mayors. In the media, Swalwell's relationship with Fang has been characterized as problematic, particularly given his high-profile role within the intelligence community as a member of the House Intelligence Committee.
Spies mistaken as ravens
A male spy with a promiscuous lifestyle is not necessarily a professional raven. For example, Duško Popov was a double agent working for MI5 and feeding information to the Abwehr in World War II. He came from a moderately wealthy Yugoslavian family, and had a taste for expensive restaurants, women, and nightclubs. MI5 code-named him TRICYCLE because he was the head of a group of three double-agents. Despite being seen as an inspiration for James Bond, he was not a raven, but instead used supposed commercial connections to feed faked intelligence to the Nazis.
Agent falling for their mission partner
Sex or intimacy during espionage can also occur when an agent falls for his or her mission partner. In one example, the Israeli "champagne spy" Wolfgang Lotz, who posed as a former Afrika Korps veteran and embedded himself deep in German social circles in Egypt prior to the Six-Day War, fell in love with his fake "German" wife, who converted to Judaism. Lotz divorced his real wife, who was Israeli, for his partner.
In popular culture
Most variations of the Black Widow in Marvel Comics are fictional characters depicted as swallows, deliberately based on the Russian program.
James Bond is a fictional character depicted as a raven; his parodical counterpart Austin Powers also uses sexpionage to elicit information. It is also suggested that each of those characters was repeatedly targeted by overseas agencies using sexpionage. Examples for James Bond include the text of Ian Fleming's 1957 novel From Russia, with Love (somewhat different from the film of the same name, particularly in the character Tatiana Romanova) and the 1995 film GoldenEye (with no link to an Ian Fleming book, particularly the character Xenia Onatopp).
A 1987 espionage-themed American pornographic film featuring Dana Dylan, Rachel Ashley, and Britt Morgan was titled "Sexpionage".
In the 2014 film The Interview, use of a swallow is somewhat colloquially referred to as "honeypotting", and use of a raven is referred to as "honeydicking".
The 2018 film Red Sparrow shows a modern version of sexpionage.
The 2021 Indian action espionage thriller streaming television series Special Ops 1.5: The Himmat Story shows honey trapping by trained and contract based swallows.
See also
Recruitment of spies § Love, honeypots, and recruitment
LOVEINT
History of espionage
Sextortion
References
Further reading
Inna Svechenovskaya, Sex and Soviet Espionage. Olma-Press, 2002.
The A to Z of Sexpionage
Espionage techniques
Counterintelligence
Types of espionage
Sexuality | Sexpionage | Biology | 3,419 |
47,391,798 | https://en.wikipedia.org/wiki/Sensibo | Sensibo is a manufacturer of air conditioning controllers.
History
Sensibo was founded in November 2013 by Omer Enbar and Ran Roth. It is headquartered in Redwood City, California.
The company's mission was to bring about a change in the way people interact with their climate control systems. Recognizing the increasing demand for smart home technology and the potential for energy savings, Sensibo introduced its flagship product, the Sensibo Sky, which allowed users to make their old remote-controlled air conditioners smart.
The idea originated around 2004 when Omer Enbar had built a personal control system to activate his air conditioner via email prior to biking home from work. The system connected an IR blaster to a laptop that would send a signal to the AC every time he sent an email with the title "AC on" or "AC off".
During 2013, Omer Enbar and Ran Roth manually built and deployed several prototypes to friends and family. They later proceeded to found the company.
Products and Services
The Sensibo product line primarily focuses on smart controllers that connect air conditioners and heat pumps to the internet. These controllers enable users to control their devices remotely through a mobile app, set timers, and use geofencing features. Additionally, Sensibo products integrate with other smart home systems and voice assistants, allowing for a seamless user experience.
Sensibo Sky: Connects to the user's air conditioner via infrared, allows remote control through the Sensibo app, and offers additional features such as "climate react", which adjusts the air conditioner settings based on external weather conditions (a minimal sketch of such a rule appears below the product list).
Sensibo Air: An advanced version of the Sensibo Sky, the Sensibo Air offers additional features such as room presence sensors and HomeKit integration.
Sensibo Air Pro: Builds on the Sensibo Air and adds an air quality sensor.
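The behaviour of a rule like "climate react" can be sketched in a few lines of code. The thresholds, target settings, and function names below are hypothetical illustrations and are not taken from Sensibo's actual app or API:

# Minimal sketch of a "climate react"-style rule: choose an AC state from
# outdoor conditions. All names and thresholds here are hypothetical.

def climate_react(outdoor_temp_c: float, outdoor_humidity: float) -> dict:
    """Return a target AC state for the given outdoor conditions."""
    if outdoor_temp_c >= 30.0:
        return {"power": "on", "mode": "cool", "target_c": 23}
    if outdoor_temp_c <= 5.0:
        return {"power": "on", "mode": "heat", "target_c": 21}
    if outdoor_humidity >= 80.0:
        return {"power": "on", "mode": "dry", "target_c": 24}
    return {"power": "off", "mode": None, "target_c": None}

print(climate_react(32.0, 40.0))  # {'power': 'on', 'mode': 'cool', 'target_c': 23}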
Crowdfunding and video
In May 2014, Sensibo launched a successful crowdfunding campaign on Indiegogo. The campaign had raised $165,000 by July 20, 2014. Its campaign video was later selected by Indiegogo as the funniest pitch video of 2014. The video, created by Tross Media and starring Michael Harpaz, has often been compared to Dollar Shave Club and the TV series House of Cards.
Product launches
Sensibo delivered on its crowdfunding campaign during the summer of 2015, shipping thousands of units worldwide, according to the company. In May 2015, Sensibo launched an IFTTT channel, allowing its system to interface with other apps and devices. The devices are being distributed in many countries.
In January 2017 the company launched its 2nd generation device, Sensibo Sky.
Features include 7-day scheduling, location-based on/off, multiple users controlling a single device, and integration with Amazon Echo and IFTTT.
References
Home automation companies
Electronics companies established in 2013
2013 establishments in California | Sensibo | Technology | 587 |
44,306,711 | https://en.wikipedia.org/wiki/Dendrochytridium | Dendrochytridium is a fungal genus in the order Chytridiales. The genus is monotypic, containing the single saprobic species Dendrochytridium crassum, isolated from detritus collected from an Australian tree canopy. Both the genus and species were described as new to science in 2013. Phylogenetically, Dendrochytridium crassum groups together in a clade with other fungi possessing Group II-type zoospores. These fungi, which include representatives from the genera Chytridium, Phlyctochytrium, Chytriomyces, and Polyphlyctis, are classified in the family Chytridiaceae.
The generic name combines dendro (derived from Greek, meaning "tree"), which refers to the origin of the first collection, and Chytridium, the type genus of the order Chytridiales. The specific epithet crassum is Latin for "broad", and refers to the broad rhizoids.
References
External links
Chytridiomycota genera
Monotypic fungus genera | Dendrochytridium | Biology | 222 |
5,691,144 | https://en.wikipedia.org/wiki/Sludge%20%28comics%29 | Sludge is a comic book series from Malibu Comics, set in the Ultraverse. It was created by Steve Gerber, Gary Martin and Aaron Lopresti. It depicted a dirty cop called Frank Hoag who was killed by the local mafia and was transformed after his death into a superpowered and viscous creature, called Sludge.
Publication history
Sludge made his first appearance in Sludge #1, dated October 1993, written by Steve Gerber and illustrated by Aaron Lopresti. As part of the Ultraverse imprint, the comic was set within a shared universe of super-powered beings conceptualized by writers and artists of Malibu comics. Sludge ran for only twelve issues, with one special: Sludge: Red X-Mas. A second special, Sludge: Swamp of Souls, was planned but never completed. Sludge also appeared in other Ultraverse books. After the Black September event, Sludge appeared in the first two issues of Foxfire (1996).
Character history
Frank Hoag was an experienced but corrupt NYPD detective who finally decided to take action when he was asked by his mob bosses (John Paul Marcello and Vittorio Sabatini) to kill a fellow dirty cop. When he refused, his own murder was ordered; he was killed by a hail of bullets as well as a bomb. The explosion covered him with chemicals, which combined with the sewage where the mobsters dumped his body. The chemicals had regenerative properties and tried to heal Hoag, but fused the sewer substances with his body, transforming him into a huge mass of living slime. He awakened with a raging anger against criminals and an inability to think and speak coherently, with many words coming out replaced by ones that sound only vaguely similar, such as 'munch' instead of 'mutual'. There was a connection between the chemicals that transformed Frank Hoag into Sludge and the research of Dr. Gross, who conducted the experiments that allowed Kevin Green to transform into Prime. One of Sludge's allies was Chas, a blind homeless man who sold newspapers. Chas did not comprehend that Frank had transformed; he only thought Frank had gained an 'underwater voice'. Frank took a newspaper from Chas, claiming to be good for it, and read about deaths in the sewers. Marcello hired an assassin called Bloodstorm to kill the creature, and Bloodstorm attacked Sludge with an explosive arrow.
Frank met Shelley Winters, a sensationalistic reporter, in the sewers. She was investigating the same case that interested Frank, and she discovered Veffir Voon Iyax, a humanoid, albino alligator-man. Veffir had killed the two people and many more. During the fight, Veffir claimed he was from another world, and that nobody who met him lived. Despite this, Sludge killed him in battle and demanded 35 cents from Winters, which he used to pay back Chas. Sludge also met the villain Lord Pumpkin, alias The Pump, who offered the creature a swift death if he obeyed him. The Pump was beginning a drug sales operation using a new drug called Zuke, which was extracted from a carnivorous plant from the Godwheel. Lord Pumpkin also had a young henchman known as Pistol. The Dragon Fang, a local Asian mafia, began a drug war against Lord Pumpkin, and Marcello joined them in the fight. Lord Pumpkin sent Sludge against Marcello, who met his death at the hands of the creature. Sludge also found that Zuke had the property of curing his body's condition, so he helped Pumpkin further. Vittorio Sabatini inherited the mafia and hired Bloodstorm again. The Pump and Sludge defeated the mercenary and drugged him with Zuke. The drugged Bloodstorm was sent against Sabatini and slaughtered the mafia, but the Dragon Fang began new attacks against Pumpkin's gang, killing many of his henchmen. They sent a new agent, a battle cyborg, against Pumpkin, destroying the candle that gave life to his body. Pistol took the Pumpkin head, hoping to revive the villain, but gave up after a time. Lord Pumpkin was resurrected in another book.
Powers and abilities
Sludge has tremendous strength and durability, as well as vast regenerative capabilities, allowing him to heal from near-fatal wounds in seconds. Submersion in water speeds up the process. He does not need food or air and is immune to most chemical toxins. Sludge can cause spontaneous tissue growth in others by touch.
Possibility of revival
In 2003, Steve Englehart was commissioned by Marvel to relaunch the Ultraverse with its most recognizable characters, including Sludge, but the editorial staff ultimately decided not to resurrect the Ultraverse imprint. In June 2005, when asked by Newsarama whether Marvel had any plans to revive the Ultraverse, Marvel editor-in-chief Joe Quesada replied that:
Appearances in other media
Sludge appears in the Ultraforce animated cartoon. In the series, he is an underling of Lord Pumpkin, as in the comics, forced into serving him due to his addiction to the Zuke drug that Pumpkin created, which restores him to human form. He sacrifices himself to stop a demon plant created by Pumpkin, helping Prototype (Jimmy Ruiz).
References
External links
Ultraverse
Malibu Comics characters
Malibu Comics titles
Fictional police detectives
Fictional monsters
Fictional superorganisms
Marvel Comics characters with accelerated healing
Marvel Comics characters with superhuman durability or invulnerability
Marvel Comics characters with superhuman strength
Marvel Comics male superheroes
Marvel Comics mutates
Characters created by Steve Gerber
Comics by Steve Gerber
Vigilante characters in comics
Comics about monsters | Sludge (comics) | Biology | 1,151 |
2,866,094 | https://en.wikipedia.org/wiki/Epsilon%20Aquilae | Epsilon Aquilae, Latinized from ε Aquilae, is the Bayer designation for a binary star system in the equatorial constellation of Aquila, near the western constellation boundary with Hercules. It has an apparent visual magnitude of 4.02 and is visible to the naked eye. Based upon an annual parallax of , Epsilon Aquilae lies at a distance of approximately from Earth, but is drifting closer with a radial velocity of –46 km/s.
It has the traditional name Deneb el Okab, from an Arabic term ذنب العقاب ðanab al-ʽuqāb "the tail of the eagle", and the Mandarin names Woo and Yuë, derived from and representing the states Wú (吳), an old state located at the mouth of the Yangtze River, and Yuè (越), an old state in Zhejiang province (together with 19 Capricorni in the Twelve States asterism). According to R.H. Allen's works, it shares these names with ζ Aquilae. Epsilon Aquilae is more precisely called Deneb el Okab Borealis, because it is situated to the north of Zeta Aquilae, which can therefore be called Deneb el Okab Australis.
The binary nature of this system was reported by German astronomer F. Kustner in 1914, but it was not confirmed until 1974. It is a single-lined spectroscopic binary system; the pair orbit each other over a period of 1,271 days (3.5 years) with an eccentricity (ovalness) of 0.27. There are two visual companions to Epsilon Aquilae, both reported by German astronomer R. Engelmann in 1887. Component B is a magnitude 10.56 star at an angular separation of along a position angle (PA) of 184° relative to the primary, as of 2014. At magnitude 11.25, component C is at a separation of with a PA of 159°, as of 2015.
The primary component of this system is an evolved giant star with a stellar classification of K1-IIICN0.5, showing a mild overabundance of the CN molecule in its spectrum. The chemical abundances of the star suggest it has gone through first dredge-up. It has more than double the mass of the Sun and has expanded to ten times the Sun's radius. The star shines with 54 times the Sun's luminosity, which is radiated from its outer envelope at an effective temperature of 4,760 K. At this heat, it glows with the orange hue of a K-type star. It has been designated a barium star, meaning its atmosphere is extremely enriched with barium and other heavy elements. However, this is disputed, with astronomer Andrew McWilliam (1990) finding normal abundances of s-process elements.
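The quoted luminosity, effective temperature, and radius can be checked against the Stefan–Boltzmann law, L = 4πR²σT⁴. The short calculation below is only a consistency sketch using the rounded figures in this article together with standard solar values; it recovers a radius of roughly ten solar radii:

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

L = 54 * L_SUN           # luminosity quoted in the article
T_eff = 4760.0           # effective temperature quoted in the article, K

# Stefan-Boltzmann law L = 4*pi*R^2*sigma*T^4, solved for R
R = math.sqrt(L / (4 * math.pi * SIGMA * T_eff**4))
print(f"Implied radius: {R / R_SUN:.1f} solar radii")  # about 10.8, consistent with "ten times"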
References
External links
Deneb el Okab Borealis by Jim Kaler
Image Epsilon Aquilae
K-type giants
Barium stars
Astrometric binaries
Spectroscopic binaries
Deneb el Okab
Aquila (constellation)
Aquilae, Epsilon
BD+14 3736
Aquilae, 13
176411
093244
7176 | Epsilon Aquilae | Astronomy | 663 |
9,278,355 | https://en.wikipedia.org/wiki/Chsh | chsh (an abbreviation of "change shell") is a command on Unix-like operating systems that is used to change a login shell. Users can either supply the pathname of the shell that they wish to change to on the command line, or supply no arguments, in which case chsh allows the user to change the shell interactively.
Usage
chsh is a setuid program that modifies the /etc/passwd file, and only allows ordinary users to modify their own login shells. The superuser can modify the shells of other users, by supplying the name of the user whose shell is to be modified as a command-line argument. For security reasons, the shells that both ordinary users and the superuser can specify are limited by the contents of the /etc/shells file, with the pathname of the shell being required to be exactly as it appears in that file. (This security feature is alterable by re-compiling the source code for the chsh command with a different configuration option, and thus is not necessarily enabled on all systems.) The superuser can, however, also modify the password file directly, setting any user's shell to any executable file on the system without reference to /etc/shells and without using chsh.
On most systems, when chsh is invoked without the -s command-line option (which specifies the name of the shell), it prompts the user to select one. On Mac OS X, if invoked without the -s option, chsh displays a text file in the default editor (initially set to vim) allowing the user to change all of the features of their user account that they are permitted to change, the pathname of the shell being the name next to "Shell:". When the user quits vim, the changes made there are transferred to the /etc/passwd file, which only root can change directly.
Using the -s option to supply the new shell directly greatly simplifies the task of changing shells.
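As a sketch of how the /etc/shells restriction interacts with the -s option, the following script validates a requested shell before shelling out to chsh. It assumes a Unix-like system where chsh honours /etc/shells as described above; the helper names are illustrative only:

# Sketch: check a requested login shell against /etc/shells, then invoke "chsh -s".
import subprocess
import sys

def valid_shells(path="/etc/shells"):
    """Return the list of permitted shell pathnames, ignoring blank and comment lines."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

def change_shell(shell):
    allowed = valid_shells()
    if shell not in allowed:  # chsh requires an exact match against /etc/shells
        sys.exit(f"{shell} is not listed in /etc/shells: {allowed}")
    # chsh may still prompt for the user's password, depending on the system.
    subprocess.run(["chsh", "-s", shell], check=True)

if __name__ == "__main__":
    change_shell(sys.argv[1] if len(sys.argv) > 1 else "/bin/sh")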
Depending on the system, chsh may or may not prompt the user for a password before changing the shell or entering interactive mode. On some systems, use of chsh by non-root users is disabled entirely by the sysadmin.
On many Linux distributions, the chsh command is a PAM-aware application. As such, its behaviour can be tailored, using PAM configuration options, for individual users. For example, an account directive that specifies the pam_listfile module can be used to deny access to individual users, by naming a file that lists the usernames to be denied with that module's file option (along with the sense=deny option).
Portability
POSIX does not describe utilities such as chsh, which are used for modifying the user's entry in /etc/passwd. Most Unix-like systems provide chsh. SVr4-based systems provided a similar capability with passwd. Two of the three remaining systems (IBM AIX and HP-UX) provide chsh in addition to passwd. The exception is Solaris, where non-administrators are unable to change their shell unless a network name server such as NIS or NIS+ is installed. The obsolete SGI SVr4 system IRIX64 also lacked chsh.
See also
Comparison of command shells
References
External links
Unix user management and support-related utilities
Standard Unix programs | Chsh | Technology | 649 |
60,466,822 | https://en.wikipedia.org/wiki/Demitoilet | Demi-toilet refers to a style of dresses based on a small skirt that can be worn on formal occasions or in daily life. It is different from full dresses, such as evening dresses, dress suits, and wedding dresses due to having a skirt length of five centimetres below the knee. The demi-toilet style is suitable for a simple, casual look. For example, lace could be added to the bottom of the dress when attending banquets or decorated with flowers for an afternoon tea. The dress is lightweight and comfortable, and it is suitable for many ceremonial occasions, including cocktail parties, birthday parties, business meetings, dates, vacations, and weddings. The style used to serve as a symbol of reputation when well-designed dresses and luxurious accessories directly represented one's elevated status and authority.
Evolution
In the 1920s, the demi-toilet first appeared at wine tastings. At that time, the dress was described as a skirt. It was a fashionable daytime outfit with a long, loose, gem-coloured skirt falling straight to the knee, paired with a small cap, gloves and a small handbag. In 1926, Coco Chanel first displayed this skirt in Vogue and caught public attention with a black dress. This dress had only a small amount of diagonal decoration. Vogue magazine called it Chanel's "Ford T model car" and predicted the prototype of this modest dress would become essential clothing for all women.
At the end of the 1920s, the demi-toilet dress was still modelled after the daytime outfit, yet it had begun to enter its evolution.
During the 1930s and 1940s, all-day dresses became popular, and the demi-toilet reappeared. The dress was most often black, but other colours were also available. Silk and satin were the most commonly used fabrics, and the length was generally only to the knee. The rise of black dresses was mainly due to their straightforward appearance, making it a good office dress. Black dresses were also frequently embellished with various accessories, such as brooches and sequins. The dress was suitable enough to appear in evening activities like dance and cocktail parties. Moreover, as they were limited to the black-and-white photography techniques of the time, female stars usually wore black dresses in the movies to avoid the colour distortion of the dress in the film imaging. All of these factors enabled the demi-toilet dress to quickly become a symbol of women's freedom of movement, mind, and body at that time, giving people a carefree feeling.
After World War II, the return of soldiers from foreign countries caused the clothing of other nationalities and different fabrics to enter into Western society. The fashion scene changed considerably. The style of dress became more open, the length of the dress became shorter, the neckline became lower, the body was tighter, and the sleeves were removed. The post-war dresses became more eye-catching and glamorous with more reflective sequins and shiny embroidery. The mindset at the time was that a shiny dress was better.
In the late 1940s, Christian Dior introduced the word 'demi-toilet' to describe the dress. A slightly exaggerated hat, along with long elbow gloves, a small chain bag with a flash powder box, and shoes that matched the colour of the handbag were the most popular style at cocktail parties. This tradition continued into the 1960s.
In the 1960s, the colour became relatively lighter, and pastel, silver, and gold replaced black as the main colour of the dress. The decorations on the dress were reduced. Many people stayed home at night due to the decreasing popularity of cocktail parties, and by the end of the 1960s, more women started to wear a simple dress in daily life. Home-made dresses started to replace demi-toilet dresses with fancy decorations. However, the design of the dresses still followed the trend of the time.
In the 1970s, the casual style was born, and loose jumpsuits and pants replaced the dresses.
In the 1980s, after the casual trend, the demi-toilet once again became popular, usually with satin as the fabric and decorated with lace.
In the 1990s, the demi-toilet style increased in popularity. People started to wear it on formal occasions. For example, some Hollywood actresses wore it on the red carpet, and leaders wore it to give speeches to the public. Since then, fashion designers of various brands had re-launched a collection of demi-toilet dresses that had been ignored for several decades.
Modern characteristics
Today, demi-toilet dresses are a must-have for many fashion brands. Significant changes in the style of the dress through history have evolved into a variety of styles that are emerging today. These kinds of dresses lead the fashion trend and allow girls to have many different dresses. The demi-toilet brings women not only noble temperament and elegant femininity, but also a symbol of taste and status, which is popular among women.
Nowadays, the style of the demi-toilet is very diverse. Some styles include court retro, ethnic style, rock style, and civilian fashion. These styles are all novel and unique. The type of skirt on the dress also varies widely, with suspender skirts, slanted skirts, mini skirts, fishtail skirts, and pleated skirts all making appearances, along with many others. Materials such as chiffon, cotton, lace, silk, wool, linen, satin, denim, and leather may be used.
Style categories
Princess pattern
The princess dress has many different styles, such as a tube top and a sling. These dresses are closely fitted to the waistline, which is unbroken by a seam. The dress gets its name from its resemblance to the stereotypical dress of a princess. Layers of chiffon and pettiskirts are the main characteristics. The high-waist design and fashionable details of the princess dress fit the proportions of many girls' bodies, especially petite and thin women. Nowadays, to keep simplicity, the princess dress eliminates the cumbersome feeling of layered skirts and brings a sense of ease and romance. Different necklines are also popular, giving a full-bodied bride a choice between a deep neckline or V-neck to make the neck look slimmer.
Crinoline pattern
This type of dress is representative of decent elegance and strays from fashion trends. A tight waist, proper upper body, and cotton dress lining are the main characteristics. The exquisite design of the crinoline dress reflects the feeling of the nobility, and the large skirt makes women seem more solemn. In the British royal family, some dresses have had a history of several hundred years and were passed down from generation to generation. Many may even be treated as cultural relics.
Personal pattern
The personal dress is often simple, using draping fabric and a narrow design tailored to the body curve. Among all the patterns, it can best reflect the body shape of women and the modern style of cutting. The personal pattern dress may be the most similar to the types of skirts people wear today. It has many characteristics dependent on different designs, such as highlighting the neckline and adding glamour, femininity, and elongating the proportions of the upper body and legs. Unique tailoring methods make women high and thin visually and enable them to show their figures more confidently.
Colour
Beige
The beige demi-toilet is a perfect interpretation of classical and modern, considered to be elegant and fashionable as well as holding aristocratic traits. In addition, beige can give others a gentle and benignant feeling. This kind of dress is excellently designed, with attention to detail, unique materials, and high quality. Its design concept incorporates neoclassicism and expresses noble tastes with simplicity.
Black
Black is probably the most suitable colour for all to wear on every occasion. It uses a simple and elegant style to convey feelings of solemnity, stability, seriousness, loneliness and seniority. The black demi-toilet presents a charming style: noble, self-cultivating, capable and generous.
Blue
Blue demi-toilets give an impression of nobility and maturity. Blue is often considered a universal colour and may represent a calm demeanour. At the same time, the many shades of blue allow these dresses to take on several other meanings, such as vitality or success.
Green
Green dresses can give off a mysterious, confident air. These dresses are often elegant and somewhat bold, as green is an eye-catching colour. However, green is also considered to be a neutral colour, making it popular among those who want to wear a unique dress without being too distracting.
Pink
Pink is the most popular dress colour among girls and teenagers. It represents a cute, gentle, innocent, elegant and noble demeanour. Deep pink shows kindness and gratitude, while light pink shows a more tender and beautiful image. Pink also reminds many of love, which contributes to its popularity among younger girls.
Purple
After 1862, when Queen Victoria wore a silk gown dyed with purple to the Royal Exhibition, the purple dress quickly became fashionable. However, at that time, only the aristocracy and the wealthy could afford to wear purple clothing because of the expensive dye. Purple is a mysterious and impressive dress colour. It may give a sense of oppression or inspiration. Dark purple dresses give a sense of terror, whereas dresses in light purple can leave others with an impression of a calm and modest individual. The vast differences between shades make purple a very unique dress colour.
White
When first meeting someone, a white demi-toilet may not be the preferred choice, as it may convey an impression of indifference. However, some specialists think that white has a calming effect on irritability. A pure white dress is different from those that are a more ordinary shade of white. A high-quality, bright white dress can be elegant and eye-catching. It has smooth lines, a simple style, and no exaggerated natural shape. The sophisticated craftsmanship, decent cuts, and precious fabrics of the white demi-toilet bring charm to many women. With different neckline designs, this dress can give a stylish, positive, transcendental, and progressive image, reflecting that of a romantic lady.
Yellow
Yellow is a bright and light colour and shows clear, refreshed, innocent and friendly characteristics. A yellow dress might be chosen by a girl who is very optimistic about life, always looks forward, likes new things, and is eager to change her life frequently. Wearing yellow can also be a manifestation of spoiled psychology and a strong desire to win the favour of others. Yellow can represent frankness, straightforwardness and enterprise.
References
Dresses
Design history | Demitoilet | Engineering | 2,131 |
16,916,026 | https://en.wikipedia.org/wiki/Zinc%20smelting | Zinc smelting is the process of converting zinc concentrates (ores that contain zinc) into pure zinc. Zinc smelting has historically been more difficult than the smelting of other metals, e.g. iron, because in contrast, zinc has a low boiling point. At temperatures typically used for smelting metals, zinc is a gas that will escape from a furnace with the flue gas and be lost, unless specific measures are taken to prevent it.
The most common zinc concentrate processed is zinc sulfide, which is obtained by concentrating sphalerite using the froth flotation method. Secondary (recycled) zinc material, such as zinc oxide, is also processed with the zinc sulfide. Approximately 30% of all zinc produced is from recycled sources.
Methods
There are two methods of smelting zinc: the pyrometallurgical process and the electrolysis process. Both methods are still used. Both of these processes share the same first step: roasting.
Roasting
Roasting is a process of oxidizing zinc sulfide concentrates at high temperatures into an impure zinc oxide, called "Zinc Calcine". The chemical reactions that take place are as follows:
2ZnS + 3O2 -> 2ZnO + 2SO2
2SO2 + O2 -> 2SO3
Approximately 90% of zinc in concentrates is oxidized to zinc oxide. However, at roasting temperatures, around 10% of the zinc reacts with the iron impurities of the zinc sulfide concentrates to form zinc ferrite. A byproduct of roasting is sulfur dioxide, which is further processed into sulfuric acid, a commodity. The linked refinery flow sheet shows a schematic of Noranda's eastern Canadian zinc roasting operation.
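The roasting equations above fix the mass relationships between concentrate, calcine, and sulfur dioxide. The sketch below estimates the zinc oxide and sulfur dioxide obtained from one tonne of zinc sulfide concentrate, assuming for illustration a pure ZnS feed and the roughly 90% oxidation to ZnO mentioned above:

# Stoichiometry of 2 ZnS + 3 O2 -> 2 ZnO + 2 SO2 (1 mol ZnS gives 1 mol ZnO and 1 mol SO2).
M_ZnS, M_ZnO, M_SO2 = 97.45, 81.38, 64.07   # molar masses, g/mol

feed_kg = 1000.0                  # one tonne of ZnS concentrate (assumed pure, for illustration)
mol_ZnS = feed_kg * 1000 / M_ZnS  # moles of ZnS in the feed

oxidized = 0.90                   # article: ~90% of the zinc is oxidized to ZnO
zno_kg = oxidized * mol_ZnS * M_ZnO / 1000
so2_kg = mol_ZnS * M_SO2 / 1000   # essentially all of the sulfur leaves as SO2

print(f"ZnO produced: ~{zno_kg:.0f} kg, SO2 produced: ~{so2_kg:.0f} kg")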
The process of roasting varies based on the type of roaster used. There are three types of roasters: multiple-hearth, suspension, and fluidized-bed.
Multiple-hearth roaster
In a multiple-hearth roaster, the concentrate drops through a series of 9 or more hearths stacked inside a brick-lined cylindrical column. As the feed concentrate drops through the furnace, it is first dried by the hot gases passing through the hearths and then oxidized to produce calcine. The reactions are slow and can be sustained only by the addition of fuel. Multiple hearth roasters are unpressurized and operate at about . Operating time depends upon the composition of concentrate and the amount of the sulfur removal required. Multiple hearth roasters have the capability of producing a high-purity calcine.
Suspension roaster
In a suspension roaster, the concentrates are blown into a combustion chamber very similar to that of a pulverized coal furnace. The roaster consists of a refractory-lined cylindrical steel shell, with a large combustion space at the top and 2 to 4 hearths in the lower portion, similar to those of a multiple hearth furnace. Additional grinding, beyond that required for a multiple hearth furnace, is normally required to ensure that heat transfer to the material is sufficiently rapid for the desulfurization and oxidation reactions to occur in the furnace chamber. Suspension roasters are unpressurized and operate at about .
Fluidized-bed roaster
In a fluidized-bed roaster, finely ground sulfide concentrates are suspended and oxidized in feedstock bed supported on an air column. As in the suspension roaster, the reaction rates for desulfurization are more rapid than in the older multiple-hearth processes. Fluidized-bed roasters operate under a pressure slightly lower than atmospheric and at temperatures averaging . In the fluidized-bed process, no additional fuel is required after ignition has been achieved. The major advantages of this roaster are greater throughput capacities, greater sulfur removal capabilities, and lower maintenance.
Electrolysis process
The electrolysis process, also known as the hydrometallurgical process, Roast-Leach-Electrowin (RLE) process, or electrolytic process, is more widely used than the pyrometallurgical processes.
The electrolysis process consists of 4 steps: leaching, purification, electrolysis, and melting and casting.
Leaching
The basic leaching chemical formula that drives this process is:
ZnO + SO3 -> ZnSO4
This is achieved in practice through a process called double leaching. The calcine is first leached in a neutral or slightly acidic solution (of sulfuric acid) in order to leach the zinc out of the zinc oxide. The remaining calcine is then leached in strong sulfuric acid to leach the rest of the zinc out of the zinc oxide and zinc ferrite. The result of this process is a solid and a liquid; the liquid contains the zinc and is often called leach product; the solid is called leach residue and contains precious metals (usually lead and silver) which are sold as a by-product. There is also iron in the leach product from the strong acid leach, which is removed in an intermediate step, in the form of goethite, jarosite, and haematite. There is still cadmium, copper, arsenic, antimony, cobalt, germanium, nickel, and thallium in the leach product. Therefore, it needs to be purified.
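Because the leach is carried out with sulfuric acid, the equation above corresponds to roughly one mole of acid consumed per mole of zinc oxide dissolved. The sketch below estimates the acid demand per tonne of calcine; the assumed ZnO content is a hypothetical figure used only for illustration, and in an RLE plant most of this acid is returned from the electrowinning cells:

# Acid demand for the leach ZnO + H2SO4 -> ZnSO4 + H2O (1:1 molar ratio).
M_ZnO, M_H2SO4 = 81.38, 98.08    # molar masses, g/mol

calcine_kg = 1000.0              # one tonne of roasted calcine
zno_fraction = 0.85              # assumed ZnO content of the calcine (hypothetical)

mol_zno = calcine_kg * zno_fraction * 1000 / M_ZnO
acid_kg = mol_zno * M_H2SO4 / 1000

print(f"Approximate H2SO4 consumed: {acid_kg:.0f} kg per tonne of calcine")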
Purification
The purification process utilizes the cementation process to further purify the zinc. It uses zinc dust and steam to remove copper, cadmium, cobalt, and nickel, which would interfere with the electrolysis process. After purification, concentrations of these impurities are limited to less than 0.05 milligram per liter (4×10−7 pound per U.S. gallon). Purification is usually conducted in large agitated tanks. The process takes place at temperatures ranging from , and pressures ranging from atmospheric to (absolute scale). The by-products are sold for further refining.
The zinc sulfate solution must be very pure for electrowinning to be at all efficient. Impurities can change the decomposition voltage enough to where the electrolysis cell produces largely hydrogen gas rather than zinc metal.
Electrolysis
Zinc is extracted from the purified zinc sulfate solution by electrowinning, which is a specialized form of electrolysis. The process works by passing an electric current through the solution in a series of cells. This causes the zinc to deposit on the cathodes (aluminium sheets) and oxygen to form at the anodes. Sulfuric acid is also formed in the process and reused in the leaching process. Every 24 to 48 hours, each cell is shut down, the zinc-coated cathodes are removed and rinsed, and the zinc is mechanically stripped from the aluminium plates.
Electrolytic zinc smelters contain as many as several hundred cells. A portion of the electrical energy is converted into heat, which increases the temperature of the electrolyte. Electrolytic cells operate at temperature ranges from and at atmospheric pressure. A portion of the electrolyte is continuously circulated through the cooling towers both to cool and concentrate the electrolyte through evaporation of water. The cooled and concentrated electrolyte is then recycled to the cells. This process accounts for approximately one-third of all the energy usage when smelting zinc.
There are two common processes for electrowinning the metal: the low current density process, and the Tainton high current density process. The former uses a 10% sulfuric acid solution as the electrolyte, with current density of 270–325 amperes per square meter. The latter uses 22–28% sulfuric acid solution as the electrolyte with a current density of about 1,000 amperes per square metre. The latter gives better purity and has higher production capacity per volume of electrolyte, but has the disadvantage of running hotter and being more corrosive to the vessel in which it is done. In either of the electrolytic processes, each metric ton of zinc production expends about of electric power.
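The electrical requirement of electrowinning can be estimated from Faraday's law, since depositing each zinc atom requires two electrons. The sketch below computes the theoretical charge per tonne of zinc and converts it to energy for an assumed cell voltage of about 3.4 V; the voltage is an illustrative assumption rather than a figure from this article, and real plants use more energy because of current-efficiency losses and auxiliary loads:

# Faraday's-law estimate for zinc electrowinning: Zn^2+ + 2 e- -> Zn.
F = 96485.0          # Faraday constant, C/mol
M_Zn = 65.38         # molar mass of zinc, g/mol
n = 2                # electrons per zinc ion

mass_g = 1.0e6       # one metric ton of zinc
charge_C = mass_g / M_Zn * n * F
charge_kAh = charge_C / 3600 / 1000

cell_voltage = 3.4   # assumed cell voltage, V (illustrative, not from the article)
energy_kwh = charge_C * cell_voltage / 3.6e6

print(f"Theoretical charge: {charge_kAh:.0f} kAh per tonne")
print(f"Energy at {cell_voltage} V: ~{energy_kwh:.0f} kWh per tonne (before efficiency losses)")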
Melting and casting
Depending on the type of end-products produced, the zinc cathodes coming out of the electro-winning plant can undergo an additional transformation step in a foundry. Zinc cathodes are melted in induction furnaces and cast into marketable products such as ingots. Other metals and alloy components may be added to produce zinc containing alloys used in die-casting or general galvanization applications. Finally, molten zinc may be transported to nearby conversion plants or third parties using specially-designed insulated containers.
Pyrometallurgical processes
There are also several pyrometallurgical processes that reduce zinc oxide using carbon, then distil the metallic zinc from the resulting mix in an atmosphere of carbon monoxide. The major downfall of any of these pyrometallurgical processes is that the product is only 98% pure; a standard composition is 1.3% lead, 0.2% cadmium, 0.03% iron, and 98.5% zinc. This may be pure enough for galvanization, but not enough for die-casting alloys, which require special high-grade zinc (99.995% pure). In order to reach this purity the zinc must be refined.
The four types of commercial pyrometallurgical processes are the St. Joseph Minerals Corporation's (electrothermic) process, the blast furnace process, the New Jersey Zinc continuous vertical-retort process, and the Belgian-type horizontal retort process.
St. Joseph Mineral Company (electrothermic) process
This process was developed by the St. Joseph Mineral Company in 1930, and is the only pyrometallurgical process still used in the US to smelt zinc. The advantage of this system is that it is able to smelt a wide variety of zinc-bearing materials, including electric arc furnace dust. The disadvantage of this process is that it is less efficient than the electrolysis process.
The process begins with a downdraft sintering operation. The sinter, which is a mixture of roaster calcine and EAF (electric arc furnace) calcine, is loaded onto a gate type conveyor and then combustions gases are pumped through the sinter. The carbon in the combustion gases react with some impurities, such as lead, cadmium, and halides. These impurities are driven off into filtration bags. The sinter after this process, called product sinter, usually has a composition of 48% zinc, 8% iron, 5% aluminium, 4% silicon, 2.5% calcium, and smaller quantities of magnesium, lead, and other metals. The sinter product is then charged with coke into an electric retort furnace. A pair of graphite electrodes from the top and bottom of the furnace produce current flow through the mixture. The coke provides electrical resistance to the mixture in order to heat the mixture to and produce carbon monoxide. These conditions allow for the following chemical reaction to occur:
ZnO + CO -> Zn (g) + CO2
The zinc vapour and carbon dioxide pass to a vacuum condenser, where zinc is recovered by bubbling through a molten zinc bath. Over 95% of the zinc vapour leaving the retort is condensed to liquid zinc. The carbon dioxide is regenerated with carbon, and the carbon monoxide is recycled back to the retort furnace.
Blast furnace process (Imperial Smelting Process)
This process was developed by the National Smelting Company at Avonmouth Docks, England, in order to increase production, increase efficiency, and decrease labour and maintenance costs. L. J. Derham proposed using a spray of molten lead droplets to rapidly cool and absorb the zinc vapour, despite the high concentration of carbon dioxide. The mixture is then cooled, where the zinc separates from the lead. The first plant using this design opened up in 1950. One of the advantages of this process is that it can co-produce lead bullion and copper dross. In 1990, it accounted for 12% of the world's zinc production.
The process starts by charging solid sinter and heated coke into the top of the blast furnace. Preheated air at is blown into the bottom of the furnace. Zinc vapour and sulfides leave through the top and enter the condenser. Slag and lead collect at the bottom of the furnace and are tapped off regularly. The zinc is scrubbed from the vapour in the condenser via liquid lead. The liquid zinc is separated from the lead in the cooling circuit. Approximately of lead are required each year for this process, however this process recovers 25% more lead from the starting ores than other processes.
New Jersey Zinc continuous vertical retort
The New Jersey Zinc process is no longer used to produce primary zinc in the U.S., in Europe and Japan, but it still is used to treat secondary operations. This process peaked in 1960, when it accounted for 5% of world zinc production. A modified version of this process is still used at a Huludao plant in China (originally established by the Japanese in 1937), which produced 65,000 metric tons per year as of 1991 and increased capacity to at least 210,000 t/year by 2023.
This process begins by roasting concentrates that are mixed with coal and briquetted in two stages. The briquettes are then heated in an autogenous coker at and then charged into the retort. There are three reasons to briquette the calcine: to ensure free downward movement of the charge; to permit heat transfer across a practical size cross-section; to allow adequate porosity for the passage of reduced zinc vapour to the top of the retort. The reduced zinc vapour that is collected at the top of the retort is then condensed to a liquid.
Overpelt improved upon this design by using only one large condensation chamber, instead of many small ones, as it was originally designed. This allowed for the carbon monoxide to be recirculated into the furnaces for heating the retorts.
This process was licensed to the Imperial Smelting Corporation (ISC), based in Avonmouth, England, which had a large vertical retort (VR) plant in production for many years. It was used until the mid-1970s when it was superseded by the company's Imperial Smelting Furnace (ISF) plant. The VR plant was demolished in 1975.
Belgian-type horizontal retort process
This process was the main process used in Britain from the mid-19th century until 1951. The process was very inefficient as it was designed as a small scale batch operation. Each retort only produced so companies would put them together in banks and used one large gas burner to heat all of them. The Belgian process requires redistillation to remove impurities of lead, cadmium, iron, copper, and arsenic.
History
The first production of zinc in quantity seems to have been in India starting from the 12th century and later in China from the 16th century. In India, zinc was produced at Zawar from the 12th to the 18th centuries, although some zinc artifacts appear to have been made during classical antiquity in Europe. The sphalerite ore found here was presumably converted to zinc oxide via roasting, although no archaeological evidence of this has been found. Smelting is thought to have been done in sealed cylindrical clay retorts which were packed with a mixture of roasted ore, dolomite, and an organic material, perhaps cow dung, and then placed vertically in a furnace and heated to around 1100 °C. Carbon monoxide produced by the charring of the organic material would have reduced the zinc oxide to zinc vapour, which then liquefied in a conical clay condenser at the bottom of the retort, dripping down into a collection vessel. Over the period 1400–1800, production is estimated to have been about 200 kg/day. Zinc was also smelted in China from the mid-sixteenth century on.
Large-scale zinc production in Europe began with William Champion, who patented a zinc distillation process in 1738. In Champion's process, zinc ore (in this case, the carbonate, ZnCO3) was sealed in large reduction pots with charcoal and heated in a furnace. The zinc vapor then descended through an iron condensing pipe until reaching a water-filled vessel at the bottom. Champion set up his first zinc works in Bristol, England, but soon expanded to Warmley and by 1754 had built four zinc furnaces there. Although Champion succeeded in producing about 200 tons of zinc, his business plans were not successful and he was bankrupt by 1769. However, zinc smelting continued in this area until 1880.
Early European zinc production also took place in Silesia, in Carinthia, and in Liège, Belgium. In the Carinthian process, used in works established in 1798 by Bergrath Dillinger, a wood-fueled furnace heated a large number of small vertical retorts, and zinc vapor then dropped through a ceramic pipe into a common condensation chamber below. This process was out of use by 1840. The Belgian and Silesian processes both used horizontal retorts. In Silesia, Johann Ruhberg built a furnace to distill zinc in 1799, at first using pots but later changing to flat-bottomed retorts called "muffles", attached to horizontal tubes bent downwards in which the zinc condensed. The Silesian process eventually merged with the Belgian process. This process, developed by Jean-Jacques Daniel Dony, was introduced 1805–1810, and used retorts with a cylindrical cross-section. Condensers were horizontal clay tubes extending from the ends of the retorts. The merged "Belgo-Silesian" horizontal retort process was widely adopted in Europe by the third quarter of the 19th century, and later in the United States.
Experimental attempts to extract zinc via electrolysis began in the 19th century, but the only commercially successful application before 1913 was a process, used in Great Britain and Austria, where zinc and chlorine were co-produced by electrolysis of an aqueous zinc chloride solution. The Anaconda Copper Company, at Anaconda, Montana, and the Consolidated Mining and Smelting Company, at Trail, British Columbia, both built successful electrolytic plants in 1915 using the currently used zinc sulfate process. This method has continued to grow in importance and in 1975 accounted for 68% of world zinc production.
The continuous vertical retort process was introduced in 1929 by the New Jersey Zinc Company. This process used a retort with silicon carbide walls, around 9 meters high and with a cross section of 2 by 0.3 meters. The walls of the retort were heated to 1300 °C and briquettes consisting of sintered zinc ore, coke, coal, and recycled material were fed into the top of the retort. Gaseous zinc was drawn off from the top of the column and, after a 20-hour journey through the retort, spent briquettes were removed from the bottom. To condense the gaseous zinc, the company first used a simple brick chamber with carborundum baffles, but efficiency was poor. During the 1940s a condenser was developed which condensed the zinc vapor on a spray of liquid zinc droplets, thrown up by an electrical impeller.
The electrothermic process, developed by the St. Joseph's Lead Company, was somewhat similar. The first commercial plant using this process was built in 1930 at the present site of Josephtown, Pennsylvania. The electrothermic furnace was a steel cylinder around 15 meters high and 2 meters in diameter, lined with firebrick. A mixture of sintered ore and coke was fed into the top of the furnace, and a current of 10,000–20,000 amperes, at a potential difference of 240 volts, was applied between carbon electrodes in the furnace, raising the temperature to 1200–1400 °C. An efficient condenser was devised for this process from 1931–1936; it consisted of a bath of liquid zinc which the exhaust gases were drawn through by suction. The zinc content of the gas stream was absorbed into the liquid bath.
The blast-furnace process was developed starting in 1943 at Avonmouth, England by the Imperial Smelting Corporation, which became part of Rio Tinto Zinc in 1968. It uses a spray of molten lead droplets to condense the zinc vapor.
See also
Waelz process
References
Metallurgical processes
Smelting
Smelting | Zinc smelting | Chemistry,Materials_science | 4,191 |
5,015,476 | https://en.wikipedia.org/wiki/Nitrate%20test | A nitrate test is a chemical test used to determine the presence of nitrate ion in solution. Testing for the presence of nitrate via wet chemistry is generally difficult compared with testing for other anions, as almost all nitrates are soluble in water. In contrast, many common ions give insoluble salts, e.g. halides precipitate with silver, and sulfate precipitates with barium.
The nitrate anion is an oxidizer, and many tests for the nitrate anion are based on this property. However, other oxidants present in the analyte may interfere and give erroneous results.
Nitrate can also be detected by first reducing it to the more reactive nitrite ion and using one of many nitrite tests.
Brown ring test
A common nitrate test, known as the brown ring test, can be performed by adding iron(II) sulfate to a solution of a nitrate, then slowly adding concentrated sulfuric acid such that the acid forms a layer below the aqueous solution. A brown ring will form at the junction of the two layers, indicating the presence of the nitrate ion. Note that the presence of nitrite ions will interfere with this test.
The overall reaction is the reduction of the nitrate ion to nitric oxide by iron(II), which is oxidised to iron(III), followed by the formation of nitrosyl ferrous sulfate between the nitric oxide and the remaining iron(II), where nitric oxide is reduced to NO−.
2HNO3 + 3H2SO4 + 6FeSO4 → 3Fe2(SO4)3 + 2NO + 4H2O
[Fe(H2O)6]SO4 + NO → [Fe(H2O)5(NO)]SO4 + H2O
This test is sensitive up to 2.5 micrograms and a concentration of 1 in 25,000 parts.
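The quoted dilution limit can be restated in more familiar units. The arithmetic below assumes that "1 in 25,000 parts" means one part nitrate by mass in 25,000 parts of solution and that the dilute solution has the density of water:

# Convert the brown ring test's dilution limit to a mass concentration.
dilution = 1 / 25_000        # one part nitrate per 25,000 parts of solution, by mass
density_g_per_L = 1000.0     # assume the dilute solution has the density of water

conc_mg_per_L = dilution * density_g_per_L * 1000
print(f"Detection limit: ~{conc_mg_per_L:.0f} mg/L of nitrate")   # about 40 mg/L

# Volume of such a solution that contains the 2.5 microgram absolute limit
sample_uL = 2.5 / conc_mg_per_L * 1e6 / 1000
print(f"2.5 ug of nitrate corresponds to ~{sample_uL:.0f} uL at that concentration")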
Devarda's test
Devarda's alloy (Copper/Aluminium/Zinc) is a reducing agent. When reacted with nitrate in sodium hydroxide solution, ammonia is liberated. The ammonia formed may be detected by its characteristic odor, and by damp red litmus paper's turning blue, signalling that it is an alkali — very few gases other than ammonia evolved from wet chemistry are alkaline.
3 NaNO3 + 8 Al + 5 NaOH + 18 H2O → 3 NH3 + 8 Na[Al(OH)4]
Aluminium is the reducing agent in this reaction.
Diphenylamine test
Diphenylamine may be used as a wet chemical test for the presence of the nitrate ion. In this test, a solution of diphenylamine and ammonium chloride in sulfuric acid is used. In the presence of nitrates, diphenylamine is oxidized, giving a blue coloration. This reaction has been used to test for organic nitrates as well, and has found use in gunshot residue kits detecting nitroglycerine and nitrocellulose.
Copper turnings test
The nitrate ion can easily be identified by heating the sample with copper turnings and concentrated sulfuric acid. Effervescence of a brown, pungent gas is observed which turns moist blue litmus paper red.
Here sulfuric acid reacts with the nitrate ion to form nitric acid. Nitric acid then reacts with the copper turnings to form nitric oxide. Nitric oxide is thus oxidised to nitrogen dioxide.
Cu + 4 HNO3 → Cu(NO3)2 + 2 NO2 + 2 H2O
See also
Nitrite test
References
Chemical tests
Nitrates | Nitrate test | Chemistry | 717 |
749,827 | https://en.wikipedia.org/wiki/Polarization%20mode%20dispersion | Polarization mode dispersion (PMD) is a form of modal dispersion where two different polarizations of light in a waveguide, which normally travel at the same speed, travel at different speeds due to random imperfections and asymmetries, causing random spreading of optical pulses. Unless it is compensated, which is difficult, this ultimately limits the rate at which data can be transmitted over a fiber.
Overview
In an ideal optical fiber, the core has a perfectly circular cross-section. In this case, the fundamental mode has two orthogonal polarizations (orientations of the electric field) that travel at the same speed. The signal that is transmitted over the fiber is randomly polarized, i.e. a random superposition of these two polarizations, but that would not matter in an ideal fiber because the two polarizations would propagate identically (are degenerate).
In a realistic fiber, however, there are random imperfections that break the circular symmetry, causing the two polarizations to propagate with different speeds. In this case, the two polarization components of a signal will slowly separate, e.g. causing pulses to spread and overlap. Because the imperfections are random, the pulse spreading effects correspond to a random walk, and thus have a mean polarization-dependent time-differential Δτ (also called the differential group delay, or DGD) proportional to the square root of propagation distance L:
Δτ = D_PMD √L,
where D_PMD is the PMD parameter of the fiber, typically measured in ps/√km, a measure of the strength and frequency of the imperfections.
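As an illustrative order-of-magnitude example (the numbers here are assumed for illustration, not taken from the article): a fiber with a PMD parameter of 0.1 ps/√km over a 10,000 km link accumulates a mean DGD of

\Delta\tau = D_\mathrm{PMD}\sqrt{L} = 0.1\ \mathrm{ps/\sqrt{km}} \times \sqrt{10{,}000\ \mathrm{km}} = 10\ \mathrm{ps},

which is already a sizeable fraction of the 25 ps bit period at 40 Gbit/s, illustrating why PMD matters mainly at high data rates and long distances.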
The symmetry-breaking random imperfections fall into several categories. First, there is geometric asymmetry, e.g. slightly elliptical cores. Second, there are stress-induced material birefringences, in which the refractive index itself depends on the polarization. Both of these effects can stem from either imperfection in manufacturing (which is never perfect or stress-free) or from thermal and mechanical stresses imposed on the fiber in the field — moreover, the latter stresses generally vary over time.
Compensating for PMD
A PMD compensation system is a device which uses a polarization controller to compensate for PMD in fibers. Essentially, one splits the output of the fiber into two principal polarizations (usually those with dτ/dω = 0, i.e. no first-order variation of time-delay with frequency), and applies a differential delay to re-synchronize them. Because the PMD effects are random and time-dependent, this requires an active device that responds to feedback over time. Such systems are therefore expensive and complex; combined with the fact that PMD is not yet the limiting factor in the lower data rates still in common use, this means that PMD-compensation systems have seen limited deployment in large-scale telecommunications systems.
Another alternative would be to use a polarization maintaining fiber (PM fiber), a fiber whose symmetry is so strongly broken (e.g. a highly elliptical core) that an input polarization along a principal axis is maintained all the way to the output. Since the second polarization is never excited, PMD does not occur. Such fibers currently have practical problems, however, such as higher losses than ordinary optical fiber and higher cost. An extension of this idea is a single-polarization fiber in which only a single polarization state is allowed to propagate along the fiber (the other polarization is not guided and escapes).
Related phenomena
A related effect is polarization-dependent loss (PDL), in which two polarizations suffer different rates of loss in the fiber due, again, to asymmetries. PDL similarly degrades signal quality.
Strictly speaking, a circular core is not required in order to have two degenerate polarization states. Rather, one requires a core whose symmetry group admits a two-dimensional irreducible representation. For example, a square or equilateral-triangle core would also have two equal-speed polarization solutions for the fundamental mode; such general shapes also arise in photonic-crystal fibers. Again, any random imperfections that break the symmetry would lead to PMD in such a waveguide.
References
Rajiv Ramaswami and Kumar N. Sivarajan, Optical Networks: A Practical Perspective (Harcourt: San Diego, 1998).
Jay N. Damask, Polarization Optics in Telecommunications (Springer: New York, 2004)
See also
Optical polarization multiplexing
Polarization (waves) | Polarization mode dispersion | Physics | 914 |
508,070 | https://en.wikipedia.org/wiki/Telescoping%20series | In mathematics, a telescoping series is a series whose general term t_n is of the form t_n = a_{n+1} − a_n, i.e. the difference of two consecutive terms of a sequence a_n. As a consequence, the partial sums of the series consist of only two terms of a_n after cancellation.
The cancellation technique, with part of each term cancelling with part of the next term, is known as the method of differences.
An early statement of the formula for the sum or partial sums of a telescoping series can be found in a 1644 work by Evangelista Torricelli, De dimensione parabolae.
Definition
Telescoping sums are finite sums in which pairs of consecutive terms partly cancel each other, leaving only parts of the initial and final terms. Let a_n be the elements of a sequence of numbers. Then
∑_{n=1}^{N} (a_n − a_{n−1}) = a_N − a_0.
If a_n converges to a limit L, the telescoping series gives:
∑_{n=1}^{∞} (a_n − a_{n−1}) = L − a_0.
Every series is a telescoping series of its own partial sums.
Examples
The product of a geometric series with initial term a and common ratio r by the factor (1 − r) yields a telescoping sum, which allows for a direct calculation of its limit:
(1 − r) ∑_{n=0}^{N} a r^n = ∑_{n=0}^{N} (a r^n − a r^{n+1}) = a − a r^{N+1},
so ∑_{n=0}^{N} a r^n = a (1 − r^{N+1})/(1 − r) when r ≠ 1, and ∑_{n=0}^{∞} a r^n = a/(1 − r) when |r| < 1.
The series ∑_{n=1}^{∞} 1/(n(n+1)) is the series of reciprocals of pronic numbers, and it is recognizable as a telescoping series once rewritten in partial fraction form: 1/(n(n+1)) = 1/n − 1/(n+1).
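Writing out the partial sums makes the cancellation explicit (a routine verification added here for illustration):

\sum_{n=1}^{N}\frac{1}{n(n+1)} = \sum_{n=1}^{N}\left(\frac{1}{n}-\frac{1}{n+1}\right) = \left(1-\tfrac{1}{2}\right)+\left(\tfrac{1}{2}-\tfrac{1}{3}\right)+\cdots+\left(\tfrac{1}{N}-\tfrac{1}{N+1}\right) = 1-\frac{1}{N+1},

so the series converges to 1 as N → ∞.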
Let k be a positive integer. Then ∑_{n=1}^{∞} 1/(n(n+k)) = H_k/k, where H_k is the kth harmonic number.
Let k and m with k < m be positive integers. Then ∑_{n=1}^{∞} 1/((n+k)(n+k+1)⋯(n+m)) = k!/((m − k)·m!), where ! denotes the factorial operation.
Many trigonometric functions also admit representation as differences, which may reveal telescopic cancelling between the consecutive terms. Using the angle addition identity for a product of sines, sin(1/2) sin(n) = (1/2)[cos(n − 1/2) − cos(n + 1/2)], so that
∑_{n=1}^{N} sin(n) = (1/(2 sin(1/2))) [cos(1/2) − cos(N + 1/2)],
which does not converge as N → ∞.
Applications
In probability theory, a Poisson process is a stochastic process of which the simplest case involves "occurrences" at random times, the waiting time until the next occurrence having a memoryless exponential distribution, and the number of "occurrences" in any time interval having a Poisson distribution whose expected value is proportional to the length of the time interval. Let Xt be the number of "occurrences" before time t, and let Tx be the waiting time until the xth "occurrence". We seek the probability density function of the random variable Tx. We use the probability mass function for the Poisson distribution, which tells us that
Pr(Xt = x) = (λt)^x e^{−λt} / x!,
where λ is the average number of occurrences in any time interval of length 1. Observe that the event {Xt ≥ x} is the same as the event {Tx ≤ t}, and thus they have the same probability. Intuitively, if something occurs at least x times before time t, we have to wait at most t for the xth occurrence. The density function we seek is therefore
f(t) = d/dt Pr(Tx ≤ t) = d/dt Pr(Xt ≥ x) = d/dt [1 − ∑_{u=0}^{x−1} (λt)^u e^{−λt} / u!].
The sum telescopes, leaving
f(t) = λ e^{−λt} (λt)^{x−1} / (x − 1)!.
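Worked out term by term (added here as a check of the telescoping step), the differentiation gives

\frac{d}{dt}\left(1-\sum_{u=0}^{x-1}\frac{(\lambda t)^u e^{-\lambda t}}{u!}\right) = \lambda e^{-\lambda t}\sum_{u=0}^{x-1}\left(\frac{(\lambda t)^u}{u!}-\frac{(\lambda t)^{u-1}}{(u-1)!}\right) = \lambda e^{-\lambda t}\,\frac{(\lambda t)^{x-1}}{(x-1)!},

where the subtracted term for u = 0 is taken to be zero; this is the Erlang (gamma) density expected for the xth arrival time.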
For other applications, see:
Proof that the sum of the reciprocals of the primes diverges, where one of the proofs uses a telescoping sum;
Fundamental theorem of calculus, a continuous analog of telescoping series;
Order statistic, where a telescoping sum occurs in the derivation of a probability density function;
Lefschetz fixed-point theorem, where a telescoping sum arises in algebraic topology;
Homology theory, again in algebraic topology;
Eilenberg–Mazur swindle, where a telescoping sum of knots occurs;
Faddeev–LeVerrier algorithm.
Related concepts
A telescoping product is a finite product (or the partial product of an infinite product) that can be cancelled by the method of quotients to leave eventually only a finite number of factors. It is a finite product in which consecutive terms cancel denominator with numerator, leaving only the initial and final terms. Let a_n be a sequence of numbers. Then
∏_{n=1}^{N} a_{n−1}/a_n = a_0/a_N.
If a_n converges to 1, the resulting product gives:
∏_{n=1}^{∞} a_{n−1}/a_n = a_0.
For example, the infinite product
∏_{n=2}^{∞} (1 − 1/n²)
simplifies as
∏_{n=2}^{∞} ((n − 1)(n + 1))/n² = lim_{N→∞} (N + 1)/(2N) = 1/2.
References
Mathematical series | Telescoping series | Mathematics | 778 |
961,616 | https://en.wikipedia.org/wiki/PSI%20%28computational%20chemistry%29 | Psi is an ab initio computational chemistry package originally written by the research group of Henry F. Schaefer, III (University of Georgia). Utilizing Psi, one can perform a calculation on a molecular system with various kinds of methods such as Hartree-Fock, Post-Hartree–Fock electron correlation methods, and density functional theory. The program can compute energies, optimize molecular geometries, and compute vibrational frequencies. The major part of the program is written in C++, while a Python API is also available, which allows users to perform complex computations or automate tasks easily.
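As a minimal sketch of what driving Psi4 from Python typically looks like (the molecule, basis set, and memory setting below are arbitrary illustrative choices, and option names may vary between versions):

import psi4

# Allocate resources for the calculation (illustrative value).
psi4.set_memory("500 MB")

# Define a water molecule with a Z-matrix-style geometry string.
h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

# Pick a basis set, then run a Hartree-Fock (SCF) energy calculation.
psi4.set_options({"basis": "cc-pvdz"})
scf_energy = psi4.energy("scf")
print("SCF energy (hartree):", scf_energy)

Geometry optimizations follow the same pattern through functions such as psi4.optimize.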
Psi4 is the latest release of the program package; it is open source, released free of charge under the GPL through GitHub. Primary development of Psi4 is currently performed by the research groups of David Sherrill (Georgia Tech), T. Daniel Crawford (Virginia Tech), Francesco Evangelista (Emory University), and Henry F. Schaefer, III (University of Georgia), with substantial contributions by Justin Turney (University of Georgia), Andy Simmonett (NIH), and Rollin King (Bethel University). Psi4 is available on Linux releases such as Fedora and Ubuntu.
Features
The basic capabilities of Psi are concentrated around the following methods of quantum chemistry:
Hartree–Fock method
Density functional theory
Møller–Plesset perturbation theory
Coupled cluster
CASSCF
multireference configuration interaction methods
symmetry-adapted perturbation theory
Several methods are available for computing excited electronic states, including configuration interaction singles (CIS), the random phase approximation (RPA), time-dependent density functional theory (TD-DFT), and equation-of-motion coupled cluster (EOM-CCSD).
Psi4 has introduced the density-fitting approximation in many portions of the code, leading to faster computations and reduced I/O requirements.
Psi4 is the preferred quantum chemistry backend for the OpenFermion project, which seeks to perform quantum chemistry computations on quantum computers.
In Psi4 1.4, the program was adapted to facilitate high-throughput workflows and can be connected to BrianQC to speed up calculations for Hartree-Fock and Density functional theory methods.
See also
List of quantum chemistry and solid-state physics software
References
External links
Psi4 Homepage
Psi4 Source Code (GitHub)
Computational chemistry software
Free chemistry software
Chemistry software for Linux | PSI (computational chemistry) | Chemistry | 507 |
764,956 | https://en.wikipedia.org/wiki/George%20Porter | George Porter, Baron Porter of Luddenham, (6 December 1920 – 31 August 2002) was a British chemist. He was awarded the Nobel Prize in Chemistry in 1967.
Education and early life
Porter was born in Stainforth, near Thorne, in the then West Riding of Yorkshire. He was educated at Thorne Grammar School, then won a scholarship to the University of Leeds and gained his first degree in chemistry. During his degree, Porter was taught by Meredith Gwynne Evans, who he later said was the most brilliant chemist he had ever met. He was awarded a PhD from the University of Cambridge in 1949 for research investigating free radicals produced by photochemical means. He would later become a fellow at Emmanuel College, Cambridge.
Career and research
Porter served in the Royal Naval Volunteer Reserve during the Second World War. Porter then went on to do research at the University of Cambridge supervised by Ronald George Wreyford Norrish where he began the work that ultimately led to them becoming Nobel Laureates.
His original research in developing the technique of flash photolysis to obtain information on short-lived molecular species provided the first evidence of free radicals. His later research utilised the technique to study the detailed aspects of the light-dependent reactions of photosynthesis, with particular regard to possible applications to a hydrogen economy, of which he was a strong advocate.
He was Assistant Director of the British Rayon Research Association from 1953 to 1954, where he studied the phototendering of dyed cellulose fabrics in sunlight.
Porter served as professor in the Chemistry department at the University of Sheffield in 1954–65. It was here he started his work on flash photolysis with equipment designed and made in the departmental workshop. During this tenure he also took part in a television programme describing his work. This was in the "Eye on Research" series. Porter became Fullerian Professor of Chemistry and Director of the Royal Institution in 1966. During his directorship of the Royal Institution, Porter was instrumental in the setting up of Applied Photophysics, a company created to supply instrumentation based on his group's work. He was awarded the Nobel Prize in Chemistry in 1967 along with Manfred Eigen and Ronald George Wreyford Norrish. In the same year he became a visiting professor at University College London.
Porter was a major contributor to the public understanding of science. He became president of the British Association in 1985 and was the founding Chair of the Committee on the Public Understanding of Science (COPUS). He gave the Romanes Lecture, entitled "Science and the human purpose", at the University of Oxford in 1978; and in 1988 he gave the Dimbleby Lecture, "Knowledge itself is power." From 1990 to 1993 he gave the Gresham lectures in astronomy.
Awards and honours
Porter was elected a Fellow of the Royal Society (FRS) in 1960, a member of the American Academy of Arts and Sciences in 1979, a member of the American Philosophical Society in 1986, and served as President of the Royal Society from 1985 to 1990. He was also awarded the Davy Medal in 1971, the Rumford Medal in 1978, the Ellison-Cliffe Medal in 1991 and the Copley Medal in 1992.
Porter also received an Honorary Doctorate from Heriot-Watt University in 1971.
He was knighted in 1972, appointed to the Order of Merit in 1989, and was made a life peer as Baron Porter of Luddenham, of Luddenham in the County of Kent, in 1990. In 1995, he was awarded an Honorary Degree (Doctor of Laws) from the University of Bath.
In 1976 he gave the Royal Institution Christmas Lecture on The Natural History of a Sunbeam.
Porter served as Chancellor of the University of Leicester between 1984 and 1995. In 2001, the university's chemistry building was named the George Porter Building in his honour.
Family
In 1949 Porter married Stella Jean Brooke.
Publications
Chemistry for the Modern World (1962)
Chemistry in Microtime (1996)
See also
List of presidents of the Royal Society
References
External links
Profile – Royal Institution of Great Britain
The Life and Scientific Legacy of George Porter, World Scientific Publishing, 2006
Obituary in The Guardian, 3 September 2002
Biographical Database of the British Chemical Community, 1880–1970
"The Relevance of Science". George Porter. JASA (Journal of the American Scientific Affiliation) Vol. 28. March 1976. pp. 2–3.(Includes editorial responses from astronomer Owen Gingerich and theologian Bernard Ramm amongst others.)
1920 births
2002 deaths
Chemists at the University of Cambridge
Academics of the University of Sheffield
Academics of University College London
Alumni of Emmanuel College, Cambridge
Alumni of the University of Leeds
British humanists
Crossbench life peers
Fellows of the Royal Society
Kalinga Prize recipients
Knights Bachelor
Members of the Order of Merit
Members of the Pontifical Academy of Sciences
Foreign associates of the National Academy of Sciences
Foreign members of the USSR Academy of Sciences
Foreign members of the Russian Academy of Sciences
Foreign fellows of the Indian National Science Academy
Nobel laureates in Chemistry
British Nobel laureates
People associated with the University of Leicester
People from Stainforth, South Yorkshire
English physical chemists
Presidents of the Royal Society
Directors of the Royal Institution
Recipients of the Copley Medal
Presidents of the British Science Association
English Nobel laureates
People educated at Thorne Grammar School
Photochemists
Royal Naval Volunteer Reserve personnel of World War II
Members of the American Philosophical Society
Life peers created by Elizabeth II
Royal Navy sailors
Military personnel from South Yorkshire | George Porter | Chemistry | 1,084 |
63,550,672 | https://en.wikipedia.org/wiki/N%2CN-Dimethylaminomethylferrocene | N,N-Dimethylaminomethylferrocene is the dimethylaminomethyl derivative of ferrocene, (C5H5)Fe(C5H4CH2N(CH3)2). It is an air-stable, dark-orange syrup that is soluble in common organic solvents. The compound is prepared by the reaction of ferrocene with formaldehyde and dimethylamine:
(C5H5)2Fe + CH2O + HN(CH3)2 → (C5H5)Fe(C5H4CH2N(CH3)2) + H2O
It is a precursor to prototypes of ferrocene-containing redox sensors and diverse ligands.
The amine can be quaternized, which provides access to many derivatives.
References
Ferrocenes
Sandwich compounds
Cyclopentadienyl complexes | N,N-Dimethylaminomethylferrocene | Chemistry | 187 |
65,351,650 | https://en.wikipedia.org/wiki/Neo-colonial%20science | Neo-colonial research or neo-colonial science, frequently described as helicopter research, parachute science or research, parasitic research, or safari study, is when researchers from wealthier countries go to a developing country, collect information, travel back to their country, analyze the data and samples, and publish the results with no or little involvement of local researchers. A 2003 study by the Hungarian Academy of Sciences found that 70% of articles in a random sample of publications about least-developed countries did not include a local research co-author.
Frequently, during this kind of research, the local colleagues might be used to provide logistics support as fixers but are not engaged for their expertise or given credit for their participation in the research. Scientific publications resulting from parachute science frequently only contribute to the career of the scientists from rich countries, thus limiting the development of local science capacity (such as funded research centers) and the careers of local scientists. This form of "colonial" science has reverberations of 19th century scientific practices of treating non-Western participants as "others" in order to advance colonialism—and critics call for the end of these extractivist practices in order to decolonize knowledge.
This kind of research approach reduces the quality of research because international researchers may not ask the right questions or draw connections to local issues. The result of this approach is that local communities are unable to leverage the research to their own advantage. Ultimately, especially for fields dealing with global issues like conservation biology which rely on local communities to implement solutions, neo-colonial science prevents institutionalization of the findings in local communities in order to address issues being studied by scientists.
Effects
The use of helicopter research has also led to a stigma of research within minority groups; some going so far as to deny research within their communities. Such safari studies lead to long-term negative effects for the scientific community and researchers, as distrust develops within peripheral communities.
Donor robbery
Funds for research in developing countries are often provided by bilateral and international academic and research programmes for sustainable development. Through 'donor robbery' a large proportion of such international funds may end up in the wealthier countries via consultancy fees, laboratory costs in rich universities, overhead or purchase of expensive equipment, hiring expatriates and running "enclave" research institutes, depending on international conglomerates.
Use of open data
The current tendency of freely availing research datasets may lead to exploitation of, and rapid publication of results based on data pertaining to developing countries by rich and well-equipped research institutes, without any further involvement and/or benefit to local communities; similarly to the historical open access to tropical forests that has led to the disappropriation ("Global Pillage") of plant genetic resources from developing countries.
Professional discourse
In certain fields of research, such as global public health, both the journals and the professionals creating the field have defined much of their work under colonial structures and assumptions. This in turn prevents participation in the field from early in the process, even before authorship or credit is assigned in publishing. The composition of editorial boards of journals publishing in environmental sciences and public health reflects this, with a vast majority of editors based in high-income countries despite the global scope of the journals' fields.
Mitigation
Some journals and publishers are implementing policies that should mitigate the impact of parachute science. One of the conditions for publication set by the journal Global Health Action is that, "Articles reporting research involving primary data collection will normally include researchers and institutions from the countries concerned as authors, and include in-country ethical approval." The Lancet Global Health likewise encouraged submissions to review their practices for including local participants. Similarly, in 2021, PLOS announced a policy that required changes in reporting for researchers working in other countries.
A number of research communities are putting protocols in place for indigenous health information. In the US, the Cherokee Nation established a specific Institutional Review Board, aiming at ensuring the protection of the rights and welfare of tribal members involved in research projects. The Cherokee Nation IRB does not allow helicopter research. The Human Heredity and Health in Africa (H3Africa) Initiative launched guidelines for working with genetic information from the continent in 2018.
An Ethiopian soil scientist, Mitiku Haile, suggests that such "free riding" should be "condemned by all partners and, if found, should be brought to the attention of the scientific community and the international and national funding agencies".
Also in Africa, since the outbreak of the coronavirus pandemic in 2020, travel restrictions on international scholars have tended to result in local scientists stepping up to lead research.
Examples by field
Examples of neo-colonial approaches to science include:
In the medical world: "A popular term for a clinical or epidemiologic research project conducted by foreign scientists who use local contacts to gain access to a population group and obtain samples"
In anthropology, particularly when related to peripheral ethnic groups: "Any investigation within the community in which a researcher collects data, leaves to disseminate it, and never again has contact with the tribe."
In geosciences, a 2020 study found that 30% of studies about Africa contained an African author. (See also: Ubirajara jubatus.)
When scientists from a central, dominant ethnic or sociological group conduct research in areas where minority groups are living (often peripheral areas), there is also a risk for helicopter research, though it may not appear directly from the academic affiliation of the researchers. For instance, within the United States, it has been used primarily in the study of Native Americans.
Climate change
An analysis of research funding for climate change from 1990 to 2020 found that 78% of the money for research on climate change in Africa was spent in European and North American institutions, and that more was spent on former British colonies than on other countries. This in turn both prevents local researchers from doing groundbreaking work, because they don't have the funding for experimental activities, and reduces investment in local researchers' ideas and in topics important to the Global South, such as climate change adaptation.
Soil science
Soil scientists have qualified helicopter research as a perpetuation of "colonial" science. Typically, researchers from rich countries would come to establish soil profile pits or collect soil and peat samples, which is often more easily done in poor countries given the availability of cheap labour and the goodwill of villagers to dig a pit on their land for a small payment. The profile will be described and samples taken with the help of local people, possibly also university staff. In the case of helicopter research, the outcomes are then published, such as discoveries in tropical peatlands, sometimes in high-level journals, without the involvement of local colleagues. "Overall, helicopter research tends to produce academic papers that further the career of scientists from developed countries, but provide little practical outcomes for nations where the studies are conducted, nor develop the careers of their local scientists."
Coral Reef research
A 2021 study in Current Biology quantified the amount of parachute research happening in coral reef studies and found such approaches to be the norm.
Examples by region
Europe
The 2015 description of Tetrapodophis was performed by three European scientists. When the Brazilian newspaper Estadão – Brazil being the country where the fossil hails from – questioned lead researcher David M. Martill, he replied "It should be fossils for all. No countries existed when the animals were fossilized. [..] what difference would it make [partnering with Brazilian scientists]? I mean, do you want me also to have a black person on the team for ethnicity reasons, and a cripple and a woman, and maybe a homosexual too, just for a bit of all round balance? [..] Now I don't work in Brazil. But I still work on Brazilian fossils. There are hundreds of them in museums all over Europe, America and in Japan."
Central Africa
A 2009 study found that Europeans participated in 77% of regionally co-authored papers in Central African countries. Even though local authors are credited with the work, they aren't always given participatory roles in the final production of the research itself—instead playing roles in fieldwork.
Indonesia
In April 2018, a publication about Indonesia's Bajau people received great attention. These "sea nomads" had a genetic adaptation resulting in large spleens that supply additional oxygenated red blood cells. A month later this publication was criticised by Indonesian scientists. Their article in Science questioned the ethics of scientists from the United States and Denmark who took DNA samples of the Bajau people and analyzed them, without much involvement of Bajau or other Indonesian people.
See also
Academic dishonesty
Bioethics
Biopiracy
Bullying in academia
Committee on Publication Ethics
Conflicts of interest in academic publishing
Research ethics
Research integrity
Scientific method
References
Misconduct
Science in society | Neo-colonial science | Technology | 1,761 |
61,212,878 | https://en.wikipedia.org/wiki/NGC%20931 | NGC 931 is a spiral galaxy located in the constellation Triangulum. It is located at a distance of circa 200 million light-years from Earth, which, given its apparent dimensions, means that NGC 931 is about 200,000 light years across. It was discovered by Heinrich d'Arrest on September 26, 1865. It is classified as a Seyfert galaxy.
Characteristics
The nucleus of NGC 931 has been found to be active and it has been categorised as a type I Seyfert galaxy due to its narrow H-beta emission line. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole in the centre of NGC 931 is estimated to be 10^(7.64 ± 0.40) solar masses (17–110 million solar masses) based on the stellar velocity dispersion.
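For readers unfamiliar with the logarithmic notation, converting the quoted exponent and its uncertainty into linear units (a simple arithmetic check added here) reproduces the bracketed range:

10^{7.24}\,M_\odot \approx 1.7\times10^{7}\,M_\odot, \qquad 10^{7.64}\,M_\odot \approx 4.4\times10^{7}\,M_\odot, \qquad 10^{8.04}\,M_\odot \approx 1.1\times10^{8}\,M_\odot,

i.e. roughly 17 to 110 million solar masses, with a central value near 44 million.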
NGC 931 has been found to emit radio waves, ultraviolet and X-rays. Observations by ASCA revealed that the X-ray spectrum was composed of soft and hard emission. The hard element was identified as a strong and wide fluorescent Fe Kα line, which is created when X-rays meet an optically cold thick material. The soft element has been identified as warm absorbing material. The galaxy was further observed by XMM-Newton, which revealed significant fluctuations and time lags in the flux changes of both the soft and hard elements.
A detailed X-ray spectrum of NGC 931 was obtained by the Chandra X-ray Observatory. It revealed the presence of many absorption lines from neon, magnesium, and silicon, with a variety of ionisation states. These lines were attributed to low-ionisation gases surrounding the nuclear X-ray source. No significant outflowing gas was detected on large scales.
Supernova
One supernova has been observed in NGC 931, SN 2009lw. The supernova was discovered by W. Li, S. B. Cenko, and A. V. Filippenko during the Lick Observatory Supernova Search on 24.24 November 2009, when it had an apparent magnitude of 18.8. It was identified as a type-Ib or possibly a type-IIb supernova a few months past maximum light.
Nearby galaxies
NGC 931 has been identified as a member of the NGC 973 group, one of the largest galaxy groups of the Perseus–Pisces Supercluster, with at least 39 galaxies identified as its members. Other members of the group include NGC 917, NGC 940, NGC 969, NGC 973, NGC 974, NGC 978, NGC 987, NGC 983, NGC 1060, NGC 1066, NGC 1067, and UGC 2105. On the other hand, Garcia recognised NGC 931 as the largest galaxy in a galaxy group known as the NGC 931 group, which also included NGC 940.
A smaller companion galaxy, measuring 0.2 by 0.1 arcminutes, is superimposed on the galaxy, lying about 0.35 arcminutes towards the north.
See also
NGC 1532 - a similar spiral galaxy
References
External links
NGC 931 on SIMBAD
Spiral galaxies
Seyfert galaxies
Triangulum
0931
01935
09399
Markarian 1040
Astronomical objects discovered in 1865
Discoveries by Heinrich Louis d'Arrest | NGC 931 | Astronomy | 697 |
505,526 | https://en.wikipedia.org/wiki/HAKMEM | HAKMEM, alternatively known as AI Memo 239, is a February 1972 "memo" (technical report) of the MIT AI Lab containing a wide variety of hacks, including useful and clever algorithms for mathematical computation, some number theory and schematic diagrams for hardware – in Guy L. Steele's words, "a bizarre and eclectic potpourri of technical trivia".
Contributors included about two dozen members and associates of the AI Lab. The title of the report is short for "hacks memo", abbreviated to six upper case characters that would fit in a single PDP-10 machine word (using a six-bit character set).
History
HAKMEM is notable as an early compendium of algorithmic technique, particularly for its practical bent, and as an illustration of the wide-ranging interests of AI Lab people of the time, which included almost anything other than AI research.
HAKMEM contains original work in some fields, notably continued fractions.
Introduction
Compiled with the hope that a record of the random things people do around here can save some duplication of effort -- except for fun.
Here is some little known data which may be of interest to computer hackers. The items and examples are so sketchy that to decipher them may require more sincerity and curiosity than a non-hacker can muster. Doubtless, little of this is new, but nowadays it's hard to tell. So we must be content to give you an insight, or save you some cycles, and to welcome further contributions of items, new or used.
See also
Hacker's Delight
AI Memo
References
External links
HAKMEM facsimile (PDF) (searchable version)
Algorithms
Computer science papers
1972 in Massachusetts
Memoranda
February 1972 events in the United States
History of the Massachusetts Institute of Technology | HAKMEM | Mathematics | 370 |
1,310,968 | https://en.wikipedia.org/wiki/Reciprocating%20compressor | A reciprocating compressor or piston compressor is a positive-displacement compressor that uses pistons driven by a crankshaft to deliver gases at high pressure. Pressures of up to 5,000 psig are commonly produced by multistage reciprocating compressors.
The intake gas enters the suction manifold, then flows into the compression cylinder where it gets compressed by a piston driven in a reciprocating motion via a crankshaft, and is then discharged. Applications include railway and road vehicle air brake systems, oil refineries, gas pipelines, oil and gas production drilling and well services, air and nitrogen injection, offshore platforms, chemical plants, natural gas processing plants, air conditioning, and refrigeration plants. One specialty application is the blowing of plastic bottles made of polyethylene terephthalate (PET).
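As a rough worked example of why high discharge pressures are reached in several stages (the figures and the equal-ratio rule of thumb below are illustrative assumptions, not from the article): with intercooling between stages, the overall compression is commonly split so that each stage sees roughly the same pressure ratio. Compressing atmospheric air (about 14.7 psia) to 5,000 psig (about 5,015 psia) in n = 4 stages gives

r_\text{stage} = \left(\frac{P_\text{discharge}}{P_\text{suction}}\right)^{1/n} \approx \left(\frac{5{,}015\ \text{psia}}{14.7\ \text{psia}}\right)^{1/4} \approx 4.3\ \text{per stage},

which keeps the temperature rise in each cylinder within practical limits.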
See also
Centrifugal compressor
Diving air compressor
Vapor-compression refrigeration
References
External links
Calculation of required cylinder compression for a multistage reciprocating compressor
Gas compressors | Reciprocating compressor | Chemistry | 210 |
14,723,308 | https://en.wikipedia.org/wiki/Myocyte-specific%20enhancer%20factor%202A | Myocyte-specific enhancer factor 2A is a protein that in humans is encoded by the MEF2A gene. MEF2A is a transcription factor in the Mef2 family. In humans it is located on chromosome 15q26. Certain mutations in MEF2A cause an autosomal dominant form of coronary artery disease and myocardial infarction.
Function
The process of differentiation from mesodermal precursor cells to myoblasts has led to the discovery of a variety of tissue-specific factors that regulate muscle gene expression. The myogenic basic helix-loop-helix proteins, including myoD (MIM 159970), myogenin (MIM 159980), MYF5 (MIM 159990), and MRF4 (MIM 159991) are 1 class of identified factors. A second family of DNA binding regulatory proteins is the myocyte-specific enhancer factor-2 (MEF2) family. Each of these proteins binds to the MEF2 target DNA sequence present in the regulatory regions of many, if not all, muscle-specific genes. The MEF2 genes are members of the MADS gene family (named for the yeast mating type-specific transcription factor MCM1, the plant homeotic genes 'agamous' and 'deficiens' and the human serum response factor SRF (MIM 600589)), a family that also includes several homeotic genes and other transcription factors, all of which share a conserved DNA-binding domain.[supplied by OMIM]
Interactions
Myocyte-specific enhancer factor 2A has been shown to interact with:
ASCL1,
EP300,
HDAC4,
HDAC9,
Histone deacetylase 5,
MAPK14,
MEF2D,
Mothers against decapentaplegic homolog 2, and
Thyroid hormone receptor alpha.
References
Further reading
External links
Transcription factors
Human proteins | Myocyte-specific enhancer factor 2A | Chemistry,Biology | 404 |
17,434,355 | https://en.wikipedia.org/wiki/Arthur%20W%20Graham%20III | Arthur "Art" W. Graham III (Nov 20, 1940 – May 12, 2008) was the Director of Timing & Scoring for the Indianapolis 500 from 1978 to 1998.
A native of Columbus, Indiana, Graham was a longtime resident of Cincinnati, Ohio, and later of Brownsburg, Indiana. He designed and implemented the first fully automated electronic race timing and scoring system and introduced many of the timing-and-scoring innovations now used in American and international open-wheel racing.
Graham was also a computer engineer at IBM for 30 years, from 1962 to 1992, overseeing the PC Division's unprecedented growth in home computers. His dual roles with IBM and Indy gave rise to a partnership between "Big Blue" and USAC that enabled innovations not seen in other motorsports.
Indy Racing League
A lifelong racing enthusiast who recalled watching the first live television coverage of the "500" in 1949 on a tiny screen through an appliance store window, Graham first became involved with the United States Auto Club in 1965 while living in Cincinnati. It wasn't long before he was serving on USAC's various competition commissions, eventually becoming Chairman of the Rules Committee. In 1982 he was named to USAC's Board of Directors, remaining there until 1997 as the Director of Corporate Development.
Computers were being used at Indianapolis when Graham first came onto the scene, but he revolutionized their use in timing and scoring procedures. He designed and installed the first automated system that tracked and communicated drivers' positions and speeds in real time. It simultaneously displayed race leaders and laps on the position board. Utilizing proprietary in-track antenna loops and on-car position transponders, the information was automatically fed to live TV broadcasts, allowing home viewers to follow the race and the position of their favorite drivers.
For many years prior, it was traditional to conduct an all-night audit of individual manual scoring sheets and DOS-based computers to verify race results, with the results not being officially posted until the following day. By the late 1980s, under Graham's leadership, they would be posted within an hour of the race finish. Graham has been recognized as the "Father of Autosport Timing Technology".
In the early 1990s, Graham began championing the cause of the National Midget Auto Racing Hall of Fame, and later served for several years as the organization's Secretary.
Interests
A great lover of big-band music, Graham was the Indiana representative of the Four Freshmen Society, and he had put in a considerable amount of effort toward the planning of a 60th anniversary celebration of the group's formation, to be held in Indianapolis in August, 2008.
Family
Graham was a member of Sigma Alpha Epsilon fraternity, Indiana Beta '62. His family includes wife Dina, daughter Susan L. Moore, sons Daniel A. and Matthew S. Graham, brother Andrew S. Graham, mother Martha S. Graham, and four grandchildren, Sydney, Reagan, Taylor and Kyle.
References
External links
United States Auto Club
Indianapolis Motor Speedway
Indianapolis 500
National Midget Auto Racing Hall of Fame
Indy Racing League
1940 births
2008 deaths
Auto racing executives
Indianapolis 500
Systems engineers
28,285,943 | https://en.wikipedia.org/wiki/LLNL%20RISE%20process | The LLNL RISE process was an experimental shale oil extraction technology developed by the Lawrence Livermore National Laboratory. The name comes from the abbreviation of the Lawrence Livermore National Laboratory and words 'rubble in situ extraction'.
LLNL RISE is a modified in situ extraction technology originally proposed by Rio Blanco Oil Shale Co. and developed by the Lawrence Livermore National Laboratory. It is classified as an internal combustion technology. The process was described in 1975 by A. E. Lewis and A. J. Rothman.
In the LLNL RISE process, a part of the oil shale deposit (roughly 20% of the total) is removed by conventional mining. The remaining deposit is then broken up with explosives to increase its porosity. As a result, a large underground retort chamber is created. The retort chamber is ignited at the top, and the combustion zone moves downward as oxygen gas is provided, similar to the process developed by Occidental Petroleum. The heat drives the retorting process, converting the kerogen in the oil shale to oil shale gas and shale oil vapors. Some oil is collected at the bottom of the retort, while the rest is collected at the surface as vapors.
The process was never used commercially. It was tested using an experimental simulated retort with a capacity of 6 tonnes of oil shale per day.
References
Oil shale technology
Thermal treatment
Lawrence Livermore National Laboratory | LLNL RISE process | Chemistry | 283 |
23,179,216 | https://en.wikipedia.org/wiki/Battlefield%203 | Battlefield 3 is a 2011 first-person shooter video game developed by DICE and published by Electronic Arts for Microsoft Windows, PlayStation 3 and Xbox 360.
In Battlefield 3's campaign, players take on the personas of several military roles: a U.S. Marine, an F/A-18F Super Hornet weapon systems officer, an M1A2 Abrams tank operator, and a Spetsnaz GRU operative. The campaign takes place in various locations and follows the stories of two characters, Henry Blackburn and Dimitri Mayakovsky.
The game sold 5 million copies in its first week of release, and received mostly positive reviews. The game's sequel, Battlefield 4, was released in 2013.
Gameplay
Battlefield 3 features combined arms battles across single-player, co-operative and multiplayer modes. It reintroduces several elements absent from the Bad Company games, including fighter jets, the prone position and 64-player battles on PC. To accommodate the lower player count on consoles, the ground area is limited for Xbox 360 and PS3, though fly space remains the same.
The game features maps set in Paris, Tehran (as well as other locations in Iran), Sulaymaniyah in Iraq, New York City, Wake Island, Oman, Kuwait City, and other parts of the Persian Gulf. The maps cover urban streets, metropolitan downtown areas, and open landscapes suited to vehicle combat. Battlefield 3 introduces the "Battlelog"; a free cross-platform social service with built-in text messaging, voice communications, game statistics, and the ability to join games that friends are already playing (though both players need to be on the same platform).
Cooperative
A demo featuring the new co-op mode was featured at Gamescom 2011. Split screen is not available. Battlefield 3's new Battlelog social network, DICE noted, would be tied to all co-op matches, allowing players to try to beat friends' scores and to track their performance. Participating in co-op mode allows the player to collect points that unlock additional content that can be used in multiplayer.
Multiplayer
Battlefield 3's multiplayer matches see players take on one of four roles: Assault, Support, Engineer and Recon. The Assault class focuses on assault rifles and healing teammates. The Support class focuses on light machine guns and supplying ammunition. The Engineer class focuses on supporting and destroying vehicles. The Recon class focuses on sniping and spotting enemies. The mechanics of the weapons have been changed to utilize the new engine: compatible weapons may have bipods attached which can then be deployed when in the prone position or near suitable scenery, and provide a significant boost to accuracy and recoil reduction. Suppressive fire from weapons blurs the vision and reduces the accuracy of those under fire, as well as health regeneration. The Recon class can put a radio beacon anywhere on the map and all squad members will be able to spawn on the location of the beacon.
Several game modes are present, including Conquest, Rush, Squad Deathmatch, Squad Rush and for the first time since Battlefield 1942, Team Deathmatch. However, more game modes are available through the purchase of extra downloadable content packs. The PC version of Battlefield 3 is by default launched via a web browser from the Battlelog web site. A server browser is present in console versions of the game.
Synopsis
Setting and characters
Battlefield 3's Campaign story is set during the fictional "War of 2014" and covers events that occur over nine months. Most of the story takes place in the Iran–Iraq region. Other locations include the Iranian-Azerbaijani border, Paris, and New York City. Most missions occur as flashbacks during the interrogation of Sergeant Henry Blackburn, and do not occur in order of events.
The Campaign puts the player in control of four different player characters. For most of the story, the player controls Sergeant Henry "Black" Blackburn (portrayed by Gideon Emery), a member of the U.S. Marine Corps 1st Recon Battalion and main protagonist. The player also controls Sgt. Jonathan "Jono" Miller, a M1 Abrams tank operator deployed in Tehran; Lt. Jennifer "Wedge" Colby Hawkins, an F/A-18F Super Hornet weapon systems officer; and Dimitri "Dima" Mayakovsky, a Russian GRU operative. The main antagonist, Solomon, is an overseas asset for the Central Intelligence Agency. Non-player characters include: Misfit 1, Blackburn's squad (David Montes, Steve Campo, Christian Matkovic, Jack Chaffin, and Cpt. Quinton Cole); Dima's GRU squadmates Vladimir and Kiril; and CIA Agents Gordon and Whistler, who interrogate Blackburn for much of the Campaign.
Plot
On March 15, 2014, Blackburn's squad, Misfit 1, attempts to locate a U.S. squad investigating an improvised explosive device in Sulaymaniyah, Iraqi Kurdistan, whose last known position was in territory controlled by the People's Liberation & Resistance (PLR), an Iranian paramilitary insurgent group. They are eventually ambushed by the PLR, who critically wound Chaffin, who then has to be extracted. They find the missing squad, which the PLR had ambushed, but a massive earthquake wrecks the city before they can escape. Blackburn, fellow squadmate Montes, and other survivors fight their way out of the ruins of the city. On the same day, the PLR stage a coup d'état in Iran, turning it into a military dictatorship, and the U.S. subsequently invades. Lt. Hawkins takes part in a raid on enemy fighters over Iran and an air strike over Mehrabad Airport. In the aftermath of the air strikes, Misfit 1 is sent into Tehran to perform battle damage assessment and apprehend the leader of the PLR, Faruk Al-Bashir. While investigating an underground bank vault in the target's suspected location, Blackburn and his team learn that the PLR has acquired Russian suitcase nukes, with two of the three devices missing. Being overrun, Misfit requests backup from an M1 Abrams column "Anvil 3", including Sergeant Miller. Miller facilitates Misfit 1's helicopter extraction, but his Abrams tank is disabled and overrun while the crew awaits the arrival of the Quick Reaction Force. Knocked out and taken prisoner, Miller is promptly executed by Solomon and Al-Bashir, with the event being filmed and posted on the Internet.
Later, Misfit 1 manages to capture Al-Bashir, who is fatally wounded when they cause his escape vehicle to crash. Realizing that he had been betrayed and used, Al-Bashir reveals some of Solomon's plan—to detonate the nukes in Paris and New York City—before succumbing to his wounds. Misfit 1 gets a lead on arms dealer Amir Kaffarov, who was working with Solomon and Al-Bashir. They attempt to capture Kaffarov from his villa on the Caspian Sea coast, near the Azerbaijani border. However, they run into a Russian paratrooper battalion, also after Kaffarov, and are forced to engage them. In the ensuing chaos, Blackburn's squadmates, Campo and Matkovic, are killed in an enemy strafing run. Meanwhile, a Spetsnaz team led by Dima assaults Kaffarov's villa. Kaffarov tries to bribe the team but is promptly beaten by Dima. Blackburn arrives at the villa and finds Dima with a deceased Kaffarov. Dima reveals Solomon's plot to Blackburn and requests his cooperation to prevent "a war between [their] nations." Meanwhile, Misfit 1's commanding officer Cole arrives, and Blackburn is forced to shoot him before he can kill Dima. Blackburn's shooting of his CO results in his eight-hour interrogation by the C.I.A. at Hunters Point, Queens.
During Blackburn's captivity, Dima's Spetsnaz squad attempts to stop the attack in Paris. However, Vladimir is killed, and the nuke detonates, killing 80,000 people. Meanwhile, the C.I.A. agents do not believe Blackburn's story since Solomon is a C.I.A. informant, and there is no concrete proof of his involvement in the terrorist attacks. They instead believe that Russia is responsible for the attacks and that Dima has tricked Blackburn.
With no other options, Blackburn and Montes break out of captivity to stop the attack in New York. Evading police, Blackburn manages to sneak into the hijacked Long Island Rail Road commuter train full of Solomon's men and explosive charges. He fights his way to the front car, where Solomon ambushes him. In the melee, Blackburn gains the upper hand by obtaining and activating the detonator to the charges, causing the train to crash. Blackburn pursues Solomon through the sewers and up to street level. Having obtained a vehicle, Montes picks up Blackburn and engages Solomon and the PLR in a brief vehicular chase, ending with both cars crashing in Times Square. As a bewildered crowd watches on, Solomon shoots Montes. Still, Blackburn manages to kill Solomon by bludgeoning him to death with a brick in the ensuing brawl and prevents the nuclear bomb from detonating.
The epilogue reveals that Dima had survived the Paris detonation, albeit suffering from radiation poisoning. He writes about the efforts of both himself and Blackburn to stop Solomon's plan "to set fire to the world." As he finishes, he examines a pistol. A knock comes from his door, the screen cuts to black, and the last sound is that of the pistol being loaded as Dima presumably prepares to defend himself.
Development
Battlefield 3's lead platform was originally the PC until it was switched to consoles midway through development. The Xbox 360 version of Battlefield 3 is shipped on two discs due to the disc size limit; however, the PS3 version ships on one Blu-ray Disc. It is the first game in the series that does not support versions of Windows prior to Windows Vista as the game only supports DirectX 10 and 11. The PC version was exclusive to EA's Origin platform, through which PC users also authenticate when connecting to the game; however the game eventually arrived on Steam in June 2020.
Battlefield 3 debuts the new Frostbite 2 engine. This updated Frostbite engine can realistically portray the destruction of buildings and scenery to a greater extent than previous versions. Unlike previous iterations, the new version can also support dense urban areas. Battlefield 3 used a new type of character animation technology called ANT that has been used in EA Sports games such as FIFA, but for Battlefield 3 is adapted to create a more realistic soldier, with the ability to transition into cover and turn the head before the body, as well as "drag fallen comrades into safety and mount weapons on almost any part of the terrain". EA stated that Commander Mode was unlikely to be included, which was met with some criticism on the EA forum.
PlayStation 3 exclusive content
During Sony's E3 2011 press conference, Jack Tretton of Sony Computer Entertainment of America announced that the PlayStation 3 version of the game would be bundled with a free copy of Battlefield 1943; however, at launch, the game was not included. EA then said that Battlefield 3 PlayStation 3 owners would receive timed-exclusive DLC for the game instead. On 20 November 2011, law firm Edelson McGuire took EA to court on behalf of disappointed gamers. The complaint focused on EA's communication of the change of plan and on its second proposal of early DLC that had already been announced. Shortly after EA was threatened with being taken to court over its failure to deliver the free game as announced at E3, EA announced they will offer owners of the PlayStation 3 version of Battlefield 3 a free downloadable copy of Battlefield 1943.
Wii U version
On 7 June 2011, during Nintendo's E3 2011 press conference, John Riccitiello of EA games expressed interest in Nintendo's upcoming system, the Wii U. Patrick Liu, the executive producer of Battlefield 3, stated that DICE currently have no games in development for the Wii U and a port for the console "probably won't happen".
Beta
The open beta commenced on 29 September 2011, for all platforms, and ended on 10 October 2011. 48 hour early access was granted to players who bought the Tier 1 edition of Medal of Honor or pre-ordered the digital version of Battlefield 3: Limited Edition through Origin.
Soundtrack
A soundtrack album was released on 24 October 2011, one day before the game was released. It is available on iTunes and Amazon. The music was composed by Johan Skugge and Jukka Rintamäki.
Easter eggs
Battlefield 3 and its five DLC packs contain numerous Easter eggs. Hiding in obscure areas of the game are references to other Electronic Arts games and franchises including Mirror's Edge and Mass Effect. Some sources suggested other Easter eggs pointed to future Battlefield games or DLC packs. During the firefight in the mall trying to protect Al-Bashir, the store in which they are defending Al-Bashir has copies of games on the shelf, one of them titled "Frostbite", which references the engine used by Battlefield 3. A wall on the multiplayer map "Wake Island" featuring the number 2143 and a futuristic hovercraft on a different map in the End Game DLC have been said to suggest a follow-up to 2006's Battlefield 2142 is in the works.
Several dinosaur Easter eggs—such as a flying pterodactyl on the map "Nebandan Flats" and tyrannosaurus skulls or toy statues on various maps—have led others to believe a dinosaur-related game or DLC would be released in the future. The idea of a dinosaur-related minigame originated from fan feedback through social media outlets prior to Battlefield 3s release. Neither of these possible upcoming games has been confirmed by EA.
Marketing and release
Battlefield 3 was revealed on 3 February 2011 by Game Informer. The coverage included information on building the game and interviews with the developer DICE, as well as three trailers: a teaser and the first two parts in a series of gameplay from the level "Fault Line". Several other trailers were released showing different aspects of the game, including both single and multiplayer, as well as emphasizing the new engine. On 16 August 2011, co-op gameplay and a "Caspian Border Multiplayer Gameplay Trailer" were shown at Gamescom 2011 illustrating the co-op mode and the first footage of air combat, respectively.
Trailer releases gained momentum in the week before the release of the game. EA released a multiplayer trailer which showed the variety of maps available in multiplayer, with short scenes of actual gameplay. It also featured shots of a map that is included in the "Back to Karkand" downloadable content (DLC). EA released a launch trailer, showing off the various missions in the single-player campaign.
EA CEO John Riccitiello stated that Battlefield 3 is aimed at competing with the Call of Duty series. EA planned on spending over $100 million on a marketing campaign for Battlefield 3. Electronic Arts stated that Battlefield 3 is "flat out superior" to Call of Duty. EA has said it is going on the "offense" in regards to its marketing on Battlefield 3, saying that it started its campaign early to establish a "beachhead". Anyone who had watched a trailer for the upcoming film Act of Valor through the official Battlefield 3 website could receive free downloadable dogtags for use with any version of the game.
For marketing and distribution to Japan, EA worked with Sega's Japanese branch to distribute the game nationwide. In January 2017, the game was made backwards compatible with the Xbox One (alongside Battlefield: Bad Company 2) and was subsequently added to the EA Access service.
Pre-order promotion
All pre-orders of the Limited Edition grant free access to the "Back to Karkand" DLC pack, a reference to the "Strike at Karkand" map (a popular Battlefield 2 map), to include four maps brought over from Battlefield 2, ten new weapons, four new vehicles, five new achievements/trophies, and a new addition to the series, "Assignments". The maps from the expansion pack will be: Strike at Karkand, Gulf of Oman, Wake Island, and Sharqi Peninsula.
Pre-ordering at selected retailers and Origin included the "Physical Warfare Pack", granting access to time-based exclusive weapons and items; including a light machine gun, a sniper rifle accessory, and armor-piercing ammunition. Also included is launch day access to the DAO-12 semi-automatic shotgun, which other players can unlock through game play. Pre-order at select retailers also provide the "SPECACT Kit Upgrade", the "Dog Tag Pack" and Battlefield 3 gear for the player's console avatar. Pre-ordering at Origin gave players a shotgun and beret for Battlefield Play4Free, and 48 hour early access to the Battlefield 3 beta.
Originally the "Physical Warfare Pack" was to be exclusive to pre-orders, but fan reaction to this was negative, causing EA to clarify that it would be made available to all players for free later in the year. On 2 September 2011, a trailer for the Physical Warfare Pack was released on YouTube showing all the content included within the pack in action in-game.
All the content except the "Back to Karkand" pack was available from day one. The pack was released on 6 December 2011 for the PlayStation 3, and a week later for the Xbox 360 and PC.
Online pass
To access the game's online multiplayer mode on consoles, players need to activate an online pass. New copies of the game include one online pass for the original owner of the game to access the multiplayer; however, if a player buys a used copy or rents the game, they must purchase an online pass separately, or access a 48-hour trial via the official game site. When asked why the developers implemented the pass system, game designer Alan Kertz replied, "because servers cost money, and used games don't make developers any money." Some of the online pass codes were invalid from the time of purchase, which EA responded to by telling affected consumers to ask the retailer for a replacement code.
Downloadable content
Back to Karkand
The first DLC pack, "Back to Karkand", was announced before launch and was released on 13 December 2011 for PC and Xbox 360, while PlayStation 3 owners received it a week earlier. It was priced at US$15 but was free for all users who purchased the limited edition. It features four maps remade from Battlefield 2, three new vehicles and ten new weapons.
Close Quarters
At GDC 2012 DICE revealed it would release three more DLCs. The second DLC, "Close Quarters" arrived in June 2012 featuring four new infantry-oriented maps, ten new weapons, HD Destruction, ten new assignments, five unique Dogtags, and a new game mode, Conquest Domination, a Conquest mode adapted for smaller spaces.
Armored Kill
The third DLC, "Armored Kill" arrived on 4 September 2012 for premium PlayStation 3 users and 11 September for Xbox 360 and PC users. The DLC was made available for non-premium PlayStation 3 and Xbox 360 users on 25 September 2012. Armored Kill included new vehicles, specifically tanks, ATVs, and mobile artillery, as well as new vehicle-oriented maps and what is called "the biggest map in Battlefield history".
Aftermath
A DLC titled "Aftermath" was revealed in a trailer for Battlefield 3 Premium. It was released to PlayStation 3 Premium subscribers on 27 November 2012 and to PC and Xbox 360 Premium subscribers on 4 December 2012. "Aftermath" was released to non-Premium subscribers on PlayStation 3 on 11 December 2012, and on PC and Xbox 360 on 18 December 2012. A video released by DICE games revealed that there would be a crossbow, with customizable scopes and various bolts before its release.
End Game
The fifth DLC, "End Game", was released for PlayStation 3 Premium members on 5 March 2013 and to PC and Xbox 360 Premium members on 12 March 2013, then to non-Premium PlayStation 3 players on 19 March 2013 and to PC and Xbox 360 players on 26 March 2013. It was developed in conjunction with Visceral Games.
Novel
Andy McNab penned a tie-in novel titled Battlefield 3: The Russian, which follows the story of GRU Spetsnaz commando Dmitri "Dima" Mayakovsky and his involvement against the PLR, as well as his connection to the antagonist, Solomon. McNab also served as the game's consultant on military tactics. The novel was released on 25 October 2011.
Reception
Critical reception
Battlefield 3 received mostly positive reviews. IGN gave it a score of 9.0 out of 10 for all platforms, praising the graphics and multiplayer; despite criticism of occasional glitches in the game engine, it still gave the game a mostly positive review: "Regardless of the narrative missteps or the occasional glitches, Battlefield 3 offers an unforgettable, world-class multiplayer suite that's sure to excite shooter fans."

Joystiq awarded the game 4.5 out of 5 stars, stating that the campaign was "tactically linear" and that the A.I. within the game was "murderously un-fun to fight". Complaints were also made about the multiplayer, stating that destruction was less extensive than expected: "It's not Bad Company 2, and levels won't start out intact and end looking like the surface of the moon the way they often did in that game." They did, however, praise the multiplayer experience as "unmatched", stating that this should be the sole reason to buy the game.

GameSpot gave Battlefield 3 a score of 8.5 out of 10 across all platforms, praising the deep multiplayer mode, great variety of vehicles, many well-designed environments, and a strong reward system for team play. The cooperative mode was viewed favorably; the only criticism of the cooperative missions was that "there aren't more of them to keep you busy".

Official Xbox Magazine gave the game 9 out of 10, commending the game for its multiplayer mode but criticizing the solo campaign. Similarly, Official Xbox Magazine (UK) gave the game 8 out of 10, applauding its multiplayer and calling it "The most expansive, refined Battlefield multiplayer yet".
Sales
According to EA, Battlefield 3 garnered 3 million pre-orders by the day of its release, a total that made it "the biggest first-person shooter launch in EA history". Two days after launch, EA CEO John Riccitiello announced via a conference call to investors that Battlefield 3 had already shipped 10 million units within a week of release, with 3 million of those being pre-orders. Electronic Arts stated that the title sold 5 million units within the first week of availability, easily becoming its fastest-selling game. After one month, EA chief financial officer Eric Brown announced Battlefield 3 had sold 8 million copies, and that the publisher had shipped 12 million copies of the game to retailers, 2 million more than it shipped for launch week. Peter Moore, the high-profile COO of EA, insisted that Battlefield 3 had successfully captured a slice of Call of Duty: Modern Warfare 3's market share. On 29 June 2012 EA revealed that the game had sold 15 million copies.
In Japan, Battlefield 3 sold around 123,379 copies for the PlayStation 3 and 27,723 copies for the Xbox 360 in its launch week. The following week the PlayStation 3 version sold a further 18,792 copies, for a total of 142,171; it later sold another 8,094 copies, bringing its total to 150,265.
Other responses
A scene in which the player is prompted to kill a rat that is attacking their character was criticized by People for the Ethical Treatment of Animals (PETA). In a press release issued by the organization's German office, it claimed that the game "treats animals in a sadistic manner". The release also went further on to say that the scene can have "a brutalising effect on the young male target audience".
Various scenes in Battlefield 3, such as the Grand Bazaar, are highly accurate reproductions of their real-life counterparts. Iran reacted to the scenes set within the country by banning the sale of the game. As the game had not been officially released in the country, the authorities strictly enforced the prevention of the distribution of pirated copies. The ban came after Iranian gamers had protested the release of the game and called for an apology.
Awards
Best Shooter, 2011 IGN People's Choice Award
Best Multiplayer Game, 2011 IGN People's Choice Award
Best Xbox 360 Shooter, Best of 2011 IGN Award & 2011 IGN People's Choice Award
Best Xbox 360 Multiplayer Game, Best of 2011 IGN Award & 2011 IGN People's Choice Award
Best PS3 Shooter, 2011 IGN People's Choice Award
Best PS3 Multiplayer Game, Best of 2011 IGN Award & 2011 IGN People's Choice Award
Best PC Shooter, Best of 2011 IGN Award & 2011 IGN People's Choice Award
Best PC Multiplayer Game, Best of 2011 IGN Award & 2011 IGN People's Choice Award
During the 15th Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences awarded Battlefield 3 "Outstanding Achievement in Sound Design", along with nominations for "Action Game of the Year" and outstanding achievement in "Art Direction", "Connectivity", "Online Gameplay", and "Visual Engineering".
Notes
References
External links
2011 video games
Fiction about aerial warfare
Asymmetrical multiplayer video games
Electronic Arts games
Esports games
First-person shooters
Video game interquels
IOS games
Martyrdom in fiction
Mass murder in fiction
Multiplayer and single-player video games
Multiplayer online games
PlayStation 3 games
Trains in fiction
War video games set in the United States
Video games about nuclear war and weapons
Video games about the United States Marine Corps
Video game sequels
Video games developed in Sweden
Video games set in 2014
Video games set in Azerbaijan
Video games set in France
Video games set in Iran
Video games set in Iraq
Video games set in Kuwait
Video games set in New York City
Video games set in Oman
Video games set in Paris
Video games set in Ukraine
Video games set in the United States
Video games featuring female protagonists
Windows games
Xbox 360 games
Frostbite (game engine) games
Video games adapted into novels
Video games using Havok
Fiction about aircraft carriers
British Academy Games Award for Multiplayer winners | Battlefield 3 | Physics | 5,590 |
20,361,847 | https://en.wikipedia.org/wiki/Methiodal | Methiodal is a pharmaceutical drug that was used as an iodinated contrast medium for X-ray imaging. Its uses included myelography (imaging of the spinal cord); for this use, cases of adhesive arachnoiditis have been reported, similar to those seen under the contrast medium iofendylate.
As of 2021, it was not known to be marketed anywhere in the world.
References
Radiocontrast agents
Organoiodides
Sulfonates | Methiodal | Chemistry | 98 |
15,253,921 | https://en.wikipedia.org/wiki/Cone-shape%20distribution%20function | The cone-shape distribution function, also known as the Zhao–Atlas–Marks time-frequency distribution (acronymized as the ZAM distribution or ZAMD), is one of the members of Cohen's class distribution function. It was first proposed by Yunxin Zhao, Les E. Atlas, and Robert J. Marks II in 1990. The distribution's name stems from the twin cone shape of the distribution's kernel function in the time–lag (t, τ) plane. The advantage of the cone kernel function is that it can completely remove the cross-term between two components having the same center frequency. Cross-terms resulting from components with the same time center, however, cannot be completely removed by the cone-shaped kernel.
Mathematical definition
The definition of the cone-shape distribution function is:

C_x(t, f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} A_x(\eta,\tau)\,\Phi(\eta,\tau)\,e^{j2\pi(\eta t-\tau f)}\,d\eta\,d\tau,

where the ambiguity function is

A_x(\eta,\tau) = \int_{-\infty}^{\infty} x(s+\tau/2)\,x^{*}(s-\tau/2)\,e^{-j2\pi s\eta}\,ds,

and the kernel function is

\Phi(\eta,\tau) = \frac{\sin(\pi\eta\tau)}{\pi\eta\tau}\,e^{-2\pi\alpha\tau^{2}}.

In the t–τ domain the kernel is supported on the twin-cone region |τ| ≥ 2|t| (equivalently |t| ≤ |τ|/2) and vanishes outside it; this cone gives the distribution its name, while the Gaussian factor e^{−2πατ²} controls how quickly the kernel decays along the lag (τ) direction.

The magnitude of the kernel function in the η–τ (ambiguity) domain, plotted for different values of the parameter α, shows that a properly chosen cone-shape kernel suppresses interference lying along the τ axis of the ambiguity domain. Therefore, unlike the Choi–Williams distribution function, the cone-shape distribution function can effectively reduce the cross-terms produced by two components with the same center frequency. However, the cross-terms on the η axis are still preserved.
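As a concrete illustration (not part of the original article), the short NumPy sketch below evaluates the magnitude of the kernel written above on a grid of (η, τ) values; the parameter value α = 0.2 and the grid limits are arbitrary choices made only for this example.

```python
import numpy as np

def cone_shape_kernel(eta, tau, alpha=0.2):
    """ZAM/cone-shape kernel in the ambiguity (eta, tau) domain, as written above."""
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(eta*tau) = sin(pi*eta*tau)/(pi*eta*tau)
    return np.sinc(eta * tau) * np.exp(-2.0 * np.pi * alpha * tau**2)

eta = np.linspace(-4.0, 4.0, 401)
tau = np.linspace(-4.0, 4.0, 401)
E, T = np.meshgrid(eta, tau)
K = np.abs(cone_shape_kernel(E, T))

# On the eta axis (tau = 0) the kernel is identically 1, so cross-terms located
# there pass through unchanged.
print(np.allclose(cone_shape_kernel(eta, 0.0), 1.0))              # True
# Along the tau axis (eta = 0) it decays like exp(-2*pi*alpha*tau^2).
print(cone_shape_kernel(0.0, np.array([0.0, 1.0, 2.0])))
```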
The cone-shape distribution function is implemented in the MATLAB Time-Frequency Toolbox and in National Instruments' LabVIEW Tools for Time-Frequency, Time-Series, and Wavelet Analysis.
See also
Cohen's class distribution function
Choi-Williams distribution function
Wigner distribution function
Ambiguity function
Short-time Fourier transform
References
Time–frequency analysis
Transforms | Cone-shape distribution function | Physics,Mathematics | 346 |
40,624,297 | https://en.wikipedia.org/wiki/Vulgamycin | Vulgamycin is an antibiotic made by Streptomyces.
References
Antibiotics | Vulgamycin | Chemistry,Biology | 20 |
63,243 | https://en.wikipedia.org/wiki/Chemical%20symbol | Chemical symbols are the abbreviations used in chemistry, mainly for chemical elements; but also for functional groups, chemical compounds, and other entities. Element symbols for chemical elements, also known as atomic symbols, normally consist of one or two letters from the Latin alphabet and are written with the first letter capitalised.
History
Earlier symbols for chemical elements stem from classical Latin and Greek vocabulary. For some elements, this is because the material was known in ancient times, while for others, the name is a more recent invention. For example, Pb is the symbol for lead (plumbum in Latin); Hg is the symbol for mercury (hydrargyrum in Greek); and He is the symbol for helium (a Neo-Latin name) because helium was not known in ancient Roman times. Some symbols come from other sources, like W for tungsten (Wolfram in German) which was not known in Roman times.
A three-letter temporary symbol may be assigned to a newly synthesized (or not yet synthesized) element. For example, "Uno" was the temporary symbol for hassium (element 108) which had the temporary name of unniloctium, based on the digits of its atomic number. There are also some historical symbols that are no longer officially used.
Extension of the symbol
In addition to the letters for the element itself, additional details may be added to the symbol as superscripts or subscripts to specify a particular isotope, ionization or oxidation state, or other atomic detail. A few isotopes have their own specific symbols rather than just an isotopic detail added to their element symbol.
Attached subscripts or superscripts specifying a nuclide or molecule have the following meanings and positions:
The nucleon number (mass number) is shown in the left superscript position (e.g., 14N). This number defines the specific isotope. Various letters, such as "m" and "f" may also be used here to indicate a nuclear isomer (e.g., 99mTc). Alternately, the number here can represent a specific spin state (e.g., 1O2). These details can be omitted if not relevant in a certain context.
The proton number (atomic number) may be indicated in the left subscript position (e.g., 64Gd). The atomic number is redundant to the chemical element, but is sometimes used to emphasize the change of numbers of nucleons in a nuclear reaction.
If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., state of ionization Ca2+).
The number of atoms of an element in a molecule or chemical compound is shown in the right subscript position (e.g., N2 or Fe2O3). If this number is one, it is normally omitted - the number one is implicitly understood if unspecified.
A radical is indicated by a dot on the right side (e.g., Cl• for a neutral chlorine atom). This is often omitted unless relevant to a certain context because it is already deducible from the charge and atomic number, as generally true for nonbonded valence electrons in skeletal structures.
Many functional groups also have their own chemical symbol, e.g. Ph for the phenyl group, and Me for the methyl group.
A list of current, dated, proposed, and historical signs and symbols is included here with their significations. Also given are each element's atomic number, atomic weight or the atomic mass of the most stable isotope, group and period numbers on the periodic table, and the etymology of the symbol.
Symbols for chemical elements
Symbols and names not currently used
The following is a list of symbols and names formerly used or suggested for elements, including symbols for placeholder names and names given by discredited claimants for discovery.
Systematic chemical symbols
These symbols are based on systematic element names, which are now replaced by trivial (non-systematic) element names and symbols. Data is given in order of: atomic number, systematic symbol, systematic name; trivial symbol, trivial name.
101: Unu, unnilunium; Md, mendelevium.
102: Unb, unnilbium; No, nobelium.
103: Unt, unniltrium; Lr, lawrencium.
104: Unq, unnilquadium; Rf, rutherfordium.
105: Unp, unnilpentium; Db, dubnium.
106: Unh, unnilhexium; Sg, seaborgium.
107: Uns, unnilseptium; Bh, bohrium.
108: Uno, unniloctium; Hs, hassium.
109: Une, unnilennium; Mt, meitnerium.
110: Uun, ununnilium; Ds, darmstadtium.
111: Uuu, unununium; Rg, roentgenium.
112: Uub, ununbium; Cn, copernicium.
113: Uut, ununtrium; Nh, nihonium.
114: Uuq, ununquadium; Fl, flerovium.
115: Uup, ununpentium; Mc, moscovium.
116: Uuh, ununhexium; Lv, livermorium.
117: Uus, ununseptium; Ts, tennessine.
118: Uuo, ununoctium; Og, oganesson.
When elements beyond oganesson (starting with ununennium, Uue, element 119), are discovered; their systematic name and symbol will presumably be superseded by a trivial name and symbol.
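Because the systematic names and symbols are generated mechanically from the digits of the atomic number, the rule is easy to express in code. The sketch below is only an illustration (the helper function is hypothetical, not part of any chemistry library) and assumes the usual IUPAC digit roots together with the two standard spelling elisions.

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name_and_symbol(atomic_number: int) -> tuple[str, str]:
    """Derive the temporary IUPAC systematic name and three-letter symbol."""
    digits = [int(d) for d in str(atomic_number)]
    name = "".join(ROOTS[d] for d in digits) + "ium"
    # Elisions: "bi"/"tri" before "ium" drop one i; "enn" before "nil" drops one n.
    name = name.replace("iium", "ium").replace("nnn", "nn")
    symbol = "".join(ROOTS[d][0] for d in digits).capitalize()
    return name, symbol

print(systematic_name_and_symbol(108))  # ('unniloctium', 'Uno')
print(systematic_name_and_symbol(119))  # ('ununennium', 'Uue')
```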
Alchemical symbols
The following ideographic symbols were used in alchemy to denote elements known since ancient times. Not included in this list are spurious elements, such as the classical elements fire and water or phlogiston, and substances now known to be compounds. Many more symbols were in at least sporadic use: one early 17th-century alchemical manuscript lists 22 symbols for mercury alone.
Planetary names and symbols for the metals – the seven planets and seven metals known since Classical times in Europe and the Mideast – were ubiquitous in alchemy. The association of what are anachronistically known as planetary metals started breaking down with the discovery of antimony, bismuth and zinc in the 16th century. Alchemists would typically call the metals by their planetary names, e.g. "Saturn" for lead and "Mars" for iron; compounds of tin, iron and silver continued to be called "jovial", "martial" and "lunar"; or "of Jupiter", "of Mars" and "of the moon", through the 17th century. The tradition remains today with the name of the element mercury, where chemists decided the planetary name was preferable to common names like "quicksilver", and in a few archaic terms such as lunar caustic (silver nitrate) and saturnism (lead poisoning).
Daltonian symbols
The following symbols were employed by John Dalton in the early 1800s as the periodic table of elements was being formulated. Not included in this list are substances now known to be compounds, such as certain rare-earth mineral blends. Modern alphabetic notation was introduced in 1814 by Jöns Jakob Berzelius; its precursor can be seen in Dalton's circled letters for the metals, especially in his augmented table from 1810.
A trace of Dalton's conventions also survives in ball-and-stick models of molecules, where balls for carbon are black and for oxygen red.
Symbols for named isotopes
The following is a list of isotopes which have been given unique symbols; it is not a list of current systematic symbols. The symbols for isotopes of hydrogen, deuterium (D) and tritium (T), are still in use today, as is thoron (Tn) for radon-220 (though not actinon; An usually instead means a generic actinide). Heavy water and other deuterated solvents are commonly used in chemistry, and it is convenient to use a single character rather than a symbol with a subscript in these cases. The practice also continues with tritium compounds. When the name of the solvent is given, a lowercase d is sometimes used. For example, d6-benzene or C6D6 can be used instead of C6[2H]6.
The symbols for isotopes of elements other than hydrogen and radon are no longer used in the scientific community. Many of these symbols were designated during the early years of radiochemistry, and several isotopes (namely those in the decay chains of actinium, radium, and thorium) bear placeholder names using the early naming system devised by Ernest Rutherford.
Other symbols
In Chinese, each chemical element has a dedicated character, usually created for the purpose (see Chemical elements in East Asian languages). However, Latin symbols are also used in Chinese, especially in formulas.
General:
A: A deprotonated acid or an anion
An: any actinide
B: A base, often in the context of Lewis acid–base theory or Brønsted–Lowry acid–base theory
E: any element or electrophile
L: any ligand
Ln: any lanthanide
M: any metal
Mm: mischmetal (occasionally used)
Ng: any noble gas (Rg is sometimes used, but that is also used for the element roentgenium: see above)
Nu: any nucleophile
R: any unspecified radical (moiety) not important to the discussion
St: steel (occasionally used)
X: any halogen (or sometimes pseudohalogen)
From organic chemistry:
Ac: acetyl – (also used for the element actinium: see above)
Ad: 1-adamantyl
All: allyl
Am: amyl (pentyl) – (also used for the element americium: see above)
Ar: aryl – (also used for the element argon: see above)
Bn: benzyl
Bs: brosyl or (outdated) benzenesulfonyl
Bu: butyl (i-, s-, or t- prefixes may be used to denote iso-, sec-, or tert- isomers, respectively)
Bz: benzoyl
Cp: cyclopentadienyl
Cp*: pentamethylcyclopentadienyl
Cy: cyclohexyl
Cyp: cyclopentyl
Et: ethyl
Me: methyl
Mes: mesityl (2,4,6-trimethylphenyl)
Ms: mesyl (methylsulfonyl)
Np: neopentyl – (also used for the element neptunium: see above)
Ns: nosyl
Pent: pentyl
Ph, Φ: phenyl
Pr: propyl – (i- prefix may be used to denote isopropyl. Also used for the element praseodymium: see above)
R: In organic chemistry contexts, an unspecified "R" is often understood to be an alkyl group
Tf: triflyl (trifluoromethanesulfonyl)
Tr, Trt: trityl (triphenylmethyl)
Ts, Tos: tosyl (para-toluenesulfonyl) – (Ts also used for the element tennessine: see above)
Vi: vinyl
Exotic atoms:
Mu: muonium
Pn: protonium
Ps: positronium
Hazard pictographs are another type of symbols used in chemistry.
See also
List of chemical elements naming controversies
List of elements
Nuclear notation
Notes
References
Elementymology & Elements Multidict, element name etymologies. Retrieved July 15, 2005.
Atomic Weights of the Elements 2001, Pure Appl. Chem. 75(8), 1107–1122, 2003. Retrieved June 30, 2005. Atomic weights of elements with atomic numbers from 1–109 taken from this source.
IUPAC Standard Atomic Weights Revised (2005).
WebElements Periodic Table. Retrieved June 30, 2005. Atomic weights of elements with atomic numbers 110–116 taken from this source.
Leighton, Robert B. Principles of Modern Physics. New York: McGraw-Hill. 1959.
Scerri, E.R. "The Periodic Table, Its Story and Its Significance". New York, Oxford University Press. 2007.
External links
Berzelius' List of Elements
History of IUPAC Atomic Weight Values (1883 to 1997)
Committee on Nomenclature, Terminology, and Symbols , American Chemical Society
Chemistry | Chemical symbol | Physics,Mathematics | 2,658
61,791,166 | https://en.wikipedia.org/wiki/Generation%20expansion%20planning | Generation expansion planning (also known as GEP) is the problem of finding an optimal plan for installing new generation units that satisfies both technical and financial limits. GEP is a challenging problem because of its large scale, its long planning horizon, and the nonlinear way in which generation unit size enters the problem. Because of a lack of information, companies have to solve this problem in a risky environment: the competition between generation companies, each seeking to maximize its own benefit, makes them conceal their strategies. Under such ambiguous conditions, various nonlinear solution methods have been proposed for this sophisticated problem, based on strategies including game theory, two-level game models, multi-agent systems, genetic algorithms, particle swarm optimization and so forth.
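As a toy illustration of the optimization at the core of GEP, the sketch below brute-forces the cheapest mix of candidate units that covers a peak demand. All capacities, costs, demand figures and limits are invented for this example; a realistic GEP model is multi-period and includes reliability, fuel and emission constraints, which is why the large-scale methods mentioned above are used in practice.

```python
from itertools import product

candidates = {               # technology: (capacity in MW, annualized cost per unit)
    "coal": (600, 90),
    "ccgt": (400, 55),
    "wind": (150, 20),
}
peak_demand_mw = 1900
max_units = 6                # a crude stand-in for per-technology technical limits

best = None
for counts in product(range(max_units + 1), repeat=len(candidates)):
    capacity = sum(n * candidates[tech][0] for n, tech in zip(counts, candidates))
    cost = sum(n * candidates[tech][1] for n, tech in zip(counts, candidates))
    if capacity >= peak_demand_mw and (best is None or cost < best[0]):
        best = (cost, dict(zip(candidates, counts)))

print(best)                  # cheapest expansion plan that still covers peak demand
```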
See also
Demand response
Power system
References
Electric power generation
Planning
Power engineering | Generation expansion planning | Engineering | 159 |
24,350 | https://en.wikipedia.org/wiki/Projective%20plane | In mathematics, a projective plane is a geometric structure that extends the concept of a plane. In the ordinary Euclidean plane, two lines typically intersect at a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect at exactly one point.
Renaissance artists, in developing the techniques of drawing in perspective, laid the groundwork for this mathematical topic. The archetypical example is the real projective plane, also known as the extended Euclidean plane. This example, in slightly different guises, is important in algebraic geometry, topology and projective geometry where it may be denoted variously by , RP2, or P2(R), among other notations. There are many other projective planes, both infinite, such as the complex projective plane, and finite, such as the Fano plane.
A projective plane is a 2-dimensional projective space. Not all projective planes can be embedded in 3-dimensional projective spaces; such embeddability is a consequence of a property known as Desargues' theorem, not shared by all projective planes.
Definition
A projective plane is a rank 2 incidence structure consisting of a set of points P, a set of lines L, and a symmetric relation on the set P ∪ L, called incidence, having the following properties:
Given any two distinct points, there is exactly one line incident with both of them.
Given any two distinct lines, there is exactly one point incident with both of them.
There are four points such that no line is incident with more than two of them.
The second condition means that there are no parallel lines. The last condition excludes the so-called degenerate cases (see below). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point P is incident with line ℓ" is used instead of either "P is on ℓ" or "ℓ passes through P".
It follows from the definition that the number of points incident with any given line in a projective plane is the same as the number of lines incident with any given point. This common (possibly infinite) cardinal number determines the order of the plane: a projective plane with N + 1 points on each line is said to have order N.
Examples
The extended Euclidean plane
To turn the ordinary Euclidean plane into a projective plane, proceed as follows:
To each parallel class of lines (a maximum set of mutually parallel lines) associate a single new point. That point is to be considered incident with each line in its class. The new points added are distinct from each other. These new points are called points at infinity.
Add a new line, which is considered incident with all the points at infinity (and no other points). This line is called the line at infinity.
The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane. The process outlined above, used to obtain it, is called "projective completion" or projectivization. This plane can also be constructed by starting from R3 viewed as a vector space, see below.
Projective Moulton plane
The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged. Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative x-coordinates, but the rest of their points are replaced with the points of the line with the same y-intercept but twice the slope wherever their x-coordinate is positive.
The Moulton plane has parallel classes of lines and is an affine plane. It can be projectivized, as in the previous example, to obtain the projective Moulton plane. Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane.
A finite example
This example has just thirteen points and thirteen lines. We label the points P1, ..., P13 and the lines m1, ..., m13. The incidence relation (which points are on which lines) can be given by the following incidence matrix. The rows are labelled by the points and the columns are labelled by the lines. A 1 in row i and column j means that the point Pi is on the line mj, while a 0 means that they are not incident. The matrix is in Paige–Wexler normal form.

       m1  m2  m3  m4  m5  m6  m7  m8  m9  m10 m11 m12 m13
P1      1   1   1   1   0   0   0   0   0   0   0   0   0
P2      1   0   0   0   1   1   1   0   0   0   0   0   0
P3      1   0   0   0   0   0   0   1   1   1   0   0   0
P4      1   0   0   0   0   0   0   0   0   0   1   1   1
P5      0   1   0   0   1   0   0   1   0   0   1   0   0
P6      0   1   0   0   0   1   0   0   1   0   0   1   0
P7      0   1   0   0   0   0   1   0   0   1   0   0   1
P8      0   0   1   0   1   0   0   0   1   0   0   0   1
P9      0   0   1   0   0   1   0   0   0   1   1   0   0
P10     0   0   1   0   0   0   1   1   0   0   0   1   0
P11     0   0   0   1   1   0   0   0   0   1   0   1   0
P12     0   0   0   1   0   1   0   1   0   0   0   0   1
P13     0   0   0   1   0   0   1   0   1   0   1   0   0
To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1s appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1s appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P1, P4, P5, and P8, for example, will satisfy the third condition. This example is known as the projective plane of order three.
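These two checks are easy to mechanize. The sketch below (an illustration, not part of the article) encodes the incidence matrix above as, for each point, the set of lines through it, and verifies both conditions.

```python
from itertools import combinations

# lines_through[i] is the set of line indices j with a 1 in row Pi, column mj above.
lines_through = {
    1: {1, 2, 3, 4},   2: {1, 5, 6, 7},   3: {1, 8, 9, 10},   4: {1, 11, 12, 13},
    5: {2, 5, 8, 11},  6: {2, 6, 9, 12},  7: {2, 7, 10, 13},  8: {3, 5, 9, 13},
    9: {3, 6, 10, 11}, 10: {3, 7, 8, 12}, 11: {4, 5, 10, 12}, 12: {4, 6, 8, 13},
    13: {4, 7, 9, 11},
}

# Any two distinct points lie on exactly one common line.
assert all(len(lines_through[p] & lines_through[q]) == 1
           for p, q in combinations(lines_through, 2))

# Dually, any two distinct lines meet in exactly one common point.
points_on = {m: {p for p in lines_through if m in lines_through[p]} for m in range(1, 14)}
assert all(len(points_on[m] & points_on[n]) == 1 for m, n in combinations(points_on, 2))

print("order-3 incidence checks pass")
```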
Vector space construction
Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. In this construction, each "point" of the real projective plane is the one-dimensional subspace (a geometric line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a (geometric) plane through the origin in the 3-space. This idea can be generalized and made more precise as follows.
Let K be any division ring (skewfield). Let K3 denote the set of all triples x = (x0, x1, x2) of elements of K (a Cartesian product viewed as a vector space). For any nonzero x in K3, the minimal subspace of K3 containing x (which may be visualized as all the vectors in a line through the origin) is the subset

{kx : k ∈ K}

of K3. Similarly, let x and y be linearly independent elements of K3, meaning that kx + my = 0 implies that k = m = 0. The minimal subspace of K3 containing x and y (which may be visualized as all the vectors in a plane through the origin) is the subset

{kx + my : k ∈ K, m ∈ K}

of K3. This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing k and m and taking the multiples of the resulting vector. Different choices of k and m that are in the same ratio will give the same line.
The projective plane over K, denoted PG(2, K) or KP2, has a set of points consisting of all the 1-dimensional subspaces in K3. A subset L of the points of PG(2, K) is a line in PG(2, K) if there exists a 2-dimensional subspace of K3 whose set of 1-dimensional subspaces is exactly L.
Verifying that this construction produces a projective plane is usually left as a linear algebra exercise.
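One way to make the exercise concrete is to enumerate the subspaces directly for a small prime field. The sketch below is only an illustration and is restricted to prime orders p (so that arithmetic modulo p gives the field); it represents each point by the set of nonzero vectors in a 1-dimensional subspace of K3 and each line by the set of points annihilated by a coefficient vector.

```python
from itertools import product

def pg2(p):
    """Points and lines of PG(2, p) for a prime p, each as a set of representatives."""
    vectors = [v for v in product(range(p), repeat=3) if v != (0, 0, 0)]
    span = lambda v: frozenset(tuple((k * x) % p for x in v) for k in range(1, p))
    points = {span(v) for v in vectors}
    lines = {}                        # keyed by span(a): proportional a give the same line
    for a in vectors:                 # the line with coefficients a: all points x with a.x = 0
        lines[span(a)] = frozenset(
            pt for pt in points
            if sum(ai * xi for ai, xi in zip(a, next(iter(pt)))) % p == 0)
    return points, set(lines.values())

points, lines = pg2(3)
assert len(points) == len(lines) == 3**2 + 3 + 1 == 13   # the counts expected of order 3
assert all(len(l) == 3 + 1 for l in lines)               # each line carries p + 1 points
print(len(points), "points,", len(lines), "lines")
```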
An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set K3 ∖ {(0, 0, 0)} modulo the equivalence relation
x ~ kx, for all k in K×.
Lines in the projective plane are defined exactly as above.
The coordinates of a point in PG(2, K) are called homogeneous coordinates. Each triple (x0, x1, x2) represents a well-defined point in PG(2, K), except for the triple (0, 0, 0), which represents no point. Each point in PG(2, K), however, is represented by many triples.
If K is a topological space, then KP2 inherits a topology via the product, subspace, and quotient topologies.
Classical examples
The real projective plane RP2 arises when K is taken to be the real numbers, R. As a closed, non-orientable real 2-manifold, it serves as a fundamental example in topology.
In this construction, consider the unit sphere centered at the origin in R3. Each of the R3 lines in this construction intersects the sphere at two antipodal points. Since the R3 line represents a point of RP2, we will obtain the same model of RP2 by identifying the antipodal points of the sphere. The lines of RP2 will be the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry.
The complex projective plane CP2 arises when K is taken to be the complex numbers, C. It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes) serve as fundamental examples in algebraic geometry.
The quaternionic projective plane HP2 is also of independent interest.
Finite field planes
By Wedderburn's Theorem, a finite division ring must be commutative and so be a field. Thus, the finite examples of this construction are known as "field planes". Taking K to be the finite field of q = pn elements with prime p produces a projective plane of q2 + q + 1 points. The field planes are usually denoted by PG(2, q) where PG stands for projective geometry, the "2" is the dimension and q is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2, 2). The third example above is the projective plane PG(2, 3).
The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the figure at right, the seven points are shown as small balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a collineation or symmetry of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group () has 168 elements.
Desargues' theorem and Desarguesian planes
The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above. These planes are called Desarguesian planes, named after Girard Desargues. The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that can not be constructed in this manner are called non-Desarguesian planes, and the Moulton plane given above is an example of one. The PG(2, K) notation is reserved for the Desarguesian planes. When K is a field, a very common case, they are also known as field planes and if the field is a finite field they can be called Galois planes.
Subplanes
A subplane of a projective plane (P, L, I) is a pair of subsets (P′, L′), where P′ ⊆ P and L′ ⊆ L, that is itself a projective plane with respect to the restriction of the incidence relation to P′ and L′.
Bruck proves the following theorem. Let Π be a finite projective plane of order N with a proper subplane Π0 of order M. Then either N = M2 or N ≥ M2 + M.
A subplane Π0 of a projective plane Π is a Baer subplane if every line of Π not belonging to Π0 is incident with exactly one point of Π0 and every point of Π not belonging to Π0 is incident with exactly one line of Π0.
A finite Desarguesian projective plane of order q admits Baer subplanes (all necessarily Desarguesian) if and only if q is a square; in this case the order of the Baer subplanes is √q.
In the finite Desarguesian planes PG(2, pn), the subplanes have orders which are the orders of the subfields of the finite field GF(pn), that is, pi where i is a divisor of n. In non-Desarguesian planes however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order M in a plane of order N with M2 + M = N is an open question. If such subplanes existed there would be projective planes of composite (non-prime power) order.
Fano subplanes
A Fano subplane is a subplane isomorphic to PG(2, 2), the unique projective plane of order 2.
If you consider a quadrangle (a set of 4 points no three collinear) in this plane, the points determine six of the lines of the plane. The remaining three points (called the diagonal points of the quadrangle) are the points where the lines that do not intersect at a point of the quadrangle meet. The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle).
In finite desarguesian planes, PG(2, q), Fano subplanes exist if and only if q is even (that is, a power of 2). The situation in non-desarguesian planes is unsettled. They could exist in any non-desarguesian plane of order greater than 6, and indeed, they have been found in all non-desarguesian planes in which they have been looked for (in both odd and even orders).
An open question, apparently due to Hanna Neumann though not published by her, is: Does every non-desarguesian plane contain a Fano subplane?
A theorem concerning Fano subplanes, due to Gleason, is:
If every quadrangle in a finite projective plane has collinear diagonal points, then the plane is desarguesian (of even order).
Affine planes
Projectivization of the Euclidean plane produced the real projective plane. The inverse operation—starting with a projective plane, remove one line and all the points incident with that line—produces an affine plane.
Definition
More formally an affine plane consists of a set of lines and a set of points, and a relation between points and lines called incidence, having the following properties:
Given any two distinct points, there is exactly one line incident with both of them.
Given any line l and any point P not incident with l, there is exactly one line incident with P that does not meet l.
There are four points such that no line is incident with more than two of them.
The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines".
The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. The order of a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2, q) are denoted by AG(2, q).
There is a projective plane of order N if and only if there is an affine plane of order N. When there is only one affine plane of order N there is only one projective plane of order N, but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well.
Construction of projective planes from affine planes
The affine plane K2 over K embeds into KP2 via the map which sends affine (non-homogeneous) coordinates to homogeneous coordinates,

(x1, x2) ↦ (1, x1, x2).

The complement of the image is the set of points of the form (0, x1, x2). From the point of view of the embedding just given, these points are the points at infinity. They constitute a line in KP2—namely, the line arising from the plane

{(0, x1, x2) : x1, x2 ∈ K}

in K3—called the line at infinity. The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, x1, x2) is where all lines of slope x2 / x1 intersect. Consider for example the two lines

u = {(x, 0) : x ∈ K} and y = {(x, 1) : x ∈ K}

in the affine plane K2. These lines have slope 0 and do not intersect. They can be regarded as subsets of KP2 via the embedding above, but these subsets are not lines in KP2. Add the point (0, 1, 0) to each subset; that is, let

ū = {(1, x, 0) : x ∈ K} ∪ {(0, 1, 0)} and ȳ = {(1, x, 1) : x ∈ K} ∪ {(0, 1, 0)}.

These are lines in KP2; ū arises from the plane

{(x0, x1, 0) : x0, x1 ∈ K}

in K3, while ȳ arises from the plane

{(x0, x1, x0) : x0, x1 ∈ K}.

The projective lines ū and ȳ intersect at (0, 1, 0). In fact, all lines in K2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in KP2.

The embedding of K2 into KP2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding

(x1, x2) ↦ (x1, 1, x2)

has as its complement those points of the form (x1, 0, x2), which are then regarded as points at infinity.
When an affine plane does not have the form of K2 with K a division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra".
Generalized coordinates
One can construct a coordinate "ring"—a so-called planar ternary ring (not a genuine ring)—corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane (OP2), a projective plane over the octonions, is one of these because the octonions do not form a division ring.
Conversely, given a planar ternary ring (R, T), a projective plane can be constructed (see below). The relationship is not one to one. A projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operator T can be used to produce two binary operators on the set R, by:
a + b = T(a, 1, b), and
a ⋅ b = T(a, b, 0).
The ternary operator is linear if T(a, b, c) = a ⋅ b + c for all a, b, c in R. When the set of coordinates of a projective plane actually forms a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring.
Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring, while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a Pappian plane. Alternative, not necessarily associative, division algebras like the octonions correspond to Moufang planes.
There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus' theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; give a proof that uses only more "elementary" algebraic facts about division rings.
To describe a finite projective plane of order N(≥ 2) using non-homogeneous coordinates and a planar ternary ring:
Let one point be labelled (∞).
Label N points, (r) where r = 0, ..., (N − 1).
Label N2 points, (r, c) where r, c = 0, ..., (N − 1).
On these points, construct the following lines:
One line [∞] = { (∞), (0), ..., (N − 1)}
N lines [c] = {(∞), (c, 0), ..., (c, N − 1)}, where c = 0, ..., (N − 1)
N2 lines [r, c] = {(r) and the points (x, T(x, r, c)) }, where x, r, c = 0, ..., (N − 1) and T is the ternary operator of the planar ternary ring.
For example, for N = 2 we can use the symbols {0, 1} associated with the finite field of order 2. The ternary operation defined by T(x, r, c) = xr + c, with the operations on the right being the multiplication and addition in the field, yields the following:
One line [∞] = { (∞), (0), (1)},
2 lines [c] = {(∞), (c,0), (c,1) : c = 0, 1},
[0] = {(∞), (0,0), (0,1) }
[1] = {(∞), (1,0), (1,1) }
4 lines [r, c]: (r) and the points (i, ir + c), where i = 0, 1 : r, c = 0, 1.
[0,0]: {(0), (0,0), (1,0) }
[0,1]: {(0), (0,1), (1,1) }
[1,0]: {(1), (0,0), (1,1) }
[1,1]: {(1), (0,1), (1,0) }
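The seven lines just listed can be rebuilt and checked mechanically. In the sketch below (illustrative only; the tuple encodings of the symbols (∞), (r) and (r, c) are ad hoc choices for this example) the linear ternary operator over the field of order 2 reproduces the lines above, which are then verified to pairwise meet in exactly one point.

```python
from itertools import combinations

INF = ("inf",)                                   # the point labelled (infinity)
T = lambda x, r, c: (x * r + c) % 2              # linear ternary operator over GF(2)

lines = [
    {INF, ("pt", 0), ("pt", 1)},                 # the line [infinity]
    {INF, ("xy", 0, 0), ("xy", 0, 1)},           # the line [c] for c = 0
    {INF, ("xy", 1, 0), ("xy", 1, 1)},           # the line [c] for c = 1
]
for r in (0, 1):                                 # the four lines [r, c]
    for c in (0, 1):
        lines.append({("pt", r)} | {("xy", x, T(x, r, c)) for x in (0, 1)})

points = set().union(*lines)
assert len(points) == 7 and len(lines) == 7
assert all(len(a & b) == 1 for a, b in combinations(lines, 2))
print("the listed lines form a projective plane of order 2 (the Fano plane)")
```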
Degenerate planes
Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane. They are:
the empty set;
a single point, no lines;
a single line, no points;
a single point, a collection of lines, the point is incident with all of the lines;
a single line, a collection of points, the points are all incident with the line;
a point P incident with a line m, an arbitrary collection of lines all incident with P and an arbitrary collection of points all incident with m;
a point P not incident with a line m, an arbitrary (can be empty) collection of lines all incident with P and all the points of intersection of these lines with m.
These seven cases are not independent, the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way):
1) For any number of points P1, ..., Pn, and lines L1, ..., Lm,
L1 = { P1, P2, ..., Pn}
L2 = { P1 }
L3 = { P1 }
...
Lm = { P1 }
2) For any number of points P1, ..., Pn, and lines L1, ..., Ln, (same number of points as lines)
L1 = { P2, P3, ..., Pn }
L2 = { P1, P2 }
L3 = { P1, P3 }
...
Ln = { P1, Pn }
Collineations
A collineation of a projective plane is a bijective map of the plane to itself which maps points to points and lines to lines that preserves incidence, meaning that if σ is a bijection and point P is on line m, then Pσ is on mσ.
If σ is a collineation of a projective plane, a point P with P = Pσ is called a fixed point of σ, and a line m with m = mσ is called a fixed line of σ. The points on a fixed line need not be fixed points, their images under σ are just constrained to lie on this line. The collection of fixed points and fixed lines of a collineation form a closed configuration, which is a system of points and lines that satisfy the first two but not necessarily the third condition in the definition of a projective plane. Thus, the fixed point and fixed line structure for any collineation either form a projective plane by themselves, or a degenerate plane. Collineations whose fixed structure forms a plane are called planar collineations.
Homography
A homography (or projective transformation) of PG(2, K) is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates they can be represented by invertible 3 × 3 matrices over K which act on the points of PG(2, K) by y = Mx, where x and y are points in K3 (vectors) and M is an invertible 3 × 3 matrix over K. Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices, called the projective linear group.
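A small numerical sketch (illustrative; the matrix entries and the point are arbitrary) shows the action y = Mx on homogeneous coordinates over K = R, and that proportional matrices define the same homography.

```python
import numpy as np

M = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])                  # any invertible 3x3 real matrix

def act(M, x):
    y = M @ x                                    # y = Mx on homogeneous coordinates
    return y / y[np.nonzero(y)[0][0]]            # scale the first nonzero coordinate to 1

x = np.array([1.0, 2.0, 3.0])                    # homogeneous coordinates of a point
print(act(M, x))
print(np.allclose(act(M, x), act(5.0 * M, x)))   # True: M and 5M give the same point
```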
Another type of collineation of PG(2, K) is induced by any automorphism of K; these are called automorphic collineations. If α is an automorphism of K, then the collineation that applies α to each homogeneous coordinate, (x0, x1, x2) → (α(x0), α(x1), α(x2)), is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2, K) are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations.
Plane duality
A projective plane is defined axiomatically as an incidence structure, in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines. As P and L are only sets one can interchange their roles and define a plane dual structure.
By interchanging the role of "points" and "lines" in
C = (P, L, I)
we obtain the dual structure
C* = (L, P, I*),
where I* is the converse relation of I.
In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as dualizing the statement.
If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a statement of the proof "in C*."
In the projective plane C, it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C* is also a projective plane, called the dual plane of C.
If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2, K) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes and some that are, such as the Hughes planes.
The Principle of plane duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C.
Correlations
A duality is a map from a projective plane C = (P, L, I) to its dual plane C* = (L, P, I*) (see above) which preserves incidence. That is, a duality σ will map points to lines and lines to points (Pσ ⊆ L and Lσ ⊆ P) in such a way that if a point Q is on a line m (denoted by Q I m) then Qσ I* mσ (equivalently, mσ I Qσ). A duality which is an isomorphism is called a correlation. If a correlation exists then the projective plane C is self-dual.
In the special case that the projective plane is of the PG(2, K) type, with K a division ring, a duality is called a reciprocity. These planes are always self-dual. By the fundamental theorem of projective geometry a reciprocity is the composition of an automorphic function of K and a homography. If the automorphism involved is the identity, then the reciprocity is called a projective correlation.
A correlation of order two (an involution) is called a polarity. If a correlation φ is not a polarity then φ2 is a nontrivial collineation.
Finite projective planes
It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer N ≥ 2 such that the plane has
N2 + N + 1 points,
N2 + N + 1 lines,
N + 1 points on each line, and
N + 1 lines through each point.
The number N is called the order of the projective plane.
The projective plane of order 2 is called the Fano plane. See also the article on finite geometry.
Using the vector space construction with finite fields, there exists a projective plane of order N = pn for each prime power pn. In fact, for all known finite projective planes, the order N is a prime power.
The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck–Ryser–Chowla theorem that if the order N is congruent to 1 or 2 mod 4, it must be the sum of two squares. This rules out N = 6. The next case, N = 10, has been ruled out by massive computer calculations. Nothing more is known; in particular, the question of whether there exists a finite projective plane of order 12 is still open.
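The Bruck–Ryser–Chowla condition is simple to test by machine; the sketch below (an illustration, not part of the article) lists the small orders it excludes, and shows that it says nothing about 10 or 12.

```python
def passes_bruck_ryser_chowla(n: int) -> bool:
    """False only when the theorem forbids a projective plane of order n."""
    if n % 4 not in (1, 2):
        return True                              # the theorem is silent in this case
    limit = int(n**0.5) + 1
    return any(a * a + b * b == n for a in range(limit) for b in range(limit))

excluded = [n for n in range(2, 15) if not passes_bruck_ryser_chowla(n)]
print(excluded)        # [6, 14]; note that 10 and 12 are not excluded by the theorem
```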
Another longstanding open problem is whether there exist finite projective planes of prime order which are not finite field planes (equivalently, whether there exists a non-Desarguesian projective plane of prime order).
A projective plane of order N is a Steiner system S(2, N + 1, N2 + N + 1)
(see Steiner system). Conversely, one can prove that all Steiner systems of this form (S(2, N + 1, N2 + N + 1)) are projective planes.
The number of mutually orthogonal Latin squares of order N is at most N − 1, and N − 1 mutually orthogonal Latin squares of order N exist if and only if there is a projective plane of order N.
While the classification of all projective planes is far from complete, results are known for small orders:
2 : all isomorphic to PG(2, 2)
3 : all isomorphic to PG(2, 3)
4 : all isomorphic to PG(2, 4)
5 : all isomorphic to PG(2, 5)
6 : impossible as the order of a projective plane, proved by Tarry who showed that Euler's thirty-six officers problem has no solution. However, the connection between these problems was not known until Bose proved it in 1938.
7 : all isomorphic to PG(2, 7)
8 : all isomorphic to PG(2, 8)
9 : PG(2, 9), and three more different (non-isomorphic) non-Desarguesian planes: a Hughes plane, a Hall plane, and the dual of this Hall plane. All are described in .
10 : impossible as an order of a projective plane, proved by heavy computer calculation.
11 : at least PG(2, 11), others are not known but possible.
12 : it is conjectured to be impossible as an order of a projective plane.
Projective planes in higher-dimensional projective spaces
Projective planes may be thought of as projective geometries of dimension two. Higher-dimensional projective geometries can be defined in terms of incidence relations in a manner analogous to the definition of a projective plane. These turn out to be "tamer" than the projective planes since the extra degrees of freedom permit Desargues' theorem to be proved geometrically in the higher-dimensional geometry. This means that the coordinate "ring" associated to the geometry must be a division ring (skewfield) K, and the projective geometry is isomorphic to the one constructed from the vector space Kd+1, i.e. PG(d, K). As in the construction given earlier, the points of the d-dimensional projective space PG(d, K) are the lines through the origin in Kd+1 and a line in PG(d, K) corresponds to a plane through the origin in Kd+1. In fact, each i-dimensional object in PG(d, K), with 0 ≤ i ≤ d, is an (i + 1)-dimensional (algebraic) vector subspace of Kd+1 ("goes through the origin"). The projective spaces in turn generalize to the Grassmannian spaces.
It can be shown that if Desargues' theorem holds in a projective space of dimension greater than two, then it must also hold in all planes that are contained in that space. Since there are projective planes in which Desargues' theorem fails (non-Desarguesian planes), these planes can not be embedded in a higher-dimensional projective space. Only the planes from the vector space construction PG(2, K) can appear in projective spaces of higher dimension. Some disciplines in mathematics restrict the meaning of projective plane to only this type of projective plane since otherwise general statements about projective spaces would always have to mention the exceptions when the geometric dimension is two.
See also
Block design – a generalization of a finite projective plane.
Combinatorial design
Difference set
Incidence structure
Generalized polygon
Projective geometry
Non-Desarguesian plane
Smooth projective plane
Transversals in finite projective planes
Truncated projective plane – a projective plane with one vertex removed.
VC dimension of a finite projective plane
Notes
References
External links
G. Eric Moorhouse, Projective Planes of Small Order, (2003)
Ch. Weibel: Survey of Nondesarguesian planes
Projective geometry
Incidence geometry
Euclidean plane geometry
Algebraic geometry
Hypergraphs
Computer-assisted proofs | Projective plane | Mathematics | 8,549 |
238,928 | https://en.wikipedia.org/wiki/Cosmic%20variance | The term cosmic variance is the statistical uncertainty inherent in observations of the universe at extreme distances. It has three different but closely related meanings:
It is sometimes used, incorrectly, to mean sample variance – the difference between different finite samples of the same parent population. Such differences follow a Poisson distribution, and in this case the term sample variance should be used instead.
It is sometimes used, mainly by cosmologists, to mean the uncertainty because we can only observe one realization of all the possible observable universes. For example, we can only observe one Cosmic Microwave Background, so the measured positions of the peaks in the Cosmic Microwave Background spectrum, integrated over the visible sky, are limited by the fact that only one spectrum is observable from Earth. The observable universe viewed from another galaxy will have the peaks in slightly different places, while remaining consistent with the same physical laws, inflation, etc. This second meaning may be regarded as a special case of the third meaning.
The most widespread use, to which the rest of this article refers, reflects the fact that measurements are affected by cosmic large-scale structure, so a measurement of any region of sky (viewed from Earth) may differ from a measurement of a different region of sky (also viewed from Earth) by an amount that may be much greater than the sample variance.
This most widespread use of the term is based on the idea that it is only possible to observe part of the universe at one particular time, so it is difficult to make statistical statements about cosmology on the scale of the entire universe, as the number of independent observations (the sample size) available to us is small.
Background
The standard Big Bang model is usually supplemented with cosmic inflation. In inflationary models, the observer only sees a tiny fraction of the whole universe, much less than a billionth (1/10⁹) of the volume of the universe postulated in inflation. So the observable universe (the so-called particle horizon of the universe) is the result of processes that follow some general physical laws, including quantum mechanics and general relativity. Some of these processes are random: for example, the distribution of galaxies throughout the universe can only be described statistically and cannot be derived from first principles.
Philosophical issues
This raises philosophical problems: suppose that random physical processes happen on length scales both smaller than and bigger than the particle horizon. A physical process (such as an amplitude of a primordial perturbation in density) that happens on the horizon scale only gives us one observable realization. A physical process on a larger scale gives us zero observable realizations. A physical process on a slightly smaller scale gives us a small number of realizations.
In the case of only one realization it is difficult to draw statistical conclusions about its significance. For example, if the underlying model of a physical process implies that the observed property should occur only 1% of the time, does that really mean that the model is excluded? Consider the physical model of the citizenship of human beings in the early 21st century, where about 30% are Indian and Chinese citizens, about 5% are American citizens, about 1% are French citizens, and so on. For an observer who has only one observation (of his/her own citizenship) and who happens to be French and cannot make any external observations, the model can be rejected at the 99% significance level. Yet external observers with information unavailable to the first observer know that the model is correct.
In other words, even if the bit of the universe observed is the result of a statistical process, the observer can only view one realization of that process, so our observation is statistically insignificant for saying much about the model, unless the observer is careful to include the variance. This variance is called the cosmic variance and is separate from other sources of experimental error: a very accurate measurement of only one value drawn from a distribution still leaves considerable uncertainty about the underlying model. Variance is normally plotted separately from other sources of uncertainty. Because it is necessarily a large fraction of the signal, workers must be very careful in interpreting the statistical significance of measurements on scales close to the particle horizon.
In physical cosmology, the common way of dealing with this on the horizon scale and on slightly sub-horizon scales (where the number of occurrences is greater than one but still quite small), is to explicitly include the variance of very small statistical samples (Poisson distribution) when calculating uncertainties. This is important in describing the low multipoles of the cosmic microwave background and has been the source of much controversy in the cosmology community since the COBE and WMAP measurements.
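For the cosmic microwave background, the cosmic-variance contribution to the uncertainty of a power-spectrum multipole Cℓ is commonly quoted as √(2/(2ℓ+1)), since only 2ℓ+1 independent modes of that multipole are observable on one sky. The short Python sketch below is an illustrative aside; it assumes an ideal, noise-free experiment, and the f_sky factor is only a rough approximation for partial sky coverage.

```python
import math

def cosmic_variance_fraction(ell: int, f_sky: float = 1.0) -> float:
    """Fractional 1-sigma uncertainty on C_ell from cosmic variance alone.

    Assumes an ideal, noise-free experiment; f_sky < 1 crudely models partial
    sky coverage by reducing the number of independent modes.
    """
    return math.sqrt(2.0 / ((2 * ell + 1) * f_sky))

for ell in (2, 10, 100, 1000):
    print(ell, round(100 * cosmic_variance_fraction(ell), 1), "%")
# The quadrupole (ell = 2) is cosmic-variance limited at the ~63% level,
# while ell = 1000 is limited at only ~3%.
```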
Similar problems
A similar problem is faced by evolutionary biologists. Just as cosmologists have a sample size of one universe, biologists have a sample size of one fossil record. The problem is closely related to the anthropic principle.
Another problem of limited sample sizes in astronomy, here practical rather than essential, is the Titius–Bode law on the spacing of satellites in an orbital system. Originally observed for the Solar System, the law is hard to test elsewhere because the difficulty of observing other planetary systems has limited the available data.
References
Sources
Stephen Hawking (2003). Cosmology from the Top Down. Proceedings of the Davis Meeting on Cosmic Inflation.
External links
Cosmology from the Top Down (online)
Physical cosmology
Statistical deviation and dispersion | Cosmic variance | Physics,Astronomy | 1,091 |
529,056 | https://en.wikipedia.org/wiki/Massively%20multiplayer%20online%20game | A massively multiplayer online game (MMOG or more commonly MMO) is an online video game with a large number of players to interact in the same online game world. MMOs usually feature a huge, persistent open world, although there are games that differ. These games can be found for most network-capable platforms, including the personal computer, video game console, or smartphones and other mobile devices.
MMOs can enable players to cooperate and compete with each other on a large scale, and sometimes to interact meaningfully with people around the world. They include a variety of gameplay types, representing many video game genres.
History
The most popular type of MMOG, and the subgenre that pioneered the category, is the massively multiplayer online role-playing game (MMORPG), which descended from university mainframe computer MUD and adventure games such as Rogue and Dungeon on the PDP-10. These games predate the commercial gaming industry and the Internet, but still featured persistent worlds and other elements of MMOGs still used today.
The first graphical MMOG, and a major milestone in the creation of the genre, was the multiplayer flight combat simulation game Air Warrior by Kesmai on the GEnie online service, which first appeared in 1986. Kesmai later added 3D graphics to the game, making it the first 3D MMO.
Commercial MMORPGs gained acceptance in the late 1980s and early 1990s. The genre was pioneered by the GemStone series on GEnie, also created by Kesmai, and Neverwinter Nights, the first such game to include graphics, which debuted on AOL in 1991.
As video game developers applied MMOG ideas to other computer and video game genres, new acronyms started to develop, such as MMORTS. MMOG emerged as a generic term to cover this growing class of games.
The debuts of The Realm Online, Meridian 59 (the first 3D MMORPG), Castle Infinity (the first kid-focused MMORPG), Ultima Online, Underlight and EverQuest in the late 1990s popularized the MMORPG genre. The growth in technology meant that where Neverwinter Nights in 1991 had been limited to 50 simultaneous players (a number that grew to 500 by 1995), by 2000 a multitude of MMORPGs were each serving thousands of simultaneous players, leading the way for games such as World of Warcraft and EVE Online.
Despite the genre's focus on multiplayer gaming, AI-controlled characters are still common. NPCs and mobs who give out quests or serve as opponents are typical in MMORPGs. AI-controlled characters are not as common in action-based MMOGs.
The popularity of MMOGs was mostly restricted to the computer game market until the sixth-generation consoles, with the launch of Phantasy Star Online on the Dreamcast, as well as the emergence and growth of the online service Xbox Live. There have been a number of console MMOGs, including EverQuest Online Adventures (PlayStation 2), and the multi-console Final Fantasy XI. On PCs, the MMOG market has always been dominated by successful fantasy MMORPGs.
MMOGs have only recently begun to break into the mobile phone market. The first, Samurai Romanesque set in feudal Japan, was released in 2001 on NTT DoCoMo's iMode network in Japan. More recent developments are CipSoft's TibiaME and Biting Bit's MicroMonster, which features online and bluetooth multiplayer gaming. SmartCell Technology is in development of Shadow of Legend, which will allow gamers to continue their game on their mobile device when away from their PC.
Science fiction has also been a popular theme, featuring games such as Mankind, Anarchy Online, Eve Online, Star Wars Galaxies and The Matrix Online.
MMOGs emerged from the hard-core gamer community to the mainstream strongly in December 2003, with an analysis in the Financial Times measuring the value of the virtual property in the then-largest MMOG, EverQuest, to result in a per-capita GDP of 2,266 dollars, which would have placed the virtual world of EverQuest as the 77th wealthiest nation, on par with Croatia, Ecuador, Tunisia or Vietnam.
World of Warcraft is a dominant MMOG with 8–9 million monthly subscribers worldwide. The subscriber base dropped by one million after the expansion Wrath of the Lich King, bringing it to nine million subscribers in 2010, though it remained the most popular Western title among MMOGs. Western consumer spending on World of Warcraft represented a 58% share of the subscription MMOG market in 2009. The title has generated over $2.2 billion in cumulative consumer spending on subscriptions from 2005 through 2009.
Virtual economies
Within a majority of the MMOGs created, there is virtual currency where the player can earn and accumulate money. The uses for such virtual currency are numerous and vary from game to game. The virtual economies created within MMOGs often blur the lines between real and virtual worlds. The result is often seen as an unwanted interaction between the real and virtual economies by the players and the provider of the virtual world. This practice (economy interaction) is mostly seen in this genre of games. The two seem to come hand in hand with even the earliest MMOGs, such as Ultima Online having this kind of trade: real money for virtual things.
The importance of having a working virtual economy within an MMOG is increasing as they develop. A sign of this is CCP Games hiring the first real-life economist for its MMOG Eve Online to assist and analyze the virtual economy and production within this game.
This interaction between the virtual economy and the real economy is, in practice, an interaction between the company that created the game and the third-party companies that want a share of the profits and success of the game. This battle between companies is defended on both sides. The company originating the game and the intellectual property argue that this is in violation of the terms and agreements of the game as well as copyright violation, since they own the rights to how the online currency is distributed and through what channels. The case that the third-party companies and their customers defend is that they are selling and exchanging the time and effort put into the acquisition of the currency, not the digital information itself. They also point out that the nature of many MMOGs is that they require time commitments not available to everyone. As a result, without external acquisition of virtual currency, some players are severely limited in their ability to experience certain aspects of the game.
The practice of acquiring large volumes of virtual currency for the purpose of selling to other individuals for tangible and real currency is called gold farming. Many players who have poured in all of their personal effort resent this exchange between real and virtual economies, since it devalues their own efforts. As a result, the term 'gold farmer' now has a very negative connotation within the games and their communities. This stigma has also extended to racial profiling and to insults in-game and on forums.
The reaction from many of the game companies varies. In games that are substantially less popular and have a small player base, enforcement against 'gold farming' appears less often. Companies in this situation are most likely concerned with their own sales and subscription revenue over the development of their virtual economy, as they most likely give a higher priority to the game's viability via adequate funding. Games with an enormous player base, and consequently much higher sales and subscription income, can take more drastic actions more often and in much larger volumes. This account banning could also serve as an economic gain for these large games, since it is highly likely that, due to demand, these 'gold farming' accounts will be recreated with freshly bought copies of the game.
The virtual goods revenue from online games and social networking exceeded US$7 billion in 2010.
In 2011, it was estimated that up to 100,000 people in China and Vietnam are playing online games to gather gold and other items for sale to Western players. While this 'gold farming' is considered to ruin the game for actual players, many rely on 'gold farming' as their main source of income.
However, single-player gameplay in MMOs is quite viable, especially in what is called 'player versus environment' gameplay. This may result in the player being unable to experience all content, as many of the most significant and potentially rewarding game experiences are events that require large and coordinated teams to complete.
Technical aspect
Most MMOGs also share other characteristics that make them different from other multiplayer online games. MMOGs host many players in a single game world, and all of those players can interact with each other at any given time. Popular MMOGs might have hundreds of players online at any given time, usually on company-owned servers. Non-MMOGs, such as Battlefield 1942 or Half-Life, usually have fewer than 50 players online (per server) and are usually played on private servers. Also, MMOGs usually do not have any significant mods, since the game must work on company servers. There is some debate as to whether a high head-count is a requirement to be an MMOG. Some say that it is the size of the game world and its capability to support many players that should matter. For example, despite technology and content constraints, most MMOGs can fit up to a few thousand players on a single game server at a time.
To support all those players, MMOGs need large-scale game worlds, and servers to connect players to those worlds. Some games have all of their servers connected so all players are connected in a shared universe. Others have copies of their starting game world put on different servers, called "shards", for a sharded universe. Shards got their name from Ultima Online, where in the story, the shards of Mondain's gem created the duplicate worlds.
Still, others will only use one part of the universe at any time. For example, Tribes (which is not an MMOG) comes with a number of large maps, which are played in rotation (one at a time). In contrast, the similar title PlanetSide allows all map-like areas of the game to be reached via flying, driving, or teleporting.
MMORPGs usually have sharded universes, as they provide the most flexible solution to the server load problem, but not always. For example, the space simulation Eve Online uses only one large cluster server peaking at over 60,000 simultaneous players.
It is challenging to develop the database engines that are needed to run a successful MMOG with millions of players. Many developers have created their own, but attempts have been made to create middleware, software that would help game developers concentrate on their games more than technical aspects. One such piece of middleware is called BigWorld.
An early, successful entry into the field was VR-1 Entertainment, whose Conductor platform was adopted and endorsed by a variety of service providers around the world including Sony Communications Network in Japan; the Bertelsmann Game Channel in Germany; British Telecom's Wireplay in England; and DACOM and Samsung SDS in South Korea. Games that were powered by the Conductor platform included Fighter Wing, Air Attack, Fighter Ace, Evernight, Hasbro Em@ail Games (Clue, NASCAR and Soccer), Towers of Fallow, The SARAC Project, VR1 Crossroads and Rumble in the Void.
Typical MUDs and other predecessor games were limited to about 64 or 256 simultaneous player connections; this was a limit imposed by the underlying operating system, which was usually Unix-like. One of the biggest problems with modern engines has been handling the vast number of players. Since a typical server can handle around 10,000–12,000 players, 4000–5000 active simultaneously, dividing the game into several servers has up until now been the solution. This approach has also helped with technical issues, such as lag, that many players experience. Another difficulty, especially relevant to real-time simulation games, is time synchronization across hundreds or thousands of players. Many games rely on time synchronization to drive their physics simulation as well as their scoring and damage detection.
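As a toy illustration of the sharding approach described above, the following Python sketch assigns players to shard servers while enforcing a per-shard player cap. All server names, the capacity figure, and the hashing scheme are hypothetical and are not taken from any particular game or middleware.

```python
import hashlib

SHARDS = [f"shard-{i:02d}" for i in range(4)]   # hypothetical server names
CAPACITY = 10_000                                # illustrative per-shard player cap
population = {shard: 0 for shard in SHARDS}

def pick_shard(player_id: str) -> str:
    """Hash the player id to a preferred shard, then walk forward to the first
    shard that still has room (a simple stand-in for real load balancing)."""
    start = int(hashlib.sha256(player_id.encode()).hexdigest(), 16) % len(SHARDS)
    for offset in range(len(SHARDS)):
        shard = SHARDS[(start + offset) % len(SHARDS)]
        if population[shard] < CAPACITY:
            population[shard] += 1
            return shard
    raise RuntimeError("all shards are full")

print(pick_shard("player-42"))
```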
Although there is no specific limit beyond which an online multiplayer game is considered massive, there are broad features that are often used as a metric. Garriott's famed 1997 definition referred to the fundamental architecture shift required to support tens of thousands of concurrent players, which required shifting from individual servers to data centers on multiple continents. Games may have MMO features like large worlds with online persistence but still not generally be considered an MMO, such as Grand Theft Auto V's online play, while other games like League of Legends have small individual sessions but global infrastructure requirements that often allow for classification as an MMO. The term is often used differently by players, who tend to refer to their play experience, and by game developers, who refer to the engineering experience. MMO game developers tend to require tremendous investments in developing and maintaining servers around the globe, network bandwidth infrastructure often on the order of terabytes per second, and large engineering problems relating to managing data spread between multiple computer clusters.
Game types
There are several types of massively multiplayer online games.
Role-playing
Massively multiplayer online role-playing games, known as MMORPGs, are the most common type of MMOG. Some MMORPGs are designed as a multiplayer browser game in order to reduce infrastructure costs and utilise a thin client that most users will already have installed. The acronym BBMMORPGs has sometimes been used to describe these as "browser-based".
Bulletin board role-playing games
Many games are categorized as MMOBBGs (massively multiplayer online bulletin board games), also called MMOBBRPGs. These particular types of games are primarily made up of text and descriptions, although images are often used to enhance the game.
First-person shooter
MMOFPS is an online gaming genre which features many simultaneous players in a first-person shooter fashion. These games provide large-scale, sometimes team-based combat. The addition of persistence in the game world means that these games add elements typically found in RPGs, such as experience points. However, MMOFPS games emphasize player skill more than player statistics, as no number of in-game bonuses will compensate for a player's inability to aim and think tactically.
Real-time strategy
Massively multiplayer online real-time strategy games, also known as "MMORTS", combine real-time strategy (RTS) with a persistent world. Players often assume the role of a general, king, or other types of figurehead leading an army into battle while maintaining the resources needed for such warfare. The titles are often based in a sci-fi or fantasy universe and are distinguished from single or small-scale multiplayer RTSes by the number of players and common use of a persistent world, generally hosted by the game's publisher, which continues to evolve even when the player is offline.
Turn-based strategy
Steve Jackson Games' UltraCorps is an example of an MMO turn-based strategy game. Hundreds of players share the same playing field of conquest. In a "mega" game, each turn fleets are built and launched to expand one's personal empire. Turns are usually time-based, with a "tick" schedule usually daily. All orders are processed, and battles resolved, at the same time during the tick. Similarly, in Darkwind: War on Wheels, vehicle driving and combat orders are submitted simultaneously by all players and a "tick" occurs typically once per 30 seconds. This allows each player to accurately control multiple vehicles and pedestrians in racing or combat.
Simulations
Some MMOGs have been designed to accurately simulate certain aspects of the real world. They tend to be very specific to industries or activities of very large risk and huge potential loss, such as rocket science, airplanes, trucks, battle tanks, submarines and so on. Gradually, as simulation technology becomes more mainstream, various simulators are arriving in more mundane industries.
The initial goal of World War II Online was to create a map (in northwestern Europe) that had real-world physics (gravity, air/water resistance, etc.) and that gave players some strategic abilities on top of its basic FPS/RPG role. While the current version is not quite a true simulated world, it is very complex and contains a large persistent world.
The MMOG genre of air traffic simulation is one example, with networks such as VATSIM and IVAO striving to provide rigorously authentic flight-simulation environments to players in both pilot and air traffic controller roles. In this category of MMOGs, the objective is to create duplicates of the real world for people who cannot or do not wish to undertake those experiences in real life. For example, flight simulation via an MMOG requires far less expenditure of time and money, is completely risk-free, and is far less restrictive (fewer regulations to adhere to, no medical exams to pass, and so on).
Another specialist area is the mobile telecoms operator (carrier) business, where billion-dollar investments in networks are needed but market shares are won and lost on issues from segmentation to handset subsidies. A specialist simulation, Equilibrium/Arbitrage, was developed by Nokia in which, over a two-day period, five teams of top management of one operator/carrier play a "wargame" against each other under extremely realistic conditions, with one operator an incumbent fixed and mobile network operator, another a new entrant mobile operator, a third a fixed-line/internet operator, and so on. Each team is measured by how far it outperforms its rivals against market expectations for that type of player. Thus, each player has drastically different goals, but within the simulation any one team can win; to ensure maximum intensity, only one team does win. Telecoms senior executives who have taken the Equilibrium/Arbitrage simulation say it is the most intense, and most useful, training they have ever experienced. It is typical of business use of simulators in very senior management training/retraining.
Examples of MMO simulation games include World of Tanks, War Thunder, Motor City Online, The Sims Online, and Jumpgate.
Sports
A massively multiplayer online sports game is a title where players can compete in some of the more traditional major league sports, such as football (soccer), basketball, baseball, hockey, golf or American football. According to GameSpot, Baseball Mogul Online was "the world's first massively multiplayer online sports game". Other titles that qualify as MMOSG have been around since the early 2000s, but only after 2010 did they start to receive the endorsements of some of the official major league associations and players.
Racing
MMOR means massively multiplayer online racing. Currently there are only a small number of racing-based MMOGs, including iRacing, Kart Rider, Test Drive Unlimited, Project Torque, Drift City and Race or Die. Other notable MMORs included Upshift Strikeracer, Motor City Online and Need for Speed: World, all of which have since shut down. The Trackmania series is the world's largest MMO racing game and holds the world record for "Most Players in a Single Online Race". Although Darkwind: War on Wheels is more combat-based than racing, it is also considered an MMOR.
Casual
Many types of MMO games can be classified as casual, because they are designed to appeal to all computer users (as opposed to a subgroup of frequent game buyers), or to fans of another game genre (such as collectible card games). Such games are easy to learn and require a smaller time commitment than other game types. Other popular casual games include simple management games such as The Sims Online or Kung Fu Panda World.
MMOPGs, or massively multiplayer online puzzle games, are based entirely on puzzle elements. They are usually set in a world where the players can access the puzzles around the world. Most games that are MMOPGs are hybrids with other genres. Castle Infinity was the first MMOG developed for children. Its gameplay falls somewhere between puzzle and adventure.
There are also massively multiplayer collectible card games: Alteil, Astral Masters and Astral Tournament. Other MMOCCGs might exist (Neopets has some CCG elements) but are not as well known.
Alternate reality games (ARGs) can be massively multiplayer, allowing thousands of players worldwide to co-operate in puzzle trials and mystery solving. ARGs take place in a unique mixture of online and real-world play that usually does not involve a persistent world, and are not necessarily multiplayer, making them different from MMOGs.
Music/rhythm
Massively multiplayer online music/rhythm games (MMORGs), sometimes called massively multiplayer online dance games (MMODGs), are MMOGs that are also music video games. This idea was influenced by Dance Dance Revolution. Audition Online is another casual massively multiplayer online game and it is produced by T3 Entertainment.
Just Dance 2014 has a game mode called World Dance Floor, which is also structured like an MMORPG.
Social
Massively multiplayer online social games (MMOSGs) focus on socialization instead of objective-based gameplay. There is a great deal of overlap in terminology with "online communities" and "virtual worlds". One example that has garnered widespread media attention is Linden Lab's Second Life, emphasizing socializing, worldbuilding and an in-world virtual economy that depends on the sale and purchase of user-created content. It is technically an MMOSG or Casual Multiplayer Online (CMO) by definition, though its stated goal was to realize the concept of the Metaverse from Neal Stephenson's novel Snow Crash. Instead of being based around combat, one could say that it was based around the creation of virtual objects, including models and scripts. In practice, it has more in common with Club Caribe than EverQuest. It was the first MMO of its kind to achieve widespread success (including attention from mainstream media); however, it was not the first (as Club Caribe was released in 1988). Competitors in this subgenre (non-combat-based MMORPG) include Active Worlds, There, SmallWorlds, Furcadia, Whirled, IMVU and Red Light Center.
Many browser-based casual MMOs have begun to spring up. This has been made easier because of the maturing of Adobe Flash and the popularity of Club Penguin, Growtopia and The Sims Online.
Combat
Massively multiplayer online combat games are real-time games built around objective-based, strategy, and capture-the-flag style modes.
Infantry Online is an example of a multiplayer combat video game with sprite animation graphics, using complex soldier, ground vehicle and spaceship models on typically complex terrains, developed by Sony Online Entertainment.
Research
Some recent attempts to build peer-to-peer (P2P) MMOGs have been made. Outback Online may be the first commercial one; however, so far most of the efforts have been academic studies. A P2P MMOG may potentially be more scalable and cheaper to build, but notable issues with P2P MMOGs include security and consistency control, which can be difficult to address given that clients are easily hacked. Some MMOGs such as Vindictus use P2P networking and client-server networking together.
In April 2004, the United States Army announced that it was developing a massively multiplayer training simulation called AWE (asymmetric warfare environment). The purpose of AWE is to train soldiers for urban warfare and there are no plans for a public commercial release. Forterra Systems is developing it for the Army based on the There engine.
In 2010, Bonnie Nardi published an ethnographic study on World of Warcraft examined with Lev Vygotsky's activity theory.
As the field of MMOs grows larger each year, research has also begun to investigate the socio-informatic bind the games create for their users. In 2006, researchers Constance A. Steinkuehler and Dmitri Williams initiated research on such topics. The topic most intriguing to the pair was to further understand the gameplay, as well as the virtual world serving as a social meeting place, of popular MMOs.
To further explore the effects of social capital and social relationships on MMOs, Steinkuehler and Williams combined conclusions from two different MMO research projects: one took a sociocultural perspective on culture and cognition, and the other examined the media effects of MMOs. The conclusions of the two studies explained how MMOs function as a new form of "third place" for informal social interactions, much like coffee shops, pubs, and other typical hangouts. Many scholars, however, such as Oldenburg (1999), challenge the idea of MMOs serving as a "third place" due to inadequate bridging social capital. This argument is challenged by Putnam (2000), who concluded that MMOs are well suited for the formation of bridging social capital (tentative relationships that lack depth) because they are inclusive and serve as a sociological lubricant, as shown across the data collected in both of the research studies.
MMOs can also move past the "lubricant" stage and into the "superglue" stage known as bonding social capital, a closer relationship that is characterized by stronger connections and emotional support. The study concludes that MMOs function best as a bridging mechanism rather than a bonding one, similar to a "third place". Therefore, MMOs have the capacity and the ability to serve as a community that effectively socializes users just like a coffee shop or pub, but conveniently in the comfort of their home.
Spending
British online gamers outspend their German and French counterparts, according to a study commissioned by Gamesindustry.com and TNS. The UK MMO market was worth £195 million in 2009, compared with the £165 million and £145 million spent by German and French online gamers respectively.
US gamers spend more, however, spending about $3.8 billion overall on MMO games. $1.8 billion of that money is spent on monthly subscription fees. Spending averages out to $15.10 across both subscription and free-to-play MMO gamers. The study also found that 46% of the 46 million players in the US pay real money to play MMO games.
Today's Gamers MMO Focus Report, published in March 2010, was commissioned by TNS and gamesindustry.com. A similar study for the UK market-only (UK National Gamers Survey Report) was released in February 2010 by the same groups.
See also
Extended reality
Game engine
List of massively multiplayer online games
Multiplayer video game
Online game
Social network game
Virtual world
References
External links
Multiplayer online games
Video game genres
Video game terminology
Social software | Massively multiplayer online game | Technology | 5,494 |
3,842,607 | https://en.wikipedia.org/wiki/NIST-7 | NIST-7 was the atomic clock used by the United States from 1993 to 1999. It was one of a series of Atomic Clocks at the National Institute of Standards and Technology. Eventually, it achieved an uncertainty of 5 × 10−15. The caesium beam clock served as the nation's primary time and frequency standard during that time period, but it has since been replaced with the more accurate NIST-F1, a caesium fountain atomic clock that neither gains nor loses one second in 100 million years.
References
External links
National Institute of Standards and Technology
Atomic clocks | NIST-7 | Physics | 124 |
72,441,060 | https://en.wikipedia.org/wiki/Jacobi%20bound%20problem | The Jacobi bound problem concerns the veracity of Jacobi's inequality which is an inequality on the absolute dimension of a differential algebraic variety in terms of its defining equations.
This is one of Kolchin's Problems.
The inequality is the differential algebraic analog of Bézout's theorem in affine space.
Although first formulated by Jacobi, in 1936 Joseph Ritt recognized the problem as non-rigorous, in that Jacobi did not even have a rigorous notion of absolute dimension (Jacobi and Ritt used the term "order", for which Ritt first gave a rigorous definition using the notion of transcendence degree).
Intuitively, the absolute dimension is the number of constants of integration required to specify a solution of a system of ordinary differential equations.
A mathematical proof of the inequality has been open since 1936.
Statement
Let (K, ∂) be a differential field of characteristic zero and consider a differential algebraic variety Γ determined by the vanishing of differential polynomials u₁, ..., uₙ in n differential indeterminates x₁, ..., xₙ.
If Γ₁ is an irreducible component of Γ of finite absolute dimension, then a(Γ₁) ≤ J(u₁, ..., uₙ), where a(Γ₁) denotes the absolute dimension of Γ₁.
Here J(u₁, ..., uₙ) is the Jacobi number.
It is defined to be the maximum, over all permutations σ of {1, ..., n}, of the sum over i of ord(uᵢ, x_σ(i)), where ord(uᵢ, x_j) is the order of the highest derivative of x_j appearing in uᵢ.
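Since the Jacobi number is the optimal value of an assignment problem over the matrix of orders, it can be computed by brute force for small systems. The Python sketch below is illustrative only; the example order matrix is hypothetical and does not come from any particular system of equations.

```python
from itertools import permutations

def jacobi_number(orders):
    """Maximum over permutations sigma of sum_i orders[i][sigma(i)].

    orders[i][j] is the order of the highest derivative of the j-th variable
    appearing in the i-th differential polynomial (None if it does not appear).
    """
    n = len(orders)
    best = None
    for sigma in permutations(range(n)):
        if any(orders[i][sigma[i]] is None for i in range(n)):
            continue  # this assignment uses a variable absent from some equation
        total = sum(orders[i][sigma[i]] for i in range(n))
        best = total if best is None else max(best, total)
    return best

# Hypothetical 3x3 order matrix.
print(jacobi_number([[2, 0, 1],
                     [1, 3, 0],
                     [0, 1, 2]]))  # -> 7 (the diagonal assignment 2 + 3 + 2)
```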
References
Unsolved problems in mathematics
Differential algebra | Jacobi bound problem | Mathematics | 235 |
22,623,555 | https://en.wikipedia.org/wiki/Chloro%28cyclopentadienyl%29bis%28triphenylphosphine%29ruthenium | Chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium is the organoruthenium half-sandwich compound with formula RuCl(PPh3)2(C5H5). It as an air-stable orange crystalline solid that is used in a variety of organometallic synthetic and catalytic transformations. The compound has idealized Cs symmetry. It is soluble in chloroform, dichloromethane, and acetone.
Preparation
Chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium was first reported in 1969 when it was prepared by reacting dichlorotris(triphenylphosphine)ruthenium(II) with cyclopentadiene.
RuCl2(PPh3)3 + C5H6 → RuCl(PPh3)3(C5H5) + HCl
It is prepared by heating a mixture of ruthenium(III) chloride, triphenylphosphine, and cyclopentadiene in ethanol.
Reactions
Chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium(II) undergoes a variety of reactions, often involving substitution of the chloride. With phenylacetylene it gives the phenyl vinylidene complex:
(C5H5)(PPh3)2RuCl + HC≡CPh + NH4[PF6] → [Ru(C=CHPh)(PPh3)2(C5H5)][PF6] + NH4Cl
Displacement of one PPh3 by carbon monoxide affords a chiral compound.
(C5H5)(PPh3)2RuCl + CO → (C5H5)(PPh3)(CO)RuCl + PPh3
The compound can also be converted into the hydride:
(C5H5)(PPh3)2RuCl + NaOMe → (C5H5)(PPh3)2RuH + NaCl + CH2O
A related complex is tris(acetonitrile)cyclopentadienylruthenium hexafluorophosphate, which has three labile MeCN ligands.
Applications
Chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium(II) serves as a catalyst for a variety of specialized reactions. For example, in the presence of NH4PF6 it catalyzes the isomerisation of allylic alcohols to the corresponding saturated carbonyls.
References
Organoruthenium compounds
Triphenylphosphine complexes
Cyclopentadienyl complexes
Chloro complexes
Ruthenium(II) compounds | Chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium | Chemistry | 597 |
45,024,063 | https://en.wikipedia.org/wiki/PKS%201302%E2%80%93102 | PKS 1302−102 is a quasar in the Virgo constellation, located at a distance of approximately 1.1 Gpc (around 3.5 billion light-years). It has an apparent magnitude of about 14.9 mag in the V band with a redshift of 0.2784. The quasar is hosted by a bright elliptical galaxy, with two neighboring companions at distances of 3 kpc and 6 kpc. The light curve of PKS 1302−102 appears to be sinusoidal with an amplitude of 0.14 mag and a period of 1,884 ± 88 days, which suggests evidence of a supermassive black hole binary.
Possible black hole binary
PKS 1302−102 was selected from the Catalina Real-Time Transient Survey as one of 20 quasars with apparent periodic variations in the light curve. Of these quasars, PKS 1302−102 appeared to be the best candidate in terms of sinusoidal behavior and other selection criteria, such as data coverage of more than 1.5 cycles in the measured period. One plausible interpretation of the apparent periodic behavior is the possibility of two supermassive black holes (SMBH) orbiting each other with a separation of approximately 0.1 pc in the final stages of a 3.3 billion year old galaxy merger. If this turns out to be the case, it would make PKS 1302−102 an important object of study to various areas of research, including gravitational wave studies and the unsolved final parsec problem in a merger of black holes.
Other explanations, of lesser likelihood, to the observed sinusoidal periodicity include a hot spot on the inner part of the black hole's accretion disk and the possibility of a warped accretion disk which partially eclipses in the orbit around a single SMBH. However, it also remains possible that the periodic behavior in PKS 1302−102 is indeed just a random occurrence in the light curve of an ordinary quasar, as spurious nearly-periodic variations can occur over limited time periods as part of stochastic quasar variability. Further observations of the quasar could either promote true periodicity or rule out a binary interpretation, especially if the measured light curve randomly diverges from the sinusoidal model.
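A sinusoidal model of the kind described above can be fitted to photometric data with standard least-squares tools. The following Python sketch (using NumPy and SciPy on synthetic data, not the actual Catalina photometry) illustrates the procedure with parameters loosely matching the reported period and amplitude.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amplitude, period, phase, mean_mag):
    """Simple sinusoidal light-curve model in magnitudes."""
    return mean_mag + amplitude * np.sin(2 * np.pi * t / period + phase)

# Synthetic light curve loosely mimicking the reported behaviour:
# period ~1884 days, amplitude ~0.14 mag, plus photometric noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 9 * 365.25, 300))          # ~9 years of epochs (days)
mag = sinusoid(t, 0.14, 1884.0, 0.3, 14.9) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(sinusoid, t, mag, p0=[0.1, 1800.0, 0.0, 15.0])
amp, period, phase, mean = popt
print(f"fitted period: {period:.0f} d, amplitude: {amp:.3f} mag")
```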
References
Further reading
https://arstechnica.com/science/2015/01/supermassive-black-hole-binary-discovered/
https://www.nytimes.com/2015/01/08/science/in-a-far-off-galaxy-2-black-holes-dance-toward-an-explosive-union.html
Quasars
Supermassive black holes
Virgo (constellation)
1,542,238 | https://en.wikipedia.org/wiki/Smooth%20structure | In mathematics, a smooth structure on a manifold allows for an unambiguous notion of smooth function. In particular, a smooth structure allows mathematical analysis to be performed on the manifold.
Definition
A smooth structure on a manifold M is a collection of smoothly equivalent smooth atlases. Here, a smooth atlas for a topological manifold M is an atlas for M such that each transition function is a smooth map, and two smooth atlases for M are smoothly equivalent provided their union is again a smooth atlas for M. This gives a natural equivalence relation on the set of smooth atlases.
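Written out explicitly (a standard formulation, with chart names chosen for this example rather than taken from the text above), the smoothness condition on transition functions is:

```latex
% Two charts (U_\alpha, \varphi_\alpha) and (U_\beta, \varphi_\beta) of an atlas for M
% are smoothly compatible when the transition map between them is infinitely differentiable:
\varphi_\beta \circ \varphi_\alpha^{-1} \colon
    \varphi_\alpha(U_\alpha \cap U_\beta) \longrightarrow \varphi_\beta(U_\alpha \cap U_\beta)
\quad \text{is of class } C^{\infty}.
```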
A smooth manifold is a topological manifold M together with a smooth structure on M.
Maximal smooth atlases
By taking the union of all atlases belonging to a smooth structure, we obtain a maximal smooth atlas. This atlas contains every chart that is compatible with the smooth structure. There is a natural one-to-one correspondence between smooth structures and maximal smooth atlases.
Thus, we may regard a smooth structure as a maximal smooth atlas and vice versa.
In general, computations with the maximal atlas of a manifold are rather unwieldy. For most applications, it suffices to choose a smaller atlas.
For example, if the manifold is compact, then one can find an atlas with only finitely many charts.
Equivalence of smooth structures
If μ and ν are two maximal smooth atlases on M, the two smooth structures associated to μ and ν are said to be equivalent if there is a map f : M → M that is a diffeomorphism from M with the smooth structure determined by μ to M with the smooth structure determined by ν.
Exotic spheres
John Milnor showed in 1956 that the 7-dimensional sphere admits a smooth structure that is not equivalent to the standard smooth structure. A sphere equipped with a nonstandard smooth structure is called an exotic sphere.
E8 manifold
The E8 manifold is an example of a topological manifold that does not admit a smooth structure. This essentially demonstrates that Rokhlin's theorem holds only for smooth structures, and not topological manifolds in general.
Related structures
The smoothness requirements on the transition functions can be weakened, so that the transition maps are only required to be k-times continuously differentiable; or strengthened, so that the transition maps are required to be real-analytic. Accordingly, this gives a C^k or (real-)analytic structure on the manifold rather than a smooth one. Similarly, a complex structure can be defined by requiring the transition maps to be holomorphic.
See also
References
Differential topology
Structures on manifolds | Smooth structure | Mathematics | 474 |
22,780 | https://en.wikipedia.org/wiki/Octopus | An octopus (: octopuses or octopodes) is a soft-bodied, eight-limbed mollusc of the order Octopoda (, ). The order consists of some 300 species and is grouped within the class Cephalopoda with squids, cuttlefish, and nautiloids. Like other cephalopods, an octopus is bilaterally symmetric with two eyes and a beaked mouth at the centre point of the eight limbs. The soft body can radically alter its shape, enabling octopuses to squeeze through small gaps. They trail their eight appendages behind them as they swim. The siphon is used both for respiration and for locomotion, by expelling a jet of water. Octopuses have a complex nervous system and excellent sight, and are among the most intelligent and behaviourally diverse of all invertebrates.
Octopuses inhabit various regions of the ocean, including coral reefs, pelagic waters, and the seabed; some live in the intertidal zone and others at abyssal depths. Most species grow quickly, mature early, and are short-lived. In most species, the male uses a specially adapted arm to deliver a bundle of sperm directly into the female's mantle cavity, after which he becomes senescent and dies, while the female deposits fertilised eggs in a den and cares for them until they hatch, after which she also dies. Strategies to defend themselves against predators include the expulsion of ink, the use of camouflage and threat displays, the ability to jet quickly through the water and hide, and even deceit. All octopuses are venomous, but only the blue-ringed octopuses are known to be deadly to humans.
Octopuses appear in mythology as sea monsters like the kraken of Norway and the Akkorokamui of the Ainu, and possibly the Gorgon of ancient Greece. A battle with an octopus appears in Victor Hugo's book Toilers of the Sea, inspiring other works such as Ian Fleming's Octopussy. Octopuses appear in Japanese erotic art, shunga. They are eaten and considered a delicacy by humans in many parts of the world, especially the Mediterranean and the Asian seas.
Etymology and pluralisation
The scientific Latin term was derived from Ancient Greek ὀκτώπους (oktōpous), a compound form of ὀκτώ (oktō, 'eight') and πούς (pous, 'foot'), itself a variant form of ὀκτάπους (oktapous), a word used for example by Alexander of Tralles for the common octopus. The standard pluralised form of octopus in English is octopuses; the Ancient Greek plural ὀκτώποδες (oktōpodes) has also been used historically. The alternative plural octopi is usually considered incorrect because it wrongly assumes that octopus is a Latin second-declension noun or adjective when, in either Greek or Latin, it is a third-declension noun.
Historically, the first plural to commonly appear in English language sources, in the early 19th century, is the Latinate form octopi, followed by the English form octopuses in the latter half of the same century. The Hellenic plural is roughly contemporary in usage, although it is also the rarest.
Fowler's Modern English Usage states that the only acceptable plural in English is octopuses, that octopi is misconceived, and octopodes pedantic; the last is nonetheless used frequently enough to be acknowledged by the descriptivist Merriam-Webster 11th Collegiate Dictionary and Webster's New World College Dictionary. The Oxford English Dictionary lists octopuses, octopi, and octopodes, in that order, reflecting frequency of use, calling octopodes rare and noting that octopi is based on a misunderstanding. The New Oxford American Dictionary (3rd Edition, 2010) lists octopuses as the only acceptable pluralisation, and indicates that octopodes is still occasionally used, but that octopi is incorrect.
Anatomy and physiology
Size
The giant Pacific octopus (Enteroctopus dofleini) is often cited as the largest known octopus species. Adults usually weigh around 15 kg (33 lb), with an arm span of up to 4.3 m (14 ft). The largest specimen of this species to be scientifically documented was an animal with a live mass of 71 kg (156.5 lb). Much larger sizes have been claimed for the giant Pacific octopus: one specimen was recorded as 272 kg (600 lb) with an arm span of 9 m (30 ft). A carcass of the seven-arm octopus, Haliphron atlanticus, weighed 61 kg (134 lb) and was estimated to have had a live mass of 75 kg (165 lb). The smallest species is Octopus wolfi, which is around 2.5 cm (1 in) and weighs less than 1 g.
External characteristics
The octopus is bilaterally symmetrical along its dorso-ventral (back to belly) axis; the head and foot are at one end of an elongated body and function as the anterior (front) of the animal. The head includes the mouth and brain. The foot has evolved into a set of flexible, prehensile appendages, known as "arms", that surround the mouth and are attached to each other near their base by a webbed structure. The arms can be described based on side and sequence position (such as L1, R1, L2, R2) and divided into four pairs. The two rear appendages are generally used to walk on the sea floor, while the other six are used to forage for food. The bulbous and hollow mantle is fused to the back of the head and is known as the visceral hump; it contains most of the vital organs. The mantle cavity has muscular walls and contains the gills; it is connected to the exterior by a funnel or siphon. The mouth of an octopus, located underneath the arms, has a sharp hard beak.
The skin consists of a thin outer epidermis with mucous cells and sensory cells and a connective tissue dermis consisting largely of collagen fibres and various cells allowing colour change. Most of the body is made of soft tissue allowing it to lengthen, contract, and contort itself. The octopus can squeeze through tiny gaps; even the larger species can pass through an opening close to 2.5 cm (1 in) in diameter. Lacking skeletal support, the arms work as muscular hydrostats and contain longitudinal, transverse and circular muscles around a central axial nerve. They can extend and contract, twist to left or right, bend at any place in any direction or be held rigid.
The interior surfaces of the arms are covered with circular, adhesive suckers. The suckers allow the octopus to anchor itself or to manipulate objects. Each sucker is usually circular and bowl-like and has two distinct parts: an outer shallow cavity called an infundibulum and a central hollow cavity called an acetabulum, both of which are thick muscles covered in a protective chitinous cuticle. When a sucker attaches to a surface, the orifice between the two structures is sealed. The infundibulum provides adhesion while the acetabulum remains free, and muscle contractions allow for attachment and detachment. Each of the eight arms senses and responds to light, allowing the octopus to control the limbs even if its head is obscured.
The eyes of the octopus are large and at the top of the head. They are similar in structure to those of a fish, and are enclosed in a cartilaginous capsule fused to the cranium. The cornea is formed from a translucent epidermal layer; the slit-shaped pupil forms a hole in the iris just behind the cornea. The lens is suspended behind the pupil; photoreceptive retinal cells cover the back of the eye. The pupil can be adjusted in size; a retinal pigment screens incident light in bright conditions.
Some species differ in form from the typical octopus body shape. Basal species, the Cirrina, have stout gelatinous bodies with webbing that reaches near the tip of their arms, and two large fins above the eyes, supported by an internal shell. Fleshy papillae or cirri are found along the bottom of the arms, and the eyes are more developed.
Circulatory system
Octopuses have a closed circulatory system, in which the blood remains inside blood vessels. Octopuses have three hearts; a systemic or main heart that circulates blood around the body and two branchial or gill hearts that pump it through each of the two gills. The systemic heart becomes inactive when the animal is swimming. Thus the octopus tires quickly and prefers to crawl. Octopus blood contains the copper-rich protein haemocyanin to transport oxygen. This makes the blood very viscous and it requires considerable pressure to pump it around the body; octopuses' blood pressures can exceed 75 mmHg (10 kPa). In cold conditions with low oxygen levels, haemocyanin transports oxygen more efficiently than haemoglobin. The haemocyanin is dissolved in the plasma instead of being carried within blood cells and gives the blood a bluish colour.
The systemic heart has muscular contractile walls and consists of a single ventricle and two atria, one for each side of the body. The blood vessels consist of arteries, capillaries and veins and are lined with a cellular endothelium which is quite unlike that of most other invertebrates. The blood circulates through the aorta and capillary system, to the venae cavae, after which the blood is pumped through the gills by the branchial hearts and back to the main heart. Much of the venous system is contractile, which helps circulate the blood.
Respiration
Respiration involves drawing water into the mantle cavity through an aperture, passing it through the gills, and expelling it through the siphon. The ingress of water is achieved by contraction of radial muscles in the mantle wall, and flapper valves shut when strong circular muscles force the water out through the siphon. Extensive connective tissue lattices support the respiratory muscles and allow them to expand the respiratory chamber. The lamella structure of the gills allows for a high oxygen uptake, up to 65% in water at 20 °C (68 °F). Water flow over the gills correlates with locomotion, and an octopus can propel its body when it expels water out of its siphon.
The thin skin of the octopus absorbs additional oxygen. When resting, around 41% of an octopus's oxygen absorption is through the skin. This decreases to 33% when it swims, as more water flows over the gills; skin oxygen uptake also increases. When it is resting after a meal, absorption through the skin can drop to 3% of its total oxygen uptake.
Digestion and excretion
The digestive system of the octopus begins with the buccal mass which consists of the mouth with its chitinous beak, the pharynx, radula and salivary glands. The radula is a spiked, muscular tongue-like organ with multiple rows of tiny teeth. Food is broken down and is forced into the oesophagus by two lateral extensions of the esophageal side walls in addition to the radula. From there it is transferred to the gastrointestinal tract, which is mostly suspended from the roof of the mantle cavity by numerous membranes. The tract consists of a crop, where the food is stored; a stomach, where food is ground down; a caecum where the now sludgy food is sorted into fluids and particles and which plays an important role in absorption; the digestive gland, where liver cells break down and absorb the fluid and become "brown bodies"; and the intestine, where the accumulated waste is turned into faecal ropes by secretions and blown out of the funnel via the rectum.
During osmoregulation, fluid is added to the pericardia of the branchial hearts. The octopus has two nephridia (equivalent to vertebrate kidneys) which are associated with the branchial hearts; these and their associated ducts connect the pericardial cavities with the mantle cavity. Before reaching the branchial heart, each branch of the vena cava expands to form renal appendages which are in direct contact with the thin-walled nephridium. The urine is first formed in the pericardial cavity, and is modified by excretion, chiefly of ammonia, and selective absorption from the renal appendages, as it is passed along the associated duct and through the nephridiopore into the mantle cavity.
Nervous system and senses
Octopuses (along with cuttlefish) have the highest brain-to-body mass ratios of all invertebrates; this is greater than that of many vertebrates. Octopuses have the same jumping genes that are active in the human brain, implying an evolutionary convergence at molecular level. The nervous system is complex, only part of which is localised in its brain, which is contained in a cartilaginous capsule. Two-thirds of an octopus's neurons are in the nerve cords of its arms. This allows their arms to perform complex reflex actions without input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organised in their brains via internal somatotopic maps of their bodies. The nervous system of cephalopods is the most complex of all invertebrates. The giant nerve fibers of the cephalopod mantle have been widely used for many years as experimental material in neurophysiology; their large diameter (due to lack of myelination) makes them relatively easy to study compared with other animals.
Like other cephalopods, octopuses have camera-like eyes, and can distinguish the polarisation of light. Colour vision appears to vary from species to species, for example, being present in O. aegina but absent in O. vulgaris.
Opsins in the skin respond to different wavelengths of light and help the animals choose a colouration that camouflages them; the chromatophores in the skin can respond to light independently of the eyes.
An alternative hypothesis is that cephalopod eyes in species that only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into colour vision, though this sacrifices image quality. This would explain pupils shaped like the letter "U", the letter "W", or a dumbbell, as well as the need for colourful mating displays.
Attached to the brain are two organs called statocysts (sac-like structures containing a mineralised mass and sensitive hairs), that allow the octopus to sense the orientation of its body. They provide information on the position of the body relative to gravity and can detect angular acceleration. An autonomic response keeps the octopus's eyes oriented so that the pupil is always horizontal. Octopuses may also use the statocyst to hear sound. The common octopus can hear sounds between 400 Hz and 1000 Hz, and hears best at 600 Hz.
Octopuses have an excellent somatosensory system. Their suction cups are equipped with chemoreceptors so they can taste what they touch. Octopus arms move easily because the sensors recognise octopus skin and prevent self-attachment. Octopuses appear to have poor proprioceptive sense and must observe the arms visually to keep track of their position.
Ink sac
The ink sac of an octopus is located under the digestive gland. A gland attached to the sac produces the ink, and the sac stores it. The sac is close enough to the funnel for the octopus to shoot out the ink with a water jet. Before it leaves the funnel, the ink passes through glands which mix it with mucus, creating a thick, dark blob which allows the animal to escape from a predator. The main pigment in the ink is melanin, which gives it its black colour. Cirrate octopuses usually lack the ink sac.
Life cycle
Reproduction
Octopuses are gonochoric and have a single, posteriorly-located gonad which is associated with the coelom. The testis in males and the ovary in females bulges into the gonocoel and the gametes are released here. The gonocoel is connected by the gonoduct to the mantle cavity, which it enters at the gonopore. An optic gland creates hormones that cause the octopus to mature and age and stimulate gamete production. The gland may be triggered by environmental conditions such as temperature, light and nutrition, which thus control the timing of reproduction and lifespan.
When octopuses reproduce, the male uses a specialised arm called a hectocotylus to transfer spermatophores (packets of sperm) from the terminal organ of the reproductive tract (the cephalopod "penis") into the female's mantle cavity. The hectocotylus in benthic octopuses is usually the third right arm, which has a spoon-shaped depression and modified suckers near the tip. In most species, fertilisation occurs in the mantle cavity.
The reproduction of octopuses has been studied in only a few species. One such species is the giant Pacific octopus, in which courtship is accompanied, especially in the male, by changes in skin texture and colour. The male may cling to the top or side of the female or position himself beside her. There is some speculation that he may first use his hectocotylus to remove any spermatophore or sperm already present in the female. He picks up a spermatophore from his spermatophoric sac with the hectocotylus, inserts it into the female's mantle cavity, and deposits it in the correct location for the species, which in the giant Pacific octopus is the opening of the oviduct. Two spermatophores are transferred in this way; these are about one metre (yard) long, and the empty ends may protrude from the female's mantle. A complex hydraulic mechanism releases the sperm from the spermatophore, and it is stored internally by the female.
About forty days after mating, the female giant Pacific octopus attaches strings of small fertilised eggs (10,000 to 70,000 in total) to rocks in a crevice or under an overhang. Here she guards and cares for them for about five months (160 days) until they hatch. In colder waters, such as those off Alaska, it may take up to ten months for the eggs to completely develop. The female aerates them and keeps them clean; if left untended, many will die. She does not feed during this time and dies soon after. Males become senescent and die a few weeks after mating.
The eggs have large yolks; cleavage (division) is superficial and a germinal disc develops at the pole. During gastrulation, the margins of this grow down and surround the yolk, forming a yolk sac, which eventually forms part of the gut. The dorsal side of the disc grows upward and forms the embryo, with a shell gland on its dorsal surface, gills, mantle and eyes. The arms and funnel develop as part of the foot on the ventral side of the disc. The arms later migrate upward, coming to form a ring around the funnel and mouth. The yolk is gradually absorbed as the embryo develops.
Most young octopuses hatch as paralarvae and are planktonic for weeks to months, depending on the species and water temperature. They feed on copepods, arthropod larvae and other zooplankton, eventually settling on the ocean floor and developing directly into adults, without the distinct metamorphosis seen in other groups of mollusc larvae. Octopus species that produce larger eggs – including the southern blue-ringed, Caribbean reef, California two-spot, Eledone moschata and deep sea octopuses – instead hatch as benthic animals similar to the adults.
In the argonaut (paper nautilus), the female secretes a fine, fluted, papery shell in which the eggs are deposited and in which she also resides while floating in mid-ocean. In this she broods the young, and it also serves as a buoyancy aid allowing her to adjust her depth. The male argonaut is minute by comparison and has no shell.
Lifespan
Octopuses have short lifespans, and some species complete their lifecycles in only six months. The giant Pacific octopus, one of the two largest species of octopus, usually lives for three to five years. Octopus lifespan is limited by reproduction. For most octopuses, the last stage of their life is called senescence. It is the breakdown of cellular function without repair or replacement. For males, this typically begins after mating. Senescence may last from weeks to a few months, at most. For females, it begins when they lay a clutch of eggs. Females will spend all their time aerating and protecting their eggs until they are ready to hatch. During senescence, an octopus does not feed and quickly weakens. Lesions begin to form and the octopus literally degenerates. Unable to defend themselves, octopuses often fall prey to predators. This makes most octopuses effectively semelparous. The larger Pacific striped octopus (LPSO) is an exception, as it can reproduce repeatedly over a life of around two years.
Octopus reproductive organs mature under the hormonal influence of the optic gland, but this also results in the inactivation of their digestive glands. Unable to feed, the octopus typically dies of starvation. Experimental removal of both optic glands after spawning was found to result in the cessation of broodiness, the resumption of feeding, increased growth, and greatly extended lifespans. It has been proposed that the naturally short lifespan may be functional in preventing rapid overpopulation.
Distribution and habitat
Octopuses live in every ocean, and different species have adapted to different marine habitats. As juveniles, common octopuses inhabit shallow tide pools. The Hawaiian day octopus (Octopus cyanea) lives on coral reefs; argonauts drift in pelagic waters. Abdopus aculeatus mostly lives in near-shore seagrass beds. Some species are adapted to the cold, ocean depths. The spoon-armed octopus (Bathypolypus arcticus) is found at depths of , and Vulcanoctopus hydrothermalis lives near hydrothermal vents at . The cirrate species are often free-swimming and live in deep-water habitats. Although several species are known to live at bathyal and abyssal depths, there is only a single indisputable record of an octopus in the hadal zone; a species of Grimpoteuthis (dumbo octopus) photographed at . No species are known to live in fresh water.
Behaviour and ecology
Most species are solitary when not mating, though a few are known to occur in high densities and with frequent interactions, such as signaling, mate defending and evicting individuals from dens. This is likely the result of abundant food supplies combined with limited den sites. The LPSO has been described as particularly social, living in groups of up to 40 individuals. Octopuses hide in dens, which are typically crevices in rocky outcrops or other hard structures, though some species burrow into sand or mud. Octopuses are not territorial but generally remain in a home range; they may leave in search of food. They can navigate back to a den without having to retrace their outward route. They are not migratory.
Octopuses bring captured prey to the den, where they can eat it safely. Sometimes the octopus catches more prey than it can eat, and the den is often surrounded by a midden of dead and uneaten food items. Other creatures, such as fish, crabs, molluscs and echinoderms, often share the den with the octopus, either because they have arrived as scavengers, or because they have survived capture. On rare occasions, octopuses hunt cooperatively with other species, with fish as their partners. They regulate the species composition of the hunting group and the behaviour of their partners by punching them.
Feeding
Nearly all octopuses are predatory; bottom-dwelling octopuses eat mainly crustaceans, polychaete worms, and other molluscs such as whelks and clams; open-ocean octopuses eat mainly prawns, fish and other cephalopods. Major items in the diet of the giant Pacific octopus include bivalve molluscs such as the cockle Clinocardium nuttallii, clams and scallops, and crustaceans such as crabs and spider crabs. Prey that it is likely to reject include moon snails, because they are too large, and limpets, rock scallops, chitons and abalone, because they are too securely fixed to the rock. Small cirrate octopuses such as those of the genera Grimpoteuthis and Opisthoteuthis typically prey on polychaetes, copepods, amphipods and isopods.
A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it toward the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva then dismember them with their beaks. Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin. It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for O. vulgaris to create a hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart.
Some species have other modes of feeding. Grimpoteuthis has a reduced or non-existent radula and swallows prey whole. In the deep-sea genus Stauroteuthis, some of the muscle cells that control the suckers in most species have been replaced with photophores which are believed to fool prey by directing them to the mouth, making them one of the few bioluminescent octopuses.
Locomotion
Octopuses mainly move about by relatively slow crawling, with some swimming in a head-first position. Jet propulsion, or backward swimming, is their fastest means of locomotion, followed by swimming and crawling. When in no hurry, they usually crawl on either solid or soft surfaces. Several arms are extended forward, some of the suckers adhere to the substrate and the animal hauls itself forward with its powerful arm muscles, while other arms may push rather than pull. As progress is made, other arms move ahead to repeat these actions and the original suckers detach. During crawling, the heart rate nearly doubles, and the animal requires 10 or 15 minutes to recover from relatively minor exercise.
Most octopuses swim by expelling a jet of water from the mantle through the siphon into the sea. The physical principle behind this is that the force required to accelerate the water through the orifice produces a reaction that propels the octopus in the opposite direction. The direction of travel depends on the orientation of the siphon. When swimming, the head is at the front and the siphon is pointed backward but, when jetting, the visceral hump leads, the siphon points at the head and the arms trail behind, with the animal presenting a fusiform appearance. In an alternative method of swimming, some species flatten themselves dorso-ventrally, and swim with the arms held out sideways; this may provide lift and be faster than normal swimming. Jetting is used to escape from danger, but is physiologically inefficient, requiring a mantle pressure so high as to stop the heart from beating, resulting in a progressive oxygen deficit.
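As a rough illustration of this momentum balance (the siphon area and jet speed below are assumed values for a small octopus, not figures reported for any particular species), the thrust equals the momentum flux of the expelled water:

$$F = \dot{m}\,v_e = \rho A v_e^{2} \approx 1025\ \mathrm{kg\,m^{-3}} \times 10^{-4}\ \mathrm{m^{2}} \times \left(2\ \mathrm{m\,s^{-1}}\right)^{2} \approx 0.4\ \mathrm{N},$$

where $\rho$ is the density of seawater, $A$ an assumed siphon cross-section of about 1 cm² and $v_e$ an assumed jet speed of 2 m/s. Because the thrust scales with $v_e^{2}$, faster jets demand the disproportionately high mantle pressures mentioned above.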
Cirrate octopuses cannot produce jet propulsion and rely on their fins for swimming. They have neutral buoyancy and drift through the water with the fins extended. They can also contract their arms and surrounding web to make sudden moves known as "take-offs". Another form of locomotion is "pumping", which involves symmetrical contractions of muscles in their webs producing peristaltic waves. This moves the body slowly.
In 2005, Abdopus aculeatus and the veined octopus (Amphioctopus marginatus) were found to walk on two arms, while at the same time mimicking plant matter. This form of locomotion allows these octopuses to move quickly away from a potential predator without being recognised. Some species of octopus can crawl out of the water briefly, which they may do between tide pools. "Stilt walking" is used by the veined octopus when carrying stacked coconut shells. The octopus carries the shells underneath it with two arms, and progresses with an ungainly gait supported by its remaining arms held rigid.
Intelligence
Octopuses are highly intelligent. Maze and problem-solving experiments have shown evidence of a memory system that can store both short- and long-term memory. Young octopuses learn nothing from their parents, as adults provide no parental care beyond tending to their eggs until the young octopuses hatch.
In laboratory experiments, octopuses can readily be trained to distinguish between different shapes and patterns. They have been reported to practise observational learning, although the validity of these findings is contested. Octopuses have also been observed in what has been described as play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them. Octopuses often break out of their aquariums and sometimes into others in search of food. Growing evidence suggests that octopuses are sentient and capable of experiencing pain. The veined octopus collects discarded coconut shells, then uses them to build a shelter, an example of tool use.
Camouflage and colour change
Octopuses use camouflage when hunting and to avoid predators. To do this, they use specialised skin cells that change the appearance of the skin by adjusting its colour, opacity, or reflectivity. Chromatophores contain yellow, orange, red, brown, or black pigments; most species have three of these colours, while some have two or four. Other colour-changing cells are reflective iridophores and white leucophores. This colour-changing ability is also used to communicate with or warn other octopuses. The energy cost of fully activating the chromatophore system is very high, equalling nearly as much energy as an octopus otherwise uses at rest.
Octopuses can create distracting patterns with waves of dark colouration across the body, a display known as the "passing cloud". Muscles in the skin change the texture of the mantle to achieve greater camouflage. In some species, the mantle can take on the spiky appearance of algae; in others, skin anatomy is limited to relatively uniform shades of one colour with limited skin texture. Octopuses that are diurnal and live in shallow water have evolved more complex skin than their nocturnal and deep-sea counterparts.
A "moving rock" trick involves the octopus mimicking a rock and then inching across the open space with a speed matching that of the surrounding water.
Defence
Aside from humans, octopuses may be preyed on by fishes, seabirds, sea otters, pinnipeds, cetaceans, and other cephalopods. Octopuses typically hide or disguise themselves by camouflage and mimicry; some have conspicuous warning coloration (aposematism) or deimatic behaviour ("bluffing" a seemingly threatening appearance). An octopus may spend 40% of its time hidden away in its den. When the octopus is approached, it may extend an arm to investigate. In one study, 66% of Enteroctopus dofleini had scars, with 50% having amputated arms. The blue rings of the highly venomous blue-ringed octopus are hidden in muscular skin folds which contract when the animal is threatened, exposing the iridescent warning. The Atlantic white-spotted octopus (Callistoctopus macropus) turns bright brownish red with oval white spots all over in a high-contrast display. Displays are often reinforced by stretching out the animal's arms, fins or web to make it look as big and threatening as possible.
Once they have been seen by a predator, they commonly try to escape but can also create a distraction by ejecting an ink cloud from their ink sac. The ink is thought to reduce the efficiency of olfactory organs, which would aid evasion from predators that employ smell for hunting, such as sharks. Ink clouds of some species might act as pseudomorphs, or decoys that the predator attacks instead.
When under attack, some octopuses can perform arm autotomy, in a manner similar to the way skinks and other lizards detach their tails. The crawling arm may distract would-be predators. Such severed arms remain sensitive to stimuli and move away from unpleasant sensations. Octopuses can replace lost limbs.
Some octopuses, such as the mimic octopus, can combine their highly flexible bodies with their colour-changing ability to mimic other, more dangerous animals, such as lionfish, sea snakes, and eels.
Pathogens and parasites
The diseases and parasites that affect octopuses have been little studied, but cephalopods are known to be the intermediate or final hosts of various parasitic cestodes, nematodes and copepods; 150 species of protistan and metazoan parasites have been recognised. The Dicyemidae are a family of tiny worms that are found in the renal appendages of many species; it is unclear whether they are parasitic or endosymbionts. Coccidians in the genus Aggregata living in the gut cause severe disease to the host. Octopuses have an innate immune system; their haemocytes respond to infection by phagocytosis, encapsulation, infiltration, or cytotoxic activities to destroy or isolate the pathogens. The haemocytes play an important role in the recognition and elimination of foreign bodies and wound repair. Captive animals are more susceptible to pathogens than wild ones. A gram-negative bacterium, Vibrio lentus, can cause skin lesions, exposure of muscle and sometimes death.
Evolution
The scientific name Octopoda was first coined and given as the order of octopuses in 1818 by English biologist William Elford Leach, who had classified them as Octopoida the previous year. The Octopoda comprises around 300 known species and was historically divided into two suborders, the Incirrina and the Cirrina. More recent evidence suggests Cirrina is merely the most basal lineage, not a unique clade. The incirrate octopuses (the majority of species) lack the cirri and paired swimming fins of the cirrates. In addition, the internal shell of incirrates is either present as a pair of stylets or absent altogether.
Fossil history and phylogeny
The Cephalopoda evolved from a mollusc resembling the Monoplacophora in the Cambrian some 530 million years ago. The Coleoidea diverged from the nautiloids in the Devonian some 416 million years ago. In turn, the coleoids (including the squids and octopods) brought their shells inside the body and some 276 million years ago, during the Permian, split into the Vampyropoda and the Decabrachia. The octopuses arose from the Muensterelloidea within the Vampyropoda in the Jurassic. The earliest octopus likely lived near the sea floor (benthic to demersal) in shallow marine environments. Octopuses consist mostly of soft tissue, and so fossils are relatively rare. As soft-bodied cephalopods, they lack the external shell of most molluscs, including other cephalopods like the nautiloids and the extinct Ammonoidea. They have eight limbs like other Coleoidea, but lack the extra specialised feeding appendages known as tentacles which are longer and thinner with suckers only at their club-like ends. The vampire squid (Vampyroteuthis) also lacks tentacles but has sensory filaments.
The cladograms are based on Sanchez et al., 2018, who created a molecular phylogeny based on mitochondrial and nuclear DNA marker sequences. The position of the Eledonidae is from Ibáñez et al., 2020, with a similar methodology. Dates of divergence are from Kröger et al., 2011 and Fuchs et al., 2019.
The molecular analysis of the octopods shows that the suborder Cirrina (Cirromorphida) and the superfamily Argonautoidea are paraphyletic and are broken up; these names are shown in quotation marks and italics on the cladogram.
RNA editing and the genome
Octopuses, like other coleoid cephalopods but unlike more basal cephalopods or other molluscs, are capable of more extensive RNA editing (changing the nucleotide sequence of the primary RNA transcript) than any other organisms. Editing is concentrated in the nervous system, and affects proteins involved in neural excitability and neuronal morphology. More than 60% of RNA transcripts in coleoid brains are recoded by editing, compared with less than 1% in humans or fruit flies. Coleoids rely mostly on ADAR enzymes for RNA editing, which require large double-stranded RNA structures to flank the editing sites. Both the structures and the editing sites are conserved in the coleoid genome, and mutation rates at these sites are therefore severely constrained. Hence, greater transcriptome plasticity has come at the cost of slower genome evolution.
The octopus genome is unremarkably bilaterian except for large developments of two gene families: protocadherins, which regulate the development of neurons; and the C2H2 zinc-finger transcription factors. Many genes specific to cephalopods are expressed in the animals' skin, suckers, and nervous system.
Relationship to humans
In art, literature, and mythology
Ancient seafaring people were aware of the octopus, as evidenced by artworks and designs. For example, a stone carving found in the archaeological recovery from Bronze Age Minoan Crete at Knossos (1900–1100 BC) depicts a fisherman carrying an octopus. The terrifyingly powerful Gorgon of Greek mythology may have been inspired by the octopus or squid, the octopus itself representing the severed head of Medusa, the beak as the protruding tongue and fangs, and its tentacles as the snakes. The kraken is a legendary sea monster of giant proportions said to dwell off the coasts of Norway and Greenland, usually portrayed in art as a giant octopus attacking ships. Linnaeus included it in the first edition of his 1735 Systema Naturae. One translation of the Hawaiian creation myth the Kumulipo suggests that the octopus is the lone survivor of a previous age. The Akkorokamui is a gigantic octopus-like monster from Ainu folklore, worshipped in Shinto.
A battle with an octopus plays a significant role in Victor Hugo's 1866 book Travailleurs de la mer (Toilers of the Sea). Ian Fleming's 1966 short story collection Octopussy and The Living Daylights, and the 1983 James Bond film were partly inspired by Hugo's book. Japanese erotic art, shunga, includes ukiyo-e woodblock prints such as Katsushika Hokusai's 1814 print Tako to ama (The Dream of the Fisherman's Wife), in which an ama diver is sexually intertwined with a large and a small octopus. The print is a forerunner of tentacle erotica. The biologist P. Z. Myers noted in his science blog, Pharyngula, that octopuses appear in "extraordinary" graphic illustrations involving women, tentacles, and bare breasts.
Since it has numerous arms emanating from a common centre, the octopus is often used as a symbol for a powerful and manipulative organisation, company, or country.
The Beatles song "Octopus's Garden", on the band's 1969 album Abbey Road, was written by Ringo Starr after he was told about how octopuses travel along the sea bed picking up stones and shiny objects with which to build gardens.
Danger to humans
Octopuses generally avoid humans, but incidents have been verified. For example, a Pacific octopus, said to be nearly perfectly camouflaged, "lunged" at a diver and "wrangled" over his camera before it let go. Another diver recorded the encounter on video. All species are venomous, but only blue-ringed octopuses have venom that is lethal to humans. Blue-ringed octopuses are among the deadliest animals in the sea; their bites are reported each year across the animals' range from Australia to the eastern Indo-Pacific Ocean. They bite only when provoked or accidentally stepped upon; bites are small and usually painless. The venom appears to be able to penetrate the skin without a puncture, given prolonged contact. It contains tetrodotoxin, which causes paralysis by blocking the transmission of nerve impulses to the muscles. This causes death by respiratory failure leading to cerebral anoxia. No antidote is known, but if breathing can be kept going artificially, patients recover within 24 hours. Bites have been recorded from captive octopuses of other species; they leave swellings which disappear in a day or two.
As a food source
Octopus fisheries exist around the world with total catches varying between 245,320 and 322,999 metric tons from 1986 to 1995. The world catch peaked in 2007 at 380,000 tons, and had fallen by a tenth by 2012. Methods to capture octopuses include pots, traps, trawls, snares, drift fishing, spearing, hooking and hand collection. Octopuses have a food conversion efficiency greater than that of chickens, making octopus aquaculture a possibility. Octopuses compete with human fisheries targeting other species, and even rob traps and nets for their catch; they may, themselves, be caught as bycatch if they cannot get away.
Octopus is eaten in many cultures, such as those on the Mediterranean and Asian coasts. The arms and other body parts are prepared in ways that vary by species and geography. Live octopuses or their wriggling pieces are consumed as ikizukuri in Japanese cuisine and san-nakji in Korean cuisine. If not prepared properly, however, the severed arms can still choke the diner with their suction cups, causing at least one death in 2010. Animal welfare groups have objected to the live consumption of octopuses on the basis that they can experience pain.
In science and technology
In classical Greece, Aristotle (384–322 BC) commented on the colour-changing abilities of the octopus, both for camouflage and for signalling, in his Historia animalium: "The octopus ... seeks its prey by so changing its colour as to render it like the colour of the stones adjacent to it; it does so also when alarmed." Aristotle noted that the octopus had a hectocotyl arm and suggested it might be used in sexual reproduction. This claim was widely disbelieved until the 19th century. It was described in 1829 by the French zoologist Georges Cuvier, who supposed it to be a parasitic worm, naming it as a new species, Hectocotylus octopodis. Other zoologists thought it a spermatophore; the German zoologist Heinrich Müller believed it was "designed" to detach during copulation. In 1856, the Danish zoologist Japetus Steenstrup demonstrated that it is used to transfer sperm, and only rarely detaches.
Octopuses offer many possibilities in biological research, including their ability to regenerate limbs, change the colour of their skin, behave intelligently with a distributed nervous system, and make use of 168 kinds of protocadherins (humans have 58), the proteins that guide the connections neurons make with each other. The California two-spot octopus has had its genome sequenced, allowing exploration of its molecular adaptations. Having independently evolved mammal-like intelligence, octopuses have been compared by the philosopher Peter Godfrey-Smith, who has studied the nature of intelligence, to hypothetical intelligent extraterrestrials. Their problem-solving skills, along with their mobility and lack of rigid structure enable them to escape from supposedly secure tanks in laboratories and public aquariums.
Due to their intelligence, octopuses are listed in some countries as experimental animals on which surgery may not be performed without anesthesia, a protection usually extended only to vertebrates. In the UK from 1993 to 2012, the common octopus (Octopus vulgaris) was the only invertebrate protected under the Animals (Scientific Procedures) Act 1986. In 2012, this legislation was extended to include all cephalopods in accordance with a general EU directive.
Some robotics research is exploring biomimicry of octopus features. Octopus arms can move and sense largely autonomously without intervention from the animal's central nervous system. In 2015 a team in Italy built soft-bodied robots able to crawl and swim, requiring only minimal computation. In 2017, a German company made an arm with a soft pneumatically controlled silicone gripper fitted with two rows of suckers. It is able to grasp objects such as a metal tube, a magazine, or a ball, and to fill a glass by pouring water from a bottle.
Notes
References
Bibliography
Further reading
See also
My Octopus Teacher, award-winning documentary
External links
Octopuses – Overview at the Encyclopedia of Life
Octopoda at the Tree of Life Web Project
"Can We Really Be Friends with an Octopus?" at Hakai Magazine, January 11, 2022
Articles containing video clips
Commercial molluscs
Extant Pennsylvanian first appearances
Tool-using animals | Octopus | Biology | 9,485 |
19,926,405 | https://en.wikipedia.org/wiki/Oxyphenonium%20bromide | Oxyphenonium bromide is an antimuscarinic drug. It is used to treat gastric and duodenal ulcers and to relieve visceral spasms.
References
Muscarinic antagonists
Quaternary ammonium compounds
Bromides
Tertiary alcohols
Carboxylate esters | Oxyphenonium bromide | Chemistry | 66 |
47,330,613 | https://en.wikipedia.org/wiki/Suillus%20pallidiceps | Suillus pallidiceps is a species of bolete fungus in the family Suillaceae. It was first described scientifically by American mycologists Alexander H. Smith and Harry D. Thiers in 1964.
See also
List of North American boletes
References
External links
pallidiceps
Fungi described in 1964
Fungi of North America
Fungus species | Suillus pallidiceps | Biology | 73 |
49,850,723 | https://en.wikipedia.org/wiki/Woodland%20edge | A woodland edge or forest edge is the transition zone (ecotone) from an area of woodland or forest to fields or other open spaces. Certain species of plants and animals are adapted to the forest edge, and these species are often more familiar to humans than species only found deeper within forests. A classic example of a forest edge species is the white-tailed deer in North America.
The woodland edge on maps
On topographic maps woods and forests are generally depicted in a soft green colour. Their edges are – like other features – usually determined from aerial photographs, but sometimes also by terrestrial survey. However, they only represent a snapshot in time, because almost all woods have a tendency to spread or to gradually fill clearings. In addition, working out the exact edge of the wood or forest may be difficult where it transitions into scrub or bushes, or where the trees thin out slowly. Differences of opinion here often involve several tens of metres. In addition, many cartographers prefer to show even small islands of trees, while others – depending on the scale of the map – prefer more general, continuous lines to demarcate the forest or woodland edges.
For specialised work, aerial photographs or satellite imagery are frequently used without the maps having to be revised. Cadastral maps cannot show the current situation because, for reasons of cost, they can only be updated at fairly long intervals, and because such cultural boundaries are not legally binding.
Woodland edges and biology
On the woodland edge – however it is defined – not only does the flora change, but also the fauna and the soil type. These edge effects mean that many species of animal prefer woodland edges to the heart of the forest, because they offer both protection and light – for example, tree pipits and dunnocks. At the woodland edge the trees are often different from those inside the wood, and are accompanied by hedge vegetation, brambles and low-growing plants.
The more gradual the transition from open country to woodland (for example, through intermediate young trees or bushes), the less risk there is that, in stormy weather, wind will blow under the canopy and uproot the outer rows of trees. The structure of the woodland edge and its maintenance are viewed as important in forest management, especially during reforestation.
Hunters also use the forest edge for the observation and hunting of wildlife, for example, by using tree stands or hides.
See also
Edge effects
Literature
Thomas Coch, Hermann Hondong: Waldrandpflege. Grundlagen und Konzepte. 21 Tabellen. "Praktischer Naturschutz" series. Neumann, Radebeul, 1995,
Beinlich, B., Gockel, H. A. & Grawe, F. (2014): Mittelwaldähnliche Waldrandgestaltung – Ökonomie und Ökologie im Einklang. – ANLiegen Natur 36(1): 61–65, Laufen. PDF 0.7 MB.
External links
Reforestation
Forest ecology
Ecosystems | Woodland edge | Biology | 607 |
48,706,354 | https://en.wikipedia.org/wiki/S-procedure | The S-procedure or S-lemma is a mathematical result that gives conditions under which a particular quadratic inequality is a consequence of another quadratic inequality. The S-procedure was developed independently in a number of different contexts and has applications in control theory, linear algebra and mathematical optimization.
Statement of the S-procedure
Let $F_1$ and $F_2$ be symmetric matrices, $g_1$ and $g_2$ be vectors and $h_1$ and $h_2$ be real numbers. Assume that there is some $x_0$ such that the strict inequality $x_0^\top F_1 x_0 + 2 g_1^\top x_0 + h_1 < 0$ holds. Then the implication

$$x^\top F_1 x + 2 g_1^\top x + h_1 \le 0 \implies x^\top F_2 x + 2 g_2^\top x + h_2 \le 0$$

holds if and only if there exists some nonnegative number $\lambda$ such that

$$\lambda \begin{pmatrix} F_1 & g_1 \\ g_1^\top & h_1 \end{pmatrix} - \begin{pmatrix} F_2 & g_2 \\ g_2^\top & h_2 \end{pmatrix}$$

is positive semidefinite.
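A minimal numerical sketch of checking this condition, assuming a toy one-dimensional instance and a simple grid search over λ (in practice a semidefinite-programming solver would be used):

```python
import numpy as np

# Toy instance (assumed for illustration):
#   quadratic 1:  x^2 - 1 <= 0   (|x| <= 1), strictly satisfied by x0 = 0
#   quadratic 2:  x^2 - 4 <= 0   (|x| <= 2), clearly implied by quadratic 1
F1, g1, h1 = np.array([[1.0]]), np.array([0.0]), -1.0
F2, g2, h2 = np.array([[1.0]]), np.array([0.0]), -4.0

def block(F, g, h):
    """Assemble the symmetric block matrix [[F, g], [g^T, h]]."""
    g = g.reshape(-1, 1)
    return np.block([[F, g], [g.T, np.array([[h]])]])

M1, M2 = block(F1, g1, h1), block(F2, g2, h2)

# The S-procedure certificate: some lambda >= 0 with lambda*M1 - M2
# positive semidefinite.  Scan a grid and test the smallest eigenvalue.
for lam in np.linspace(0.0, 10.0, 1001):
    if np.linalg.eigvalsh(lam * M1 - M2).min() >= -1e-9:
        print(f"certificate found: lambda = {lam:.2f}")   # lambda = 1.00
        break
else:
    print("no certificate found on this grid")
```

In this toy case any λ between 1 and 4 certifies the implication; the existence of such a λ, given the strict feasibility of the first quadratic, is exactly what the lemma guarantees.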
References
See also
Linear matrix inequality
Finsler's lemma
Control theory
Linear algebra
Mathematical optimization | S-procedure | Mathematics | 153 |
18,928,331 | https://en.wikipedia.org/wiki/NGC%206204 | NGC 6204 is an open cluster in the constellation Ara, lying close to the galactic equator. It is 3,540 ly (1,085 pc) distant from Earth. The cluster was discovered on 13 May 1826 by British astronomer James Dunlop.
References
Further reading
Carraro, Giovanni; Munari, Ulisse (2004): A multicolour CCD photometric study of the open clusters NGC 2866, Pismis 19, Westerlund 2, ESO96-SC04, NGC 5617 and NGC 6204; Monthly Notices of the Royal Astronomical Society 347 (2), S. 625–631
External links
http://seds.org/
Image NGC 6204
NGC 6204
6204
Ara (constellation)
Astronomical objects discovered in 1826 | NGC 6204 | Astronomy | 163 |
3,066,455 | https://en.wikipedia.org/wiki/Semantide | Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories of semantides are distinguished: primary, secondary and tertiary. Primary semantides are genes, which consist of DNA. Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. Among tertiary semantides, large globular polypeptide chains are the most complex, while structural proteins consisting of repeating simple sequences are the least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used.
Related terms
Isosemantic
DNA or RNA molecules that differ in base sequence but translate into identical polypeptide chains are referred to as isosemantic.
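As a minimal sketch of this idea (the sequences and the handful of standard-code codons below are chosen purely for illustration, and transcription is simplified to copying the coding strand with T replaced by U), two primary semantides that differ in base sequence can still specify the same tertiary semantide:

```python
# Two DNA sequences (primary semantides) differing in base sequence but
# yielding the same peptide (tertiary semantide) - i.e. isosemantic.
CODON_TABLE = {
    "AUG": "Met", "GGU": "Gly", "GGC": "Gly",
    "GCU": "Ala", "GCA": "Ala", "UAA": "Stop",
}

def transcribe(dna):
    """Primary -> secondary semantide (coding-strand DNA to mRNA)."""
    return dna.replace("T", "U")

def translate(mrna):
    """Secondary -> tertiary semantide: read codons until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        peptide.append(amino_acid)
    return peptide

dna_a = "ATGGGTGCTTAA"   # codons AUG-GGU-GCU-UAA
dna_b = "ATGGGCGCATAA"   # synonymous codons AUG-GGC-GCA-UAA

assert dna_a != dna_b
assert translate(transcribe(dna_a)) == translate(transcribe(dna_b))
print(translate(transcribe(dna_a)))   # ['Met', 'Gly', 'Ala']
```

The two sequences differ at the third position of two codons, yet the translated polypeptide is identical; this redundancy of the genetic code is also why tertiary semantides carry slightly less phylogenetic information than primary ones.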
Episemantic
Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules show a greater variety of types than semantides, which consist of only three types (DNA, RNA and polypeptides). Not all polypeptides are tertiary semantides; some, mainly small polypeptides, can also be episemantic molecules.
Asemantic
Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asemantic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate into a genome. Certain viruses and episomes have this ability.
Referring to a molecule as semantic, episemantic or asemantic applies only with respect to a specific organism. A semantic molecule for one organism may be asemantic for another organism.
Research applications
Semantides are used as phylogenetic information for studying the evolutionary history of organisms. Primary semantides are also used in comparative biodiversity analyses. However, since extracellular DNA can persist for some time, these types of analysis cannot distinguish active from inactive or dead organisms.
The extent to which biological macromolecules are informative for studying evolutionary history differs. The more complex a molecule, the more informative it is for phylogenetics. Primary and secondary semantides contain the most information. In tertiary semantides, some information is lost, because many amino acids are coded for by more than one codon.
Episemantic molecules (e.g. carotenoids) are also informative for phylogenetics. However, the distributions of these molecules do not correlate perfectly with phylogenies based on semantides. Therefore, independent confirmation is often still needed. The more enzymes involved in a synthesis pathway, the more unlikely that such pathways have evolved separately. Therefore, for episemantic molecules, molecules that are synthesized from the least complex asemantic molecules are the most informative in phylogenetics. However, different pathways may synthesize similar or even identical molecules. For example, in animals, plants and other eukaryotes, different pathways have been found for vitamin C synthesis. Therefore, certain molecules should not be used for studying phylogenetic relationships.
Although asemantic molecules could indicate some quantitative or qualitative features of a group of organisms, they are considered to be unreliable and uninformative for phylogenetics.
Analyses using different semantides may yield conflicting phylogenies. However, if the phylogenies are congruent, then there is more support for the evolutionary relationship. By analyzing larger sequences (e.g. complete mitochondrial genome sequences), phylogenies can be constructed, which are more resolved and have more support.
Examples
Semantides often used in studies are common to most organisms and are known to only change slowly over time. Examples of these macromolecules are:
ATPase
Cytochrome b
Cytochrome c oxidase subunit I
Heat shock protein genes
Histone H3
RecA
Recombination activating gene 1
Ribonuclease P RNA
Ribosomal DNA (e.g. 28S rDNA)
Ribosomal RNA (e.g. 16S rRNA)
References
Phylogenetics | Semantide | Biology | 1,004 |