| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
10,768,106 | https://en.wikipedia.org/wiki/NGC%206826 |
NGC 6826 (also known as Caldwell 15) is a planetary nebula located in the constellation Cygnus. It is commonly referred to as the "Blinking Planetary", although many other nebulae exhibit such "blinking". When the nebula is viewed directly through a small telescope, the brightness of the central star overwhelms the eye and obscures the surrounding nebula. It can, however, be seen well using averted vision, which causes it to "blink" in and out of view as the observer's eye wanders.
A distinctive feature of this nebula is the pair of bright patches on either side, known as Fast Low-Ionization Emission Regions, or FLIERs. They appear to be relatively young, moving outwards at supersonic speeds.
HD 186924 is the central star of the planetary nebula. It is an O-type star with a spectral type of O6fp.
See also
List of NGC objects
Planetary nebulae
References
External links
Cygnus (constellation)
Planetary nebulae
6826
015b
O-type stars | NGC 6826 | [
"Astronomy"
] | 214 | [
"Cygnus (constellation)",
"Constellations"
] |
10,768,456 | https://en.wikipedia.org/wiki/Software%20system | A software system is a system of intercommunicating components based on software forming part of a computer system (a combination of hardware and software). It "consists of a number of separate programs, configuration files, which are used to set up these programs, system documentation, which describes the structure of the system, and user documentation, which explains how to use the system".
A software system differs from a computer program or software. While a computer program is generally a set of instructions (source or object code) that performs a specific task, a software system is more of an encompassing concept with many more components, such as specification, test results, end-user documentation, maintenance records, etc.
The use of the term software system is at times related to the application of systems theory approaches in the context of software engineering. A software system consists of several separate computer programs and associated configuration files, documentation, etc., that operate together. The concept is used in the study of large and complex software, because it focuses on the major components of software and their interactions. It is also related to the field of software architecture.
Software systems are an active area of research for groups interested in software engineering in particular and systems engineering in general. Academic journals like the Journal of Systems and Software (published by Elsevier) are dedicated to the subject.
The ACM Software System Award is an annual award that honors people or an organization "for developing a system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". It has been awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM.
Categories
Major categories of software systems include those based on application software, programming software, and system software, although the distinction can sometimes be difficult to draw. Examples of software systems include operating systems, computer reservations systems, air traffic control systems, military command and control systems, telecommunication networks, content management systems, database management systems, expert systems, embedded systems, etc.
See also
ACM Software System Award
Common layers in an information system logical architecture
Computer program
Computer program installation
Experimental software engineering
Software bug
Software architecture
System software
Systems theory
Systems Science
Systems Engineering
Software Engineering
References
Systems engineering
Software engineering terminology | Software system | [
"Technology",
"Engineering"
] | 451 | [
"Software engineering",
"Systems engineering",
"Computing terminology",
"Software engineering terminology"
] |
10,769,824 | https://en.wikipedia.org/wiki/Gavkhouni | Gavkhouni () also written as Gawkhuni or Batlaq-e-Gavkhuni, located in the Iranian Plateau in central Iran, east of the city of Isfahan, is the terminal basin of the Zayandeh River. Gavkhouni is a salt marsh with a salinity of 31.5% and an average depth of about 1 m. The salt marsh can dry up in summer. The Zayandeh River originates in the Zagros mountains, and travels around 300 km, before terminating in Gavkhouni.
Gavkhouni receives pollution from Isfahan and other urban sources. Isfahan is a major oasis city on the Zayandeh River with a population over 1.5 million.
The marshes were designated a Ramsar site in 1975, the 19th wetland in Iran designated as a Wetland of International Importance on the Ramsar list. The wetland is home to a variety of migratory birds including flamingos, ducks, geese, gulls, pelicans, and grebes. The vegetation of the area is very specialised; there are no green plants or trees around the lake due to soil salinity, but in the wetland different species such as reeds, cattail, Schoenoplectus, pondweeds and various algae grow.
Territorial and ecological importance
The Gavkhouni wetland has many ecological functions, including groundwater recharge, flood control, food storage, habitat for wildlife, retention of surface water and groundwater, protection of the land surface against erosive forces such as wind and storms, prevention of desert expansion, stabilization of sand dunes, hosting of thousands of migratory birds, filtering of toxic substances and pathogens, natural fodder production, tourist attraction, and transportation, with the associated economic and social benefits for the people of the region.
According to studies conducted in the area, 229 animal species from five categories of vertebrates have been identified in the Zayandeh River watershed: 49 mammal species, 125 bird species, 42 reptile species, one amphibian species, and 12 fish species.
Drying of Gavkhouni
Isfahan's Gavkhouni lagoon is currently reported to be completely ("100%") dry. Experts say that the drying up of the lagoon will increase migration and destroy the job opportunities that tourism had provided.
Causes of drying
The wetland has dried up because of excessive withdrawal of water from the Zayandeh River basin, as well as mismanagement and unqualified interventions in recent years, and it has lost many of the functions and ecosystem services it once provided to local communities. The wetland was fed by the Zayandeh River, the largest river in central Iran, but in recent years indiscriminate construction of dams for economic purposes dried the river, and the wetland dried with it. The drying up of this international wetland is therefore not primarily a result of deteriorating global climate conditions; rather, it is the product of poor environmental management in Iran, the lack of a comprehensive vision of development, and disregard for the quality of human life and habitat. Iran is a member of the international Ramsar Convention and, under it, has no right to pass laws that would dry up its wetlands.
Consequences of wetland drying
Government officials say that the drying up of the Gavkhouni wetland would create a large source of dust in the country, affecting five provinces (Isfahan, Chaharmahal and Bakhtiari, Qom, Semnan, and Yazd); according to experts, the dust may even reach Tehran.
The drying up of the lagoon has not affected agriculture alone. The drying of the river contributed to the drying of the wetland, reduced the water available in the region, and lowered the groundwater table, all of which have greatly affected the region's agriculture.
The drying up of the wetland has endangered the livelihood of the people of the region and has also caused an increase in unemployment and the migration of local residents.
The drying has also altered vegetation and animal habitats, and diseases such as cancer have reportedly increased.
References
External links
The Esfahan Basin
Marshes of Iran
Landforms of Isfahan province
Endorheic basins of Asia
Ramsar sites in Iran
Salt marshes | Gavkhouni | [
"Chemistry"
] | 919 | [
"Salt marshes",
"Salts"
] |
10,769,845 | https://en.wikipedia.org/wiki/Rocket%20Astrophysical%20Observatories%20K-2%2C%20K-3%20and%20K-4 | Rocket astrophysical observatories K-2, K-3 and K-4 were launched in Soviet Union in the 1960s and early 1970s under the direction of Grigor Gurzadyan of Byurakan Observatory in Armenia, for the study of the Solar ultraviolet and X-ray emission.
Technology
R-5 Pobeda ballistic rockets were used, launched from the Kapustin Yar military base. The flights reached altitudes of about 500 km; after the first 120 km of powered ascent, 8–9 minutes of observations were carried out, and the payload then returned by parachute.
Sensors
The observatories of the K-2, K-3 and K-4 series, which underwent continual modification and recombination, included:
a Lyman-alpha camera for solar chromospheric imaging, with a focal length of 500 mm and a 70 mm slit;
a coronal slit Rowland spectrograph covering the wavelength range 500–1300 Å with a spectral resolution of 0.1 Å;
a chromospheric spectrograph covering 700–1800 Å with a resolution of 0.1 Å;
a camera for coronal imaging at 2000–3000 Å, out to 24 solar radii from the solar disk;
a camera for monochromatic imaging in the He II 304 Å and He I 584 Å lines, with a 50 mm slit and a focal length of 250 mm;
solar imaging cameras for wavelengths shorter than 60 Å, with a focal length of 150 mm and angular resolution up to 1 arcminute;
an X-ray spectrograph for solar corona spectra at 10–150 Å, with a dispersion of 3 Å/mm.
The safe return of the payload enabled its reuse on several flights.
During the launch of October 1, 1965, the most powerful solar X-ray flare detected up to that time was observed.
The launch of October 3, 1970, was also notable. The very first launch was performed on February 15, 1961, during a solar eclipse.
Successors
In the 1970s Gurzadyan's team, by then at the Garni Space Astronomy Laboratory in Armenia, developed the orbital Orion 1 and Orion 2 space observatories, installed on board the space station Salyut 1 and the spacecraft Soyuz 13, respectively.
References
Soviet space observatories | Rocket Astrophysical Observatories K-2, K-3 and K-4 | [
"Astronomy"
] | 431 | [
"Space telescopes",
"Soviet space observatories"
] |
10,773,039 | https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20rule | The Eötvös rule, named after the Hungarian physicist Loránd (Roland) Eötvös (1848–1919) enables the prediction of the surface tension of an arbitrary liquid pure substance at all temperatures. The density, molar mass and the critical temperature of the liquid have to be known. At the critical point the surface tension is zero.
The first assumption of the Eötvös rule is:
1. The surface tension is a linear function of the temperature.
This assumption is approximately fulfilled for most known liquids. When the surface tension is plotted against temperature, a fairly straight line can be seen, which reaches zero surface tension at the critical temperature.
The Eötvös rule also gives a relation of the surface tension behaviour of different liquids in respect to each other:
2. The temperature dependence of the surface tension can be plotted for all liquids in a way that the data collapses to a single master curve. To do so either the molar mass, the density, or the molar volume of the corresponding liquid has to be known.
More accurate versions are found on the main page for surface tension.
The Eötvös rule
If V is the molar volume and Tc the critical temperature of a liquid, the surface tension γ is given by
γV2/3 = k(Tc − T),
where k is a constant valid for all liquids, with a value of 2.1×10−7 J/(K·mol2/3).
More precise values can be gained by considering that the line normally crosses the temperature axis 6 K before the critical point:
γV2/3 = k(Tc − 6 K − T)
The molar volume V is given by the molar mass M and the density ρ: V = M/ρ.
The term γV2/3 is also referred to as the "molar surface tension" γmol:
γmol = γV2/3
A useful representation that avoids the unit mol−2/3 refers the rule to a single molecule by means of the Avogadro constant NA:
γ(V/NA)2/3 = k′(Tc − T), with k′ = k·NA−2/3
As John Lennard-Jones and Corner showed in 1940 by means of statistical mechanics, the constant k′ is nearly equal to the Boltzmann constant.
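As a rough numerical illustration (not from the article), here is a minimal Python sketch of the rule as written above. The input values for water (molar mass, density, critical temperature) are assumptions for the example, and water is an associating liquid that deviates noticeably from the rule, so the result is only a ballpark figure.

```python
# Minimal sketch of the Eotvos rule: gamma * V**(2/3) = k * (Tc - T)
# The water values below are illustrative assumptions, not data from this article.

K_EOTVOS = 2.1e-7  # J/(K*mol^(2/3)), the constant quoted above

def eotvos_surface_tension(molar_mass_kg, density_kg_m3, t_c, t):
    """Estimate the surface tension (N/m) of a liquid at temperature t (K)."""
    molar_volume = molar_mass_kg / density_kg_m3       # V = M / rho, in m^3/mol
    return K_EOTVOS * (t_c - t) / molar_volume ** (2.0 / 3.0)

# Example: water near room temperature (approximate inputs).
# Water deviates from the rule, so this overestimates the measured ~0.072 N/m.
print(eotvos_surface_tension(molar_mass_kg=0.018015,
                             density_kg_m3=998.0,
                             t_c=647.1,
                             t=298.15))
```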
Water
For water, the following equation is valid between 0 and 100 °C.
History
As a student, Eötvös started to research surface tension and developed a new method for its determination. The Eötvös rule was first found phenomenologically and published in 1886. In 1893 William Ramsay and Shields showed an improved version considering that the line normally passes the temperature axis 6 K before the critical point. John Lennard-Jones and Corner published (1940) a derivation of the equation by means of statistical mechanics. In 1945 E. A. Guggenheim gave a further improved variant of the equation.
References
Physical chemistry
Thermodynamic equations | Eötvös rule | [
"Physics",
"Chemistry"
] | 525 | [
"Applied and interdisciplinary physics",
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"nan",
"Physical chemistry"
] |
10,774,004 | https://en.wikipedia.org/wiki/3C%20295 | 3C 295 is a narrow-line radio galaxy located in the constellation of Boötes. With a redshift of 0.464, it is approximately 5 billion light-years from Earth. At time of the discovery of its redshift in 1960, this was the remotest object known.
History
The number in its name indicates that it is the 295th object in the Third Cambridge Catalogue of Radio Sources (ordered by right ascension); the prefix 3C likewise comes from that catalogue.
The radio galaxy itself is a fairly normal small radio galaxy, although, unusually, its hotspots are readily detected in optical and X-ray emission. The X-ray emission from the source is dominated by thermal emission from a rich cluster of galaxies. In optical images about 100 galaxies can be seen. 3C 295's cluster has enough material to create another 1,000 galaxies or more, making it one of the most massive objects in the known Universe. However, X-ray data showed that the observed mass is not enough to hold the cluster together gravitationally, which suggests the presence of dark matter.
References
External links
www.jb.man.ac.uk/atlas/ (J. P. Leahy)
Radio galaxies
2817657
295
Boötes | 3C 295 | [
"Astronomy"
] | 258 | [
"Boötes",
"Constellations"
] |
10,774,105 | https://en.wikipedia.org/wiki/Goro%20Azumaya | was a Japanese mathematician who introduced the notion of Azumaya algebra in 1951. His advisor was Shokichi Iyanaga. At the time of his death he was an emeritus professor at Indiana University.
References
External links
Biography of Azumaya by BiRep, Bielefeld University
1920 births
20th-century Japanese mathematicians
21st-century Japanese mathematicians
Algebraists
Indiana University faculty
2010 deaths
Nagoya University alumni
Japanese expatriates in the United States | Goro Azumaya | [
"Mathematics"
] | 91 | [
"Algebra",
"Algebraists"
] |
10,774,970 | https://en.wikipedia.org/wiki/Stanley%20Gill | Professor Stanley J. Gill (26 March 1926 – 5 April 1975) was a British computer scientist credited, along with Maurice Wilkes and David Wheeler, with the invention of the first computer subroutine.
Early life, education and career
Stanley Gill was born 26 March 1926 in Worthing, West Sussex, England. He was educated at Worthing High School for Boys and was, during his schooldays, a member of an amateur dramatic society.
In 1943, he was awarded a State Scholarship and went to St John's College, Cambridge, where he read Mathematics/Natural Sciences. He graduated BA in 1947 and MA in 1950. Gill worked at the National Physical Laboratory from 1947 to 1950, where he met his wife, Audrey Lee, whom he married in 1949. From 1952 to 1955 he was a Research Fellow at St John's working in a team led by Maurice Wilkes; the research involved pioneering work with the EDSAC computer in the Cavendish Laboratory. In 1952, he developed a very early computer game. It involved a dot (termed a sheep) approaching a line in which one of two gates could be opened. The game was controlled via the lightbeam of the EDSAC's paper tape reader. Interrupting it (such as by the player placing their hand in it) would open the upper gate. Leaving the beam unbroken would result in the lower gate opening.
He gained a PhD in 1953 and, following a year as Visiting Assistant Professor at the University of Illinois, Urbana, joined the Computer Department at Ferranti Ltd in the UK. In 1963 he was appointed Professor of Automatic Data Processing at UMIST, Manchester, and, following various consultancies including International Computers Ltd, he was appointed in 1964 to the newly created Chair of Computing Science and Computing Unit at Imperial College, University of London. This was later merged into the Imperial College Centre for Computing and Automation, of which Gill became director while also working as a consultant to the Ministry of Technology. Gill was a founding member of the Real Time Club in 1967 and its chairman from 1970 to 1975. In 1970 he became Chairman of Software Sciences Holdings Ltd and was a Director of various companies in the Miles Roman Group. From 1972 until his death in 1975 he was Senior Consultant to PA International Management Consultants Ltd.
Gill travelled widely and advised on the establishment of departments of computing in several universities around the world. He was also President of the British Computer Society from 1967 to 1968.
Publications
The Preparation of Programs for an Electronic Digital Computer by Maurice Wilkes, David Wheeler, and Stanley Gill; (original 1951); reprinted with new introduction by Martin Campbell-Kelly; 198 pp.; . Available through Charles Babbage Institute Archive.org Full Text
Papers of Professor Stanley Gill 1964-1971, Imperial College Archives and Corporate Records Unit, Room 455, Sherfield Building, Imperial College, London, UK.
Gill, Stanley. Second Progress Report on the Automatic Computing Engine, National Physical Laboratory, Mathematics Division. (1949)
Gill, Stanley. A process for the step-by-step integration of differential equations in an automatic digital computing machine. Proc. Camb. Phil. Soc, v. 47, p. 96 (1951). [The Runge-Kutta-Gill method.] https://doi.org/10.1017/S0305004100026414
Gill, Stanley. The diagnosis of mistakes in programmes on the EDSAC. Proc. Roy. Soc. A., v. 206, p. 538 (1951). https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1951.0087
Gill, Stanley. "The application of an electronic digital computer to problems in mathematics and physics." PhD diss., University of Cambridge, November 1952.
Gill, Stanley and Bernhart, Frank R.. "An extension of Winn's result on reducible minor neighborhoods." (1973).
References
Further reading
Oral history interview with David Wheeler, 1987-05-14. Charles Babbage Institute, University of Minnesota. Wheeler was a research student at the University Mathematical Laboratory at Cambridge from 1948–1951, and a pioneer programmer on the EDSAC project. Wheeler discusses projects that were run on EDSAC, user-oriented programming methods, and the influence of EDSAC on the ILLIAC, the ORDVAC, and the IBM 701.
Biographical Librarian, St. John's College, Cambridge, UK.
https://mathworld.wolfram.com/GillsMethod.html
http://www.bitsavers.org/pdf/univac/1103/PX71900-10_CentrExchNewsl%2310_Dec56.pdf
External links
Imperial College of Science, Technology and Medicine webpage on Stanley Gill
1926 births
1975 deaths
Alumni of St John's College, Cambridge
British computer scientists
History of computing in the United Kingdom
People educated at Worthing High School | Stanley Gill | [
"Technology"
] | 1,019 | [
"History of computing",
"History of computing in the United Kingdom"
] |
10,775,053 | https://en.wikipedia.org/wiki/NGC%206027e | NGC 6027e is a tidal tail of NGC 6027, not an individual galaxy, that is part of Seyfert's Sextet, a compact group of galaxies, which is located in the constellation Serpens.
See also
NGC 6027
NGC 6027a
NGC 6027b
NGC 6027c
NGC 6027d
References
External links
HubbleSite NewsCenter: Pictures and description
Serpens
Barred spiral galaxies
6027e
56579
10116 NED06 | NGC 6027e | [
"Astronomy"
] | 100 | [
"Constellations",
"Serpens"
] |
10,775,253 | https://en.wikipedia.org/wiki/NGC%206884 | NGC 6884 is a planetary nebula located in the constellation Cygnus, less than a degree to the southwest of the star Ο1 Cygni. It lies at a distance of approximately from the Sun. The nebula was discovered on May 8, 1883, by American astronomer Edward C. Pickering.
This nebula consists of the cast-off outer atmosphere of an aging star. It is young and compact with a kinematic age of 720 years. The nebula is point-symmetric with arcs forming an S-shaped inner core; the shape is likely explained by bipolar outflows with a velocity of . The core is surrounded by a filamentary ring structure that is inclined at an angle of around 40–45° to the line of sight from the Earth. The core has an overall shape of a prolate ellipsoid with axis ratios of 1.6:1 and is inclined by 40°. The expansion velocity of the nebula ranges over 19–25 km/s. The central star has a temperature of and a class of .
References
External links
Planetary nebulae
6884
Cygnus (constellation) | NGC 6884 | [
"Astronomy"
] | 228 | [
"Cygnus (constellation)",
"Constellations"
] |
10,775,385 | https://en.wikipedia.org/wiki/Complete%20quotient | In the metrical theory of regular continued fractions, the kth complete quotient ζ k is obtained by ignoring the first k partial denominators ai. For example, if a regular continued fraction is given by
then the successive complete quotients ζ k are given by
A recursive relationship
From the definition given above we can immediately deduce that
ζk = ak + 1/ζk+1,
or, equivalently,
ζk+1 = 1/(ζk − ak).
Complete quotients and the convergents of x
Denoting the successive convergents of the regular continued fraction x = [a0; a1, a2, …] by A0, A1/B1, A2/B2, … (as explained more fully in the article fundamental recurrence formulas), it can be shown that
x = (ζk+1Ak + Ak−1) / (ζk+1Bk + Bk−1)
for all k ≥ 0 (taking A−1 = 1 and B−1 = 0).
This result can be better understood by recalling that the successive convergents of an infinite regular continued fraction approach the value x in a sort of zig-zag pattern:
so that when k is even we have Ak/Bk < x < Ak+1/Bk+1, and when k is odd we have Ak+1/Bk+1 < x < Ak/Bk. In either case, the k + 1st complete quotient ζ k+1 is the unique real number that expresses x in the form of a semiconvergent.
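For concreteness, here is a minimal numerical sketch (not part of the article) that computes partial denominators, complete quotients, and convergents for √2 in floating point and checks the identity above; the function names are illustrative.

```python
from math import floor, sqrt

def continued_fraction(x, n_terms):
    """Partial denominators a0..a_{n-1} and complete quotients zeta_0..zeta_{n-1}."""
    a, zeta = [], []
    z = x
    for _ in range(n_terms):
        zeta.append(z)
        ak = floor(z)
        a.append(ak)
        z = 1.0 / (z - ak)          # zeta_{k+1} = 1 / (zeta_k - a_k)
    return a, zeta

def convergents(a):
    """Successive convergents A_k/B_k via the fundamental recurrence."""
    A_prev, A = 1, a[0]             # A_{-1} = 1, A_0 = a_0
    B_prev, B = 0, 1                # B_{-1} = 0, B_0 = 1
    out = [(A, B)]
    for ak in a[1:]:
        A, A_prev = ak * A + A_prev, A
        B, B_prev = ak * B + B_prev, B
        out.append((A, B))
    return out

x = sqrt(2)                          # x = [1; 2, 2, 2, ...]
a, zeta = continued_fraction(x, 6)
conv = convergents(a)
k = 3
Ak, Bk = conv[k]
Akm1, Bkm1 = conv[k - 1]
# x should equal (zeta_{k+1}*A_k + A_{k-1}) / (zeta_{k+1}*B_k + B_{k-1})
print(x, (zeta[k + 1] * Ak + Akm1) / (zeta[k + 1] * Bk + Bkm1))
```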
Complete quotients and equivalent real numbers
An equivalence relation defined by LFTs
Consider the set of linear fractional transformations (LFTs) defined by
f(x) = (a + bx) / (c + dx)
where a, b, c, and d are integers, and ad − bc = ±1. Since this set of LFTs contains an identity element (0 + x)/1, is closed under composition of functions, and contains an inverse for each of its members, these LFTs form a group (with composition of functions as the group operation), GL(2,Z).
We can define an equivalence relation on the set of real numbers by means of this group of linear fractional transformations. We will say that two real numbers x and y are equivalent (written x ~ y) if
y = (a + bx) / (c + dx)
for some integers a, b, c, and d such that ad − bc = ±1.
Clearly this relation is symmetric, reflexive, and transitive, so it is an equivalence relation and it can be used to separate the real numbers into equivalence classes. All the rational numbers are equivalent, because each rational number is equivalent to zero. What can be said about the irrational numbers? Do they also fall into a single equivalence class?
A theorem about "equivalent" irrational numbers
Two irrational numbers x and y are equivalent under this scheme if and only if the infinitely long "tails" in their expansions as regular continued fractions are exactly the same. More precisely, the following theorem can be proved.
Let x and y be two irrational (real) numbers, and let the kth complete quotient in the regular continued fraction expansions of x and y be denoted by ζk and ψk, respectively. Then x ~ y (under the equivalence defined in the preceding section) if and only if there are positive integers m and n such that ζm = ψn.
An example
The golden ratio φ is the irrational number with the very simplest possible expansion as a regular continued fraction: φ = [1; 1, 1, 1, …]. The theorem tells us first that if x is any real number whose expansion as a regular continued fraction contains the infinite string
[1, 1, 1, 1, …], then there are integers a, b, c, and d (with ad − bc = ±1) such that
x = (a + bφ) / (c + dφ).
Conversely, if a, b, c, and d are integers (with ad − bc = ±1), then the regular continued fraction expansion of every real number y that can be expressed in the form
y = (a + bφ) / (c + dφ)
eventually reaches a "tail" that looks just like the regular continued fraction for φ.
References
Continued fractions | Complete quotient | [
"Mathematics"
] | 798 | [
"Continued fractions",
"Number theory"
] |
10,775,941 | https://en.wikipedia.org/wiki/NGC%206087 | NGC 6087 (also known as Caldwell 89 or the S Normae Cluster) is an open cluster of 40 or more stars centered on the Cepheid variable S Normae in the constellation Norma. At a distance of about 3500 ly and covering a field of almost one quarter of a degree, the stars range from seventh- to eleventh-magnitude, the brightest being 6.5 magnitude S Normae. The aggregate visual magnitude of the cluster is about 5.4.
Spectral analysis of the radial motion of the stars confirms that S Normae is a member of the cluster, and the period/luminosity relationship of Cepheid variables allows the distance to be determined with confidence.
References
External links
Open clusters
Norma (constellation)
6087
089b | NGC 6087 | [
"Astronomy"
] | 159 | [
"Norma (constellation)",
"Constellations"
] |
10,775,969 | https://en.wikipedia.org/wiki/Institution%20of%20Railway%20Signal%20Engineers | The Institution of Railway Signal Engineers (IRSE) is a worldwide professional body for all those engaged or interested in railway signalling and telecommunications (S&T) and allied disciplines. Half its members are in the UK and half are outside it.
Local sections
The IRSE is based in London, with international sections in:
Australasia
Hong Kong
India
Japan
The Netherlands
North America
Singapore
Southern Africa
Switzerland
Malaysia
Indonesia
France
Thailand
In the UK:
London and South East
Midland and North Western
Plymouth
Scottish
Western
York
There is also a Minor Railways section specialising in railways that are not part of the national network, including industrial, tourist and heritage railways.
Additionally, a Younger Members section aims to contribute to and improve the development of new entrants into the sector. Benefits include the co-ordination of a number of events each year.
Membership grades
Membership grade depends on a combination of the member's experience and any formal qualifications.
Affiliate
Accredited Technician
Associate Member
Member
Fellow
Companion
Headquarters
The headquarters of the IRSE is in Westminster, London, in the offices of the Institution of Mechanical Engineers.
The Chief Executive is Blane Judd BEng FCGI CEng FIET
Notable Members
Elsie Louisa Winterton became the IRSE's first woman member in 1923, whilst working as a draughtswoman for the Great Western Railway.
IRSE Licensing Scheme
The IRSE Licensing Scheme was introduced in 1994 as a means of competence certification for people undertaking work in the railway signalling and telecommunications industry. Over 50 licence categories cover the design, installation, testing, maintenance and engineering management of both railway signalling and telecommunications. Possession of a licence (or evidence of working towards obtaining one) is essential for people who want to carry out S&T engineering work for Network Rail or London Underground. Network Rail and London Underground require themselves and their contractors and consultants to ensure that all S&T engineers engaged in safety-critical and safety-related work possess IRSE licences.
Publications
IRSE News – a journal published in 11 editions a year, featuring technical articles and papers and articles of general interest to the signalling community.
See also
Railway Industry Association
References
External links
Unofficial discussion forum for those considering taking the IRSE professional examination.
Minor Railways section library of guidelines aimed at minor and heritage railways.
1912 establishments in the United Kingdom
Institution of Mechanical Engineers
Organisations based in the City of Westminster
Organizations established in 1912
Railway Signal Engineers
Rail infrastructure in the United Kingdom
Railway signalling | Institution of Railway Signal Engineers | [
"Engineering"
] | 478 | [
"Institution of Mechanical Engineers",
"Mechanical engineering organizations"
] |
10,777,001 | https://en.wikipedia.org/wiki/Conservation%20Biology%20%28journal%29 | Conservation Biology is a bimonthly peer-reviewed scientific journal of the Society for Conservation Biology, published by Wiley-Blackwell and established in May 1987. It covers the science and practice of conserving Earth's biological diversity, including issues concerning any of the Earth's ecosystems or regions. The editor-in-chief is Mark Burgman.
Scope
The scientific papers in the journal cover a variety of topics, such as population ecology and genetics, climate change, freshwater and marine conservation, ecosystem management, citizen science, and other human dimensions of conservation, but all topics focus primarily on conservation relevance rather than specific ecosystems, species, or situations. Subscription to the journal is only open to members of the Society for Conservation Biology.
Journal Metrics
According to the Journal Citation Reports, the journal has a 2019 impact factor of 5.405. It ranks 3rd of 55 journals focusing on biodiversity and conservation and 12th of 158 journals with an ecological focus. Conservation Biology also has an h5 index of 59, a cited half-life of >10, and a CiteScore of 5.97.
References
External links
Ecology journals
Conservation biology
English-language journals
Academic journals established in 1987
Wiley-Blackwell academic journals
Bimonthly journals
Academic journals associated with learned and professional societies | Conservation Biology (journal) | [
"Biology",
"Environmental_science"
] | 255 | [
"Environmental science journals",
"Conservation biology",
"Ecology journals"
] |
10,777,048 | https://en.wikipedia.org/wiki/Eight%20Ones | EO, or Eight Ones, is an 8-bit EBCDIC character code represented as all ones (binary 1111 1111, hexadecimal FF).
As a control code
Eight Ones, as an EBCDIC control code, is used for synchronisation purposes, such as time fill and media fill. In Advanced Function Presentation code page definition resource headers, setting at least the first two bytes of the field for the eight-byte code page resource name (which is encoded in code page 500) to Eight Ones (0xFF) constitutes a "null name", which is treated as unset.
Mapping
When translated from the EBCDIC character set to code pages with a C1 control code set, Eight Ones is typically mapped to hexadecimal code 0x9F, in order to provide a unique character mapping in both directions. Prior to 1986, however, the C1 control code 0x9F was usually mapped to EBCDIC 0xE1, which was frequently used as a numeric (figure) space in code pages at the time (including the pre-1986 version of code page 37). The Unix utility follows the earlier convention, mapping the C1 code 0x9F to EBCDIC 0xE1, and mapping 0xFF (Eight Ones) to 0xFF.
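As an illustration of the modern convention, here is a small Python sketch using the standard library's cp500 codec (EBCDIC code page 500). The expectation that this particular codec maps 0xFF to U+009F is an assumption consistent with the post-1986 convention described above.

```python
# Sketch: where does EBCDIC 0xFF (Eight Ones) land in Unicode, and what
# encodes to it?  Uses Python's built-in cp500 codec (EBCDIC code page 500);
# the expected results below assume that codec follows the post-1986 mapping.

eight_ones = bytes([0xFF])
print(hex(ord(eight_ones.decode("cp500"))))   # expected: 0x9f (C1 control APC)

c1_apc = "\u009f"                              # the C1 control character 0x9F
print(c1_apc.encode("cp500").hex())            # expected: ff
```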
As a graphical character
While Eight Ones is treated as a control code by IBM EBCDIC infrastructure, EBCDIC code pages from Fujitsu Siemens used on the BS2000 system frequently use it for a graphical character, most often the tilde. In these cases, the C1 control code 0x9F is mapped to a different location in the EBCDIC code page, most commonly 0x5F.
See also
0xFF
Delete character
References
Control characters | Eight Ones | [
"Technology"
] | 371 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
10,777,559 | https://en.wikipedia.org/wiki/Galvannealed | Galvannealed or galvanneal (galvannealed steel) is the result from the processes of galvanizing followed by annealing of sheet steel.
Galvannealed steel is a matte uniform grey color, which can be easily painted. In comparison to galvanized steel the coating is harder, and more brittle.
Production and properties
Production of galvannealed sheet steel begins with hot dip galvanization of sheet steel. After passing through the galvanizing zinc bath, the sheet steel passes through air knives to remove excess zinc and is then heated in an annealing furnace for several seconds, causing iron and zinc to diffuse into one another and form zinc-iron alloy layers at the surface. The annealing step is performed with the strip still hot after the galvanizing step, with the zinc still liquid. The galvanizing bath contains slightly over 0.1% aluminium, added to form a bonding layer between the iron and the zinc coating. Annealing temperatures are around 500 to 565 °C. Pre-1990 annealing lines used gas-fired heating; from the 1990s the use of induction furnaces became common.
Three distinct alloys are identified in the galvannealed surface. From the steel boundary outwards these are named the Gamma (Γ), Delta (δ), and Zeta (ζ) layers, with compositions Fe3Zn10, FeZn10, and FeZn13 respectively, resulting in an overall bulk iron content of 9-12%. The layers also contain around 1-4% aluminium. Composition depends primarily on heating time and temperature, limited by the diffusion of the two metals.
The resulting coating has a matte appearance and is hard and brittle: under further working such as pressing or bending, the coating degrades, producing powder and surface cracks. In comparison to a plain zinc (galvanized) coating, galvanneal has better spot weldability and is paintable. Because of the iron present in the surface alloy phase, galvanneal develops a reddish patina in moist environments, so it is generally used painted. Zinc phosphate coating is a common pre-painting surface treatment.
Galvannealed sheet can also be produced from electroplated zinc steel sheet.
History
Patents relating to Galvannealed wire were obtained by the Keystone Steel and Wire Company (Peoria, Illinois, USA) c. 1923. The company used the name "Galvannealed" as a brand name. The key early patent was US patent No. 1430648 (J.L. Herman, 1922, Peoria, Illinois, USA) "Process of coating and treating materials having an iron base". The patent described the galvannealing process with specific reference to iron wires.
Uses
A major market for galvannealed steel is the automobile industry. In the mid 1980s, the Chrysler Corporation pioneered the use of galvannealed sheet steels in the manufacture of their vehicles. In the 1990s galvannealed coatings were used by Honda, Toyota and Ford, with hot dip galvanized, electrogalvanized and other coatings (e.g. Zn-Ni) being used by other manufacturers, with variations depending on the part within the car frame as well as on local price differences.
Galvannealed steel is the preferred material for use in the construction of permanent debris and linen chute systems.
References
Sources
Coatings
Corrosion prevention
Metal plating
Zinc | Galvannealed | [
"Chemistry"
] | 704 | [
"Corrosion prevention",
"Metallurgical processes",
"Coatings",
"Corrosion",
"Metal plating"
] |
10,777,584 | https://en.wikipedia.org/wiki/Keetch%E2%80%93Byram%20drought%20index | The Keetch–Byram drought index (known as KBDI), created by John Keetch and George Byram in 1968 for the United States Department of Agriculture's Forest Service, is a measure of drought conditions. It is commonly used for the purpose of predicting the likelihood and severity of wildfire. It is calculated based on rainfall, air temperature, and other meteorological factors.
The KBDI is an estimate of the soil moisture deficit, which is the amount of water necessary to bring the soil moisture to its full capacity. A high soil moisture deficit means there is little water available for evaporation or plant transpiration. This occurs in conditions of extended drought, and has significant effects on fire behaviour.
In the United States, it is expressed as a range from 0 to 800, referring to hundredths of an inch of deficit in water availability; in countries that use the metric system, it is expressed from 0 to 200, referring to millimetres.
See also
National Fire Danger Rating System
Palmer drought index
Standardised Precipitation Evapotranspiration Index
McArthur Forest Fire Danger Index
1988 revision of the paper "A drought index for forest fire control": http://www.srs.fs.fed.us/pubs/rp/rp_se273.pdf
References
Droughts
Eponymous indices
Hazard scales
Hydrology
Wildfires
Meteorological indices | Keetch–Byram drought index | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 281 | [
"Hydrology",
"Environmental engineering"
] |
10,777,727 | https://en.wikipedia.org/wiki/Jack%20chain | A jack chain is a type of chain made of thin wire, with figure-eight-shaped links and loops at right angles to each other. Jack chains are often used to suspend fixtures such as lights or signs, for decorative purposes, or as part of a cable lock.
Jack chain may be manufactured as either single-jack chain or as double-jack chain. If double-jack, the lower loop is formed of two strands of wire rather than just one as in a single-jack.
Before the days of lavatory cisterns being close to the pan, jack chains were often used to release the cistern plug.
Other meanings
A jack chain is a tool attached to a toothed chain for moving logs.
References
External links
Image of various sizes of jack chain
Chains
Medieval armour | Jack chain | [
"Engineering"
] | 159 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
10,777,748 | https://en.wikipedia.org/wiki/Weitzenb%C3%B6ck%27s%20inequality | In mathematics, Weitzenböck's inequality, named after Roland Weitzenböck, states that for a triangle of side lengths , , , and area , the following inequality holds:
Equality occurs if and only if the triangle is equilateral. Pedoe's inequality is a generalization of Weitzenböck's inequality. The Hadwiger–Finsler inequality is a strengthened version of Weitzenböck's inequality.
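As a quick numerical illustration (an addition, not from the source), the following Python sketch evaluates the slack in the inequality for a few sample triangles via Heron's formula; the chosen side lengths are arbitrary.

```python
from math import sqrt

def area(a, b, c):
    """Triangle area by Heron's formula."""
    s = (a + b + c) / 2.0
    return sqrt(s * (s - a) * (s - b) * (s - c))

def weitzenboeck_slack(a, b, c):
    """a^2 + b^2 + c^2 - 4*sqrt(3)*area; nonnegative for every triangle."""
    return a * a + b * b + c * c - 4.0 * sqrt(3.0) * area(a, b, c)

for sides in [(3, 4, 5), (2, 2, 3), (1, 1, 1)]:
    print(sides, round(weitzenboeck_slack(*sides), 6))
# The slack is zero (up to rounding) only for the equilateral triangle (1, 1, 1).
```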
Geometric interpretation and proof
Rewriting the inequality above as (√3/4)a² + (√3/4)b² + (√3/4)c² ≥ 3Δ allows for a more concrete geometric interpretation, which in turn provides an immediate proof.
The summands on the left side are the areas of equilateral triangles erected over the sides of the original triangle, and hence the inequality states that the sum of the areas of these equilateral triangles is always greater than or equal to three times the area of the original triangle.
This can now be shown by replicating the area of the triangle three times within the equilateral triangles. To achieve that, the Fermat point is used to partition the triangle into three obtuse subtriangles, each with a 120° angle, and each of those subtriangles is replicated three times within the equilateral triangle next to it. This only works if every angle of the triangle is smaller than 120°, since otherwise the Fermat point is not located in the interior of the triangle and becomes a vertex instead. However, if one angle is greater than or equal to 120°, it is possible to replicate the whole triangle three times within the largest equilateral triangle, so the sum of the areas of all equilateral triangles stays greater than the threefold area of the triangle anyhow.
Further proofs
The proof of this inequality was set as a question in the International Mathematical Olympiad of 1961. Even so, the result is not too difficult to derive using Heron's formula for the area of a triangle:
First method
It can be shown that the area of the inner Napoleon's triangle, which must be nonnegative, is
(√3/24)(a² + b² + c² − 4√3 Δ),
so the expression in parentheses must be greater than or equal to 0.
Second method
This method assumes no knowledge of inequalities except that all squares are nonnegative.
and the result follows immediately by taking the positive square root of both sides. From the first inequality we can also see that equality occurs only when a = b = c, i.e. when the triangle is equilateral.
Third method
This proof assumes knowledge of the AM–GM inequality.
As we have used the arithmetic-geometric mean inequality, equality only occurs when a = b = c and the triangle is equilateral.
Fourth method
Write so the sum and i.e. . But , so .
See also
List of triangle inequalities
Isoperimetric inequality
Hadwiger–Finsler inequality
Notes
References & further reading
Claudi Alsina, Roger B. Nelsen: When Less is More: Visualizing Basic Inequalities. MAA, 2009, , pp. 84-86
Claudi Alsina, Roger B. Nelsen: Geometric Proofs of the Weitzenböck and Hadwiger–Finsler Inequalities. Mathematics Magazine, Vol. 81, No. 3 (Jun., 2008), pp. 216–219 (JSTOR)
D. M. Batinetu-Giurgiu, Nicusor Minculete, Nevulai Stanciu: Some geometric inequalities of Ionescu-Weitzebböck type. International Journal of Geometry, Vol. 2 (2013), No. 1, April
D. M. Batinetu-Giurgiu, Nevulai Stanciu: The inequality Ionescu - Weitzenböck. MateInfo.ro, April 2013, (online copy)
Daniel Pedoe: On Some Geometrical Inequalities. The Mathematical Gazette, Vol. 26, No. 272 (Dec., 1942), pp. 202-208 (JSTOR)
Roland Weitzenböck: Über eine Ungleichung in der Dreiecksgeometrie. Mathematische Zeitschrift, Volume 5, 1919, pp. 137-146 (online copy at Göttinger Digitalisierungszentrum)
Dragutin Svrtan, Darko Veljan: Non-Euclidean Versions of Some Classical Triangle Inequalities. Forum Geometricorum, Volume 12, 2012, pp. 197–209 (online copy)
Mihaly Bencze, Nicusor Minculete, Ovidiu T. Pop: New inequalities for the triangle. Octogon Mathematical Magazine, Vol. 17, No.1, April 2009, pp. 70-89 (online copy)
External links
"Weitzenböck's Inequality," an interactive demonstration by Jay Warendorff - Wolfram Demonstrations Project.
Elementary geometry
Triangle inequalities
Articles containing proofs | Weitzenböck's inequality | [
"Mathematics"
] | 994 | [
"Elementary mathematics",
"Articles containing proofs",
"Elementary geometry"
] |
10,778,190 | https://en.wikipedia.org/wiki/Barrel%20shroud | A barrel shroud is an external covering that envelops (either partially or full-length) the barrel of a firearm to prevent unwanted direct contact with the barrel (e.g. accidental collision with surrounding objects or the user accidentally touching a hot barrel, which can lead to burns). Moving coverings such as pistol slides, fore-end extensions of the gunstock/chassis that do not fully encircle the barrel, and the receiver (or frame) of a firearm itself are generally not described as barrel shrouds, though they can functionally act as such.
In shotguns, a thin, slim partial shroud known as a rib is often mounted above the barrel to shield away the mirage generated by barrel heat, which can interfere with aiming.
Full-length barrel shrouds are commonly featured on air-cooled machine guns, where frequent rapid bursts or sustained automatic fire will leave the barrel extremely hot and dangerous to the user. However, shrouds can also be used on semi-automatic firearms, especially the ones with light-weight barrels, as even a small number of shots can heat up a barrel enough to injure the user in certain circumstances.
Barrel shrouds are also used on pump-action shotguns. The military trench shotgun features a ventilated metal handguard with a bayonet attachment lug. Ventilated handguards or heat shields (usually without bayonet lugs) are also used on police riot shotguns and shotguns marketed for civilian self-defense. The heat shield also serves as an attachment base for accessories such as sights or sling swivels.
Handguard
A handguard (also known as the forend or forearm) on firearms is a barrel shroud specifically designed to allow the user to grip the front of the gun. It provides a safe heat-insulated surface for the user's hand to firmly hold onto without needing to worry about getting burned by the barrel, which may become very hot when firing. It can also serve as an attachment platform for secondary weapons (such as an underslung M203 grenade launcher or M26-MASS) as well as accessories such as bipods, tactical lights, laser sights, night-vision devices, foregrips (or handstops), slings and a variety of other attachments.
Handguards are typically available as two types. The first has a contact point at the base of the barrel and extends a predetermined length up the barrel. Handguards of this type are typically made of polymer but can also be made of various alloys. Because they have these two contact points, they are considered drop-in handguards.
The other type attaches around the barrel but does not make contact with it directly. Handguards of this type are usually made of aluminum or an aluminum alloy and allow for what is considered a free-floating barrel. Free-floating barrels are known to give greater accuracy than those fitted with drop-in handguards.
They also use a number of mounting systems, with the main ones being M-LOK, KeyMod, and Picatinny.
In the context of melee weapons, a "handguard" refers to the crossguard (also known as the quillons or crosstree), the enlarged front part of a sword, saber or knife/dagger's hilt, which protects the wielder's hands from an opponent's blade sliding towards the hilt or prevents the wielder's own hand and fingers from accidentally slipping onto the blade when stabbing.
Free-floating handguard
Free-floating handguards, also referred to as "floating" handguards, have seen a rise in popularity in recent years. They attach to the firearm at only one point (the barrel nut at the upper receiver), while the remainder of the handguard does not make contact with the barrel. This gives the impression that the handguard is "floating" around the barrel, hence the name.
Because they avoid barrel warping, free-floating handguards have been known to increase accuracy by 0.5 to 0.75 MOA (0.15–0.2 mrad) compared to their drop-in counterparts.
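For reference, a small arithmetic sketch (illustrative only, not from the source) converting the minute-of-angle figures quoted above into milliradians:

```python
import math

# 1 MOA = 1/60 degree expressed in milliradians (~0.291 mrad)
MRAD_PER_MOA = math.radians(1.0 / 60.0) * 1000.0

for moa in (0.5, 0.75):
    print(f"{moa} MOA ~= {moa * MRAD_PER_MOA:.3f} mrad")
# Roughly 0.145 and 0.218 mrad, consistent with the 0.15-0.2 mrad range quoted above.
```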
Barrel warping occurs when the handguard makes contact with the barrel, which then slightly alters the barrel's angle, reducing accuracy. This can occur when a rifle is propped up against a surface or with a bipod. Force exerted onto the handguard pushes back up against the barrel, which deflects the barrel, reducing accuracy. The angle may seem insignificant; however, even a slight deviation can cause the shot to dramatically deviate down range.
Free-floating handguards do not suffer from barrel warping as they do not make contact with the barrel. Force exerted onto the handguard is not transferred to the barrel, which allows for an increase in accuracy.
See also
M-LOK – free licensed competing standard to KeyMod
KeyMod – open sourced competing standard to M-Lok
Muzzle shroud
Forearm
Thermal sleeve
References
External links
Finned carbine handguard assembly Randy E. Luth (patent)
KeyMod vs. M-LOK Modular Rail System Comparison, Presented by Caleb McGee, Naval Special Warfare Center Crane Division, 4 May 2017
Firearm components | Barrel shroud | [
"Technology"
] | 1,089 | [
"Firearm components",
"Components"
] |
10,778,792 | https://en.wikipedia.org/wiki/South%20American%20Energy%20Council | The South American Energy Council is a body set up to co-ordinate the regional energy policy of the Union of South American Nations (UNASUR).
History
Its establishment was agreed at the first South American Energy Summit, which took place on April 16–17 2007 on Isla Margarita in the Venezuelan state of Nueva Esparta. It was officially created in May 2010 during the UNASUR's Extraordinary Summit in Los Cardales, Argentina.
In 2012, the Council started to draft a South American Energy Treaty. Before drafting this treaty, the Council coordinated the creation of an energy balance and a 15-point strategy.
See also
South American Organization of Gas Producers and Exporters
References
Union of South American Nations
International energy organizations
Organizations established in 2007
Energy policy
Energy in Argentina
Energy in Bolivia
Energy in Brazil
Energy in Chile
Energy in Colombia
Energy in Ecuador
Energy in Guyana
Energy in Paraguay
Energy in Peru
Energy in Suriname
Energy in Uruguay
Energy in Venezuela
Energy in South America
2007 establishments in South America | South American Energy Council | [
"Engineering",
"Environmental_science"
] | 197 | [
"International energy organizations",
"Environmental social science",
"Energy organizations",
"Energy policy"
] |
10,778,863 | https://en.wikipedia.org/wiki/List%20of%20plants%20with%20symbolism | Various folk cultures and traditions assign symbolic meanings to plants. Although these are no longer commonly understood by populations that are increasingly divorced from their rural traditions, some meanings survive. In addition, these meanings are alluded to in older pictures, songs and writings. New symbols have also arisen: one of the most known in the United Kingdom is the red poppy as a symbol of remembrance of the fallen in war.
List
See also
Narcissus in culture – uses of narcissus flowers by humans
Lime tree in culture – uses of the lime (linden) tree by humans
Rose symbolism – a more expansive list of symbolic meanings of the rose
Apple (symbolism) – a more expansive list of symbolic means for apples
References
The Fortunate Fortune Flower Plant: A Comprehensive Guide
Symbols Dictionary: Flowers and Plants
Symbolism of Plants, Trees, and Herbs
Plants
Language of flowers
Lists of plants | List of plants with symbolism | [
"Biology"
] | 175 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
569,705 | https://en.wikipedia.org/wiki/Cell-mediated%20immunity | Cellular immunity, also known as cell-mediated immunity, is an immune response that does not rely on the production of antibodies. Rather, cell-mediated immunity is the activation of phagocytes, antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines in response to an antigen.
History
In the late 19th century, in the Hippocratic tradition of medicine, the immune system was imagined as divided into two branches: humoral immunity, for which the protective function of immunization could be found in the humor (cell-free bodily fluid or serum), and cellular immunity, for which the protective function of immunization was associated with cells. CD4 cells or helper T cells provide protection against different pathogens. Naive T cells, which are immature T cells that have yet to encounter an antigen, are converted into activated effector T cells after encountering antigen-presenting cells (APCs). These APCs, such as macrophages, dendritic cells, and B cells in some circumstances, load antigenic peptides onto the major histocompatibility complex (MHC) of the cell, in turn presenting the peptide to receptors on T cells. The most important of these APCs are highly specialized dendritic cells, conceivably operating solely to ingest and present antigens. Activated effector T cells fall into three functional classes, detecting peptide antigens originating from various types of pathogen: 1) cytotoxic T cells, which kill infected target cells by apoptosis without using cytokines; 2) Th1 cells, which primarily function to activate macrophages; and 3) Th2 cells, which primarily function to stimulate B cells into producing antibodies.
In another ideology, the innate immune system and the adaptive immune system each comprise both humoral and cell-mediated components. Some cell-mediated components of the innate immune system include myeloid phagocytes, innate lymphoid cells (NK cells) and intraepithelial lymphocytes.
Synopsis
Cellular immunity protects the body through:
T-cell mediated immunity or T-cell immunity: activating antigen-specific cytotoxic T cells that are able to induce apoptosis in body cells displaying epitopes of foreign antigen on their surface, such as virus-infected cells, cells with intracellular bacteria, and cancer cells displaying tumor antigens;
Macrophage and natural killer cell action: enabling the destruction of pathogens via recognition and secretion of cytotoxic granules (for natural killer cells) and phagocytosis (for macrophages); and
Stimulating cells to secrete a variety of cytokines that influence the function of other cells involved in adaptive immune responses and innate immune responses.
Cell-mediated immunity is directed primarily at microbes that survive in phagocytes and microbes that infect non-phagocytic cells. It is most effective in removing virus-infected cells, but also participates in defending against fungi, protozoans, cancers, and intracellular bacteria. It also plays a major role in transplant rejection.
Type 1 immunity is directed primarily at viruses, bacteria, and protozoa and is responsible for activating macrophages, turning them into potent effector cells. This is achieved by the secretion of interferon gamma and TNF.
Overview
CD4+ T-helper cells may be differentiated into two main categories:
TH1 cells which produce interferon gamma and lymphotoxin alpha,
TH2 cells which produce IL-4, IL-5, and IL-13.
A third category called T helper 17 cells (TH17) were also discovered which are named after their secretion of Interleukin 17.
CD8+ cytotoxic T-cells may also be categorized as:
Tc1 cells,
Tc2 cells.
Similarly to CD4+ TH cells, a third category called TC17 were discovered that also secrete IL-17.
As for the ILCs, they may be classified into three main categories:
ILC1 which secrete type 1 cytokines,
ILC2 which secrete type 2 cytokines,
ILC3 which secrete type 17 cytokines.
Development of cells
All type 1 cells begin their development from the common lymphoid progenitor (CLp), which then differentiates to become the common innate lymphoid progenitor (CILp) and the T-cell progenitor (Tp) through the process of lymphopoiesis.
Common innate lymphoid progenitors may then be differentiated into a natural killer progenitor (NKp) or a common helper like innate lymphoid progenitor (CHILp). NKp cells may then be induced to differentiate into natural killer cells by IL-15. CHILp cells may be induced to differentiate into ILC1 cells by IL-15, into ILC2 cells by IL-7 or ILC3 cells by IL-7 as well.
T-cell progenitors may differentiate into naïve CD8+ cells or naïve CD4+ cells. Naïve CD8+ cells may then further differentiate into TC1 cells upon IL-12 exposure; IL-4 can induce differentiation into TC2 cells, and IL-1 or IL-23 can induce differentiation into TC17 cells. Naïve CD4+ cells may differentiate into TH1 cells upon IL-12 exposure, TH2 upon IL-4 exposure, or TH17 upon IL-1 or IL-23 exposure.
Type 1 immunity
Type 1 immunity makes use of the type 1 subset for each of these cell types. By secreting interferon gamma and TNF, TH1, TC1, and group 1 ILCS activate macrophages, converting them to potent effector cells. It provides defense against intracellular bacteria, protozoa, and viruses. It is also responsible for inflammation and autoimmunity with diseases such as rheumatoid arthritis, multiple sclerosis, and inflammatory bowel disease all being implicated in type 1 immunity. Type 1 immunity consists of these cells:
CD4+ TH1 cells
CD8+ cytotoxic T cells (Tc1)
T-bet+ interferon gamma-producing group 1 ILCs (ILC1s and natural killer cells)
CD4+ TH1 Cells
It has been found in both mice and humans that the signature cytokines for these cells are interferon gamma and lymphotoxin alpha. The main cytokine for differentiation into TH1 cells is IL-12 which is produced by dendritic cells in response to the activation of pattern recognition receptors. T-bet is a distinctive transcription factor of TH1 cells. TH1 cells are also characterized by the expression of chemokine receptors which allow their movement to sites of inflammation. The main chemokine receptors on these cells are CXCR3A and CCR5. Epithelial cells and keratinocytes are able to recruit TH1 cells to sites of infection by releasing the chemokines CXCL9, CXCL10 and CXCL11 in response to interferon gamma. Additionally, interferon gamma secreted by these cells seems to be important in downregulating tight junctions in the epithelial barrier.
CD8+ TC1 Cells
These cells generally produce interferon gamma. Interferon gamma and IL-12 promote differentiation toward TC1 cells. T-bet activation is required for both interferon gamma and cytolytic potential. CCR5 and CXCR3 are the main chemokine receptors for this cell.
Group 1 ILCs
Group 1 ILCs are defined to include ILCs expressing the transcription factor T-bet and were originally thought to include only natural killer cells. Recently, a large number of NKp46+ cells expressing certain master transcription factors have been identified, allowing them to be designated as a distinct lineage termed ILC1s. ILC1s are characterized by the ability to produce interferon gamma, TNF, GM-CSF and IL-2 in response to cytokine stimulation, but have low or no cytotoxic ability.
See also
Immune system
Humoral immunity (vs. cell-mediated immunity)
Immunity
References
Bibliography
Cell-mediated immunity (Encyclopædia Britannica)
Chapter 8: T Cell-Mediated Immunity. Immunobiology: The Immune System in Health and Disease, 5th edition.
The 3 major types of innate and adaptive cell-mediated effector immunity
Innate lymphocytes-lineage, localization and timing of differentiation
Further reading
Cell-mediated immunity: How T cells recognize and respond to foreign antigens
Immunology
Helper
Human cells
Phagocytes
Cell biology
Immune system
Lymphatic system
Infectious diseases
Cell signaling | Cell-mediated immunity | [
"Chemistry",
"Biology"
] | 1,868 | [
"Cell biology",
"Immune system",
"Signal transduction",
"Immunology",
"Cytokines",
"Organ systems",
"Apoptosis"
] |
569,755 | https://en.wikipedia.org/wiki/Localhost | In computer networking, localhost is a hostname that refers to the current computer used to access it. The name localhost is reserved for loopback purposes.
It is used to access the network services that are running on the host via the loopback network interface. Using the loopback interface bypasses any local network interface hardware.
Loopback
The local loopback mechanism may be used to run a network service on a host without requiring a physical network interface, or without making the service accessible from the networks the computer may be connected to. For example, a locally installed website may be accessed from a Web browser by the URL http://localhost to display its home page.
IPv4 network standards reserve the entire 127.0.0.0/8 address block (more than 16 million addresses) for loopback purposes. That means any packet sent to any of those addresses is looped back. The address 127.0.0.1 is the standard address for IPv4 loopback traffic; the rest are not supported by all operating systems. However, they can be used to set up multiple server applications on the host, all listening on the same port number. In the IPv6 addressing architecture there is only a single address assigned for loopback: ::1. The standard precludes the assignment of that address to any physical interface, as well as its use as the source or destination address in any packet sent to remote hosts.
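As an illustrative sketch (Python, standard library only): because every address in this block loops back, two services can listen on the same port number by binding to different loopback addresses. The address 127.0.0.2 is purely an example and, as noted above, may not be configured on every operating system.

import socket

def loopback_listener(address: str, port: int) -> socket.socket:
    # Create a TCP listening socket bound to one specific loopback address
    # (not the wildcard address), so several listeners can share a port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((address, port))
    srv.listen()
    return srv

server_a = loopback_listener("127.0.0.1", 8080)
server_b = loopback_listener("127.0.0.2", 8080)  # may fail where only 127.0.0.1 is configured
print(server_a.getsockname(), server_b.getsockname())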
Name resolution
The name localhost normally resolves to the IPv4 loopback address 127.0.0.1, and to the IPv6 loopback address ::1.
This resolution is normally configured by the following lines in the operating system's hosts file:
127.0.0.1 localhost
::1 localhost
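A quick way to confirm this on a given machine is to ask the system resolver directly. The following Python sketch typically prints both loopback addresses, though the exact output depends on the local hosts file and resolver configuration.

import socket

# Collect every address the resolver associates with the name "localhost".
# On a typical system this yields 127.0.0.1 (IPv4) and ::1 (IPv6).
addresses = {entry[4][0] for entry in socket.getaddrinfo("localhost", None)}
print(sorted(addresses))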
The name may also be resolved by Domain Name System (DNS) servers, but there are special considerations governing the use of this name:
An IPv4 or IPv6 address query for the name localhost must always resolve to the respective loopback address.
Applications may resolve the name to a loopback address themselves, or pass it to the local name resolver mechanisms.
When a name resolver receives an address (A or AAAA) query for localhost, it should return the appropriate loopback addresses, and negative responses for any other requested record types. Queries for localhost should not be sent to caching name servers.
To avoid burdening the Domain Name System root servers with traffic, caching name servers should never request name server records for localhost, or forward resolution to authoritative name servers.
When authoritative name servers receive queries for 'localhost' in spite of the provisions mentioned above, they should resolve them appropriately.
In addition to the mapping of localhost to the loopback addresses (127.0.0.1 and ::1), localhost may also be mapped to other IPv4 (loopback) addresses and it is also possible to assign other, or additional, names to any loopback address. The mapping of localhost to addresses other than the designated loopback address range in the hosts file or in DNS is not guaranteed to have the desired effect, as applications may map the name internally.
In the Domain Name System, the name .localhost is reserved as a top-level domain name, originally set aside to avoid confusion with the hostname localhost. Domain name registrars are precluded from delegating domain names in the top-level .localhost domain.
Historical notes
In 1981, the 127.0.0.0/8 block was given 'reserved' status so that it would not be assigned as a general-purpose class A IP network.
This block was officially assigned for loopback purposes in 1986.
Its purpose as a Special Use IPv4 Address block was confirmed in 1994, 2002, 2010, and most recently in 2013.
From the outset, in 1995, the single IPv6 loopback address ::1 was defined. Its purpose and definition remained unchanged in 1998 and 2003, up to the current definition in 2006.
Packet processing
The processing of any packet sent to a loopback address is implemented in the link layer of the TCP/IP stack. Such packets are never passed to any network interface controller (NIC) or hardware device driver and must not appear outside of a computing system, or be routed by any router. This permits software testing and local services, even in the absence of any hardware network interfaces.
Looped-back packets are distinguished from any other packets traversing the TCP/IP stack only by the special IP address they were addressed to. Thus, the services that ultimately receive them respond according to the specified destination. For example, an HTTP service could route packets addressed to two different loopback addresses to two different Web servers, or to a single server that returns different web pages. To simplify such testing, the hosts file may be configured to provide appropriate names for each address.
Packets received on a non-loopback interface with a loopback source or destination address must be dropped. Such packets are sometimes referred to as Martian packets. As with any other bogus packets, they may be malicious and any problems they might cause can be avoided by applying bogon filtering.
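A minimal Python sketch of this filtering rule (a hypothetical helper for illustration; real routers and operating systems apply the rule in the forwarding path rather than in application code):

import ipaddress

def is_martian_loopback(src: str, dst: str, received_on_loopback: bool) -> bool:
    # A packet carrying a loopback source or destination address is only
    # legitimate if it actually arrived on the loopback interface.
    involves_loopback = (ipaddress.ip_address(src).is_loopback
                         or ipaddress.ip_address(dst).is_loopback)
    return involves_loopback and not received_on_loopback

print(is_martian_loopback("127.0.0.1", "192.0.2.10", received_on_loopback=False))  # True: drop it
print(is_martian_loopback("::1", "::1", received_on_loopback=True))                # False: normal loopback traffic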
Special cases
The releases of the MySQL database differentiate between the use of the hostname localhost and the use of the addresses 127.0.0.1 and ::1. When using localhost as the destination in a client connector interface of an application, the MySQL application programming interface connects to the database using a Unix domain socket, while a TCP connection via the loopback interface requires the direct use of the explicit address.
One notable exception to the use of the 127.0.0.0/8 loopback addresses is their use in Multiprotocol Label Switching (MPLS) traceroute error detection, in which their property of not being routable provides a convenient means to avoid delivery of faulty packets to end users.
See also
Private network
Reserved IP addresses
0.0.0.0
References
Internet architecture
IP addresses | Localhost | [
"Technology"
] | 1,189 | [
"Internet architecture",
"IT infrastructure"
] |
569,759 | https://en.wikipedia.org/wiki/Link%20register | A link register (LR for short) is a register which holds the address to return to when a subroutine call completes. This is more efficient than the more traditional scheme of storing return addresses on a call stack, sometimes called a machine stack. The link register does not require the writes and reads of the memory containing the stack which can save a considerable percentage of execution time with repeated calls of small subroutines.
The IBM POWER architecture, and its PowerPC and Power ISA successors, have a special-purpose link register, into which subroutine call instructions put the return address.
In some other instruction sets, such as the ARM architectures, SPARC, and OpenRISC, subroutine call instructions put the return address into a specific general-purpose register, so that register is designated by the instruction set architecture as the link register. The ARMv7 architecture uses general-purpose register R14 as the link register, OpenRISC uses register r9, and SPARC uses "output register 7" or o7.
In some others, such as PA-RISC, RISC-V, and the IBM System/360 and its successors, including z/Architecture, the subroutine call instruction can store the return address in any general-purpose register; a particular register is usually chosen, by convention, to be used as the link register.
Some architectures have two link registers: a standard "branch link register" for most subroutine calls, and a special "interrupt link register" for interrupts. One of these is ARCv2 (ARC processors using version 2 of the ARCompact architecture), which uses general-purpose-registers r29 for the interrupt link register and r31 for the branch link register. References to "the link register" on such platforms will be referring to the branch link register.
Earlier ARC processors based on the ARCompact and ARCtangent architectures had three link registers: two interrupt link registers (ILINK) and one branch link register (BLINK). The two interrupt link registers were ILINK1 (for level 1 (low priority) maskable interrupts), and ILINK2 (for level 2 (mid priority) maskable interrupts). In these architectures, r29 was used as the level 1 interrupt link register, r30 as the level 2 interrupt link register, and r31 as the branch link register. ILINK1 and ILINK2 were not accessible in user mode on the ARC 700 processors.
The use of a link register, regardless of whether it is a dedicated register or a general-purpose register, allows for faster calls to leaf subroutines. When the subroutine is non-leaf, passing the return address in a register can still result in generation of more efficient code for thunks, e.g. for a subroutine whose sole purpose is to call another subroutine with arguments rearranged in some way. Other subroutines can benefit from the use of the link register because it can be saved in a batch with other callee-used registers—e.g. an ARM subroutine pushes registers 4-7 along with the link register, LR, by the single instruction
STMDB SP!, {R4-R7, LR}, pipelining all of the memory writes required.
References
Digital registers | Link register | [
"Technology"
] | 685 | [
"Computing stubs",
"Computer hardware stubs"
] |
569,831 | https://en.wikipedia.org/wiki/Flat%20chain | Flat chain is a form of chain used chiefly in agricultural machinery. Early machinery made extensive use of flat chain. It has been gradually replaced in most applications by roller chain, which is quieter, lasts longer, and requires less frequent retensioning.
Modern flat chain is made from stamped steel. Individual links can be put together or taken apart using simple tools, unlike roller chain which requires a master link or special splicing equipment.
Today, flat chain is used most often for conveyor belts, because it lends itself well to the attachment of slats, flights, buckets, and prongs used to move material. Such attachments can be welded on in the field, or can be purchased ready-made on a single link (or pair of links where the conveyor uses two chains) and then spliced into a loop of chain.
Older forms of flat chain were made of iron. Though the sprockets are compatible with modern chain, the two types cannot be spliced together.
References
External links
The Complete Guide to Chain
History of Link-Belt Construction Equipment Co., an early manufacturer of flat chain.
Chain drives
Mechanical power transmission | Flat chain | [
"Physics"
] | 234 | [
"Mechanical power transmission",
"Mechanics"
] |
569,840 | https://en.wikipedia.org/wiki/Mechanoreceptor | A mechanoreceptor, also called mechanoceptor, is a sensory receptor that responds to mechanical pressure or distortion. Mechanoreceptors are located on sensory neurons that convert mechanical pressure into electrical signals that, in animals, are sent to the central nervous system.
Vertebrate mechanoreceptors
Cutaneous mechanoreceptors
Cutaneous mechanoreceptors respond to mechanical stimuli that result from physical interaction, including pressure and vibration. They are located in the skin, like other cutaneous receptors. They are all innervated by Aβ fibers, except the mechanorecepting free nerve endings, which are innervated by Aδ fibers. Cutaneous mechanoreceptors can be categorized by what kind of sensation they perceive, by the rate of adaptation, and by morphology. Furthermore, each has a different receptive field.
By sensation
The Slowly Adapting type 1 (SA1) mechanoreceptor, with the Merkel corpuscle end-organ (also known as Merkel discs) detect sustained pressure and underlies the perception of form and roughness on the skin. They have small receptive fields and produce sustained responses to static stimulation.
The Slowly Adapting type 2 (SA2) mechanoreceptors, with the Ruffini corpuscle end-organ (also known as the bulbous corpuscles), detect tension deep in the skin and fascia and respond to skin stretch, but have not been closely linked to either proprioceptive or mechanoreceptive roles in perception. They also produce sustained responses to static stimulation, but have large receptive fields.
The Rapidly Adapting (RA) or Meissner corpuscle end-organ mechanoreceptor (also known as the tactile corpuscles) underlies the perception of light touch such as flutter and slip on the skin. It adapts rapidly to changes in texture (vibrations around 50 Hz). They have small receptive fields and produce transient responses to the onset and offset of stimulation.
The Pacinian corpuscle or Vater-Pacinian corpuscles or Lamellar corpuscles in the skin and fascia detect rapid vibrations of about 200–300 Hz. They also produce transient responses, but have large receptive fields.
Free nerve endings detect touch, pressure, stretching, as well as the tickle and itch sensations. Itch sensations are caused by stimulation of free nerve ending from chemicals.
Hair follicle receptors called hair root plexuses sense when a hair changes position. Indeed, the most sensitive mechanoreceptors in humans are the hair cells in the cochlea of the inner ear (no relation to the follicular receptors – they are named for the hair-like mechanosensory stereocilia they possess); these receptors transduce sound for the brain.
By rate of adaptation
Cutaneous mechanoreceptors can also be separated into categories based on their rates of adaptation. When a mechanoreceptor receives a stimulus, it begins to fire impulses or action potentials at an elevated frequency (the stronger the stimulus, the higher the frequency). The cell, however, will soon "adapt" to a constant or static stimulus, and the pulses will subside to a normal rate. Receptors that adapt quickly (i.e., quickly return to a normal pulse rate) are referred to as "phasic". Those receptors that are slow to return to their normal firing rate are called tonic. Phasic mechanoreceptors are useful in sensing such things as texture or vibrations, whereas tonic receptors are useful for temperature and proprioception among others.
Slowly adapting: Slowly adapting mechanoreceptors include Merkel and Ruffini corpuscle end-organs, and some free nerve endings.
Slowly adapting type I mechanoreceptors have multiple Merkel corpuscle end-organs.
Slowly adapting type II mechanoreceptors have single Ruffini corpuscle end-organs.
Intermediate adapting: Some free nerve endings are intermediate adapting.
Rapidly adapting: Rapidly adapting mechanoreceptors include Meissner corpuscle end-organs, Pacinian corpuscle end-organs, hair follicle receptors and some free nerve endings.
Rapidly adapting type I mechanoreceptors have multiple Meissner corpuscle end-organs.
Rapidly adapting type II mechanoreceptors (usually called Pacinian) have single Pacinian corpuscle end-organs.
By receptive field
Cutaneous mechanoreceptors with small, accurate receptive fields are found in areas needing accurate taction (e.g. the fingertips). In the fingertips and lips, innervation density of slowly adapting type I and rapidly adapting type I mechanoreceptors are greatly increased. These two types of mechanoreceptors have small discrete receptive fields and are thought to underlie most low-threshold use of the fingers in assessing texture, surface slip, and flutter. Mechanoreceptors found in areas of the body with less tactile acuity tend to have larger receptive fields.
Lamellar corpuscles
Lamellar corpuscles, or Pacinian corpuscles or Vater-Pacini corpuscle, are deformation or pressure receptors located in the skin and also in various internal organs. Each is connected to a sensory neuron. Because of its relatively large size, a single lamellar corpuscle can be isolated and its properties studied. Mechanical pressure of varying strength and frequency can be applied to the corpuscle by stylus, and the resulting electrical activity detected by electrodes attached to the preparation.
Deforming the corpuscle creates a generator potential in the sensory neuron arising within it. This is a graded response: the greater the deformation, the greater the generator potential. If the generator potential reaches threshold, a volley of action potentials (nerve impulses) is triggered at the first node of Ranvier of the sensory neuron.
Once threshold is reached, the magnitude of the stimulus is encoded in the frequency of impulses generated in the neuron. So the more massive or rapid the deformation of a single corpuscle, the higher the frequency of nerve impulses generated in its neuron.
The optimal sensitivity of a lamellar corpuscle is 250 Hz, the frequency range generated on the fingertips by textures made of features smaller than 200 micrometres.
Ligamentous mechanoreceptors
There are four types of mechanoreceptors embedded in ligaments. As all these types of mechanoreceptors are myelinated, they can rapidly transmit sensory information regarding joint positions to the central nervous system.
Type I: (small) Low threshold, slow adapting in both static and dynamic settings
Type II: (medium) Low threshold, rapidly adapting in dynamic settings
Type III: (large) High threshold, slowly adapting in dynamic settings
Type IV: (very small) High threshold pain receptors that communicate injury
Type II and Type III mechanoreceptors in particular are believed to be linked to one's sense of proprioception.
Other mechanoreceptors
Other mechanoreceptors than cutaneous ones include the hair cells, which are sensory receptors in the vestibular system of the inner ear, where they contribute to the auditory system and equilibrioception. Baroreceptors are a type of mechanoreceptor sensory neuron that is excited by stretch of the blood vessel. There are also juxtacapillary (J) receptors, which respond to events such as pulmonary edema, pulmonary emboli, pneumonia, and barotrauma.
Muscle spindles and the stretch reflex
The knee jerk is the popularly known stretch reflex (involuntary kick of the lower leg) induced by tapping the knee with a rubber-headed hammer. The hammer strikes a tendon that inserts an extensor muscle in the front of the thigh into the lower leg. Tapping the tendon stretches the thigh muscle, which activates stretch receptors within the muscle called muscle spindles. Each muscle spindle consists of sensory nerve endings wrapped around special muscle fibers called intrafusal muscle fibers. Stretching an intrafusal fiber initiates a volley of impulses in the sensory neuron (a I-a neuron) attached to it. The impulses travel along the sensory axon to the spinal cord where they form several kinds of synapses:
Some of the branches of the I-a axons synapse directly with alpha motor neurons. These carry impulses back to the same muscle causing it to contract. The leg straightens.
Some of the branches of the I-a axons synapse with inhibitory interneurons in the spinal cord. These, in turn, synapse with motor neurons leading back to the antagonistic muscle, a flexor in the back of the thigh. By inhibiting the flexor, these interneurons aid contraction of the extensor.
Still other branches of the I-a axons synapse with interneurons leading to brain centers, e.g., the cerebellum, that coordinate body movements.
Mechanism of sensation
In somatosensory transduction, the afferent neurons transmit messages through synapses in the dorsal column nuclei, where second-order neurons send the signal to the thalamus and synapse with third-order neurons in the ventrobasal complex. The third-order neurons then send the signal to the somatosensory cortex.
More recent work has expanded the role of the cutaneous mechanoreceptors for feedback in fine motor control. Single action potentials from Meissner's corpuscle, Pacinian corpuscle and Ruffini ending afferents are directly linked to muscle activation, whereas Merkel cell-neurite complex activation does not trigger muscle activity.
Invertebrate mechanoreceptors
Insect and arthropod mechanoreceptors include:
Campaniform sensilla: Small domes in the exoskeleton that are distributed all along the insect's body. These cells are thought to detect mechanical load as resistance to muscle contraction, similar to the mammalian Golgi tendon organs.
Hair plates: Sensory neurons that innervate hairs that are found in the folds of insect joints. These hairs are deflected when one body segment moves relative to an adjoining segment, they have proprioceptive function, and are thought to act as limit detectors encoding the extreme ranges of motion for each joint.
Chordotonal organs: Internal stretch receptors at the joints, can have both extero- and proprioceptive functions. The neurons in the chordotonal organ in Drosophila melanogaster can be organized into club, claw, and hook neurons. Club neurons are thought to encode vibrational signals while claw and hook neurons can be subdivided into extension and flexion populations that encode joint angle and movement respectively.
Slit sensilla: Slits in the exoskeleton that detect physical deformation of the animal's exoskeleton, have proprioceptive function.
Bristle sensilla: Bristle neurons are mechanoreceptors that innervate hairs all along the body. Each neuron extends a dendritic process to innervate a single hair and projects its axon to the ventral nerve cord. These neurons are thought to mediate touch sensation by responding to physical deflections of the hair. In line with the fact that many insects exhibit different sized hairs, commonly referred to as macrochaetes (thicker longer hairs) and microchaetes (thinner shorter hairs), previous studies suggest that bristle neurons to these different hairs may have different firing properties such as resting membrane potential and firing threshold.
Plant mechanoreceptors
Mechanoreceptors are also present in plant cells where they play an important role in normal growth, development and the sensing of their environment. Mechanoreceptors aid the Venus flytrap (Dionaea muscipula Ellis) in capturing large prey.
Molecular biology
Mechanoreceptor proteins are ion channels whose ion flow is induced by touch. Early research showed that touch transduction in the nematode Caenorhabditis elegans requires a two-transmembrane, amiloride-sensitive ion channel protein related to epithelial sodium channels (ENaCs). This protein, called MEC-4, forms a heteromeric Na+-selective channel together with MEC-10. Related genes in mammals are expressed in sensory neurons and were shown to be gated by low pH. The first such receptor was ASIC1a, so named because it is an acid-sensing ion channel (ASIC).
See also
Somatosensory system
Thermoreceptor
Nociceptor
Stretch sensor
Vestibular system
Stretch receptor
References
External links
Sensory receptors
Sensory systems
Perception
Ethology | Mechanoreceptor | [
"Biology"
] | 2,660 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
569,846 | https://en.wikipedia.org/wiki/Co-sleeping | Co-sleeping or bed sharing is a practice in which babies and young children sleep close to one or both parents, as opposed to in a separate room. Co-sleeping individuals sleep in sensory proximity to one another, where the individual senses the presence of others. This sensory proximity can either be triggered by touch, smell, taste, or noise. Therefore, the individuals can be a few centimeters away or on the other side of the room and still have an effect on the other. It is standard practice in many parts of the world, and is practiced by a significant minority in countries where cribs are also used.
Bed-sharing, a practice in which babies and young children sleep in the same bed with one or both parents, is a subset of co-sleeping. Co-bedding refers to infants (typically twins or higher-order multiples) sharing the same bed.
Whether cosleeping or using another sleep surface, it is considered important for the baby to be in the same room as an adult, committed caregiver for all sleeps — day and night — in early life. This is known to reduce the risk of SIDS by 50 per cent. Some organizations such as Red Nose Australia recommend this for the first 12 months of life and others such as the NHS recommend it for the first 6 months.
Introduction
Bed-sharing among married couples is standard practice in many parts of the world outside of North America, Europe and Australia, and even in the latter areas a significant minority of children have shared a bed with their parents at some point in childhood. One 2006 study of children age 3–10 in India reported 93% of children bed-sharing while a 2006 study of children in Kentucky in the United States reported 15% of infants and toddlers 2 weeks to 2 years engage in bed-sharing.
Bed-sharing was widely practiced in all areas up to the 19th century, until the advent of giving the child his or her own room and the crib. In many parts of the world, bed-sharing simply has the practical benefit of keeping the child warm at night. Bed-sharing has been relatively recently re-introduced into Western culture by practitioners of attachment parenting. Proponents hold that bed-sharing saves babies' lives (especially in conjunction with nursing),
promotes bonding, enables the parents to get more sleep and facilitates breastfeeding. Older babies can breastfeed during the night without waking their mother. Opponents argue that co-sleeping is stressful for the child when they are not co-sleeping. They also cite concerns that a parent may smother the child or promote an unhealthy dependence of the child on the parent(s).
Because children become accustomed to behaviors learned in early experiences, bed-sharing in infancy will also increase the likelihood of these children to crawl into their parent's bed in ages past infancy.
Health and safety
Health care professionals disagree about bed-sharing techniques, effectiveness, and ethics. However, safe cosleeping and bedsharing guidelines can be found on Lullaby Trust, whereas organisations such as UNICEF outline the primary factors leading to hazardous cosleeping.
Traditional and cultural bedsharing and caregiving practices have also been found to reduce the risk of SIDS for certain populations, but the opposite has been found in others, increasing deaths categorised within SUDI (sudden unexpected death in infancy).
Known risks
There are certain dangerous behaviors that increase SIDS risk and should be avoided whether placing a baby in a crib or co-sleeping: infants should always sleep on their backs on a firm surface (not waterbeds, pillows, recliners, or couches), mattresses should fit the bedframe tightly, there should be no stuffed animals or soft toys near the baby, blankets should be light, a baby's head should never be covered, and other SIDS risk factors should be avoided. In addition some parents pose threats to infants due to their behaviors and conditions, such as smoking or drinking heavily, taking drugs, a history of skin infections, obesity, or any other specific risk-increasing traits.
Co-sleeping also increases the risks of suffocation and strangulation. The soft quality of the mattresses, comforters, and pillows may suffocate the infants. Some experts, then, recommend that the bed should be firm, and should not be a waterbed or couch; and that heavy quilts, comforters, and pillows should not be used. Another common advice given to prevent suffocation is to keep a baby on its back, not its stomach. Parents who roll over during their sleep could inadvertently crush and/or suffocate their child, especially if they are heavy sleepers, over-tired or over-exhausted and/or obese. There is also the risk of the baby falling to a hard floor, or getting wedged between the bed and the wall or headboard. A proposed solution to these problems is the bedside bassinet, in which, rather than bed-sharing, the baby's bed is placed next to the parent's bed.
Another precaution recommended by experts is that young children should never sleep next to babies under nine months of age.
A 2008 report explored the relationship between ad hoc parental behaviors similar to traditional co-sleeping methodology, though the study's subjects typically utilized cribs and other paraphernalia counter to co-sleeping models. While babies who had been exposed to behaviors reminiscent of co-sleeping had significant problems with sleep later in life, the study concluded that the parental behaviors were a reaction to already-present sleep difficulties. Most relationships between parental behavior and sleeping trouble were not statistically significant when controlled for those preexisting conditions. Further, typical co-sleeping parental behavior, like maternal presence at onset of sleep, were found to be protective factors against sleep problems.
Association with sudden infant death syndrome (SIDS)
Co-sleeping can often be regarded as an unnecessary practice that can be associated with issues such as sudden infant death syndrome (SIDS). However, research shows that opinions vary in the association between SIDS and co-sleeping. The most controversial issue regarding SIDS is whether bed sharing is a main cause, and whether it should be avoided or encouraged.
Some research indicates that SIDS risk increases with co-sleeping, particularly bed-sharing; other research indicates that co-sleeping done in an "appropriate and safe" manner reduces SIDS risk. As an example of the latter, the Pacific Islands Families study, conducted in New Zealand, indicated that the adoption of safe bed-sharing and room-sharing practices were saving infant lives, and found no examples of an infant dying from SIDS.
Arguments in favor
One study reported mothers getting more sleep and breast-feeding by co-sleeping than other arrangements. Parents also experience less exhaustion with such ease in feeding and comforting their child by simply reaching over to the child. As a result, co-sleeping also increases the responsiveness of parents to their child's needs.
It has been argued that co-sleeping evolved over five million years, that it alters the infant's sleep experience and the number of maternal inspections of the infant, and that it provides a beginning point for considering possibly unconventional ways of helping reduce the risk of sudden infant death syndrome (SIDS).
Stress hormones are lower in mothers and babies who co-sleep, specifically the balance of the stress hormone cortisol, the control of which is essential for a baby's healthy growth. In studies with animals, infants who stayed close to their mothers had higher levels of growth hormones and enzymes necessary for brain and heart growth. Also, the physiology of co-sleeping babies is more stable, including more stable temperatures, more regular heart rhythms, and fewer long pauses in breathing than babies who sleep alone.
Besides physical developmental advantages, co-sleeping may also promote long-term emotional health. In long-term follow-up studies of infants who slept with their parents and those who slept alone, the children who co-slept were happier, less anxious, had higher self-esteem, were less likely to be afraid of sleep, had fewer behavioral problems, tended to be more comfortable with intimacy, and were generally more independent as adults.
Products for infants
There are several products that claim they can be used to facilitate safe co-sleeping with an infant; however, these claims are not evidence-based:
special-purpose bedside bassinets, sidecar sleepers and bedside sleepers, which attach directly to the side of an adult bed and are open to the parent's side, but have barriers on the other three sides.
bed top co-sleeping products designed to prevent the baby from rolling off the adult bed and to absorb breastmilk and other nighttime leaks.
side rails to prevent the child from rolling off the adult bed.
co-sleeping infant enclosures which are placed directly in the adult bed.
specially designed separate sleeping bags for parents and infants which prevent covers being inadvertently pulled over the baby's head.
wahakura: a simple woven basket that allows babies to safely sleep in the same bed as parents.
Prevalence
A study of a small population in Northeast England showed a variety of nighttime parenting strategies and that 65% of the sample had bed-shared, 95% of them having done so with both parents. The study reported that some of the parents found bedsharing effective, yet were covert in their practices, fearing disapproval of health professionals and relatives. A National Center for Health Statistics survey from 1991 to 1999 found that 25% of American families always, or almost always, slept with their baby in bed, 42% slept with their baby sometimes, and 32% never bed-shared with their baby.
Factors
Socioeconomic factors
Initial assumptions on co-sleeping may place it in a context of income and socioeconomic status. Generally, families of low socioeconomic status will be unable to afford a separate room for a child while those of high socioeconomic status can more easily afford a home with a sufficient number of rooms. However, statistical data shows the prevalence of co-sleeping in wealthy Japanese families and the ability of poor Western families to still find a separate space for their child, suggests that the acceptance of co-sleeping is a result of culture.
Cultural factors
Several studies show that the prevalence of co-sleeping is a result of cultural preference. In a study of 19 nations, a trend emerged, depicting a widely accepted practice of co-sleeping in Asian, African, and Latin American countries, while European and North American countries rarely practiced it. This trend resulted mostly from the respective fears of parents: Asian, African, and Latin American parents worried about the separation between the parents and the child, while European and North American parents feared a lack of privacy for both the parents and the child.
See also
Infant bed
Overlying
References
Further reading
Moreno MA, Rivara FP. Bed Sharing: A Controversial but Common Practice. JAMA Pediatrics. 2013;167:1088.
Jackson, Deborah. Three in a Bed: The Benefits of Sharing Your Bed with Your Baby, New York: Bloomsbury, 1999.
McKenna, James J. Sleeping with Your Baby, Washington, D.C.: Platypus Media, 2007.
Thevenin, Tine. The Family Bed, New Jersey: Avery Publishing Group, 1987.
Simard, V., et al. (2008). The Predictive Role of Maladaptive Parental Behaviors, Early Sleep Problems, and Child/Mother Psychological Factors. Archives of Pediatrics and Adolescent Medicine Available at: http://archpedi.ama-assn.org/cgi/content/short/162/4/360
Breastfeeding
Childhood
Infancy
Parenting
Sleep
Intimate relationships | Co-sleeping | [
"Biology"
] | 2,362 | [
"Behavior",
"Sleep"
] |
569,850 | https://en.wikipedia.org/wiki/Roller%20chain | Roller chain or bush roller chain is the type of chain drive most commonly used for transmission of mechanical power on many kinds of domestic, industrial and agricultural machinery, including conveyors, wire- and tube-drawing machines, printing presses, cars, motorcycles, and bicycles. It consists of a series of short cylindrical rollers held together by side links. It is driven by a toothed wheel called a sprocket. It is a simple, reliable, and efficient means of power transmission.
Sketches by Leonardo da Vinci in the 16th century show a chain with a roller bearing. In 1800, James Fussell patented a roller chain on development of his balance lock and in 1880 Hans Renold patented a bush roller chain.
Construction
There are two types of links alternating in the bush roller chain. The first type is inner links, having two inner plates held together by two sleeves or bushings upon which rotate two rollers. Inner links alternate with the second type, the outer links, consisting of two outer plates held together by pins passing through the bushings of the inner links. The "bushingless" roller chain is similar in operation though not in construction; instead of separate bushings or sleeves holding the inner plates together, the plate has a tube stamped into it protruding from the hole which serves the same purpose. This has the advantage of removing one step in assembly of the chain.
The roller chain design reduces friction compared to simpler designs, resulting in higher efficiency and less wear. The original power transmission chain varieties lacked rollers and bushings, with both the inner and outer plates held by pins which directly contacted the sprocket teeth; however this configuration exhibited extremely rapid wear of both the sprocket teeth and the plates where they pivoted on the pins. This problem was partially solved by the development of bushed chains, with the pins holding the outer plates passing through bushings or sleeves connecting the inner plates. This distributed the wear over a greater area; however the teeth of the sprockets still wore more rapidly than is desirable, from the sliding friction against the bushings. The addition of rollers surrounding the bushing sleeves of the chain provided rolling contact with the teeth of the sprockets, resulting in excellent resistance to wear of both sprockets and chain. Friction is also very low, as long as the chain is sufficiently lubricated. Continuous, clean lubrication of roller chains is of primary importance for efficient operation, as is correct tensioning.
Lubrication
Many driving chains (for example, in factory equipment, or driving a camshaft inside an internal combustion engine) operate in clean environments, and thus the wearing surfaces (that is, the pins and bushings) are safe from precipitation and airborne grit, many even in a sealed environment such as an oil bath. Some roller chains are designed to have o-rings built into the space between the outside link plate and the inside roller link plates. Chain manufacturers began to include this feature in 1971 after the application was invented by Joseph Montano while working for Whitney Chain of Hartford, Connecticut. O-rings were included as a way to improve lubrication to the links of power transmission chains, a service that is vitally important to extending their working life. These rubber fixtures form a barrier that holds factory applied lubricating grease inside the pin and bushing wear areas. Further, the rubber o-rings prevent dirt and other contaminants from entering inside the chain linkages, where such particles would otherwise cause significant wear.
There are also many chains that have to operate in dirty conditions, and for size or operational reasons cannot be sealed. Examples include chains on farm equipment, bicycles, and chain saws. These chains will necessarily have relatively high rates of wear.
Many oil-based lubricants attract dirt and other particles, eventually forming an abrasive paste that will compound wear on chains. This problem can be reduced by use of a "dry" PTFE spray, which forms a solid film after application and repels both particles and moisture.
Motorcycle chain lubrication
Chains operating at high speeds comparable to those on motorcycles should be used in conjunction with an oil bath. For modern motorcycles this is not possible, and most motorcycle chains run unprotected. Thus, motorcycle chains tend to wear very quickly relative to other applications. They are subject to extreme forces and are exposed to rain, dirt, sand and road salt.
Motorcycle chains are part of the drive train to transmit the motor power to the back wheel. Properly lubricated chains can reach an efficiency of 98% or greater in the transmission. Unlubricated chains will significantly decrease performance and increase chain and sprocket wear.
Two types of aftermarket lubricants are available for motorcycle chains: spray on lubricants and oil drip feed systems.
Spray lubricants may contain wax or PTFE. While these lubricants use tack additives to stay on the chain they can also attract dirt and sand from the road and over time produce a grinding paste that accelerates component wear.
Oil drip feed systems continuously lubricate the chain and use light oil that does not stick to the chain. Research has shown that oil drip feed systems provide the greatest wear protection and greatest power saving.
Variants
If the chain is not being used for a high wear application (for instance if it is just transmitting motion from a hand-operated lever to a control shaft on a machine, or a sliding door on an oven), then one of the simpler types of chain may still be used. Conversely, where extra strength but the smooth drive of a smaller pitch is required, the chain may be "siamesed"; instead of just two rows of plates on the outer sides of the chain, there may be three ("duplex"), four ("triplex"), or more rows of plates running parallel, with bushings and rollers between each adjacent pair, and the same number of rows of teeth running in parallel on the sprockets to match. Timing chains on automotive engines, for example, typically have multiple rows of plates called strands.
Roller chain is made in several sizes, the most common American National Standards Institute (ANSI) standards being 40, 50, 60, and 80. The first digits indicate the pitch of the chain in eighths of an inch, with the last digit being 0 for standard chain, 1 for lightweight chain, and 5 for bushed chain with no rollers. Thus, a chain with half-inch pitch is a No. 40 while a No. 160 sprocket has teeth spaced 2 inches apart, etc. Metric pitches are expressed in sixteenths of an inch; thus a metric No. 8 chain (08B-1) is equivalent to an ANSI No. 40. Most roller chain is made from plain carbon or alloy steel, but stainless steel is used in food processing machinery or other places where lubrication is a problem, and nylon or brass are occasionally seen for the same reason.
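A short Python sketch of this numbering rule (illustrative only; it covers the single-strand numbers discussed above and ignores strand-count designations such as duplex or triplex):

def ansi_roller_chain_pitch_inches(chain_number: int) -> float:
    # The leading digits of an ANSI chain number give the pitch in eighths of an inch.
    return (chain_number // 10) / 8.0

ANSI_LAST_DIGIT = {0: "standard", 1: "lightweight", 5: "bushed, no rollers"}

for number in (40, 41, 50, 60, 80, 160):
    variant = ANSI_LAST_DIGIT.get(number % 10, "other")
    print(number, ansi_roller_chain_pitch_inches(number), variant)
# No. 40 gives 0.5 (half-inch pitch) and No. 160 gives 2.0, matching the examples above.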
Roller chain is ordinarily hooked up using a master link (also known as a "connecting link"), which typically has one pin held by a horseshoe clip rather than friction fit, allowing it to be inserted or removed with simple tools. Chain with a removable link or pin is also known as "cottered chain", which allows the length of the chain to be adjusted. Half links (also known as "offsets") are available and are used to increase the length of the chain by a single roller. Riveted roller chain has the master link (also known as a "connecting link") "riveted" or mashed on the ends. These pins are made to be durable and are not removable.
Horseshoe clip
A horseshoe clip is the U-shaped spring steel fitting that holds the side-plate of the joining (or "master") link formerly essential to complete the loop of a roller chain. The clip method is losing popularity as more and more chains are manufactured as endless loops not intended for maintenance. Modern motorcycles are often fitted with an endless chain but in the increasingly rare circumstances of the chain wearing out and needing to be replaced, a length of chain and a joining link (with horseshoe clip) will be provided as a spare. Changes in motorcycle suspension are tending to make this use less prevalent.
Common on older motorcycles and older bicycles (e.g. those with hub gears) this clip method cannot be used on bicycles fitted with derailleur gears, as the clip will tend to catch on the gear-changers.
In many cases, an endless chain cannot be replaced easily since it is linked into the frame of the machine (this is the case on the traditional bicycle, amongst other places). However, in some cases, a joining link with horseshoe clip cannot be used or is not preferred in the application either. In this case, a "soft link" is used, placed with a chain riveter and relying solely on friction. With modern materials and tools and skilled application this is a permanent repair having almost the same strength and life of the unbroken chain.
Use
Roller chains are used in low- to mid-speed drives at around 600 to 800 feet per minute; however, at higher speeds, around 2,000 to 3,000 feet per minute, V-belts are normally used due to wear and noise issues.
A bicycle chain is a form of roller chain. Bicycle chains may have a master link, or may require a chain tool for removal and installation. A similar but larger and thus stronger chain is used on most motorcycles although it is sometimes replaced by either a toothed belt or a shaft drive, which offer lower noise level and fewer maintenance requirements.
Some automobile engines use roller chains to drive the camshafts. Very high performance engines often use gear drive, and starting in the early 1960s toothed belts were used by some manufacturers.
Chains are also used in forklifts using hydraulic rams as a pulley to raise and lower the carriage; however, these chains are not considered roller chains, but are classified as lift or leaf chains.
Chainsaw cutting chains superficially resemble roller chains but are more closely related to leaf chains. They are driven by projecting drive links which also serve to locate the chain onto the bar.
A perhaps unusual use of a pair of motorcycle chains is in the Harrier jump jet, where a chain drive from an air motor is used to rotate the movable engine nozzles, allowing them to be pointed downwards for hovering flight, or to the rear for normal forward flight, a system known as "thrust vectoring".
Wear
The effect of wear on a roller chain is to increase the pitch (spacing of the links), causing the chain to grow longer. Note that this is due to wear at the pivoting pins and bushes, not from actual stretching of the metal (as does happen to some flexible steel components such as the hand-brake cable of a motor vehicle).
With modern chains it is unusual for a chain (other than that of a bicycle) to wear until it breaks, since a worn chain leads to the rapid onset of wear on the teeth of the sprockets, with ultimate failure being the loss of all the teeth on the sprocket. The sprockets (in particular the smaller of the two) suffer a grinding motion that puts a characteristic hook shape into the driven face of the teeth. (This effect is made worse by a chain improperly tensioned, but is unavoidable no matter what care is taken). The worn teeth (and chain) no longer provides smooth transmission of power and this may become evident from the noise, the vibration or (in car engines using a timing chain) the variation in ignition timing seen with a timing light. Both sprockets and chain should be replaced in these cases, since a new chain on worn sprockets will not last long. However, in less severe cases it may be possible to save the larger of the two sprockets, since it is always the smaller one that suffers the most wear. Only in very light-weight applications such as a bicycle, or in extreme cases of improper tension, will the chain normally jump off the sprockets.
The lengthening due to wear of a chain is calculated by the following formula:
elongation (%) = ((M / (S × P)) - 1) × 100
where:
M = the length of a number of links measured
S = the number of links measured
P = pitch
In industry, it is usual to monitor the movement of the chain tensioner (whether manual or automatic) or the exact length of a drive chain (one rule of thumb is to replace a roller chain which has elongated 3% on an adjustable drive or 1.5% on a fixed-center drive). A simpler method, particularly suitable for the cycle or motorcycle user, is to attempt to pull the chain away from the larger of the two sprockets, whilst ensuring the chain is taut. Any significant movement (e.g. making it possible to see through a gap) probably indicates a chain worn up to and beyond the limit. Sprocket damage will result if the problem is ignored. Sprocket wear cancels this effect, and may mask chain wear.
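A brief Python sketch of the measurement arithmetic and the rule-of-thumb replacement limits described above (function and variable names are illustrative):

def chain_elongation_percent(measured_length: float, links_measured: int, pitch: float) -> float:
    # Percentage by which the measured length M exceeds the nominal length S x P.
    nominal_length = links_measured * pitch
    return (measured_length / nominal_length - 1.0) * 100.0

def due_for_replacement(elongation_percent: float, adjustable_drive: bool) -> bool:
    # Rule of thumb quoted above: 3% elongation on an adjustable drive,
    # 1.5% on a fixed-centre drive.
    limit = 3.0 if adjustable_drive else 1.5
    return elongation_percent >= limit

# Example: 12 links of half-inch-pitch chain measure 6.1 inches instead of 6.0.
wear = chain_elongation_percent(6.1, 12, 0.5)
print(round(wear, 2), due_for_replacement(wear, adjustable_drive=True))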
Bicycle chain wear
The lightweight chain of a bicycle with derailleur gears can snap (or rather, come apart at the side-plates, since it is normal for the "riveting" to fail first) because the pins inside are not cylindrical, they are barrel-shaped. Contact between the pin and the bushing is not the regular line, but a point which allows the chain's pins to work its way through the bushing, and finally the roller, ultimately causing the chain to snap. This form of construction is necessary because the gear-changing action of this form of transmission requires the chain to both bend sideways and to twist, but this can occur with the flexibility of such a narrow chain and relatively large free lengths on a bicycle.
Chain failure is much less of a problem on hub-geared systems since the chainline does not bend, so the parallel pins have a much bigger wearing surface in contact with the bush. The hub-gear system also allows complete enclosure, a great aid to lubrication and protection from grit.
Chain strength
The most common measure of roller chain's strength is tensile strength. Tensile strength represents how much load a chain can withstand under a one-time load before breaking. Just as important as tensile strength is a chain's fatigue strength. The critical factors in a chain's fatigue strength are the quality of steel used to manufacture the chain, the heat treatment of the chain components, the quality of the pitch hole fabrication of the linkplates, and the type of shot plus the intensity of shot peen coverage on the linkplates. Other factors can include the thickness of the linkplates and the design (contour) of the linkplates. The rule of thumb for roller chain operating on a continuous drive is for the chain load to not exceed a mere 1/6 or 1/9 of the chain's tensile strength, depending on the type of master links used (press-fit vs. slip-fit). Roller chains operating on a continuous drive beyond these thresholds can and typically do fail prematurely via linkplate fatigue failure.
The standard minimum ultimate strength of the ANSI B29.1 steel chain is 12,500 × (pitch, in inches)².
X-ring and O-Ring chains greatly decrease wear by means of internal lubricants, increasing chain life. The internal lubrication is inserted by means of a vacuum when riveting the chain together.
Chain standards
Standards organizations (such as ANSI and ISO) maintain standards for design, dimensions, and interchangeability of transmission chains. For example, the following table shows data from ANSI standard B29.1-2011 (precision power transmission roller chains, attachments, and sprockets) developed by the American Society of Mechanical Engineers (ASME). See the references for additional information.
For mnemonic purposes, below is another presentation of key dimensions from the same standard, expressed in fractions of an inch (which was part of the thinking behind the choice of preferred numbers in the ANSI standard):
A typical bicycle chain (for derailleur gears) uses narrow 1/2-inch-pitch chain. The width of the chain is variable, and does not affect the load capacity. The more sprockets at the rear wheel (historically 3–6, nowadays 7–12 sprockets), the narrower the chain. Chains are sold according to the number of speeds they are designed to work with, for example, "10 speed chain". Hub gear or single speed bicycles use 1/2 x 1/8 inch chains, where 1/8 inch refers to the maximum thickness of a sprocket that can be used with the chain.
Typically chains with parallel shaped links have an even number of links, with each narrow link followed by a broad one. Chains built up with a uniform type of link, narrow at one end and broad at the other, can be made with an odd number of links, which can be an advantage for adapting to a special chainwheel distance; on the other hand, such a chain tends to be weaker.
Roller chains made using ISO standard are sometimes called "isochains".
See also
Self-lubricating chain
References
Bibliography
External links
https://www.leonardodigitale.com/en/browse/Codex-atlanticus/0987-r/
The Complete Guide to Chain
Chain drives
Mechanical power transmission
Mechanical power control | Roller chain | [
"Physics"
] | 3,574 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical power control"
] |
569,881 | https://en.wikipedia.org/wiki/Semi-arid%20climate | A semi-arid climate, semi-desert climate, or steppe climate is a dry climate sub-type. It is located on regions that receive precipitation below potential evapotranspiration, but not as low as a desert climate. There are different kinds of semi-arid climates, depending on variables such as temperature, and they give rise to different biomes.
Defining attributes of semi-arid climates
A more precise definition is given by the Köppen climate classification, which treats steppe climates (BSh and BSk) as intermediates between desert climates (BW) and humid climates (A, C, D) in ecological characteristics and agricultural potential. Semi-arid climates tend to support short, thorny or scrubby vegetation and are usually dominated by either grasses or shrubs as they usually cannot support forests.
To determine if a location has a semi-arid climate, the precipitation threshold must first be determined. The method used to find the precipitation threshold (in millimeters):
multiply by 20 the average annual temperature in degrees Celsius and then
add 280 if at least 70% of the total precipitation falls in the summer half of the year (April–September in the northern hemisphere, October–March in the southern hemisphere)
add 140 if 30–70% of the total precipitation falls in the summer half of the year
add nothing if less than 30% of the total precipitation falls in the summer half of the year
If the area's annual precipitation in millimeters is less than the threshold but more than half of the threshold, it is classified as a BS (steppe, semi-desert, or semi-arid climate).
Furthermore, to delineate hot semi-arid climates from cold semi-arid climates, a mean annual temperature of 18 °C is used as an isotherm. A location with a BS-type climate is classified as hot semi-arid (BSh) if its mean temperature is above this isotherm, and cold semi-arid (BSk) if not.
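The procedure lends itself to a short Python sketch (illustrative only; the caller must supply the fraction of annual precipitation that falls in the summer half of the year):

def koppen_dry_class(mean_annual_temp_c: float,
                     annual_precip_mm: float,
                     summer_precip_fraction: float) -> str:
    # Precipitation threshold: 20 x mean annual temperature, plus 280 if at least
    # 70% of precipitation falls in the summer half-year, plus 140 if 30-70% does.
    threshold = 20.0 * mean_annual_temp_c
    if summer_precip_fraction >= 0.70:
        threshold += 280.0
    elif summer_precip_fraction >= 0.30:
        threshold += 140.0

    if annual_precip_mm < threshold / 2.0:
        return "BW (desert)"
    if annual_precip_mm < threshold:
        # Hot (BSh) above the 18 degree C isotherm, cold (BSk) otherwise.
        return "BSh" if mean_annual_temp_c > 18.0 else "BSk"
    return "not a dry (B) climate"

# Example: mean annual temperature 19 C, 450 mm of precipitation, 60% in summer.
print(koppen_dry_class(19.0, 450.0, 0.60))  # prints BSh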
Hot semi-arid climates
Hot semi-arid climates (type "BSh") tend to be located from the high teens to mid-30s latitudes of the tropics and subtropics, typically in proximity to regions with a tropical savanna climate or a humid subtropical climate. These climates tend to have hot, or sometimes extremely hot, summers and warm to cool winters, with some to minimal precipitation. Hot semi-arid climates are most commonly found around the fringes of subtropical deserts.
Hot semi-arid climates are most commonly found in Africa, Australia, and South Asia. In Australia, a large portion of the Outback surrounding the central desert regions lies within the hot semi-arid climate region. In South Asia, both India and parts of Pakistan experience the seasonal effects of monsoons and feature short but well-defined wet seasons, but are not sufficiently wet overall to qualify as either a tropical savanna or a humid subtropical climate.
Hot semi-arid climates can be also found in parts of North America, such as most of northern Mexico, the ABC Islands, the rain shadows of Hispaniola's mountain ranges in the Dominican Republic and Haiti, parts of the Southwestern United States including California's Central Valley, and sections of South America such as the sertão, the Gran Chaco, and the poleward side of the arid deserts, where they typically feature a Mediterranean precipitation pattern, with generally rainless summers and wetter winters. They are also found in a few areas of Europe surrounding the Mediterranean Basin. In Europe, BSh climates are predominantly found in southeastern Spain. They can also be found in parts of southern Greece, in marginal areas of Thessaloniki and Chalkidiki in northern Greece, in most of Formentera, in marginal areas of Ibiza, and in marginal areas of Italy such as Sicily, Sardinia and Lampedusa.
Cold semi-arid climates
Cold semi-arid climates (type "BSk") tend to be located in elevated portions of temperate zones generally from the mid-30s to low 50s latitudes, typically bordering a humid continental climate or a Mediterranean climate. They are also typically found in continental interiors some distance from large bodies of water. Cold semi-arid climates usually feature warm to hot dry summers, though their summers are typically not quite as hot as those of hot semi-arid climates. Unlike hot semi-arid climates, areas with cold semi-arid climates tend to have cold and possibly freezing winters. These areas usually see some snowfall during the winter, though snowfall is much lower than at locations at similar latitudes with more humid climates.
Areas featuring cold semi-arid climates tend to have higher elevations than areas with hot semi-arid climates, and tend to feature major temperature swings between day and night, sometimes by as much as 20 °C (36 °F) or more. These large diurnal temperature variations are seldom seen in hot semi-arid climates. Cold semi-arid climates at higher latitudes tend to have dry winters and wetter summers, while cold semi-arid climates at lower latitudes tend to have precipitation patterns more akin to Mediterranean climates, with dry summers, relatively wet winters, and even wetter springs and autumns.
Cold semi-arid climates are most commonly found in central Asia and the western US, as well as the Middle East and other parts of Asia. However, they can also be found in Northern Africa, South Africa, sections of South America, sections of interior southern Australia (e.g. Kalgoorlie and Mildura) and inland Spain.
Charts of selected cities
Hot semi-arid
Cold semi-arid
See also
Continental climate
Dry climate
Desert climate
Dust Bowl (an era of devastating dust storms, mostly in the 1930s, in semi-arid areas on the Great Plains of the United States and Prairies of Canada)
Goyder's Line (a boundary marking the limit of semi-arid climates in the Australian state of South Australia)
Köppen climate classification
Palliser's Triangle (semi-arid area of Canada)
Ustic (Soil Moisture Regime)
Wave height
References
External links
Grasslands
Köppen climate types
Plains
Prairies
Climate of Africa
Climate of Asia
Climate of South America
Climate of North America
Climate of Australia
Climate of Europe | Semi-arid climate | [
"Biology"
] | 1,251 | [
"Grasslands",
"Ecosystems"
] |
569,945 | https://en.wikipedia.org/wiki/Trident%20Ploughshares | Trident Ploughshares (originally named Trident Ploughshares 2000) is an activist anti-nuclear weapons group, founded in 1998 with the aim of "beating swords into ploughshares" (taken from the Book of Isaiah). This is specifically by attempting to disarm the UK Trident nuclear weapons system, in a non-violent manner. The original group consisted of six core activists, including Angie Zelter, founder of the non-violent Snowball Campaign.
Based in Edinburgh, Scotland, the group is a partner in the International Campaign to Abolish Nuclear Weapons. It has attracted media attention for both its non-violent "disarmament" direct actions, and mass civil disobedience at the gates of Royal Navy establishments with connections to the United Kingdom's Trident weapons systems.
It was the recipient of the Right Livelihood Award in 2001 "for providing a practical model of principled, transparent and non-violent direct action dedicated to ridding the world of nuclear weapons."
Trident nuclear missile system and international law
The foundation of Trident Ploughshares' various disarmament actions is the 1996 Advisory Opinion of the International Court of Justice, Legality of the Threat or Use of Nuclear Weapons, in which it found that 'the threat or use of nuclear weapons would generally be contrary to the rules of international law applicable in armed conflict'.
In addition to this, Trident Ploughshares also argues that, since the British government is not actively negotiating nuclear disarmament and is actively considering upgrading the UK Trident programme, it is in violation of the Non-Proliferation Treaty of 1968.
Trident Ploughshares activists argue that since the British government has not responded to their various communications regarding the legal status of the Trident nuclear missile system, they must take individual responsibility for disarmament.
Previous Ploughshares actions
Prior to the setting up of Trident Ploughshares, there were other actions carried out by members of the Ploughshares Movement, a Christian peace group.
On 29 January 1996, Andrea Needham, Joanna Wilson and Lotta Kronlid - known as the 'Ploughshares Four' - broke into the British Aerospace factory in Lancashire and caused £1.7m worth of damage to BAe Hawk number ZH955, a training aircraft that was to have been supplied along with 23 other jets to the New Order regime of Indonesia. Angie Zelter was later arrested as she announced her intention to further damage the planes.
Accused of causing, and conspiring to cause, criminal damage, with a maximum ten-year sentence, they argued that what they did was not a crime but that they "were acting to prevent British Aerospace and the British Government from aiding and abetting genocide". They were acquitted by the jury.
Protests and criminal trials
The second major disturbance was on 27 April 2001, when three female members of the campaign boarded the barge Maytime in Loch Goil and damaged and removed equipment. After being charged with maliciously damaging the vessel, stealing two inflatable life rafts and damaging equipment in an on-board laboratory, they were acquitted at the subsequent trial in Greenock; the case was later referred to the Scottish High Court in the Lord Advocate's Reference 2001. Although under Scots law the High Court did not have the power to overturn the acquittals, its judgement was that the basis of the defence case should not have been admissible.
In May 2005 the group squatted on Drake's Island, a privately owned island in Plymouth Sound declaring it a "nuclear free state" in order to "highlight Britain's hypocrisy over the non-proliferation treaty talks being held in New York".
See also
Anti-nuclear movement in the United Kingdom
Pitstop Ploughshares
References
External links
Main Trident Ploughshares website
Faslane 365
Nippon Myohoji: friends of Trident Ploughshares
The official site of the Right Livelihood Awards
The Loch Long Monster Documentary film on Trident Ploughshares
Anti-nuclear protest at dockyard
International Campaign to Abolish Nuclear Weapons
Anti-nuclear organizations
Trident (UK nuclear programme)
Direct action
Anti-nuclear movement in Scotland
1998 establishments in Scotland
Organisations based in Edinburgh
Organizations established in 1998 | Trident Ploughshares | [
"Engineering"
] | 853 | [
"Nuclear organizations",
"Anti-nuclear organizations"
] |
570,062 | https://en.wikipedia.org/wiki/Wolf%20Szmuness | Wolf Szmuness (March 12, 1919 – June 6, 1982) was a Polish-born epidemiologist who immigrated to and worked in the United States. He conducted research at the New York Blood Center and, from 1973, he was director of the Center's epidemiology laboratory. He designed and conducted the trials for the first vaccine to prove effective against hepatitis B.
European beginnings
Szmuness was born in Warsaw, Poland on 12 March 1919. He studied medicine in Italy, but returned to be with his family around the time of the German invasion of Poland in 1939. As Germany and the Soviet Union occupied Poland, Szmuness was separated from his family, who were later killed by the Germans. Trapped in the Soviet-occupied part of Poland, Szmuness traveled eastward to escape the advancing Germans. He asked the Soviets to let him fight the Germans but was instead sent to Siberia as a prisoner.
Following a year of hard labour in the prison camp, Szmuness was appointed head of sanitary conditions. He later became the head epidemiologist in the local district. After release from detention in 1946, Szmuness completed his medical education at the University of Tomsk in Siberia, and earned a degree in epidemiology from the University of Kharkiv.
Szmuness married a Russian woman, Maya, and in 1959 was allowed to return to Poland. There, he continued his education at the University of Lublin and worked as an epidemiologist in municipal and regional health departments.
Szmuness's colleague Aaron Kellner reports that the Polish authorities granted Szmuness a vacation at a rest home, where he shared a room with a Catholic priest, Karol Wojtyła, and began a longtime correspondence with him. Karol Wojtyła would later become Pope John Paul II.
Emigration and life in the United States
In 1969, Szmuness, his wife and their daughter Helena were permitted to attend a scientific meeting in Italy. Upon arriving, Szmuness defected and immigrated to New York City in the United States for religious and political reasons. Through the intervention of Walsh McDermott, a professor of public health at New York Hospital-Cornell Medical Center, Szmuness was hired by the New York City Blood Center. Because doctors from abroad are not usually accredited in the United States, Szmuness began as a laboratory technician, but his skills were quickly recognized, and, within two years, Szmuness headed his own lab. A separate department of epidemiology at the Center was created for him, and he also became a full professor at the Columbia University School of Public Health. According to Aaron Kellner, President of the Center, within five years of arriving in New York, Szmuness became "an international figure in epidemiology and the field of hepatitis".
Szmuness died of lung cancer in 1982.
Hepatitis B
Szmuness first became interested in the hepatitis B virus when his wife, Maya, was nearly killed by the liver disease caused by the virus, which she contracted through a blood transfusion. In New York, Szmuness investigated the natural history of hepatitis B. A vaccine was produced in the late 1970s, and Szmuness designed and conducted vaccine trials to determine its efficacy. Over 1000 male homosexuals participated in the trials; they were chosen as participants because they "had been found to have a risk of developing hepatitis B that is 10 times greater than that for the population in general".
AIDS Theory
A highly controversial theory suggested that HIV-contaminated Hepatitis B vaccine trials in 1978 were responsible for the original spread of AIDS in the United States by infecting gay men in New York City with HIV. Evidence as to the presence of HIV in Szmuness's lab, or a mechanism for this introduction have not been offered, and scientific data strongly suggests that HIV instead first came to the United States with Haitian immigrants around 1969, many years prior to trials conducted on the Hepatitis B vaccine.
References
Deaths from lung cancer
1919 births
1982 deaths
Sexual orientation and medicine
AIDS origin hypotheses
Polish emigrants to the United States | Wolf Szmuness | [
"Biology"
] | 854 | [
"Biological hypotheses",
"AIDS origin hypotheses"
] |
570,103 | https://en.wikipedia.org/wiki/1986%20California%20Proposition%2065 | Proposition 65 (formally titled The Safe Drinking Water and Toxic Enforcement Act of 1986, and also referred to as Prop 65) is a California law passed by direct voter initiative in 1986 by a 63%–37% vote. Its goals are to protect drinking water sources from toxic substances that cause cancer or birth defects and to reduce or eliminate exposures to those chemicals generally, such as in consumer products, by requiring warnings in advance of those exposures, with the intended goal being that companies choose to reformulate their products without the substances rather than simply providing notice of such substances in their product.
The proposition
In 1986, political strategists including Tom Hayden and his wife, environmental activist Jane Fonda, thought that an initiative addressing toxic pollutants would bring more left-leaning voters to the polls to help Democrat Tom Bradley in his gubernatorial race against incumbent Republican George Deukmejian, who had vetoed several pollution cleanup bills. Hayden and others funded the initiative, and found three environmental attorneys to write it, including David Roe, who did not expect it to pass. Voters passed it 2–1, but did not elect Bradley.
The act states: "no person in the course of doing business shall knowingly discharge or release a chemical known to the state to cause cancer or reproductive toxicity into water" or into anywhere that feeds a drinking water source. It also says that "no person in the course of doing business shall knowingly and intentionally expose" anyone to those chemicals "without first giving clear and reasonable warning."
Proposition 65 is administered by CalEPA's California Office of Environmental Health Hazard Assessment (OEHHA). Proposition 65 regulates substances officially listed by California as causing cancer or birth defects or other reproductive harm, in two ways. The first statutory requirement of Proposition 65 prohibits businesses from knowingly discharging listed substances into drinking water sources, or onto land where the substances can pass into drinking water sources. The second prohibits businesses from knowingly exposing individuals to listed substances without providing a clear and reasonable warning. The requirements apply to amounts above what would present a 1-in-100,000 risk of cancer assuming lifetime exposure (for carcinogens), or above one thousandth (1/1000) of the no observable effect level (for reproductive toxins).
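As a rough illustration of how the reproductive-toxicant threshold described above works, the sketch below checks a daily exposure against 1/1000 of the no-observable-effect level. The function name and the numbers are hypothetical; this is an illustration of the arithmetic only, not regulatory guidance.

```python
def prop65_warning_required(daily_exposure_ug, noel_ug_per_day):
    """Return True if a warning would be needed for a reproductive toxicant,
    using the 1/1000-of-no-observable-effect-level rule described above."""
    maximum_allowable_dose = noel_ug_per_day / 1000.0
    return daily_exposure_ug > maximum_allowable_dose

# Hypothetical example: a product delivering 2 micrograms/day of a listed chemical
# whose no-observable-effect level is 1,000 micrograms/day.
print(prop65_warning_required(daily_exposure_ug=2.0, noel_ug_per_day=1000.0))  # True (2 > 1)
```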
An official list of substances covered by Proposition 65 is maintained and made publicly available. Chemicals are added to or removed from the official list based on California's analysis of current scientific information. Each listed substance is shown with its known risk factors, a unique CAS chemical classification number, the date it was listed and, where applicable, whether it has been delisted. As a result of lawsuits, the list now also contains substances known only to cause cancer in animals, and contains over 900 substances.
Proposition 65 has had limited success in reducing exposures to known toxic chemicals, especially in consumer products, and its successes illustrate gaps in the effectiveness of federal toxics laws (see below). It remains politically controversial even after more than 30 years (see below), in large part because it, in effect, requires businesses to know the scientific safety level for specific cancer- and birth defect-causing chemicals that those businesses are intentionally exposing members of the public to, unless government has already set those levels. According to the California Environmental Protection Agency, "Proposition 65 has... increased public awareness about the adverse effects of exposures to listed chemicals.... [and] provided an incentive for manufacturers to remove listed chemicals from their products.... Although Proposition 65 has benefited Californians, it has come at a cost for companies doing business in the state."
Enforcement
Enforcement is carried out through civil lawsuits against Proposition 65 violators. These lawsuits may be brought by the California Attorney General, any district attorney, or certain city attorneys (those in cities with a population exceeding 750,000). Lawsuits may also be brought by private parties "acting in the public interest," but only after providing notice of the alleged violation to the Attorney General, the appropriate district attorney and city attorney, and the business accused of the violation.
A Proposition 65 Notice of Violation must provide adequate information to allow the recipient to assess the nature of the alleged violation. A notice must comply with the information and procedural requirements specified in regulations. A private party may not pursue an enforcement action directly under Proposition 65 if one of the government officials noted above initiates an action within sixty days of the notice. After 2003, private enforcers must also serve a certificate of merit (statement of expert consultation(s) supporting belief of reasonable and meritorious private action) as a means of preventing frivolous enforcement actions.
A business found to be in violation of Proposition 65 is subject to civil penalties of up to $2,500 per day for each violation. In addition, the business may be ordered by a court of law to stop committing the violation. Other penalties may apply, including unfair business practices violations as limited under California Proposition 64 (2004).
From 1988 (when the initiative went into effect) until 2020, there have been more than 30,000 violation claims, targeting over 100,000 products, filed by citizen prosecutors. From 2000 to 2020, businesses paid more than $370 million in settlements, with almost three quarters of that amount going to attorneys, and the majority of that going to a small group of perpetual litigants. One example cited by the Los Angeles Times is that of the for-profit company "Safe Products for Californians", run by Kenneth Moore and his lawyer ex-wife Tanya Moore, who received almost $700,000 in legal fees from over 100 lawsuits (half against Amazon sellers) in which Kenneth was her only client.
If a company's product contains a chemical on the list, but the intended use of the product would not expose the customer to the hazards found by scientific research (for example, a topical soap that contains a chemical known to cause cancer when eaten), the burden is placed on the company to prove that its product will not cause harm if it chooses not to label the product. Many companies therefore find it less expensive to simply add the Prop 65 warning to their products, regardless of the danger to the consumer.
Accomplishments
Proposition 65 has caused large numbers of consumer products to be reformulated to remove toxic ingredients, as documented in settlements of enforcement actions.
Proposition 65 has also caused government and industry to cooperate on scientific issues of chemical risk, resulting in risk-based standards for 282 toxic chemicals in the law's first few years of operation, an accomplishment described by a Governor's Task Force as "100 years of progress [by federal standards] in the areas of hazard identification, risk assessment, and exposure assessment." The existence of clear numerical standards has significantly assisted efforts to comply with the law, and to enforce it in situations of non-compliance.
Warning label
The following warning language is standard on products sold in California if they contain chemicals on the Proposition 65 list and the amount of exposure caused by the product is not within defined safety limits:
WARNING: This product contains chemicals known to the State of California to cause cancer and birth defects or other reproductive harm.
The wording can be changed as necessary, as long as it communicates that the chemical in question is known to the state to cause cancer, or birth defects or other reproductive harm. For exposures from other sources, such as car exhaust in a parking garage, a standard sign might read: "This area contains chemicals known to the State of California to cause cancer, or birth defects or other reproductive harm".
Controversy and abuse
Political controversy over the law, including industry attempts to have it preempted by federal law, has died down, although preemption bills continue to be introduced in the U.S. Congress, most recently H.R. 6022 (introduced June 6, 2018). However, enforcement actions remain controversial. Many Proposition 65 complaints are filed on behalf of straw man plaintiffs by private attorneys, some of whose businesses are built entirely on filing Proposition 65 lawsuits.
The law has also been criticized for causing "over-warning" or "meaningless warnings," and this risk has been recognized by a California court. There is no penalty for posting an unnecessary warning sign, and to the extent that warnings are vague or overused, they may not communicate much information to the end user. Many companies now routinely attach Prop 65 warning labels to any product of theirs that they think might possibly contain one of the 900 listed chemicals without testing to see whether the chemical is really present in their product and without reformulating their product, because it is cheaper to do so than to run the risk of being sued by Prop 65 enforcers.
Examples of warning signs can be found at gas stations, hardware suppliers, grocery stores, drug stores, medical facilities, parking garages, hotels, apartment complexes, retail stores, banks, and restaurants, warning about hazardous chemicals in items for sale, or present in the immediate environment. Utility companies mail a Prop 65 notice to all customers each year to warn them about exposures to natural gas, petroleum products and sandblasting.
Abuse of enforcement lawsuits has also been a consistent theme of Proposition 65 opponents, who criticize the motives of citizen enforcers. Industry critics and corporate defense lawyers charge that Proposition 65 is "a clever and irritating mechanism used by litigious NGOs and others to publicly spank politically incorrect opponents ranging from the American gun industry to seafood retailers, etc." Critics also note that the majority of settlement money collected from businesses has been used to pay plaintiffs' attorney fees. Businesses paid over $14.58 million in attorney fees and costs in 2012, 71% of all settlement money paid.
Because the law allows private citizens to sue and collect penalties from any business violating the law, lawyers and law firms have been criticized for using Proposition 65 to force monetary settlements out of Californian businesses. In the past the Attorney General's office has cited several instances of settlements where plaintiff attorneys received significant awards without providing for environmental benefit to the people of California, resulting in a requirement that the Attorney General's office must approve any pre-trial Proposition 65 settlement.
Recent reform efforts
In the 2013–14 session of the California State Assembly, a consensus bill, AB 227, introduced by Assemblyman Mike Gatto (D-Los Angeles), effectively offered to protect certain small companies in specified circumstances from the threat of citizen enforcement lawsuits, by providing them with a streamlined compliance procedure and limited penalties. The bill was passed unanimously, with support from Proposition 65 proponents and supporters, and was enacted on October 10, 2013.
Following the success of AB 227, Gov. Jerry Brown announced on May 7, 2013, that his office plans to introduce a proposal to reform Proposition 65. In 2017, Brown advocated for more reform to Prop 65 to reduce "frivolous shakedown lawsuits."
Reformulation of consumer goods
Alleged violators
The list below includes some of the named Fortune 500 companies that have been sued, or have received a notice of intent to sue, for allegedly failing to provide the Prop 65 warning on one or more of their products. The list includes, but is not limited to:
Amazon
CVS
Walmart
Target
Walgreens
Disney
Dollar General
Whole Foods
McDonald's (settled for US$3 million in 2002)
In most cases, such as McDonald's, Walgreens, and Disney, the listed chemicals have been removed. "As of August 2019, Amazon faces over 1,000 Prop 65 'Intent to Sue' notices." E-commerce marketplaces, like Amazon, require their sellers to disclose if their products contain Prop 65 chemicals. However, these companies are currently under fire for some of their sellers allegedly not disclosing Prop 65 chemicals that are in their brands.
List of chemicals
Proposition 65 requires that the governor revise and republish at least once per year the list of chemicals known to the State to cause cancer or reproductive toxicity. It also requires substances identified by the International Agency for Research on Cancer (IARC) as causing cancer in humans or laboratory animals to be added to the list.
There also exists a "Safe harbor List" with tolerance thresholds for some of the chemicals named in the Proposition 65 list. Concentrations under the tolerance threshold do not legally require the warning label.
See also
California ballot proposition
Environmentalism
Pollution
Toxicity
Notes
References
External links
Official Proposition 65 website
Official Proposition 65 list of substances
California Attorney General – Proposition 65 regulations
Forbes.com -Toxic Avengers, Morse Mehrban gets rich from Proposition 65
1986 in the environment
65
Environment of California
Initiatives in the United States
Regulation of chemicals
Regulation in the United States
United States state environmental legislation | 1986 California Proposition 65 | [
"Chemistry"
] | 2,538 | [] |
570,111 | https://en.wikipedia.org/wiki/Electronic%20control%20unit | An electronic control unit (ECU), also known as an electronic control module (ECM), is an embedded system in automotive electronics that controls one or more of the electrical systems or subsystems in a car or other motor vehicle.
Modern vehicles have many ECUs, and these can include some or all of the following: engine control module (ECM), powertrain control module (PCM), transmission control module (TCM), brake control module (BCM or EBCM), central control module (CCM), central timing module (CTM), general electronic module (GEM), body control module (BCM), and suspension control module (SCM). These ECUs together are sometimes referred to collectively as the car's computer though technically they are all separate computers, not a single one. Sometimes an assembly incorporates several individual control modules (a PCM often controls both the engine and the transmission).
Some modern motor vehicles have up to 150 ECUs. Embedded software in ECUs continues to increase in line count, complexity, and sophistication. Managing the increasing complexity and number of ECUs in a vehicle has become a key challenge for original equipment manufacturers (OEMs).
Types
Generic industry controller naming: naming in which the controller's name indicates the system that the controller is responsible for controlling.
Generic powertrain: the generic powertrain controller pertains to a vehicle's emission system and is the only regulated controller name.
Other controllers: all other controller names are decided upon by the individual OEM. The engine controller may have several different names, such as "DME", "Enhanced Powertrain", "PGM-FI" and many others.
Door control unit (DCU)
Engine control unit (ECU): not to be confused with electronic control unit, the generic term for all these devices
Electric power steering control unit (PSCU): generally this will be integrated into the EPS power pack.
Human–machine interface (HMI)
Powertrain control module (PCM): sometimes the functions of the engine control unit and transmission control module (TCM) are combined into a single unit called the powertrain control module.
Seat control unit
Speed control unit (SCU)
Telematic control unit (TCU)
Transmission control module (TCM)
Brake control module (BCM; ABS or ESC)
Battery management system (BMS)
Key elements
Core
Microcontroller
Memory
SRAM
EEPROM
Flash
Inputs
Supply Voltage and Ground
Digital inputs
Analog inputs
Outputs
Actuator drivers (e.g. injectors, relays, valves)
H bridge drivers for servomotors
Logic outputs
Communication links
Housing
Bus Transceivers, e.g. for K-Line, CAN, Ethernet
Embedded Software
Boot Loader
Metadata for ECU and Software Identification, Version Management, Checksums
Functional Software Routines
Configuration Data
Design and development
The development of an ECU involves both the hardware and software required to perform the functions expected from that particular module. Automotive ECUs are developed following the V-model. In recent years the trend has been to dedicate a significant amount of time and effort to developing safe modules by following standards such as ISO 26262. It is rare for a module to be developed entirely from scratch; the design is generally iterative, with improvements made to both the hardware and software. The development of most ECUs is carried out by Tier 1 suppliers based on specifications provided by the OEM.
Testing and validation
As part of the development cycle, manufacturers perform detailed FMEAs and other failure analyses to catch failure modes that can lead to unsafe conditions or driver annoyance. Extensive testing and validation activities are carried out as part of the Production part approval process to gain the confidence of the hardware and software. On-board diagnostics or OBD help provide specific data related to which system or component failed or caused a failure during run time and help perform repairs.
Modifications
Some people may wish to modify their ECU so as to add or change functionality. However, modern ECUs come equipped with protection locks to prevent users from modifying the circuitry or exchanging chips. These protection locks are a form of digital rights management (DRM), the circumvention of which is illegal in certain jurisdictions. In the United States, for example, the DMCA criminalizes circumvention of DRM, though an exemption applies that allows circumvention by the owner of a motorized land vehicle if it is required to allow diagnosis, repair or lawful modification (i.e. modification that does not violate applicable law, such as emissions regulations).
References
Power control
Engine technology
Fuel injection systems
Engine control systems
Engine components
Onboard computers
Auto parts | Electronic control unit | [
"Physics",
"Technology",
"Engineering"
] | 952 | [
"Self-driving cars",
"Physical quantities",
"Engines",
"Engine technology",
"Power (physics)",
"Automotive engineering",
"Engine components",
"Power control"
] |
570,140 | https://en.wikipedia.org/wiki/Infinite%20impulse%20response | Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response that does not become exactly zero past a certain point but continues indefinitely. This is in contrast to a finite impulse response (FIR) system, in which the impulse response does become exactly zero at times for some finite , thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as IIR systems or IIR filters.
In practice, the impulse response, even of IIR systems, usually approaches zero and can be neglected past a certain point. However the physical systems which give rise to IIR or FIR responses are dissimilar, and therein lies the importance of the distinction. For instance, analog electronic filters composed of resistors, capacitors, and/or inductors (and perhaps linear amplifiers) are generally IIR filters. On the other hand, discrete-time filters (usually digital filters) based on a tapped delay line employing no feedback are necessarily FIR filters. The capacitors (or inductors) in the analog filter have a "memory" and their internal state never completely relaxes following an impulse (assuming the classical model of capacitors and inductors where quantum effects are ignored). But in the latter case, after an impulse has reached the end of the tapped delay line, the system has no further memory of that impulse and has returned to its initial state; its impulse response beyond that point is exactly zero.
Implementation and design
Although almost all analog electronic filters are IIR, digital filters may be either IIR or FIR. The presence of feedback in the topology of a discrete-time filter (such as the block diagram shown below) generally creates an IIR response. The z domain transfer function of an IIR filter contains a non-trivial denominator, describing those feedback terms. The transfer function of an FIR filter, on the other hand, has only a numerator as expressed in the general form derived below. All of the feedback coefficients (the a_j terms with j ≥ 1) are zero and the filter has no finite poles.
The transfer functions pertaining to IIR analog electronic filters have been extensively studied and optimized for their amplitude and phase characteristics. These continuous-time filter functions are described in the Laplace domain. Desired solutions can be transferred to the case of discrete-time filters whose transfer functions are expressed in the z domain, through the use of certain mathematical techniques such as the bilinear transform, impulse invariance, or pole–zero matching method. Thus digital IIR filters can be based on well-known solutions for analog filters such as the Chebyshev filter, Butterworth filter, and elliptic filter, inheriting the characteristics of those solutions.
Transfer function derivation
Digital filters are often described and implemented in terms of the difference equation that defines how the output signal is related to the input signal:

y[n] = ( b_0·x[n] + b_1·x[n−1] + … + b_P·x[n−P] − a_1·y[n−1] − a_2·y[n−2] − … − a_Q·y[n−Q] ) / a_0

where:
P is the feedforward filter order
b_0, …, b_P are the feedforward filter coefficients
Q is the feedback filter order
a_1, …, a_Q are the feedback filter coefficients (a_0 is the normalizing coefficient)
x[n] is the input signal
y[n] is the output signal.
A more condensed form of the difference equation is:

y[n] = (1/a_0) · ( Σ_{i=0}^{P} b_i·x[n−i] − Σ_{j=1}^{Q} a_j·y[n−j] )
To find the transfer function of the filter, we first take the Z-transform of each side of the above equation to obtain:

( Σ_{j=0}^{Q} a_j·z^(−j) ) · Y(z) = ( Σ_{i=0}^{P} b_i·z^(−i) ) · X(z)

After rearranging:

Y(z) / X(z) = ( Σ_{i=0}^{P} b_i·z^(−i) ) / ( Σ_{j=0}^{Q} a_j·z^(−j) )

We then define the transfer function to be:

H(z) = B(z) / A(z) = ( Σ_{i=0}^{P} b_i·z^(−i) ) / ( Σ_{j=0}^{Q} a_j·z^(−j) )
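To make the difference equation concrete, here is a minimal pure-Python sketch of a direct-form implementation. It is an illustration only; in practice a library routine such as SciPy's lfilter would normally be used.

```python
def iir_filter(b, a, x):
    """Apply an IIR filter with feedforward coefficients b and feedback
    coefficients a (a[0] must be nonzero) to the input sequence x."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        # Feedforward (numerator) terms: b_i * x[n - i]
        for i, bi in enumerate(b):
            if n - i >= 0:
                acc += bi * x[n - i]
        # Feedback (denominator) terms: -a_j * y[n - j], for j >= 1
        for j, aj in enumerate(a[1:], start=1):
            if n - j >= 0:
                acc -= aj * y[n - j]
        y.append(acc / a[0])
    return y

# First-order example y[n] = 0.1*x[n] + 0.9*y[n-1]: the impulse response
# decays as 0.1 * 0.9^n and never becomes exactly zero.
print(iir_filter(b=[0.1], a=[1.0, -0.9], x=[1.0] + [0.0] * 9))
```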
Stability
The transfer function allows one to judge whether or not a system is bounded-input, bounded-output (BIBO) stable. To be specific, the BIBO stability criterion requires that the ROC of the system includes the unit circle. For example, for a causal system, all poles of the transfer function have to have an absolute value smaller than one. In other words, all poles must be located within a unit circle in the -plane.
The poles are defined as the values of z which make the denominator of H(z) equal to zero:

Σ_{j=0}^{Q} a_j·z^(−j) = 0

Clearly, if a_j ≠ 0 for some j ≥ 1, then the poles are not located at the origin of the z-plane. This is in contrast to the FIR filter, where all poles are located at the origin and the filter is therefore always stable.
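A quick numerical way to check this condition, assuming NumPy is available and the feedback coefficients are supplied as [a_0, a_1, ..., a_Q]:

```python
import numpy as np

def is_stable(a):
    """Return True if all poles of 1/A(z) lie strictly inside the unit circle.
    a -- feedback coefficients [a_0, a_1, ..., a_Q] of A(z) = sum a_j z^(-j)."""
    # Multiplying A(z) by z^Q turns it into an ordinary polynomial in z with the
    # same roots, so the coefficient list can be passed to np.roots directly.
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable([1.0, -0.9]))   # pole at z = 0.9 -> True (stable)
print(is_stable([1.0, -1.1]))   # pole at z = 1.1 -> False (unstable)
```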
IIR filters are sometimes preferred over FIR filters because an IIR filter can achieve a much sharper transition region roll-off than an FIR filter of the same order.
Example
Let the transfer function of a discrete-time filter be given by:

H(z) = B(z) / A(z) = 1 / (1 − a·z^(−1))

governed by the parameter a, a real number with 0 < |a| < 1. H(z) is stable and causal with a pole at z = a. The time-domain impulse response can be shown to be given by:

h(n) = a^n · u(n)

where u(n) is the unit step function. It can be seen that h(n) is non-zero for all n ≥ 0, thus an impulse response which continues infinitely.
Advantages and disadvantages
The main advantage digital IIR filters have over FIR filters is their efficiency in implementation, in order to meet a specification in terms of passband, stopband, ripple, and/or roll-off. Such a set of specifications can be accomplished with a lower order (Q in the above formulae) IIR filter than would be required for an FIR filter meeting the same requirements. If implemented in a signal processor, this implies a correspondingly fewer number of calculations per time step; the computational savings is often of a rather large factor.
On the other hand, FIR filters can be easier to design, for instance, to match a particular frequency response requirement. This is particularly true when the requirement is not one of the usual cases (high-pass, low-pass, notch, etc.) which have been studied and optimized for analog filters. Also FIR filters can be easily made to be linear phase (constant group delay vs frequency)—a property that is not easily met using IIR filters and then only as an approximation (for instance with the Bessel filter). Another issue regarding digital IIR filters is the potential for limit cycle behavior when idle, due to the feedback system in conjunction with quantization.
Design Methods
Impulse Invariance
Impulse invariance is a technique for designing discrete-time infinite-impulse-response (IIR) filters from continuous-time filters in which the impulse response of the continuous-time system is sampled to produce the impulse response of the discrete-time system.
Impulse invariance is one of the commonly used methods to meet the two basic requirements of the mapping from the s-plane to the z-plane. It is obtained by solving for the T(z) that gives the same output values at the sampling instants as the analog filter, and it is exact only when the input is an impulse.
Note that for all other inputs the output of a digital filter generated by this method is only an approximation of the sampled analog output; impulse inputs are reproduced very accurately. This is the simplest IIR filter design method. It is the most accurate at low frequencies, so it is usually used in low-pass filters.
For the Laplace transform and the z-transform, the transformed output is simply the transformed input multiplied by the corresponding transfer function, T(s) or T(z): Y(s) and Y(z) are the transformed outputs for inputs X(s) and X(z), respectively.
When the Laplace transform or z-transform is applied to the unit impulse, the result is 1. Hence, the transformed outputs are simply

Y(s) = T(s) and Y(z) = T(z)

The time-domain output of the analog filter is then just the inverse Laplace transform of Y(s).
Evaluating at t = nT gives the output y(nT) at the sampling instants, which can also be written as y(n).
Applying the z-transform to this discrete-time signal gives T(z).
In other words, a digital IIR filter is obtained by taking the z-transform of the sampled impulse response of the analog filter described by T(s); this is usually written as

T(z) = T · Z{ L^(−1)[T(s)] evaluated at t = nT }

Note the multiplier T appearing in the formula. Although the Laplace transform of the continuous-time unit impulse and the z-transform of the discrete-time unit impulse are both 1, the impulses themselves are not the same: the continuous-time impulse has infinite height and unit area at t = 0, whereas the discrete-time impulse simply has the value 1 at n = 0, so the scale factor T is required to match the two.
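As an illustrative sketch, assume a one-pole analog prototype T(s) = 1/(s + a), whose impulse response is e^(−at); impulse invariance then gives the digital filter T(z) = T / (1 − e^(−aT)·z^(−1)). The snippet below checks numerically that this digital filter's impulse response matches the sampled (and T-scaled) analog one.

```python
import math

a = 2.0   # analog pole (rad/s) of the prototype T(s) = 1/(s + a)
T = 0.1   # sampling period in seconds
N = 5     # number of samples to compare

# Impulse-invariant digital filter: y[n] = e^(-aT) * y[n-1] + T * x[n]
pole = math.exp(-a * T)
y_prev, digital = 0.0, []
for n in range(N):
    x = 1.0 if n == 0 else 0.0           # discrete-time unit impulse
    y = pole * y_prev + T * x
    digital.append(y)
    y_prev = y

# Sampled, T-scaled analog impulse response h(t) = e^(-at)
analog = [T * math.exp(-a * n * T) for n in range(N)]
print(digital)
print(analog)   # the two sequences agree, illustrating impulse invariance
```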
Step Invariance
Step invariance is often a better design method than impulse invariance. In this approach the sampled input to the digital filter is treated as a sequence of constant segments, i.e. discrete steps. A step-invariant IIR filter reproduces the analog response exactly only for step inputs; however, it is a better approximation for any input than the impulse-invariant design.
Step invariance requires that T(z) and T(s) produce the same sample values when both are driven by step inputs. The input to the digital filter is u(n), and the input to the analog filter is u(t). Apply the z-transform and the Laplace transform to these two inputs to obtain the transformed output signals.
Performing the z-transform on the step input gives

X(z) = z / (z − 1)

so the transformed output after the z-transform is

Y(z) = T(z) · z / (z − 1)

Performing the Laplace transform on the step input gives

X(s) = 1/s

so the transformed output after the Laplace transform is

Y(s) = T(s) / s
The output of the analog filter is y(t), which is the inverse Laplace transform of Y(s). If sampled every T seconds, it is y(n), which is the inverse z-transform of Y(z). These signals are used to solve for a digital filter and an analog filter that have the same output at the sampling instants.
The following equation gives the solution for T(z), the approximation of the analog filter:

T(z) = (1 − z^(−1)) · Z{ L^(−1)[T(s)/s] evaluated at t = nT }
Bilinear Transform
The bilinear transform is a special case of a conformal mapping, often used to convert a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain.
The bilinear transform is a first-order approximation of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of

z = e^(sT) ≈ (1 + sT/2) / (1 − sT/2)

where T is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for s, or a similar approximation for s = (1/T)·ln(z) can be performed.
The inverse of this mapping (and its first-order bilinear approximation) is

s = (1/T)·ln(z) ≈ (2/T) · (z − 1) / (z + 1)
This relationship is used to convert the Laplace transfer function of any analog filter into the corresponding digital infinite impulse response (IIR) filter T(z).
The bilinear transform essentially uses this first-order approximation and substitutes it into the continuous-time transfer function H_a(s):

s ← (2/T) · (1 − z^(−1)) / (1 + z^(−1))

That is

H_d(z) = H_a( (2/T) · (1 − z^(−1)) / (1 + z^(−1)) )
which is used to calculate the IIR digital filter, starting from the Laplace transfer function of the analog filter.
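A short sketch of a bilinear-transform design using SciPy, assuming SciPy is available; the first-order low-pass prototype and the 50 Hz cutoff are arbitrary illustrative choices.

```python
import math
from scipy import signal

fs = 1000.0                 # sampling rate in Hz
wc = 2 * math.pi * 50.0     # analog cutoff frequency: 50 Hz in rad/s

# Analog prototype H(s) = wc / (s + wc), a first-order RC-style low-pass.
b_analog, a_analog = [wc], [1.0, wc]

# Bilinear transform s -> 2*fs*(z - 1)/(z + 1) yields the digital coefficients.
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs)
print(b_digital, a_digital)
```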
See also
Autoregressive model
Electronic filter
Finite impulse response
Recurrence relation, mathematical formalization
System analysis
External links
IIR Digital Filter design tool - produces coefficients, graphs, poles, zeros, and C code
EngineerJS Online IIR Design Tool - does not require Java
Digital signal processing
Filter theory | Infinite impulse response | [
"Engineering"
] | 2,307 | [
"Telecommunications engineering",
"Filter theory"
] |
570,172 | https://en.wikipedia.org/wiki/Airspeed | In aviation, airspeed is the speed of an aircraft relative to the air it is flying through (which itself is usually moving relative to the ground due to wind). It is difficult to measure the exact airspeed of the aircraft (true airspeed), but other measures of airspeed, such as indicated airspeed and Mach number give useful information about the capabilities and limitations of airplane performance. The common measures of airspeed are:
Indicated airspeed (IAS), what is read on an airspeed gauge connected to a pitot-static system.
Calibrated airspeed (CAS), indicated airspeed adjusted for pitot system position and installation error.
True airspeed (TAS) is the actual speed the airplane is moving through the air. In conjunction with winds aloft it is used for navigation.
Equivalent airspeed (EAS) is true airspeed times root density ratio. It is a useful way of calculating aerodynamic loads and airplane performance at low speeds when the flow can be considered incompressible.
Mach number is a measure of how fast the airplane is flying relative to the speed of sound.
The measurement and indication of airspeed is ordinarily accomplished on board an aircraft by an airspeed indicator (ASI) connected to a pitot-static system. The pitot-static system comprises one or more pitot probes (or tubes) facing the on-coming air flow to measure pitot pressure (also called stagnation, total or ram pressure) and one or more static ports to measure the static pressure in the air flow. These two pressures are compared by the ASI to give an IAS reading. Airspeed indicators are designed to give true airspeed at sea level pressure and standard temperature. As the aircraft climbs into less dense air, its true airspeed is greater than the airspeed indicated on the ASI.
Calibrated airspeed is typically within a few knots of indicated airspeed, while equivalent airspeed decreases slightly from CAS as aircraft altitude increases or at high speeds.
Units
Airspeed is commonly given in knots (kn). Since 2010, the International Civil Aviation Organization (ICAO) has recommended using kilometers per hour (km/h) for airspeed (and meters per second for wind speed on runways), but it still allows the de facto standard of knots and has set no date for phasing it out.
Depending on the country of manufacture or the era in aviation history, airspeed indicators on aircraft instrument panels have been configured to read in knots, kilometers per hour, or miles per hour. In high-altitude flight, the Mach number is sometimes used for reporting airspeed.
Indicated airspeed
Indicated airspeed (IAS) is the airspeed indicator reading (ASIR) uncorrected for instrument, position, and other errors. From current EASA definitions: Indicated airspeed means the speed of an aircraft as shown on its pitot static airspeed indicator calibrated to reflect standard atmosphere adiabatic compressible flow at sea level uncorrected for airspeed system errors.
An airspeed indicator is a differential pressure gauge with the pressure reading expressed in units of speed, rather than pressure. The airspeed is derived from the difference between the ram air pressure from the pitot tube, or stagnation pressure, and the static pressure. The pitot tube is mounted facing forward; the static pressure is frequently detected at static ports on one or both sides of the aircraft. Sometimes both pressure sources are combined in a single probe, a pitot-static tube. The static pressure measurement is subject to error due to inability to place the static ports at positions where the pressure is true static pressure at all airspeeds and attitudes. The correction for this error is the position error correction (PEC) and varies for different aircraft and airspeeds. Further errors of 10% or more are common if the airplane is flown in "uncoordinated" flight.
Uses of indicated airspeed
Indicated airspeed is a better measure of power required and lift available than true airspeed. Therefore, IAS is used for controlling the aircraft during taxiing, takeoff, climb, descent, approach or landing. Target speeds for best rate of climb, best range, and best endurance are given in terms of indicated speed. The airspeed structural limit, beyond which the forces on panels may become too high or wing flutter may occur, is often given in terms of IAS.
Calibrated airspeed
Calibrated airspeed (CAS) is indicated airspeed corrected for instrument errors, position error (due to incorrect pressure at the static port) and installation errors.
Calibrated airspeed values less than the speed of sound at standard sea level (661.4788 knots) are calculated as follows:
CAS = a_0 · sqrt( (2/(γ−1)) · [ (q_c/P_0 + 1)^((γ−1)/γ) − 1 ] )   minus position and installation error correction.
where
CAS is the calibrated airspeed,
a_0 is the speed of sound at standard sea level,
γ is the ratio of specific heats (1.4 for air),
q_c is the impact pressure, the difference between total pressure and static pressure, and
P_0 is the static air pressure at standard sea level.
This expression is based on the form of Bernoulli's equation applicable to isentropic compressible flow. CAS is the same as true air speed at sea level standard conditions, but becomes smaller relative to true airspeed as we climb into lower pressure and cooler air. Nevertheless, it remains a good measure of the forces acting on the airplane, meaning stall speeds can be called out on the airspeed indicator. The values for and are consistent with the ISA i.e. the conditions under which airspeed indicators are calibrated.
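A small numerical sketch of the subsonic CAS formula above, using ISA sea-level constants; this is an illustration of the arithmetic, not a certified airspeed computation.

```python
import math

A0 = 661.4788   # speed of sound at standard sea level, knots
P0 = 101325.0   # standard sea-level static pressure, Pa
GAMMA = 1.4     # ratio of specific heats for air

def calibrated_airspeed(qc_pa):
    """Subsonic calibrated airspeed in knots from impact pressure qc (Pa)."""
    exponent = (GAMMA - 1.0) / GAMMA    # = 2/7 for air
    factor = 2.0 / (GAMMA - 1.0)        # = 5 for air
    return A0 * math.sqrt(factor * ((qc_pa / P0 + 1.0) ** exponent - 1.0))

# Example: an impact (pitot minus static) pressure of 3,000 Pa
print(round(calibrated_airspeed(3000.0), 1))   # roughly 135 knots
```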
True airspeed
The true airspeed (TAS; also KTAS, for knots true airspeed) of an aircraft is the speed of the aircraft relative to the air in which it is flying. The true airspeed and heading of an aircraft constitute its velocity relative to the atmosphere.
Uses of true airspeed
The true airspeed is important information for accurate navigation of an aircraft. To maintain a desired ground track whilst flying in a moving airmass, the pilot of an aircraft must use knowledge of wind speed, wind direction, and true air speed to determine the required heading. See wind triangle.
TAS is the appropriate speed to use when calculating the range of an airplane. It is the speed normally listed on the flight plan, also used in flight planning, before considering the effects of wind.
Measurement of true airspeed
True airspeed is calculated from calibrated airspeed as follows
where
is true airspeed
is the temperature ratio, namely local over standard sea level temperature,
Some airspeed indicators include a TAS scale, which is set by entering outside air temperature and pressure altitude. Alternatively, TAS can be calculated using an E6B flight calculator or equivalent, given inputs of CAS, outside air temperature (OAT) and pressure altitude.
Equivalent airspeed
Equivalent airspeed (EAS) is defined as the airspeed at sea level in the International Standard Atmosphere at which the (incompressible) dynamic pressure is the same as the dynamic pressure at the true airspeed (TAS) and altitude at which the aircraft is flying. That is, it is defined by the equation

(1/2) · ρ_0 · EAS² = (1/2) · ρ · TAS²,   i.e.   EAS = TAS · sqrt(ρ/ρ_0)

where
EAS is equivalent airspeed,
TAS is true airspeed,
ρ is the density of air at the altitude at which the aircraft is currently flying, and
ρ_0 is the density of air at sea level in the International Standard Atmosphere (1.225 kg/m3 or 0.00237 slug/ft3).
Stated differently,

EAS = TAS · sqrt(σ)

where
σ is the density ratio, that is σ = ρ/ρ_0.
Uses of equivalent airspeed
EAS is a measure of airspeed that is a function of incompressible dynamic pressure. Structural analysis is often in terms of incompressible dynamic pressure, so equivalent airspeed is a useful speed for structural testing. The significance of equivalent airspeed is that, at Mach numbers below the onset of wave drag, all of the aerodynamic forces and moments on an aircraft are proportional to the square of the equivalent airspeed. Thus, the handling and 'feel' of an aircraft, and the aerodynamic loads upon it, at a given equivalent airspeed, are very nearly constant and equal to those at standard sea level irrespective of the actual flight conditions.
At standard sea level pressure, CAS and EAS are equal. Up to about 200 knots CAS and 10,000 ft (3,000 m) the difference is negligible, but at higher speeds and altitudes CAS diverges from EAS due to compressibility.
Mach number
Mach number is defined as

M = TAS / a

where
TAS is true airspeed, and
a is the local speed of sound.
Both the Mach number and the speed of sound can be computed using measurements of impact pressure, static pressure and outside air temperature.
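A rough sketch of that computation for subsonic flight, using the standard compressible-flow relations and ISA constants; the input numbers are illustrative only.

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def mach_from_pressures(qc_pa, static_pa):
    """Subsonic Mach number from impact pressure qc and static pressure p."""
    return math.sqrt(5.0 * ((qc_pa / static_pa + 1.0) ** (2.0 / 7.0) - 1.0))

def true_airspeed(qc_pa, static_pa, oat_kelvin):
    """True airspeed (m/s) = Mach number times the local speed of sound."""
    a_local = math.sqrt(GAMMA * R_AIR * oat_kelvin)   # local speed of sound, m/s
    return mach_from_pressures(qc_pa, static_pa) * a_local

# Example: qc = 6,000 Pa, static pressure 70,000 Pa, outside air temperature 268 K
print(round(true_airspeed(6000.0, 70000.0, 268.0), 1))   # about 113 m/s (~220 knots)
```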
Uses of Mach number
For aircraft that fly close to, but below the speed of sound (i.e. most civil jets) the compressibility speed limit is given in terms of Mach number. Beyond this speed, Mach buffet or stall or tuck may occur.
See also
ICAO recommendations on use of the International System of Units
Acronyms and abbreviations in avionics
Flight instruments
Ground speed
Maneuvering speed
V speeds
References
Bibliography
External links
Calculators
Aerodynamics | Airspeed | [
"Physics",
"Chemistry",
"Engineering"
] | 1,851 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
570,214 | https://en.wikipedia.org/wiki/AquaDom | The AquaDom (mixed Latin and German: 'water dome', more formally 'water cathedral') was a cylindrical acrylic glass aquarium with built-in transparent elevator inside the lobby of the Radisson Collection Hotel in the DomAquarée complex at Karl-Liebknecht-Straße in Berlin-Mitte, Germany. The DomAquarée complex also contains offices, a museum, a restaurant, and the Berlin Sea Life Centre aquarium.
On 16 December 2022, the AquaDom aquarium ruptured and collapsed, propelling the 1,500 fish inside into nearby facilities and streets, causing considerable damage and killing the majority of the fish. As of early 2024, plans were to forego rebuilding the tank and instead develop an indoor garden in the hotel lobby.
Construction
The AquaDom opened on 2 December 2003 at a cost of about 12.8 million euros. The acrylic cylinder was manufactured by International Concept Management, Inc. using Reynolds Polymer Technology panels, with architecture drawings provided by Sergei Tchoban. It was located in the same building as the Berlin Sea Life attraction but was owned and operated by Union Investment.
The aquarium was constructed from 41 acrylic panels – 26 panels for the outside cylinder and 15 panels for the inside cylinder for the elevator – which were bonded together on site. With a diameter of about and a height of about , resting on a tall foundation, it held the Guinness World Record for the world's largest cylindrical aquarium.
Operation
The water column was high, held of saltwater and accommodated about 1,500 tropical fish from over 100 species. A team of scuba divers conducted daily feedings, with of feed-fish, and cleaned the tank daily. According to Union Investment, the owner of the complex, the wall thickness of the outer acrylic cylinder was at the bottom and at the top. The water temperature was kept at .
In 2020, the aquarium was refurbished and upgraded, with all the water drained and the fish temporarily relocated to a breeding facility in the basement. According to the owner, seals were renewed at the base and an additional sealing level was fitted. The cylinder was repaired and polished in places. Maintenance work on the elevator was conducted.
Collapse and aftermath
The cylindrical tank burst at 5:43 am local time (4:43 am GMT) on 16 December 2022, sending approximately of water together with the tank's 1,500 fish into the hotel lobby and adjacent street. Sandra Weeser, a member of Germany's Bundestag staying at the hotel at the time, described awakening to "a kind of shock wave".
Berlin's Technisches Hilfswerk (THW) rescue team mounted a full-scale deployment, completing operations 12 hours later — with the hotel's lobby and atrium remaining devastated, described by onlookers as resembling a battlefield.
The majority of the 1,500 fish were killed and two people were hospitalized with injuries. Officials noted the collapse could easily have taken several lives had it taken place during the hotel's busier operational hours.
Detected by local seismographs, the collapse sent the water out of the hotel lobby and into nearby storm drains, but not before damaging several nearby businesses, including a neighboring Lindt chocolate shop and the basement of the adjacent DDR Museum, the latter of which reopened three and a half months later. An associated power loss threatened hundreds of smaller fish in the facility's breeding tanks, which were ultimately rescued.
With no suspicion of foul play, and prior to a formal investigation, suspected causes included material fatigue, possibly exacerbated by the large differential between Berlin's very low overnight air temperature and the much warmer water inside the tank.
On 24 October 2023, prosecutors closed the investigation into the rupture after experts failed to determine a conclusive cause.
Similar events
Catastrophic failures and major leaks have occurred at numerous large acrylic tanks, including failures at the T-Rex Café at Disney Springs in Orlando; the Dubai Aquarium at the Dubai Mall; the Orient Shopping Center, Shanghai; the Gulfstream Casino, Hallandale Beach, Florida; at the Lotte Tower, Seoul, South Korea and at the Mazatlan, Mexico Aquarium.
References
External links
CityQuartier "DomAquarée"
Buildings and structures completed in 2003
2003 establishments in Germany
Defunct aquaria
Building and structure collapses in 2022
Building and structure collapses in Germany
Mechanical failure
2022 animal deaths
Collapsed buildings and structures | AquaDom | [
"Materials_science",
"Engineering"
] | 908 | [
"Mechanical failure",
"Materials science",
"Mechanical engineering"
] |
570,274 | https://en.wikipedia.org/wiki/Space%20Interferometry%20Mission | The Space Interferometry Mission, or SIM, also known as SIM Lite (formerly known as SIM PlanetQuest), was a planned space telescope proposed by the U.S. National Aeronautics and Space Administration (NASA), in conjunction with contractor Northrop Grumman. One of the main goals of the mission was the hunt for Earth-sized planets orbiting in the habitable zones of nearby stars other than the Sun. SIM was postponed several times and finally cancelled in 2010. In addition to detecting extrasolar planets, SIM would have helped astronomers construct a map of the Milky Way galaxy. Other important tasks would have included collecting data to help pinpoint stellar masses for specific types of stars, assisting in the determination of the spatial distribution of dark matter in the Milky Way and in the local group of galaxies and using the gravitational microlensing effect to measure the mass of stars. The spacecraft would have used optical interferometry to accomplish these and other scientific goals.
The initial contracts for SIM Lite were awarded in 1998, totaling US$200 million. Work on the SIM project required scientists and engineers to move through eight specific new technology milestones, and by November 2006, all eight had been completed. SIM Lite was originally proposed for a 2005 launch, aboard an Evolved Expendable Launch Vehicle (EELV). As a result of continued budget cuts, the launch date was pushed back at least five times. NASA had set a preliminary launch date for 2015. As of February 2007, many of the engineers working on the SIM program had moved on to other areas and projects, and NASA directed the project to allocate its resources toward engineering risk reduction. However, the preliminary budget for NASA for 2008 included zero dollars for SIM.
In 2007, the Congress restored funding for fiscal year 2008 as part of an omnibus appropriations bill which the President later signed. At the same time the Congress directed NASA to move the mission forward to the development phase. In 2009 the project continued its risk reduction work while waiting for the findings and recommendations of the Astronomy and Astrophysics Decadal Survey, Astro2010, performed by the National Academy of Sciences, which would determine the project's future.
In 2010, the Astro2010 Decadal Report was released and did not recommend that NASA continue the development of the SIM Lite Astrometric Observatory. This prompted NASA Astronomy and Physics Director, Jon Morse, to issue a letter on 24 September 2010 to the SIM Lite project manager, informing him that NASA was discontinuing its sponsorship of the SIM Lite mission and directing the project to discontinue Phase B activities immediately or as soon as practical. Accordingly, all SIM Lite activities were closed down by the end of calendar year 2010.
Mission
SIM Lite would have operated in an Earth-trailing heliocentric orbit, drifting away from Earth at the rate of 0.1 AU per year and ultimately reaching a distance of 82 million km from Earth. This would have taken approximately 5.5 years. The Sun would have continuously shone on the spacecraft, allowing it to avoid the occultations of target stars and eclipses of the Sun that would occur in an Earth orbit.
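A quick back-of-the-envelope check of these drift figures (a sketch using only the rate and distance quoted above):

```python
AU_KM = 149.6e6                 # kilometres per astronomical unit
drift_rate_au_per_year = 0.1    # Earth-trailing drift rate quoted above
max_distance_km = 82e6          # quoted maximum distance from Earth

max_distance_au = max_distance_km / AU_KM                    # ~0.55 AU
years_to_reach = max_distance_au / drift_rate_au_per_year
print(f"{max_distance_au:.2f} AU, reached after ~{years_to_reach:.1f} years")
# -> roughly 0.55 AU, reached after ~5.5 years
```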
Had it been launched, SIM would have performed scientific research for five years.
Planet hunting
SIM Lite would have been the most powerful extrasolar planet hunting space telescope ever built. Through the technique of interferometry, the spacecraft would have been able to detect Earth-sized planets. SIM Lite was to perform its search for nearby, Earth-like planets by looking for the "wobble" in the parent star's apparent motion as the planet orbits. The spacecraft would have accomplished this task to an accuracy of one millionth of an arcsecond, or the thickness of a nickel viewed at the distance from Earth to the Moon. Titled the Deep Search, the planet-hunting program was intended to search approximately 60 nearby stars for terrestrial planets (like Earth and Venus) in the habitable zone, where liquid water can exist throughout a full revolution (one "year") of the planet around its star. The Deep Search was to be the most demanding part of the mission in terms of astrometric accuracy, hence its name, and would have used the full capability of the SIM Lite spacecraft to make its measurements.
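As a rough check of the nickel comparison (illustrative only; the coin thickness of about 2 mm and the mean Earth–Moon distance are assumed values, not figures from the source):

```python
import math

nickel_thickness_m = 1.95e-3      # US nickel thickness, ~2 mm (assumed)
earth_moon_distance_m = 3.84e8    # mean Earth–Moon distance in metres (assumed)

angle_rad = nickel_thickness_m / earth_moon_distance_m    # small-angle approximation
angle_microarcsec = math.degrees(angle_rad) * 3600 * 1e6
print(f"~{angle_microarcsec:.1f} microarcseconds")        # comes out near 1 microarcsecond
```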
A flexible search strategy would have tuned SIM Lite's mass sensitivity at each star to a desired level in the habitable-planet search. The value of ηEarth (Eta_Earth), the fraction of stars carrying Earth-analog planets, was expected to be estimated by the Kepler mission some time before SIM Lite launched. One strategy for a habitable-planet search was to do a 'deeper' search (i.e. to lower mass sensitivity in the habitable zone) of a smaller number of targets if Earth analogs turned out to be common; a 'shallower' search of a larger number of targets could have been done if Earth analogs were rarer. For example, assuming that 40% of mission time was allocated for the planet search, SIM Lite could have surveyed (a rough scaling sketch follows the list below):
65 stars for planets down to one Earth mass, in scaled 1 AU orbits, OR
149 stars for planets down to two Earth masses, in scaled 1 AU orbits, OR
239 stars for planets down to three Earth masses, in scaled 1 AU orbits.
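A rough sketch of the scaling behind this trade-off. The astrometric wobble of a host star scales roughly as the planet-to-star mass ratio times the orbital radius divided by the distance, so lower-mass planets produce smaller signals and demand more repeat measurements per star, leaving time for fewer targets. The numbers below are illustrative assumptions, not mission figures:

```python
def wobble_microarcsec(planet_earth_masses, a_au=1.0, d_parsec=10.0,
                       star_solar_masses=1.0):
    """Astrometric semi-amplitude of the host star, in microarcseconds.

    alpha [arcsec] ~ (M_planet / M_star) * a [AU] / d [pc]
    """
    M_EARTH_IN_SOLAR = 3.0e-6
    mass_ratio = planet_earth_masses * M_EARTH_IN_SOLAR / star_solar_masses
    return mass_ratio * a_au / d_parsec * 1e6

for m in (1, 2, 3):   # the three survey tiers listed above
    print(f"{m} Earth-mass planet: ~{wobble_microarcsec(m):.2f} uas at 10 pc")
# 1 Earth mass -> ~0.3 uas, well below the 1.12 uas single-measurement accuracy,
# hence many repeat visits per star and correspondingly fewer total targets.
```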
Aside from searching for Earth-sized planets, SIM Lite was scheduled to perform what has been dubbed the "Broad Survey". The Broad Survey would have looked at approximately 1,500 stars to help determine the abundance of Neptune-mass and larger planets around all star types in Earth's sector of the Milky Way.
A third part of the planet finding mission was the search for Jupiter-mass planets around young stars. The survey would have helped scientists understand more about solar system formation, including the occurrence of hot Jupiters. This portion of the planet hunt was designed to study systems with one or more Jupiter-mass planets before the system has reached long-term equilibrium. Planet hunting techniques using a star's radial velocity cannot measure the regular, tiny to-and-fro wobble motions induced by planets against the strong atmospheric activity of a youthful star. It is through the techniques pioneered by Albert A. Michelson that SIM would have been able to execute its three primary planet-finding missions.
The mission's planet finding component was set up to serve as an important complement to future missions designed to image and measure terrestrial and other exoplanets. SIM Lite was to perform an important task that those missions would not be capable of: determining planet masses. SIM was also envisioned to provide the orbital characteristics of the planets it found. With this knowledge, other missions could estimate the optimal times and projected star–planet separation angles for observing the terrestrial (and other) planets SIM had detected.
Stellar mass
Another key aspect of SIM Lite's mission was determining the upper and lower limits of stellar masses. Today, scientists understand that there are limits to how small or large a star can be. Objects that are too small lack the internal pressure to initiate thermonuclear fusion, which is what causes a star to shine. These objects are known as brown dwarfs and represent the lower end of the stellar mass scale. Stars that are too large become unstable and explode in a supernova.
Part of SIM's mission was to provide pinpoint measurements for the two extremes in stellar mass and evolution. The telescope would not have been able to measure the mass of every star in the Galaxy, since there are over 200 billion, but instead it was to take a "population census." Through this technique, SIM would have been able to produce accurate masses for representative examples of nearly every star type, including brown dwarfs, hot white dwarfs, red giant stars, and elusive black holes. Current space telescopes, including NASA's Hubble Space Telescope, can accurately measure mass for some types of stars, but not all. Estimates put the range for stellar mass somewhere between 8% of the mass of the Sun and in excess of 60 times the mass of the Sun. The entire study was to focus on binary star systems: stars coupled through a mutual gravitational attraction.
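Binaries are so valuable for a mass census because Kepler's third law turns a measured orbit directly into a mass. In convenient units (relative semi-major axis $a$ in AU, orbital period $P$ in years, masses in solar masses), one standard form is

\[ M_1 + M_2 = \frac{a^{3}}{P^{2}}, \]

so, for example, a binary with $a = 10$ AU and $P = 20$ years has a total mass of $10^{3}/20^{2} = 2.5$ solar masses; resolving each star's wobble about the common centre of mass then splits the total, since $M_1/M_2 = a_2/a_1$. (The numerical values here are illustrative, not SIM targets.)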
Galactic mapping
Interferometric measurements of stellar positions over the course of the mission would have permitted SIM to precisely measure the distances between stars throughout the Milky Way. This would have allowed astronomers to create a "roadmap" of the Galaxy, answering many questions about its shape and size.
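To get a feel for what this level of astrometry buys for map-making, recall that a star's trigonometric parallax in arcseconds is the reciprocal of its distance in parsecs. A sketch, assuming a parallax uncertainty of a few microarcseconds in line with the grid accuracy quoted later in the article (the error model here is a simplifying assumption):

```python
def parallax_uas(distance_pc):
    """Annual parallax in microarcseconds for a star at distance_pc parsecs."""
    return 1.0 / distance_pc * 1e6

sigma_uas = 4.0                          # assumed parallax uncertainty, microarcseconds
for d in (100, 1_000, 8_000, 25_000):    # parsecs; ~8 kpc is roughly the Galactic Center
    frac_err = sigma_uas / parallax_uas(d)
    print(f"{d:>6} pc: parallax {parallax_uas(d):8.1f} uas, "
          f"distance error ~{100 * frac_err:.2g}%")
# 100 pc -> ~0.04%, 1 kpc -> ~0.4%, 8 kpc -> ~3%, 25 kpc -> ~10%
```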
Currently, astronomers know little about the shape and size of our galaxy relative to what they know about other galaxies; it is difficult to observe the entire Milky Way from the inside. A good analogy is trying to observe a marching band as a member of the band. Observing other galaxies is much easier because humans are outside those galaxies. Steven Majewski and his team planned to use SIM Lite to help determine not only the shape and size of the Galaxy but also the distribution of its mass and the motion of its stars.
SIM Lite measurements of Milky Way stars were to yield data to understand four topics: fundamental galactic parameters, the Oort Limit, disk mass potential, and mass of the Galaxy to large radii. The first, fundamental galactic parameters, was aimed at answering key questions about the size, shape and the rotation rate of the Milky Way. The team hoped to more accurately determine the distance from the Sun to the Galactic Center. The second topic, the Oort Limit, would have attempted to determine the mass of the galactic disk.
The third project topic was disk mass potential. This topic was designed to make measurements of the distances to disk stars as well as their proper motions. The results of the third topic of study were to be combined with the results of the fundamental galactic parameters portion of the study to determine the Solar System's position and velocity in the galaxy. The final topic dealt with dark matter distribution in the Milky Way. SIM data were to be used to create a three-dimensional model of mass distribution in the Galaxy, out to a radius of 270 kiloparsecs (kpc). Astronomers were to then use two different tests to determine the galactic potential at large radii.
Dark matter
Dark matter is the matter in the universe that cannot be seen. Because of the gravitational effect it exerts on stars and galaxies, scientists know that approximately 80% of the matter in the universe is dark matter. The spatial distribution of dark matter in the universe is largely unknown; SIM Lite would have helped scientists answer this question.
The strongest evidence for dark matter comes from galactic motion. Galaxies rotate much faster than the amount of visible matter suggests they should; the gravity from the ordinary matter is not enough to hold the galaxy together. Scientists theorize that the galaxy is held together by huge quantities of dark matter. Similarly, clusters of galaxies do not appear to have enough visible matter to gravitationally balance the high speed motions of their component galaxies.
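The rotation-curve argument can be stated in one line. For a star on a roughly circular orbit of radius $r$ about the galactic centre, Newtonian gravity gives (a simplified, spherically averaged sketch)

\[ v(r) \approx \sqrt{\frac{G\,M({<}r)}{r}} \quad\Longleftrightarrow\quad M({<}r) \approx \frac{v(r)^{2}\,r}{G}, \]

where $M({<}r)$ is the mass enclosed within $r$. If the measured $v(r)$ stays roughly flat far beyond the visible disk instead of falling off, the enclosed mass must keep growing roughly in proportion to $r$, implying far more mass than the visible stars and gas can supply.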
Besides measuring stellar motions within the Milky Way, SIM Lite was to measure the internal and average galactic motion of some of the neighboring galaxies near the Milky Way. The telescope's measurements were to be used in conjunction with other, currently available, data to provide astronomers with the first total mass measurements of individual galaxies. These numbers would enable scientists to estimate the spatial distribution of dark matter in the local group of galaxies, and by extension, throughout the universe.
Development
Beginnings
The Space Interferometry Mission began as a four-month preliminary architecture study in March 1997. NASA selected TRW's Space & Electronics Group, Eastman Kodak and Hughes Danbury Optical Systems to conduct the study. In 1998, TRW Inc. was selected as the contractor for the SIM Lite project; Northrop Grumman acquired part of TRW in 2002 and took over the contract. Also selected was Lockheed Martin Missiles and Space, located in Sunnyvale, California. The two contracts, which included the mission formulation and implementation phases, were announced in September 1998 and were worth a total of over US$200 million. The formulation phase of the mission included initial mission design and planning for the full scale implementation of the mission. At the time of the NASA announcement, launch was scheduled for 2005 and the mission was part of the Origins Program, a series of missions designed to answer questions such as the origin of life on Earth.
In August 2000, NASA asked project managers to consider looking at the Space Shuttle, instead of the previously proposed EELV, as a launch vehicle. In late November 2000, NASA announced that the project's scientific team was selected. The group included notable names from the world of extrasolar planet research. The entire group consisted of 10 principal investigators and five mission specialists. At the time of this NASA announcement launch was scheduled for 2009 and the mission was still part of the Origins Program.
New technologies
SIM's new technology was meant to lead to the development of telescopes powerful enough to take images of Earth-like extrasolar planets orbiting distant stars and to determine whether those planets are able to sustain life. NASA had already started developing future missions intended to build on SIM's technological legacy. The technological development phase of the mission was completed in November 2006 with the announcement that the eight mission technology milestones set by NASA had been reached. The milestones were necessary steps in the technological development before flight control instruments could begin to be designed. Completing each milestone required new systems for nanometer control as well as picometer knowledge technology; these systems would have enabled the telescope to make its measurements with extreme accuracy.
Among the new technologies developed for the mission were high-tech "rulers", capable of making measurements in increments of a fraction of the width of a hydrogen atom. In addition, the rulers were developed to work as a network. The mission team also created "shock absorbers" to alleviate the effects of tiny vibrations in the spacecraft which would impede accurate measurements. Another of the milestones involved combining the new "rulers" and "shock absorbers" to prove that the Space Interferometry Mission craft could detect the tiny wobbles in stars caused by Earth-sized planets. The fifth of the technology milestones required the demonstration of the Microarcsecond Metrology Testbed at a performance of 3,200 picometers over its wide-angle field of view. The wide-angle measurements were to be used to determine the fixed positions of stars each time they were measured. This level of performance demonstrated SIM Lite's ability to calculate the astrometric grid. Another key development, known as gridless narrow-angle astrometry (GNAA), applied the measurement capability worked out in the wide-angle milestone and took it a step further, into narrow-angle measurements. Aiming to give an accuracy of 1 micro-arcsecond to the early stages of SIM, the technique allows star positions to be measured without first setting up a grid of reference stars; instead, it sets up a reference frame using several reference stars and a target star observed from different locations, and star positions are calculated using delay measurements from separate observations. The narrow-angle field was to be used by SIM to detect terrestrial planets; the team applied the same criteria to both the narrow- and wide-angle measurements. The final requirement before beginning work on flight controls was to make sure that all of the systems developed for the mission worked cohesively; this final NASA technology goal was completed last as it was dependent upon the others.
Status after 2006
Between the end of April and June 2006, the project completed three engineering milestones, and from 2–8 November 2006 SIM completed a "Spacecraft Internal Design Review." As of June 2008, all eight engineering milestones had been successfully completed.
The project had been in Phase B since June 2003. The Jet Propulsion Laboratory calls Phase B the "Preliminary Design" phase, which "further develops the mission concept developed during Phase A to prepare the project for entry into the Implementation Phase of the project. Requirements are defined, schedules are determined, and specifications are prepared to initiate system design and development." In addition, as part of Phase B, the SIM Lite project was to go through a number of reviews by NASA, including a System Requirements Review, System Design Review, and Non-Advocate Review. During this phase, experiments would have been proposed, peer reviewed, and eventually selected by NASA's Office of Space Science. Experiment selections were to be based on scientific value, cost, management, engineering, and safety.
Planned launch
The launch date for the SIM Lite mission was pushed back at least five times. At the program's outset, in 1998, the launch was scheduled for 2005. By 2000, the launch date had been delayed until 2009, a date that held through 2003, though some project scientists cited 2008 in late 2000. Between 2004 and 2006, contractor Northrop Grumman, the company designing and developing SIM, listed a launch date of 2011 on its website. With the release of the FY 2007 NASA budget, predictions changed again, this time to a date no earlier than 2015 or 2016. The delay of the launch date was primarily related to budget cuts made to the SIM Lite program. The 2007 change represented a slip of about three years from the launch date outlined in NASA's FY 2006 budget, which itself had been two years behind the FY 2005 budget predictions.
Other groups predicted dates matching officially predicted launch dates; the NASA Exoplanet Science Institute (formerly the Michelson Science Center) at the California Institute of Technology also set the date at 2015. As of June 2008, NASA had postponed the launch date "indefinitely".
A May 2005 NASA operating plan put the mission into a replanning phase through the spring of 2006. The launch was planned to be via an Evolved Expendable Launch Vehicle (EELV), likely an Atlas V 521 or equivalent.
Budget
SIM Lite was considered the flagship mission of NASA's Exoplanet Exploration Program (formerly known as the Navigator Program). According to the 2007 Presidential Budget for NASA, the program is "a coherent series of increasingly challenging projects, each complementary to the others and each mission building on the results and capabilities of those that preceded it as NASA searches for habitable planets outside of the Solar System." The program, in addition to the Space Interferometry Mission, includes the Keck Interferometer and the Large Binocular Telescope Interferometer. When originally approved in 1996, the mission was given a $700 million cap (in 1996 dollars), which included launch costs and five years of operation. The first contracts, for the preliminary architecture study, were worth $200,000 each.
NASA's budget outlined plans for the three projects for fiscal year (FY) 2007. Of the three missions, SIM Lite was delayed further and the Keck Interferometer saw budget cuts. The 2007 NASA budget stipulated, "SIM Phase B activity will continue while new cost and schedule plans are developed, consistent with recent funding decisions." The funding decisions included a US$118.5 million cut relative to the FY 2006 NASA budget request for the Exoplanet Exploration Program. The budget also laid out projections for the program through the year 2010, with successive funding cuts each year relative to the 2006 request numbers. Starting with FY 2008, the Exoplanet Exploration Program was to receive around $223.9 million less than the 2006 request, followed by cuts of $155.2 million in 2009 and $172.5 million in 2010, again compared to the 2006 request.
When SIM Lite entered what JPL terms "Phase B" in 2003, Fringes: Space Interferometry Mission Newsletter called it a most important milestone on the way to a 2009 launch. The subsequent delays were budgetary in nature. In 2006, the mission received $117 million, an increase of $8.1 million over the previous year, but 2007 cuts amounted to $47.9 million less for the SIM program. In 2008, $128.7 million of the $223.9 million estimated to be cut from the Exoplanet Program budget would come from the SIM Lite mission. After an additional $51.9 million decrease in FY 2009, the program was reduced to $6 million in FY 2010, supplemented by substantial carryover from the previous year, while awaiting the results of the Astronomy and Astrophysics Decadal Survey, Astro2010.
By February 2007 many of the budget cuts outlined in the FY 2007 budget were already being felt within the project. Engineers who worked on SIM were forced to find other areas to work in. A February 2007 editorial in the Space Interferometry Mission Newsletter described the situation as, "entirely due to budget pressures and priorities within the Science Mission Directorate at NASA (with) scientific motivation for the mission...as strong as ever." NASA, per the budget cuts, directed the SIM project to refocus its efforts toward engineering risk reduction. As of the February 2007 newsletter the plans for the refocus were in the process of being completed.
Instruments
Optical interferometry
Interferometry is a technique pioneered by Albert A. Michelson in the 19th century. Optical interferometry, which has matured within the last two decades, combines the light of multiple telescopes so that precise measurements can be made, akin to what might be accomplished with a single, much larger telescope. It is the interaction of light waves, called interference, that makes this possible. Interference can be used to cancel out the glare of bright stars or to measure distances and angles accurately. The construction of the word partially illustrates this: interfere + measure = interfer-o-metry. At radio wavelengths of the electromagnetic spectrum, interferometry has been used for more than 50 years to measure the structure of distant galaxies.
The SIM Lite telescope was to function through optical interferometry. SIM was to be composed of one science interferometer (50 cm collectors, 6 m separation [baseline]), a guide interferometer (30 cm collectors, 4.2 m baseline), and a guide telescope (30 cm aperture). The sophisticated guide telescope would have stabilized instrument pointing in the third dimension. The spacecraft's operational limiting magnitude would have reached 20 at an accuracy of 20 millionths of an arcsecond (μas), and its planet-finding astrometric accuracy of 1.12 μas applied to single measurements. The accuracy of its global, all-sky astrometric grid would have been 4 μas.
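A sketch of why these baselines translate into microarcsecond astrometry: an interferometer measures the optical path delay between its two collectors, roughly $d = B \sin\theta$, so an angular error $\sigma_\theta$ corresponds to a delay error of about $B\,\sigma_\theta$. The wavelength below is an assumed value; the 6 m baseline is the one quoted above:

```python
import math

def rad_to_uas(angle_rad):
    """Convert an angle in radians to microarcseconds."""
    return math.degrees(angle_rad) * 3600 * 1e6

baseline_m = 6.0          # science interferometer baseline, from the text above
wavelength_m = 0.6e-6     # assumed visible-light observing wavelength

# Fringe spacing: the angular scale corresponding to one wavelength of delay.
fringe_spacing_rad = wavelength_m / baseline_m
print(f"fringe spacing ~ {rad_to_uas(fringe_spacing_rad) / 1e3:.0f} milliarcseconds")

# Delay knowledge needed for 1 microarcsecond astrometry: sigma_d ~ B * sigma_theta.
one_uas_rad = math.radians(1e-6 / 3600)
print(f"delay knowledge for 1 uas ~ {baseline_m * one_uas_rad * 1e12:.0f} picometres")
# ~30 pm -- which is why picometre-level metrology had to be developed for the mission.
```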
SIM's design since 2000 consisted of two light collectors (strictly speaking, they are Mersenne telescopes) mounted on opposite ends of a six-meter structure. The observatory would have been able to measure the small wobbles in stars and detect the planets causing them down to one Earth mass at distances up to 33 light years (10 parsecs) from the Sun.
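One way to square the quoted 1.12 μas single-measurement accuracy with the much smaller signal of an Earth analog at 10 parsecs (roughly 0.3 μas, as sketched earlier) is repeat visits: if the measurement noise averages down like independent errors (an assumption about the error budget, not a mission specification), then $N$ differential measurements give

\[ \sigma_N \approx \frac{\sigma_1}{\sqrt{N}} \approx \frac{1.12\ \mu\text{as}}{\sqrt{N}}, \]

so on the order of 50–100 visits per target would bring the effective noise to roughly 0.1–0.16 μas, one illustration of why the Deep Search was the most observing-time-intensive part of the planned program.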
See also
Notes
References
External links
"Fringes: Space Interferometry Mission Newsletter" Index
SIM Lite Astrometric Observatory (formerly SIM PlanetQuest), from NASA
SIM PlanetQuest Mission Profile by NASA's Solar System Exploration
Optical Long Baseline Interferometery News, from NASA
Interferometric telescopes
Cancelled spacecraft
Space astrometry missions
Space telescopes
Exoplanet search projects
Lockheed Corporation
Astronomy projects | Space Interferometry Mission | [
"Astronomy"
] | 4,708 | [
"Astronomy projects",
"Exoplanet search projects",
"Space telescopes",
"Space astrometry missions"
] |
570,440 | https://en.wikipedia.org/wiki/Zoosemiotics | Zoosemiotics is the semiotic study of the use of signs among animals, more precisely the study of semiosis among animals, i.e. the study of how something comes to function as a sign to some animal. It is the study of animal forms of knowing.
Considered part of biosemiotics, zoosemiotics is related to the fields of ethology and animal communication. It was developed by semiotician Thomas Sebeok based on the theories of German-Estonian biologist Jakob von Uexküll. The field is defined by having as its subject matter all of those semiotic processes that are shared by both animals and humans. The field also differs from the field of animal communication in that it also interprets signs that are not communicative in the traditional sense, such as camouflage, mimicry, courtship behavior etc. The field also studies cross-species communication, for example between humans and animals.
See also
Biosemiotics
French Zoosemiotics Society
Phytosemiotics
Neurosemiotics
References
Further reading
Sebeok, Thomas A. 1972. Perspectives in Zoosemiotics. Janua Linguarum. Series Minor 122. The Hague: Mouton de Gruyter.
Martinelli, Dario; Lehto, Otto (Eds.) 2009. Special issue: Zoosemiotics. Sign Systems Studies 37(3/4). (esp. G. Kaplan, Animals and music: Between cultural definitions and sensory evidence, 423–453; K. Kleisner, M. Stella, Monsters we met, monsters we made: On the parallel emergence of phenotypic similarity under domestication, 454–476; S. Pain, From biorhetorics to zoorhetorics, 498–508; K. Tüür, Bird sounds in nature writing: Human perspective on animal communication, 580–613; E. Vladimirova, Sign activity of mammals as means of ecological adaptation, 614–636; C. Brentari, Konrad Lorenz's epistemological criticism towards Jakob von Uexküll, 637–660).
Klopfer, P. (1974), Linguistics: Perspectives in Zoosemiotics. Thomas A. Sebeok. American Anthropologist 76: 939.
Felice Cimatti, 2002. Mente e linguaggio negli animali. Introduzione alla zoosemiotica cognitiva. Roma, Carocci.
Remo Gramigna 2010. Augustine's legacy for the history of zoosemiotics. Hortus Semioticus 6.
Kull, Kalevi 2003. Thomas A. Sebeok and biology: building biosemiotics. Cybernetics & Human Knowing 10(1): 47–60
Martinelli, Dario 2007. Zoosemiotics. Proposal for a Handbook. Helsinki: Acta Semiotica Fennica 26. Imatra: International Semiotics Institute at Imatra.
Martinelli, Dario 2010. A Critical Companion to Zoosemiotics: People, Paths, Ideas. Biosemiotics 5. Berlin: Springer
Schuler, Werner 2003. Zoosemiose. In: Roland Posner, Klaus Robering and Thomas Sebeok (eds.) 2003: Ein Handbuch zu den zeichentheoretischen Grundlagen von Natur und Kultur / A Handbook on the Signtheoretic Foundations of Nature and Culture. Berlin and New York: Walter de Gruyter, 522–531.
Sebeok, Thomas A. 1990. Essays in Zoosemiotics (= Monograph Series of the TSC 5). Toronto: Toronto Semiotic Circle; Victoria College in the University of Toronto.
Smith, W. John 1974. Zoosemiotics: ethology and the theory of signs. Current Trends in Linguistics 12: 561–626
Turovski, Aleksei 2002. On the zoosemiotics of health and disease. Sign Systems Studies 30.1: 213–219.
Animal communication | Zoosemiotics | [
"Biology"
] | 861 | [
"Ethology",
"Behavior",
"Zoosemiotics"
] |
570,478 | https://en.wikipedia.org/wiki/Housing%20cooperative | A housing cooperative, or housing co-op, is a legal entity which owns real estate consisting of one or more residential buildings. The entity is usually a cooperative or a corporation and constitutes a form of housing tenure. Typically housing cooperatives are owned by shareholders but in some cases they can be owned by a non-profit organization. They are a distinctive form of home ownership that have many characteristics that differ from other residential arrangements such as single family home ownership, condominiums and renting.
The cooperative is membership based, with membership granted by way of a share purchase in the cooperative. Each shareholder in the legal entity is granted the right to occupy one housing unit. A primary advantage of the housing cooperative is the pooling of the members' resources so that their buying power is leveraged; thus lowering the cost per member in all the services and products associated with home ownership.
Another key element in some forms of housing cooperatives is that the members, through their elected representatives, screen and select who may live in the cooperative, unlike any other form of home ownership.
Housing cooperatives fall into two general tenure categories: non-ownership (referred to as non-equity or continuing) and ownership (referred to as equity or strata). In non-equity cooperatives, occupancy rights are sometimes granted subject to an occupancy agreement, which is similar to a lease. In equity cooperatives, occupancy rights are sometimes granted by way of the purchase agreements and legal instruments registered on the title. The corporation's articles of incorporation and bylaws as well as occupancy agreement specifies the cooperative's rules.
The word cooperative is also used to describe a non-share capital co-op model in which fee-paying members obtain the right to occupy a bedroom and share the communal resources of a house owned by a cooperative organization. Such is the case with student cooperatives in some college and university communities across the United States.
Legal status
As a legal entity, a co-op can contract with other companies or hire individuals to provide it with services, such as a maintenance contractor or a building manager. It can also hire employees, such as a manager or a caretaker, to deal with specific upkeep tasks at which volunteers may hesitate or may not be skilled, such as electrical maintenance.
In non-equity cooperatives and in limited equity cooperatives, a shareholder in a co-op does not own real estate, but a share of the legal entity that does own real estate. Co-operative ownership is quite distinct from condominiums where people own individual units and have little say in who moves into the other units. Because of this, most jurisdictions have developed separate legislation, similar to laws that regulate companies, to regulate how co-ops are operated and the rights and obligations of shareholders.
Ownership
Each resident or resident household has membership in the co-operative association. In non-equity cooperatives, members have occupancy rights to a specific suite within the housing co-operative as outlined in their "occupancy agreement", or "proprietary lease", which is essentially a lease. In ownership cooperatives, occupancy rights are transferred to the purchaser by way of the title transfer.
Since the housing cooperative holds title to all the property and housing structures, it bears the cost of maintaining, repairing and replacing them. This relieves the member from the cost and burden of such work. In that sense, the housing cooperative is like the landlord in a rental setting. However, another hallmark of cooperative living is that it is nonprofit, so that the work is done at cost, with no profit motive involved.
In some cases, the co-op follows Rochdale Principles where each shareholder has only one vote. Most cooperatives are incorporated as limited stock companies where the number of votes an owner has is tied to the number of shares owned by the person. Whichever form of voting is employed it is necessary to conduct an election among shareholders to determine who will represent them on the board of directors (if one exists), the governing body of the co-operative. The board of directors is generally responsible for the business decisions including the financial requirements and sustainability of the co-operative. Although politics vary from co-op to co-op and depend largely on the wishes of its members, it is a general rule that a majority vote of the board is necessary to make business decisions.
Management
In larger co-ops, members of a co-op typically elect a board of directors from amongst the shareholders at a general meeting, usually the annual general meeting. In smaller co-ops, all members sit on the board.
A housing cooperative's board of directors is elected by the membership, providing a voice and representation in the governance of the property. Rules are determined by the board, providing a flexible means of addressing the issues that arise in a community to assure the members' peaceful possession of their homes.
Finance
A housing cooperative is normally de facto non-profit, since usually most of its income comes from the rents paid by its residents (if in a formal corporation, then shareholders), who are invariably its members. There is no point in creating a deliberate surplus—except for operational requirements such as setting aside funds for replacement of assets—since that simply means that the rents paid by members are set higher than the expenses. (It is possible for a housing co-op to own other revenue-generating assets, such as a subsidiary business which could produce surplus income to offset the cost of the housing, but in those cases the housing rents are usually reduced to compensate for the additional revenue.)
In the lifecycle of buildings, the replacement of assets (capital repairs) requires significant funds which can be obtained through a variety of ways: assessments on current owners; sales of Treasury Stock (former rental units) to new shareholders; draw downs of reserves; unsecured loans; operating surpluses; fees on the sales of units between shareholders and new and increases to existing mortgages.
There are housing co-ops of the rich and famous: John Lennon, for instance, lived in The Dakota, a housing co-operative,
and most apartments in New York City that are owned rather than rented are held through a co-operative
rather than via a condominium arrangement.
Market-rate and limited-equity co-ops
There are two main types of housing co-operative share pricing: market rate and limited equity. With market rate, the share price is allowed to rise on the open market and shareholders may sell at whatever price the market will bear when they want to move out. In many ways market rate is thus similar financially to owning a condominium, with the difference being that often the co-op may carry a mortgage, resulting in a much higher monthly fee paid to the co-op than would be so in a condominium. The purchase price of a comparable unit in the co-op is typically much lower, however.
With limited equity, the co-op has rules regarding pricing of shares when sold. The idea behind limited equity is to maintain affordable housing. A sub-set of the limited equity model is the no-equity model, which looks very much like renting, with a very low purchase price (comparable to a rental security deposit) and a monthly fee in lieu of rent. When selling, all that is re-couped is that very low purchase price.
Research on housing cooperatives
Research in Canada found that residents of housing cooperatives rated themselves as having the highest quality of life and housing satisfaction of any housing organization in the city studied.
Other research among older residents in the rural United States found that those living in housing cooperatives felt much safer, more independent and more satisfied with life, had more friends and more privacy, were healthier, and had things repaired faster. Australian researchers found that cooperative housing built stronger social networks and support, as well as better relationships with neighbours, compared to other forms of housing. Cooperatives cost 14% less for residents and had lower rates of debt and vacancy. Other US research has found that housing cooperatives tended to have higher rates of building quality, building safety and feelings of security among residents, lower crime rates, stable access to housing and significantly lower costs compared to conventional housing.
By country
Australia
Housing co-operatives in Australia are primarily non-equity rental co-operatives, but there are some equity co-operatives as well. The rental co-operatives are generally a part of the Australian social housing/community housing sector and have been funded by various iterations of government funding programs.
One of the largest co-operative housing organisations in Australia is Common Equity Housing Ltd (CEHL) in the state of Victoria. CEHL is a registered housing association with its shares held by its 103-member co-operatives. As of 2023 CEHL co-operatives house 4,291 people in 2,101 homes.
Common Equity, in the state of NSW, is also a registered housing provider and manages 500 properties in 31 member housing co-operatives
Canada
Co-ops in Canada offer an affordable alternative to renting, but waiting lists for the units can be years-long.
France
In 2013, the opening of La Maison des Babayagas, an innovative housing co-op in Paris, gained worldwide attention. It was formed as a self-help community and built with financial assistance from the municipal government, specifically for female senior citizens. Located in the Paris suburb of Montreuil after many years of planning, it looks like any other apartment building. The senior citizens stay out of nursing homes, by staying active, alert, and assisting one another.
The purpose of the Baba Yaga Association is to create and develop an innovative lay residence for aging women that is: (1) self-managed, without hierarchy and without supervision; (2) collective and united, with regard to finances as well as daily life; (3) civic-minded, through openness to the community and city and through mutual interaction, engaging in its political, cultural and social life in a spirit of participatory democracy; and (4) ecological in all aspects of life, in conformity with the values and actions expressed in the Charter of Living of the House of Babayagas.
Generally, the association's activities are tied to the purpose above, in particular, the development of a popular entity called the University of Knowledge of the Elderly (UNISAVIE: Université du savoir des vieux), and the initiation of a movement to promote other living places that are organized into similar networks.
The community charter sets out expectations for privacy. Each apartment is self-contained. Monthly meetings assure the optimal routines of the building and ensure that each person may participate fully and with complete liberty of expression. Plans set out the routine intervention of a mediator who could help get to the bottom of the causes of eventual conflicts in order to allow for their resolution.
The success of the Paris co-op inspired several Canadian grassroots groups to adopt similar values in senior housing initiatives; these values include autonomy and self-management, solidarity and mutual aid, civic engagement, and ecological responsibility.
Germany
Housing cooperatives, or "Wohnungsgenossenschaften" in German, are a type of housing association that provides affordable housing to its members. They are formed and run by a group of people who come together to pool their resources in order to purchase or build housing for their own use.
In Germany, housing cooperatives are typically organized as non-profit organizations, which means that any profits made from the sale or rental of the housing are reinvested in the cooperative rather than being distributed to shareholders. This allows housing cooperatives to offer lower prices for housing than would be possible for for-profit organizations.
Members of a housing cooperative typically have the right to occupy a specific unit within the cooperative's housing complex, and they also have a say in the management and decision-making of the cooperative. This can include voting on issues related to the maintenance and operation of the housing complex, as well as electing a board of directors to oversee the cooperative's operations.
Housing cooperatives are a popular form of housing in Germany, particularly in urban areas, and they are often seen as a way to provide affordable, community-oriented housing options.
In the Industrialisation in the 19th century there were many housing cooperatives founded in Germany. Presently, there are over 2,000 housing cooperatives with over two million apartments and over three million members in Germany. The public housing cooperatives are organised in the GdW Bundesverband deutscher Wohnungs- und Immobilienunternehmen (Federal association of German housing and real estate enterprise registered associations).
Egypt
The housing cooperative project in Egypt aims to serve low-income residents by providing fully finished housing units of two rooms and a hall, or three rooms and a hall, with areas ranging from 75 to 90 square meters. These units are offered at cost price, with direct support ranging from 5,000 to 25,000 Egyptian pounds. The beneficiary of a unit can pay its price over a period of 20 years; 538,000 units had been implemented in all governorates and new cities by 2022, built under the Ministry of Housing, Utilities & Urban Communities.
India
In India, most 'flats' are owned outright. i.e. the title to each individual flat is separate. There is usually a governing body/society/association to administer maintenance and other building needs. These are comparable to the Condominium Buildings in the USA. The laws governing the building, its governing body and how flats within the building are transferred differ from state to state.
Certain buildings are organized as "Cooperative Housing Societies" where one actually owns a share in the Cooperative rather than the flat itself. This structure was very popular in the past but has become less common in recent times. Most states have separate laws governing Cooperative Housing Societies.
Netherlands
In the Netherlands there are three very different types of organization that could be considered a housing cooperative:
Housing corporation
A housing corporation (woningcorporatie) is a nonprofit organization dedicated to building and maintaining housing for rent for people with lower income. The first housing corporations started in the second half of the 19th century as small cooperative associations. The first such association in the world, VAK ("association for the working class") was founded in 1852 in Amsterdam. Between 2.4 and 2.5 million apartments in the Netherlands are rented by the housing corporations, i.e. more than 30% of the total of household dwellings (apartments and houses).
Owner association
A (house) owners' association (Vereniging van Eigenaren, VvE) is by Dutch law established wherever there are separately owned apartments in one building. The members are legally owners of their own apartment but have to cooperate in the association for the maintenance of the building as a whole.
Living cooperation
A living cooperation (wooncoöperatie) is a construct in which residents jointly own an apartment building using a democratically controlled cooperative, and pay rent to their own organisation. They were prohibited after World War II and legalised in 2015.
New Zealand
"Company-share" apartments operate in the New Zealand housing system.
Philippines
In the Philippines, a tenant-owner's association often forms as a means to buy new flats. When the cooperative is set up, it takes the major part of the loan needed to buy a property. These loans will then be paid off during a fixed period of years (typically 20 to 30), and once this is done, the cooperative is dispersed and the flats are transformed into condominiums.
Nordic countries
A tenant-owner's association (Swedish: bostadsrättsförening, Norwegian: borettslag, Danish: andelsboligforening) is a legal term used in the Scandinavian countries (Sweden, Denmark, and Norway) for a type of joint ownership of property in which the whole property is owned by a co-operative association, which in its turn is owned by its members. Each member holds a share in the association that is proportional to the area of his apartment. Members are required to have a tenant-ownership, which represents the apartment, and in most cases live permanently at the address. There are some legal differences between the countries, mainly concerning the conditions of ownership.
In Sweden, 16% of the population lives in apartments in housing cooperatives, while 25% live in rented apartments (more common among young adults and immigrants) and 50% live in private one-family houses (more common among families with children), the remainder living in other forms such as student dormitories or elderly homes.
In Finland, by contrast to the Scandinavian countries, housing cooperatives in the strict sense are extremely rare; instead, Finnish tenant-owned housing properties are generally organized as limited companies (Finnish: asunto-osakeyhtiö) in a system peculiar to Finnish law. The Finnish arrangement is similar to a housing cooperative in that the property is owned by a non-profit corporation and the right to use each unit is tied to ownership of a certain set of shares.
United Kingdom
Housing co-operatives are uncommon in the UK, making up about 0.1% of housing stock.
Most are based in urban areas and consist of affordable shared accommodation where the members look after the property themselves. Waiting lists can be very long due to the rarity of housing co-operatives. In some areas the application procedure is integrated into the council housing application system. The laws differ between England and Scotland. The Confederation of Co-operative Housing provides information on housing cooperatives in the United Kingdom and has published a guide on setting them up. The Shelter website provides information on housing and has information specific to England and Scotland.
The Catalyst Collective provides information about starting co-operatives in the UK and explains the legal structure of a housing coop. Radical Routes offers a guide on how to set up a housing co-operative.
Student housing cooperatives
Factors of raising cost of living for students and quality of accommodation have led to a drive for Student Housing Co-operatives within the UK inspired by the existing North American Student Housing Cooperatives and their work through North American Students of Cooperation. Edinburgh Student Housing Co-operative and Birmingham Student Housing Co-operative opened in 2014 and Sheffield Student Housing Co-operative in 2015. All existing Student Housing Co-operatives are members of Students for Cooperation.
United States
In the United States, housing co-ops are usually categorized as corporations or LLCs and are found in abundance in the area from Madison, Wisconsin, to the New York metropolitan area. There are also a number of cooperative and mutual housing projects still in operation across the US that were the result of the purchase of federal defense housing developments by their tenants or groups of returning war veterans and their families. These developments include seven of the eight middle-class housing projects built by the US government between 1940 and 1942 under the auspices of the Mutual Ownership Defense Housing Division of the Federal Works Agency. There are many regional housing cooperative associations, such as the Midwest Association of Housing Cooperatives, which is based in Michigan and serves the Midwest region, covering Ohio, Michigan, Indiana, Illinois, Wisconsin, Minnesota, and more.
The National Association of Housing Cooperatives (NAHC) represents all cooperatives within the United States who are members of the organization. This organization is a nonprofit, national federation of housing cooperatives, mutual housing associations, other resident-owned or controlled housing, professionals, organizations, and individuals interested in promoting the interests of cooperative housing communities. NAHC is the only national cooperative housing organization, and aims to support and educate existing and new cooperative housing communities as the best and most economical form of homeownership.
NASCO, or North American Students of Cooperation, is an organization founded in 1968 that has helped organized cooperative living for students. With a presence in over 100 towns and cities across North America, NASCO has provided tens of thousands of students with sustainable housing.
New York metropolitan area
Cooperatives have a long history in metropolitan New York – in November 1882, Harper's Magazine describes several cooperative apartment buildings already in existence, with plans to build more – and can be found throughout New York City, Westchester County, which borders the city to the north, and towns in northern New Jersey that are close to Manhattan, including Fort Lee, Edgewater, Ramsey, Passaic and Weehawken. Alku and Alku Toinen, apartment buildings built in 1916 by the Finnish American immigrant community in the Sunset Park neighborhood of Brooklyn, New York City, were the first nonprofit housing cooperatives in New York City.
Apartment buildings and multiple-family housing make up a more significant share of the housing stock in the New York City area than in most other U.S. cities; over 75% of apartment buildings in NYC are co-ops. Reasons suggested to explain why cooperatives are relatively more common than condominiums in the New York City area are:
Inspired by Abraham Kazan, cooperatives appeared at least as far back as the 1920s, while a legal basis for the condominium form of ownership was not available in New York State until 1964. Passage of the Condominium Act then opened a wave of construction of condominium buildings.
The cooperative form can be advantageous as a building mortgage can be carried by the cooperative corporation, leaving less financing to be obtained by each co-op owner. Under condominium ownership only the separate condo owners provide financing. Particularly when interest rates are high, a conversion sponsor may find unit buyers more easily under the cooperative arrangement as buyers will have less financing to arrange on their own; the apparent purchase price of a unit in a cooperative building holding an underlying mortgage is lower than a condo purchase. Cooperative unit buyers may not accurately weigh their share of the building's mortgage (a hypothetical numeric illustration follows this list).
Also, later in a building's life after conversion, major new investments required to repair or replace building systems can be raised by a new central mortgage in a cooperative, while in a condominium funds could only be raised by onerous assessments being required of each individual unit owner. However, New York's condominium law was amended in 1997 to allow condominium associations to borrow money.
The 1974 creation and then subsequent influence on policy by the Urban Homesteading Assistance Board, a housing advocacy group, which enabled the conversion of over 1,600 foreclosed, city-held rentals into limited-equity, resident-controlled co-ops.
A co-op building's board can exercise its own business discretion to impose restrictions on shareholders, and reject prospective purchasers without explanation, as long as the board does not violate federal and state housing or civil rights laws.
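A hypothetical illustration of the underlying-mortgage point made above (all figures are invented for the example and are not drawn from any actual building):

```python
# Hypothetical comparison: a condo unit versus an economically similar co-op unit
# whose corporation carries an underlying building mortgage.
unit_value = 300_000                   # assumed overall value of the unit
underlying_mortgage_share = 100_000    # assumed per-unit share of the building loan

condo_asking_price = unit_value        # condo buyer finances the full value personally
coop_share_price = unit_value - underlying_mortgage_share   # co-op share looks cheaper

print(f"condo asking price: ${condo_asking_price:,}")
print(f"co-op share price:  ${coop_share_price:,}")
# The co-op share price is lower, but the buyer also services the $100,000
# underlying-mortgage share through the monthly maintenance fee, so the total
# economic cost of the two purchases is roughly the same once that is counted.
```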
Most of the housing cooperatives in the greater New York area were converted to that status during the 1980s; generally, they were large buildings built between the 1920s and 1950s that a single landlord or corporation owned and rented out that became unprofitable as rental properties. To encourage individual ownership of units, the initial buyers of units (buying from the owner of the entire building) did not have to be approved by a board. These units are known as sponsor units. Also, the rental tenants living in the building at the time of the conversion were usually given an option to buy at a discount. If the tenants were rent-controlled, the law usually protects them by allowing them to stay as renters and the unit may not be occupied by a purchaser until said tenant dies or moves out. Many of these buildings, especially in Manhattan, are actually quite luxurious and exclusive; many celebrities live in them and some famous people are even rejected by co-op boards. In the 1990s and 2000s some rental buildings in the Chicago, Washington, D.C., and Miami-Fort Lauderdale-West Palm Beach areas went through a similar conversion process, though not to the degree of New York.
Many of the cooperatives originally built as co-ops were sponsored by trade unions, such as the Amalgamated Clothing Workers of America. One of the largest projects was Cooperative Village in Lower East Side of Manhattan. The United Housing Foundation was set up in 1951 and built Co-op City in The Bronx, designed by architect Herman Jessor. One of the first subsidized, fixed-value cooperatives was Morningside Gardens in Manhattan's Morningside Heights.
Another dynamic also contributed to the large number of cooperatives established in the 1980s and 1990s in New York City – in this case by low- and moderate-income tenant groups. In the 1970s, many New York City private landlords were struggling to maintain their aging properties in the face of high interest rates, redlining, white flight and rising fuel costs. The period also saw some landlord-induced arson to obtain insurance proceeds and widespread non-payment of real estate taxes – over 20% of multi-family residential properties were in arrears in the mid-1970s. In 1977, the city passed Local Law #45, which allowed the city to begin foreclosure proceedings after just one year of non-payment of taxes, not three, resulting in the takeover of thousands of buildings, many of them occupied, by the city of New York through a legal action known as an in rem foreclosure. In September 1978, the city's housing agency, the New York City Department of Housing Preservation and Development (HPD), created a series of new housing programs designed to give building residents and community groups control and eventual ownership of in rem buildings.
The Urban Homesteading Assistance Board (UHAB), established in 1974, began to assist residents of these buildings to manage, rehabilitate and acquire their buildings, and form limited-equity housing co-operatives. Working with the city's housing agency, its existing loan programs and the power to dispose of abandoned property to non-profit organizations, as well as the state laws governing the establishment of co-operatives, UHAB was able to provide low-income people with the tools – seed money, legal advice, architectural plans, bookkeeping training – to build and run limited-equity housing co-operatives. Through a long-standing contract with the city to provide training and technical assistance to residents of buildings in the Tenant Interim Lease (TIL) Program, UHAB has worked with more than 1,600 coops, preserving over 30,000 units of affordable housing.
Some cooperatives in New York City do not own the land upon which their building is situated. These 'land-lease' buildings often have significant drawbacks for cooperative owners. However, there have been cases where shareholders of a building have bought the surrounding land, such as 167 East 61st Street (formerly known as Trump Plaza), where residents gathered $183 million to buy the surrounding land.
Student housing cooperatives
Student cooperatives provide housing and dining services to those who attend specific educational institutions. Some notable groups include Berkeley Student Cooperative, Santa Barbara Housing Cooperative and the Oberlin Student Cooperative Association.
See also
Cohousing
Condop
Subsidized housing
Worker cooperative
References
External links
Social programs
Human habitats
Private aid programs
Living arrangements | Housing cooperative | [
"Biology"
] | 5,461 | [
"Behavior",
"Altruism",
"Private aid programs"
] |
570,498 | https://en.wikipedia.org/wiki/23%20enigma | The 23 enigma is a belief in the significance of the number 23. The concept of the 23 enigma has been popularized by various books, movies, and conspiracy theories, which suggest that the number 23 appears with unusual frequency in various contexts and may be a symbol of some larger, hidden significance. A topic related to the 23 enigma is eikositriophobia, which is the fear of the number 23.
Origins
Robert Anton Wilson cites William S. Burroughs as the first person to believe in the 23 enigma. Wilson, in a 1977 article in Fortean Times, related the following anecdote:
In literature
The 23 enigma can be seen in:
Robert Anton Wilson and Robert Shea's 1975 book The Illuminatus! Trilogy (therein called the "23/17 Phenomenon")
Wilson's 1977 book Cosmic Trigger I: The Final Secret of the Illuminati (therein called "the Law of Fives" or "the 23 Enigma")
Arthur Koestler's contribution to The Challenge of Chance: A Mass Experiment in Telepathy and Its Unexpected Outcome (1973)
Principia Discordia
The text titled Principia Discordia claims that "All things happen in fives, or are divisible by or are multiples of five, or are somehow directly or indirectly appropriate to 5"—this is referred to as the Law of Fives. The 23 enigma is regarded as a corollary of the Law of Fives because 2 + 3 = 5.
In these works, 23 is considered lucky, unlucky, sinister, strange, sacred to the goddess Eris, or sacred to the unholy gods of the Cthulhu Mythos.
The 23 enigma can be viewed as an example of apophenia, selection bias and confirmation bias. In interviews, Wilson acknowledged the self-fulfilling nature of the 23 enigma, implying that the real value of the Law of Fives and the 23 enigma is in their demonstration of the mind's ability to perceive "truth" in nearly anything.
In the Illuminatus! Trilogy, Wilson expresses the same view, saying that one can find numerological significance in anything, provided that one has "sufficient cleverness".
In popular culture
Music and art duo The Justified Ancients of Mu Mu (later known as The KLF and the K Foundation) named themselves after the fictional conspiratorial group "The Justified Ancients of Mummu" from Illuminatus!; the number 23 is a recurring theme in the duo's work. Perhaps most infamously, as the K Foundation they staged the performance art piece K Foundation Burn a Million Quid on 23 August 1994 and subsequently agreed not to publicly discuss the burning for a period of 23 years. Twenty-three years to the day after the burning, they returned to launch a novel and discuss why they had burnt the money.
The 2007 film The Number 23, starring Jim Carrey, is the story of a man who becomes obsessed with the number 23 while reading a book of the same title that seems to be about his life.
Industrial music group Throbbing Gristle recounted in great detail the meeting of Burroughs and Captain Clark and the significance of the number 23 in the ballad "The Old Man Smiled". Their 1980 album Heathen Earth, where this song appears, also features the number 23 on the cover.
See also
Benford's law
Ideas of reference and delusions of reference
Texas sharpshooter fallacy
References
External links
Numerology
Robert Anton Wilson
Superstitions about numbers
William S. Burroughs | 23 enigma | [
"Mathematics"
] | 724 | [
"Numerology",
"Mathematical objects",
"Numbers"
] |
570,602 | https://en.wikipedia.org/wiki/Max%20Dehn | Max Wilhelm Dehn (November 13, 1878 – June 27, 1952) was a German mathematician most famous for his work in geometry, topology and geometric group theory. Dehn's early life and career took place in Germany. However, he was forced to retire in 1935 and eventually fled Germany in 1939 and emigrated to the United States.
Dehn was a student of David Hilbert, and in his habilitation in 1900 Dehn resolved Hilbert's third problem, making him the first to resolve one of Hilbert's well-known 23 problems. Dehn's doctoral students include Ott-Heinrich Keller, Ruth Moufang, and Wilhelm Magnus; he also mentored mathematician Peter Nemenyi and the artists Dorothea Rockburne and Ruth Asawa.
Biography
Dehn was born to a family of Jewish origin in Hamburg, Imperial Germany.
He studied the foundations of geometry with Hilbert at Göttingen in 1899, and obtained a proof of the Jordan curve theorem for polygons. In 1900 he wrote his dissertation on the role of the Legendre angle sum theorem in axiomatic geometry, constructing the Dehn planes as counterexamples to the theorem in geometries without the Archimedean axiom. From 1900 to 1911 he was an employee and researcher at the University of Münster. In his habilitation at the University of Münster in 1900 he resolved Hilbert's third problem, by introducing what was afterwards called the Dehn invariant. This was the first resolution of one of the Hilbert Problems.
Dehn's interests later turned to topology and combinatorial group theory. In 1907 he wrote with Poul Heegaard the first book on the foundations of combinatorial topology, then known as analysis situs. Also in 1907, he described the construction of a new homology sphere. In 1908 he believed that he had found a proof of the Poincaré conjecture, but Tietze found an error.
In 1910 Dehn published a paper on three-dimensional topology in which he introduced Dehn surgery and used it to construct homology spheres. He also stated Dehn's lemma, but an error was found in his proof by Hellmuth Kneser in 1929. The result was proved in 1957 by Christos Papakyriakopoulos. The word problem for groups, also called the Dehn problem, was posed by him in 1911.
Dehn married Antonie Landau on August 23, 1912. Also in 1912, Dehn invented what is now known as Dehn's algorithm and used it in his work on the word and conjugacy problems for groups. The notion of a Dehn function in geometric group theory, which estimates the area of a relation in a finitely presented group in terms of the length of that relation, is also named after him. In 1914 he proved that the left and right trefoil knots are not equivalent. In the early 1920s Dehn introduced the result that would come to be known as the Dehn-Nielsen theorem; its proof would be published in 1927 by Jakob Nielsen.
In 1922 Dehn succeeded Ludwig Bieberbach at Frankfurt, where he stayed until he was forced to retire in 1935. During this time he taught a seminar on historical works of mathematics. The seminar attracted prolific mathematicians Carl Ludwig Siegel and André Weil, and Weil considered Dehn's seminar to be his most important contribution to mathematics. As an example of its influence, the seminar has been credited for inspiring Siegel's discovery of the Riemann–Siegel formula among Riemann's unpublished notes.
Dehn stayed in Germany until January 1939, when he fled to Copenhagen, and then to Trondheim, Norway, where he took a position at the Norwegian Institute of Technology. In October 1940 he left Norway for America by way of Siberia and Japan (the Atlantic crossing was considered too dangerous).
In America, Dehn obtained a position at Idaho Southern University (now Idaho State University). In 1942 he took a job at the Illinois Institute of Technology, and in 1943 he moved to St. John's College in Annapolis, Maryland. Finally in 1945, he moved to the experimental arts college, Black Mountain College, where he was the only mathematician.
He died in Black Mountain, North Carolina in 1952.
Black Mountain College
In March 1944, Dehn was invited to give two talks at Black Mountain College on the philosophy and history of mathematics. He noted in a letter that a lecture on an advanced mathematical topic did not seem appropriate for the audience, and instead offered the lectures "Common roots of mathematics and ornamentics" and "Some moments in the development of mathematical ideas." Black Mountain College faculty contacted him shortly afterwards about a full-time position. After negotiating his monthly salary up from $25 to $40, Dehn and his wife moved into housing provided by the school, and he began teaching in January 1945.
While at Black Mountain College, Dehn taught courses in Mathematics, Philosophy, Greek, and Italian. In his class "Geometry for Artists," Dehn introduced students to geometric concepts such as points, lines, planes and solids; cones sectioned into circles, ellipses, parabolas, and hyperbolas; spheres and regular polyhedrons. His classes had an emphasis on the way shapes relate to each other, a concept that can be useful in artistic mediums.
He enjoyed the forested mountains found in Black Mountain, and would often hold class in the woods, giving lectures during hikes. His lectures frequently drifted off topic on tangents about philosophy, the arts, and nature and their connection to mathematics. He and his wife took part in community meetings and often ate in the dining room. They also regularly had long breakfasts with Buckminster Fuller and his wife.
In the summer of 1952 Dehn was made Professor Emeritus, which allowed him to remain on campus and act as an advisor. He died of an embolism shortly after witnessing the removal of several dogwood trees from the campus. He is buried in the woods on the campus.
See also
A wide variety of concepts have been named for Dehn. Among them:
Dehn's rigidity theorem
Dehn invariant
Dehn's algorithm
Dehn's lemma
Dehn plane
Dehn surgery
Dehn twist
Dehn–Sommerville equations
Other topics of interest
Chiral knot
Conjugacy problem
Freiheitssatz
Group isomorphism problem
Lotschnittaxiom
Mapping class group of a surface
Non-Archimedean ordered field
Scissors congruence
Two ears theorem
Undecidable problem
References
Further reading
Max Dehn, Papers on group theory and topology. Translated from the German and with introductions and an appendix by John Stillwell. With an appendix by Otto Schreier. Springer-Verlag, New York, 1987. viii+396 pp.
External links
Dehn's archive – at the University of Texas at Austin
1878 births
1952 deaths
19th-century American mathematicians
20th-century American mathematicians
19th-century German mathematicians
20th-century German mathematicians
Scientists from Hamburg
Jewish emigrants from Nazi Germany to the United States
Group theorists
Topologists
Academic staff of the University of Münster
Academic staff of Goethe University Frankfurt
Illinois Institute of Technology faculty
Idaho State University faculty
Black Mountain College faculty
Mathematicians from the German Empire | Max Dehn | [
"Mathematics"
] | 1,469 | [
"Topologists",
"Topology"
] |
570,662 | https://en.wikipedia.org/wiki/Wireless%20power%20transfer | Wireless power transfer (WPT; also wireless energy transmission or WET) is the transmission of electrical energy without wires as a physical link. In a wireless power transmission system, an electrically powered transmitter device generates a time-varying electromagnetic field that transmits power across space to a receiver device; the receiver device extracts power from the field and supplies it to an electrical load. The technology of wireless power transmission can eliminate the use of the wires and batteries, thereby increasing the mobility, convenience, and safety of an electronic device for all users. Wireless power transfer is useful to power electrical devices where interconnecting wires are inconvenient, hazardous, or are not possible.
Wireless power techniques mainly fall into two categories: Near and far field. In near field or non-radiative techniques, power is transferred over short distances by magnetic fields using inductive coupling between coils of wire, or by electric fields using capacitive coupling between metal electrodes. Inductive coupling is the most widely used wireless technology; its applications include charging handheld devices like phones and electric toothbrushes, RFID tags, induction cooking, and wirelessly charging or continuous wireless power transfer in implantable medical devices like artificial cardiac pacemakers, or electric vehicles. In far-field or radiative techniques, also called power beaming, power is transferred by beams of electromagnetic radiation, like microwaves or laser beams. These techniques can transport energy longer distances but must be aimed at the receiver. Proposed applications for this type include solar power satellites and wireless powered drone aircraft.
Wireless power transfer is a generic term for a number of different technologies for transmitting energy by means of electromagnetic fields. The technologies differ in the distance over which they can transfer power efficiently, whether the transmitter must be aimed (directed) at the receiver, and in the type of electromagnetic energy they use: time varying electric fields, magnetic fields, radio waves, microwaves, infrared or visible light waves.
In general a wireless power system consists of a "transmitter" device connected to a source of power such as a mains power line, which converts the power to a time-varying electromagnetic field, and one or more "receiver" devices which receive the power and convert it back to DC or AC electric current which is used by an electrical load. At the transmitter the input power is converted to an oscillating electromagnetic field by some type of "antenna" device. The word "antenna" is used loosely here; it may be a coil of wire which generates a magnetic field, a metal plate which generates an electric field, an antenna which radiates radio waves, or a laser which generates light. A similar antenna or coupling device at the receiver converts the oscillating fields to an electric current. An important parameter that determines the type of waves is the frequency, which determines the wavelength.
Wireless power uses the same fields and waves as wireless communication devices like radio, another familiar technology that involves electrical energy transmitted without wires by electromagnetic fields, used in cellphones, radio and television broadcasting, and WiFi. In radio communication the goal is the transmission of information, so the amount of power reaching the receiver is not so important, as long as it is sufficient that the information can be received intelligibly. In wireless communication technologies only tiny amounts of power reach the receiver. In contrast, with wireless power transfer the amount of energy received is the important thing, so the efficiency (fraction of transmitted energy that is received) is the more significant parameter. For this reason, wireless power technologies are likely to be more limited by distance than wireless communication technologies.
Wireless power transfer may be used to power up wireless information transmitters or receivers. This type of communication is known as wireless powered communication (WPC). When the harvested power is used to supply the power of wireless information transmitters, the network is known as Simultaneous Wireless Information and Power Transfer (SWIPT); whereas when it is used to supply the power of wireless information receivers, it is known as a Wireless Powered Communication Network (WPCN).
An important issue associated with all wireless power systems is limiting the exposure of people and other living beings to potentially injurious electromagnetic fields.
History
19th century developments and dead ends
The 19th century saw many developments of theories, and counter-theories on how electrical energy might be transmitted. In 1826, André-Marie Ampère discovered a connection between current and magnets. Michael Faraday described in 1831 with his law of induction the electromotive force driving a current in a conductor loop by a time-varying magnetic flux. Transmission of electrical energy without wires was observed by many inventors and experimenters, but lack of a coherent theory attributed these phenomena vaguely to electromagnetic induction. A concise explanation of these phenomena would come from the 1860s Maxwell's equations by James Clerk Maxwell, establishing a theory that unified electricity and magnetism to electromagnetism, predicting the existence of electromagnetic waves as the "wireless" carrier of electromagnetic energy. Around 1884 John Henry Poynting defined the Poynting vector and gave Poynting's theorem, which describe the flow of power across an area within electromagnetic radiation and allow for a correct analysis of wireless power transfer systems. This was followed on by Heinrich Rudolf Hertz' 1888 validation of the theory, which included the evidence for radio waves.
During the same period two schemes of wireless signaling were put forward by William Henry Ward (1871) and Mahlon Loomis (1872) that were based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. Both inventors' patents noted this layer connected with a return path using "Earth currents"' would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries, and could also be used for lighting, heat, and motive power. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto electric telephone that used ground conduction to transmit over a distance of a quarter of a mile.
Nikola Tesla
After 1890, inventor Nikola Tesla experimented with transmitting power by inductive and capacitive coupling using spark-excited radio frequency resonant transformers, now called Tesla coils, which generated high AC voltages. Early on he attempted to develop a wireless lighting system based on near-field inductive and capacitive coupling and conducted a series of public demonstrations where he lit Geissler tubes and even incandescent light bulbs from across a stage. He found he could increase the distance at which he could light a lamp by using a receiving LC circuit tuned to resonance with the transmitter's LC circuit, that is, by using resonant inductive coupling. Tesla failed to make a commercial product out of his findings but his resonant inductive coupling method is now widely used in electronics and is currently being applied to short-range wireless power systems.
Tesla went on to develop a wireless power distribution system that he hoped would be capable of transmitting power over long distances directly into homes and factories. Early on he seemed to borrow from the ideas of Mahlon Loomis, proposing a system composed of balloons to suspend transmitting and receiving electrodes high in the air, where he thought the lower pressure would allow him to send high voltages (millions of volts) over long distances. To further study the conductive nature of low pressure air he set up a test facility at high altitude in Colorado Springs during 1899. Experiments he conducted there with a large coil operating in the megavolts range, as well as observations he made of the electronic noise of lightning strikes, led him to conclude incorrectly that he could use the entire globe of the Earth to conduct electrical energy. The theory included driving alternating current pulses into the Earth at its resonant frequency from a grounded Tesla coil working against an elevated capacitance to make the potential of the Earth oscillate. Tesla thought this would allow alternating current to be received with a similar capacitive antenna tuned to resonance with it at any point on Earth with very little power loss. His observations also led him to believe a high voltage used in a coil at an elevation of a few hundred feet would "break the air stratum down", eliminating the need for miles of cable hanging on balloons to create his atmospheric return circuit. Tesla would go on the next year to propose a "World Wireless System" that was to broadcast both information and power worldwide. In 1901, at Shoreham, New York, he attempted to construct a large high-voltage wireless power station, now called Wardenclyffe Tower, but by 1904 investment dried up and the facility was never completed.
Post-war developments
Before World War II, little progress was made in wireless power transmission. Radio was developed for communication uses, but could not be used for power transmission since the relatively low-frequency radio waves spread out in all directions and little energy reached the receiver. In radio communication, at the receiver, an amplifier intensifies a weak signal using energy from another source. For power transmission, efficient transmission required transmitters that could generate higher-frequency microwaves, which can be focused in narrow beams towards a receiver.
The development of microwave technology during World War II, such as the klystron and magnetron tubes and parabolic antennas, made radiative (far-field) methods practical for the first time, and the first long-distance wireless power transmission was achieved in the 1960s by William C. Brown. In 1964, Brown invented the rectenna, which could efficiently convert microwaves to DC power, and demonstrated it the same year with the first wireless-powered aircraft, a model helicopter powered by microwaves beamed from the ground.
Field regions
Electric and magnetic fields are created by charged particles in matter such as electrons. A stationary charge creates an electrostatic field in the space around it. A steady current of charges (direct current, DC) creates a static magnetic field around it. These fields contain energy, but cannot carry power because they are static. However time-varying fields can carry power. Accelerating electric charges, such as are found in an alternating current (AC) of electrons in a wire, create time-varying electric and magnetic fields in the space around them. These fields can exert oscillating forces on the electrons in a receiving "antenna", causing them to move back and forth. These represent alternating current which can be used to power a load.
The oscillating electric and magnetic fields surrounding moving electric charges in an antenna device can be divided into two regions, depending on distance Drange from the antenna.
The boundary between the regions is somewhat vaguely defined. The fields have different characteristics in these regions, and different technologies are used for transferring power:
Near-field or nonradiative region: This means the area within about 1 wavelength (λ) of the antenna. In this region the oscillating electric and magnetic fields are separate and power can be transferred via electric fields by capacitive coupling (electrostatic induction) between metal electrodes, or via magnetic fields by inductive coupling (electromagnetic induction) between coils of wire. These fields are not radiative, meaning the energy stays within a short distance of the transmitter. If there is no receiving device or absorbing material within their limited range to "couple" to, no power leaves the transmitter. The range of these fields is short, and depends on the size and shape of the "antenna" devices, which are usually coils of wire. The fields, and thus the power transmitted, fall off steeply with distance (as an inverse power of the distance; see below), so if the distance between the two "antennas" Drange is much larger than the diameter of the "antennas" Dant very little power will be received. Therefore, these techniques cannot be used for long range power transmission. Resonance, such as resonant inductive coupling, can increase the coupling between the antennas greatly, allowing efficient transmission at somewhat greater distances, although the fields still fall off steeply. Therefore, the range of near-field devices is conventionally divided into two categories:
Short range: up to about one antenna diameter: Drange ≤ Dant. This is the range over which ordinary nonresonant capacitive or inductive coupling can transfer practical amounts of power.
Mid-range: up to 10 times the antenna diameter: Drange ≤ 10 Dant. This is the range over which resonant capacitive or inductive coupling can transfer practical amounts of power.
Far-field or radiative region: Beyond about 1 wavelength (λ) of the antenna, the electric and magnetic fields are perpendicular to each other and propagate as an electromagnetic wave; examples are radio waves, microwaves, or light waves. This part of the energy is radiative, meaning it leaves the antenna whether or not there is a receiver to absorb it. The portion of energy which does not strike the receiving antenna is dissipated and lost to the system. The amount of power emitted as electromagnetic waves by an antenna depends on the ratio of the antenna's size Dant to the wavelength of the waves λ, which is determined by the frequency: λ = c/f. At low frequencies f where the antenna is much smaller than the size of the waves, Dant << λ, very little power is radiated. Therefore near-field devices, which use lower frequencies, radiate almost none of their energy as electromagnetic radiation. Antennas about the same size as the wavelength Dant ≈ λ such as monopole or dipole antennas, radiate power efficiently, but the electromagnetic waves are radiated in all directions (omnidirectionally), so if the receiving antenna is far away, only a small amount of the radiation will hit it. Therefore, these can be used for short range, inefficient power transmission but not for long range transmission. However, unlike fields, electromagnetic radiation can be focused by reflection or refraction into beams. By using a high-gain antenna or optical system which concentrates the radiation into a narrow beam aimed at the receiver, it can be used for long range power transmission. From the Rayleigh criterion, to produce the narrow beams necessary to focus a significant amount of the energy on a distant receiver, an antenna must be much larger than the wavelength of the waves used: Dant >> λ = c/f. Practical beam power devices require wavelengths in the centimeter region or lower, corresponding to frequencies above 1 GHz, in the microwave range or above.
Near-field (nonradiative) techniques
At large relative distance, the near-field components of electric and magnetic fields are approximately quasi-static oscillating dipole fields. These fields decrease with the cube of distance, as (Drange / Dant)^−3. Since power is proportional to the square of the field strength, the power transferred decreases as (Drange / Dant)^−6, or 60 dB per decade. In other words, when the antennas are far apart, increasing the distance between them tenfold causes the power received to decrease by a factor of 10^6 = 1,000,000. As a result, inductive and capacitive coupling can only be used for short-range power transfer, within a few times the diameter of the antenna device Dant. Unlike in a radiative system, where the maximum radiation occurs when the dipole antennas are oriented transverse to the direction of propagation, with dipole fields the maximum coupling occurs when the dipoles are oriented longitudinally.
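As a rough numerical illustration of this falloff, the short Python sketch below computes the relative received power and the corresponding loss in decibels for a non-resonant near-field link; the coil diameter and separations are illustrative assumptions rather than values for any particular system.

    import math

    def nearfield_relative_power(d_ant_m, d_range_m):
        # Received power for a quasi-static (near-field) dipole link, relative to
        # the power coupled at a separation equal to the coil diameter.
        # Power is taken to scale as (d_range / d_ant)**-6, per the text above.
        return (d_range_m / d_ant_m) ** -6

    d_ant = 0.10                       # 10 cm coil diameter (illustrative assumption)
    for d_range in (0.10, 0.30, 1.0):  # separations in meters
        p_rel = nearfield_relative_power(d_ant, d_range)
        print(f"{d_range:4.2f} m: relative power {p_rel:.2e} ({10 * math.log10(p_rel):.0f} dB)")
    # A tenfold increase in separation (0.10 m -> 1.0 m) costs 60 dB, a factor of 10**6.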
Inductive coupling
In inductive coupling (electromagnetic induction or inductive power transfer, IPT), power is transferred between coils of wire by a magnetic field. The transmitter and receiver coils together form a transformer. An alternating current (AC) through the transmitter coil (L1) creates an oscillating magnetic field (B) by Ampere's law. The magnetic field passes through the receiving coil (L2), where it induces an alternating EMF (voltage) by Faraday's law of induction, which creates an alternating current in the receiver. The induced alternating current may either drive the load directly, or be rectified to direct current (DC) by a rectifier in the receiver, which drives the load. A few systems, such as electric toothbrush charging stands, work at 50/60 Hz so AC mains current is applied directly to the transmitter coil, but in most systems an electronic oscillator generates a higher frequency AC current which drives the coil, because transmission efficiency improves with frequency.
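For a sense of the magnitudes involved, the sketch below estimates the open-circuit voltage induced in the receiver coil for a sinusoidal drive using the standard transformer relation V2 = ω·M·I1, where M is the mutual inductance between the coils; all component values are illustrative assumptions.

    import math

    def induced_voltage_rms(freq_hz, mutual_inductance_h, i1_rms_a):
        # Open-circuit RMS voltage induced in the receiver coil for a sinusoidal
        # primary current (Faraday's law in phasor form): V2 = omega * M * I1.
        return 2 * math.pi * freq_hz * mutual_inductance_h * i1_rms_a

    # Illustrative values: 100 kHz drive, 1 uH mutual inductance, 1 A RMS primary current.
    v2 = induced_voltage_rms(100e3, 1e-6, 1.0)
    print(f"Induced open-circuit voltage: {v2:.2f} V RMS")   # about 0.63 V RMS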
Inductive coupling is the oldest and most widely used wireless power technology, and virtually the only one so far which is used in commercial products. It is used in inductive charging stands for cordless appliances used in wet environments such as electric toothbrushes and shavers, to reduce the risk of electric shock. Another application area is "transcutaneous" recharging of biomedical prosthetic devices implanted in the human body, such as cardiac pacemakers, to avoid having wires passing through the skin. It is also used to charge electric vehicles such as cars and to either charge or power transit vehicles like buses and trains.
However the fastest growing use is wireless charging pads to recharge mobile and handheld wireless devices such as laptop and tablet computers, computer mouse, cellphones, digital media players, and video game controllers. In the United States, the Federal Communications Commission (FCC) provided its first certification for a wireless transmission charging system in December 2017.
The power transferred increases with frequency and the mutual inductance between the coils, which depends on their geometry and the distance between them. A widely used figure of merit is the coupling coefficient k. This dimensionless parameter is equal to the fraction of magnetic flux through the transmitter coil L1 that passes through the receiver coil L2 when L2 is open-circuited. If the two coils are on the same axis and close together, so that all the magnetic flux from L1 passes through L2, then k = 1 and the link efficiency approaches 100%. The greater the separation between the coils, the more of the magnetic field from the first coil misses the second, and the lower k and the link efficiency are, approaching zero at large separations. The link efficiency and power transferred are roughly proportional to k^2. In order to achieve high efficiency, the coils must be very close together, a fraction of the coil diameter Dant, usually within centimeters, with the coils' axes aligned. Wide, flat coil shapes are usually used, to increase coupling. Ferrite "flux confinement" cores can confine the magnetic fields, improving coupling and reducing interference to nearby electronics, but they are heavy and bulky so small wireless devices often use air-core coils.
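These relations can be put into a few lines of code. The sketch below computes the coupling coefficient from the mutual inductance as k = M / sqrt(L1·L2) and treats the link efficiency only as "roughly proportional to k^2", as stated above, not as an exact formula; the inductance values are illustrative assumptions.

    import math

    def coupling_coefficient(m_h, l1_h, l2_h):
        # k = M / sqrt(L1 * L2); dimensionless, between 0 (no linkage) and 1 (all flux shared).
        return m_h / math.sqrt(l1_h * l2_h)

    # Illustrative values: two 10 uH coils sharing 2 uH of mutual inductance.
    k = coupling_coefficient(2e-6, 10e-6, 10e-6)
    print(f"k = {k:.2f}, k^2 = {k**2:.3f}")   # k = 0.20, so the link scales like 0.04 -> poor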
Ordinary inductive coupling can only achieve high efficiency when the coils are very close together, usually adjacent. In most modern inductive systems resonant inductive coupling is used, in which the efficiency is increased by using resonant circuits. This can achieve high efficiencies at greater distances than nonresonant inductive coupling.
Resonant inductive coupling
Resonant inductive coupling (electrodynamic coupling, strongly coupled magnetic resonance) is a form of inductive coupling in which power is transferred by magnetic fields (B, green) between two resonant circuits (tuned circuits), one in the transmitter and one in the receiver. Each resonant circuit consists of a coil of wire connected to a capacitor, or a self-resonant coil or other resonator with internal capacitance. The two are tuned to resonate at the same resonant frequency. The resonance between the coils can greatly increase coupling and power transfer, analogously to the way a vibrating tuning fork can induce sympathetic vibration in a distant fork tuned to the same pitch.
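The tuning condition itself is easy to compute. The sketch below evaluates the resonant frequency f0 = 1/(2π·sqrt(L·C)) and the unloaded quality factor Q = 2π·f0·L/R of a tuned coil; the component values are illustrative assumptions chosen to land near the 10 MHz range mentioned below.

    import math

    def resonant_frequency(l_h, c_f):
        # Resonant frequency of an LC tuned circuit: f0 = 1 / (2*pi*sqrt(L*C)).
        return 1.0 / (2 * math.pi * math.sqrt(l_h * c_f))

    def quality_factor(f0_hz, l_h, r_ohm):
        # Unloaded Q of a series-tuned coil: Q = 2*pi*f0*L / R.
        return 2 * math.pi * f0_hz * l_h / r_ohm

    L, C, R = 10e-6, 25e-12, 5.0   # illustrative coil: 10 uH, 25 pF, 5 ohm loss resistance
    f0 = resonant_frequency(L, C)
    print(f"f0 = {f0 / 1e6:.1f} MHz, Q = {quality_factor(f0, L, R):.0f}")   # ~10.1 MHz, Q ~ 127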
Nikola Tesla first discovered resonant coupling during his pioneering experiments in wireless power transfer around the turn of the 20th century, but the possibilities of using resonant coupling to increase transmission range have only recently been explored. In 2007 a team led by Marin Soljačić at MIT used two coupled tuned circuits, each made of a 25 cm self-resonant coil of wire at 10 MHz, to achieve the transmission of 60 W of power over a distance of 2 m (8 times the coil diameter) at around 40% efficiency.
The concept behind resonant inductive coupling systems is that high Q factor resonators exchange energy at a much higher rate than they lose energy due to internal damping. Therefore, by using resonance, the same amount of power can be transferred at greater distances, using the much weaker magnetic fields out in the peripheral regions ("tails") of the near fields. Resonant inductive coupling can achieve high efficiency at ranges of 4 to 10 times the coil diameter (Dant). This is called "mid-range" transfer, in contrast to the "short range" of nonresonant inductive transfer, which can achieve similar efficiencies only when the coils are adjacent. Another advantage is that resonant circuits interact with each other so much more strongly than they do with nonresonant objects that power losses due to absorption in stray nearby objects are negligible.
A drawback of resonant coupling is that at close range, when the two resonant circuits are tightly coupled, the resonant frequency of the system is no longer constant but "splits" into two resonant peaks, so the maximum power transfer no longer occurs at the original resonant frequency and the oscillator frequency must be tuned to the new resonance peak.
Resonant technology is currently being widely incorporated in modern inductive wireless power systems. One of the possibilities envisioned for this technology is area wireless power coverage. A coil in the wall or ceiling of a room might be able to wirelessly power lights and mobile devices anywhere in the room, with reasonable efficiency. An environmental and economic benefit of wirelessly powering small devices such as clocks, radios, music players and remote controls is that it could drastically reduce the 6 billion batteries disposed of each year, a large source of toxic waste and groundwater contamination.
A study for the Swedish military found that 85 kHz systems for dynamic wireless power transfer for vehicles can cause electromagnetic interference at a radius of up to 300 kilometers.
Capacitive coupling
Capacitive coupling, also referred to as electric coupling, makes use of electric fields for the transmission of power between two electrodes (an anode and a cathode) forming a capacitance. In capacitive coupling (electrostatic induction), the conjugate of inductive coupling, energy is transmitted by electric fields between electrodes such as metal plates. The transmitter and receiver electrodes form a capacitor, with the intervening space as the dielectric. An alternating voltage generated by the transmitter is applied to the transmitting plate, and the oscillating electric field induces an alternating potential on the receiver plate by electrostatic induction, which causes an alternating current to flow in the load circuit. The amount of power transferred increases with the frequency, the square of the voltage, and the capacitance between the plates, which is proportional to the area of the smaller plate and (for short distances) inversely proportional to the separation.
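As an order-of-magnitude illustration of why capacitive links need high voltages, the sketch below computes the parallel-plate capacitance C = ε0·A/d and the displacement current I = 2π·f·C·V that the oscillating field can drive; the plate size, gap, frequency and voltage are illustrative assumptions, and the result is a rough scaling estimate rather than a full circuit analysis.

    import math

    EPS0 = 8.854e-12   # permittivity of free space, F/m

    def plate_capacitance(area_m2, gap_m):
        # Parallel-plate capacitance C = eps0 * A / d (air dielectric, fringing ignored).
        return EPS0 * area_m2 / gap_m

    # Illustrative electrodes: 10 cm x 10 cm plates, 1 cm apart, driven at 1 MHz and 1 kV.
    c = plate_capacitance(0.10 * 0.10, 0.01)
    f, v = 1e6, 1e3
    i_coupling = 2 * math.pi * f * c * v   # displacement current through the capacitance
    print(f"C = {c * 1e12:.1f} pF, coupling current ~ {i_coupling * 1e3:.0f} mA")   # ~8.9 pF, ~56 mA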
Capacitive coupling has only been used practically in a few low power applications, because the very high voltages on the electrodes required to transmit significant power can be hazardous, and can cause unpleasant side effects such as noxious ozone production. In addition, in contrast to magnetic fields, electric fields interact strongly with most materials, including the human body, due to dielectric polarization. Intervening materials between or near the electrodes can absorb the energy, in the case of humans possibly causing excessive electromagnetic field exposure. However capacitive coupling has a few advantages over inductive coupling. The field is largely confined between the capacitor plates, reducing interference, which in inductive coupling requires heavy ferrite "flux confinement" cores. Also, alignment requirements between the transmitter and receiver are less critical. Capacitive coupling has recently been applied to charging battery powered portable devices as well as charging or continuous wireless power transfer in biomedical implants, and is being considered as a means of transferring power between substrate layers in integrated circuits.
Two types of circuit have been used:
Transverse (bipolar) design: In this type of circuit, there are two transmitter plates and two receiver plates. Each transmitter plate is coupled to a receiver plate. The transmitter oscillator drives the transmitter plates in opposite phase (180° phase difference) by a high alternating voltage, and the load is connected between the two receiver plates. The alternating electric fields induce opposite phase alternating potentials in the receiver plates, and this "push-pull" action causes current to flow back and forth between the plates through the load. A disadvantage of this configuration for wireless charging is that the two plates in the receiving device must be aligned face to face with the charger plates for the device to work.
Longitudinal (unipolar) design: In this type of circuit, the transmitter and receiver have only one active electrode, and either the ground or a large passive electrode serves as the return path for the current. The transmitter oscillator is connected between an active and a passive electrode. The load is also connected between an active and a passive electrode. The electric field produced by the transmitter induces alternating charge displacement in the load dipole through electrostatic induction.
Resonance can also be used with capacitive coupling to extend the range. At the turn of the 20th century, Nikola Tesla did the first experiments with both resonant inductive and capacitive coupling.
Electrodynamic wireless power transfer
An electrodynamic wireless power transfer (EWPT) system utilizes a receiver with a mechanically resonating or rotating permanent magnet. When subjected to a time-varying magnetic field, the mechanical motion of the resonating magnet is converted into electricity by one or more electromechanical transduction schemes (e.g. electromagnetic/induction, piezoelectric, or capacitive). In contrast to inductive coupling systems which usually use high frequency magnetic fields, EWPT uses low-frequency magnetic fields (<1 kHz), which safely pass through conductive media and have higher human field exposure limits (~2 mT RMS at 1 kHz), showing promise for potential use in wirelessly recharging biomedical implants.
For EWPT devices having identical resonant frequencies, the magnitude of power transfer depends entirely on the coupling coefficient k between the transmitter and receiver devices relative to a critical value kc. For coupled resonators with the same resonant frequency, wireless power transfer between the transmitter and the receiver is spread over three regimes – under-coupled, critically coupled and over-coupled. As the coupling increases from the under-coupled regime (k < kc) to the critically coupled regime (k = kc), the optimum voltage gain curve grows in magnitude (measured at the receiver) and peaks when k = kc; it then enters the over-coupled regime (k > kc), where the peak splits into two. This critical coupling coefficient is demonstrated to be a function of the distance between the source and the receiver devices.
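The over-coupled "peak splitting" described above can be illustrated with the textbook result for two identical coupled resonators, whose normal modes sit near f± = f0/sqrt(1 ∓ k). This is a generic coupled-resonator sketch, not a formula taken from the EWPT literature, and the numbers are illustrative assumptions.

    import math

    def split_frequencies(f0_hz, k):
        # Approximate normal-mode frequencies of two identical coupled resonators:
        # f_low = f0 / sqrt(1 + k), f_high = f0 / sqrt(1 - k), for 0 <= k < 1.
        return f0_hz / math.sqrt(1 + k), f0_hz / math.sqrt(1 - k)

    f0 = 1000.0                    # Hz; illustrative low-frequency EWPT resonance
    for k in (0.01, 0.05, 0.20):   # coupling grows as the devices are brought closer
        f_lo, f_hi = split_frequencies(f0, k)
        print(f"k = {k:.2f}: peaks near {f_lo:.1f} Hz and {f_hi:.1f} Hz")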
Magnetodynamic coupling
In this method, power is transmitted between two rotating armatures, one in the transmitter and one in the receiver, which rotate synchronously, coupled together by a magnetic field generated by permanent magnets on the armatures. The transmitter armature is turned either by or as the rotor of an electric motor, and its magnetic field exerts torque on the receiver armature, turning it. The magnetic field acts like a mechanical coupling between the armatures. The receiver armature produces power to drive the load, either by turning a separate electric generator or by using the receiver armature itself as the rotor in a generator.
This device has been proposed as an alternative to inductive power transfer for noncontact charging of electric vehicles. A rotating armature embedded in a garage floor or curb would turn a receiver armature in the underside of the vehicle to charge its batteries. It is claimed that this technique can transfer power over distances of 10 to 15 cm (4 to 6 inches) with high efficiency, over 90%. Also, the low frequency stray magnetic fields produced by the rotating magnets produce less electromagnetic interference to nearby electronic devices than the high frequency magnetic fields produced by inductive coupling systems. A prototype system charging electric vehicles has been in operation at University of British Columbia since 2012. Other researchers, however, claim that the two energy conversions (electrical to mechanical to electrical again) make the system less efficient than electrical systems like inductive coupling.
Zenneck wave transmission
A new kind of system using Zenneck-type waves was shown by Oruganti et al., who demonstrated that it was possible to excite Zenneck-type waves on flat metal-air interfaces and transmit power across metal obstacles.
Here the idea is to excite a localized charge oscillation at the metal-air interface; the resulting modes then propagate along that interface.
Far-field (radiative) techniques
Far field methods achieve longer ranges, often multiple kilometer ranges, where the distance is much greater than the diameter of the device(s). High-directivity antennas or well-collimated laser light produce a beam of energy that can be made to match the shape of the receiving area. The maximum directivity for antennas is physically limited by diffraction.
In general, visible light (from lasers) and microwaves (from purpose-designed antennas) are the forms of electromagnetic radiation best suited to energy transfer.
The dimensions of the components may be dictated by the distance from transmitter to receiver, the wavelength and the Rayleigh criterion or diffraction limit, used in standard radio frequency antenna design, which also applies to lasers. Airy's diffraction limit is also frequently used to determine an approximate spot size at an arbitrary distance from the aperture. Electromagnetic radiation experiences less diffraction at shorter wavelengths (higher frequencies); so, for example, a blue laser is diffracted less than a red one.
The Rayleigh limit (also known as the Abbe diffraction limit), although originally applied to image resolution, can be viewed in reverse, and dictates that the irradiance (or intensity) of any electromagnetic wave (such as a microwave or laser beam) will be reduced as the beam diverges over distance at a minimum rate inversely proportional to the aperture size. The larger the ratio of a transmitting antenna's aperture or laser's exit aperture to the wavelength of radiation, the more can the radiation be concentrated in a compact beam.
Microwave power beaming can be more efficient than lasers, and is less prone to atmospheric attenuation caused by dust or aerosols such as fog.
Here, the power levels are calculated by combining the parameters together, and adding in the gains and losses due to the antenna characteristics and the transparency and dispersion of the medium through which the radiation passes. That process is known as calculating a link budget.
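A minimal link-budget sketch along these lines treats the transmit aperture as setting a diffraction-limited divergence of roughly 1.22·λ/D and assumes the receiver captures the fraction of the enlarged beam spot that it covers. This is a purely geometric estimate under idealized assumptions (no atmospheric loss, uniform beam); the frequency, apertures and range are illustrative.

    C_LIGHT = 3.0e8   # speed of light, m/s

    def received_fraction(freq_hz, d_tx_m, d_rx_m, range_m):
        # Diffraction-limited divergence (half-angle) ~ 1.22 * lambda / D_tx; the beam
        # spot at the receiver is then ~ D_tx + 2 * range * divergence across, and the
        # receiver captures roughly the fraction of the spot area it covers.
        wavelength = C_LIGHT / freq_hz
        divergence = 1.22 * wavelength / d_tx_m
        spot_diameter = d_tx_m + 2 * range_m * divergence
        return min(1.0, (d_rx_m / spot_diameter) ** 2)

    # Illustrative link: 2.45 GHz beam, 10 m transmitting dish, 10 m rectenna, 1 km range.
    print(f"Captured fraction ~ {received_fraction(2.45e9, 10.0, 10.0, 1000.0):.2f}")   # ~0.06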
Microwaves
Power transmission via radio waves can be made more directional, allowing longer-distance power beaming, with shorter wavelengths of electromagnetic radiation, typically in the microwave range. A rectenna may be used to convert the microwave energy back into electricity. Rectenna conversion efficiencies exceeding 95% have been realized. Power beaming using microwaves has been proposed for the transmission of energy from orbiting solar power satellites to Earth and the beaming of power to spacecraft leaving orbit has been considered.
Power beaming by microwaves has the difficulty that, for most space applications, the required aperture sizes are very large due to diffraction limiting antenna directionality. For example, the 1978 NASA study of solar power satellites required kilometer-scale transmitting and receiving apertures for a microwave beam at 2.45 GHz. These sizes can be somewhat decreased by using shorter wavelengths, although short wavelengths may have difficulties with atmospheric absorption and beam blockage by rain or water droplets. Because of the "thinned-array curse", it is not possible to make a narrower beam by combining the beams of several smaller satellites.
For earthbound applications, a large-area 10 km diameter receiving array allows large total power levels to be used while operating at the low power density suggested for human electromagnetic exposure safety. A human safe power density of 1 mW/cm2 distributed across a 10 km diameter area corresponds to 750 megawatts total power level. This is the power level found in many modern electric power plants. For comparison, a solar PV farm of similar size might easily exceed 10,000 megawatts (rounded) at best conditions during daytime.
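The arithmetic behind these figures is straightforward, as the sketch below shows: a 10 km diameter disc at 1 mW/cm^2 (10 W/m^2) intercepts a little under 800 MW, consistent with the roughly 750 MW quoted above.

    import math

    def total_power_w(diameter_m, density_w_per_m2):
        # Total power intercepted by a circular area at a uniform power density.
        return math.pi * (diameter_m / 2) ** 2 * density_w_per_m2

    density = 1e-3 * 1e4   # 1 mW/cm^2 expressed in W/m^2 (= 10 W/m^2)
    print(f"{total_power_w(10_000, density) / 1e6:.0f} MW")   # ~785 MW over a 10 km circle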
Following World War II, which saw the development of high-power microwave emitters known as cavity magnetrons, the idea of using microwaves to transfer power was researched. By 1964, a miniature helicopter propelled by microwave power had been demonstrated.
Japanese researcher Hidetsugu Yagi also investigated wireless energy transmission using a directional array antenna that he designed. In February 1926, Yagi and his colleague Shintaro Uda published their first paper on the tuned high-gain directional array now known as the Yagi antenna. While it did not prove to be particularly useful for power transmission, this beam antenna has been widely adopted throughout the broadcasting and wireless telecommunications industries due to its excellent performance characteristics.
Wireless high power transmission using microwaves is well proven. Experiments in the tens of kilowatts have been performed at the Goldstone Deep Space Communications Complex in California in 1975 and more recently (1997) at Grand Bassin on Reunion Island. These methods achieve distances on the order of a kilometer.
Under experimental conditions, microwave conversion efficiency was measured to be around 54% across one meter.
A change to 24 GHz has been suggested as microwave emitters similar to LEDs have been made with very high quantum efficiencies using negative resistance, i.e., Gunn or IMPATT diodes, and this would be viable for short range links.
In 2013, inventor Hatem Zeine demonstrated how wireless power transmission using phased array antennas can deliver electrical power up to 30 feet. It uses the same radio frequencies as WiFi.
In 2015, researchers at the University of Washington introduced power over Wi-Fi, which trickle-charged batteries and powered battery-free cameras and temperature sensors using transmissions from Wi-Fi routers. Wi-Fi signals were shown to power battery-free temperature and camera sensors at ranges of up to 20 feet. It was also shown that Wi-Fi can be used to wirelessly trickle-charge nickel–metal hydride and lithium-ion coin-cell batteries at distances of up to 28 feet.
In 2017, the Federal Communications Commission (FCC) certified the first mid-field radio frequency (RF) transmitter of wireless power. In 2021 the FCC granted a license to an over-the-air (OTA) wireless charging system that combines near-field and far-field methods by using a frequency of about 900 MHz. Due to the radiated power of about 1 W this system is intended for small IoT devices as various sensors, trackers, detectors and monitors.
Lasers
In the case of electromagnetic radiation closer to the visible region of the spectrum (0.2 to 2 micrometers), power can be transmitted by converting electricity into a laser beam that is received and concentrated onto photovoltaic cells (solar cells). This mechanism is generally known as 'power beaming' because the power is beamed at a receiver that can convert it to electrical energy. At the receiver, special photovoltaic laser power converters which are optimized for monochromatic light conversion are applied.
Advantages compared to other wireless methods are:
Collimated monochromatic wavefront propagation allows narrow beam cross-section area for transmission over large distances. As a result, there is little or no reduction in power when increasing the distance from the transmitter to the receiver.
Compact size: solid state lasers fit into small products.
No radio-frequency interference to existing radio communication such as Wi-Fi and cell phones.
Access control: only receivers hit by the laser receive power.
Drawbacks include:
Laser radiation is hazardous. Without a proper safety mechanism, low power levels can blind humans and other animals. High power levels can kill through localized spot heating.
Conversion between electricity and light is limited; photovoltaic cells achieve a maximum of 40%–50% efficiency (a rough end-to-end estimate is sketched after this list).
Atmospheric absorption, and absorption and scattering by clouds, fog, rain, etc., causes up to 100% losses.
Requires a direct line of sight with the target. (Instead of being beamed directly onto the receiver, the laser light can also be guided by an optical fiber. Then one speaks of power-over-fiber technology.)
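A back-of-the-envelope way to combine these losses is to multiply the stage efficiencies, as in the sketch below; the electrical-to-optical figure and the atmospheric transmission value are illustrative assumptions, while the 40%–50% photovoltaic figure comes from the list above.

    def end_to_end_efficiency(stage_efficiencies):
        # Overall efficiency of a chain of conversion stages is their product.
        eta = 1.0
        for fraction in stage_efficiencies:
            eta *= fraction
        return eta

    stages = {
        "electricity to laser light": 0.45,   # illustrative assumption
        "atmospheric transmission":   0.90,   # clear air; cloud or fog can push this toward 0
        "photovoltaic conversion":    0.45,   # 40-50% per the list above
    }
    print(f"End-to-end efficiency ~ {end_to_end_efficiency(stages.values()):.0%}")   # ~18%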
Laser "powerbeaming" technology was explored in military weapons and aerospace applications. Also, it is applied for the powering of various kinds of sensors in industrial environments. Lately, it is developed for powering commercial and consumer electronics. Wireless energy transfer systems using lasers for consumer space have to satisfy laser safety requirements standardized under IEC 60825.
The first wireless power system using lasers for consumer applications was Wi-Charge, demonstrated in 2018, capable of delivering power to stationary and moving devices across a room. This wireless power system complies with safety regulations according to the IEC 60825 standard. It is also approved by the US Food and Drug Administration (FDA).
Other practical considerations include beam propagation, coherence, and the range limitation problem.
Geoffrey Landis is one of the pioneers of solar power satellites and laser-based transfer of energy, especially for space and lunar missions. The demand for safe and frequent space missions has resulted in proposals for a laser-powered space elevator.
NASA's Dryden Flight Research Center has demonstrated a lightweight unmanned model plane powered by a laser beam. This proof-of-concept demonstrates the feasibility of periodic recharging using a laser beam system.
Scientists from the Chinese Academy of Sciences have developed a proof-of-concept of utilizing a dual-wavelength laser to wirelessly charge portable devices or UAVs.
Atmospheric plasma channel coupling
In atmospheric plasma channel coupling, energy is transferred between two electrodes by electrical conduction through ionized air. When an electric field gradient exists between the two electrodes, exceeding 34 kilovolts per centimeter at sea level atmospheric pressure, an electric arc occurs. This atmospheric dielectric breakdown results in the flow of electric current along a random trajectory through an ionized plasma channel between the two electrodes. An example of this is natural lightning, where one electrode is a virtual point in a cloud and the other is a point on Earth. Laser Induced Plasma Channel (LIPC) research is presently underway using ultrafast lasers to artificially promote development of the plasma channel through the air, directing the electric arc, and guiding the current across a specific path in a controllable manner. The laser energy reduces the atmospheric dielectric breakdown voltage and the air is made less insulating by superheating, which lowers the density (ρ) of the filament of air.
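Using the breakdown field quoted above, the voltage needed to strike an unguided arc across a given gap at sea level scales linearly with the gap length, as the small sketch below shows; the gap lengths are illustrative, and laser pre-ionization works precisely by lowering this requirement.

    BREAKDOWN_FIELD_V_PER_M = 34e3 * 100   # 34 kV/cm at sea level, per the text above

    def arc_voltage(gap_m):
        # Approximate voltage needed to break down a uniform air gap of the given length.
        return BREAKDOWN_FIELD_V_PER_M * gap_m

    for gap in (0.01, 0.10, 1.0):   # 1 cm, 10 cm, 1 m gaps
        print(f"{gap * 100:5.0f} cm gap: ~{arc_voltage(gap) / 1e6:.2f} MV")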
This new process is being explored for use as a laser lightning rod and as a means to trigger lightning bolts from clouds for natural lightning channel studies, for artificial atmospheric propagation studies, as a substitute for conventional radio antennas, for applications associated with electric welding and machining, for diverting power from high-voltage capacitor discharges, for directed-energy weapon applications employing electrical conduction through a ground return path, and electronic jamming.
Energy harvesting
In the context of wireless power, energy harvesting, also called power harvesting or energy scavenging, is the conversion of ambient energy from the environment to electric power, mainly to power small autonomous wireless electronic devices. The ambient energy may come from stray electric or magnetic fields or radio waves from nearby electrical equipment, light, thermal energy (heat), or kinetic energy such as vibration or motion of the device. Although the efficiency of conversion is usually low and the power gathered often minuscule (milliwatts or microwatts), it can be adequate to run or recharge small micropower wireless devices such as remote sensors, which are proliferating in many fields. This new technology is being developed to eliminate the need for battery replacement or charging of such wireless devices, allowing them to operate completely autonomously.
Uses
Inductive power transfer between nearby wire coils was the earliest wireless power technology to be developed, existing since the transformer was developed in the 1800s. Induction heating has been used since the early 1900s and is used for induction cooking.
With the advent of cordless devices, induction charging stands have been developed for appliances used in wet environments, like electric toothbrushes and electric razors, to eliminate the hazard of electric shock. One of the earliest proposed applications of inductive transfer was to power electric locomotives. In 1892 Maurice Hutin and Maurice Leblanc patented a wireless method of powering railroad trains using resonant coils inductively coupled to a track wire at 3 kHz.
In the early 1960s resonant inductive wireless energy transfer was used successfully in implantable medical devices including such devices as pacemakers and artificial hearts. While the early systems used a resonant receiver coil, later systems implemented resonant transmitter coils as well. These medical devices are designed for high efficiency using low power electronics while efficiently accommodating some misalignment and dynamic twisting of the coils. The separation between the coils in implantable applications is commonly less than 20 cm. Today resonant inductive energy transfer is regularly used for providing electric power in many commercially available medical implantable devices.
The first passive RFID (Radio Frequency Identification) technologies were invented by Mario Cardullo (1973) and Koelle et al. (1975) and by the 1990s were being used in proximity cards and contactless smartcards.
The proliferation of portable wireless communication devices such as mobile phones, tablets, and laptop computers in recent decades is currently driving the development of mid-range wireless powering and charging technology to eliminate the need for these devices to be tethered to wall plugs during charging. The Wireless Power Consortium was established in 2008 to develop interoperable standards across manufacturers. Its Qi inductive power standard published in August 2009 enables high efficiency charging and powering of portable devices of up to 5 watts over distances of 4 cm (1.6 inches). The wireless device is placed on a flat charger plate (which can be embedded in table tops at cafes, for example) and power is transferred from a flat coil in the charger to a similar one in the device. In 2007, a team led by Marin Soljačić at MIT used a dual resonance transmitter with a 25 cm diameter secondary tuned to 10 MHz to transfer 60 W of power to a similar dual resonance receiver over a distance of 2 m (eight times the transmitter coil diameter) at around 40% efficiency.
In 2008 the team of Greg Leyh and Mike Kennan of Nevada Lightning Lab used a grounded dual resonance transmitter with a 57 cm diameter secondary tuned to 60 kHz and a similar grounded dual resonance receiver to transfer power through coupled electric fields with an earth current return circuit. In 2011, Dr. Christopher A. Tucker and Professor Kevin Warwick of the University of Reading recreated Tesla's 1900 patent 0,645,576 in miniature and demonstrated power transmission at a resonant frequency of 27.50 MHz, with an effective efficiency of 60%.
A major motivation for microwave research in the 1970s and 1980s was to develop a satellite for space-based solar power. Conceived in 1968 by Peter Glaser, this would harvest energy from sunlight using solar cells and beam it down to Earth as microwaves to huge rectennas, which would convert it to electrical energy on the electric power grid. In landmark 1975 experiments as technical director of a JPL/Raytheon program, Brown demonstrated long-range transmission by beaming 475 W of microwave power to a rectenna a mile away, with a microwave to DC conversion efficiency of 54%. At NASA's Jet Propulsion Laboratory, he and Robert Dickinson transmitted 30 kW DC output power across 1.5 km with 2.38 GHz microwaves from a 26 m dish to a 7.3 x 3.5 m rectenna array. The incident-RF to DC conversion efficiency of the rectenna was 80%. In 1983 Japan launched Microwave Ionosphere Nonlinear Interaction Experiment (MINIX), a rocket experiment to test transmission of high power microwaves through the ionosphere.
In recent years a focus of research has been the development of wireless-powered drone aircraft, which began in 1959 with the Dept. of Defense's RAMP (Raytheon Airborne Microwave Platform) project which sponsored Brown's research. In 1987 Canada's Communications Research Center developed a small prototype airplane called Stationary High Altitude Relay Platform (SHARP) to relay telecommunication data between points on earth similar to a communications satellite. Powered by a rectenna, it could fly at 13 miles (21 km) altitude and stay aloft for months. In 1992 a team at Kyoto University built a more advanced craft called MILAX (MIcrowave Lifted Airplane eXperiment).
In 2003 NASA flew the first laser powered aircraft. The small model plane's motor was powered by electricity generated by photocells from a beam of infrared light from a ground-based laser, while a control system kept the laser pointed at the plane.
See also
References
Further reading
Latest work on AirFuel Alliance class 2 and class 3 transmitters, adaptive tuning, radiated EMI, multi-mode wireless power systems, and control strategies.
Comprehensive, theoretical engineering text
Engineering text
Thibault, G. (2014). Wireless Pasts and Wired Futures. In J. Hadlaw, A. Herman, & T. Swiss (Eds.), Theories of the Mobile Internet. Materialities and Imaginaries. (pp. 126–154). London: Routledge. A short cultural history of wireless power
, Microwave powered aircraft, John E. Martin, et al. (1990).
, Solid state solar to microwave energy converter system and apparatus, Kenneth W. Dudley, et al. (1976).
, Microwave power receiving antenna, Carroll C. Dailey (1970).
External links
Microwave Power Transmission
The Stationary High Altitude Relay Platform (SHARP)
Marin Soljačić's MIT WiTricity
Energy development
Electric power distribution
Electromagnetic compatibility
Microwave transmission
Inventions by Nikola Tesla | Wireless power transfer | [
"Engineering"
] | 9,453 | [
"Radio electronics",
"Electrical engineering",
"Electromagnetic compatibility"
] |
570,723 | https://en.wikipedia.org/wiki/Barograph | A barograph is a barometer that records the barometric pressure over time in graphical form. This instrument is also used to make a continuous recording of atmospheric pressure. The pressure-sensitive element, a partially evacuated metal cylinder, is linked to a pen arm in such a way that the vertical displacement of the pen is proportional to the changes in the atmospheric pressure.
Development
Alexander Cumming, a watchmaker and mechanic, has a claim to having made the first effective recording barograph in the 1760s. Cumming created a series of barometrical clocks, including one for King George III. However, this type of design fell out of favour. The aneroid barograph came later: since the amount of movement that can be generated by a single aneroid cell is minuscule, up to seven aneroids (so-called Vidi cans) are often stacked "in series" to amplify their motion. This type of barograph was invented in 1844 by the Frenchman Lucien Vidi (1805–1866).
In such barographs one or more aneroid cells act through a gear or lever train to drive a recording arm that has at its extreme end either a scribe or a pen. A scribe records on smoked foil while a pen records on paper using ink, held in a nib. The recording material is mounted on a cylindrical drum which is rotated slowly by clockwork. Commonly, the drum makes one revolution per day, per week, or per month and the rotation rate can often be selected by the user.
Various other types of barograph have also been invented. Karl Kreil described a machine in 1843 based on a syphon barometer, where a pencil marked a chart at uniform intervals. Francis Ronalds, the Honorary Director of the Kew Observatory, created the first successful barograph utilising photography in 1845. The changing height of the mercury in the barometer was recorded on a continuously moving photosensitive surface. By 1847, a sophisticated temperature-compensation mechanism was also employed. Ronalds’ barograph was utilised by the UK Meteorological Office for many years to assist in weather forecasting and the machines were supplied to numerous observatories around the world.
Modern use
Today, traditional recording barographs for meteorological use have largely (though not entirely) been superseded by electronic weather instruments that use computer methods to record the barometric pressure. These are not only less expensive than earlier barographs but they may also offer both greater recording length and the ability to perform further data analysis on the captured data, including automated use of the data to forecast the weather. Older mechanical barographs are highly prized by collectors as they make good display items, often being made of high-quality woods and brass.
The most common weather barograph found in homes and public buildings today is the 8-day type. Some important manufacturers of barographs are Negretti and Zambra, Short and Mason, and Richard Ferris, among others. The late Victorian to early 20th century is generally considered to be the heyday of barograph manufacture. Many important refinements were made at this time, including improved temperature compensation and modification of the pen arm to allow less weight to be applied to the paper, giving better registration of small pressure changes (i.e. less friction on the nib). Marine barographs (used on ships) often include damping, which evens out the motion of the ship so that a more stable trace can be obtained; this may take the form of oil damping of the mechanism or simple coiled-spring feet on the base. Newer solid-state digital barographs avoid this issue altogether, since they have no moving parts.
Use in aviation
As atmospheric pressure responds in a predictable manner to changes in altitude, barographs may be used to record elevation changes during an aircraft flight. Barographs were required by the FAI to record certain tasks and record attempts associated with sailplanes. A continuously varying trace indicated that the sailplane had not landed during a task, while measurements from a calibrated trace could be used to establish the completion of altitude tasks or the setting of records. Examples of FAI approved sailplane barographs included the Replogle mechanical drum barograph and the EW electronic barograph (which may be used in conjunction with GPS). Mechanical barographs are not commonly used for flight documentation now, having been displaced by GNSS flight recorders.
Three-day barograph
On the top right of the picture of the three-day barograph can be seen a silver knurled knob. This is to adjust the barograph so that it correctly reflects the station pressure. Barely visible below the knob is a small silver plunger. This is pressed every three hours to leave a time mark on the paper.
The line between two of these marks is called the 'characteristic of barometric tendency' and is used by weather forecasters. The observer would first note if the pressure was lower or higher than three hours prior. Next, a code number would be chosen that best represents the three-hour trace. There are nine possible choices (0 to 8) and no single code has preference over another. In the case of the graph on the barograph, one of two codes could be picked. An 8 (steady then decreasing) or 6 (decreasing then steady). The observer should pick the 6 because it represents the last part of the trace and is thus most representative of the pressure change.
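The code-selection logic just described can be sketched in a few lines. The function below is only an illustration covering the codes mentioned above (4, 6 and 8) from three evenly spaced readings; its name, thresholds, and simplified two-segment view of the trace are assumptions rather than an official procedure.

```python
def tendency_code(start_hpa, mid_hpa, end_hpa, tol=0.1):
    """Pick a (simplified) characteristic-of-tendency code from three pressure
    readings spanning the last three hours. Only a subset of the nine codes
    (0-8) is handled; the thresholds and segmentation are illustrative."""
    def trend(delta):
        if delta > tol:
            return "rising"
        if delta < -tol:
            return "falling"
        return "steady"

    first = trend(mid_hpa - start_hpa)    # first half of the trace
    second = trend(end_hpa - mid_hpa)     # last half of the trace
    if first == "falling" and second == "steady":
        return 6                          # decreasing, then steady
    if first == "steady" and second == "falling":
        return 8                          # steady, then decreasing
    if first == second == "steady":
        return 4                          # steady
    return None                           # remaining codes omitted in this sketch

# The trace described above (pressure falls, then levels off) yields code 6.
print(tendency_code(1012.0, 1009.5, 1009.4))
```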
In the bottom centre is the aneroid (large circular silver object). As the pressure increases, the aneroid is pushed down causing the arm to move up and leave a trace on the paper. As the pressure decreases, the spring lifts the aneroid and the arm moves down.
After three days the drum to which the graph is attached is removed. At this point the clockwork motor is wound, any corrections needed to increase or decrease its speed are made, and a new chart is attached.
See also
Thermo-hygrograph
References
Meteorological instrumentation and equipment
"Technology",
"Engineering"
] | 1,256 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
570,749 | https://en.wikipedia.org/wiki/Laboratory%20information%20management%20system | A laboratory information management system (LIMS), sometimes referred to as a laboratory information system (LIS) or laboratory management system (LMS), is a software-based solution with features that support a modern laboratory's operations. Key features include—but are not limited to—workflow and data tracking support, flexible architecture, and data exchange interfaces, which fully "support its use in regulated environments". The features and uses of a LIMS have evolved over the years from simple sample tracking to an enterprise resource planning tool that manages multiple aspects of laboratory informatics.
There is no single, universally accepted definition of the term "LIMS", as it is used to encompass a number of different laboratory informatics components. The spread and depth of these components is highly dependent on the LIMS implementation itself. All LIMSs have a workflow component and some summary data management facilities, but beyond that there are significant differences in functionality.
Historically the LIMS, LIS, and process development execution system (PDES) have all performed similar functions. The term "LIMS" has tended to refer to informatics systems targeted for environmental, research, or commercial analysis such as pharmaceutical or petrochemical work. "LIS" has tended to refer to laboratory informatics systems in the forensics and clinical markets, which often required special case management tools. "PDES" has generally applied to a wider scope, including, for example, virtual manufacturing techniques, while not necessarily integrating with laboratory equipment.
In recent times LIMS functionality has spread even further beyond its original purpose of sample management. Assay data management, data mining, data analysis, and electronic laboratory notebook (ELN) integration have been added to many LIMS, enabling the realization of translational medicine completely within a single software solution. Additionally, the distinction between LIMS and LIS has blurred, as many LIMS now also fully support comprehensive case-centric clinical data.
History
Up until the late 1970s, the management of laboratory samples and the associated analysis and reporting were time-consuming manual processes often riddled with transcription errors. This gave some organizations impetus to streamline the collection of data and how it was reported. Custom in-house solutions were developed by a few individual laboratories, while some enterprising entities sought to develop commercial reporting solutions in the form of special instrument-based systems.
In 1982 the first generation of LIMS was introduced in the form of a centralized minicomputer, which offered automated reporting tools. As the interest in these early LIMS grew, industry leaders like Gerst Gibbon of the Federal Energy Technology Center in Pittsburgh began planting the seeds through LIMS-related conferences. By 1988 the second-generation commercial offerings were tapping into relational databases to expand LIMS into more application-specific territory, and International LIMS Conferences were in full swing. As personal computers became more powerful and prominent, a third generation of LIMS emerged in the early 1990s. These new LIMS took advantage of client/server architecture, allowing laboratories to implement better data processing and exchanges.
By 1995 the client/server tools allowed the processing of data anywhere on the network. Web-enabled LIMS were introduced the following year, enabling researchers to extend operations outside the laboratory. From 1996 to 2002 additional functionality was included, from wireless networking and georeferencing of samples, to the adoption of XML standards and Internet purchasing.
As of 2012, some LIMS had added additional characteristics such as clinical functionality and electronic laboratory notebook (ELN) functionality, and the software as a service (SaaS) distribution model had begun to gain ground.
Technology
Operations
The LIMS is an evolving concept, with new features and functionality being added often. As laboratory demands change and technological progress continues, the functions of a LIMS will likely also change. Despite these changes, a LIMS tends to have a base set of functionality that defines it. That functionality can roughly be divided into five laboratory processing phases, with numerous software functions falling under each:
(1) the reception and log in of a sample and its associated customer data,
(2) the assignment, scheduling, and tracking of the sample and the associated analytical workload,
(3) the processing and quality control associated with the sample and the utilized equipment and inventory,
(4) the storage of data associated with the sample analysis,
(5) the inspection, approval, and compilation of the sample data for reporting and/or further analysis.
There are several pieces of core functionality associated with these laboratory processing phases that tend to appear in most LIMS:
Sample management
The core function of LIMS has traditionally been the management of samples. This typically is initiated when a sample is received in the laboratory, at which point the sample will be registered in the LIMS. Some LIMS will allow the customer to place an "order" for a sample directly to the LIMS at which point the sample is generated in an "unreceived" state. The processing could then include a step where the sample container is registered and sent to the customer for the sample to be taken and then returned to the lab. The registration process may involve accessioning the sample and producing barcodes to affix to the sample container. Various other parameters such as clinical or phenotypic information corresponding with the sample are also often recorded. The LIMS then tracks chain of custody as well as sample location. Location tracking usually involves assigning the sample to a particular freezer location, often down to the granular level of shelf, rack, box, row, and column. Other event tracking such as freeze and thaw cycles that a sample undergoes in the laboratory may be required.
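As a concrete illustration of the tracking just described, the minimal sketch below models a sample with an accession barcode, a granular freezer location, and an event log covering chain of custody. All class, field, and state names are hypothetical and not drawn from any particular LIMS product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Tuple

@dataclass
class FreezerLocation:
    freezer: str
    shelf: int
    rack: int
    box: int
    row: str
    column: int

@dataclass
class Sample:
    barcode: str                          # accession identifier printed on the tube
    state: str = "unreceived"             # e.g. unreceived -> received -> in_analysis
    location: Optional[FreezerLocation] = None
    events: List[Tuple[datetime, str, str]] = field(default_factory=list)  # chain-of-custody log

    def log(self, action: str, user: str) -> None:
        self.events.append((datetime.now(timezone.utc), action, user))

    def receive(self, user: str) -> None:
        self.state = "received"
        self.log("received", user)

    def move_to(self, location: FreezerLocation, user: str) -> None:
        self.location = location
        self.log(f"moved to {location}", user)

s = Sample(barcode="S-000123")
s.receive(user="jdoe")
s.move_to(FreezerLocation("FRZ-2", shelf=3, rack=1, box=7, row="B", column=4), user="jdoe")
print(s.state, len(s.events))   # received 2
```

A real system would add audit metadata, freeze and thaw counters, electronic signatures, and persistence, but the shape of the record is broadly similar.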
Modern LIMS have implemented extensive configurability as each laboratory's needs for tracking additional data points can vary widely. LIMS vendors cannot typically make assumptions about what these data tracking needs are, and therefore vendors must create LIMS that are adaptable to individual environments. LIMS users may also have regulatory concerns to comply with such as CLIA, HIPAA, GLP, and FDA specifications, affecting certain aspects of sample management in a LIMS solution. One key to compliance with many of these standards is audit logging of all changes to LIMS data, and in some cases a full electronic signature system is required for rigorous tracking of field-level changes to LIMS data.
Instrument and application integration
Modern LIMS offer an increasing amount of integration with laboratory instruments and applications. A LIMS may create control files that are "fed" into the instrument and direct its operation on some physical item such as a sample tube or sample plate. The LIMS may then import instrument results files to extract data for quality control assessment of the operation on the sample. Access to the instrument data can sometimes be regulated based on chain of custody assignments or other security features if need be.
Modern LIMS products now also allow for the import and management of raw assay data results. Modern targeted assays such as qPCR and deep sequencing can produce tens of thousands of data points per sample. Furthermore, in the case of drug and diagnostic development as many as 12 or more assays may be run for each sample. In order to track this data, a LIMS solution needs to be adaptable to many different assay formats at both the data layer and import creation layer, while maintaining a high level of overall performance. Some LIMS products address this by simply attaching assay data as BLOBs to samples, but this limits the utility of that data in data mining and downstream analysis.
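The import-and-check step described above can be sketched as follows. The CSV column names, the single QC metric, and its acceptance window are invented for the example; a production interface would be driven by instrument-specific parsers and configurable limits.

```python
import csv

# Hypothetical acceptance window for one QC metric reported by the instrument.
QC_LIMITS = {"absorbance": (0.05, 2.50)}

def import_results(path):
    """Read a (hypothetical) instrument results CSV and split rows into
    accepted and QC-flagged lists, keyed by the sample barcode."""
    accepted, flagged = [], []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            value = float(row["absorbance"])
            low, high = QC_LIMITS["absorbance"]
            record = {"sample": row["sample_barcode"], "absorbance": value}
            (accepted if low <= value <= high else flagged).append(record)
    return accepted, flagged
```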
Electronic data exchange
The exponentially growing volume of data created in laboratories, coupled with increased business demands and focus on profitability, have pushed LIMS vendors to increase attention to how their LIMS handles electronic data exchanges. Attention must be paid to how an instrument's input and output data is managed, how remote sample collection data is imported and exported, and how mobile technology integrates with the LIMS. The successful transfer of data files in spreadsheets and other formats is a pivotal aspect of the modern LIMS. In fact, the transition "from proprietary databases to standardized database management systems such as MySQL" has arguably had one of the biggest impacts on how data is managed and exchanged in laboratories. In addition to mobile and database electronic data exchange, many LIMS support real-time data exchange with Electronic Health Records used in core hospital or clinic operations.
Additional functions
Aside from the key functions of sample management, instrument and application integration, and electronic data exchange, there are numerous additional operations that can be managed in a LIMS. This includes but is not limited to:
Audit management
Fully track and maintain an audit trail
Barcode handling
Assign one or more data points to a barcode format; read and extract information from a barcode
Chain of custody
Assign roles and groups that dictate access to specific data records and who is managing them
Compliance
Follow regulatory standards that affect the laboratory
Customer relationship management
Handle the demographic information and communications for associated clients
Document management
Process and convert data to certain formats; manage how documents are distributed and accessed
Instrument calibration and maintenance
Schedule important maintenance and calibration of lab instruments and keep detailed records of such activities
Inventory and equipment management
Measure and record inventories of vital supplies and laboratory equipment
Manual and electronic data entry
Provide fast and reliable interfaces for data to be entered by a human or electronic component
Method management
Provide one location for all laboratory process and procedure (P&P) and methodology to be housed and managed as well as connecting each sample handling step with current instructions for performing the operation
Personnel and workload management
Organize work schedules, workload assignments, employee demographic information, training, and financial information
Quality assurance and control
Gauge and control sample quality, corrective and preventive action (CAPA), data entry standards, and workflow
Reports
Create and schedule reports in a specific format; schedule and distribute reports to designated parties
Time tracking
Calculate and maintain processing and handling times on chemical reactions, workflows, and more
Traceability
Show audit trail and/or chain of custody of a sample
Workflows
Track a sample, a batch of samples, or a "lot" of batches through its lifecycle
Client-side options
A LIMS has utilized many architectures and distribution models over the years. As technology has changed, how a LIMS is installed, managed, and utilized has also changed with it. The following represents architectures which have been utilized at one point or another.
Thick-client
A thick-client LIMS is a more traditional client/server architecture, with some of the system residing on the computer or workstation of the user (the client) and the rest on the server. The LIMS software is installed on the client computer, which does all of the data processing. Later it passes information to the server, which has the primary purpose of data storage. Most changes, upgrades, and other modifications will happen on the client side.
This was one of the first architectures implemented in a LIMS, having the advantage of higher processing speeds (because processing is done on the client and not the server). Thick-client systems have also provided more interactivity and customization, though often with a steeper learning curve. The disadvantages of client-side LIMS include the need for more robust client computers and more time-consuming upgrades, as well as a lack of base functionality through a web browser. The thick-client LIMS can become web-enabled through an add-on component.
Although there is a claim of improved security through the use of a thick-client LIMS, this is based on the misconception that "only users with the client application installed on their PC can access server side information". This secrecy-of-design reliance is known as security through obscurity and ignores an adversary's ability to mimic client-server interaction through, for example, reverse engineering, network traffic interception, or simply purchasing a thick-client license. Such a view is in contradiction of the "Open Design" principle of the National Institute of Standards and Technology's Guide to General Server Security which states that "system security should not depend on the secrecy of the implementation or its components", which can be considered as a reiteration of Kerckhoffs's principle.
Thin-client
A thin-client LIMS is a more modern architecture which offers full application functionality accessed through a device's web browser. The actual LIMS software resides on a server (host) which feeds and processes information without saving it to the user's hard disk. Any necessary changes, upgrades, and other modifications are handled by the entity hosting the server-side LIMS software, meaning all end-users see all changes made. To this end, a true thin-client LIMS will leave no "footprint" on the client's computer, and only the integrity of the web browser need be maintained by the user. The advantages of this system include significantly lower cost of ownership and fewer network and client-side maintenance expenses. However, this architecture has the disadvantage of requiring real-time server access, a need for increased network throughput, and slightly less functionality. A sort of hybrid architecture that incorporates the features of thin-client browser usage with a thick client installation exists in the form of a web-based LIMS.
Some LIMS vendors are beginning to rent hosted, thin-client solutions as "software as a service" (SaaS). These solutions tend to be less configurable than on-premises solutions and are therefore considered for less demanding implementations such as laboratories with few users and limited sample processing volumes.
Thin-client (and other) LIMS deployments are commonly accompanied by a maintenance, warranty, and support (MSW) agreement. Pricing levels are typically based on a percentage of the license fee, with a standard level of service for 10 concurrent users being approximately 10 hours of support and additional customer service at a rate of roughly $200 per hour. Though some may choose to opt out of an MSW after the first year, it is often more economical to continue the plan in order to receive updates to the LIMS, giving it a longer life span in the laboratory.
Web-enabled
A web-enabled LIMS architecture is essentially a thick-client architecture with an added web browser component. In this setup, the client-side software has additional functionality that allows users to interface with the software through their device's browser. This functionality is typically limited only to certain functions of the web client. The primary advantage of a web-enabled LIMS is the end-user can access data both on the client side and the server side of the configuration. As in a thick-client architecture, updates in the software must be propagated to every client machine. However, the added disadvantages of requiring always-on access to the host server and the need for cross-platform functionality mean that additional overhead costs may arise.
Web-based
A web-based LIMS architecture is a hybrid of the thick- and thin-client architectures. While much of the client-side work is done through a web browser, the LIMS may also require the support of desktop software installed on the client device. The end result is a process that is apparent to the end-user through a web browser, but perhaps not so apparent as it runs thick-client-like processing in the background. In this case, web-based architecture has the advantage of providing more functionality through a more friendly web interface. The disadvantages of this setup are more sunk costs in system administration and reduced functionality on mobile platforms.
Configurability
LIMS implementations are notorious for often being lengthy and costly. This is partly due to the diversity of requirements within each lab, but also to the inflexible nature of most LIMS products for adapting to these widely varying requirements. Newer LIMS solutions are beginning to emerge that take advantage of modern techniques in software design that are inherently more configurable and adaptable — particularly at the data layer — than prior solutions. This means not only that implementations are much faster, but also that the costs are lower and the risk of obsolescence is minimized.
Distinction between a LIMS and a LIS
Until recently, the LIMS and Laboratory Information System (LIS) have exhibited a few key differences, making them noticeably separate entities.
A LIMS traditionally has been designed to process and report data related to batches of samples from biology labs, water treatment facilities, drug trials, and other entities that handle complex batches of data. A LIS has been designed primarily for processing and reporting data related to individual patients in a clinical setting.
A LIMS may need to satisfy good manufacturing practice (GMP) and meet the reporting and audit needs of the regulatory bodies and research scientists in many different industries. A LIS, however, must satisfy the reporting and auditing needs of health service agencies e.g. the hospital accreditation agency, HIPAA in the US, or other clinical medical practitioners.
A LIMS is most competitive in group-centric settings (dealing with "batches" and "samples") that often deal with mostly anonymous research-specific laboratory data, whereas a LIS is usually most competitive in patient-centric settings (dealing with "subjects" and "specimens") and clinical labs. A LIS is regulated as a medical device by the FDA, and the companies that produce the software are therefore liable for defects. Due to this, a LIS cannot be customized by the client.
Standards
A LIMS covers standards such as 21 CFR Part 11 from the Food and Drug Administration (United States), ISO/IEC 17025, ISO 15189, ISO 20387, Good Clinical Practice (GCP), Good Laboratory Practice (GLP), Good Manufacturing Practice (GMP), FDA Food Safety Modernization Act (FSMA), HACCP, and ISBER Best Practices.
See also
Data management
List of LIMS software packages
List of ELN software packages
Scientific management
Title 21 CFR Part 11
Virtual research environment
References
Further reading
Information systems
Health informatics
Health care software | Laboratory information management system | [
"Technology",
"Biology"
] | 3,654 | [
"Information systems",
"Information technology",
"Health informatics",
"Medical technology"
] |
570,922 | https://en.wikipedia.org/wiki/Action%20at%20a%20distance | Action at a distance is the concept in physics that an object's motion can be affected by another object without the two being in physical contact; that is, it is the concept of the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories. In the modern understanding, none of the four fundamental interactions (gravity, electromagnetism, the strong interaction, and the weak interaction) is described by action at a distance.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action-at-a-distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, no medium is required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Direct impact of macroscopic objects seems visually distinguishable from action at a distance. If, however, the objects are constructed of atoms whose volume is not sharply defined and which interact by electric and magnetic forces, the distinction is less clear.
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other models according to the needs of each physical problem.
One role is as a summary of physical phenomena, independent of any understanding of the cause of such an action. For example, astronomical tables of planetary positions can be compactly summarized using Newton's law of universal gravitation, which assumes the planets interact without contact or an intervening medium. As a summary of data, the concept does not need to be evaluated as a plausible physical model.
Action at a distance also acts as a model explaining physical phenomena even in the presence of other models. Again in the case of gravity, hypothesizing an instantaneous force between masses allows the return time of comets to be predicted as well as predicting the existence of previously unknown planets, like Neptune. These triumphs of physics predated the alternative more accurate model for gravity based on general relativity by many decades.
Introductory physics textbooks discuss central forces, like gravity, by models based on action-at-distance without discussing the cause of such forces or issues with it until the topics of relativity and fields are discussed. For example, see The Feynman Lectures on Physics on gravity.
History
Early inquiries into motion
Action-at-a-distance as a physical concept requires identifying objects, distances, and their motion. In antiquity, ideas about the natural world were not organized in these terms. Objects in motion were modeled as living beings. Around 1600, the scientific method began to take root. René Descartes held a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Many experiments with electrical and magnetic materials led to new ideas about forces. These efforts set the stage for Newton's work on forces and gravity.
Newtonian gravity
In 1687 Isaac Newton published his Principia, which combined his laws of motion with a new mathematical analysis able to reproduce Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to the product of their masses and inversely proportional to the square of the distance between them. Thus the motions of planets were predicted by assuming forces working over great distances.
This mathematical expression of the force did not imply a cause. Newton considered action-at-a-distance to be an inadequate model for gravity; in his words, the notion that one body might act upon another at a distance through a vacuum, without the mediation of anything else, was "so great an absurdity that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into it".
Metaphysical scientists of the early 1700s strongly objected to the unexplained action-at-a-distance in Newton's theory. Gottfried Wilhelm Leibniz complained that the mechanism of gravity was "invisible, intangible, and not mechanical". Moreover, initial comparisons with astronomical data were not favorable. As mathematical techniques improved throughout the 1700s, the theory showed increasing success, predicting the date of the return of Halley's comet and aiding the discovery of planet Neptune in 1846. These successes and the increasingly empirical focus of science towards the 19th century led to acceptance of Newton's theory of gravity despite distaste for action-at-a-distance.
Electrical action at a distance
Electrical and magnetic phenomena also began to be explored systematically in the early 1600s. In William Gilbert's early theory of "electric effluvia," a kind of electric atmosphere, he ruled out action-at-a-distance on the grounds that "no action can be performed by matter save by contact".
However, subsequent experiments, especially those by Stephen Gray, showed electrical effects over distance. Gray developed an experiment called the "electric boy", demonstrating electrical transfer without direct contact.
Franz Aepinus was the first to show, in 1759, that a theory of action at a distance for electricity provides a simpler replacement for the electric effluvia theory. Despite this success, Aepinus himself considered the nature of the forces to be unexplained: he did "not approve of the doctrine which assumes the possibility of action at a distance", setting the stage for a shift to theories based on aether.
By 1785 Charles-Augustin de Coulomb showed that two electric charges at rest experience a force inversely proportional to the square of the distance between them, a result now called Coulomb's law. The striking similarity to gravity strengthened the case for action at a distance, at least as a mathematical model.
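The striking similarity can be made explicit by writing the two inverse-square laws side by side (standard modern notation, not taken from the historical sources discussed here):

\[
F_\text{grav} = G\,\frac{m_1 m_2}{r^{2}},
\qquad
F_\text{elec} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^{2}},
\]

where G is the gravitational constant and \(\varepsilon_0\) the vacuum permittivity. The main structural difference is that charge carries a sign, so the electric force may attract or repel, whereas gravity only attracts.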
As mathematical methods improved, especially through the work of Pierre-Simon Laplace, Joseph-Louis Lagrange, and Siméon Denis Poisson, more sophisticated mathematical methods began to influence the thinking of scientists. The concept of potential energy applied to small test particles led to the concept of a scalar field, a mathematical model representing the forces throughout space. While this mathematical model is not a mechanical medium, the mental picture of such a field resembles a medium.
Fields as an alternative
Michael Faraday was the first who suggested that action at a distance was inadequate as an account of electric and magnetic forces, even in the form of a (mathematical) potential field. Faraday, an empirical experimentalist, cited three reasons in support of some medium transmitting electrical force: 1) electrostatic induction across an insulator depends on the nature of the insulator, 2) cutting a charged insulator causes opposite charges to appear on each half, and 3) electric discharge sparks are curved at an insulator. From these reasons he concluded that the particles of an insulator must be polarized, with each particle contributing to continuous action. He also experimented with magnets, demonstrating lines of force made visible by iron filings. However, in both cases his field-like model depends on particles that interact through an action-at-a-distance: his mechanical field-like model has no more fundamental physical cause than the long-range central field model.
Faraday's observations, as well as others, led James Clerk Maxwell to a breakthrough formulation in 1865, a set of equations that combined electricity and magnetism, both static and dynamic, and which included electromagnetic radiation – light. Maxwell started with elaborate mechanical models but ultimately produced a purely mathematical treatment using dynamical vector fields. The sense that these fields must be set to vibrate to propagate light set off a search of a medium of propagation; the medium was called the luminiferous aether or the aether.
In 1873 Maxwell addressed action at a distance explicitly. He reviews Faraday's lines of force, carefully pointing out that Faraday himself did not provide a mechanical model of these lines in terms of a medium. Nevertheless the many properties of these lines of force imply these "lines must not be regarded as mere mathematical abstractions". Faraday himself viewed these lines of force as a model, a "valuable aid" to the experimentalist, a means to suggest further experiments.
In distinguishing between different kinds of action Faraday suggested three criteria: 1) do additional material objects alter the action? 2) does the action take time? and 3) does it depend upon the receiving end? For electricity, Faraday knew that all three criteria were met for electric action, but gravity was thought to only meet the third one. After Maxwell's time a fourth criterion, the transmission of energy, was added, thought to also apply to electricity but not gravity. With the advent of new theories of gravity, the modern account would give gravity all of the criteria except dependence on additional objects.
Fields fade into spacetime
The success of Maxwell's field equations led to numerous efforts in the later decades of the 19th century to represent electrical, magnetic, and gravitational fields, primarily with mechanical models. No model emerged that explained the existing phenomena; in particular, no good model accounted for stellar aberration, the apparent shift in the positions of stars caused by the Earth's relative velocity. The best models required the ether to be stationary while the Earth moved, but experimental efforts to measure the effect of Earth's motion through the aether found no effect.
In 1892 Hendrik Lorentz proposed a modified aether based on the emerging microscopic molecular model rather than the strictly macroscopic continuous theory of Maxwell. Lorentz investigated the mutual interaction of moving solitary electrons within a stationary aether. He rederived Maxwell's equations in this way but, critically, in the process he changed them to represent the wave in the coordinates of the moving electrons. He showed that the wave equations had the same form if they were transformed using a particular scaling factor, \(\gamma = 1/\sqrt{1 - v^{2}/c^{2}}\), where \(v\) is the velocity of the moving electrons and \(c\) is the speed of light. Lorentz noted that if this factor were applied as a length contraction to moving matter in a stationary ether, it would eliminate any effect of motion through the ether, in agreement with experiment.
In 1899, Henri Poincaré questioned the existence of an aether, showing that the principle of relativity prohibits the absolute motion assumed by proponents of the aether model. He named the transformation used by Lorentz the Lorentz transformation but interpreted it as a transformation between two inertial frames with relative velocity . This transformation makes the electromagnetic equations look the same in every uniformly moving inertial frame. Then, in 1905, Albert Einstein demonstrated that the principle of relativity, applied to the simultaneity of time and the constant speed of light, precisely predicts the Lorentz transformation. This theory of special relativity quickly became the modern concept of spacetime.
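For reference, the transformation named here takes the following textbook form for two inertial frames in standard configuration (relative velocity v along the shared x-axis); this statement is added for clarity and is not quoted from the sources above:

\[
x' = \gamma\,(x - v t), \qquad
y' = y, \qquad z' = z, \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right),
\qquad \text{with } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]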
Thus the aether model, initially so very different from action at a distance, slowly changed to resemble simple empty space.
In 1905, Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. However, until 1915 gravity stood apart as a force still described by action-at-a-distance. In that year, Einstein showed that a field theory of spacetime, general relativity, consistent with relativity can explain gravity. New effects resulting from this theory were dramatic for cosmology but minor for planetary motion and physics on Earth.
Einstein himself noted Newton's "enormous practical success".
Modern action at a distance
In the early decades of the 20th century, Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker independently developed non-instantaneous models for action at a distance consistent with special relativity. In 1949 John Archibald Wheeler and Richard Feynman built on these models to develop a new field-free theory of electromagnetism.
While Maxwell's field equations are generally successful, the Lorentz model of a moving electron interacting with the field encounters mathematical difficulties: the self-energy of the moving point charge within the field is infinite. The Wheeler–Feynman absorber theory of electromagnetism avoids the self-energy issue. They interpret Abraham–Lorentz force, the apparent force resisting electron acceleration, as a real force returning from all the other existing charges in the universe.
The Wheeler–Feynman theory has inspired new thinking about the arrow of time and about the nature of quantum non-locality. The theory has implications for cosmology; it has been extended to quantum mechanics. A similar approach has been applied to develop an alternative theory of gravity consistent with general relativity. John G. Cramer has extended the Wheeler–Feynman ideas to create the transactional interpretation of quantum mechanics.
"Spooky action at a distance"
Albert Einstein wrote to Max Born about issues in quantum mechanics in 1947 and used a phrase translated as "spooky action at a distance", and in 1964, John Stewart Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. The phrase has been picked up and used as a description for the cause of small non-classical correlations between physically separated measurement of entangled quantum states. The correlations are predicted by quantum mechanics (the Bell theorem) and verified by experiments (the Bell test). Rather than a postulate like Newton's gravitational force, this use of "action-at-a-distance" concerns observed correlations which cannot be explained with localized particle-based models. Describing these correlations as "action-at-a-distance" requires assuming that particles became entangled and then traveled to distant locations, an assumption that is not required by quantum mechanics.
Force in quantum field theory
Quantum field theory does not need action at a distance. At the most fundamental level, only four forces are needed. Each force is described as resulting from the exchange of specific bosons. Two are short range: the strong interaction, mediated by gluons (with mesons carrying only the residual nuclear force), and the weak interaction, mediated by the W and Z bosons; two are long range: electromagnetism, mediated by the photon, and gravity, hypothesized to be mediated by the graviton. However, the entire concept of force is of secondary concern in advanced modern particle physics. Energy forms the basis of physical models, and the word action has shifted away from implying a force to a specific technical meaning, an integral over the difference between kinetic and potential energy.
See also
References
External links
Force
Concepts in physics | Action at a distance | [
"Physics",
"Mathematics"
] | 3,137 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"nan",
"Wikipedia categories named after physical quantities",
"Matter"
] |
571,100 | https://en.wikipedia.org/wiki/Penthouse%20apartment | A penthouse is an apartment or unit traditionally on the highest floor of an apartment building, condominium, hotel, or tower. Penthouses are typically differentiated from other apartments by luxury features. The term 'penthouse' originally referred, and sometimes still does refer, to a separate smaller 'house' that was constructed on the roof of an apartment building. Architecturally it refers specifically to a structure on the roof of a building that is set back from its outer walls. These structures do not have to occupy the entire roof deck. Recently, luxury high rise apartment buildings have begun to designate multiple units on the entire top residential floor or multiple higher residential floors including the top floor as penthouse apartments, and outfit them to include ultra-luxury fixtures, finishes, and designs which are different from all other residential floors of the building. These penthouse apartments are not typically set back from the building's outer walls, but are instead flush with the rest of the building and simply differ in size, luxury, and consequently price. High-rise buildings can also have structures known as mechanical penthouses that enclose machinery or equipment such as the drum mechanisms for an elevator.
Etymology
The name penthouse is derived from , an Old French word meaning "attached building" or "appendage". The modern spelling is influenced by a 16th-century folk etymology that combines the Middle French word for "slope" () with the English noun house (the meaning at that time was "attached building with a sloping roof or awning").
Development
European designers and architects long recognized the potential in creating living spaces that could make use of rooftops and such setbacks. Penthouses first appeared in US cities in the 1920s with the exploitation of roof spaces for upscale property. The first recognized development was atop the Plaza Hotel overlooking Central Park in New York City in 1923. Its success caused a rapid development of similar luxury penthouse apartments in most major cities in the United States in the following years.
The popularity of penthouses stemmed from the setbacks allowing for significantly larger private outdoor terrace spaces than traditional cantilevered balconies. Due to the desirability of having outdoor space, buildings began to be designed with setbacks that could accommodate the development of apartments and terraces on their uppermost levels.
Modern penthouses may or may not have terraces. Upper floor space may be divided among several apartments, or a single apartment may occupy an entire floor. Penthouses often have their own private access where access to any roof, terrace, and any adjacent setback is exclusively controlled.
Design
Penthouses can also differentiate themselves through luxurious amenities such as high-end appliances, fittings made of the finest materials, luxurious flooring, and more.
Features not found in the majority of apartments in the building may include a private entrance or elevator, or higher/vaulted ceilings. In buildings consisting primarily of single level apartments, penthouse apartments may be distinguished by having two or more levels. They may also have such features as a terrace, fireplace, more floor area, oversized windows, multiple master suites, den/office space, hot-tubs, and more. They might be equipped with luxury kitchens featuring stainless steel appliances, granite counter-tops, breakfast bar/island, and more.
Penthouse residents often have fine views of the city skyline. Access to a penthouse apartment is usually provided by a separate elevator. Residents can also access a number of building services, such as pickup and delivery of everything from dry cleaning to dinner; reservations to restaurants and events made by building staffers; and other concierge services.
Penthouse apartments can also be situated on the corner of a building, providing 90° or more views of the surrounding skyline.
Cultural references
Penthouse apartments are considered to be at the top of their markets, and are generally the most expensive, with expansive views, large living spaces, and top-of-the-line amenities. Accordingly, they are often associated with a luxury lifestyle. Publisher Bob Guccione named his magazine Penthouse, with the trademark phrase "Life on top".
See also
Basement apartment
Luxury apartment
Roof garden
Notes
References
External links
Apartment types
Houses | Penthouse apartment | [
"Technology"
] | 819 | [
"Structural system",
"Houses"
] |
571,107 | https://en.wikipedia.org/wiki/Kale%20%28moon%29 | Kale , also known as , is a retrograde irregular satellite of Jupiter. It was discovered in 2001 by the astronomers Scott S. Sheppard, David C. Jewitt, and Jan Kleyna, and was originally designated as .
Kale is about in diameter, and orbits Jupiter at an average distance of in 736.55 days, at an inclination of 165° to the ecliptic (166° to Jupiter's equator), in a retrograde direction and with an orbital eccentricity of 0.2011.
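A rough consistency check of the quoted 736.55-day period can be made with Kepler's third law. The semi-major axis (about 2.3 × 10^7 km) and Jupiter's gravitational parameter used below are assumed round figures, so the result is only approximate.

```python
import math

GM_JUPITER = 1.267e17   # m^3/s^2, approximate gravitational parameter of Jupiter (assumed)
a = 2.3e10              # m, assumed semi-major axis of Kale's orbit (~2.3e7 km)

period_days = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER) / 86400
print(round(period_days))   # roughly 710 days, the same order as the quoted 736.55
```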
It was named in August 2003 after Kale, one of the Charites (, , 'Graces'), daughters of Zeus (Jupiter). Kale is the spouse of Hephaestus according to some authors (although most have Aphrodite play that role).
It belongs to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between and at an inclination of about 165°.
References
Carme group
Moons of Jupiter
Irregular satellites
Discoveries by Scott S. Sheppard
20011209
Discoveries by David C. Jewitt
Discoveries by Jan Kleyna
Moons with a retrograde orbit | Kale (moon) | [
"Astronomy"
] | 234 | [
"Astronomy stubs",
"Planetary science stubs"
] |
571,109 | https://en.wikipedia.org/wiki/Dirichlet%20problem | In mathematics, a Dirichlet problem asks for a function which solves a specified partial differential equation (PDE) in the interior of a given region that takes prescribed values on the boundary of the region.
The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:
Given a function f that has values everywhere on the boundary of a region in \(\mathbb{R}^n\), is there a unique continuous function u, twice continuously differentiable in the interior and continuous on the boundary, such that u is harmonic in the interior and u = f on the boundary?
This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
History
The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem into a problem of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of the Dirichlet's problem were taken by Karl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution. However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data.
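The "Dirichlet's energy" referred to above is, up to a conventional factor of 1/2, the functional

\[
E[u] \;=\; \int_{D} |\nabla u|^{2}\, dx ,
\]

minimized over functions taking the prescribed boundary values u = f on the boundary of D; Dirichlet's principle is the assertion that the minimizer is precisely the harmonic function solving the boundary value problem, the assertion that Weierstrass criticized and that Hilbert's direct method later justified.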
General solution
For a domain having a sufficiently smooth boundary , the general solution to the Dirichlet problem is given by
where G(x, s) is the Green's function for the partial differential equation, and
\(\partial G/\partial n\) is the derivative of the Green's function along the inward-pointing unit normal vector \(\hat{n}\). The integration is performed on the boundary, with measure ds. The function \(\nu(s)\) is given by the unique solution to the Fredholm integral equation of the second kind,
The Green's function to be used in the above integral is one which vanishes on the boundary:
\[ G(x, s) = 0 \]
for \(x\) in the interior of the domain and \(s\) on the boundary. Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation.
Existence
The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary is sufficiently smooth and f is continuous. More precisely, it has a solution when the boundary is of class \(C^{1,\alpha}\)
for some \(\alpha\), \(0 < \alpha < 1\), where \(C^{1,\alpha}\) denotes the Hölder condition.
Example: the unit disk in two dimensions
In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R2 is given by the Poisson integral formula.
If f is a continuous function on the boundary of the open unit disk D, then the solution to the Dirichlet problem is given by
\[
u(re^{i\theta}) = \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\varphi})\, \frac{1 - r^{2}}{1 - 2r\cos(\theta - \varphi) + r^{2}} \, d\varphi, \qquad 0 \le r < 1.
\]
The solution u is continuous on the closed unit disk and harmonic on the open disk D.
The integrand is known as the Poisson kernel; this solution follows from the Green's function in two dimensions:
\[
G(z, x) = -\frac{1}{2\pi}\log|z - x| + \gamma(z, x),
\]
where \(\gamma(z, x)\) is harmonic (\(\Delta_z \gamma(z, x) = 0\)) and chosen such that \(G(z, x) = 0\) for \(z\) on the boundary of the disk.
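A minimal numerical illustration of the Poisson integral formula above; the quadrature rule, the test boundary data, and the evaluation point are arbitrary choices made for the example.

```python
import math

def poisson_solution(f, r, theta, n=2000):
    """Approximate u(r*e^{i*theta}) on the unit disk via the Poisson integral,
    using a uniform n-point rule over the boundary circle."""
    total = 0.0
    for k in range(n):
        phi = 2 * math.pi * k / n
        kernel = (1 - r * r) / (1 - 2 * r * math.cos(theta - phi) + r * r)
        total += kernel * f(phi)
    return total / n   # the 1/(2*pi) factor cancels against the step 2*pi/n

# Boundary data f(phi) = cos(phi); its harmonic extension is u = r*cos(theta),
# so the two printed numbers should agree to several digits.
print(poisson_solution(math.cos, 0.5, 0.3), 0.5 * math.cos(0.3))
```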
Methods of solution
For bounded domains, the Dirichlet problem can be solved using the Perron method, which relies on the maximum principle for subharmonic functions. This approach is described in many text books. It is not well-suited to describing smoothness of solutions when the boundary is smooth. Another classical Hilbert space approach through Sobolev spaces does yield such information. The solution of the Dirichlet problem using Sobolev spaces for planar domains can be used to prove the smooth version of the Riemann mapping theorem. has outlined a different approach for establishing the smooth Riemann mapping theorem, based on the reproducing kernels of Szegő and Bergman, and in turn used it to solve the Dirichlet problem. The classical methods of potential theory allow the Dirichlet problem to be solved directly in terms of integral operators, for which the standard theory of compact and Fredholm operators is applicable. The same methods work equally for the Neumann problem.
Generalizations
Dirichlet problems are typical of elliptic partial differential equations, and potential theory, and the Laplace equation in particular. Other examples include the biharmonic equation and related equations in elasticity theory.
They are one of several types of classes of PDE problems defined by the information given at the boundary, including Neumann problems and Cauchy problems.
Example: equation of a finite string attached to one moving wall
Consider the Dirichlet problem for the wave equation describing a string attached between walls with one end attached permanently and the other moving with the constant velocity i.e. the d'Alembert equation on the triangular region of the Cartesian product of the space and the time:
As one can easily check by substitution, the solution fulfilling the first condition is
Additionally we want
Substituting
we get the condition of self-similarity
where
It is fulfilled, for example, by the composite function
with
thus in general
where is a periodic function with a period :
and we get the general solution
See also
Lebesgue spine
Notes
References
S. G. Krantz, The Dirichlet Problem. §7.3.3 in Handbook of Complex Variables. Boston, MA: Birkhäuser, p. 93, 1999.
S. Axler, P. Gorkin, K. Voss, The Dirichlet problem on quadratic surfaces, Mathematics of Computation 73 (2004), 637–651.
Gérard, Patrick; Leichtnam, Éric: Ergodic properties of eigenfunctions for the Dirichlet problem. Duke Math. J. 71 (1993), no. 2, 559–607.
External links
Potential theory
Partial differential equations
Fourier analysis
Mathematical problems
Boundary value problems | Dirichlet problem | [
"Mathematics"
] | 1,387 | [
"Functions and mappings",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Mathematical problems"
] |
571,196 | https://en.wikipedia.org/wiki/Ophiocordyceps%20sinensis | Ophiocordyceps sinensis (synonym Cordyceps sinensis), known colloquially as caterpillar fungus, is an entomopathogenic fungus (a fungus that grows on insects) in the family Ophiocordycipitaceae. It is mainly found in the meadows above on the Tibetan Plateau in Tibet and the Himalayan regions of Bhutan, India, and Nepal. It parasitizes larvae of ghost moths and produces a fruiting body which is valued in traditional Chinese medicine as an aphrodisiac. Caterpillar fungus contains the compound cordycepin, an adenosine derivative. However, naturally harvested fruiting bodies often contain high amounts of arsenic and other heavy metals, making them potentially toxic. As a result, their sale has been strictly regulated by China's State Administration for Market Regulation since 2016.
O. sinensis parasitizes the larvae of moths within the family Hepialidae, specifically genera found on the Tibetan Plateau and in the Himalayas, between elevations of . The fungus germinates in the living larva, kills and mummifies it, and then a dark brown stalk-like fruiting body which is a few centimeters long emerges from the corpse and stands upright.
O. sinensis is classified as a medicinal mushroom, and its use has a long history in traditional Chinese medicine as well as traditional Tibetan medicine. The hand-collected, intact fungus-caterpillar body is valued by herbalists as medicine, and because of its cost, its use is also a status symbol.
The fruiting bodies of the fungus are not yet cultivated commercially, but the mycelium form can be cultivated in vitro. Overharvesting and overexploitation have led to the classification of O. sinensis as an endangered species in China. Additional research needs to be carried out in order to understand its morphology and growth habits for conservation and optimum utilization.
Taxonomic history and systematics
Morphological features
Ophiocordyceps sinensis consists of two parts, a fungal endosclerotium (within the caterpillar) and stroma. The stroma is the upper fungal part and is dark brown or black, but can be a yellow color when fresh, and longer than the caterpillar itself, usually 4–10 cm. It grows singly from the larval head, and is clavate, sublanceolate or fusiform, and distinct from the stipe (stalk). The stipe is slender, glabrous, and longitudinally furrowed or ridged.
The fertile part of the stroma is the head. The head is granular because of the ostioles of the embedded perithecia. The perithecia are ordinally arranged and ovoid. The asci are cylindrical or slightly tapering at both ends, and may be straight or curved, with a capitate and hemispheroid apex, and may be two to four spored. Similarly, ascospores are hyaline, filiform, multiseptate at a length of 5–12 μm and subattenuated on both sides. Perithecial, ascus and ascospore characters in the fruiting bodies are the key identification characteristics of O. sinensis.
Ophiocordyceps (Petch) Kobayasi species produce whole ascospores and do not separate into part spores. This is different from other Cordyceps species, which produce either immersed or superficial perithecia perpendicular to stromal surface, and the ascospores at maturity are disarticulated into part spores. Generally Cordyceps species possess brightly colored and fleshy stromata, but O. sinensis has dark pigments and tough to pliant stromata, a typical characteristic feature of most of the Ophiocordyceps species.
Developments in classification
The species was first described scientifically by Miles Berkeley in 1843 as Sphaeria sinensis; Pier Andrea Saccardo transferred the species to the genus Cordyceps in 1878. The fungus was known as Cordyceps sinensis until 2007, when molecular analysis was used to amend the classification of the Cordycipitaceae and the Clavicipitaceae, resulting in the naming of a new family Ophiocordycipitaceae and the transfer of several Cordyceps species including C. sinensis to the genus Ophiocordyceps.
Common names
In Tibet, it is known as yartsa gunbu ("summer grass winter worm"). The name was first recorded in the 15th century by the Tibetan doctor Zurkhar Nyamnyi Dorje. In colloquial Tibetan yartsa gunbu is often shortened to simply "bu" or "yartsa". The Tibetan name is transliterated in Nepali as यार्चागुन्बू, यार्चागुन्बा, yarshagumba, yarchagumba or yarsagumba. The transliteration in Bhutan is .
In India, it is known as keera jhar, keeda jadi, keeda ghas or in Nepali, Hindi and Garhwali.
It is known in Chinese as dōng chóng xià cǎo (冬蟲夏草), meaning "winter worm, summer grass", which is a literal translation of the original Tibetan name. In traditional Chinese medicine, its name is often abbreviated as chong cao (蟲草 "insect plant"), a name that also applies to other Cordyceps species, such as C. militaris. In Japanese, it is known by the Japanese reading of the characters for the Chinese name, tōchūkasō. Confusingly, Chinese English-language texts sometimes refer to Cordyceps sinensis as aweto, which is the Māori name for Ophiocordyceps robertsii, a species from south-eastern Australia and New Zealand.
The English term "vegetable caterpillar" is a misnomer, as no plant is involved. "Caterpillar fungus" is a preferred term.
Nomenclature of the anamorph
Since the 1980s, 22 species in 13 genera have been attributed to the anamorph (asexually reproducing mold-like form) of O. sinensis. Of the 22 species, Cephalosporium acreomonium is a zygomycetous species of Umbelopsis, and Chrysosporium sinense has very low similarity in RAPD polymorphism, so neither is the anamorph. Likewise, Cephalosporium dongchongxiacae, C. sp. sensu, Hirsutella sinensis, H. hepiali and Synnematium sinnense are synonymous, and of these only H. sinensis is validly published. Cephalosporium sinensis might be synonymous with H. sinensis, but valid information is lacking. Isaria farinosa has been combined into Paecilomyces farinosus and is not the anamorph. Several isolates of Isaria sp., Verticella sp., Scydalium sp. and Stachybotrys sp. were identified only to genus level, and it is doubtful that they are the anamorph. Mortierella hepiali is discarded as the anamorph because it belongs to Zygomycota. Paecilomyces sinensis and Sporothrix insectorum are discarded on molecular evidence. P. lingi appeared in only one article and is discarded because of incomplete information. Tolypocladium sinense, P. hepiali and Scydalium hepiali have no valid supporting information and thus are not considered anamorphs of Ophiocordyceps sinensis. V. sinensis is likewise not considered the anamorph, as there is no validly published information. Similarly, Metarhizium anisopliae is not considered the anamorph because it has a widely distributed host range and is not restricted to high altitudes.
Thus Hirsutella sinensis is considered the validly published anamorph of O. sinensis. Cordyceps nepalensis and C. multiaxialis, which have morphological characteristics similar to those of O. sinensis, also have almost identical or identical ITS sequences to O. sinensis and its presumed anamorph, H. sinensis. This further confirms H. sinensis to be the anamorph of O. sinensis and suggests that C. nepalensis and C. multiaxialis are synonyms. Evidence based on microcyclic conidiation from ascospores and molecular studies supports H. sinensis as the anamorph of the caterpillar fungus, O. sinensis.
Ecology and life cycle
The caterpillars prone to infection by O. sinensis generally live underground in alpine grass and shrub-lands on the Tibetan Plateau and the Himalayas at an altitude between . The fungus is reported from the northern range of Nepal, Bhutan, and also from the northern states of India, apart from northern Yunnan, eastern Qinghai, eastern Tibet, western Sichuan, southwestern Gansu provinces. Fifty-seven taxa from several genera (37 Thitarodes, 1 Bipectilus, 1 Endoclita, 1 Gazoryctra, 3 Pharmacis, and 14 others not correctly identified to genus) are recognized as potential hosts of O. sinensis.
The stalk-like dark brown to black fruiting body (or mushroom) grows out of the head of the dead caterpillar and emerges from the soil in alpine meadows by early spring. During late summer, the fruiting body disperses spores. The caterpillars, which live underground feeding on roots, are most vulnerable to the fungus after shedding their skin, during late summer. In late autumn, chemicals on the skin of the caterpillar interact with the fungal spores and release the fungal mycelia, which then infect the caterpillar.
The infected larvae tend to remain underground vertical to the soil surface with their heads up. After invading a host larva, the fungus ramifies throughout the host and eventually kills it. Gradually the host larvae become rigid because of the production of fungal sclerotia. Fungal sclerotia are multihyphal structures that can remain dormant and then germinate to produce spores. After overwintering, the fungus ruptures the host body, forming the fruiting body, a sexual sporulating structure (a perithecial stroma) from the larval head that is connected to the sclerotia (dead larva) below ground and grows upward to emerge from the soil to complete the cycle.
The slow growing O. sinensis grows at a comparatively low temperature, i.e., below 21 °C. Temperature requirements and growth rates are crucial factors that distinguish O. sinensis from other similar fungi. Climate change is suspected to be negatively affecting the mountain organism.
Use in traditional Asian medicines
The use of caterpillar fungus as folk medicine apparently originated in Tibet and Nepal. So far the oldest known text documenting its use was written in the late 15th century by the Tibetan doctor Zurkhar Nyamnyi Dorje (1439–1475) in his text "Instructions on a Myriad of Medicines", where he describes its use as an aphrodisiac.
The first mention of Ophiocordyceps sinensis in traditional Chinese medicine was in Wang Ang’s 1694 compendium of materia medica, Ben Cao Bei Yao. In the 18th century it was listed in Wu Yiluo's Ben cao cong xin ("New compilation of materia medica"). The ethno-mycological knowledge on caterpillar fungus among the Nepalese people is documented. The entire fungus-caterpillar combination is hand-collected for medicinal use.
In traditional Chinese medicine, it is regarded as having an excellent balance of yin and yang, as it is considered to be composed of both an animal and a vegetable. The fungus is now cultivated on an industrial scale for use in traditional Chinese medicine. However, no one has so far succeeded in rearing the fungus by infecting cultivated caterpillars; all products derived from cultured Ophiocordyceps come from mycelia grown on grains or in liquids.
Economics and impact
In rural Tibet, yartsa gunbu has become the most important source of cash income. The fungi contributed 40% of the annual cash income to local households and 8.5% to the GDP in 2004. Prices have increased continuously, especially since the late 1990s. In 2008, one kilogram traded for US$3,000 (lowest quality) to over US$18,000 (best quality, largest larvae). The annual production on the Tibetan Plateau was estimated in 2009 at 80–175 tons. The Himalayan Ophiocordyceps production might not exceed a few tons.
In 2004 the value of a kilogram of caterpillars was estimated at 30,000 to 60,000 Nepali rupees in Nepal, and about Rs 100,000 in India. In 2011, the value of a kilogram of caterpillars was estimated at 350,000 to 450,000 Nepali rupees in Nepal. A 2012 BBC article indicated that in north Indian villages a single fungus was worth Rs 150 (about £2 or $3), which is more than the daily wage of a manual labourer. In 2012, a pound of top-quality yartsa had reached retail prices of $50,000.
The price of Ophiocordyceps sinensis is reported to have increased dramatically on the Tibetan Plateau, about 900% between 1998 and 2008, an annual average of over 20% (after inflation). However, the value of large caterpillar fungus has increased more dramatically than small Cordyceps, regarded as lower quality.
Because of its high value, inter-village conflicts over access to its grassland habitats have become a headache for local governing bodies, and in several cases people have been killed. In November 2011, a court in Nepal convicted 19 villagers over the murder of a group of farmers during a fight over the prized aphrodisiac fungus. Seven farmers were killed in the remote northern district of Manang in June 2009 after going to forage for yarchagumba.
Its value gave it a role in the Nepalese Civil War, as the Nepalese Maoists and government forces fought for control of the lucrative export trade during the June–July harvest season. Collecting yarchagumba in Nepal had only been legalised in 2001, and now demand is highest in countries such as China, Thailand, Vietnam, Korea and Japan. By 2002, the 'herb' was valued at R 105,000 ($1,435) per kilogram, allowing the government to charge a royalty of R 20,000 ($280) per kilogram.
The search for Ophiocordyceps sinensis is often perceived to threaten the environment of the Tibetan Plateau where it grows. While it has been collected for centuries and is still common in such areas, current collection rates are much higher than in historical times.
In the Kingdom of Bhutan, Ophiocordyceps sinensis has recently also begun to be harvested. The quality of the Bhutanese variety has been shown to be equal to that of the Tibetan one.
Cultivated O. sinensis mycelium is an alternative to wild-harvested O. sinensis, and producers claim it may offer improved consistency. Artificial culture of O. sinensis is typically by growth of pure mycelia in liquid culture (in China) or on grains (in the West).
According to statistics from Vietnam's Ministry of Agriculture and Rural Development, the production of cultivated Ophiocordyceps sinensis in Vietnam reached about 1,000 tons in 2022, a fivefold increase compared to 2017. The selling price of fresh O. sinensis ranges from 10–20 million VND/kg, while dried O. sinensis ranges from 100–200 million VND/kg. The economic value of cultivated O. sinensis in Vietnam is therefore estimated at around 10,000 billion VND per year. Over the period 2017–2022, the production of cultivated O. sinensis grew at an average rate of 40% per year.
In India, fuelwood cutting by Ophiocordyceps sinensis collectors near the treeline is reported to be depleting populations of tree species such as of Himalayan birch Betula utilis.
See also
List of fungi by conservation status
References
Further reading
Winkler, D. 2005. Yartsa Gunbu – Cordyceps sinensis. Economy, Ecology & Ethno-mycology of a Fungus Endemic to the Tibetan Plateau. In: A. BOESI & F. CARDI (eds.). Wildlife and plants in traditional and modern Tibet: Conceptions, Exploitation and Conservation. Memorie della Società Italiana di Scienze Naturali e del Museo Civico di Storia Naturale di Milano, Vol. 33.1:69–85.
External links
Yartsa Gunbu (Cordyceps sinensis) in Tibet
An Electronic Monograph of Cordyceps and Related Fungi
Cordyceps information from Drugs.com
Cordyceps sinensis (Berk.) Sacc. Medicinal Plant Images Database (School of Chinese Medicine, Hong Kong Baptist University)
Tibet’s Golden "Worm" August 2012 National Geographic (magazine)
Fungi described in 1843
Fungi of Asia
Ophiocordycipitaceae
Medicinal fungi
Fungi used in traditional Chinese medicine
Taxa named by Miles Joseph Berkeley
Parasitic fungi
Fungus species | Ophiocordyceps sinensis | [
"Biology"
] | 3,634 | [
"Fungi",
"Fungus species"
] |
571,274 | https://en.wikipedia.org/wiki/Drug%20discovery | In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new candidate medications are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery, as with penicillin. More recently, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that had a desirable therapeutic effect in a process known as classical pharmacology. After sequencing of the human genome allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy.
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, the process of drug development can continue. If successful, clinical trials are developed.
Modern drug discovery is thus usually a capital-intensive process that involves large investments by pharmaceutical industry corporations as well as national governments (who provide grants and loan guarantees). Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity was about US$1.8 billion. In the 21st century, basic discovery research is funded primarily by governments and by philanthropic organizations, while late-stage development is funded primarily by pharmaceutical companies or venture capitalists. To be allowed to come to market, drugs must undergo several successful phases of clinical trials, and pass through a new drug approval process, called the New Drug Application in the United States.
Discovering drugs that may be a commercial success, or a public health success, involves a complex interaction between investors, industry, academia, patent laws, regulatory exclusivity, marketing, and the need to balance secrecy with communication. Meanwhile, for disorders whose rarity means that no large commercial success or public health effect can be expected, the orphan drug funding process ensures that people who experience those disorders can have some hope of pharmacotherapeutic advances.
History
The idea that the effect of a drug in the human body is mediated by specific interactions of the drug molecule with biological macromolecules, (proteins or nucleic acids in most cases) led scientists to the conclusion that individual chemicals are required for the biological activity of the drug. This made for the beginning of the modern era in pharmacology, as pure chemicals, instead of crude extracts of medicinal plants, became the standard drugs. Examples of drug compounds isolated from crude preparations are morphine, the active agent in opium, and digoxin, a heart stimulant originating from Digitalis lanata. Organic chemistry also led to the synthesis of many of the natural products isolated from biological sources.
Historically, substances, whether crude extracts or purified chemicals, were screened for biological activity without knowledge of the biological target. Only after an active substance was identified was an effort made to identify the target. This approach is known as classical pharmacology, forward pharmacology, or phenotypic drug discovery.
Later, small molecules were synthesized to specifically target a known physiological/pathological pathway, avoiding the mass screening of banks of stored compounds. This led to great success, such as the work of Gertrude Elion and George H. Hitchings on purine metabolism, the work of James Black on beta blockers and cimetidine, and the discovery of statins by Akira Endo. Another champion of the approach of developing chemical analogues of known active substances was Sir David Jack at Allen and Hanbury's, later Glaxo, who pioneered the first inhaled selective beta2-adrenergic agonist for asthma, the first inhaled steroid for asthma, ranitidine as a successor to cimetidine, and supported the development of the triptans.
Gertrude Elion, working mostly with a group of fewer than 50 people on purine analogues, contributed to the discovery of the first anti-viral; the first immunosuppressant (azathioprine) that allowed human organ transplantation; the first drug to induce remission of childhood leukemia; pivotal anti-cancer treatments; an anti-malarial; an anti-bacterial; and a treatment for gout.
Cloning of human proteins made possible the screening of large libraries of compounds against specific targets thought to be linked to specific diseases. This approach is known as reverse pharmacology and is the most frequently used approach today.
In the 2020s, quantum computing started to be used to reduce the time needed for drug discovery.
Targets
A "target" is produced within the pharmaceutical industry. Generally, the "target" is the naturally existing cellular or molecular structure involved in the pathology of interest where the drug-in-development is meant to act. However, the distinction between a "new" and "established" target can be made without a full understanding of just what a "target" is. This distinction is typically made by pharmaceutical companies engaged in the discovery and development of therapeutics. In an estimate from 2011, 435 human genome products were identified as therapeutic drug targets of FDA-approved drugs.
"Established targets" are those for which there is a good scientific understanding, supported by a lengthy publication history, of both how the target functions in normal physiology and how it is involved in human pathology. This does not imply that the mechanism of action of drugs that are thought to act through a particular established target is fully understood. Rather, "established" relates directly to the amount of background information available on a target, in particular functional information. In general, "new targets" are all those targets that are not "established targets" but which have been or are the subject of drug discovery efforts. The majority of targets selected for drug discovery efforts are proteins, such as G-protein-coupled receptors (GPCRs) and protein kinases.
Screening and design
The process of finding a new drug against a chosen target for a particular disease usually involves high-throughput screening (HTS), wherein large libraries of chemicals are tested for their ability to modify the target. For example, if the target is a novel GPCR, compounds will be screened for their ability to inhibit or stimulate that receptor (see antagonist and agonist): if the target is a protein kinase, the chemicals will be tested for their ability to inhibit that kinase.
Another function of HTS is to show how selective the compounds are for the chosen target, as one wants to find a molecule which will interfere with only the chosen target, but not other, related targets. To this end, other screening runs will be made to see whether the "hits" against the chosen target will interfere with other related targets – this is the process of cross-screening. Cross-screening is useful because the more unrelated targets a compound hits, the more likely that off-target toxicity will occur with that compound once it reaches the clinic.
It is unlikely that a perfect drug candidate will emerge from these early screening runs. One of the first steps is to screen for compounds that are unlikely to be developed into drugs; for example compounds that are hits in almost every assay, classified by medicinal chemists as "pan-assay interference compounds", are removed at this stage, if they were not already removed from the chemical library. It is often observed that several compounds are found to have some degree of activity, and if these compounds share common chemical features, one or more pharmacophores can then be developed. At this point, medicinal chemists will attempt to use structure–activity relationships (SAR) to improve certain features of the lead compound:
increase activity against the chosen target
reduce activity against unrelated targets
improve the druglikeness or ADME properties of the molecule.
This process will require several iterative screening runs, during which, it is hoped, the properties of the new molecular entities will improve, and allow the favoured compounds to go forward to in vitro and in vivo testing for activity in the disease model of choice.
Physicochemical properties associated with drug absorption include ionization (pKa) and solubility; permeability can be determined by PAMPA and Caco-2. PAMPA is attractive as an early screen due to the low consumption of drug and the low cost compared to tests such as Caco-2, gastrointestinal tract (GIT) and blood–brain barrier (BBB) assays, with which there is a high correlation.
A range of parameters can be used to assess the quality of a compound, or a series of compounds, as proposed in Lipinski's rule of five. Such parameters include calculated properties, such as cLogP to estimate lipophilicity, molecular weight and polar surface area, and measured properties, such as potency and in-vitro enzymatic clearance. Some descriptors, such as ligand efficiency (LE) and lipophilic efficiency (LiPE), combine such parameters to assess druglikeness.
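As an illustration of how such rule-based filters are applied in practice, the following Python sketch counts violations of Lipinski's rule of five for a single compound; the function name and the way the property values are supplied are assumptions made for the example, since in a real workflow they would come from a cheminformatics toolkit or from measurement.

def lipinski_violations(mol_weight, clogp, h_bond_donors, h_bond_acceptors):
    """Count violations of Lipinski's rule of five for one compound."""
    violations = 0
    if mol_weight > 500:          # molecular weight in daltons
        violations += 1
    if clogp > 5:                 # calculated lipophilicity
        violations += 1
    if h_bond_donors > 5:
        violations += 1
    if h_bond_acceptors > 10:
        violations += 1
    return violations

# Compounds with more than one violation are often deprioritized as oral drug candidates.
print(lipinski_violations(mol_weight=350.4, clogp=2.1, h_bond_donors=2, h_bond_acceptors=5))  # prints 0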
While HTS is a commonly used method for novel drug discovery, it is not the only method. It is often possible to start from a molecule which already has some of the desired properties. Such a molecule might be extracted from a natural product or even be a drug on the market which could be improved upon (so-called "me too" drugs). Other methods, such as virtual high throughput screening, where screening is done using computer-generated models and attempting to "dock" virtual libraries to a target, are also often used.
Another method for drug discovery is de novo drug design, in which a prediction is made of the sorts of chemicals that might (e.g.) fit into an active site of the target enzyme. For example, virtual screening and computer-aided drug design are often used to identify new chemical moieties that may interact with a target protein. Molecular modelling and molecular dynamics simulations can be used as a guide to improve the potency and properties of new drug leads.
There is also a paradigm shift in the drug discovery community to shift away from HTS, which is expensive and may only cover limited chemical space, to the screening of smaller libraries (maximum a few thousand compounds). These include fragment-based lead discovery (FBDD) and protein-directed dynamic combinatorial chemistry. The ligands in these approaches are usually much smaller, and they bind to the target protein with weaker binding affinity than hits that are identified from HTS. Further modifications through organic synthesis into lead compounds are often required. Such modifications are often guided by protein X-ray crystallography of the protein-fragment complex. The advantages of these approaches are that they allow more efficient screening and the compound library, although small, typically covers a large chemical space when compared to HTS.
Phenotypic screens have also provided new chemical starting points in drug discovery. A variety of models have been used including yeast, zebrafish, worms, immortalized cell lines, primary cell lines, patient-derived cell lines and whole animal models. These screens are designed to find compounds which reverse a disease phenotype such as death, protein aggregation, mutant protein expression, or cell proliferation as examples in a more holistic cell model or organism. Smaller screening sets are often used for these screens, especially when the models are expensive or time-consuming to run. In many cases, the exact mechanism of action of hits from these screens is unknown and may require extensive target deconvolution experiments to ascertain. The growth of the field of chemoproteomics has provided numerous strategies to identify drug targets in these cases.
Once a lead compound series has been established with sufficient target potency and selectivity and favourable drug-like properties, one or two compounds will then be proposed for drug development. The best of these is generally called the lead compound, while the other will be designated as the "backup". These decisions are generally supported by computational modelling innovations.
Nature as source
Traditionally, many drugs and other chemicals with biological activity have been discovered by studying chemicals that organisms create to affect the activity of other organisms for survival.
Despite the rise of combinatorial chemistry as an integral part of lead discovery process, natural products still play a major role as starting material for drug discovery. A 2007 report found that of the 974 small molecule new chemical entities developed between 1981 and 2006, 63% were natural derived or semisynthetic derivatives of natural products. For certain therapy areas, such as antimicrobials, antineoplastics, antihypertensive and anti-inflammatory drugs, the numbers were higher.
Natural products may be useful as a source of novel chemical structures for modern techniques of development of antibacterial therapies.
Plant-derived
Many secondary metabolites produced by plants have potential therapeutic medicinal properties. These secondary metabolites contain, bind to, and modify the function of proteins (receptors, enzymes, etc.). Consequently, plant derived natural products have often been used as the starting point for drug discovery.
History
Until the Renaissance, the vast majority of drugs in Western medicine were plant-derived extracts. This has resulted in a pool of information about the potential of plant species as important sources of starting materials for drug discovery. Botanical knowledge about different metabolites and hormones that are produced in different anatomical parts of the plant (e.g. roots, leaves, and flowers) are crucial for correctly identifying bioactive and pharmacological plant properties. Identifying new drugs and getting them approved for market has proved to be a stringent process due to regulations set by national drug regulatory agencies.
Jasmonates
Jasmonates are important in responses to injury and intracellular signals. They induce apoptosis and protein cascade via proteinase inhibitor, have defense functions, and regulate plant responses to different biotic and abiotic stresses. Jasmonates also have the ability to directly act on mitochondrial membranes by inducing membrane depolarization via release of metabolites.
Jasmonate derivatives (JAD) are also important in wound response and tissue regeneration in plant cells. They have also been identified to have anti-aging effects on human epidermal layer. It is suspected that they interact with proteoglycans (PG) and glycosaminoglycan (GAG) polysaccharides, which are essential extracellular matrix (ECM) components to help remodel the ECM. The discovery of JADs on skin repair has introduced newfound interest in the effects of these plant hormones in therapeutic medicinal application.
Salicylates
Salicylic acid (SA), a phytohormone, was initially derived from willow bark and has since been identified in many species. It is an important player in plant immunity, although its role is still not fully understood by scientists. They are involved in disease and immunity responses in plant and animal tissues. They have salicylic acid binding proteins (SABPs) that have shown to affect multiple animal tissues. The first discovered medicinal properties of the isolated compound was involved in pain and fever management. They also play an active role in the suppression of cell proliferation. They have the ability to induce death in lymphoblastic leukemia and other human cancer cells. One of the most common drugs derived from salicylates is aspirin, also known as acetylsalicylic acid, with anti-inflammatory and anti-pyretic properties.
Animal-derived
Some drugs used in modern medicine have been discovered in animals or are based on compounds found in animals. For example, the anticoagulant drugs, hirudin and its synthetic congener, bivalirudin, are based on saliva chemistry of the leech, Hirudo medicinalis. Used to treat type 2 diabetes, exenatide was developed from saliva compounds of the Gila monster, a venomous lizard.
Microbial metabolites
Microbes compete for living space and nutrients. To survive in these conditions, many microbes have developed abilities to prevent competing species from proliferating. Microbes are the main source of antimicrobial drugs. Streptomyces isolates have been such a valuable source of antibiotics, that they have been called medicinal molds. The classic example of an antibiotic discovered as a defense mechanism against another microbe is penicillin in bacterial cultures contaminated by Penicillium fungi in 1928.
Marine invertebrates
Marine environments are potential sources for new bioactive agents. Arabinose nucleosides discovered from marine invertebrates in 1950s, demonstrated for the first time that sugar moieties other than ribose and deoxyribose can yield bioactive nucleoside structures. It took until 2004 when the first marine-derived drug was approved. For example, the cone snail toxin ziconotide, also known as Prialt treats severe neuropathic pain. Several other marine-derived agents are now in clinical trials for indications such as cancer, anti-inflammatory use and pain. One class of these agents are bryostatin-like compounds, under investigation as anti-cancer therapy.
Chemical diversity
As above mentioned, combinatorial chemistry was a key technology enabling the efficient generation of large screening libraries for the needs of high-throughput screening. However, now, after two decades of combinatorial chemistry, it has been pointed out that despite the increased efficiency in chemical synthesis, no increase in lead or drug candidates has been reached. This has led to analysis of chemical characteristics of combinatorial chemistry products, compared to existing drugs or natural products. The chemoinformatics concept chemical diversity, depicted as distribution of compounds in the chemical space based on their physicochemical characteristics, is often used to describe the difference between the combinatorial chemistry libraries and natural products. The synthetic, combinatorial library compounds seem to cover only a limited and quite uniform chemical space, whereas existing drugs and particularly natural products, exhibit much greater chemical diversity, distributing more evenly to the chemical space. The most prominent differences between natural products and compounds in combinatorial chemistry libraries is the number of chiral centers (much higher in natural compounds), structure rigidity (higher in natural compounds) and number of aromatic moieties (higher in combinatorial chemistry libraries). Other chemical differences between these two groups include the nature of heteroatoms (O and N enriched in natural products, and S and halogen atoms more often present in synthetic compounds), as well as level of non-aromatic unsaturation (higher in natural products). As both structure rigidity and chirality are well-established factors in medicinal chemistry known to enhance compounds specificity and efficacy as a drug, it has been suggested that natural products compare favourably to today's combinatorial chemistry libraries as potential lead molecules.
Screening
Two main approaches exist for the finding of new bioactive chemical entities from natural sources.
The first is sometimes referred to as random collection and screening of material, but the collection is far from random. Biological (often botanical) knowledge is often used to identify families that show promise. This approach is effective because only a small part of the earth's biodiversity has ever been tested for pharmaceutical activity. Also, organisms living in a species-rich environment need to evolve defensive and competitive mechanisms to survive. Those mechanisms might be exploited in the development of beneficial drugs.
A collection of plant, animal and microbial samples from rich ecosystems can potentially give rise to novel biological activities worth exploiting in the drug development process. One example of successful use of this strategy is the screening for antitumor agents by the National Cancer Institute, which started in the 1960s. Paclitaxel was identified from Pacific yew tree Taxus brevifolia. Paclitaxel showed anti-tumour activity by a previously undescribed mechanism (stabilization of microtubules) and is now approved for clinical use for the treatment of lung, breast, and ovarian cancer, as well as for Kaposi's sarcoma. Early in the 21st century, Cabazitaxel (made by Sanofi, a French firm), another relative of taxol has been shown effective against prostate cancer, also because it works by preventing the formation of microtubules, which pull the chromosomes apart in dividing cells (such as cancer cells). Other examples are: 1. Camptotheca (Camptothecin · Topotecan · Irinotecan · Rubitecan · Belotecan); 2. Podophyllum (Etoposide · Teniposide); 3a. Anthracyclines (Aclarubicin · Daunorubicin · Doxorubicin · Epirubicin · Idarubicin · Amrubicin · Pirarubicin · Valrubicin · Zorubicin); 3b. Anthracenediones (Mitoxantrone · Pixantrone).
The second main approach involves ethnobotany, the study of the general use of plants in society, and ethnopharmacology, an area inside ethnobotany, which is focused specifically on medicinal uses.
Artemisinin, an antimalarial agent from the sweet wormwood Artemisia annua, used in Chinese medicine since about 200 BC, is one drug used as part of combination therapy for multiresistant Plasmodium falciparum.
Additionally, as machine learning has become more advanced, virtual screening is now an option for drug developers. AI algorithms are being used to perform virtual screening of chemical compounds, which involves predicting the activity of a compound against a specific target. By using machine learning algorithms to analyse large amounts of chemical data, researchers can identify potential new drug candidates that are more likely to be effective against a specific disease. Algorithms such as nearest-neighbour classifiers, random forests (RF), extreme learning machines, support vector machines (SVMs), and deep neural networks (DNNs) are used for virtual screening based on synthesis feasibility and can also predict in vivo activity and toxicity.
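A minimal sketch of ligand-based virtual screening with one of these algorithms (a random forest) is shown below; the fingerprints and activity labels are synthetic stand-ins, since a real workflow would derive them from actual compound structures and assay results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 128))   # 200 known compounds as 128-bit fingerprints
y_train = rng.integers(0, 2, size=200)          # 1 = active against the target, 0 = inactive

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

X_library = rng.integers(0, 2, size=(1000, 128))   # an untested screening library
scores = model.predict_proba(X_library)[:, 1]      # predicted probability of activity
top_hits = np.argsort(scores)[::-1][:10]           # candidates to prioritize for experimental testing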
Structural elucidation
The elucidation of the chemical structure is critical to avoid the re-discovery of a chemical agent that is already known for its structure and chemical activity. Mass spectrometry is a method in which individual compounds are identified based on their mass/charge ratio, after ionization. Chemical compounds exist in nature as mixtures, so the combination of liquid chromatography and mass spectrometry (LC-MS) is often used to separate the individual chemicals. Databases of mass spectra for known compounds are available and can be used to assign a structure to an unknown mass spectrum. Nuclear magnetic resonance spectroscopy is the primary technique for determining chemical structures of natural products. NMR yields information about individual hydrogen and carbon atoms in the structure, allowing detailed reconstruction of the molecule's architecture.
New Drug Application
When a drug is developed with evidence throughout its history of research to show it is safe and effective for the intended use in the United States, the company can file an application – the New Drug Application (NDA) – to have the drug commercialized and available for clinical application. NDA status enables the FDA to examine all submitted data on the drug to reach a decision on whether to approve or not approve the drug candidate based on its safety, specificity of effect, and efficacy of doses.
See also
References
Further reading
External links | Drug discovery | [
"Chemistry",
"Biology"
] | 4,869 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
571,280 | https://en.wikipedia.org/wiki/Stratification%20%28mathematics%29 | Stratification has several usages in mathematics.
In mathematical logic
In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form Q₁ ∧ ⋯ ∧ Qₙ ∧ ¬Qₙ₊₁ ∧ ⋯ ∧ ¬Qₙ₊ₘ → P is stratified if and only if
there is a stratification assignment S that fulfills the following conditions:
If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short S(P) ≥ S(Q).
If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short S(P) > S(Q).
The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up.
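The two conditions above can be checked mechanically. The following Python sketch (the rule encoding and the function name are chosen for this example, not taken from any particular system) tests whether a candidate assignment S is a stratification for a set of rules given as (head, positive body, negative body) triples.

def is_stratification(rules, S):
    for head, pos_body, neg_body in rules:
        if any(S[head] < S[q] for q in pos_body):    # requires S(P) >= S(Q) for positive Q
            return False
        if any(S[head] <= S[q] for q in neg_body):   # requires S(P) > S(Q) for negated Q
            return False
    return True

# Example: "unreachable" is defined by negating "reachable", so it must sit in a higher stratum.
rules = [
    ("reachable", ["edge"], []),
    ("reachable", ["reachable", "edge"], []),
    ("unreachable", ["node"], ["reachable"]),
]
print(is_stratification(rules, {"edge": 0, "node": 0, "reachable": 0, "unreachable": 1}))  # True
print(is_stratification(rules, {"edge": 0, "node": 0, "reachable": 1, "unreachable": 1}))  # False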
Stratification is not only useful for guaranteeing unique interpretation of Horn clause
theories.
In a specific set theory
In New Foundations (NF) and related set theories, a formula φ in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function σ which sends each variable x appearing in φ (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula x ∈ y appearing in φ satisfies σ(x) + 1 = σ(y) and any atomic formula x = y appearing in φ satisfies σ(x) = σ(y). For example, x ∈ y ∧ y ∈ z is stratified by taking σ(x) = 0, σ(y) = 1 and σ(z) = 2, whereas x ∈ x is not stratified, since σ(x) + 1 = σ(x) has no solution; the Russell class {x | ¬(x ∈ x)} therefore has no stratified definition.
It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract under consideration. A set abstract satisfying this weaker condition is said to be weakly stratified.
The stratification of New Foundations generalizes readily to languages with more predicates and with term constructions. Each primitive predicate needs to have specified required displacements between the values of σ at its (bound) arguments in a (weakly) stratified formula. In a language with term constructions, terms themselves need to be assigned values under σ, with fixed displacements from the values of σ at each of their (bound) arguments in a (weakly) stratified formula. Defined term constructions are neatly handled by (possibly merely implicitly) using the theory of descriptions: a term (the x such that φ) must be assigned the same value under σ as the variable x.
A formula is stratified if and only if it is possible to assign types to all variables appearing
in the formula in such a way that it will make sense in a version TST of the theory of
types described in the New Foundations article, and this is probably the best way
to understand the stratification of New Foundations in practice.
The notion of stratification can be extended to the lambda calculus; this is found
in papers of Randall Holmes.
A motivation for the use of stratification is to address Russell's paradox, the antinomy considered to have undermined Frege's central work Grundgesetze der Arithmetik (1902).
In topology
In singularity theory, there is a different meaning, of a decomposition of a topological space X into disjoint subsets each of which is a topological manifold (so that in particular a stratification defines a partition of the topological space). This is not a useful notion when unrestricted; but when the various strata are defined by some recognisable set of conditions (for example being locally closed), and fit together manageably, this idea is often applied in geometry. Hassler Whitney and René Thom first defined formal conditions for stratification. See Whitney stratification and topologically stratified space.
In statistics
See stratified sampling.
Mathematical logic
Mathematical terminology
Set theory
Stratifications | Stratification (mathematics) | [
"Mathematics"
] | 846 | [
"Set theory",
"Stratifications",
"Mathematical logic",
"Topology",
"nan"
] |
571,303 | https://en.wikipedia.org/wiki/Position-independent%20code | In computing, position-independent code (PIC) or position-independent executable (PIE) is a body of machine code that executes properly regardless of its memory address. PIC is commonly used for shared libraries, so that the same library code can be loaded at a location in each program's address space where it does not overlap with other memory in use by, for example, other shared libraries. PIC was also used on older computer systems that lacked an MMU, so that the operating system could keep applications away from each other even within the single address space of an MMU-less system.
Position-independent code can be executed at any memory address without modification. This differs from absolute code, which must be loaded at a specific location to function correctly, and load-time locatable (LTL) code, in which a linker or program loader modifies a program before execution, so it can be run only from a particular memory location. Generating position-independent code is often the default behavior for compilers, but they may place restrictions on the use of some language features, such as disallowing use of absolute addresses (position-independent code has to use relative addressing). Instructions that refer directly to specific memory addresses sometimes execute faster, and replacing them with equivalent relative-addressing instructions may result in slightly slower execution, although modern processors make the difference practically negligible.
History
In early computers such as the IBM 701 (29 April 1952) or the UNIVAC I (31 March 1951) code was not position-independent: each program was built to load into and run from a particular address. Those early computers did not have an operating system and were not multitasking-capable. Programs were loaded into main storage (or even stored on magnetic drum for execution directly from there) and run one at a time. In such an operational context, position-independent code was not necessary.
Even on base and bounds systems such as the CDC 6600, the GE 625 and the UNIVAC 1107, once the OS loaded code into a job's storage, it could only run from the relative address at which it was loaded.
Burroughs introduced a segmented system, the B5000 (1961), in which programs addressed segments indirectly via control words on the stack or in the program reference table (PRT); a shared segment could be addressed via different PRT locations in different processes. Similarly, on the later B6500, all segment references were via positions in a stack frame.
The IBM System/360 (7 April 1964) was designed with truncated addressing similar to that of the UNIVAC III, with code position independence in mind. In truncated addressing, memory addresses are calculated from a base register and an offset. At the beginning of a program, the programmer must establish addressability by loading a base register; normally, the programmer also informs the assembler with a USING pseudo-op. The programmer can load the base register from a register known to contain the entry point address, typically R15, or can use the BALR (Branch And Link, Register form) instruction (with a R2 Value of 0) to store the next sequential instruction's address into the base register, which was then coded explicitly or implicitly in each instruction that referred to a storage location within the program. Multiple base registers could be used, for code or for data. Such instructions require less memory because they do not have to hold a full 24, 31, 32, or 64 bit address (4 or 8 bytes), but instead a base register number (encoded in 4 bits) and a 12–bit address offset (encoded in 12 bits), requiring only two bytes.
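A small worked example (with made-up values) shows why base–displacement addressing makes the code insensitive to where it is loaded: the instruction carries only the base register number and a 12-bit displacement, and only the base register value changes if the module is placed elsewhere.

load_address = 0x00012000        # wherever the module happens to be loaded
base_register = load_address     # established once at entry, e.g. via BALR
displacement = 0x1F4             # 12-bit offset encoded in the instruction (at most 0xFFF = 4095)
effective_address = base_register + displacement
print(hex(effective_address))    # 0x121f4; relocating the module changes only the register value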
This programming technique is standard on IBM S/360 type systems. It has been in use through to today's IBM System/z. When coding in assembly language, the programmer has to establish addressability for the program as described above and also use other base registers for dynamically allocated storage. Compilers automatically take care of this kind of addressing.
IBM's early operating system DOS/360 (1966) was not using virtual storage (since the early models of System S/360 did not support it), but it did have the ability to place programs to an arbitrary (or automatically chosen) storage location during loading via the PHASE name,* JCL (Job Control Language) statement.
So, on S/360 systems without virtual storage, a program could be loaded at any storage location, but this required a contiguous memory area large enough to hold that program. Sometimes memory fragmentation would occur from loading and unloading differently sized modules. Virtual storage - by design - does not have that limitation.
While DOS/360 and OS/360 did not support PIC, transient SVC routines in OS/360 could not contain relocatable address constants and could run in any of the transient areas without relocation.
IBM first introduced virtual storage on the IBM System/360 model 67 in 1965 to support IBM's first multi-tasking, time-sharing operating system, TSS/360. Later versions of DOS/360 (DOS/VS etc.) and later IBM operating systems all utilized virtual storage. Truncated addressing remained part of the base architecture, and is still advantageous when multiple modules must be loaded into the same virtual address space.
By way of comparison, on early segmented systems such as Burroughs MCP on the Burroughs B5000 (1961) and Multics (1964), and on paging systems such as IBM TSS/360 (1967), code was also inherently position-independent, since subroutine virtual addresses in a program were located in private data external to the code, e.g., program reference table, linkage segment, prototype section.
The invention of dynamic address translation (the function provided by an MMU) originally reduced the need for position-independent code because every process could have its own independent address space (range of addresses).
However, multiple simultaneous jobs using the same code created a waste of physical memory. If two jobs run entirely identical programs, dynamic address translation provides a solution by allowing the system simply to map two different jobs' address 32K to the same bytes of real memory, containing the single copy of the program.
Different programs may share common code. For example, the payroll program and the accounts receivable program may both contain an identical sort subroutine. A shared module (a shared library is a form of shared module) gets loaded once and mapped into the two address spaces.
SunOS 4.x and ELF
Procedure calls inside a shared library are typically made through small procedure linkage table (PLT) stubs, which then call the definitive function. This notably allows a shared library to inherit certain function calls from previously loaded libraries rather than using its own versions.
Data references from position-independent code are usually made indirectly, through Global Offset Tables (GOTs), which store the addresses of all accessed global variables. There is one GOT per compilation unit or object module, and it is located at a fixed offset from the code (although this offset is not known until the library is linked). When a linker links modules to create a shared library, it merges the GOTs and sets the final offsets in code. It is not necessary to adjust the offsets when loading the shared library later.
Position-independent functions accessing global data start by determining the absolute address of the GOT given their own current program counter value. This often takes the form of a fake function call in order to obtain the return value on stack (x86), in a specific standard register (SPARC, MIPS), or a special register (POWER/PowerPC/Power ISA), which can then be moved to a predefined standard register, or to obtain it into that standard register (PA-RISC, Alpha, ESA/390 and z/Architecture). Some processor architectures, such as the Motorola 68000, ARM, x86-64, newer versions of z/Architecture, Motorola 6809, WDC 65C816, and Knuth's MMIX allow referencing data by offset from the program counter. This is specifically targeted at making position-independent code smaller, less register demanding and hence more efficient.
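The mechanism can be pictured with a small conceptual model, written here in plain Python with hypothetical names rather than real loader code: the compiled code refers to a global only by its GOT slot, the loader writes the per-process absolute address into that slot when the library is mapped, and the code itself is never patched, so its pages can remain shared.

class SharedLibrary:
    def __init__(self, got_slots):
        self.got_slots = got_slots    # slot index -> symbol name, fixed when the library is linked
        self.got = {}                 # slot index -> absolute address, filled per process at load time

def load(lib, process_symbol_table):
    # Per-process relocation: only the GOT (private data) is written, never the code.
    for slot, symbol in lib.got_slots.items():
        lib.got[slot] = process_symbol_table[symbol]

def read_global(lib, memory, slot):
    # What position-independent code does at run time: indirect through the GOT
    # instead of embedding a fixed absolute address in the instruction stream.
    return memory[lib.got[slot]]

memory = {0x7F001000: 42}                      # a stand-in for process memory
lib = SharedLibrary({0: "shared_counter"})
load(lib, {"shared_counter": 0x7F001000})      # this address may differ in another process
print(read_global(lib, memory, 0))             # 42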
Windows DLLs
Dynamic-link libraries (DLLs) in Microsoft Windows use variant E8 of the CALL instruction (Call near, relative, displacement relative to next instruction). These instructions do not need modification when the DLL is loaded.
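A short worked example (with invented addresses) illustrates why the E8 form needs no fixing up: the stored 32-bit displacement is measured from the end of the call instruction, so the encoded bytes remain correct wherever the image is mapped.

call_instruction_addr = 0x00401000   # where the E8 opcode ends up after the DLL is loaded
instruction_length = 5               # 1 opcode byte + 4 displacement bytes
displacement = 0x00000123            # rel32 value stored inside the instruction
target = call_instruction_addr + instruction_length + displacement
print(hex(target))                   # 0x401128; moving the image moves caller and target together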
Some global variables (e.g. arrays of string literals, virtual function tables) are expected to contain the address of an object in the data section or the code section of the dynamic library; therefore, the stored address in the global variable must be updated to reflect the address where the DLL was loaded. The dynamic loader calculates the address referred to by a global variable and stores the value in that global variable; this triggers copy-on-write of the memory page containing that global variable. Pages with code and pages with global variables that do not contain pointers to code or global data remain shared between processes. This operation must be done in any OS that can load a dynamic library at an arbitrary address.
In Windows Vista and later versions of Windows, the relocation of DLLs and executables is done by the kernel memory manager, which shares the relocated binaries across multiple processes. Images are always relocated from their preferred base addresses, achieving address space layout randomization (ASLR).
Versions of Windows prior to Vista require that system DLLs be prelinked at non-conflicting fixed addresses at the link time in order to avoid runtime relocation of images. Runtime relocation in these older versions of Windows is performed by the DLL loader within the context of each process, and the resulting relocated portions of each image can no longer be shared between processes.
The handling of DLLs in Windows differs from the earlier OS/2 procedure it derives from. OS/2 presents a third alternative and attempts to load DLLs that are not position-independent into a dedicated "shared arena" in memory, and maps them once they are loaded. All users of the DLL are able to use the same in-memory copy.
Multics
In Multics each procedure conceptually has a code segment and a linkage segment.
The code segment contains only code and the linkage section serves as a template for a new linkage segment. Pointer register 4 (PR4) points to the linkage segment of the procedure. A call to a procedure saves PR4 in the stack before loading it with a pointer to the callee's linkage segment. The procedure call uses an indirect pointer pair with a flag to cause a trap on the first call so that the dynamic linkage mechanism can add the new procedure and its linkage segment to the Known Segment Table (KST), construct a new linkage segment, put their segment numbers in the caller's linkage section and reset the flag in the indirect pointer pair.
TSS
In the IBM S/360 Time Sharing System (TSS/360 and TSS/370) each procedure may have a read-only public CSECT and a writable private Prototype Section (PSECT). A caller loads a V-constant for the routine into General Register 15 (GR15) and copies an R-constant for the routine's PSECT into the 19th word of the save area pointed to by GR13.
The Dynamic Loader does not load program pages or resolve address constants until the first page fault.
Position-independent executables
Position-independent executables (PIE) are executable binaries made entirely from position-independent code. While some systems only run PIC executables, there are other reasons they are used. PIE binaries are used in some security-focused Linux distributions to allow PaX or Exec Shield to use address space layout randomization (ASLR) to prevent attackers from knowing where existing executable code is during a security attack using exploits that rely on knowing the offset of the executable code in the binary, such as return-to-libc attacks. (The official Linux kernel since 2.6.12 of 2005 has a weaker ASLR that also works with PIE. It is weak in that randomness is applied to whole ELF file units.)
Apple's macOS and iOS fully support PIE executables as of versions 10.7 and 4.3, respectively; a warning is issued when non-PIE iOS executables are submitted for approval to Apple's App Store but there's no hard requirement yet and non-PIE applications are not rejected.
OpenBSD has PIE enabled by default on most architectures since OpenBSD 5.3, released on 1 May 2013. Support for PIE in statically linked binaries, such as the executables in /bin and /sbin directories, was added near the end of 2014. openSUSE added PIE as a default in 2015-02. Beginning with Fedora 23, Fedora maintainers decided to build packages with PIE enabled as the default. Ubuntu 17.10 has PIE enabled by default across all architectures. Gentoo's new profiles now support PIE by default. Around July 2017, Debian enabled PIE by default.
Android enabled support for PIEs in Jelly Bean and removed non-PIE linker support in Lollipop.
See also
Dynamic linker
Object file
Code segment
Notes
References
External links
Introduction to Position Independent Code
Position Independent Code internals
Programming in Assembly Language with PIC
The Curious Case of Position Independent Executables
Operating system technology
Computer libraries
Computer file formats | Position-independent code | [
"Technology"
] | 2,788 | [
"IT infrastructure",
"Computer libraries"
] |
571,325 | https://en.wikipedia.org/wiki/Sample%20and%20hold | In electronics, a sample and hold (also known as sample and follow) circuit is an analog device that samples (captures, takes) the voltage of a continuously varying analog signal and holds (locks, freezes) its value at a constant level for a specified minimum period of time. Sample and hold circuits and related peak detectors are the elementary analog memory devices. They are typically used in analog-to-digital converters to eliminate variations in input signal that can corrupt the conversion process. They are also used in electronic music, for instance to impart a random quality to successively-played notes.
A typical sample and hold circuit stores electric charge in a capacitor and contains at least one switching device such as a FET (field effect transistor) switch and normally one operational amplifier. To sample the input signal, the switch connects the capacitor to the output of a buffer amplifier. The buffer amplifier charges or discharges the capacitor so that the voltage across the capacitor is practically equal, or proportional to, input voltage. In hold mode, the switch disconnects the capacitor from the buffer. The capacitor is invariably discharged by its own leakage currents and useful load currents, which makes the circuit inherently volatile, but the loss of voltage (voltage drop) within a specified hold time remains within an acceptable error margin for all but the most demanding applications.
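The behaviour can be sketched numerically. The short Python simulation below (signal, clock and leakage figures are arbitrary, chosen only for illustration) tracks a sine-wave input, latches it at each sampling instant, and lets the held value droop slightly between samples to mimic capacitor leakage.

import math

f_signal = 50.0         # Hz, input sine wave
f_sample = 400.0        # Hz, sampling clock
droop_per_step = 0.002  # fraction of the held voltage lost per time step (models leakage)
dt = 1.0 / 20000.0      # simulation time step in seconds

held = 0.0
next_sample_time = 0.0
output = []
for i in range(400):
    t = i * dt
    vin = math.sin(2 * math.pi * f_signal * t)
    if t >= next_sample_time:            # sample: switch closes, capacitor charges to the input
        held = vin
        next_sample_time += 1.0 / f_sample
    else:                                # hold: switch open, only leakage changes the stored value
        held *= (1.0 - droop_per_step)
    output.append(held)                  # staircase-like waveform an ADC could digitize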
Purpose
Sample and hold circuits are used in linear systems. In some kinds of analog-to-digital converters (ADCs), the input is compared to a voltage generated internally from a digital-to-analog converter (DAC). The circuit tries a series of values and stops converting once the voltages are equal, within some defined error margin. If the input value was permitted to change during this comparison process, the resulting conversion would be inaccurate and possibly unrelated to the true input value. Such successive approximation converters will often incorporate internal sample and hold circuitry. In addition, sample and hold circuits are often used when multiple samples need to be measured at the same time. Each value is sampled and held, using a common sample clock.
For practically all commercial liquid crystal active matrix displays based on TN, IPS or VA electro-optic LC cells (excluding bi-stable phenomena), each pixel represents a small capacitor, which has to be periodically charged to a level corresponding to the greyscale value (contrast) desired for a picture element. In order to maintain the level during a scanning cycle (frame period), an additional electric capacitor is attached in parallel to each LC pixel to better hold the voltage. A thin-film FET switch is addressed to select a particular LC pixel and charge the picture information for it. In contrast to an S/H in general electronics, there is no output operational amplifier and no electrical analog output signal. Instead, the charge on the hold capacitors controls the deformation of the LC molecules and thereby the optical effect, which serves as the output. The invention of this concept and its implementation in thin-film technology have been honored with the IEEE Jun-ichi Nishizawa Medal.
During a scanning cycle, the picture doesn't follow the input signal. This does not allow the eye to refresh and can lead to blurring during motion sequences; the transition between frames is also visible because the backlight is constantly illuminated, adding to display motion blur.
Sample and hold circuits are also frequently found on synthesizers, either as a discrete module or as an integral component. They are used to take periodic samples of an incoming signal, typically as a source of modulation for other components of the synthesizer. When a sample and hold circuit is plugged into a white noise generator the result is a sequence of random values, which - depending on the amplitude of modulation - can be used to provide subtle variations in a signal or wildly varying random tones.
Implementation
To keep the input voltage as stable as possible, it is essential that the capacitor have very low leakage, and that it not be loaded to any significant degree, which calls for a very high input impedance.
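The voltage lost during the hold period (the droop) follows from the capacitor relation ΔV = I·t/C, where I is the total leakage and load current. A minimal illustrative sketch in Python, using assumed component values rather than figures from any particular device:

# Hold-mode droop estimate for a sample-and-hold stage.
# Droop rate dV/dt = I_total / C, where I_total is the sum of the capacitor
# leakage current and the current drawn by the following stage.
# All component values below are illustrative assumptions.
def droop_voltage(leakage_a, load_a, cap_f, hold_s):
    """Voltage lost (in volts) across the hold capacitor during one hold period."""
    return (leakage_a + load_a) * hold_s / cap_f

# Example: 1 nF hold capacitor, 10 pA leakage, 100 pA buffer bias current,
# held for 10 microseconds (one conversion of a hypothetical ADC).
droop = droop_voltage(leakage_a=10e-12, load_a=100e-12, cap_f=1e-9, hold_s=10e-6)
print("Droop during hold: %.3f microvolts" % (droop * 1e6))   # about 1.1 uV

With these assumed values the held voltage sags by only about a microvolt, which is why low-leakage capacitors and high-impedance buffers make the hold error negligible for most applications.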
See also
Analog signal to discrete time interval converter
Notes
References
Paul Horowitz, Winfield Hill (2001 ed.). The Art of Electronics. Cambridge University Press.
Alan P. Kefauver, David Patschke (2007). Fundamentals of digital audio. A-R Editions, Inc.
Analog Devices 21 page Tutorial "Sample and Hold Amplifiers" http://www.analog.com/static/imported-files/tutorials/MT-090.pdf
Applications of Monolithic Sample and hold Amplifiers-Intersil
Electronic circuits
Digital signal processing | Sample and hold | [
"Engineering"
] | 946 | [
"Electronic engineering",
"Electronic circuits"
] |
571,341 | https://en.wikipedia.org/wiki/DOT%20%28graph%20description%20language%29 | DOT is a graph description language, developed as a part of the Graphviz project. DOT graphs are typically stored as files with the .gv or .dot filename extension — .gv is preferred, to avoid confusion with the .dot extension used by versions of Microsoft Word before 2007. dot is also the name of the main program to process DOT files in the Graphviz package.
Various programs can process DOT files. Some, such as dot, neato, twopi, circo, fdp, and sfdp, can read a DOT file and render it in graphical form. Others, such as gvpr, gc, acyclic, ccomps, sccmap, and tred, read DOT files and perform calculations on the represented graph. Finally, others, such as lefty, dotty, and grappa, provide an interactive interface. The GVedit tool combines a text editor and a non-interactive viewer. Most programs are part of the Graphviz package or use it internally.
DOT is historically an acronym for "DAG of tomorrow", as the successor to a DAG format and a dag program which handled only directed acyclic graphs.
Syntax
Graph types
Undirected graphs
At its simplest, DOT can be used to describe an undirected graph. An undirected graph shows simple relations between objects, such as reciprocal friendship between people. The graph keyword is used to begin a new graph, and nodes are described within curly braces. A double-hyphen (--) is used to show relations between the nodes.
// The graph name and the semicolons are optional
graph graphname {
a -- b -- c;
b -- d;
}
Directed graphs
Similar to undirected graphs, DOT can describe directed graphs, such as flowcharts and dependency trees. The syntax is the same as for undirected graphs, except the digraph keyword is used to begin the graph, and an arrow (->) is used to show relationships between nodes.
digraph graphname {
a -> b -> c;
b -> d;
}
Attributes
Various attributes can be applied to graphs, nodes and edges in DOT files. These attributes can control aspects such as color, shape, and line styles. For nodes and edges, one or more attribute–value pairs are placed in square brackets [] after a statement and before the semicolon (which is optional). Graph attributes are specified as direct attribute–value pairs under the graph element, where multiple attributes are separated by a comma or using multiple sets of square brackets, while node attributes are placed after a statement containing only the name of the node, but not the relations between the nodes.
graph graphname {
// This attribute applies to the graph itself
size="1,1";
// The label attribute can be used to change the label of a node
a [label="Foo"];
// Here, the node shape is changed.
b [shape=box];
// These edges both have different line properties
a -- b -- c [color=blue];
b -- d [style=dotted];
// [style=invis] hides a node.
}
HTML-like labels are supported, although initially Graphviz did not handle them.
Comments
DOT supports C and C++ style single line and multiple line comments. In addition, it ignores lines with a number sign symbol # as their first character, like many interpreted languages.
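As a rough illustration, the following Python sketch writes a small graph description that uses all three comment forms and hands it to the dot layout program; it assumes Graphviz is installed and on the system path, and the file names are arbitrary:

import subprocess

# A DOT description exercising the comment styles described above.
dot_source = """
# a line whose first character is a number sign is ignored
graph demo {
    // C++-style single-line comment
    /* C-style
       multi-line comment */
    a -- b;
    b -- c [style=dotted];
}
"""

with open("demo.gv", "w") as f:
    f.write(dot_source)

# Render with the dot layout program from Graphviz; -T selects the output
# format and -o names the output file.
subprocess.run(["dot", "-Tsvg", "demo.gv", "-o", "demo.svg"], check=True)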
Layout programs
The DOT language defines a graph, but does not provide facilities for rendering the graph. There are several programs that can be used to render, view, and manipulate graphs in the DOT language:
General
Graphviz – a collection of CLI utilities and libraries to manipulate and render graphs into different formats like SVG, PDF, PNG etc.
dot – CLI tool for conversion between DOT and other formats
JavaScript
Canviz – a JavaScript library for rendering DOT files
d3-graphviz – a JavaScript library based on Viz.js and D3.js that renders DOT graphs and supports animated transitions between graphs and interactive graph manipulation
Vis.js – a JavaScript library that accepts DOT as input for network graphs.
Viz.js – a JavaScript port of Graphviz that provides a simple wrapper for using it in the browser.
hpcc-js/wasm Graphviz – a fast WASM library for Graphviz similar to Viz.js
Java
Gephi – an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs
Grappa – a partial port of Graphviz to Java
graphviz-java – an open source partial port of Graphviz to Java available from github.com
ZGRViewer – a DOT viewer
Other
Beluging – a Python- & Google Cloud Platform-based viewer of DOT and Beluga extensions
Delineate – a Rust application for Linux that can edit fully-featured DOT graphs with interactive preview, and export as PNG, SVG, or JPEG
dot2tex – a program to convert files from DOT to PGF/TikZ or PSTricks, both of which are rendered in LaTeX
OmniGraffle – a digital illustration application for macOS that can import a subset of DOT, producing an editable document (but the result cannot be exported back to DOT)
Tulip – a software framework in C++ that can import DOT files for analysis
VizierFX – an Apache Flex graph rendering library in ActionScript
Notes
See also
External links
DOT tutorial and specification
Drawing graphs with dot
Node, Edge and Graph Attributes
Node Shapes
Gallery of examples
Graphviz Online: instant conversion and visualization of DOT descriptions
Boost Graph Library
lisp2dot or tree2dot: convert Lisp programming language-like program trees to DOT language (designed for use with genetic programming)
Mathematical software
Graph description languages
Graph drawing | DOT (graph description language) | [
"Mathematics"
] | 1,210 | [
"Mathematical relations",
"Graph description languages",
"Graph theory",
"Mathematical software"
] |
571,427 | https://en.wikipedia.org/wiki/Deviancy%20amplification%20spiral | The deviancy amplification spiral and deviancy amplification are terms used by interactionist sociologists to refer to the way levels of deviance or crime can be increased by the societal reaction to deviance itself.
Origin of term
The process of deviancy amplification was first described by Leslie T. Wilkins.
Process
According to sociologist Stanley Cohen, the spiral starts with some deviant act. Usually the deviance is criminal, but it can also involve lawful acts considered morally repugnant by a large segment of society. The act attracts media coverage, and with the new focus on the issue, hidden or borderline examples that would not themselves have been newsworthy are reported, confirming the pattern. This confirmation of the pattern was first documented by Stanley Cohen in Folk Devils and Moral Panics, a study of the media response to clashes between the Mods and Rockers, two rival subcultures of the time.
Reported cases of such deviance are often presented as the ones we know about, or the "tip of the iceberg", an assertion that is nearly impossible to disprove immediately. For a variety of reasons, the less sensational aspects of the spiraling story that would help the public keep a rational perspective (such as statistics showing that the behavior or event is actually less common or less harmful than generally believed) tend to be ignored by the press.
As a result, minor problems begin to look serious and rare events begin to seem common. Members of the public are motivated to keep informed on these events, leading to high readership for the stories, feeding the spiral. The resulting publicity has the potential to increase the deviant behavior by glamorizing it, or by making it seem common or acceptable. In the next stage, public concern typically forces the police and the law enforcement system to focus more resources on dealing with the specific deviancy than it warrants.
Judges and magistrates then come under public pressure to deal out harsher sentences and politicians pass new laws to increase their popularity by giving the impression that they are dealing with the perceived threat. The responses by those in authority tend to reinforce the public's fear, while the media continue to report police and other law enforcement activity, amplifying the spiral.
The theory does not contend that moral panics always include the deviancy amplification spiral.
Eileen Barker asserts that the controversy surrounding certain new religious movements can turn violent in a deviancy amplification spiral. In his autobiography, Lincoln Steffens details how news reporting can be used to create the impression of a crime wave where there is none, in the chapter "I Make a Crime Wave".
Button and Tunley have also presented a theory that offers the opposite of deviancy amplification, which they call deviancy attenuation. Using the case of fraud, they argue that there are some large problems which those in positions of power are able to seemingly attenuate by not measuring them accurately; this yields statistics that underestimate the problem, which in turn leads to fewer resources being dedicated to it, reinforcing the belief of those in power that it is not a problem.
See also
Availability heuristic
Crowd psychology
Culture of fear
Folk devil
Love Jihad
Group sex
Jenkem
Junk food news
Knockout game
Mass hysteria
Mass media
Mean world syndrome
Missing white woman syndrome
Rainbow party
Representativeness heuristic
Sensationalism
Social control
Yellow journalism
References
Further reading
Cohen, Stanley. Folk Devils and Moral Panics. London: MacGibbon and Kee, 1972.
Section 3.4 Interpreting the crime problem of Free OpenLearn LearningSpace Unit DD100_1 originally written for the Open University Course, DD100.
Button, Mark and Tunley, Martin. (2015) Explaining Fraud Deviancy Attenuation in the United Kingdom. Crime, Law and Social Change, 63: 49-64
Media studies
Social phenomena
Social influence
Mass media issues
Deviance (sociology)
Criticism of journalism
News media manipulation
Media bias | Deviancy amplification spiral | [
"Biology"
] | 789 | [
"Deviance (sociology)",
"Behavior",
"Human behavior"
] |
571,478 | https://en.wikipedia.org/wiki/Bambi%20effect | The "Bambi effect" is an objection against the killing of animals that are perceived as "cute" or "adorable", such as deer, while there may be little or no objection to the suffering of animals that are perceived as somehow repulsive or less than desirable, such as pigs or other woodland creatures.
Referring to a form of purported anthropomorphism, the term is inspired by Walt Disney's 1942 animated film Bambi, where an emotional high point is the death of the lead character's mother at the hands of the film's antagonist, a hunter known only as "Man".
Effects
Some commentators have credited this purported effect with increasing public awareness of the dangers of pollution, for instance in the case of the fate of sea otters after the Exxon Valdez oil spill, and in the public interest in scaring birds off airfields in non-lethal ways. In the case of invasive species, perceived cuteness may help thwart efforts to eradicate non-native intruders, such as the white fallow deer in Point Reyes, California. The effect is also cited as the anthropomorphic quality of modern cinema: most people in modern Western civilization are not familiar with wildlife, other than "through TV or cinema, where fuzzy little critters discuss romance, self-determination and loyalty like pals over a cup of coffee", which has led to influences on public policy and the image of businesses cast in movies as polluting or otherwise harming the environment.
The effect was also cited in the events following a record snowfall in the U.S. state of Colorado in 2007, when food for mule deer, pronghorns, and elk became so scarce that they began to starve; the Colorado Department of Wildlife was inundated with requests and offers to help the animals from citizens, and ended up spending almost $2 million feeding the hungry wildlife. Among some butchers, the Bambi effect (and in general, Walt Disney's anthropomorphic characters) is credited with fueling the vegetarian movement; chefs use the term to describe customers' lack of interest in, for instance, whole fish: "It's the Bambi effect – [customers] don't want to see eyes looking at them".
The Bambi effect has caused people to fight against organizations that manage wildlife. However, their intervention can often interfere with an ecosystem's circle of life, and thus their efforts become counterproductive. For example, the phenomenon can prompt people to create organizations like the Smokey Bear campaign, which decreased the number of fires but consequently led to unexpected changes in the ecosystem. The Bambi effect is supported by a study (Wilks, 2008) which found that, to help more aggressive and unfriendly wildlife become better loved and see improvements in their environments, cuter and more innocent cartoons should be created and marketed for them.
See also
Animal welfare
Animal rights
Cruelty to animals
Poaching
References
Further reading
Deep ecology
Hunting
Bambi
Anthropomorphism
Eponyms | Bambi effect | [
"Biology",
"Environmental_science"
] | 623 | [
"Biological hypotheses",
"Deep ecology",
"Biophilia hypothesis",
"Environmental ethics"
] |
571,480 | https://en.wikipedia.org/wiki/Absorbed%20dose | Absorbed dose is a dose quantity which is the measure of the energy deposited in matter by ionizing radiation per unit mass. Absorbed dose is used in the calculation of dose uptake in living tissue in both radiation protection (reduction of harmful effects), and radiology (potential beneficial effects, for example in cancer treatment). It is also used to directly compare the effect of radiation on inanimate matter such as in radiation hardening.
The SI unit of measure is the gray (Gy), which is defined as one joule of energy absorbed per kilogram of matter. The older, non-SI CGS unit, the rad, is sometimes also used, predominantly in the USA.
Deterministic effects
Conventionally, in radiation protection, unmodified absorbed dose is only used for indicating the immediate health effects due to high levels of acute dose. These are tissue effects, such as in acute radiation syndrome, which are also known as deterministic effects. These are effects which are certain to happen in a short time. The time between exposure and vomiting may be used as a heuristic for quantifying a dose when more precise means of testing are unavailable.
Effects of acute radiation exposure
Radiation therapy
Dose computation
The absorbed dose is equal to the radiation exposure (ions or C/kg) of the radiation beam multiplied by the ionization energy of the medium to be ionized.
For example, the ionization energy of dry air at 20 °C and 101.325 kPa of pressure is 33.97 J/C (33.97 eV per ion pair). Therefore, an exposure of 2.58 × 10⁻⁴ C/kg (1 roentgen) would deposit an absorbed dose of 8.76 × 10⁻³ J/kg (0.00876 Gy or 0.876 rad) in dry air at those conditions.
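Restating that arithmetic as a short calculation (a Python sketch; the constant and function names are illustrative):

# Absorbed dose in dry air from an exposure: D = X * W,
# where X is exposure in C/kg and W is the mean energy per unit charge (J/C).
W_AIR = 33.97        # J/C for dry air (equivalent to 33.97 eV per ion pair)
ROENTGEN = 2.58e-4   # exposure of one roentgen, in C/kg

def absorbed_dose_air(exposure_c_per_kg):
    """Absorbed dose in gray (J/kg) deposited in dry air by a given exposure."""
    return exposure_c_per_kg * W_AIR

dose = absorbed_dose_air(ROENTGEN)
print("1 R in dry air = %.5f Gy = %.3f rad" % (dose, dose * 100))  # 0.00876 Gy, 0.876 rad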
When the absorbed dose is not uniform, or when it is only applied to a portion of a body or object, an absorbed dose representative of the entire item can be calculated by taking a mass-weighted average of the absorbed doses at each point.
More precisely,
\bar{D_T} = \frac{\int_T d(x, y, z) \, dV}{\int_T \rho(x, y, z) \, dV}
Where
\bar{D_T} is the mass-averaged absorbed dose of the entire item T;
T is the item of interest;
d(x, y, z) is the absorbed dose density (absorbed dose per unit volume) as a function of location;
\rho(x, y, z) is the density (mass per unit volume) as a function of location;
V is volume.
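Once an item has been voxelized, this average can be computed directly from per-voxel arrays. The following sketch (Python with NumPy) treats the absorbed dose density as the energy deposited per unit volume; the array names and values are illustrative assumptions:

import numpy as np

def mass_averaged_dose(energy_per_volume, density, voxel_volume):
    """Mass-averaged absorbed dose (Gy) over a voxelized item.

    energy_per_volume: absorbed energy per unit volume (J/m^3) in each voxel
    density: mass density (kg/m^3) in each voxel
    voxel_volume: volume of one voxel (m^3)
    """
    total_energy = np.sum(energy_per_volume) * voxel_volume   # joules
    total_mass = np.sum(density) * voxel_volume                # kilograms
    return total_energy / total_mass                           # J/kg = Gy

# Illustrative two-voxel item: one voxel receives twice the energy of the other.
d = np.array([2.0e-3, 1.0e-3])      # J/m^3 deposited in each voxel
rho = np.array([1000.0, 1000.0])    # kg/m^3, water-like density
print(mass_averaged_dose(d, rho, voxel_volume=1e-6))   # 1.5e-06 Gy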
Stochastic risk - conversion to equivalent dose
For stochastic radiation risk, defined as the probability of cancer induction and genetic effects occurring over a long time scale, consideration must be given to the type of radiation and the sensitivity of the irradiated tissues, which requires the use of modifying factors to produce a risk factor in sieverts. One sievert carries with it a 5.5% chance of eventually developing cancer based on the linear no-threshold model. This calculation starts with the absorbed dose.
To represent stochastic risk the dose quantities equivalent dose HT and effective dose E are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem which implies that biological effects have been taken into account. The derivation of stochastic risk is in accordance with the recommendations of the International Committee on Radiation Protection (ICRP) and International Commission on Radiation Units and Measurements (ICRU). The coherent system of radiological protection quantities developed by them is shown in the accompanying diagram.
For whole body radiation with gamma rays or X-rays, the modifying factors are numerically equal to 1, which means that in that case the dose in grays equals the dose in sieverts.
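As an illustration of the conversion, the equivalent dose follows from multiplying the absorbed dose by a radiation weighting factor, HT = wR × DT. The sketch below (Python) uses indicative ICRP-style factor values; the dictionary and function names are illustrative, and the further tissue weighting used for effective dose is not shown:

# Equivalent dose from absorbed dose using a radiation weighting factor w_R.
# Factor values are indicative ICRP-style numbers, for illustration only.
RADIATION_WEIGHTING = {
    "photons": 1.0,     # gamma rays and X-rays
    "electrons": 1.0,
    "protons": 2.0,
    "alpha": 20.0,
}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    """Equivalent dose in sieverts for a uniform absorbed dose in grays."""
    return RADIATION_WEIGHTING[radiation] * absorbed_dose_gy

print(equivalent_dose_sv(0.01, "photons"))  # 0.01 Sv: the factor is 1 for photons
print(equivalent_dose_sv(0.01, "alpha"))    # 0.2 Sv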
Development of the absorbed dose concept and the gray
Wilhelm Röntgen first discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects where they were a revolutionary improvement over previous techniques.
Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity and various countries developed their own, but using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU, and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn.
One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation. This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance; the ionisation effect in dry air.
In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the "gram roentgen" (symbol: gr) was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units.
In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI. It was decided to define the SI unit of absorbed radiation as energy deposited per unit mass which is how the rad had been defined, but in MKS units it would be J/kg. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was equal to 100 rad, the cgs unit.
Other uses
Absorbed dose is also used to manage the irradiation and measure the effects of ionising radiation on inanimate matter in a number of fields.
Component survivability
Absorbed dose is used to rate the survivability of devices such as electronic components in ionizing radiation environments.
Radiation hardening
The measurement of absorbed dose absorbed by inanimate matter is vital in the process of radiation hardening which improves the resistance of electronic devices to radiation effects.
Food irradiation
Absorbed dose is the physical dose quantity used to ensure irradiated food has received the correct dose to ensure effectiveness. Variable doses are used depending on the application and can be as high as 70 kGy.
Radiation-related quantities
The following table shows radiation quantities in SI and non-SI units:
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union European units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
See also
Kerma (physics)
Mean glandular dose
:Category:Units of radiation dose
Notes
References
Literature
External links
Specific Gamma-Ray Dose Constants for Nuclides Important to Dosimetry and Radiological Assessment, Laurie M. Unger and D. K. Trubey, Oak Ridge National Laboratory, May 1982 - contains gamma-ray dose constants (in tissue) for approximately 500 radionuclides.
Radioactivity quantities
Radiobiology
Radiation protection | Absorbed dose | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 1,726 | [
"Physical quantities",
"Quantity",
"Radiobiology",
"Radioactivity quantities",
"Radioactivity"
] |
571,538 | https://en.wikipedia.org/wiki/Syncytium | A syncytium (; : syncytia; from Greek: σύν syn "together" and κύτος kytos "box, i.e. cell") or symplasm is a multinucleate cell that can result from multiple cell fusions of uninuclear cells (i.e., cells with a single nucleus), in contrast to a coenocyte, which can result from multiple nuclear divisions without accompanying cytokinesis. The muscle cell that makes up animal skeletal muscle is a classic example of a syncytium cell. The term may also refer to cells interconnected by specialized membranes with gap junctions, as seen in the heart muscle cells and certain smooth muscle cells, which are synchronized electrically in an action potential.
The field of embryogenesis uses the word syncytium to refer to the coenocytic blastoderm embryos of invertebrates, such as Drosophila melanogaster.
Physiological examples
Protists
In protists, syncytia can be found in some rhizarians (e.g., chlorarachniophytes, plasmodiophorids, haplosporidians) and acellular slime moulds, dictyostelids (amoebozoans), acrasids (Excavata) and Haplozoon.
Plants
Some examples of plant syncytia, which result during plant development, include:
Developing endosperm
The non-articulated laticifers
The plasmodial tapetum, and
The "nucellar plasmodium" of the family Podostemaceae
Fungi
A syncytium is the normal cell structure for many fungi. Most fungi of Basidiomycota exist as a dikaryon in which thread-like cells of the mycelium are partially partitioned into segments each containing two differing nuclei, called a heterokaryon.
Animals
Nerve net
The neurons which make up the subepithelial nerve net in comb jellies (Ctenophora) are fused into a neural syncytium, consisting of a continuous plasma membrane instead of being connected through synapses.
Skeletal muscle
A classic example of a syncytium is the formation of skeletal muscle. Large skeletal muscle fibers form by the fusion of thousands of individual muscle cells. The multinucleated arrangement is important in pathologic states such as myopathy, where focal necrosis (death) of a portion of a skeletal muscle fiber does not result in necrosis of the adjacent sections of that same skeletal muscle fiber, because those adjacent sections have their own nuclear material. Thus, myopathy is usually associated with such "segmental necrosis", with some of the surviving segments being functionally cut off from their nerve supply via loss of continuity with the neuromuscular junction.
Cardiac muscle
The syncytium of cardiac muscle is important because it allows rapid coordinated contraction of muscles along their entire length. Cardiac action potentials propagate along the surface of the muscle fiber from the point of synaptic contact through intercalated discs. Although a syncytium, cardiac muscle differs because the cells are not long and multinucleated. Cardiac tissue is therefore described as a functional syncytium, as opposed to the true syncytium of skeletal muscle.
Smooth muscle
Smooth muscle in the gastrointestinal tract is activated by a composite of three types of cells – smooth muscle cells (SMCs), interstitial cells of Cajal (ICCs), and platelet-derived growth factor receptor alpha (PDGFRα) that are electrically coupled and work together as an SIP functional syncytium.
Osteoclasts
Certain animal immune-derived cells may form aggregate cells, such as the osteoclast cells responsible for bone resorption.
Placenta
Another important vertebrate syncytium is in the placenta of placental mammals. Embryo-derived cells that form the interface with the maternal blood stream fuse together to form a multinucleated barrier – the syncytiotrophoblast. This is probably important to limit the exchange of migratory cells between the developing embryo and the body of the mother, as some blood cells are specialized to be able to insert themselves between adjacent epithelial cells. The syncytial epithelium of the placenta does not provide such an access path from the maternal circulation into the embryo.
Glass sponges
Much of the body of Hexactinellid sponges is composed of syncitial tissue. This allows them to form their large siliceous spicules exclusively inside their cells.
Tegument
The fine structure of the tegument in helminths is essentially the same in both the cestodes and trematodes. A typical tegument is 7–16 μm thick, with distinct layers. It is a syncytium consisting of multinucleated tissues with no distinct cell boundaries. The outer zone of the syncytium, called the "distal cytoplasm," is lined with a plasma membrane. This plasma membrane is in turn associated with a layer of carbohydrate-containing macromolecules known as the glycocalyx, that varies in thickness from one species to another. The distal cytoplasm is connected to the inner layer called the "proximal cytoplasm", which is the "cellular region or cyton or perikarya" through cytoplasmic tubes that are composed of microtubules. The proximal cytoplasm contains nuclei, endoplasmic reticulum, Golgi complex, mitochondria, ribosomes, glycogen deposits, and numerous vesicles. The innermost layer is bounded by a layer of connective tissue known as the "basal lamina". The basal lamina is followed by a thick layer of muscle.
Pathological examples
Viral infection
Syncytia can also form when cells are infected with certain types of viruses, notably HSV-1, HIV, MeV, SARS-CoV-2, and pneumoviruses, e.g. respiratory syncytial virus (RSV). These syncytial formations create distinctive cytopathic effects when seen in permissive cells. Because many cells fuse together, syncytia are also known as multinucleated cells, giant cells, or polykaryocytes. During infection, viral fusion proteins used by the virus to enter the cell are transported to the cell surface, where they can cause the host cell membrane to fuse with neighboring cells.
Reoviridae
Typically, the viral families that can cause syncytia are enveloped, because viral envelope proteins on the surface of the host cell are needed to fuse with other cells. Certain members of the Reoviridae family are notable exceptions due to a unique set of proteins known as fusion-associated small transmembrane (FAST) proteins. Reovirus induced syncytium formation is not found in humans, but is found in a number of other species and is caused by fusogenic orthoreoviruses. These fusogenic orthoreoviruses include reptilian orthoreovirus, avian orthoreovirus, Nelson Bay orthoreovirus, and baboon orthoreovirus.
HIV
HIV infects helper CD4+ T cells and makes them produce viral proteins, including fusion proteins. Then, the cells begin to display surface HIV glycoproteins, which are antigenic. Normally, a cytotoxic T cell will immediately come to "inject" lymphotoxins, such as perforin or granzyme, that will kill the infected T helper cell. However, if T helper cells are nearby, the gp41 HIV receptors displayed on the surface of the T helper cell will bind to other similar lymphocytes. This makes dozens of T helper cells fuse their cell membranes into a giant, nonfunctional syncytium, which allows the HIV virion to kill many T helper cells by infecting only one. It is associated with a faster progression of the disease.
Mumps
The mumps virus uses HN protein to stick to a potential host cell; then the fusion protein allows it to bind with the host cell. The HN and fusion proteins are then left on the host cell walls, causing it to bind with neighbouring epithelial cells.
COVID-19
Mutations within SARS-CoV-2 variants contain spike protein variants that can enhance syncytia formation. The protease TMPRSS2 is essential for syncytia formation. Syncytia can allow the virus to spread directly to other cells, shielded from neutralizing antibodies and other immune system components. Syncytia formation in cells can be pathological to tissues.
"Severe cases of COVID-19 are associated with extensive lung damage and the presence of infected multinucleated syncytial pneumocytes. The viral and cellular mechanisms regulating the formation of these syncytia are not well understood," but membrane cholesterol seems necessary.
The syncytia appear to be long-lasting; the "complete regeneration" of the lungs after severe flu "does not happen" with COVID-19.
See also
Atrial syncytium
Coenocyte
Giant cell
Heterokaryon
Heterokaryosis
Plasmodium (life cycle)
Enteridium lycoperdon, a plasmodial slime mould
Syncytiotrophoblast
Xenophyophorea
References
Histology
Cell biology | Syncytium | [
"Chemistry",
"Biology"
] | 1,981 | [
"Histology",
"Cell biology",
"Microscopy"
] |
571,674 | https://en.wikipedia.org/wiki/Dragline%20excavator | A dragline excavator is a heavy-duty excavator used in civil engineering and surface mining. It was invented in 1904, and presented an immediate challenge to the steam shovel and its diesel and electric powered descendant, the power shovel. Much more efficient than even the largest of the latter, it enjoyed a heyday in extreme size for most of the 20th century, first becoming challenged by more efficient rotary excavators in the 1950s, then superseded by them on the upper end from the 1970s on.
The largest ever walking dragline was Big Muskie, a Bucyrus-Erie 4250-W put online in 1969 that swung a 220-cubic-yard, 325-ton-capacity bucket, had a 310-foot boom, and weighed 13,500 tons.
The largest walking dragline produced as of 2014 was Joy Global’s digital AC drive control P&H 9020XPC, which has a bucket capacity of and boom lengths ranging from ; working weights vary between 7,539 and 8,002 tons.
Types
Draglines fall into two broad categories: those that are based on standard, lifting cranes, and the heavy units which have to be built on-site. Most crawler cranes, with an added winch drum on the front, can act as a dragline. These units (like other cranes) are designed to be temporarily dismantled and transported over the road on flatbed trailers. Draglines used in civil engineering are of this smaller, crane type. These are used for road, port construction, pond and canal dredging, and as pile driving rigs. These types are built by crane manufacturers such as Link-Belt and Hyster.
The much larger type which is erected on site is commonly used in strip-mining operations to remove overburden above coal and more recently for oil sands mining. The largest heavy draglines are among the largest mobile land machines ever built, weighing up to 13,500 tons, while the smallest and most common of the site-erected type weigh around 8,000 tons.
A dragline bucket system consists of a large bucket which is suspended from a large truss-like boom (or mast) with wire ropes. The bucket is maneuvered by means of a number of ropes and chains. The hoist rope, powered by large diesel or electric motors, supports the bucket and hoist-coupler assembly from the boom. The dragrope is used to draw the bucket assembly horizontally. By skillful maneuver of the hoist and the dragropes the bucket is controlled for various operations. A schematic of a large dragline bucket system is shown below.
History
The dragline was invented in 1904 by John W. Page (as a partner of the firm Page & Schnable Contracting) for use in digging the Chicago Canal. By 1912, Page realized that building draglines was more lucrative than contracting, so he created the Page Engineering Company to build draglines. Page built its first crude walking dragline in 1923. These used legs operated by rack and pinion on a separate frame that lifted the crane. The body was then pulled forward by chain on a roller track and then lowered again. Page developed the first diesel engines exclusively for dragline application in 1924. Page also invented the arched dragline bucket, a design still commonly used today by draglines from many other manufacturers, and in the 1960s pioneered an archless bucket design. With its walking mechanism badly behind that of competitor Monighan (see below), Page updated their mechanism to an eccentric drive in 1935. This much improved mechanism gave a proper elliptical motion and was used until 1988. Page modernized its draglines further with the 700 series in 1954. Page's largest dragline was the Model 757 delivered to the Obed Mine near Hinton, Alberta in 1983. It featured a 75-yard bucket on a 298-foot boom and an operating weight of 4,500 tons. In 1988, Harnischfeger Corporation (P&H Mining Equipment) purchased Page Engineering Company.
Harnischfeger Corporation was established as P&H Mining in 1884 by Alonzo Pawling and Henry Harnischfeger. In 1914, P&H introduced the world's first gasoline engine powered dragline. In 1988, Page was acquired by Harnischfeger which makes the P&H line of shovels, draglines, and cranes. P&H's largest dragline is the 9030C with a 160-yard bucket and up to a 425-foot boom.
In 1907, Monighan's Machine Works of Chicago became interested in manufacturing draglines when local contractor John W. Page placed an order for hoisting machinery to install one. In 1908, Monighan changed its name to the Monighan Machine Company. In 1913, a Monighan engineer named Oscar Martinson invented the first walking mechanism for a dragline. The device, known as the Martinson Tractor, was installed on a Monighan dragline, creating the first walking dragline. This gave Monighan a significant advantage over other draglines and the company prospered. The cam mechanism was further improved in 1925 by eliminating the drag chains for the shoes and changing to a cam wheel running in an oval track. This gave the shoe a proper elliptical motion. The first dragline using the new mechanism was the 3-W available in 1926. So popular were these machines that the name Monighan became a generic term for dragline. In the early 1930s, Bucyrus-Erie began purchasing shares of Monighan stock with Monighan's approval. Bucyrus purchased a controlling interest and the joint company became known as Bucyrus-Monighan until the formal merger in 1946. The first walking dragline excavator in the United Kingdom was used at the Wellingborough iron quarry in 1940.
Ransomes & Rapier was founded in 1869 by four engineers to build railway equipment and other heavy works. In 1914 they started building two small steam shovels as a result of a customer request. The rope-operated crowd system they built for this was patented and later sold to Bucyrus. After WWI, demand for excavators increased and in 1924 they reached an agreement to build Marion draglines from 1 to 8 cubic yards capacity. In 1927, they built Type-7 1-yard and Type-460 1.5-yard models. The deal to build Marion machines ended in 1936. R&R began building their own designs with the Type-4120 followed by the 4140 of 3.5 cubic yards. In 1958 the Ramsomes & Rapier division was sold to Newton, Chambers & Co. of Sheffield, which was combined with their NCK Crane & Excavator division. This became NCK-Rapier. The walking dragline division of NCK-Rapier was acquired by Bucyrus in 1988.
The Marion Power Shovel Company (established in 1880) built its first walking dragline with a simple single-crank mechanism in 1939. Its largest dragline was the 8950 sold to Amax Coal Company in 1973. It featured a 150-cubic yard bucket on a 310-foot boom and weighed 7,300 tons. Marion was acquired by Bucyrus in 1997.
Bucyrus Foundry and Manufacturing Company entered the dragline market in 1910 with the purchase of manufacturing rights for the Heyworth-Newman dragline excavator. Their "Class 14" dragline was introduced in 1911 as the first crawler mounted dragline. In 1912 Bucyrus helped pioneer the use of electricity as a power source for large stripping shovels and draglines used in mining. An Italian company, Fiorentini, produced dragline excavators from 1919 licensed by Bucyrus. After the merger with Monighan in 1946, Bucyrus began producing much larger machines using the Monighan walking mechanism such as the 800 ton 650-B which used a 15-yard bucket. Bucyrus' largest dragline was Big Muskie built for the Ohio Coal Company in 1969. This machine featured a 220-yard bucket on a 450-foot boom and weighed 14,500 tons. Bucyrus was itself acquired by heavy equipment and diesel engine maker, Caterpillar, in 2011. Caterpillar's largest dragline is the 8750 with a 169-yard bucket, 435-foot boom, and 8,350 ton weight.
The market for draglines began shrinking rapidly after the boom of the 1960s and 1970s which led to more mergers. P&H's acquisition of Page in 1988 along with Bucyrus' acquisition of Ransomes & Rapier in 1988 and Marion in 1997 cut the number of worldwide suppliers of heavy draglines by more than half. Today, P&H and Caterpillar are the only remaining manufacturers of large draglines.
Other manufacturers
Heavy Engineering Corporation Limited was the first Indian company to manufacture a walking dragline of 31-yard bucket capacity. HEC makes up to a 44-yard bucket. For comparison, this would be comparable to Caterpillar's Small Draglines 8000 series with a 42-yard bucket. HEC has supplied fifteen draglines to the Indian mining industry.
Operation
In a typical cycle of excavation, the bucket is positioned above the material to be excavated. The bucket is then lowered and the dragrope is then drawn so that the bucket is dragged along the surface of the material. The bucket is then lifted by using the hoist rope. A swing operation is then performed to move the bucket to the place where the material is to be dumped. The dragrope is then released causing the bucket to tilt and empty. This is called a dump operation.
On crane-type draglines, the bucket can also be 'thrown' by winding up to the jib and then releasing a clutch on the drag cable. This would then swing the bucket like a pendulum. Once the bucket had passed the vertical, the hoist cable would be released thus throwing the bucket. On smaller draglines, a skilled operator could make the bucket land about one-half the length of the jib further away than if it had just been dropped. On larger draglines, this is not a common practice.
Draglines in mining
A large dragline system used in the open pit mining industry costs approximately US$50–100 million. A typical bucket has a volume ranging from 40 to 80 cubic yards (30 to 60 cubic metres), though extremely large buckets have ranged up to 220 cubic yards (168 cubic meters). The length of the boom ranges from . In a single cycle, it can move up to 450 tons of material.
Most mining draglines are not diesel-powered like most other mining equipment. Their power consumption, on the order of several megawatts, is so great that they have a direct connection to the high-voltage grid at voltages of between 6.6 and 22 kV. A typical dragline weighing 4000 to 6000 tons, with a 55-cubic-metre bucket, can use up to 6 megawatts during normal digging operations. Because of this, many (possibly apocryphal) stories have been told about the blackout-causing effects of mining draglines. For instance, there is a long-lived story that, back in the 1970s, if all seven draglines at Peak Downs Mine (a very large BHP coal mine in central Queensland, Australia) turned simultaneously, they would black out all of North Queensland. However, even now, if they have been shut down, they are always restarted one at a time due to the immense power requirements of startup.
In all but the smallest of draglines, movement is accomplished by "walking" using feet or pontoons, as caterpillar tracks place too much pressure on the ground, and have great difficulty under the immense weight of the dragline. Maximum speed is only at most a few metres per minute, since the feet must be repositioned for each step. If travelling medium distances (about 30–100 km), a special dragline carrier can be brought in to transport the dragline. Above that distance, disassembly is generally required. But mining draglines due to their reach can work a large area from one position and do not need to constantly move along the face like smaller machines.
Limitations
The primary limitations of draglines are their boom height and boom length, which limits where the dragline can dump the waste material. Another primary limitation is their dig depth, which is limited by the length of rope the dragline can utilize. Inherent with their construction, a dragline is most efficient excavating material below the level of their base. While a dragline can dig above itself, it does so inefficiently and is not suitable to load piled up material (as a rope shovel or wheel loader can).
Despite their limitations, and their extremely high capital cost, draglines remain popular with many mines, due to their reliability, and extremely low waste removal cost.
Notable examples
The coal mining dragline known as Big Muskie, owned by the Central Ohio Coal Company (a division of American Electric Power), was the world's largest mobile earth-moving machine, weighing 13,500 tons and standing nearly 22 stories tall. It operated in Muskingum County, in the U.S. state of Ohio from 1969 to 1991, and derived power from a 13,800 volt electrical supply. It was dismantled for $700,000 worth of recycled metal in 1999.
The British firm of Ransomes & Rapier produced a few diesel-electric excavators rather over 1/10th its size, the largest in Europe in the 1960s at 1400-1800 tons. One, named SUNDEW, was used in a quarry from 1957 to 1974. After its working life at the first site in Rutland ended, it walked over nine weeks to Corby, where it continued working until being scrapped between January and June 1987.
Smaller draglines were also commonly used before hydraulic excavators came into common use; smaller draglines are now rarely used other than on river and gravel pit works. The small machines had a mechanical drive with clutches. Firms such as Ruston and Bucyrus made models such as the RB10, which were popular for small building works and drainage work. Several of these can still be seen in the English Fens of Cambridgeshire, Lincolnshire and parts of Norfolk. Ruston is a company also associated with drainage pumping engines. Electric drive systems were only used on the larger mining machines; most modern machines use a diesel-hydraulic drive, as machines are seldom in one location long enough to justify the cost of installing a substation and supply cables.
Technological advances
The basic mechanical technology of draglines, unlike that of most equipment used in earth-moving, has remained relatively unchanged in design and control functions for almost 100 years. Some advances, however, have been made (such as hydraulic, then electro-hydraulic, controls (including joysticks) and using simulation software to train new operators), are being pursued (such as improved automation systems), or are arguable as a step forward (as is "universal dig-dump" (UDD)):
Automation
Researchers at CSIRO in Australia have a long-term research project into automating draglines. Mining automation teams at QCAT, a CSIRO division, have been developing the automation technology since 1994. Automated systems include cruise control and Digital Terrain Mapping. Working solutions include the proof-of-concept dragline swing cruise control on a Tarong BE1370.
Simulation software
Since draglines are typically large, complicated and very expensive, training new operators can be a tricky process. In the same way that flight simulators have developed to train pilots, mining simulator software has been developed to assist new operators in learning how to control the machines.
UDD
UDD stands for universal dig-dump. It represents the first fundamental change to draglines for almost a century, since the invention of the 'miracle hitch'. Instead of using two ropes (the hoist rope and the drag rope) to manipulate the bucket, a UDD machine uses four ropes, two hoist and two drag. This allows the dragline operator to have much greater selectivity in when to pick up the bucket, and in how the bucket may be dumped. UDD machines generally have higher productivity than a standard dragline, but often have greater mechanical issues. Within the mining industry, there is still much debate as to whether UDD improvements justify their costs.
See also
Bucket wheel excavator – alternative mining machine
Excavator – generic class of machine of which draglines are a sub class
Power shovel – type of mining machine (also called a front shovel)
Steam shovel – earliest type of mining excavator
References
K. Pathak, K. Dasgupta, A. Chattopadhyay, "Determination of the working zone of a dragline bucket – A graphical approach", Doncaster, The Institution of mining engineers, 1992.
Peter Ridley, Peter Corke, "Calculation of Dragline bucket pose under gravity loading", Mechanism and machine theory, Vol. 35, 2000.
External links
P&H draglines
Bucyrus draglines
HEC draglines
Articles containing video clips
Engineering vehicles
Excavators
Mining equipment
Draglines | Dragline excavator | [
"Engineering"
] | 3,517 | [
"Engineering vehicles",
"Draglines",
"Mining equipment"
] |
571,886 | https://en.wikipedia.org/wiki/Ferranti%20Mark%201 | The Ferranti Mark 1, also known as the Manchester Electronic Computer in its sales literature, and thus sometimes called the Manchester Ferranti, was produced by British electrical engineering firm Ferranti Ltd. It was the world's first commercially available electronic general-purpose stored program digital computer.
Although preceded as a commercial digital computer by the BINAC and the Z4, the Z4 was electromechanical and lacked software programmability, while BINAC never operated successfully after delivery.
The Ferranti Mark 1 was "the tidied up and commercialised version of the Manchester Mark I". The first machine was delivered to the Victoria University of Manchester in February 1951 (publicly demonstrated in July), ahead of the UNIVAC I, which was sold to the United States Census Bureau on 31 March 1951 but not delivered until late December 1952.
History and specifications
Based on the Manchester Mark 1, which was designed at the University of Manchester by Freddie Williams and Tom Kilburn, the machine was built by Ferranti of the United Kingdom. The main improvements over it were in the size of the primary and secondary storage, a faster multiplier, and additional instructions.
The Mark 1 used a 20-bit word stored as a single line of dots of electric charges settled on the surface of a Williams tube display, each cathodic tube storing 64 lines of dots. Instructions were stored in a single word, while numbers were stored in two words. The main memory consisted of eight tubes, each storing one such page of 64 words. Other tubes stored the single 80-bit accumulator (A), the 40-bit "multiplicand/quotient register" (MQ) and eight "B-lines", or index registers, which was one of the unique features of the Mark 1 design. The accumulator could also be addressed as two 40-bit words. An extra 20-bit word per tube stored an offset value into the secondary storage. Secondary storage was provided in the form of a 512-page magnetic drum, storing two pages per track, with about 30 milliseconds revolution time. The drum provided eight times the storage of the original designed at Manchester.
The instructions, like the Manchester machine, used a single-address format in which operands were modified and left in the accumulator. There were about fifty instructions in total. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in the new parallel unit in about 2.16 milliseconds (about 5 times faster than the original). The multiplier used almost a quarter of the machine's 4,050 vacuum tubes. Several instructions were included to copy a word of memory from one of the Williams tubes to a paper tape machine, or read them back in. Several new instructions were added to the original Manchester design, including a random number instruction and several new instructions using the B-lines.
The original Mark 1 had to be programmed by entering alphanumeric characters representing a five-bit value that could be represented on the paper tape input. The engineers decided to use the simplest mapping between the paper holes and the binary digits they represented, but the mapping between the holes and the physical keyboard was never meant to be a binary mapping. As a result, the characters representing the values from 0–31 (five-bit numbers) looked entirely random, specifically /E@A:SIU½DRJNFCKTZLWHYPQOBG"MXV£.
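That five-bit code can be expressed as a simple lookup string. The sketch below is modern Python rather than period code; the helper names are invented, and the least-significant-character-first ordering reflects the output convention noted in the Mark 1 Star section below:

# The Ferranti Mark 1 teleprinter code quoted above: value 0 prints '/',
# value 1 prints 'E', ..., value 31 prints '£'.
MARK1_CODE = '/E@A:SIU½DRJNFCKTZLWHYPQOBG"MXV£'

def char_for(value):
    """Character printed for a five-bit value (0-31)."""
    return MARK1_CODE[value & 0b11111]

def value_for(ch):
    """Five-bit value represented by a Mark 1 character."""
    return MARK1_CODE.index(ch)

# A 20-bit word was handled as four five-bit characters; on the original
# machine the least significant character was written first.
def word_as_teleprinter(word):
    return "".join(char_for((word >> (5 * i)) & 0b11111) for i in range(4))

print(char_for(0), char_for(1), value_for('£'))          # / E 31
print(word_as_teleprinter(0b00001_00000_00010_00011))    # values 3,2,0,1 -> 'A@/E'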
The first machine was delivered to the University of Manchester. Ferranti had high hopes for further sales, and were encouraged by an order placed by the Atomic Energy Research Establishment for delivery in autumn 1952. However, a change of government while the second machine was being built led to all government contracts over £100,000 being cancelled, leaving Ferranti with a partially completed Mark 1. The company ultimately sold it to the University of Toronto, who had been building their own machine, but saw the chance to buy the complete Mark 1 for even less. They purchased it for around $30,000, a "fire sale" price, and Beatrice Worsley gave it the nickname FERUT. FERUT was extensively used in business, engineering, and academia, among other duties, carrying out calculations as part of the construction of the St. Lawrence Seaway.
Alan Turing wrote a programming manual.
Mark 1 Star
After the first two machines, a revised version of the design became available, known as the Ferranti Mark 1 Star or the Ferranti Mark 1*. The revisions mainly cleaned up the instruction set for better usability. Instead of the original mapping from holes to binary digits that resulted in the random-looking mapping, the new machines mapped digits to holes to produce a much simpler mapping, ø£½0@:$ABCDEFGHIJKLMNPQRSTUVWXYZ. Additionally, several commands that used the index registers had side effects that led to quirky programming, but these were modified to have no side effects. The original machines' JUMP instructions landed at a location "one before" the actual address, for reasons similar to the odd index behaviour, but these proved useful only in theory and quite annoying in practice, and were similarly modified. Input/output was also modified, with five-bit numbers being output least significant digit to the right, as is typical for most numeric writing. These, among other changes, greatly improved the ease of programming the newer machines.
The Mark 1/1* weighed .
At least seven of the Mark 1* machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. Another was installed at Avro, the aircraft manufacturers, at their Chadderton factory in Manchester. This was used for work on the Vulcan among other projects.
Conway Berners-Lee and Mary Lee Woods, the parents of Tim Berners-Lee, inventor of the World Wide Web, both worked on the Ferranti Mark 1 and Mark 1*.
Computer music
Included in the Ferranti Mark 1's instruction set was a hoot command, which enabled the machine to give auditory feedback to its operators. The sound generated could be altered in pitch, a feature which was exploited when the Mark 1 made the earliest known recording of computer-generated music, playing a medley which included "God Save the King", "Baa Baa Black Sheep", and "In the Mood". The recording was made by the BBC towards the end of 1951, with the programming being done by Christopher Strachey, a mathematics teacher at Harrow and a friend of Alan Turing. It was not, however, the first computer to have played music; CSIRAC, Australia's first digital computer, achieved that with a rendition of "Colonel Bogey".
Computer games
In November 1951, Dr. Dietrich Prinz wrote one of the earliest computer games, a chess-playing program for the Manchester Ferranti Mark 1 computer. The limitations of the Mark 1 computer did not allow a whole game of chess to be programmed, so Prinz could only program mate-in-two chess problems. The program examined every possible move for White and Black (thousands of possible moves) until a solution was found, which took 15–20 minutes for easy problems but several hours in general. The program's restrictions were: no castling, no double pawn move, no en passant capture, and no pawn promotion; nevertheless, it distinguished between checkmate and stalemate.
See also
History of computing hardware
List of vacuum-tube computers
Manchester computers
References
Notes
Citations
Bibliography
Further reading
External links
Ferranti Mark 1 at Computer50
A simulator of the Ferranti Mark 1, executing Christopher Strachey's Love letter algorithm from 1952
The Ferranti Mark 1* that went to Shell labs in Amsterdam, Netherlands (Dutch only), Google translation
Contains photo of the console
Programming Ferut in Transcode:
,
Early British computers
Ferranti
Ferranti computers
History of Manchester
History of science and technology in England
Department of Computer Science, University of Manchester
Vacuum tube computers
Serial computers | Ferranti Mark 1 | [
"Technology"
] | 1,648 | [
"Serial computers",
"Computers"
] |
571,903 | https://en.wikipedia.org/wiki/Tencent%20QQ | Tencent QQ (), also known as QQ, is an instant messaging software service and web portal developed by the Chinese technology company Tencent. QQ offers services that provide online social games, music, shopping, microblogging, movies, and group and voice chat software. As of March 2023, there were 597 million monthly active QQ accounts.
History
Tencent QQ was first released in China in February 1999 under the name of OICQ ("Open ICQ", a reference to the early IM service ICQ).
After the threat of a trademark infringement lawsuit by the AOL-owned ICQ, the product's name was changed to QQ (with "Q" and "QQ" used to imply "cute"). The software inherited existing functions from ICQ, and additional features such as software skins, people's images, and emoticons. QQ was first released as a "network paging" real-time communications service. Other features were later added, such as chatrooms, games, personal avatars (similar to "Meego" in MSN), online storage, and Internet dating services.
The official client runs on Microsoft Windows and a beta public version was launched for Mac OS X version 10.4.9 or newer. Formerly, two web versions, WebQQ (full version) and WebQQ Mini (Lite version), which made use of Ajax, were available. Development, support, and availability of WebQQ Mini, however, has since been discontinued. On 31 July 2008, Tencent released an official client for Linux, but this has not been made compatible with the Windows version and it is not capable of voice chat.
In response to competition with other instant messengers, such as Windows Live Messenger, Tencent released Tencent Messenger, which is aimed at businesses.
Membership
In 2002, Tencent stopped its free membership registration, requiring all new members to pay a fee. In 2003, however, this decision was reversed due to pressure from other instant messaging services such as Windows Live Messenger and Sina UC.
Tencent currently offers a premium membership scheme, where premium members enjoy features such as QQ mobile, ringtone downloads, and SMS sending/receiving. In addition, Tencent offers "Diamond" level memberships. Currently, there are seven diamond schemes available:
Red for the QQ Show service which features some superficial abilities such as having a colored account name.
Yellow to obtain extra storage and decorations in Qzone—a blog service.
Blue to obtain special abilities in the game-plays of QQ games.
Purple for obtaining special abilities in games including QQ Speed, QQ Nana, and QQ Tang
Pink for having different boosts in the pet-raising game called QQ Pet.
Green for using QQ music—a service for users to stream music online.
VIP for having extra features in the chat client such as removing advertisements
Black for gaining benefits related to DNF (Dungeon & Fighter), a multiplayer PC beat 'em up video game.
QQ Coin
The QQ Coin is a virtual currency used by QQ users to "purchase" QQ related items for their avatar and blog. QQ Coins are obtained either by purchase (one coin for one RMB) or by using the mobile phone service. Due to the popularity of QQ among young people in China, QQ Coins are accepted by online vendors in exchange for "real" merchandise such as small gifts. This has raised concerns of replacing (and thus "inflating") real currency in these transactions.
The People's Bank of China, China's central bank, tried to crack down on QQ Coins due to people using QQ Coins in exchange for real world goods. However, this only caused the value of QQ coins to rise as more and more third-party vendors started to accept them. Tencent claims the QQ Coin is a mere regular commodity, and is, therefore, not a currency.
Q Zone
Qzone is a social networking website based in China which was created by Tencent in 2005. Q Zone is a personal blog for QQ users. It can be set as a public page or a private friend-only page. Users can upload diaries and share photos.
QQ International
Windows
In 2009, QQ began to expand its services internationally with its QQ International client for Windows distributed through a dedicated English-language portal.
QQ International offers non-Mandarin speakers the opportunity to use most of the features of its Chinese counterpart to get in touch with other QQ users via chat, VoIP, and video calls, and it provides a non-Mandarin interface to access Qzone, Tencent's social network. The client supports English, French, Spanish, German, Korean, Japanese and Traditional Chinese.
One of the main features of QQ International is the optional and automatic machine translation in all chats.
Android
An Android version of QQ International was released in September 2013. The client's interface is in English, French, Spanish, German, Korean, Japanese and Traditional Chinese. In addition to text messaging, users can send each other images, videos, and audio media messages. Moreover, users can share multimedia content with all contacts through the client's Qzone interface.
The live translation feature is available for all incoming messages and supports up to 18 languages.
iOS
QQ International for iPhone and iOS devices was released at the end of 2013, fully equivalent to its Android counterpart.
Partnerships
In India, Tencent has partnered with ibibo to bring services such as chat, mail and game to the developing Indian internet sphere.
In Vietnam, Tencent has struck a deal with VinaGame to bring the QQ Casual Gaming portal as well as the QQ Messenger as an addition to the already thriving Vietnamese gaming communities.
In the United States, Tencent has partnered with AOL to bring QQ Games as a contender in the US social gaming market. Launched in 2007, QQ Games came bundled with the AIM installer, and competed with AOL's own games.com to provide a gaming experience for the AIM user base.
Web QQ
Tencent launched its web-based QQ formally on 15 September 2009, the latest version of which being 3.0. Rather than solely a web-based IM, WebQQ 3.0 functions more like its own operating system, with a desktop in which web applications can be added.
Social network website
In 2009, Tencent launched Xiaoyou (校友, 'schoolmate'), its first social network website. In mid-2010, Tencent changed direction and replaced Xiaoyou with Pengyou (朋友, 'friends'), trying to establish a more widespread network, to which extant QQ users could be easily redirected, hence giving Pengyou a major advantage over its competitors. Tencent's social network Qzone is linked to in the International and native versions of QQ.
Open source and cross-platform clients
Using reverse engineering, open source communities have come to understand the QQ protocol better and have attempted to implement client core libraries compatible with more user-friendly clients, free of advertisements. Most of these clients are cross-platform, so they are usable on operating systems which the official client does not support. However, these implementations had only a subset of functions of the official client and therefore were limited in features. Furthermore, QQ's parent company, Tencent, has over successive versions modified the QQ protocol to the extent that it can no longer be supported by most, and perhaps any, of the third-party implementations that were successful in the past (some of which are listed below). As of 2009, none of the developers of third-party clients have publicized any plans to restore QQ support.
Pidgin, an open source cross-platform multiprotocol client, with third-party plugin
Adium, an open source macOS client, with third-party plugin built on top of libqq-pidgin
Kopete, an open source multiprotocol client by KDE
Note: Kopete, old versions of Pidgin, and any other client whose QQ support was based on libpurple no longer supports QQ as of May 2011
Miranda NG, an open source multiprotocol client, designed for Microsoft Windows, with MirandaQQ2 plugin.
Eva
Merchandise
Tencent has taken advantage of the popularity of the QQ brand and has set up many Q-Gen stores selling QQ branded merchandise such as bags, watches, clothing as well as toy penguins.
Related characteristics
QQ account identifiers are purely numeric. Account numbers are selected randomly by the system when a user registers. Initially, registered QQ accounts had only 5 digits, while account numbers in use today run to as many as 12 digits. The first QQ number, 10001, is held by Ma Huateng.
A QQ membership usually lasts one month; if it expires without being renewed, the membership of the account is suspended.
In calculating "QQ Age", being logged in for 2 full hours counts as one full day, so roughly 700 hours of logged-in time increases the age by one year. In the 2012 version of QQ, users can see this age on the personal information page.
In 2004, Tencent launched the QQ hierarchy, which shows the level of a registered member. At first the hierarchy was based solely on the hours a member spent logged in to QQ: the longer the member stayed online, the higher the level they could attain. This was criticized for encouraging people to waste electricity by staying logged in for long hours, and, after the involvement of several departments, Tencent changed the basis of the calculation from hours to days.
Controversies and criticisms
Coral QQ
Coral QQ, a modification of Tencent QQ, is another add-on for the software, providing free access to some of the services and blocking Tencent's advertisements. In 2006, Tencent filed a copyright lawsuit against Chen Shoufu (aka Soft), the author of Coral QQ, after his distribution of a modified Tencent QQ was ruled illegal. Chen then published his modification as a separate add-on. On 16 August 2007, Chen was detained again for allegedly making profits from his ad-blocking add-on. The case resulted in a three-year prison sentence for Chen.
Dispute with Qihoo 360
In 2010, the Chinese anti-virus company Qihoo 360 analyzed the QQ protocol and accused QQ of automatically scanning users' computers and uploading their personal information to QQ's servers without the users' consent. In response, Tencent labelled 360 as malware and denied users who had installed 360 access to some of QQ's services. The Chinese Ministry of Industry and Information Technology reprimanded both companies for "improper competition" and ordered them to come to an agreement.
Government surveillance
Some observers have criticized QQ's compliance in the Chinese government's Internet surveillance and censorship. A 2013 report by Reporters Without Borders specifically mentioned QQ as allowing authorities to monitor online conversations for keywords or phrases and track participants by their user number.
Adware controversy
The Chinese version of QQ makes use of embedded advertisements. Older versions of the client have been branded as malicious adware by some antivirus and anti-spyware vendors.
Both the Chinese and international versions of QQ were tested in 2013; DrWeb, Zillya, NANO-Antivirus, and VBA32 gave positive results, most of them identifying it as a trojan.
Security
On March 6, 2015, QQ scored 2 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. It received points for having communications encrypted in transit and for having a recent independent security audit. It lost points because communications are not end-to-end encrypted, users can not verify contacts' identities, past messages are not secure if the encryption keys are stolen (i.e. the service does not provide forward secrecy), the code is not open to independent review (i.e. the code is not open-source), and the security design is not properly documented.
See also
Comparison of instant messaging clients
WeChat
References
External links
Android (operating system) software
IOS software
Windows instant messaging clients
Symbian instant messaging clients
Internet technology companies of China
Tencent
Cross-platform software
Digital currencies
Instant messaging clients
Internet properties established in 1999 | Tencent QQ | [
"Technology"
] | 2,598 | [
"Instant messaging",
"Instant messaging clients"
] |
571,941 | https://en.wikipedia.org/wiki/Ethnobotany | Ethnobotany is an interdisciplinary field at the interface of natural and social sciences that studies the relationships between humans and plants. It focuses on traditional knowledge of how plants are used, managed, and perceived in human societies. Ethnobotany integrates knowledge from botany, anthropology, ecology, and chemistry to study plant-related customs across cultures. Researchers in this field document and analyze how different societies use local flora for various purposes, including medicine, food, religious use, intoxicants, building materials, fuels and clothing. Richard Evans Schultes, often referred to as the "father of ethnobotany", provided an early definition of the discipline:
Since Schultes' time, ethnobotany has evolved from primarily documenting traditional plant knowledge to applying this information in modern contexts, particularly in pharmaceutical development. The field now addresses complex issues such as intellectual property rights and equitable benefit-sharing arrangements arising from the use of traditional knowledge.
History
The idea of ethnobotany was first proposed by the early 20th century botanist John William Harshberger. While Harshberger did perform ethnobotanical research extensively, including in areas such as North Africa, Mexico, Scandinavia, and Pennsylvania, it was not until Richard Evans Schultes began his trips into the Amazon that ethnobotany became a more widely known science. However, the practice of ethnobotany is thought to have much earlier origins, in the first century AD, when the Greek physician Pedanius Dioscorides wrote De Materia Medica, an extensive botanical text detailing the medicinal and culinary properties of "over 600 mediterranean plants". Historians note that Dioscorides wrote about traveling often throughout the Roman empire, including regions such as "Greece, Crete, Egypt, and Petra", and in doing so obtained substantial knowledge about the local plants and their useful properties. European botanical knowledge expanded drastically once the New World was discovered, owing in large part to ethnobotany. This expansion in knowledge can primarily be attributed to the substantial influx of new plants from the Americas, including crops such as potatoes, peanuts, avocados, and tomatoes. The French explorer Jacques Cartier learned a cure for scurvy (a tea made from the needles of a coniferous tree, likely spruce) from a local Iroquois tribe.
Medieval and Renaissance
During the medieval period, ethnobotanical studies were often conducted in connection with monasticism. However, most botanical knowledge was kept in gardens, such as physic gardens attached to hospitals and religious buildings. It was thought of in practical use terms for culinary and medical purposes and the ethnographic element was not studied as a modern anthropologist might approach ethnobotany today.
Age of Reason
In 1732, Carl Linnaeus carried out a research expedition in Scandinavia asking the Sami people about their ethnological usage of plants.
The Age of Enlightenment saw a rise in economic botanical exploration. Alexander von Humboldt collected data from the New World, and James Cook's voyages brought back collections and information on plants from the South Pacific. At this time major botanical gardens were started, for instance the Royal Botanic Gardens, Kew in 1759. The directors of the gardens sent out gardener-botanist explorers to care for and collect plants to add to their collections.
As the 18th century became the 19th, ethnobotany saw expeditions undertaken with more colonial aims rather than trade economics such as that of Lewis and Clarke which recorded both plants and the peoples encountered use of them. Edward Palmer collected material culture artifacts and botanical specimens from people in the North American West (Great Basin) and Mexico from the 1860s to the 1890s. Through all of this research, the field of "aboriginal botany" was established—the study of all forms of the vegetable world which aboriginal peoples use for food, medicine, textiles, ornaments and more.
Development and application in modern science
The first individual to study the emic perspective of the plant world was a German physician working in Sarajevo at the end of the 19th century: Leopold Glück. His published work on traditional medical uses of plants done by rural people in Bosnia (1896) has to be considered the first modern ethnobotanical work.
Other scholars analyzed uses of plants under an indigenous/local perspective in the 20th century: Matilda Coxe Stevenson, Zuni plants (1915); Frank Cushing, Zuni foods (1920); Keewaydinoquay Peschel, Anishinaabe fungi (1998), and the team approach of Wilfred Robbins, John Peabody Harrington, and Barbara Freire-Marreco, Tewa pueblo plants (1916).
In the beginning, ethnobotanical specimens and studies were not very reliable and sometimes not helpful, because botanists and anthropologists did not always collaborate in their work. Botanists focused on identifying species and how the plants were used, instead of concentrating upon how plants fit into people's lives. Anthropologists, on the other hand, were interested in the cultural role of plants and treated other scientific aspects superficially. In the early 20th century, botanists and anthropologists began to collaborate more closely, and the collection of reliable, detailed cross-disciplinary data began.
Beginning in the 20th century, the field of ethnobotany experienced a shift from the raw compilation of data to a greater methodological and conceptual reorientation. This is also the beginning of academic ethnobotany. The so-called "father" of this discipline is Richard Evans Schultes, even though he did not actually coin the term "ethnobotany". Today the field of ethnobotany requires a variety of skills: botanical training for the identification and preservation of plant specimens; anthropological training to understand the cultural concepts around the perception of plants; linguistic training, at least enough to transcribe local terms and understand native morphology, syntax, and semantics.
Mark Plotkin, who studied at Harvard University, the Yale School of Forestry and Tufts University, has contributed a number of books on ethnobotany. He completed a handbook for the Tirio people of Suriname detailing their medicinal plants; Tales of a Shaman's Apprentice (1994); The Shaman's Apprentice, a children's book with Lynne Cherry (1998); and Medicine Quest: In Search of Nature's Healing Secrets (2000).
Plotkin was interviewed in 1998 by South American Explorer magazine, just after the release of Tales of a Shaman's Apprentice and the IMAX movie Amazonia. In the book, he stated that he saw wisdom in both traditional and Western forms of medicine:
No medical system has all the answers—no shaman that I've worked with has the equivalent of a polio vaccine and no dermatologist that I've been to could cure a fungal infection as effectively (and inexpensively) as some of my Amazonian mentors. It shouldn't be the doctor versus the witch doctor. It should be the best aspects of all medical systems (ayurvedic, herbalism, homeopathic, and so on) combined in a way which makes health care more effective and more affordable for all.
A great deal of information about the traditional uses of plants is still intact with tribal peoples. But the native healers are often reluctant to accurately share their knowledge to outsiders. Schultes actually apprenticed himself to an Amazonian shaman, which involves a long-term commitment and genuine relationship. In Wind in the Blood: Mayan Healing & Chinese Medicine by Garcia et al. the visiting acupuncturists were able to access levels of Mayan medicine that anthropologists could not because they had something to share in exchange. Cherokee medicine priest David Winston describes how his uncle would invent nonsense to satisfy visiting anthropologists.
Another scholar, James W. Herrick, who studied under ethnologist William N. Fenton, in his work Iroquois Medical Ethnobotany (1995) with Dean R. Snow (editor), professor of Anthropology at Penn State, explains that understanding herbal medicines in traditional Iroquois cultures is rooted in a strong and ancient cosmological belief system. Their work describes Iroquois perceptions and conceptions of illness and imbalances, which can manifest in physical forms ranging from benign maladies to serious diseases. It also includes a large compilation of Herrick's field work from numerous Iroquois authorities, covering over 450 names, uses, and preparations of plants for various ailments. Traditional Iroquois practitioners had (and have) a sophisticated perspective on the plant world that contrasts strikingly with that of modern medical science.
Researcher Cassandra Quave at Emory University has used ethnobotany to address the problems that arise from antibiotic resistance. Quave notes that the advantage of medical ethnobotany over Western medicine rests in the difference in mechanism: elmleaf blackberry extract, for example, works by preventing the bacteria from cooperating with one another rather than by killing them outright.
Issues
Many instances of gender bias have occurred in ethnobotany, creating the risk of drawing erroneous conclusions. Anthropologists would often consult primarily with men. In Las Pavas, a small farming community in Panama, anthropologists drew conclusions about the entire community's use of plants from conversations and lessons held mostly with men. They consulted with 40 families, but the women participated only rarely in interviews and never joined them in the field. Due to the division of labor, knowledge of wild plants used for food, medicine, and fibers, among other purposes, was left out of the picture, resulting in a distorted view of which plants were actually important to the community.
Ethnobotanists have also assumed that ownership of a resource means familiarity with that resource. In some societies women are excluded from owning land, while being the ones who work it. Inaccurate data can come from interviewing only the owners.
Other issues include ethical concerns regarding interactions with indigenous populations, and the International Society of Ethnobiology has created a code of ethics to guide researchers.
Scientific journals
Journal of Ethnobiology and Ethnomedicine
Economic Botany
Ethnobotany Research and Application
Journal of Ethnopharmacology
Indian Journal of Traditional Knowledge (IJTK)
Latin American and Caribbean Bulletin of Medicinal and Aromatic Plants
See also
Society for Ethnobotany
Agroecology
Anthropology
Botany
Economic botany
Ethnobiology
Ethnomedicine
Ethnomycology
History of plant systematics
Ethnobotany of Poland
Medical ethnobotany of India
List of ethnobotanists
Non-timber forest product
Phytogeography
Plant Resources of Tropical Africa
Plants in culture
Traditional ecological knowledge
References
External links
"Before Warm Springs Dam: History of Lake Sonoma Area" This California study has information about one of the first ethnobotanical mitigation projects undertaken in the USA.
Grow Your Own Drugs, a BBC 2 Programme presented by ethnobotanist James Wong.
Phytochemical and Ethnobotanical Databases
Ethnobotanical Database of Bangladesh (EDB)
Native American Ethnobotany
North Dakota Ethnobotany Database
Websites on ethnobotany and plants
Howard P. The Major Importance of 'Minor' Resources: Women and plant biodiversity. 2003
Pharmacognosy
Traditional knowledge | Ethnobotany | [
"Chemistry"
] | 2,283 | [
"Pharmacology",
"Pharmacognosy"
] |
571,993 | https://en.wikipedia.org/wiki/Simulcast | Simulcast (a portmanteau of simultaneous broadcast) is the broadcasting of programs or events across more than one resolution, bitrate or medium, or more than one service on the same medium, at exactly the same time (that is, simultaneously). For example, Absolute Radio is simulcast on both AM and on satellite radio. Likewise, the BBC's Prom concerts were formerly simulcast on both BBC Radio 3 and BBC Television. Another application is the transmission of the original-language soundtrack of movies or TV series over local or Internet radio, with the television broadcast having been dubbed into a local language.
Early radio simulcasts
Before the introduction of stereo radio, experiments were conducted in which the left and right channels were transmitted on different radio channels. The earliest record found was a broadcast by the BBC in 1926 of a Halle Orchestra concert from Manchester, using the wavelengths of the regional stations and Daventry.
In its earliest days, the BBC often transmitted the same programme on the "National Service" and the "Regional Network".
An early use of the word "simulcast" is from 1925.
Between 1990 and 1994, the BBC broadcast a channel of entertainment (Radio 5) which offered a wide range of simulcasts, taking programmes from the BBC World Service and Radio 1, 2, 3 and 4 for simultaneous broadcast.
Simulcasting to provide stereo sound for TV broadcasts
Before stereo TV sound transmission was possible, simulcasting on TV and radio was a method of effectively transmitting "stereo" sound to music TV broadcasts. Typically, an FM frequency in the broadcast area for viewers to tune their stereo systems to would be displayed on the screen. The Grateful Dead's 1970 concert "Great Canadian Train Ride" was the first TV broadcast of a live concert with an FM simulcast. In the 1970s WPXI in Pittsburgh broadcast a live Boz Scaggs performance whose audio was simultaneously broadcast on two FM radio stations to create a quadraphonic sound, the first of its kind. The first such transmission in the United Kingdom was on 14 November 1972, when the BBC broadcast a live classical concert from the Royal Albert Hall on both BBC2 and Radio 3. The first pop/rock simulcast was almost two years later, a recording of Van Morrison's London Rainbow Concert simultaneously on BBC2 TV and Radio 2 (see It's Too Late to Stop Now) on 27 May 1974.
Similarly, in the 1980s, before Multichannel Television Sound or home theater was commonplace in American households, broadcasters would air a high fidelity version of a television program's audio portion over FM stereo simultaneous with the television broadcast. PBS stations were the most likely to use this technique, especially when airing a live concert. It was also a way of allowing MTV and similar music channels to run stereo sound through the cable-TV network. This method required a stereo FM transmitter modulating MTV's stereo soundtrack through the cable-TV network, and customers connecting their FM receiver's antenna input to the cable-TV outlet. They would then tune the FM receiver to the specified frequency that would be published in documentation supplied by the cable-TV provider.
With the introduction of commercial FM stations in Australia in July 1980, commercial TV channels began simulcasting some music based programs with the new commercial FM stations and continued to do so into the early 1990s. These were initially rock based programs, such as late night music video shows and rock concerts, but later included some major rock musicals such as The Rocky Horror Picture Show and The Blues Brothers when they first aired on TV. During the mid-1980s the final Australian concert of several major rock artists such as Dire Straits were simulcast live on a commercial TV and FM station. The ABC also simulcast some programs on ABC Television and ABC FM, including the final concert of Elton John with the Melbourne Symphony Orchestra.
In South Africa, the SABC radio station Radio 2000 was established in 1986 to simulcast SABC 1 programming, especially imported American and British television shows, in their original English, before South Africa adopted a stereo standard which allowed secondary audio tracks through the television spectrum.
The first cable TV concert simulcast was Frank Zappa's Halloween show (31 October 1981), live from NYC's Palladium and shown on MTV with the audio-only portion simulcast over FM's new "Starfleet Radio" network. Engineered by Mark G. Pinske with the UMRK mobile recording truck. A later, notable application for simulcasting in this context was the Live Aid benefit concert that was broadcast around the world on 13 July 1985. Most destinations where this concert was broadcast had the concert simulcast by at least one TV network and at least one of the local FM stations.
Most stereo-capable video recorders made through the 1980s and early 1990s had a "simulcast" recording mode where they recorded video signals from the built-in TV tuner and audio signals from the VCR's audio line-in connectors. This was to allow one to connect a stereo FM tuner that is tuned to the simulcast frequency to the VCR's audio input in order to record the stereo sound of a TV program that would otherwise be recorded in mono. The function was primarily necessary with stereo VCRs that didn't have a stereo TV tuner or were operated in areas where stereo TV broadcasting wasn't in place. This was typically selected through the user setting the input selector to "Simulcast" or "Radio" mode or, in the case of some JVC units, the user setting another "audio input" switch from "TV" or "Tuner" to "Line".
In the mid to late 1990s, video game developer Nintendo utilized simulcasting to provide enhanced orchestral scoring and voice-acting for the first ever "integrated radio-games" – its Satellaview video games. Whereas digital game data was broadcast to the Satellaview unit to provide the basic game and game sounds, Nintendo's partner, satellite radio company St.GIGA, simultaneously broadcast the musical and vocal portion of the game via radio. These two streams were combined at the Satellaview to provide a unified audiotrack analogous to stereo.
Other uses
The term "simulcast" (describing simultaneous radio/television broadcast) was coined in 1948 by a press agent at WCAU-TV, Philadelphia. NBC and CBS had begun broadcasting a few programs both to their established nationwide radio audience and to the much smaller—though steadily-growing—television audience. NBC's "Voice of Firestone" is sometimes mentioned in this regard, but NBC's "Voice of Firestone Televues" program, reaching a small Eastern audience beginning in 1943, was a TV-only show, distinct from the radio "Voice of Firestone" broadcasts. Actual TV-AM radio simulcasts of the very same "Voice of Firestone" program began only on 5 September 1949. A documented candidate for first true simulcast may well be NBC's "We the People." Toscanini's NBC Symphony performance of 15 March 1952 is perhaps a first instance of radio/TV simulcasting of a concert, predating the much-heralded rock concert simulcasts beginning in the 1980s. It could, however, be argued that these Toscanini presentations—with admission controlled by NBC, as with all its programming—were no more "public concerts" than NBC's "Voice of Firestone" broadcasts beginning in 1949, or its "Band of America" programs, which were simulcast starting 17 October 1949. Likewise Toscanini's simulcast NBC presentation of two acts of Verdi's "Aida" on 3 April 1949.
Presently, in the United States, simulcast most often refers to the practice of offering the same programming on an FM and AM station owned by the same entity, in order to cut costs. With the advent of solid state AM transmitters and computers, it has become very easy for AM stations to broadcast a different format without additional cost; therefore, simulcast between FM/AM combinations are rarely heard today outside of rural areas, and in urban areas, where often the talk radio, sports radio, or all-news radio format of an AM station is simulcast on FM, mainly for the convenience of listeners in office buildings in urban cores which easily block AM signals, as well as those with FM-only tuners. In another case, popular programs will be aired simultaneously on different services in adjacent countries, such as animated sitcom The Simpsons, airing Sunday evenings at 8:00 p.m. (Eastern and Pacific times) on both Fox in the United States and Global (1989 to 2018) and Citytv (2018 to 2021) in Canada and entertainment show Ant & Dec's Saturday Night Takeaway, airing Saturday nights at various times between 7:00 pm and 7:30 pm on ITV in the United Kingdom and Virgin Media One in the Republic of Ireland.
During apartheid in South Africa, many foreign programmes on SABC television were dubbed in Afrikaans. The original soundtrack, usually in English, but sometimes in German or Dutch was available on the Radio 2000 service. This could be selected using a button labeled simulcast on many televisions manufactured before 1995.
Radio programs have been simulcast on television since the early days of the medium; in recent years, perhaps the most visible example of a radio show on television is The Howard Stern Show, which currently airs on Sirius Satellite Radio as well as Howard TV. Another prominent radio show that was simulcast on television is Imus in the Morning, which, until the simulcast ended in 2015, aired throughout the years on MSNBC, RFD-TV and Fox Business Network, in addition to its radio broadcast distributed by Citadel Media. Multiple sports talk radio shows, including Mike & Mike, The Herd with Colin Cowherd and Boomer and Carton, are also carried on television, saving those networks the burden of having to air encores of sporting events or other paid sports programming which may draw lower audiences. In New Zealand, breakfast programme The AM Show airs on television channel Three and was simulcast on radio station Magic Talk; both networks were owned and operated by MediaWorks New Zealand until December 2020, when Three was sold to Discovery, Inc. In 2022, the programme was rebranded as AM and ceased simulcasting on Magic Talk, becoming a TV-only format.
Following the acquisition of the assets of the professional wrestling promotion World Championship Wrestling (WCW) by the rival World Wrestling Federation (WWF), a segment simulcast between their two flagship programs—WCW Monday Nitro on TNT (which was airing its series finale from Panama City) and the WWF's Raw on TNN (from Cleveland)—on March 26, 2001, featured WWF owner Vince McMahon addressing the sale, only for his son Shane McMahon to reveal in-universe that he had bought WCW instead, setting up an "Invasion" storyline to begin integrating WCW talent and championships into WWF.
It is not uncommon for broadcasters to simulcast a particular program (such as a marquee event or special) across all of their networks as a "roadblock" in an effort to maximize ratings by preventing self-cannibalizing counterprogramming; for example, Paramount Global (and corporate predecessor Viacom) has simulcast award shows produced by its flagship properties across its cable channels, such as the MTV Video Music Awards and Nickelodeon Kids' Choice Awards. Certain events—particularly major charity appeals (such as Hope for Haiti Now and Stand Up to Cancer)—may be jointly simulcast by a consortium of networks in order to ensure a wide audience.
Simulcasting of sporting events
In sports, such as American football and baseball, simulcasts are when a single announcer broadcasts play-by-play coverage both over television and radio. The practice was common in the early years of television, but since the 1980s, most teams have used a separate team for television and for radio. In the National Hockey League, two teams currently use a simulcast:
The Buffalo Sabres, with play-by-play announcer Rick Jeanneret or Dan Dunleavy and analyst Rob Ray via MSG Western New York
The Dallas Stars, with play-by-play announcer Josh Bogorad and analyst Daryl Reaugh via Bally Sports Southwest
Al McCoy (Phoenix), Chick Hearn (Los Angeles), Kevin Calabro (Seattle) and Rod Hundley (Utah) were the last National Basketball Association team broadcasters to be simulcast. Until his retirement in 2016, the first three innings of Vin Scully's commentary for Los Angeles Dodgers home and NL West road games were simulcast on radio and television, with the remainder of the game called by Scully exclusively for television viewers. For the final game before his retirement, Scully's commentary was simulcast on the radio for the entirety of the game.
In the 2021 season, the Toronto Blue Jays broadcast the audio of the Sportsnet play-by-play with Dan Shulman (who has previously been a radio voice for MLB on ESPN Radio) and Buck Martinez over their radio network in what was stated to be a COVID-19-related measure. Media outlets disputed the decision and felt it was actually a cost-cutting move by Blue Jays and Sportsnet owner Rogers Communications, as the team had maintained dedicated radio broadcasts in 2020 with a remote crew.
As all NFL television broadcasts are done by the national networks or via cable, there are no regular TV-to-radio football simulcasts. In order to ensure that all of a particular team's games are available on free-to-air television in their home market, NFL rules require that games not aired by a broadcast television network (including cable networks and streaming platforms) be simulcast on a broadcast station in the main market of each participating team.
In greyhound racing and horse racing, a simulcast is a broadcast of a greyhound or horse race which allows wagering at two or more sites; the simulcast often involves the transmission of wagering information to a central site, so that all bettors may bet in the same betting pool, as well as the broadcast of the race, or bet from home as they watch on a network such as TVG Network or the Racetrack Television Network.
The regional sports network MASN previously used simulcasts for MLB games played between the Baltimore Orioles and Washington Nationals—regional rivals who share the same market and broadcaster. MASN and MASN2 simulcast a single feed of the games with a commentary team featuring personalities from both teams, featuring Jim Hunter and Bob Carpenter alternating play-by-play duties, and the teams' color commentators. This arrangement ended in 2014, with both channels now originating their own Orioles- and Nationals-specific telecasts as normal.
A more recent trend by sports broadcasts have been alternate feeds offering different viewing options, including specialty camera angles, alternative commentary, or enhanced in-game statistics and analysis. In 2021, ESPN introduced a simulcast of selected Monday Night Football games featuring Eli and Peyton Manning, joined by celebrity guests; the success of these broadcasts prompted ESPN to extend the format to other sports, with the Mannings' production company Omaha Productions being involved in some of these broadcasts.
Distribution of channels
On cable television systems, analog-digital simulcasting (ADS) means that analog channels are duplicated as digital subchannels. Digital tuners are programmed to use the digital subchannel instead of the analog. This allows for smaller, cheaper cable boxes by eliminating the analog tuner and some analog circuitry. On DVRs, it eliminates the need for an MPEG encoder to convert the analog signal to digital for recording. The primary advantage is the elimination of interference, and as analog channels are dropped, the ability to put 10 or more SDTV (or two HDTV, or various other combinations) channels in its place. The primary drawback is the common problem of over-compression (quantity over quality) resulting in fuzzy pictures and pixelation.
Multiplexing—also sometimes called "multicasting"—is something of a reversal of this situation, where multiple program streams are combined into a single broadcast. The two terms are sometimes confused.
In universities with multiple campuses, simulcasting may be used for a single teacher to teach class to students in two or more locations at the same time, using videoconferencing equipment.
In many public safety agencies, simulcast refers to the broadcasting of the same transmission on the same frequency from multiple towers either simultaneously, or offset by a fixed number of microseconds. This allows for a larger coverage area without the need for a large number of channels, resulting in increased spectral efficiency. This comes at the cost of overall poorer voice quality, as multiple sources increase multipath interference significantly, resulting in what is called simulcast distortion.
See also
Single Channel Simulcast
Digital distribution, Video on demand and Streaming media: In English language anime distribution, the word "simulcast" is often misused to refer to the online release of a Japanese animated television series during the same period as in Japan.
References
Broadcast engineering
Radio broadcasting
Television terminology
1940s neologisms | Simulcast | [
"Engineering"
] | 3,395 | [
"Broadcast engineering",
"Electronic engineering"
] |
572,293 | https://en.wikipedia.org/wiki/51%20Pegasi | 51 Pegasi (abbreviated 51 Peg), formally named Helvetios , is a Sun-like star located from Earth in the constellation of Pegasus. It was the first main-sequence star found to have an exoplanet (designated 51 Pegasi b, officially named Dimidium) orbiting it.
Properties
The star's apparent magnitude is 5.49, making it visible with the naked eye under suitable viewing conditions.
51 Pegasi was listed as a standard star for the spectral type G2IV in The Perkins catalog of revised MK types for the cooler stars (1989). Historically, it was generally given a stellar classification of G5V, and even in more modern catalogues it is usually listed as a main-sequence star. The NStars project assigns it a G2V spectral class. It is generally considered to still be generating energy through the thermonuclear fusion of hydrogen at its core, but to be in a more evolved state than the Sun. The effective temperature of the chromosphere is about , giving 51 Pegasi the characteristic yellow hue of a G-type star. It is estimated to be about 4.8 billion years old, about the same age as the Sun, with a radius about 1.15 times that of the Sun and 9% more mass. The star has a higher proportion of elements other than hydrogen/helium compared to the Sun, a quantity astronomers term the star's metallicity. Stars with higher metallicity such as this are more likely to host giant planets. In 1996, astronomers Baliunas, Sokoloff, and Soon measured a rotational period of 37 days for 51 Pegasi.
Although the star was suspected of being variable during a 1981 study, subsequent observation showed there was almost no chromospheric activity between 1977 and 1989. Further examination between 1994 and 2007 showed a similar low or flat level of activity. This, along with its relatively low X-ray emission, suggests that the star may be in a Maunder minimum period during which a star produces a reduced number of star spots.
The star rotates at an inclination of 79 degrees relative to Earth.
Nomenclature
51 Pegasi is the Flamsteed designation. On its discovery, the star's planet — and actually the first exoplanet discovered around a main-sequence star — was designated 51 Pegasi b by its discoverers and unofficially dubbed Bellerophon, in keeping with the convention of naming planets after Greek and Roman mythological figures (Bellerophon was a figure from Greek mythology who rode the winged horse Pegasus).
In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the names of Helvetios for this star and Dimidium for its planet.
The names were those submitted by the Astronomische Gesellschaft Luzern, Switzerland. "Helvetios" is Latin for "the Helvetian" and refers to the Celtic tribe that lived in Switzerland during antiquity; 'Dimidium' is Latin for 'half', referring to the planet's mass of at least half the mass of Jupiter.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognized the names of exoplanets and their host stars approved by the Executive Committee Working Group Public Naming of Planets and Planetary Satellites, including the names of stars adopted during the 2015 NameExoWorlds campaign. This star is now so entered in the IAU Catalog of Star Names.
Planetary system
On October 6, 1995, Swiss astronomers Michel Mayor and Didier Queloz announced the discovery of an exoplanet orbiting 51 Pegasi. The discovery was made at Observatoire de Haute-Provence in France. On 8 October 2019, Mayor and Queloz shared the Nobel Prize in Physics for their discovery.
51 Pegasi b (51 Peg b) was the first discovered exoplanet around a main-sequence star. It orbits very close to the star, experiences estimated temperatures around and has a mass at least half that of Jupiter. At the time of its discovery, this close distance was not compatible with theories of planet formation and resulted in discussions of planetary migration. However, several hot Jupiters are now known to be oblique relative to the stellar axis.
See also
Star systems
47 Ursae Majoris
55 Cancri
70 Virginis
PSR B1257+12
Tau Boötis
Upsilon Andromedae
Other articles
Lists of exoplanets
Solar analog
References
External links
51 Pegasi at SolStation.com.
nStars database entry
David Darling's encyclopedia
G-type main-sequence stars
Pegasi, 51
Maunder Minimum
Planetary systems with one confirmed planet
Pegasus (constellation)
BD+19 5036
Pegasi, 51
0882
217014
113357
8729 | 51 Pegasi | [
"Astronomy"
] | 1,030 | [
"Maunder Minimum",
"Magnetism in astronomy",
"Pegasus (constellation)",
"Constellations"
] |
572,352 | https://en.wikipedia.org/wiki/Complete%20partial%20order | In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science: in denotational semantics and domain theory.
Definitions
The term complete partial order, abbreviated cpo, has several possible meanings depending on context.
A partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum. (A subset of a partial order is directed if it is non-empty and every pair of elements has an upper bound in the subset.) In the literature, dcpos sometimes also appear under the label up-complete poset.
A pointed directed-complete partial order (pointed dcpo, sometimes abbreviated cppo), is a dcpo with a least element (usually denoted ). Formulated differently, a pointed dcpo has a supremum for every directed or empty subset. The term chain-complete partial order is also used, because of the characterization of pointed dcpos as posets in which every chain has a supremum.
A related notion is that of ω-complete partial order (ω-cpo). These are posets in which every ω-chain (x1 ≤ x2 ≤ x3 ≤ …) has a supremum that belongs to the poset. The same notion can be extended to other cardinalities of chains.
Every dcpo is an ω-cpo, since every ω-chain is a directed set, but the converse is not true. However, every ω-cpo with a basis is also a dcpo (with the same basis). An ω-cpo (dcpo) with a basis is also called a continuous ω-cpo (or continuous dcpo).
Note that complete partial order is never used to mean a poset in which all subsets have suprema; the terminology complete lattice is used for this concept.
Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as limits of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory.
The dual notion of a directed-complete partial order is called a filtered-complete partial order. However, this concept occurs far less frequently in practice, since one usually can work on the dual order explicitly.
By analogy with the Dedekind–MacNeille completion of a partially ordered set, every partially ordered set can be extended uniquely to a minimal dcpo.
Examples
Every finite poset is directed complete.
All complete lattices are also directed complete.
For any poset, the set of all non-empty filters, ordered by subset inclusion, is a dcpo. Together with the empty filter it is also pointed. If the order has binary meets, then this construction (including the empty filter) actually yields a complete lattice.
Every set S can be turned into a pointed dcpo by adding a least element ⊥ and introducing a flat order with ⊥ ≤ s and s ≤ s for every s in S and no other order relations.
The set of all partial functions on some given set S can be ordered by defining f ≤ g if and only if g extends f, i.e. if the domain of f is a subset of the domain of g and the values of f and g agree on all inputs for which they are both defined. (Equivalently, f ≤ g if and only if f ⊆ g where f and g are identified with their respective graphs.) This order is a pointed dcpo, where the least element is the nowhere-defined partial function (with empty domain). In fact, ≤ is also bounded complete. This example also demonstrates why it is not always natural to have a greatest element. (A small computational sketch of this order appears after this list.)
The set of all linearly independent subsets of a vector space V, ordered by inclusion.
The set of all partial choice functions on a collection of non-empty sets, ordered by restriction.
The set of all prime ideals of a ring, ordered by inclusion.
The specialization order of any sober space is a dcpo.
Let us use the term "deductive system" for a set of sentences closed under consequence (for defining the notion of consequence, let us use e.g. Alfred Tarski's algebraic approach). There are interesting theorems concerning the set of deductive systems being a directed-complete partial ordering. Also, the set of deductive systems can be chosen to have a least element in a natural way (so that it can also be a pointed dcpo), because the set of all consequences of the empty set (i.e. "the set of the logically provable/logically valid sentences") is (1) a deductive system and (2) contained in all deductive systems.
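To make the partial-function example above concrete, here is a minimal sketch (all names are illustrative, not from the text) that models partial functions on a set S as Python dicts, checks the extension order, and takes the supremum of a directed family as the union of graphs:

```python
from functools import reduce

def leq(f: dict, g: dict) -> bool:
    """f <= g  iff  g extends f: g is defined wherever f is and agrees with it there."""
    return all(k in g and g[k] == f[k] for k in f)

def directed_sup(family) -> dict:
    """Supremum of a directed family of partial functions: the union of their graphs.
    Directedness guarantees pairwise compatibility, so the union is again a function."""
    return reduce(lambda acc, f: {**acc, **f}, family, {})

bottom = {}                                  # the nowhere-defined partial function
f = {1: "a"}
g = {1: "a", 2: "b"}

assert leq(bottom, f) and leq(f, g)          # bottom <= f <= g
assert directed_sup([bottom, f, g]) == g     # the sup of this chain is its top element
```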
Characterizations
An ordered set is a dcpo if and only if every non-empty chain has a supremum. As a corollary, an ordered set is a pointed dcpo if and only if every (possibly empty) chain has a supremum, i.e., if and only if it is chain-complete. Proofs rely on the axiom of choice.
Alternatively, an ordered set P is a pointed dcpo if and only if every order-preserving self-map of P has a least fixpoint.
Continuous functions and fixed-points
A function f between two dcpos P and Q is called (Scott) continuous if it maps directed sets to directed sets while preserving their suprema:
f(D) ⊆ Q is directed for every directed D ⊆ P.
f(sup D) = sup f(D) for every directed D ⊆ P.
Note that every continuous function between dcpos is a monotone function.
This notion of continuity is equivalent to the topological continuity induced by the Scott topology.
The set of all continuous functions between two dcpos P and Q is denoted [P → Q]. Equipped with the pointwise order, this is again a dcpo, and pointed whenever Q is pointed.
Thus the complete partial orders with Scott-continuous maps form a cartesian closed category.
Every order-preserving self-map f of a pointed dcpo (P, ⊥) has a least fixed-point. If f is continuous then this fixed-point is equal to the supremum of the iterates (⊥, f(⊥), f(f(⊥)), …, f^n(⊥), …) of ⊥ (see also the Kleene fixed-point theorem).
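As a concrete (and purely illustrative) instance of this iteration, the following sketch computes a least fixed point on the finite pointed dcpo of subsets of a node set ordered by inclusion, with ⊥ = ∅; the monotone map adds a start node and all successors of nodes already reached, so its least fixed point is the set of nodes reachable from the start. The graph and function names are made up for the example.

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ...  On a finite poset
    the ascending chain stabilises, and its limit is the least fixed point."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: nodes reachable from 'a' in a small directed graph.
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}

def step(reached: frozenset) -> frozenset:
    # Monotone map on the powerset of the node set: the start node plus all successors.
    return frozenset({"a"}) | frozenset(n for m in reached for n in edges[m])

print(sorted(least_fixed_point(step, frozenset())))   # ['a', 'b', 'c']
```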
Another fixed point theorem is the Bourbaki-Witt theorem, stating that if f is a function from a dcpo to itself with the property that x ≤ f(x) for all x, then f has a fixed point. This theorem, in turn, can be used to prove that Zorn's lemma is a consequence of the axiom of choice.
See also
Algebraic posets
Scott topology
Completeness
Notes
References
Order theory
| Complete partial order | [
"Mathematics"
] | 1,490 | [
"Order theory"
] |
572,382 | https://en.wikipedia.org/wiki/Continuity%20correction | In mathematics, a continuity correction is an adjustment made when a discrete object is approximated using a continuous object.
Examples
Binomial
If a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then

P(X ≤ x)

for any x ∈ {0, 1, 2, ... n}. If np and np(1 − p) are large (sometimes taken as both ≥ 5), then the probability above is fairly well approximated by

P(Y ≤ x + 1/2)

where Y is a normally distributed random variable with the same expected value and the same variance as X, i.e., E(Y) = np and var(Y) = np(1 − p). This addition of 1/2 to x is a continuity correction.
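A quick numerical check of this approximation (an illustrative script, not part of the article; the parameter values are arbitrary), using SciPy:

```python
# Exact binomial CDF versus the normal approximation, with and without
# the 1/2 continuity correction.
from math import sqrt
from scipy.stats import binom, norm

n, p, x = 40, 0.3, 10
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom.cdf(x, n, p)                 # P(X <= x)
uncorrected = norm.cdf(x, mu, sigma)       # P(Y <= x)
corrected = norm.cdf(x + 0.5, mu, sigma)   # P(Y <= x + 1/2)

print(exact, uncorrected, corrected)       # the corrected value is markedly closer to the exact one
```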
Poisson
A continuity correction can also be applied when other discrete distributions supported on the integers are approximated by the normal distribution. For example, if X has a Poisson distribution with expected value λ then the variance of X is also λ, and

P(X ≤ x) ≈ P(Y ≤ x + 1/2)

if Y is normally distributed with expectation and variance both λ.
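The Poisson case can be checked in the same illustrative way (again with arbitrary parameter values):

```python
from math import sqrt
from scipy.stats import poisson, norm

lam, x = 20, 15
exact = poisson.cdf(x, lam)                     # P(X <= x)
uncorrected = norm.cdf(x, lam, sqrt(lam))       # P(Y <= x)
corrected = norm.cdf(x + 0.5, lam, sqrt(lam))   # P(Y <= x + 1/2)
print(exact, uncorrected, corrected)
```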
Applications
Before the ready availability of statistical software having the ability to evaluate probability distribution functions accurately, continuity corrections played an important role in the practical application of statistical tests in which the test statistic has a discrete distribution: it had a special importance for manual calculations. A particular example of this is the binomial test, involving the binomial distribution, as in checking whether a coin is fair. Where extreme accuracy is not necessary, computer calculations for some ranges of parameters may still rely on using continuity corrections to improve accuracy while retaining simplicity.
See also
Yates's correction for continuity
Wilson score interval with continuity correction
References
Devore, Jay L., Probability and Statistics for Engineering and the Sciences, Fourth Edition, Duxbury Press, 1995.
Feller, W., On the normal approximation to the binomial distribution, The Annals of Mathematical Statistics, Vol. 16 No. 4, Page 319–329, 1945.
Theory of probability distributions
Computational statistics | Continuity correction | [
"Mathematics"
] | 433 | [
"Computational statistics",
"Computational mathematics"
] |
572,464 | https://en.wikipedia.org/wiki/Tomales%20Bay | Tomales Bay is a long, narrow inlet of the Pacific Ocean in Marin County in northern California in the United States.
Geography
Tomales Bay is approximately long and averages nearly wide, with relatively shallow depths averaging 18 ft, effectively separating the Point Reyes Peninsula from the mainland of Marin County. It is located approximately northwest of San Francisco. The bay forms the eastern boundary of Point Reyes National Seashore. Tomales Bay is recognized for protection by the California Bays and Estuaries Policy. On its northern end, it opens out onto Bodega Bay, which shelters it from the direct currents of the Pacific (especially the California Current). The bay is formed along a submerged portion of the San Andreas Fault. The fault divides the Point Reyes Peninsula through Tomales Bay in the north, and the Bolinas Lagoon in the south. The Bear Valley Visitor Center in Point Reyes Station is home to the Earthquake Trail, where visitors can see a visible rift formed on the fault during the 1906 San Francisco earthquake.
Towns bordering Tomales Bay include Inverness, Tomales, Inverness Park, Point Reyes Station, and Marshall. Additional hamlets include Nick's Cove, Spengers, Duck Cove, Shallow Beach, and Vilicichs. Dillon Beach lies just to the north of the mouth of the bay, and Tomales just to the east.
Beaches
Surf-free beaches on the bay monitored by the California State Parks department include Heart's Desire, Shell Beach, Indian Beach, Pebble Beach, and Millerton Point. Most beaches require a hike in, so visitors should wear suitable walking shoes. Swimming, picnicking, sailing, kayaking, motorboating, and fishing are all popular activities on the bay.
Water sports, oystering, and fishing
Watercraft may be launched on Tomales Bay from the public boat ramp at Nick's Cove, north of Marshall. The sandbar at the mouth of Tomales Bay is notoriously dangerous, with a long history of small-boat accidents.
Oyster farming is a major industry on the bay. The two largest producers are Hog Island Oyster Company and Tomales Bay Oyster Company, both of which retail oysters to the public and have picnic grounds on the east shore. Hillsides east of Tomales Bay are grazed by cows belonging to local dairies. There is also grazing land west of the bay, on farms and ranches leased from Point Reyes National Seashore.
The California Office of Environmental Health Hazard Assessment (OEHHA) has developed a safe eating advisory for fish caught here, based on levels of mercury or PCBs found in local species.
Biology
The bay is home to many aquatic species, and its habitat diversity is supported by eelgrass beds and intertidal mudflats. In the bay's waters, bony and cartilaginous fish species including halibut, coho salmon, bat rays and leopard sharks can be found. Along muddy parts of the bay's shore, it is common to find gastropods such as the invasive False Cerith snail, recognizable by its dextrally coiled shell and brown-gray pattern.
History
Coast Miwok
The area surrounding Tomales Bay was once the territory of the Coast Miwok tribe. Documented villages in the area included Echa-kolum (south of Marshall), Sakloki (opposite Tomales Point), Shotommo-wi (near the mouth of the Estero de San Antonio), and Utumia (near Tomales). The tribe's history is deeply rooted in the bay and its surrounding areas. Fishing and hunting supported their livelihood, and shells and clams collected from the bay's shore served as currency.
Francis Drake
Francis Drake is thought to have landed in nearby Drakes Estero in 1579. Members of the Vizcaíno Expedition found the Bay in 1603, and thinking it a river, named it Rio Grande de San Sebastian.
European settlements
Early 19th-century settlements constituted the southernmost Russian colony in North America and were spread over an area stretching from Point Arena to Tomales Bay.
Railroad
The narrow gauge North Pacific Coast Railroad from Sausalito was constructed along the east side of the bay in 1874 and extended to the Russian River until it was dismantled in 1930.
Preservation efforts
Tomales Bay State Park was formed to preserve some of the bay shore; it opened to the public in 1952.
The Ramsar Convention, signed in 1971, listed Tomales Bay as a wetland of international importance.
The Giacomini Wetland Restoration Project, completed in 2008, returned to wetland several hundred acres at the south end of the bay that had been drained for grazing in the 1940s.
Lodge at Marconi
The Marconi State Historical Park (formerly Marconi Conference Center State Historic Park) preserves a small hotel built in 1913 by Guglielmo Marconi to house personnel who staffed his transpacific radio station nearby. RCA purchased the station from Marconi in 1920, and it closed in 1939, though other nearby radio stations on the Point Reyes Peninsula still operate today. It was purchased by a private foundation and given to the state in 1984 to operate as a conference center.
Gallery
See also
Hog Island (Tomales Bay)
Drakes Bay — adjacent to the south
Nova Albion
Pacific herring
References
External links
Tomales Bay SP
Marconi Conference Center SHP
Marconi Conference Center
Bays of California
Bays of Marin County, California
West Marin
Landforms of the San Francisco Bay Area
Places with bioluminescence
Ramsar sites in the United States | Tomales Bay | [
"Chemistry",
"Biology"
] | 1,107 | [
"Places with bioluminescence",
"Bioluminescence"
] |
572,498 | https://en.wikipedia.org/wiki/Bell%20polynomials | In combinatorial mathematics, the Bell polynomials, named in honor of Eric Temple Bell, are used in the study of set partitions. They are related to Stirling and Bell numbers. They also occur in many applications, such as in Faà di Bruno's formula.
Definitions
Exponential Bell polynomials
The partial or incomplete exponential Bell polynomials are a triangular array of polynomials given by
where the sum is taken over all sequences j1, j2, j3, ..., jn−k+1 of non-negative integers such that these two conditions are satisfied:
The sum
is called the nth complete exponential Bell polynomial.
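The displayed sum and its two index conditions have not survived in this text. As an illustration only, the following sketch computes the partial exponential Bell polynomials directly from the standard definition (a sum over non-negative integers j1, ..., j_{n-k+1} with j1 + j2 + ... = k and j1 + 2j2 + 3j3 + ... = n); the helper name and the use of SymPy are our choices, not part of the original article.

```python
# Partial (incomplete) exponential Bell polynomial B_{n,k} computed directly
# from the defining sum.  Illustrative sketch; the constraints below are the
# standard index conditions, restated here because the displayed formula is
# missing from this text.
from math import factorial
from itertools import product
import sympy as sp

def bell_partial(n, k):
    """Return B_{n,k}(x1, ..., x_{n-k+1}) as a SymPy expression."""
    xs = sp.symbols(f"x1:{n - k + 2}")          # x1 .. x_{n-k+1}
    result = sp.Integer(0)
    for js in product(range(k + 1), repeat=n - k + 1):
        # keep only sequences with sum j_i = k and sum i*j_i = n
        if sum(js) != k or sum(i * j for i, j in enumerate(js, start=1)) != n:
            continue
        term = sp.Integer(factorial(n))
        for i, j in enumerate(js, start=1):
            term *= (xs[i - 1] / factorial(i)) ** j / factorial(j)
        result += term
    return sp.expand(result)

print(bell_partial(3, 2))   # expected (up to term order): 3*x1*x2
print(bell_partial(6, 2))   # expected: 6*x1*x5 + 15*x2*x4 + 10*x3**2
```

Running it reproduces, for example, B3,2 = 3x1x2 and B6,2 = 6x1x5 + 15x2x4 + 10x3^2, matching the worked examples discussed below.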
Ordinary Bell polynomials
Likewise, the partial ordinary Bell polynomial is defined by
where the sum runs over all sequences j1, j2, j3, ..., jn−k+1 of non-negative integers such that
Thanks to the first condition on indices, we can rewrite the formula as
where we have used the multinomial coefficient.
The ordinary Bell polynomials can be expressed in the terms of exponential Bell polynomials:
In general, Bell polynomial refers to the exponential Bell polynomial, unless otherwise explicitly stated.
Combinatorial meaning
The exponential Bell polynomial encodes the information related to the ways a set can be partitioned. For example, if we consider a set {A, B, C}, it can be partitioned into two non-empty, non-overlapping subsets, which are also referred to as parts or blocks, in 3 different ways:
{{A}, {B, C}}
{{B}, {A, C}}
{{C}, {B, A}}
Thus, we can encode the information regarding these partitions as
Here, the subscripts of B3,2 tell us that we are considering the partitioning of a set with 3 elements into 2 blocks. The subscript of each xi indicates the presence of a block with i elements (or block of size i) in a given partition. So here, x2 indicates the presence of a block with two elements. Similarly, x1 indicates the presence of a block with a single element. The exponent of xij indicates that there are j such blocks of size i in a single partition. Here, the fact that both x1 and x2 have exponent 1 indicates that there is only one such block in a given partition. The coefficient of the monomial indicates how many such partitions there are. Here, there are 3 partitions of a set with 3 elements into 2 blocks, where in each partition the elements are divided into two blocks of sizes 1 and 2.
Since any set can be divided into a single block in only one way, the above interpretation would mean that Bn,1 = xn. Similarly, since there is only one way that a set with n elements can be divided into n singletons, Bn,n = x1^n.
As a more complicated example, consider
This tells us that if a set with 6 elements is divided into 2 blocks, then we can have 6 partitions with blocks of size 1 and 5, 15 partitions with blocks of size 4 and 2, and 10 partitions with 2 blocks of size 3.
The sum of the subscripts in a monomial is equal to the total number of elements. Thus, the number of monomials that appear in the partial Bell polynomial is equal to the number of ways the integer n can be expressed as a summation of k positive integers. This is the same as the integer partition of n into k parts. For instance, in the above examples, the integer 3 can be partitioned into two parts as 2+1 only. Thus, there is only one monomial in B3,2. However, the integer 6 can be partitioned into two parts as 5+1, 4+2, and 3+3. Thus, there are three monomials in B6,2. Indeed, the subscripts of the variables in a monomial are the same as those given by the integer partition, indicating the sizes of the different blocks. The total number of monomials appearing in a complete Bell polynomial Bn is thus equal to the total number of integer partitions of n.
Also the degree of each monomial, which is the sum of the exponents of each variable in the monomial, is equal to the number of blocks the set is divided into. That is, j1 + j2 + ... = k . Thus, given a complete Bell polynomial Bn, we can separate the partial Bell polynomial Bn,k by collecting all those monomials with degree k.
Finally, if we disregard the sizes of the blocks and put all xi = x, then the summation of the coefficients of the partial Bell polynomial Bn,k will give the total number of ways that a set with n elements can be partitioned into k blocks, which is the same as the Stirling numbers of the second kind. Also, the summation of all the coefficients of the complete Bell polynomial Bn will give us the total number of ways a set with n elements can be partitioned into non-overlapping subsets, which is the same as the Bell number.
In general, if the integer n is partitioned into a sum in which "1" appears j1 times, "2" appears j2 times, and so on, then the number of partitions of a set of size n that collapse to that partition of the integer n when the members of the set become indistinguishable is the corresponding coefficient in the polynomial.
Examples
For example, we have
because the ways to partition a set of 6 elements as 2 blocks are
6 ways to partition a set of 6 as 5 + 1,
15 ways to partition a set of 6 as 4 + 2, and
10 ways to partition a set of 6 as 3 + 3.
Similarly,
because the ways to partition a set of 6 elements as 3 blocks are
15 ways to partition a set of 6 as 4 + 1 + 1,
60 ways to partition a set of 6 as 3 + 2 + 1, and
15 ways to partition a set of 6 as 2 + 2 + 2.
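These counts can be confirmed by brute force. The sketch below (illustrative only, with helper names of our choosing) enumerates every partition of a 6-element set into 2 blocks and tallies the multiset of block sizes.

```python
# Brute-force check of the block-size counts quoted above for B_{6,2}.
from collections import Counter

def set_partitions(elements):
    """Yield all partitions of `elements` (a list) as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # place `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + partition

tally = Counter()
for blocks in set_partitions(list(range(6))):
    if len(blocks) == 2:
        sizes = tuple(sorted(len(b) for b in blocks))
        tally[sizes] += 1

print(tally)   # expected: {(1, 5): 6, (2, 4): 15, (3, 3): 10}
```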
Table of values
Below is a triangular array of the incomplete Bell polynomials:
Properties
Generating function
The exponential partial Bell polynomials can be defined by the double series expansion of its generating function:
In other words, or what amounts to the same thing, by the series expansion of the k-th power:
The complete exponential Bell polynomial is defined by , or in other words:
Thus, the n-th complete Bell polynomial is given by
Likewise, the ordinary partial Bell polynomial can be defined by the generating function
Or, equivalently, by series expansion of the k-th power:
See also generating function transformations for Bell polynomial generating function expansions of compositions of sequence generating functions and powers, logarithms, and exponentials of a sequence generating function. Each of these formulas is cited in the respective sections of Comtet.
Recurrence relations
The complete Bell polynomials can be recurrently defined as
with the initial value B0 = 1.
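The displayed recurrence has not survived in this text; the sketch below assumes the standard form Bm+1 = sum over i = 0..m of C(m, i) Bm-i xi+1, with B0 = 1 as stated, and builds the first few complete Bell polynomials symbolically (helper names are ours).

```python
# Complete exponential Bell polynomials built from the recurrence assumed above:
#   B_0 = 1,  B_{m+1} = sum_{i=0..m} C(m, i) * B_{m-i} * x_{i+1}.
from math import comb
import sympy as sp

def complete_bell_list(n):
    """Return [B_0, B_1, ..., B_n] as SymPy expressions in x1, ..., xn."""
    xs = sp.symbols(f"x1:{n + 1}")
    B = [sp.Integer(1)]                    # B_0 = 1
    for m in range(n):                     # build B_{m+1} from B_0, ..., B_m
        B.append(sp.expand(sum(comb(m, i) * B[m - i] * xs[i] for i in range(m + 1))))
    return B

for m, poly in enumerate(complete_bell_list(4)):
    print(f"B_{m} =", poly)
# B_3 = x1**3 + 3*x1*x2 + x3
# B_4 = x1**4 + 6*x1**2*x2 + 4*x1*x3 + 3*x2**2 + x4
```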
The partial Bell polynomials can also be computed efficiently by a recurrence relation:
where
In addition:
When ,
The complete Bell polynomials also satisfy the following recurrence differential formula:
Derivatives
The partial derivatives of the complete Bell polynomials are given by
Similarly, the partial derivatives of the partial Bell polynomials are given by
If the arguments of the Bell polynomials are one-dimensional functions, the chain rule can be used to obtain
Stirling numbers and Bell numbers
The value of the Bell polynomial Bn,k(x1,x2,...) on the sequence of factorials equals an unsigned Stirling number of the first kind:
The sum of these values gives the value of the complete Bell polynomial on the sequence of factorials:
The value of the Bell polynomial Bn,k(x1,x2,...) on the sequence of ones equals a Stirling number of the second kind:
The sum of these values gives the value of the complete Bell polynomial on the sequence of ones:
which is the nth Bell number.
which gives the Lah number.
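Both identities involving the sequence of ones are easy to check numerically. The sketch below (our own helper, evaluating the defining sum with all xi = 1) reproduces the Stirling numbers of the second kind S(6, k) and the Bell number B6 = 203.

```python
# Evaluating the partial Bell polynomial at x_i = 1 gives a Stirling number of
# the second kind; the row sum gives a Bell number.  Illustrative sketch.
from math import factorial
from itertools import product

def bell_partial_at_ones(n, k):
    """B_{n,k}(1, 1, ..., 1), computed directly from the defining sum."""
    total = 0
    for js in product(range(k + 1), repeat=n - k + 1):
        if sum(js) != k or sum(i * j for i, j in enumerate(js, start=1)) != n:
            continue
        denom = 1
        for i, j in enumerate(js, start=1):
            denom *= factorial(i) ** j * factorial(j)
        total += factorial(n) // denom
    return total

n = 6
row = [bell_partial_at_ones(n, k) for k in range(1, n + 1)]
print(row)        # Stirling numbers S(6, k): [1, 31, 90, 65, 15, 1]
print(sum(row))   # Bell number B_6 = 203
```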
Touchard polynomials
Touchard polynomial can be expressed as the value of the complete Bell polynomial on all arguments being x:
Inverse relations
If we define
then we have the inverse relationship
More generally, given some function admitting an inverse ,
Determinant forms
The complete Bell polynomial can be expressed as determinants:
and
Convolution identity
For sequences xn, yn, n = 1, 2, ..., define a convolution by:
The bounds of summation are 1 and n − 1, not 0 and n.
Let be the nth term of the sequence
Then
For example, let us compute . We have
and thus,
Other identities
which gives the idempotent number.
.
The complete Bell polynomials satisfy the binomial type relation:
This corrects the omission of the factor in Comtet's book.
Special cases of partial Bell polynomials:
Examples
The first few complete Bell polynomials are:
Applications
Faà di Bruno's formula
Faà di Bruno's formula may be stated in terms of Bell polynomials as follows:
Similarly, a power-series version of Faà di Bruno's formula may be stated using Bell polynomials as follows. Suppose
Then
In particular, the complete Bell polynomials appear in the exponential of a formal power series:
which also represents the exponential generating function of the complete Bell polynomials on a fixed sequence of arguments .
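As a concrete check of Faà di Bruno's formula, the sketch below compares the fourth derivative of f(g(x)) with the sum over k of f^(k)(g(x)) times Bn,k(g', g'', ...) for one particular choice of f and g; the choice f = sin, g = e^x + x^2 is arbitrary, and the helper recomputes the partial Bell polynomials from their defining sum.

```python
# Symbolic spot check of Faà di Bruno's formula:
#   d^n/dx^n f(g(x)) = sum_{k=1..n} f^(k)(g(x)) * B_{n,k}(g', g'', ..., g^(n-k+1)).
from math import factorial
from itertools import product
import sympy as sp

def bell_partial(n, k, args):
    """B_{n,k} evaluated at the sequence args = (a1, a2, ..., a_{n-k+1})."""
    total = sp.Integer(0)
    for js in product(range(k + 1), repeat=n - k + 1):
        if sum(js) != k or sum(i * j for i, j in enumerate(js, start=1)) != n:
            continue
        term = sp.Integer(factorial(n))
        for i, j in enumerate(js, start=1):
            term *= (args[i - 1] / factorial(i)) ** j / factorial(j)
        total += term
    return total

x, u = sp.symbols("x u")
f, g = sp.sin, sp.exp(x) + x**2          # arbitrary smooth f and g
n = 4

lhs = sp.diff(f(g), x, n)
g_derivs = [sp.diff(g, x, i) for i in range(1, n + 1)]              # g', g'', ...
rhs = sum(sp.diff(f(u), u, k).subs(u, g) * bell_partial(n, k, g_derivs)
          for k in range(1, n + 1))

print(sp.simplify(lhs - rhs))   # expected output: 0
```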
Reversion of series
Let two functions f and g be expressed in formal power series as
such that g is the compositional inverse of f defined by g(f(w)) = w or f(g(z)) = z. If f0 = 0 and f1 ≠ 0, then an explicit form of the coefficients of the inverse can be given in term of Bell polynomials as
with and is the rising factorial, and
Asymptotic expansion of Laplace-type integrals
Consider the integral of the form
where (a,b) is a real (finite or infinite) interval, λ is a large positive parameter and the functions f and g are continuous. Let f have a single minimum in [a,b] which occurs at x = a. Assume that as x → a+,
with α > 0, Re(β) > 0; and that the expansion of f can be term wise differentiated. Then, Laplace–Erdelyi theorem states that the asymptotic expansion of the integral I(λ) is given by
where the coefficients cn are expressible in terms of an and bn using partial ordinary Bell polynomials, as given by Campbell–Froman–Walles–Wojdylo formula:
Symmetric polynomials
The elementary symmetric polynomial and the power sum symmetric polynomial can be related to each other using Bell polynomials as:
These formulae allow one to express the coefficients of a monic polynomial in terms of the Bell polynomials of its zeroes. For instance, together with the Cayley–Hamilton theorem, they lead to an expression of the determinant of an n × n square matrix A in terms of the traces of its powers:
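The displayed relation has not survived in this text; one standard form is det(A) = ((-1)^n / n!) Bn(-p1, -1!·p2, ..., -(n-1)!·pn) with pk = tr(A^k), and the sketch below checks it numerically. This particular sign and scaling convention is our assumption.

```python
# Numerical check of a determinant-from-traces relation of the kind referred to
# above, in the form  det(A) = ((-1)^n / n!) * B_n(-p_1, -1!*p_2, ..., -(n-1)!*p_n)
# with p_k = tr(A^k).  The convention used is an assumption on our part.
from math import comb, factorial
import numpy as np

def complete_bell_numeric(xs):
    """B_n evaluated at the numbers xs = [x1, ..., xn], via the recurrence."""
    B = [1.0]
    for m in range(len(xs)):
        B.append(sum(comb(m, i) * B[m - i] * xs[i] for i in range(m + 1)))
    return B[-1]

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

p = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, n + 1)]   # p_k = tr(A^k)
xs = [-factorial(k - 1) * p[k - 1] for k in range(1, n + 1)]
det_from_traces = (-1) ** n / factorial(n) * complete_bell_numeric(xs)

print(det_from_traces, np.linalg.det(A))   # the two values should agree
```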
Cycle index of symmetric groups
The cycle index of the symmetric group can be expressed in terms of complete Bell polynomials as follows:
Moments and cumulants
The sum
is the nth raw moment of a probability distribution whose first n cumulants are κ1, ..., κn. In other words, the nth moment is the nth complete Bell polynomial evaluated at the first n cumulants. Likewise, the nth cumulant can be given in terms of the moments as
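The displayed formula for cumulants in terms of moments has not survived in this text. The forward direction, moments as complete Bell polynomials of the cumulants, can be checked symbolically; the sketch below does so for a normal distribution, whose cumulants are mu, sigma^2, 0, 0, ... (helper names are ours).

```python
# Moments from cumulants: the nth raw moment equals B_n(kappa_1, ..., kappa_n),
# checked against moments obtained by differentiating the moment generating
# function of N(mu, sigma^2).  Illustrative sketch.
from math import comb
import sympy as sp

def complete_bell(args):
    """B_n at args = (x1, ..., xn), via the recurrence B_{m+1} = sum C(m,i) B_{m-i} x_{i+1}."""
    B = [sp.Integer(1)]
    for m in range(len(args)):
        B.append(sp.expand(sum(comb(m, i) * B[m - i] * args[i] for i in range(m + 1))))
    return B[-1]

mu, sigma, t = sp.symbols("mu sigma t", positive=True)
mgf = sp.exp(mu * t + sigma**2 * t**2 / 2)          # MGF of N(mu, sigma^2)

for n in range(1, 6):
    cumulants = ([mu, sigma**2] + [0] * (n - 2))[:n]
    moment_from_bell = complete_bell(cumulants)
    moment_from_mgf = sp.diff(mgf, t, n).subs(t, 0)
    print(n, sp.simplify(moment_from_bell - moment_from_mgf))   # expected: 0 each time
```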
Hermite polynomials
Hermite polynomials can be expressed in terms of Bell polynomials as
where xi = 0 for all i > 2; thus allowing for a combinatorial interpretation of the coefficients of the Hermite polynomials. This can be seen by comparing the generating function of the Hermite polynomials
with that of Bell polynomials.
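Under the probabilists' convention (an assumption on our part, since the displayed identity is missing here), the relation reads Hen(x) = Bn(x, -1, 0, ..., 0); the physicists' polynomials obey an analogous identity with rescaled arguments. The sketch below checks the probabilists' form against the three-term recurrence for Hen.

```python
# Check of He_n(x) = B_n(x, -1, 0, ..., 0) for the probabilists' Hermite
# polynomials, against the recurrence He_0 = 1, He_1 = x,
# He_{n+1} = x*He_n - n*He_{n-1}.  Illustrative sketch; convention assumed.
from math import comb
import sympy as sp

def complete_bell(args):
    """B_n at args = (x1, ..., xn), via the standard recurrence."""
    B = [sp.Integer(1)]
    for m in range(len(args)):
        B.append(sp.expand(sum(comb(m, i) * B[m - i] * args[i] for i in range(m + 1))))
    return B[-1]

x = sp.symbols("x")
He = [sp.Integer(1), x]
for m in range(1, 6):
    He.append(sp.expand(x * He[m] - m * He[m - 1]))   # He_2 .. He_6

for n in range(1, 7):
    args = ([x, sp.Integer(-1)] + [0] * (n - 2))[:n]
    print(n, sp.expand(complete_bell(args) - He[n]))  # expected: 0 for every n
```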
Representation of polynomial sequences of binomial type
For any sequence a1, a2, …, an of scalars, let
Then this polynomial sequence is of binomial type, i.e. it satisfies the binomial identity
Example: For a1 = … = an = 1, the polynomials represent Touchard polynomials.
More generally, we have this result:
Theorem: All polynomial sequences of binomial type are of this form.
If we define a formal power series
then for all n,
Software
Bell polynomials are implemented in:
Mathematica as BellY
Maple as IncompleteBellB
SageMath as bell_polynomial
See also
Bell matrix
Exponential formula
Notes
References
(also contains an elementary review of the concept of Bell polynomials)
Enumerative combinatorics
Polynomials | Bell polynomials | [
"Mathematics"
] | 2,574 | [
"Polynomials",
"Enumerative combinatorics",
"Algebra",
"Combinatorics"
] |
572,813 | https://en.wikipedia.org/wiki/Sphericon | In solid geometry, the sphericon is a solid that has a continuous developable surface with two congruent, semi-circular edges, and four vertices that define a square. It is a member of a special family of rollers that, while being rolled on a flat surface, bring all the points of their surface to contact with the surface they are rolling on. It was discovered independently by carpenter Colin Roberts (who named it) in the UK in 1969, by dancer and sculptor Alan Boeding of MOMIX in 1979, and by inventor David Hirsch, who patented it in Israel in 1980.
Construction
The sphericon may be constructed from a bicone (a double cone) with an apex angle of 90 degrees, by splitting the bicone along a plane through both apexes, rotating one of the two halves by 90 degrees, and reattaching the two halves.
Alternatively, the surface of a sphericon can be formed by cutting and gluing a paper template in the form of four circular sectors (with central angles ) joined edge-to-edge.
Geometric properties
The surface area of a sphericon with radius r is given by
2√2 π r^2.
The volume is given by
(2/3) π r^3,
exactly half the volume of a sphere with the same radius.
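Both formulas follow from viewing the sphericon as a rearranged bicone: each constituent cone has base radius r, height r, and slant height r√2, and the cut-and-rotate construction changes neither the total lateral surface nor the volume. A short numerical check (illustrative only):

```python
# Numerical check of the surface area and volume given above, treating the
# sphericon as two half-bicones reassembled.
from math import pi, sqrt, isclose

r = 2.5
cone_lateral_area = pi * r * (r * sqrt(2))   # pi * base radius * slant height
cone_volume = pi * r**2 * r / 3              # (1/3) * base area * height

surface_area = 2 * cone_lateral_area         # two cones' lateral surfaces
volume = 2 * cone_volume                     # the cut-and-rotate step preserves volume

print(isclose(surface_area, 2 * sqrt(2) * pi * r**2))    # True
print(isclose(volume, (2 / 3) * pi * r**3))               # True
print(isclose(volume, 0.5 * (4 / 3) * pi * r**3))         # True: half a sphere's volume
```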
History
Around 1969, Colin Roberts (a carpenter from the UK) made a sphericon out of wood while attempting to carve a Möbius strip without a hole.
In 1979, David Hirsch invented a device for generating a meander motion. The device consisted of two perpendicular half discs joined at their axes of symmetry. While examining various configurations of this device, he discovered that the form created by joining the two half discs, exactly at their diameter centers, is actually a skeletal structure of a solid made of two half bicones, joined at their square cross-sections with an offset angle of 90 degrees, and that the two objects have exactly the same meander motion. Hirsch filed a patent in Israel in 1980, and a year later, a pull toy named Wiggler Duck, based on Hirsch's device, was introduced by Playskool Company.
In 1999, Colin Roberts sent Ian Stewart a package containing a letter and two sphericon models. In response, Stewart wrote an article "Cone with a Twist" in his Mathematical Recreations column of Scientific American. This sparked considerable interest in the shape, which has since been used by Tony Phillips to develop theories about mazes. Roberts' name for the shape, the sphericon, was taken by Hirsch as the name for his company, Sphericon Ltd.
In popular culture
In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, a skeletal version of the sphericon. He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances. The company's later piece "Dream Catcher" is based around a similar Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid, a similar rolling shape formed from two perpendicular circles each passing through the center of the other.
In 2008, British woodturner David Springett published the book "Woodturning Full Circle", which explains how sphericons (and other unusual solid forms, such as streptohedrons) can be made on a wood lathe.
References
External links
Sphericon construction animation at the National Curve Bank website.
Paper model of a sphericon Make a sphericon
Sphericon variations using regular polygons with different numbers of sides
A Sphericon in Motion showing the characteristic wobbly motion as it rolls across a flat surface
Geometric shapes | Sphericon | [
"Mathematics"
] | 775 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
573,016 | https://en.wikipedia.org/wiki/Office%20supplies | Office supplies are consumables and equipment regularly used in offices by businesses and other organizations, by individuals engaged in written communications, recordkeeping or bookkeeping, janitorial and cleaning, and for storage of supplies or data. The range of items classified as office supplies varies, and typically includes small, expendable, daily use items, consumable products, small machines, higher cost equipment such as computers, as well as office furniture and art.
Typical products
Office supplies are typically divided by type of product and general use. Some of the many different office supply products include
Blank sheet paper: various sizes from small notes to letter and poster-size; various thicknesses from tissue paper to 120 pound; construction paper; photocopier and inkjet printer paper;
Preprinted forms: time cards, tax reporting forms (1099, W-2), "while you were out" pads, desk and wall calendars;
Label and adhesive paper: name tags, file folder labels, post-it notes, and address labels;
Media: ink and toner cartridges; memory cards and flash drives;
Communication equipment: desk telephones, cell phones, and VOIP adapters; Wi-Fi adapters, ethernet cable, network routers and switches;
Paper in roll or reel form: label tape, fax machine thermal paper, and adding machine tape;
Educational and entertainment items: books (business, time management and self-help), tax, business application and game software, desk accessories such as a Newton's cradle;
Mechanical fasteners: paper clips, binder clips, staples;
Chemical fasteners: duct tape, transparent tape, glue, mucilage;
Comestibles: usually on-the-go snacks such as coffee, cookies, candy, chips, pretzels, trail mixes, and other snacks;
Janitorial supplies: mops, buckets, wastebaskets, recycling bins, brooms, soap, air fresheners, disinfectants, detergents, paper towels, and toilet paper;
Merchant supplies: price tags; time clocks; credit card processing machines and cash registers;
Small machines: hole punches, rubber stamps, numbering machines, staplers, pencil sharpeners, and laminators;
Containers: binders, envelopes, boxes, crates, shelves, folders, and desk organizers;
Writing pads and books: notebooks, composition books, legal pads, and steno pads;
Writing utensils and corrections: pens, pencils, paints, markers, correction fluid, correction tape, and erasers;
Higher-cost equipment: computers, printers, fax machines and photocopiers;
Office furniture: office chairs, cubicles, anti-static mats, rugs, filing cabinets, and armoire desks.
Office food e.g. convenience food, bottled water
Common supplies and office equipment items before the advent of suitably priced word processing machines and PCs in the 1970s and 1980s were: typewriters, slide rules, calculators, adding machines, carbon- and carbonless paper.
Many businesses in the office supply industry have recently expanded into related markets for businesses like copy centers, which facilitate the creation and printing of business collateral such as business cards and stationery, plus printing and binding of high quality, high volume business and engineering documents. Some businesses also provide services for shipping, including packaging and bulk mailing and even offer diverse services like screen printing, office coffee, office fruit and office grocery delivery. In addition, many retail chains sell related supplies beyond businesses and regularly market their stores as a center for school supplies with August and early September being a major retail period for back to school sales.
Market size
The global office supplies market, valued at USD 151.46 billion in 2022, is projected to witness a 2.1% compound annual growth rate (CAGR) from 2023 to 2030. The industry's expansion is attributed to the flourishing global services sector and increased product consumption in education. Rising environmental consciousness is driving consumers towards sustainable sourcing, production, and packaging to minimize carbon footprint. However, the COVID-19 crisis adversely affected the market, with lockdowns and social distancing measures leading to decreased demand for traditional office supplies in corporate settings worldwide.
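For illustration, compounding the 2022 base figure at the quoted 2.1% rate gives a rough sense of the projection; the values produced below are simple extrapolations, not reported statistics.

```python
# Rough projection of the figures quoted above: compound the 2022 base value
# at the stated 2.1% CAGR.  Illustrative arithmetic only.
base_year, base_value_usd_bn = 2022, 151.46
cagr = 0.021

for year in (2023, 2025, 2030):
    projected = base_value_usd_bn * (1 + cagr) ** (year - base_year)
    print(year, round(projected, 2))   # 2030 comes out near 179 (billion USD)
```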
See also
List of office supply companies in the United States
References
Office equipment
Mass media technology | Office supplies | [
"Technology"
] | 901 | [
"Information and communications technology",
"Mass media technology"
] |
573,054 | https://en.wikipedia.org/wiki/Niter | Niter or nitre is the mineral form of potassium nitrate, KNO3. It is a soft, white, highly soluble mineral found primarily in arid climates or cave deposits.
Historically, the term niter was not well differentiated from natron, both of which have been very vaguely defined but generally refer to compounds of sodium or potassium joined with carbonate or nitrate ions.
Characteristics
Niter is a colorless to white mineral crystallizing in the orthorhombic crystal system. It is the mineral form of potassium nitrate, , and is soft (Mohs hardness 2), highly soluble in water, and easily fusible. Its crystal structure resembles that of aragonite, with potassium replacing calcium and nitrate replacing carbonate. It occurs in the soils of arid regions and as massive encrustations and efflorescent growths on cavern walls and ceilings where solutions containing alkali potassium and nitrate seep into the openings. It occasionally occurs as prismatic acicular crystal groups, and individual crystals commonly show pseudohexagonal twinning on [110]. Niter and other nitrates can also form in association with deposits of guano and similar organic materials.
History and etymology
Niter as a term has been known since ancient times, although there is much historical confusion with natron (an impure sodium carbonate/bicarbonate), and not all of the ancient salts known by this name or similar names in the ancient world contained nitrate. The name is from the Ancient Greek from Ancient Egyptian , related to the Hebrew , for salt-derived ashes (their interrelationship is not clear).
The Hebrew may have been used as, or in conjunction with soap, as implied by Jeremiah 2:22, "For though thou wash thee with niter, and take thee much soap..." However, it is not certain which substance (or substances) the Biblical "neter" refers to, with some suggesting sodium carbonate.
The Neo-Latin word for sodium, , is derived from this same class of desert minerals called (French) through Spanish from Greek (), derived from Ancient Egyptian , referring to the sodium carbonate salts occurring in the deserts of Egypt, not the nitratine (nitrated sodium salts) typically occurring in the deserts of Chile (classically known as "Chilean saltpeter" and variants of this term).
A term (, or aphronitre) which translates as "foam of niter" was a regular purchase in a fourth-century AD series of financial accounts, and since it was expressed as being "for the baths" was probably used as soap.
Niter was used to refer specifically to nitrated salts known as various types of saltpeter (only nitrated salts were good for making gunpowder) by the time niter and its derivative nitric acid were first used to name the element nitrogen, in 1790.
Availability
Because of its ready solubility in water, niter is most often found in arid environments and often in conjunction with other soluble minerals like halides, iodates, borates, gypsum, and rarer carbonates and sulphates. Potassium and other nitrates are of great importance for use in fertilizers and, historically, gunpowder. Much of the world's demand is now met by synthetically produced nitrates, though the natural mineral is still mined and is still of significant commercial value.
Niter occurs naturally in certain places like the "Caves of Salnitre" (Collbató) known since the Neolithic. In the "Cova del Rat Penat", guano (bat excrements) deposited over thousands of years became saltpeter after being leached by the action of rainwater.
In 1783, Giuseppe Maria Giovene and Alberto Fortis together discovered a "natural nitrary" in a doline close to Molfetta, Italy, named Pulo di Molfetta. The two scientists discovered that niter formed inside the walls of the caves of the doline, under certain conditions of humidity and temperature. After the discovery, it was suggested that the niter be used as manure to increase agricultural production, rather than to make gunpowder. The discovery was challenged by scholars until the chemist Giuseppe Vairo and his pupil Antonio Pitaro confirmed it. Naturalists sent by academies from all over Europe came in large numbers to visit the site; since niter is a fundamental ingredient in the production of gunpowder, these deposits were of considerable strategic interest, and the government started extraction. Shortly thereafter, Giovene discovered niter in other caves of Apulia. The remnants of the extraction plant are a site of industrial archaeology, although they are currently not open to tourists.
Similar minerals
Related minerals are soda niter (sodium nitrate), ammonia niter or gwihabaite (ammonium nitrate), nitrostrontianite (strontium nitrate), nitrocalcite (calcium nitrate), nitromagnesite (magnesium nitrate), nitrobarite (barium nitrate) and two copper nitrates, gerhardtite and buttgenbachite; in fact all of the natural elements in the first three columns of the periodic table and numerous other cations form nitrates, which are uncommonly found for the reasons given, but have been described.
See also
Nitratine - Sodium based fertilizer
References
External links
Etymology of "niter"
Poe's The Cask of Amontillado
Nitrate minerals
Potassium minerals
Orthorhombic minerals
Minerals in space group 36
Potash
History of mining
Nitrogen | Niter | [
"Chemistry"
] | 1,174 | [
"Potash",
"Salts"
] |
573,174 | https://en.wikipedia.org/wiki/Interior%20design | Interior design is the art and science of enhancing the interior of a building to achieve a healthier and more aesthetically pleasing environment for the people using the space. With a keen eye for detail and a creative flair, an interior designer is someone who plans, researches, coordinates, and manages such enhancement projects. Interior design is a multifaceted profession that includes conceptual development, space planning, site inspections, programming, research, communicating with the stakeholders of a project, construction management, and execution of the design.
History and current terms
In the past, interiors were put together instinctively as a part of the process of building.
The profession of interior design has been a consequence of the development of society and the complex architecture that has resulted from the development of industrial processes.
The pursuit of effective use of space, user well-being and functional design has contributed to the development of the contemporary interior design profession. The profession of interior design is separate and distinct from the role of interior decorator, a term commonly used in the US; the term is less common in the UK, where the profession of interior design is still unregulated and therefore, strictly speaking, not yet officially a profession.
In ancient India, architects would also function as interior designers. This can be seen from the references to Vishwakarma the architect, one of the gods in Indian mythology. In these architects' designs of 17th-century Indian homes, sculptures depicting ancient texts and events are seen inside the palaces, while during medieval times wall art paintings were a common feature of palace-like mansions in India commonly known as havelis. While most traditional homes have been demolished to make way for modern buildings, there are still around 2,000 havelis in the Shekhawati region of Rajasthan that display wall art paintings.
In ancient Egypt, "soul houses" (or models of houses) were placed in tombs as receptacles for food offerings. From these, it is possible to discern details about the interior design of different residences throughout the different Egyptian dynasties, such as changes in ventilation, porticoes, columns, loggias, windows, and doors.
Painting interior walls has existed for at least 5,000 years, with examples found as far north as the Ness of Brodgar, as have templated interiors, as seen in the associated Skara Brae settlement. It was the Greeks, and later Romans who added co-ordinated, decorative mosaics floors, and templated bath houses, shops, civil offices, Castra (forts) and temple, interiors, in the first millennia BC. With specialised guilds dedicated to producing interior decoration, and formulaic furniture, in buildings constructed to forms defined by Roman architects, such as Vitruvius: De architectura, libri decem (The Ten Books on Architecture).
Throughout the 17th and 18th century and into the early 19th century, interior decoration was the concern of the homemaker, or an employed upholsterer or craftsman who would advise on the artistic style for an interior space. Architects would also employ craftsmen or artisans to complete interior design for their buildings.
Commercial interior design and management
In the mid-to-late 19th century, interior design services expanded greatly, as the middle class in industrial countries grew in size and prosperity and began to desire the domestic trappings of wealth to cement their new status. Large furniture firms began to branch out into general interior design and management, offering full house furnishings in a variety of styles. This business model flourished from the mid-century to 1914, when this role was increasingly usurped by independent, often amateur, designers. This paved the way for the emergence of the professional interior design in the mid-20th century.
In the 1950s and 1960s, upholsterers began to expand their business remits. They framed their business more broadly and in artistic terms and began to advertise their furnishings to the public. To meet the growing demand for contract interior work on projects such as offices, hotels, and public buildings, these businesses became much larger and more complex, employing builders, joiners, plasterers, textile designers, artists, and furniture designers, as well as engineers and technicians to fulfil the job. Firms began to publish and circulate catalogs with prints for different lavish styles to attract the attention of expanding middle classes.
As department stores increased in number and size, retail spaces within shops were furnished in different styles as examples for customers. One particularly effective advertising tool was to set up model rooms at national and international exhibitions in showrooms for the public to see. Some of the pioneering firms in this regard were Waring & Gillow, James Shoolbred, Mintons, and Holland & Sons. These traditional high-quality furniture making firms began to play an important role as advisers to unsure middle class customers on taste and style, and began taking out contracts to design and furnish the interiors of many important buildings in Britain.
This type of firm emerged in America after the Civil War. The Herter Brothers, founded by two German émigré brothers, began as an upholstery warehouse and became one of the first firms of furniture makers and interior decorators. With their own design office and cabinet-making and upholstery workshops, Herter Brothers were prepared to accomplish every aspect of interior furnishing including decorative paneling and mantels, wall and ceiling decoration, patterned floors, and carpets and draperies.
A pivotal figure in popularizing theories of interior design to the middle class was the architect Owen Jones, one of the most influential design theorists of the nineteenth century. Jones' first project was his most important: in 1851, he was responsible for not only the decoration of Joseph Paxton's gigantic Crystal Palace for the Great Exhibition but also the arrangement of the exhibits within. He chose a controversial palette of red, yellow, and blue for the interior ironwork and, despite initial negative publicity in the newspapers, the scheme was eventually unveiled by Queen Victoria to much critical acclaim. His most significant publication was The Grammar of Ornament (1856), in which Jones formulated 37 key principles of interior design and decoration.
Jones was employed by some of the leading interior design firms of the day; in the 1860s, he worked in collaboration with the London firm Jackson & Graham to produce furniture and other fittings for high-profile clients including art collector Alfred Morrison as well as Ismail Pasha, Khedive of Egypt.
In 1882, the London Directory of the Post Office listed 80 interior decorators. Some of the most distinguished companies of the period were Crace, Waring & Gillow, and Holland & Sons; famous decorators employed by these firms included Thomas Edward Collcutt, Edward William Godwin, Charles Barry, Gottfried Semper, and George Edmund Street.
Transition to professional interior design
By the turn of the 20th century, amateur advisors and publications were increasingly challenging the monopoly that the large retail companies had on interior design. English feminist author Mary Haweis wrote a series of widely read essays in the 1880s in which she derided the eagerness with which aspiring middle-class people furnished their houses according to the rigid models offered to them by the retailers. She advocated the individual adoption of a particular style, tailor-made to the individual needs and preferences of the customer:One of my strongest convictions, and one of the first canons of good taste, is that our houses, like the fish's shell and the bird's nest, ought to represent our individual taste and habits.
The move toward decoration as a separate artistic profession, unrelated to the manufacturers and retailers, received an impetus with the 1899 formation of the Institute of British Decorators; with John Dibblee Crace as its president, it represented almost 200 decorators around the country. By 1915, the London Directory listed 127 individuals trading as interior decorators, of which 10 were women. Rhoda Garrett and Agnes Garrett were the first women to train professionally as home decorators in 1874. The importance of their work on design was regarded at the time as on a par with that of William Morris. In 1876, their work – Suggestions for House Decoration in Painting, Woodwork and Furniture – spread their ideas on artistic interior design to a wide middle-class audience.
By 1900, the situation was described by The Illustrated Carpenter and Builder:Until recently when a man wanted to furnish he would visit all the dealers and select piece by piece of furniture ....Today he sends for a dealer in art furnishings and fittings who surveys all the rooms in the house and he brings his artistic mind to bear on the subject.In America, Candace Wheeler was one of the first woman interior designers and helped encourage a new style of American design. She was instrumental in the development of art courses for women in a number of major American cities and was considered a national authority on home design. An important influence on the new profession was The Decoration of Houses, a manual of interior design written by Edith Wharton with architect Ogden Codman in 1897 in America. In the book, the authors denounced Victorian-style interior decoration and interior design, especially those rooms that were decorated with heavy window curtains, Victorian bric-a-brac, and overstuffed furniture. They argued that such rooms emphasized upholstery at the expense of proper space planning and architectural design and were, therefore, uncomfortable and rarely used. The book is considered a seminal work, and its success led to the emergence of professional decorators working in the manner advocated by its authors, most notably Elsie de Wolfe.
Elsie De Wolfe was one of the first interior designers. Rejecting the Victorian style she grew up with, she chose a more vibrant scheme, along with more comfortable furniture in the home. Her designs were light, with fresh colors and delicate Chinoiserie furnishings, as opposed to the Victorian preference of heavy, red drapes and upholstery, dark wood and intensely patterned wallpapers. Her designs were also more practical; she eliminated the clutter that occupied the Victorian home, enabling people to entertain more guests comfortably. In 1905, de Wolfe was commissioned for the interior design of the Colony Club on Madison Avenue; its interiors garnered her recognition almost over night. She compiled her ideas into her widely read 1913 book, The House in Good Taste.
In England, Syrie Maugham became a legendary interior designer credited with designing the first all-white room. Starting her career in the early 1910s, her international reputation soon grew; she later expanded her business to New York City and Chicago. Born during the Victorian Era, a time characterized by dark colors and small spaces, she instead designed rooms filled with light and furnished in multiple shades of white and mirrored screens. In addition to mirrored screens, her trademark pieces included: books covered in white vellum, cutlery with white porcelain handles, console tables with plaster palm-frond, shell, or dolphin bases, upholstered and fringed sleigh beds, fur carpets, dining chairs covered in white leather, and lamps of graduated glass balls, and wreaths.
Expansion
The interior design profession became more established after World War II. From the 1950s onwards, spending on the home increased. Interior design courses were established, requiring the publication of textbooks and reference sources. Historical accounts of interior designers and firms distinct from the decorative arts specialists were made available. Organisations to regulate education, qualifications, standards and practices, etc. were established for the profession.
Interior design was previously seen as playing a secondary role to architecture. It also has many connections to other design disciplines, involving the work of architects, industrial designers, engineers, builders, craftsmen, etc. For these reasons, the government of interior design standards and qualifications was often incorporated into other professional organisations that involved design. Organisations such as the Chartered Society of Designers, established in the UK in 1986, and the American Designers Institute, founded in 1938, governed various areas of design.
It was not until later that specific representation for the interior design profession was developed. The US National Society of Interior Designers was established in 1957, while in the UK the Interior Decorators and Designers Association was established in 1966. Across Europe, other organisations such as The Finnish Association of Interior Architects (1949) were being established and in 1994 the International Interior Design Association was founded.
Ellen Mazur Thomson, author of Origins of Graphic Design in America (1997), determined that professional status is achieved through education, self-imposed standards and professional gate-keeping organizations. Having achieved this, interior design became an accepted profession.
Interior decorators and interior designers
Interior design is the art and science of understanding people's behavior to create functional spaces, that are aesthetically pleasing, within a building. Decoration is the furnishing or adorning of a space with decorative elements, sometimes complemented by advice and practical assistance. In short, interior designers may decorate, but decorators do not design.
Interior designer
Interior designer implies that there is more of an emphasis on planning, functional design and the effective use of space, as compared to interior decorating. An interior designer can undertake projects that include arranging the basic layout of spaces within a building as well as projects that require an understanding of technical issues such as window and door positioning, acoustics, and lighting. Although an interior designer may create the layout of a space, they may not alter load-bearing walls without having their designs stamped for approval by a structural engineer. Interior designers often work directly with architects, engineers and contractors.
Interior designers must be highly skilled in order to create interior environments that are functional, safe, and adhere to building codes, regulations and ADA requirements. They go beyond the selection of color palettes and furnishings and apply their knowledge to the development of construction documents, occupancy loads, healthcare regulations and sustainable design principles, as well as the management and coordination of professional services including mechanical, electrical, plumbing, and life safety—all to ensure that people can live, learn or work in an innocuous environment that is also aesthetically pleasing.
Someone may wish to specialize and develop technical knowledge specific to one area or type of interior design, such as residential design, commercial design, hospitality design, healthcare design, universal design, exhibition design, furniture design, and spatial branding.
Interior design is a creative profession that is relatively new, constantly evolving, and often confusing to the public. It is not always an artistic pursuit and can rely on research from many fields to provide a well-trained understanding of how people are often influenced by their environments.
Color in interior design
Color is a powerful design tool in decoration, as well as in interior design, which is the art of composing and coordinating colors together to create a stylish scheme on the interior architecture of the space.
It is important for interior designers to acquire deep experience with colors, to understand their psychological effects, and to understand the meaning of each color in different locations and situations in order to create suitable combinations for each place.
Combining colors can create a particular state of mind in the observer and ultimately have positive or negative effects on them. Colors can make a room feel calm, cheerful, comfortable, stressful, or dramatic, and color combinations can make a tiny room seem larger or smaller. It is therefore up to the interior designer to choose colors that achieve the way clients want a space to look and feel.
In 2024, red-colored home accessories were popularized on social media and in several design magazines for claiming to enhance interior design. This was coined the Unexpected Red Theory.
Specialties
Residential
Residential design is the design of the interior of private residences. As this type of design is specific for individual situations, the needs and wants of the individual are paramount in this area of interior design. The interior designer may work on the project from the initial planning stage or may work on the remodeling of an existing structure. It is often a process that takes months to fine-tune and create a space with the vision of the client.
Commercial
Commercial design encompasses a wide range of subspecialties.
Retail: includes malls and shopping centers, department stores, specialty stores, visual merchandising, and showrooms.
Visual and spatial branding: The use of space as a medium to express a corporate brand.
Corporate: office design for any kind of business such as banks.
Healthcare: the design of hospitals, assisted living facilities, medical offices, dentist offices, psychiatric facilities, laboratories, medical specialist facilities.
Hospitality and recreation: includes hotels, motels, resorts, cruise ships, cafes, bars, casinos, nightclubs, theaters, music and concert halls, opera houses, sports venues, restaurants, gyms, health clubs and spas, etc.
Institutional: government offices, financial institutions (banks and credit unions), schools and universities, religious facilities, etc.
Industrial facilities: manufacturing and training facilities as well as import and export facilities.
Exhibition: includes museums, gallery, exhibition hall, specially the design for showroom and exhibition gallery.
Traffic building: includes bus station, subway station, airports, pier, etc.
Sports: includes gyms, stadiums, swimming rooms, basketball halls, etc.
Teaching in a private institute that offers classes in interior design.
Self-employment.
Employment in private sector firms.
Other
Other areas of specialization include amusement and theme park design, museum and exhibition design, exhibit design, event design (including ceremonies, weddings, baby and bridal showers, parties, conventions, and concerts), interior and prop styling, craft styling, food styling, product styling, tablescape design, theatre and performance design, stage and set design, scenic design, and production design for film and television. Beyond those, interior designers, particularly those with graduate education, can specialize in healthcare design, gerontological design, educational facility design, and other areas that require specialized knowledge. Some university programs offer graduate studies in these and other areas. For example, both Cornell University and the University of Florida offer interior design graduate programs in environment and behavior studies.
Profession
Education
There are various paths that one can take to become a professional interior designer. All of these paths involve some form of training. Working with a successful professional designer is an informal method of training and has previously been the most common method of education. In many states, however, this path alone cannot lead to licensing as a professional interior designer. Training through an institution such as a college, art or design school or university is a more formal route to professional practice.
In many countries, several university degree courses are now available, including those on interior architecture, taking three or four years to complete.
A formal education program, particularly one accredited by or developed with a professional organization of interior designers, can provide training that meets a minimum standard of excellence and therefore gives a student an education of a high standard. There are also university graduate and Ph.D. programs available for those seeking further training in a specific design specialization (i.e. gerontological or healthcare design) or those wishing to teach interior design at the university level.
Working conditions
There is a wide range of working conditions and employment opportunities within interior design. Large and small corporations often hire interior designers as employees on regular working hours. Designers for smaller firms and online renovation platforms usually work on a contract or per-job basis. Self-employed designers, who made up 32% of interior designers in 2020, usually work the most hours. Interior designers often work under stress to meet deadlines, stay on budget, and meet clients' needs and wishes.
In some cases, licensed professionals review the work and sign it before submitting the design for approval by clients or construction permitting. The need for licensed review and signature varies by locality, relevant legislation, and scope of work. Their work can involve significant travel to visit different locations. However, with technology development, the process of contacting clients and communicating design alternatives has become easier and requires less travel.
Styles
Art Deco
The Art Deco style began in Europe in the early years of the 20th century, with the waning of Art Nouveau. The term "Art Deco" was taken from the Exposition Internationale des Arts Decoratifs et Industriels Modernes, a world's fair held in Paris in 1925. Art Deco rejected many traditional classical influences in favour of more streamlined geometric forms and metallic color. The Art Deco style influenced all areas of design, especially interior design, because it was the first style of interior decoration to spotlight new technologies and materials.
Art Deco style is mainly based on geometric shapes, streamlining, and clean lines. The style offered a sharp, cool look of mechanized living utterly at odds with anything that came before.
Art Deco rejected traditional materials of decoration and interior design, opting instead to use more unusual materials such as chrome, glass, stainless steel, shiny fabrics, mirrors, aluminium, lacquer, inlaid wood, sharkskin, and zebra skin. The use of harder, metallic materials was chosen to celebrate the machine age. These materials reflected the dawning modern age that was ushered in after the end of the First World War. The innovative combinations of these materials created contrasts that were very popular at the time – for example the mixing together of highly polished wood and black lacquer with satin and furs. The barber shop in the Austin Reed store in London was designed by P. J. Westwood. It was soon regarded as the trendiest barber shop in Britain due to its use of metallic materials.
The color themes of Art Deco consisted of metallic color, neutral color, bright color, and black and white. In interior design, cool metallic colors including silver, gold, metallic blue, charcoal grey, and platinum tended to predominate. Serge Chermayeff, a Russian-born British designer made extensive use of cool metallic colors and luxurious surfaces in his room schemes. His 1930 showroom design for a British dressmaking firm had a silver-grey background and black mirrored-glass wall panels.
Black and white was also a very popular color scheme during the 1920s and 1930s. Black and white checkerboard tiles, floors and wallpapers were very trendy at the time. As the style developed, bright vibrant colors became popular as well.
Art Deco furnishings and lighting fixtures had a glossy, luxurious appearance with the use of inlaid wood and reflective finishes. The furniture pieces often had curved edges, geometric shapes, and clean lines. Art Deco lighting fixtures tended to make use of stacked geometric patterns.
Modern art
Modern design grew out of the decorative arts, mostly from the Art Deco, in the early 20th century. One of the first to introduce this modernist style was Frank Lloyd Wright, who had not become hugely popularized until completing the house called Fallingwater in the 1930s. Modern art reached its peak during the 1950s and '60s, which is why designers and decorators today may refer to modern design as being "mid-century". Modern art does not refer to the era or age of design and is not the same as contemporary design, a term used by interior designers for a shifting group of recent styles and trends.
Arab materials
"Majlis painting", also called nagash painting, is the decoration of the majlis, or front parlor of traditional Arabic homes, in the Asir province of Saudi Arabia and adjoining parts of Yemen. These wall paintings, an arabesque form of mural or fresco, show various geometric designs in bright colors: "Called 'nagash' in Arabic, the wall paintings were a mark of pride for a woman in her house."
The geometric designs and heavy lines seem to be adapted from the area's textile and weaving patterns. "In contrast with the sobriety of architecture and decoration in the rest of Arabia, exuberant color and ornamentation characterize those of Asir. The painting extends into the house over the walls and doors, up the staircases, and onto the furniture itself. When a house is being painted, women from the community help each other finish the job. The building then displays their shared taste and knowledge. Mothers pass these on to their daughters. This artwork is based on a geometry of straight lines and suggests the patterns common to textile weaving, with solid bands of different colors. Certain motifs reappear, such as the triangular mihrab or 'niche' and the palmette. In the past, paint was produced from mineral and vegetable pigments. Cloves and alfalfa yielded green. Blue came from the indigo plant. Red came from pomegranates and a certain mud. Paintbrushes were created from the tough hair found in a goat's tail. Today, however, women use modern manufactured paint to create new looks, which have become an indicator of social and economic change."
Women in the Asir province often complete the decoration and painting of the house interior. "You could tell a family's wealth by the paintings," Um Abdullah says: "If they didn't have much money, the wife could only paint the motholath, the basic straight, simple lines, in patterns of three to six repetitions in red, green, yellow and brown." When women did not want to paint the walls themselves, they could barter with other women who would do the work. Several Saudi women have become famous as majlis painters, such as Fatima Abou Gahas.
The interior walls of the home are brightly painted by the women, who work in defined patterns with lines, triangles, squares, diagonals and tree-like patterns. "Some of the large triangles represent mountains. Zigzag lines stand for water and also for lightning. Small triangles, especially when the widest area is at the top, are found in pre-Islamic representations of female figures. That the small triangles found in the wall paintings in 'Asir are called banat may be a cultural remnant of a long-forgotten past."
"Courtyards and upper pillared porticoes are principal features of the best Nadjdi architecture, in addition to the fine incised plaster wood (jiss) and painted window shutters, which decorate the reception rooms. Good examples of plasterwork can often be seen in the gaping ruins of torn-down buildings- the effect is light, delicate and airy. It is usually around the majlis, around the coffee hearth and along the walls above where guests sat on rugs, against cushions. Doughty wondered if this "parquetting of jis", this "gypsum fretwork... all adorning and unenclosed" originated from India. However, the Najd fretwork seems very different from that seen in the Eastern Province and Oman, which are linked to Indian traditions, and rather resembles the motifs and patterns found in ancient Mesopotamia. The rosette, the star, the triangle and the stepped pinnacle pattern of dadoes are all ancient patterns, and can be found all over the Middle East of antiquity. Al-Qassim Province seems to be the home of this art, and there it is normally worked in hard white plaster (though what you see is usually begrimed by the smoke of the coffee hearth). In Riyadh, examples can be seen in unadorned clay.
Media popularization
Interior design has become the subject of television shows. In the United Kingdom, popular interior design and decorating programs include 60 Minute Makeover (ITV), Changing Rooms (BBC), and Selling Houses (Channel 4). Famous interior designers whose work is featured in these programs include Linda Barker and Laurence Llewelyn-Bowen. In the United States, the TLC Network aired a popular program called Trading Spaces, a show based on the UK program Changing Rooms. In addition, both HGTV and the DIY Network also televise many programs about interior design and decorating, featuring the works of a variety of interior designers, decorators, and home improvement experts in a myriad of projects.
Fictional interior decorators include the Sugarbaker sisters on Designing Women and Grace Adler on Will & Grace. There is also a show called Home MADE, in which two teams each design a house, and whoever has designed and made the worst room, according to the judges, is eliminated. Another show on the Style Network, hosted by Niecy Nash, is Clean House, where messy homes are made over into themed rooms that the clients would like. Other shows include Design on a Dime, Designed to Sell, and The Decorating Adventures of Ambrose Price. Design Star has become more popular through the five seasons that have already aired. The winners of this show end up getting their own TV shows, which include Color Splash hosted by David Bromstad, Myles of Style hosted by Kim Myles, Paint-Over! hosted by Jennifer Bertrand, The Antonio Treatment hosted by Antonio Ballatore, and finally Secrets from a Stylist hosted by Emily Henderson. Bravo also has a variety of shows that explore the lives of interior designers. These include Flipping Out, which explores the life of Jeff Lewis and his team of designers, and Million Dollar Decorators, which explores the lives of interior designers Nathan Turner, Jeffrey Alan Marks, Mary McDonald, Kathryn Ireland, and Martyn Lawrence Bullard.
Interior design has also become the subject of radio shows. In the U.S., popular interior design & lifestyle shows include Martha Stewart Living and Living Large featuring Karen Mills. Famous interior designers whose work is featured on these programs include Bunny Williams, Barbara Barry, and Kathy Ireland, among others.
Many interior design magazines exist to offer advice regarding color palette, furniture, art, and other elements that fall under the umbrella of interior design. These magazines often focus on related subjects to draw a more specific audience. For instance, architecture is a primary focus of Dwell, while Veranda is well known as a luxury living magazine. Lonny Magazine and the newly relaunched Domino Magazine cater to a young, hip, metropolitan audience and emphasize accessibility and a do-it-yourself (DIY) approach to interior design.
Gallery
Notable interior decorators
Other early interior decorators:
Sibyl Colefax
Dorothy Draper
Pierre François Léonard Fontaine
Syrie Maugham
Margery Hoffman Smith
Elsie de Wolfe
Arthur Stannard Vernay
Frank Lloyd Wright
Many of the most famous designers and decorators during the 20th century had no formal training. Some examples include Sister Parish, Robert Denning and Vincent Fourcade, Kerry Joyce, Kelly Wearstler, Stéphane Boudin, Georges Geffroy, Emilio Terry, Carlos de Beistegui, Nina Petronzio, Lorenzo Mongiardino, Mary Jean Thompson and David Nightingale Hicks.
Notable interior designers in the world today include Scott Salvator, Troy Adams, Jonathan Adler, Michael S. Smith, Martin Brudnizki, Mary Douglas Drysdale, Kelly Hoppen, Kelly Wearstler, Nina Campbell, David Collins, Nate Berkus, Sandra Espinet, Jo Hamilton and Nicky Haslam.
See also
1960s decor
American Society of Interior Designers
Blueprint
British Institute of Interior Design
Chartered Society of Designers
Environmental psychology
Experiential interior design
Fuzzy architectural spatial analysis
Interior architecture
Interior design psychology
Interior design regulation in the United States
Japanese interior design
Primitive decorating
Wall decals
Window treatment
References
External links
Candace Wheeler: The Art and Enterprise of American Design, 1875–1900, a full text exhibition catalog from The Metropolitan Museum of Art, which includes a great deal of content about early interior design
Architectural design
Decorative arts
Home economics | Interior design | [
"Engineering"
] | 6,356 | [
"Design",
"Architectural design",
"Architecture"
] |
573,343 | https://en.wikipedia.org/wiki/Refractory%20metals | Refractory metals are a class of metals that are extraordinarily resistant to heat and wear. The expression is mostly used in the context of materials science, metallurgy and engineering. The definition of which elements belong to this group differs. The most common definition includes five elements: two of the fifth period (niobium and molybdenum) and three of the sixth period (tantalum, tungsten, and rhenium). They all share some properties, including a melting point above 2000 °C and high hardness at room temperature. They are chemically inert and have a relatively high density. Their high melting points make powder metallurgy the method of choice for fabricating components from these metals. Some of their applications include tools to work metals at high temperatures, wire filaments, casting molds, and chemical reaction vessels in corrosive environments. Partly due to the high melting point, refractory metals are stable against creep deformation to very high temperatures.
Definition
Most definitions of the term 'refractory metals' list an extraordinarily high melting point as the key requirement for inclusion, though the exact threshold varies between definitions. One broader definition includes iridium, osmium, niobium, molybdenum, tantalum, tungsten, rhenium, rhodium, ruthenium and hafnium. The five elements niobium, molybdenum, tantalum, tungsten and rhenium are included in all definitions, while the widest definitions also admit elements with somewhat lower melting points, such as titanium, vanadium, zirconium, and chromium. Technetium is not included because of its radioactivity, though it would otherwise have qualified under the widest definition.
Properties
Physical
Refractory metals have high melting points: tungsten and rhenium have the highest of all elements, and the melting points of the others are exceeded only by those of osmium and iridium and by the sublimation point of carbon. These high melting points define most of their applications. All of these metals are body-centered cubic except rhenium, which is hexagonal close-packed. The physical properties of the refractory elements vary significantly because they are members of different groups of the periodic table. The hardness, high melting and boiling points, and high enthalpies of atomization of these metals arise from the partial occupation of the outer d subshell, allowing the d electrons to participate in metallic bonding. This gives stiff, highly stable bonds to neighboring atoms and a body-centered cubic crystal structure that resists deformation. Moving to the right in the periodic table, additional d electrons strengthen this effect, but as the d subshell fills they are pulled by the higher nuclear charge into the atom's inert core, reducing their ability to delocalize and form bonds with neighbors. These opposing effects result in groups 5 through 7 exhibiting the most refractory properties.
Creep resistance is a key property of the refractory metals. In metals, the starting of creep correlates with the melting point of the material; the creep in aluminium alloys starts at 200 °C, while for refractory metals temperatures above 1500 °C are necessary. This resistance against deformation at high temperatures makes the refractory metals suitable against strong forces at high temperature, for example in jet engines, or tools used during forging.
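This correlation is often expressed through the homologous temperature, a standard materials-science rule of thumb that is not stated explicitly above: creep typically becomes significant when

T_h = \frac{T}{T_m} \gtrsim 0.3\text{–}0.5,

where T is the service temperature and T_m the melting point, both on an absolute scale. The aluminium-alloy and refractory-metal figures quoted above are broadly consistent with this range, and the very high T_m of refractory metals pushes the onset of creep to correspondingly higher service temperatures.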
Chemical
The refractory metals show a wide variety of chemical properties because they are members of three distinct groups in the periodic table. They are easily oxidized, but this reaction is slowed down in the bulk metal by the formation of stable oxide layers on the surface (passivation). The oxide of rhenium, however, is more volatile than the metal itself, so at high temperature the protection against attack by oxygen is lost because the oxide layer evaporates. They are all relatively stable against acids.
Applications
Refractory metals, and alloys made from them, are used in lighting, tools, lubricants, nuclear reaction control rods, as catalysts, and for their chemical or electrical properties. Because of their high melting point, refractory metal components are never fabricated by casting. The process of powder metallurgy is used. Powders of the pure metal are compacted, heated using electric current, and further fabricated by cold working with annealing steps. Refractory metals and their alloys can be worked into wire, ingots, rebars, sheets or foil.
Molybdenum alloys
Molybdenum-based alloys are widely used because they are cheaper than the superior tungsten alloys. The most widely used alloy of molybdenum is the Titanium-Zirconium-Molybdenum alloy TZM, composed of 0.5% titanium and 0.08% zirconium, with the balance molybdenum. The alloy exhibits higher creep resistance and strength at high temperatures, making service temperatures above 1060 °C possible. The high resistance of Mo-30W, an alloy of 70% molybdenum and 30% tungsten, to attack by molten zinc makes it an ideal material for casting zinc. It is also used to construct valves for molten zinc.
Molybdenum is used in mercury wetted reed relays, because molybdenum does not form amalgams and is therefore resistant to corrosion by liquid mercury.
Molybdenum is the most commonly used of the refractory metals. Its most important use is as a strengthening alloy of steel. Structural tubing and piping often contains molybdenum, as do many stainless steels. Its strength at high temperatures, resistance to wear and low coefficient of friction are all properties which make it invaluable as an alloying compound. Its excellent anti-friction properties lead to its incorporation in greases and oils where reliability and performance are critical. Automotive constant-velocity joints use grease containing molybdenum. The compound sticks readily to metal and forms a very hard, friction-resistant coating. Most of the world's molybdenum ore can be found in China, the USA, Chile and Canada.
Tungsten and its alloys
Tungsten was discovered in 1781 by the Swedish chemist Carl Wilhelm Scheele. Tungsten has the highest melting point of all metals, at 3,422 °C (6,192 °F).
Up to 22% rhenium is alloyed with tungsten to improve its high-temperature strength and corrosion resistance. Thorium is used as an alloying addition when electric arcs have to be established: ignition is easier and the arc burns more stably than without it. For powder metallurgy applications, binders have to be used in the sintering process. For the production of tungsten heavy alloys, binder mixtures of nickel and iron or nickel and copper are widely used. The tungsten content of these alloys is normally above 90%. Diffusion of the binder elements into the tungsten grains is low even at sintering temperatures, so the interiors of the grains remain pure tungsten.
Tungsten and its alloys are often used in applications where high temperatures are present but high strength is still necessary and the high density is not troublesome. Tungsten wire filaments provide the vast majority of household incandescent lighting and are also common in industrial lighting as electrodes in arc lamps. Lamps convert electric energy to light more efficiently at higher filament temperatures, so a high melting point is essential for use as an incandescent filament. Gas tungsten arc welding (GTAW, also known as tungsten inert gas (TIG) welding) equipment uses a permanent, non-melting electrode; the high melting point and wear resistance against the electric arc make tungsten a suitable electrode material.
Tungsten's high density and strength are also key properties for its use in weapon projectiles, for example as an alternative to depleted uranium for tank gun rounds. Its high melting point makes tungsten a good material for applications like rocket nozzles, for example in the UGM-27 Polaris. Some of the applications of tungsten are not related to its refractory properties but simply to its density. For example, it is used in balance weights for planes and helicopters or for heads of golf clubs. In these applications, similarly dense materials such as the more expensive osmium can also be used.
The most common use for tungsten is as the compound tungsten carbide in drill bits, machining and cutting tools. The largest reserves of tungsten are in China, with deposits in Korea, Bolivia, Australia, and other countries.
It also finds itself serving as a lubricant, antioxidant, in nozzles and bushings, as a protective coating and in many other ways. Tungsten can be found in printing inks, x-ray screens, in the processing of petroleum products, and flame proofing of textiles.
Niobium alloys
Niobium is nearly always found together with tantalum, and was named after Niobe, the daughter of the mythical Greek king Tantalus for whom tantalum was named. Niobium has many uses, some of which it shares with other refractory metals. It is unique in that it can be worked through annealing to achieve a wide range of strength and ductility, and is the least dense of the refractory metals. It can also be found in electrolytic capacitors and in the most practical superconducting alloys. Niobium can be found in aircraft gas turbines, vacuum tubes and nuclear reactors.
An alloy used for liquid rocket thruster nozzles, such as in the main engine of the Apollo Lunar Modules, is C103, which consists of 89% niobium, 10% hafnium and 1% titanium. Another niobium alloy was used for the nozzle of the Apollo Service Module. As niobium is oxidized at temperatures above 400 °C, a protective coating is necessary for these applications to prevent the alloy from becoming brittle.
Tantalum and its alloys
Tantalum is one of the most corrosion-resistant substances available.
Many important uses have been found for tantalum owing to this property, particularly in the medical and surgical fields, and also in harsh acidic environments. It is also used to make superior electrolytic capacitors. Tantalum films provide the second most capacitance per volume of any substance after Aerogel, and allow miniaturization of electronic components and circuitry. Many cellular phones and computers contain tantalum capacitors.
Rhenium alloys
Rhenium is the most recently discovered refractory metal. It is found in low concentrations with many other metals, in the ores of other refractory metals, platinum or copper ores. It is useful as an alloy to other refractory metals, where it adds ductility and tensile strength. Rhenium alloys are being used in electronic components, gyroscopes and nuclear reactors. Rhenium finds its most important use as a catalyst. It is used as a catalyst in reactions such as alkylation, dealkylation, hydrogenation and oxidation. However its rarity makes it the most expensive of the refractory metals.
Advantages and shortfalls
The strength and high-temperature stability of refractory metals make them suitable for hot metalworking applications and for vacuum furnace technology. Many special applications exploit these properties: for example, tungsten lamp filaments operate at temperatures up to 3073 K, and molybdenum furnace windings withstand 2273 K.
However, poor low-temperature fabricability and extreme oxidability at high temperatures are shortcomings of most refractory metals. Interactions with the environment can significantly influence their high-temperature creep strength. Application of these metals requires a protective atmosphere or coating.
Refractory metal alloys of molybdenum, niobium, tantalum, and tungsten have been applied to space nuclear power systems, which were designed to operate at temperatures from 1350 K to approximately 1900 K. The environment must not interact with the material in question; liquid alkali metals are used as the heat-transfer fluids, and the systems operate in ultra-high vacuum.
For these alloys to be usable, their high-temperature creep strain must be limited; it should not exceed 1–2%. An additional complication in studying the creep behavior of refractory metals is interaction with the environment, which can significantly influence it.
See also
Refractory – heat resistance of nonmetallic materials
References
Further reading
Metals
Metallurgy
Metals, Refractory | Refractory metals | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,616 | [
"Metals",
"Metallurgy",
"Refractory materials",
"Materials science",
"Refractory metals",
"Materials",
"Alloys",
"nan",
"Matter"
] |
573,489 | https://en.wikipedia.org/wiki/C3%20carbon%20fixation |
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which causes a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the CO2 concentration in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete C3 plants in these areas.
The isotopic signature of C3 plants shows a higher degree of 13C depletion than that of C4 plants, due to variation in the fractionation of carbon isotopes in oxygenic photosynthesis across plant types. Specifically, C3 plants do not have PEP carboxylase like C4 plants, so they can only use ribulose-1,5-bisphosphate carboxylase (RuBisCO) to fix CO2 through the Calvin cycle. RuBisCO discriminates between carbon isotopes, preferentially binding the lighter 12C isotope over 13C, which contributes to the greater 13C depletion seen in C3 plants compared to C4 plants, especially since the C4 pathway uses PEP carboxylase in addition to RuBisCO.
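For reference, the 13C depletion referred to here is normally reported as a δ13C value; the standard definition (not given in the text above, but conventional in isotope measurements) is

\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰},

where the standard is conventionally Vienna Pee Dee Belemnite (VPDB). Under this convention, C3 plants show more negative δ13C values than C4 plants.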
Variations
Not all C3 carbon fixation pathways operate at the same efficiency.
Refixation
Bamboos and the related rice have an improved C3 efficiency. This improvement might be due to its ability to recapture CO2 produced during photorespiration, a behavior termed "carbon refixation". These plants achieve refixation by growing chloroplast extensions called "stromules" around the stroma in mesophyll cells, so that any photorespired CO2 from the mitochondria has to pass through the RuBisCO-filled chloroplast.
Refixation is also performed by a wide variety of plants. The common approach involving growing a bigger bundle sheath leads down to C2 photosynthesis.
Synthetic glycolate pathway
C3 carbon fixation is prone to photorespiration (PR) during dehydration, accumulating toxic glycolate products. In the 2000s scientists used computer simulation combined with an optimization algorithm to figure out what parts of the metabolic pathway may be tuned to improve photosynthesis. According to simulation, improving glycolate metabolism would help significantly to reduce photorespiration.
Instead of optimizing specific enzymes on the PR pathway for glycolate degradation, South et al. decided to bypass PR altogether. In 2019, they transferred Chlamydomonas reinhardtii glycolate dehydrogenase and Cucurbita maxima malate synthase into the chloroplast of tobacco (a model organism). These enzymes, plus the chloroplast's own, create a catabolic cycle: acetyl-CoA combines with glyoxylate to form malate, which is then split into pyruvate and CO2; the former in turn splits into acetyl-CoA and CO2. By forgoing all transport among organelles, all the CO2 released will go into increasing the CO2 concentration in the chloroplast, helping with refixation. The end result is 24% more biomass. An alternative using E. coli glycerate pathway produced a smaller improvement of 13%. They are now working on moving this optimization into other crops like wheat.
References
Photosynthesis
Metabolic pathways
Carbon | C3 carbon fixation | [
"Chemistry",
"Biology"
] | 1,056 | [
"Metabolic pathways",
"Biochemistry",
"Metabolism",
"Photosynthesis"
] |
573,528 | https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle | In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
"Software development organization follows some process when developing a Software product in mature organization this is well defined and managed. In Software development life cycle, we develop Software in a Systematic and disciplined manner."
Overview
A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize.
SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations.
In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology, every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer.
History
According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".
The structured systems analysis and design method (SSADM) was produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
Models
SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one. Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined or may overlap.
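As a rough illustration (not taken from any cited methodology; the phase names follow the list given earlier and the handlers are invented), the sequential hand-off between phases can be sketched as follows:

```python
# A rough sketch of sequential SDLC phases, where each phase consumes the
# deliverable produced by the previous one. Handlers are invented examples.

PHASES = ["planning", "analysis", "design", "build", "test", "implement", "maintenance"]

def run_sdlc(handlers):
    """Run the phases in order, passing each phase's output to the next."""
    deliverable = None
    for phase in PHASES:
        handler = handlers.get(phase)
        if handler is None:
            continue  # smaller projects may combine or skip phases
        deliverable = handler(deliverable)
        print(f"completed {phase}: {deliverable}")
    return deliverable

if __name__ == "__main__":
    run_sdlc({
        "planning": lambda _: "project charter",
        "analysis": lambda prev: f"requirements derived from the {prev}",
        "design": lambda prev: f"design documents based on the {prev}",
    })
```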
Waterfall
The oldest and best known is the waterfall model, which uses a linear sequence of steps. Waterfall has different varieties. One variety is as follows:
Preliminary analysis
Conduct a preliminary analysis, consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations.
Conduct preliminary analysis: Identify the organization's objectives and define the nature and scope of the project. Ensure that the project fits with the objectives.
Consider alternative solutions: Alternatives may come from interviewing employees, clients, suppliers, and consultants, as well as competitive analysis.
Cost-benefit analysis: Analyze the costs and benefits of the project.
Systems analysis, requirements definition
Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness:
Collect facts: Obtain end-user requirements by document review, client interviews, observation, and questionnaires.
Scrutinize existing system(s): Identify pros and cons.
Analyze the proposed system: Find solutions to issues and prepare specifications, incorporating appropriate user proposals.
Systems design
At this step, desired features and operations are detailed, including screen layouts, business rules, process diagrams, pseudocode, and other deliverables.
Development
Write the code.
Integration and testing
Assemble the modules in a testing environment. Check for errors, bugs, and interoperability.
Acceptance, installation, deployment
Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system.
Maintenance
Monitor the system to assess its ongoing fitness. Make modest changes and fixes as needed to maintain the quality of the system. Continual monitoring and updates ensure the system remains effective and high-quality.
Evaluation
The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance.
Disposal
At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security.
In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products:
Systems analysis and design
Systems analysis and design (SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise I.T. Architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
Object-oriented analysis and design
Object-oriented analysis and design (OOAD) is the process of analyzing a problem domain to develop a conceptual model that can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders.
The conceptual model that results from OOAD typically consists of use cases, and class and interaction diagrams. It may also include a user interface mock-up.
An output artifact does not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process.
Some typical input artifacts for OOAD:
Conceptual model: A conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly independent of implementation details.
Use cases: A use case is a description of sequences of events that, taken together, complete a required task. Each use case provides scenarios that convey how the system should interact with actors (users). Actors may be end users or other systems. Use cases may be further elaborated using diagrams that identify the actor and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular use case, the events that actors generate and their order, including inter-system events.
User interface document: Document that shows and describes the user interface.
Data model: A data model describes how data elements relate to each other. The data model is created before the design phase. Object-oriented designs map directly from the data model. Relational designs are more involved.
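As a hypothetical illustration (the use case and all class names below are invented, not drawn from the text), a use case such as "a customer places an order" might map to classes like the following during object-oriented design:

```python
# A hypothetical mapping from a use case to classes; names are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:
    product: str
    quantity: int

@dataclass
class Order:
    customer: str
    lines: List[OrderLine] = field(default_factory=list)

    def add_line(self, product: str, quantity: int) -> None:
        """Corresponds to the 'add item to order' step of the use case."""
        self.lines.append(OrderLine(product, quantity))

order = Order(customer="Alice")
order.add_line("widget", 3)
print(order)
```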
System lifecycle
The system lifecycle is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal.
Conceptual design
The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
Need identification
Feasibility analysis
System requirements analysis
System specification
Conceptual design review
Preliminary system design
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
Functional analysis
Requirements allocation
Detailed trade-off studies
Synthesis of system options
Preliminary design of engineering models
Development specification
Preliminary design review
For example, as the systems analyst of Viti Bank, you have been tasked to examine the current information system. Viti Bank is a fast-growing bank in Fiji. Customers in remote rural areas find it difficult to access banking services; it may take them days or even weeks to travel to a location where they can do so. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations for how it can be improved to meet those needs.
Detail design and development
This stage includes the development of detailed designs that bring the initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance, and support requirements. The detail design and development stage is responsible for producing the product, process, and material specifications, and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
Detailed design
Detailed synthesis
Development of engineering and prototype models
Revision of development specification
Product, process, and material specification
Critical design review
Production and construction
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Production and/or construction of system components
Acceptance testing
System distribution and operation
Operational testing and evaluation
System assessment
Utilization and support
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
System operation in the user environment
Change management
System modifications for improvement
System assessment
Phase-out and disposal
Effectiveness and efficiency of the system must be continuously evaluated to determine when the product has met its maximum effective lifecycle. Considerations include: Continued existence of operational need, matching between operational requirements and system performance, feasibility of system phase-out versus maintenance, and availability of alternative systems.
Phases
System investigation
During this step, current priorities that would be affected and how they should be handled are considered. A feasibility study determines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs.
The feasibility study should address operational, financial, technical, human factors, and legal/political concerns.
Analysis
The goal of analysis is to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements.
Design
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems.
The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced.
Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a complete data model with a data dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input.
Testing
The code is tested at various levels in software testing. Unit, system, and user acceptance tests are typically performed, and many approaches to testing have been adopted; a minimal unit-test sketch appears after the list below.
The following types of testing may be relevant:
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
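A minimal sketch of a unit test, the lowest level in the list above (the function under test and its behaviour are invented for illustration only):

```python
# A minimal unit-test sketch; the function under test is a hypothetical example.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```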
Training and transition
Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training.
After training, systems engineers and developers transition the system to its production environment.
Operations and maintenance
Maintenance includes changes, fixes, and enhancements.
Evaluation
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
Life cycle
Management and control
SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control a substantial SDLC initiative, a work breakdown structure (WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The project manager chooses a WBS format that best describes the project.
The diagram shows that coverage spans numerous phases of the SDLC, and the associated management control domains (MCDs) map to SDLC phases. For example, Analysis and Design is primarily performed as part of the Acquisition and Implementation domain, and System Build and Prototype is primarily performed as part of Delivery and Support.
Work breakdown structured organization
The upper section of the WBS provides an overview of the project scope and timeline. It should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed, rather than activities to be undertaken, and have deadlines. Each task has a measurable output (e.g., an analysis document). A WBS task may rely on one or more activities (e.g. coding). Parts of the project needing support from contractors should have a statement of work (SOW). The development of a SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by contractors.
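A hypothetical WBS fragment, expressed as a nested data structure (all element names, outputs, and deadlines are invented), showing milestones and tasks with measurable outputs and deadlines:

```python
# A hypothetical WBS fragment; every name, output, and date is an invented example.

wbs = {
    "1 Project scope and timeline": {
        "1.1 Project charter": {"output": "charter document", "due": "2024-01-15"},
    },
    "2 SDLC phases": {
        "2.1 Requirements analysis": {"output": "analysis document", "due": "2024-02-28"},
        "2.2 System design": {"output": "design specification", "due": "2024-04-30"},
        "2.3 Build and test": {"output": "tested release candidate", "due": "2024-08-31"},
    },
}

for section, tasks in wbs.items():
    print(section)
    for task, attrs in tasks.items():
        print(f"  {task}: {attrs['output']} (due {attrs['due']})")
```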
Baselines
Baselines are established after four of the five phases of the SDLC, and are critical to the iterative nature of the model. Baselines become milestones.
functional baseline: established after the conceptual design phase.
allocated baseline: established after the preliminary design phase.
product baseline: established after the detail design and development phase.
updated product baseline: established after the production construction phase.
Alternative methodologies
Alternative software development methods to systems development life cycle are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP);
Open-source development
End-user development
Object-oriented programming
Strengths and weaknesses
Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large scale projects with many developers.
See also
Application lifecycle management
Decision cycle
IPO model
Software development methodologies
References
Further reading
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto, McGraw-Hill Ryerson
Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke.
Computer World, 2002, Retrieved on June 22, 2006, from the World Wide Web:
Management Information Systems, 2005, Retrieved on June 22, 2006, from the World Wide Web:
External links
The Agile System Development Lifecycle
Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology
DoD Integrated Framework Chart IFC (front, back)
FSA Life Cycle Framework
HHS Enterprise Performance Life Cycle Framework
The Open Systems Development Life Cycle
System Development Life Cycle Evolution Modeling
Zero Deviation Life Cycle
Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept.
Systems engineering
Computing terminology
Software development process
Software engineering | Systems development life cycle | [
"Technology",
"Engineering"
] | 3,519 | [
"Systems engineering",
"Computing terminology",
"Computer engineering",
"Software engineering",
"Information technology"
] |
15,929,501 | https://en.wikipedia.org/wiki/Performance%20per%20watt | In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when trying to compare between computing systems: an example using this is the Green500 list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's Law.
System designers building parallel computers, such as Google's hardware, pick CPUs based on their performance per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
Spaceflight computers have hard limits on the maximum power available and also have hard requirements on minimum real-time performance. A ratio of processing speed to required electrical power is more useful than raw processing speed.
Definition
The performance and power consumption metrics used depend on the definition; reasonable measures of performance are FLOPS, MIPS, or the score for any performance benchmark. Several measures of power usage may be employed, depending on the purposes of the metric; for example, a metric might only consider the electrical power delivered to a machine directly, while another might include all power necessary to run a computer, such as cooling and monitoring systems. The power measurement is often the average power used while running the benchmark, but other measures of power usage may be employed (e.g. peak power, idle power).
For example, the early UNIVAC I computer performed approximately 0.015 operations per watt-second (performing 1,905 operations per second (OPS) while consuming 125 kW). The Fujitsu FR-V VLIW/vector processor system on a chip, in the four-core FR550 variant released in 2005, performs 51 giga-OPS with 3 watts of power consumption, resulting in 17 billion operations per watt-second. This is an improvement of over a trillion times in 54 years.
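A small sketch reproducing the arithmetic of this example (the figures are taken from the text; the computation itself is simply a ratio):

```python
# Reproduces the performance-per-watt arithmetic using figures quoted above.

def ops_per_watt(ops_per_second: float, power_watts: float) -> float:
    """Operations per watt-second, i.e. operations per joule."""
    return ops_per_second / power_watts

univac = ops_per_watt(1_905, 125_000)  # ~0.015 operations per watt-second
fr550 = ops_per_watt(51e9, 3)          # ~1.7e10 operations per watt-second

print(f"UNIVAC I: {univac:.3f} ops per joule")
print(f"FR550:    {fr550:.2e} ops per joule")
print(f"improvement factor: {fr550 / univac:.2e}")  # on the order of a trillion
```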
Most of the power a computer uses is converted into heat, so a system that takes fewer watts to do a job will require less cooling to maintain a given operating temperature. Reduced cooling demands makes it easier to quiet a computer. Lower energy consumption can also make it less costly to run, and reduce the environmental impact of powering the computer (see green computing).
If installed where there is limited climate control, a lower power computer will operate at a lower temperature, which may make it more reliable. In a climate controlled environment, reductions in direct power use may also create savings in climate control energy.
Computing energy consumption is sometimes also measured by reporting the energy required to run a particular benchmark, for instance EEMBC EnergyBench. Energy consumption figures for a standard workload may make it easier to judge the effect of an improvement in energy efficiency.
When performance is expressed in operations per second, performance per watt can be written as operations per second per watt. Since a watt is one joule per second, performance per watt can also be written as operations per joule.
FLOPS per watt
FLOPS per watt is a common measure. Like the FLOPS (Floating Point Operations Per Second) metric it is based on, the metric is usually applied to scientific computing and simulations involving many floating point calculations.
Examples
On one edition of the Green500 list, the two most efficient supercomputers rated highest were both based on the same manycore accelerator, the Japanese PEZY-SCnp technology, in addition to Intel Xeon processors, and both were located at RIKEN; the top one achieved 6,673.8 MFLOPS/watt. The third-ranked system was the Chinese-technology Sunway TaihuLight (a much bigger machine, ranked 2nd on the TOP500, where the other two do not appear) at 6,051.3 MFLOPS/watt.
In June 2012, the Green500 list rated BlueGene/Q, Power BQC 16C as the most efficient supercomputer on the TOP500 in terms of FLOPS per watt, running at 2,100.88 MFLOPS/watt.
In November 2010, IBM machine, Blue Gene/Q achieves 1,684 MFLOPS/watt.
On 9 June 2008, CNN reported that IBM's Roadrunner supercomputer achieves 376 MFLOPS/watt.
As part of the Intel Tera-Scale research project, the team produced an 80-core CPU that can achieve over 16,000 MFLOPS/watt. The future of that CPU is not certain.
Microwulf, a low cost desktop Beowulf cluster of four dual-core Athlon 64 X2 3800+ computers, runs at 58 MFLOPS/watt.
Kalray has developed a 256-core VLIW CPU that achieves 25,000 MFLOPS/watt; the next generation is expected to achieve 75,000 MFLOPS/watt. However, in 2019 its latest embedded chip had 80 cores and claimed up to 4 TFLOPS at 20 W.
Adapteva announced the Epiphany V, a 1024-core 64-bit RISC processor intended to achieve 75 GFLOPS/watt, though it later announced that the Epiphany V was "unlikely" to become available as a commercial product.
US Patent 10,020,436, July 2018 claims three intervals of 100, 300, and 600 GFLOPS/watt.
GPU efficiency
Graphics processing units (GPUs) have continued to increase in energy usage, while CPU designers have recently focused on improving performance per watt. High-performance GPUs may draw large amounts of power, so intelligent techniques are required to manage GPU power consumption. Measures like the 3DMark2006 score per watt can help identify more efficient GPUs. However, that may not adequately capture efficiency in typical use, where much time is spent doing less demanding tasks.
With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into peak performance of a system that uses that design.
Since GPUs may also be used for some general purpose computation, sometimes their performance is measured in terms also applied to CPUs, such as FLOPS per watt.
Challenges
While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power.
Benchmarks that measure power under heavy load may not adequately reflect typical efficiency. For instance, 3DMark stresses the 3D performance of a GPU, but many computers spend most of their time doing less intense display tasks (idle, 2D tasks, displaying video). So the 2D or idle efficiency of the graphics system may be at least as significant for overall energy efficiency. Likewise, systems that spend much of their time in standby or soft off are not adequately characterized by just efficiency under load. To help address this some benchmarks, like SPECpower, include measurements at a series of load levels.
The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are some of the subsystems affected by this. So their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring.
Performance per watt also typically does not include full life-cycle costs. Since computer manufacturing is energy intensive, and computers often have a relatively short lifespan, energy and materials involved in production, distribution, disposal and recycling often make up significant portions of their cost, energy use, and environmental impact.
Energy required for climate control of the computer's surroundings is often not counted in the wattage calculation, but it can be significant.
Other energy efficiency measures
SWaP (space, wattage and performance) is a Sun Microsystems metric for data centers, incorporating power and space: SWaP = performance / (space × power), where performance is measured by any appropriate benchmark, and space is the size of the computer.
Reduction of power, mass, and volume is also important for spaceflight computers.
See also
Energy efficiency benchmarks
Average CPU power (ACP) a measure of power consumption when running several standard benchmarks
EEMBC EnergyBench
SPECpower a benchmark for web servers running Java (Server Side Java Operations per Joule)
Other
Data center infrastructure efficiency (DCIE)
Energy proportional computing
IT energy management
Koomey's law
Landauer's principle
Low-power electronics
Power usage effectiveness (PUE)
Processor power dissipation
Notes and references
Further reading
External links
The Green500
Benchmarks (computing)
Computers and the environment
Electric power
Energy conservation
Computer performance | Performance per watt | [
"Physics",
"Technology",
"Engineering"
] | 1,843 | [
"Physical quantities",
"Computing comparisons",
"Computer performance",
"Computers and the environment",
"Power (physics)",
"Benchmarks (computing)",
"Electric power",
"Computing and society",
"Electrical engineering",
"Computers"
] |
15,930,975 | https://en.wikipedia.org/wiki/Fazlul%20Halim%20Chowdhury | Fazlul Halim Chowdhury (1 August 19309 April 1996) was a fellow of the Bangladesh Academy of Sciences and one of the longest-serving Vice-Chancellors of the University of Dhaka. He made pioneering contributions to the development of physical chemistry in Bangladesh, publishing more than 20 articles. He focused on cellulose fibers (especially jute), polyelectrolytes, and proteins.
Early life
Chowdhury was born on 1 August 1930 to Abdul Aziz Chowdhury, an educationist and Afifa Khatun of Kunja Sreepur village, in Comilla District, Bengal Presidency.
Education
SSC, Noakhali R.K. Zilla H.E. School, 1945
HSC, Comilla Victoria College, 1947
BSc (Hons), Department of Chemistry, University of Dhaka, First in the First Class
MSc, Department of Chemistry, University of Dhaka, First in the First Class, 1951
PhD, Manchester University, UK (Thesis entitled "The Acid Behaviour of Carboxylic Derivatives", 5 July 1956) Awarded "Royal Commission for the Exhibition of 1851" to pursue PhD studies.
Academic career
Lecturer, 1952–53; assistant professor, 1956–58, Department of Chemistry, Dhaka University
Professor, Department of Chemistry and Applied Chemistry, Rajshahi University, 1958–90
Nuffield Fellow, Cambridge University, U.K. 1960–62
Dean, Faculty of Science, Rajshahi University, 1972
Commonwealth Senior Fellow, Cambridge University, U.K. 1973–74
Member, University Grants Commission (UGC), 1974–76
Vice Chancellor, University of Dhaka, 1976–83
Fellow of Bangladesh Academy of Sciences, 1979
Asia Foundation Fellowship, 1984
President, Bangladesh Chemical Society, 1984–86
Senior Advisor in Basic Sciences, UNESCO, New Delhi, 1985–90
University of Asia Pacific, Dhaka, 1995–96
President, The Rajshahi University Teachers Association
Provost, Abdul Latif Hall, Rajshahi University
Senior Researcher, American Association for the Advancement of Science Washington D.C.
Research
Chowdhury made pioneering contributions to the development of physical chemistry in the country, publishing more than 20 articles. He focused on cellulose fibers (of jute in particular), polyelectrolytes, and proteins. He also guided a number of PhD theses.
References
1930 births
1996 deaths
Fellows of Bangladesh Academy of Sciences
Bangladeshi chemists
Physical chemists
Vice-chancellors of the University of Dhaka
Academic staff of the University of Rajshahi
Comilla Victoria Government College alumni
People from Chauddagram Upazila | Fazlul Halim Chowdhury | [
"Chemistry"
] | 523 | [
"Physical chemists"
] |
15,931,153 | https://en.wikipedia.org/wiki/Model%20complete%20theory | In model theory, a first-order theory is called model complete if every embedding of its models is an elementary embedding.
Equivalently, every first-order formula is equivalent, modulo the theory, to a universal formula.
This notion was introduced by Abraham Robinson.
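In symbols, the definition can be restated as follows (a standard reformulation, where \subseteq denotes the substructure relation and \preceq the elementary substructure relation):

T \text{ is model complete} \iff \text{for all } M, N \models T:\; M \subseteq N \implies M \preceq N.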
Model companion and model completion
A companion of a theory T is a theory T* such that every model of T can be embedded in a model of T* and vice versa.
A model companion of a theory T is a companion of T that is model complete. Robinson proved that a theory has at most one model companion. Not every theory is model-companionable; for example, the theory of groups is not. However, if T is an ℵ0-categorical theory, then it always has a model companion.
A model completion for a theory T is a model companion T* such that for any model M of T, the theory of T* together with the diagram of M is complete. Roughly speaking, this means every model of T is embeddable in a model of T* in a unique way.
If T* is a model companion of T then the following conditions are equivalent:
T* is a model completion of T
T has the amalgamation property.
If T also has universal axiomatization, both of the above are also equivalent to:
T* has elimination of quantifiers
Examples
Any theory with elimination of quantifiers is model complete.
The theory of algebraically closed fields is the model completion of the theory of fields. It is model complete but not complete.
The model completion of the theory of equivalence relations is the theory of equivalence relations with infinitely many equivalence classes, each containing an infinite number of elements.
The theory of real closed fields, in the language of ordered rings, is a model completion of the theory of ordered fields (or even ordered domains).
The theory of real closed fields, in the language of rings, is the model companion for the theory of formally real fields, but is not a model completion.
Non-examples
The theory of dense linear orders with a first and last element is complete but not model complete.
The theory of groups (in a language with symbols for the identity, product, and inverses) has the amalgamation property but does not have a model companion.
Sufficient condition for completeness of model-complete theories
If T is a model complete theory and there is a model of T that embeds into every model of T, then T is complete.
Notes
References
Mathematical logic
Model theory | Model complete theory | [
"Mathematics"
] | 500 | [
"Mathematical logic",
"Model theory"
] |
15,931,374 | https://en.wikipedia.org/wiki/Low-flush%20toilet | A low-flush toilet (or low-flow toilet or high-efficiency toilet) is a flush toilet that uses significantly less water than traditional high-flow toilets. Before the early 1990s in the United States, standard flush toilets typically required at least 3.5 gallons (13.2 litres) per flush and they used float valves that often leaked, increasing their total water use. In the early 1990s, because of concerns about water shortages, and because of improvements in toilet technology, some states and then the federal government began to develop water-efficiency standards for appliances, including toilets, mandating that new toilets use less water. The first standards required low-flow toilets of 1.6 gallons (6.0 litres) per flush. Further improvements in the technology to overcome concerns about the initial poor performance of early models have further cut the water use of toilets and while federal standards stagnate at 1.6 gallons per flush, certain states' standards toughened up to require that new toilets use no more than 1.28 gallons (4.8 litres) per flush, while working far better than older models. Low-flush toilets include single-flush models and dual-flush toilets, which typically use 1.6 US gallons per flush for the full flush and 1.28 US gallons or less for a reduced flush.
Water savings
The US Environmental Protection Agency's WaterSense program provides certification that toilets meet the goal of using less than 1.6 US gallons per flush. Units that meet or exceed this standard can carry the WaterSense sticker. The EPA estimates that the average US home will save US$90 per year, and $2,000 over the lifetime of the toilets. Dry toilets can lead to even more water savings in private homes as they use no water for flushing.
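As a back-of-the-envelope illustration of where such savings come from, here is a small Python sketch; the per-person flush count and household size are our own assumptions for illustration, not EPA figures or methodology.

```python
# Back-of-the-envelope illustration (assumptions ours, not EPA methodology):
# water saved per year by replacing a 3.5 gal/flush toilet with a 1.28 gal/flush model.
FLUSHES_PER_PERSON_PER_DAY = 5   # assumed
HOUSEHOLD_SIZE = 3               # assumed
OLD_GPF = 3.5                    # gallons per flush, pre-1990s toilet
NEW_GPF = 1.28                   # gallons per flush, high-efficiency toilet

flushes_per_year = FLUSHES_PER_PERSON_PER_DAY * HOUSEHOLD_SIZE * 365
gallons_saved = flushes_per_year * (OLD_GPF - NEW_GPF)
print(f"Approx. {gallons_saved:,.0f} gallons saved per year")  # roughly 12,000 gallons under these assumptions
```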
Problems
The early low-flush toilets in the US often had a poor design that required more than one flush to rid the bowl of solid waste, resulting in limited water savings. In response, US Congressman Joe Knollenberg from Michigan tried to get Congress to repeal the law but was unsuccessful, and the industry worked to redesign and improve toilet functioning. Some reductions in sewer flows have caused slight backups or required redesign of wastewater pipes, but overall, very substantial residential water savings have resulted from the change over time to more efficient toilets.
History
In 1988 Massachusetts became the first state in the US to mandate the use of low-flush toilets in new construction and remodeling. In 1992 US President George H. W. Bush signed the Energy Policy Act. This law made 1.6 gallons per flush a mandatory federal maximum for new toilets. This law went into effect on January 1, 1994, for residential buildings and January 1, 1997, for commercial buildings.
The first generation of low-flush toilets were simple modifications of traditional toilets. A valve would open and the water would passively flow into the bowl. The resulting water pressure was often inadequate to carry away waste. Improvements in design now make modern models not only more water-efficient but more effective than old models. In addition to tank-type toilets that "pull" waste down, there are also now pressure-assist models, which use water pressure to effectively "push" waste.
See also
Low-flow fixtures
Dual flush toilet
Sewer dosing unit
Waterless urinal
Residential water use in the U.S. and Canada
References
Toilets
Toilet types
Water conservation
Water conservation tools
Sustainable products
Bathrooms | Low-flush toilet | [
"Biology"
] | 689 | [
"Excretion",
"Toilets"
] |
15,932,284 | https://en.wikipedia.org/wiki/Microscopy%20Society%20of%20America | The Microscopy Society of America (MSA), founded in 1942 as The Electron Microscope Society of America, is a non-profit organization that provides microanalytical facilities for studies within the sciences. Currently, there are approximately 3000 members. The society holds an annual meeting, which is usually held in the beginning of August. It has 30 local affiliates across the United States. The society has a program for examining and certifying technologists of electron microscopes. The organization produces two journals: Microscopy Today, and Microscopy and Microanalysis. As of 2024, the President is Jay Potts.
History
A meeting of electron microscopists took place in November 1942 at the Sherman House Hotel in Chicago. It was organized by G. L. Clark of the University of Chicago. At this meeting the society was founded as the Electron Microscope Society of America (EMSA). For the 1949 meeting, the EMSA invited representatives from European microscopy societies, which may have been a catalyzing event for the creation of an international microscopy society: the International Federation of Societies for Electron Microscopy (IFSEM), which the EMSA later joined, and would eventually hold joint meetings with IFSEM; the first of these joint meetings would be the 9th International Congress of Electron Microscopy in 1978.
The name of the society was changed in 1964 to the Electron Microscopy Society of America to "reflect the cross-discipline nature of microscopy applications." In 1993, the name was changed to the current one: the Microscopy Society of America to "reflect the increasing diversity of microscopy and microanalysis techniques and their applications represented at the annual Microscopy and Microanalysis (M&M) meeting and in MSA publications."
Structure
The MSA has an MSA Executive Council made up of five individuals: the president, president-elect, past president, treasurer and secretary. Those elected president serve three-year terms, where they have different roles during each year. During the first year they act as the president-elect, during the second year they act as the president, and during the final year they act as the past president. The treasurer serves a five-year term, and the secretary serves a two-year term.
Additionally, there is an MSA Council made up of seven individuals each elected to two-year terms.
Publications
Microscopy Today and Microscopy and Microanalysis both release six times a year alternating with each other. The former is released in odd months (January, March etc.), while the latter is released in even months (February, April etc.). Both are now published by Oxford University Press, but were published by the Cambridge University Press prior to 2023.
Microscopy Today
Microscopy Today is a trade magazine intended to provide information to microscopists working in all fields, with coverage including light microscopy, microanalytical methods and electron microscopy. The current editor-in-chief is Dr. Robert L. Price.
It was published by Cambridge University Press until Volume 31, where publishing was taken over by Oxford University Press.
Microscopy and Microanalysis
Microscopy Listserver
The Microscopy Listserver is a network-based discussion forum giving members of the scientific community a centralized Internet address to which questions/comments/answers in the various fields of Microscopy or Microanalysis can be rapidly distributed to a list of (subscribed) individuals by electronic mail. There are in excess of 3000 subscribers to the Microscopy Listserver from over 40 countries on 6 continents, who participate in this system on a daily basis. Messages are posted and circulated daily on a variety of topics. The Listserver was founded by Nestor J. Zaluzec, who continues to host and operate the service for the scientific community; the Listserver is co-sponsored in part by the Microscopy Society of America.
This Listserver has been in operation since 1993 and maintains a searchable archive of all posted Email questions, comments, and responses. Every two months, selected contributions on the Microscopy Listserver are published in the archives of Microscopy Today.
For the purposes of this forum, Microscopy or Microanalysis is considered to include all techniques which employ a probe such as: photons (including x-rays), electrons, ions, mechanical and/or electromagnetic radiation to form a representation or characterization of the microstructure (internal or external) of any material in either physical and/or life sciences applications.
Some of the more common techniques which are associated with this field include the following:
optical microscopy
x-ray microscopy
scanning electron microscopy
transmission electron microscopy
atomic force microscopy
scanning tunneling microscopy
scanning ion microscopy
analytical electron microscopy
high resolution electron microscopy
intermediate/high voltage electron microscopy
electron microprobe analyzers
x-ray energy dispersive spectroscopy
electron energy loss spectroscopy
There are no charges for usage of the forum, except for the request that subscribers actively participate in any discussion to which they have a question, comment and/or contribution.
Unsolicited commercial advertising messages are prohibited, however, brief announcements of educational/training courses are permitted on a strictly limited basis.
In compliance with US Public Law 108-187 (CANSPAM Act) only subscribers and/or posters receive copies of posting to the Listserver via Email. Non-subscribers are allowed to browse the archives.
Notable people
Thomas F. Anderson, biophysical chemist and geneticist; elected President of the Electron Microscope Society of America in 1955.
M. Grace Burke, materials scientist; elected President of Microscopy Society of America in 2005.
C. Barry Carter, professor of material science; elected President of Microscopy Society of America in 1997.
Thomas Eugene Everhart, educator and physicist; elected President of the Electron Microscopy Society of America in 1977.
Robert Glaeser, biochemist; elected President of the Electron Microscopy Society of America in 1986.
Ernest Lenard Hall, university professor; elected President of Microscopy Society of America in 2013.
David Harker, medical researcher; elected President of the Electron Microscope Society of America in 1946.
Étienne de Harven, pathologist and electron microscopist; elected President of the Electron Microscopy Society of America in 1976.
James Hillier, scientist and inventor who commercialized the first electron microscope with Albert Prebus; elected President of the Electron Microscope Society of America in 1945.
Deborah F. Kelly (living), biomedical engineer and university professor; elected President of the Microscopy Society of America in 2022.
Michael A. O'Keefe, physicist; elected President of Microscopy Society of America in 2007.
David W. Piston, physicist; elected President of Microscopy Society of America in 2010.
Keith R. Porter, cell biologist; elected President of the Electron Microscope Society of America in 1962 and 1990.
David J. Smith, experimental physicist; elected President of Microscopy Society of America in 2009.
Robley C. Williams, biophysicist and virologist; elected President of the Electron Microscope Society of America in 1951.
Ralph Walter Graystone Wyckoff, chemist and pioneer of X-ray crystallography; elected President of the Electron Microscope Society of America in 1950.
Nestor J. Zaluzec, scientist and inventor; elected President of Microscopy Society of America in 2011.
References
External links
Official website
Microscopy organizations
Organizations established in 1942
Scientific societies based in the United States
1942 establishments in the United States | Microscopy Society of America | [
"Chemistry"
] | 1,472 | [
"Microscopy organizations",
"Microscopy"
] |
15,932,531 | https://en.wikipedia.org/wiki/Weak%20focusing | Weak focusing occurs in particle accelerators when charged particles are passing through uniform magnetic fields, causing them to move in circular paths due to the Lorentz force. Because of the circular movement, the orbits of two particles with slightly different positions may approach or even cross each other.
Because a particle beam has a finite emittance, this effect was used in cyclotrons and early synchrotrons to prevent the growth of deviations from the desired particle orbit. Due to its definition, it also occurs in the dipole magnets of modern accelerator facilities and must be considered in beam optics calculations. In modern facilities, most of the beam focusing is usually done by quadrupole magnets, using strong focusing to enable smaller beam sizes and vacuum chambers, thus reducing the average magnet size.
References
Accelerator physics | Weak focusing | [
"Physics"
] | 163 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
15,933,523 | https://en.wikipedia.org/wiki/Xi%20Aquilae%20b | Xi Aquilae b (abbreviated ξ Aquilae b, ξ Aql b), formally named Fortitudo, is an extrasolar planet approximately 184 light-years from the Sun in the constellation of Aquila. The planet was discovered orbiting the yellow giant star Xi Aquilae in 2008. The planet has a minimum mass 2.8 times that of Jupiter and an orbital period of 137 days.
Name
Following its discovery the planet was designated Xi Aquilae b. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Fortitudo for this planet.
The winning name was submitted by Libertyer, a student club at Hosei University of Tokyo, Japan. Fortitudo is Latin for 'fortitude'. Aquila is Latin for 'eagle', a symbol of fortitude – emotional and mental strength in the face of adversity.
See also
18 Delphini b
41 Lyncis b
References
External links
Aquila (constellation)
Giant planets
Exoplanets discovered in 2008
Exoplanets detected by radial velocity
Exoplanets with proper names | Xi Aquilae b | [
"Astronomy"
] | 268 | [
"Aquila (constellation)",
"Constellations"
] |
15,933,750 | https://en.wikipedia.org/wiki/Forking%20extension | In model theory, a forking extension of a type is an extension of that type that is not free, whereas a non-forking extension is an extension that is as free as possible. This can be used to extend the notions of linear or algebraic independence to stable theories. These concepts were introduced by S. Shelah.
Definitions
Suppose that A and B are models of some complete ω-stable theory T. If p is a type of A and q is a type of B containing p, then q is called a forking extension of p if its Morley rank is smaller, and a nonforking extension if it has the same Morley rank.
Axioms
Let T be a stable complete theory. The non-forking relation ≤ for types over T is the unique relation that satisfies the following axioms:
If p≤q then p⊂q. If f is an elementary map then p≤q if and only if fp≤fq
If p⊂q⊂r then p≤r if and only if p≤q and q≤r
If p is a type of A and A⊂B then there is some type q of B with p≤q.
There is a cardinal κ such that if p is a type of A then there is a subset A0 of A of cardinality less than κ so that (p|A0) ≤ p, where | stands for restriction.
For any p there is a cardinal λ such that there are at most λ non-contradictory types q with p≤q.
References
Model theory | Forking extension | [
"Mathematics"
] | 313 | [
"Applied mathematics",
"Mathematical logic",
"Applied mathematics stubs",
"Model theory"
] |
15,933,761 | https://en.wikipedia.org/wiki/41%20Lyncis%20b | 41 Lyncis b (abbreviated 41 Lyn b), also designated HD 81688 b and named Arkas , is an extrasolar planet approximately 280 light-years from Earth in the constellation of Ursa Major.
A gas giant with a minimum mass 2.7 times that of Jupiter, it orbits the K-type star 41 Lyncis with an orbital period of 184 days (corresponding to a semi-major axis of 0.81 AU). It was discovered and announced by Bun'ei Sato on February 19, 2008.
Name
In July 2014, the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the name Arkas for this planet. The winning name was submitted by the Okayama Astro Club of Japan. Arkas was the son of Callisto (Ursa Major) in Greek mythology.
See also
18 Delphini b
Xi Aquilae b
References
Ursa Major
Giant planets
Exoplanets discovered in 2008
Exoplanets detected by radial velocity
Exoplanets with proper names | 41 Lyncis b | [
"Astronomy"
] | 250 | [
"Ursa Major",
"Constellations"
] |
15,933,895 | https://en.wikipedia.org/wiki/18%20Delphini%20b | 18 Delphini b (abbreviated 18 Del b), formally named Arion , is an extrasolar planet approximately 249 light-years away in the constellation of Delphinus.
The 993-day period planet orbits the yellow giant star 18 Delphini. A very massive and dense planet with a minimum mass of , it was discovered on February 19, 2008, by Bun'ei Sato.
In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the name Arion for this planet. The winning name was submitted by the Tokushima Prefectural Jonan High School Science Club of Japan. Arion was a genius of poetry and music in ancient Greece. According to legend, his life was saved at sea by dolphins after attracting their attention by the playing of his kithara ('Delphinus' is Latin for 'dolphin').
See also
41 Lyncis b
Xi Aquilae b
References
External links
– lists data about the star.
Delphinus
Giant planets
Exoplanets discovered in 2008
Exoplanets detected by radial velocity
Exoplanets with proper names | 18 Delphini b | [
"Astronomy"
] | 263 | [
"Delphinus",
"Constellations"
] |
2,977,685 | https://en.wikipedia.org/wiki/Ball%20flower | The ball-flower (also written ballflower) is an architectural ornament in the form of a ball inserted in the cup of a flower. It came into use in the latter part of the 13th century in England and became one of the chief ornaments of the 14th century, in the period known as Decorated Gothic.
Ball-flowers were generally placed in rows at equal distances in the hollow of a moulding, frequently by the sides of mullions. Examples are found in many churches of the period including Gloucester Cathedral; St Mary's Church, Bloxham; St. Michael's Church, Swaton (c. 1300); and Tewkesbury Abbey (c. 1330). The presence of ball-flowers on the west part of Salisbury Cathedral has helped date this facade to the 14th century.
References
Sources
External links
Picture of ball-flowers outlining a window of Gloucester Cathedral
Ornaments (architecture)
Visual motifs | Ball flower | [
"Mathematics"
] | 188 | [
"Symbols",
"Visual motifs"
] |
2,977,752 | https://en.wikipedia.org/wiki/Hartogs%20number | In mathematics, specifically in axiomatic set theory, a Hartogs number is an ordinal number associated with a set. In particular, if X is any set, then the Hartogs number of X is the least ordinal α such that there is no injection from α into X. If X can be well-ordered then the cardinal number of α is a minimal cardinal greater than that of X. If X cannot be well-ordered then there cannot be an injection from X to α. However, the cardinal number of α is still a minimal cardinal number (i.e. ordinal) not less than or equal to the cardinality of X (with the bijection definition of cardinality and the injective function order). (If we restrict to cardinal numbers of well-orderable sets then that of α is the smallest that is not less than or equal to that of X.) The map taking X to α is sometimes called Hartogs's function. This mapping is used to construct the aleph numbers, which are all the cardinal numbers of infinite well-orderable sets.
The existence of the Hartogs number was proved by Friedrich Hartogs in 1915, using Zermelo set theory alone (that is, without using the axiom of choice, or the later-introduced Replacement schema of Zermelo-Fraenkel set theory).
Hartogs's theorem
Hartogs's theorem states that for any set X, there exists an ordinal α such that |α| ≰ |X|; that is, such that there is no injection from α to X. As ordinals are well-ordered, this immediately implies the existence of a Hartogs number for any set X. Furthermore, the proof is constructive and yields the Hartogs number of X.
Proof
Let α be the class of all ordinal numbers β for which an injective function exists from β into X.
First, we verify that α is a set.
X × X is a set, as can be seen from the axiom of power set, since X × X can be identified with a subset of the power set of the power set of X.
The power set of X × X is a set, by the axiom of power set.
The class W of all reflexive well-orderings of subsets of X is a definable subclass of the preceding set, so it is a set by the axiom schema of separation.
The class of all order types of well-orderings in W is a set by the axiom schema of replacement, since the map sending each well-ordering to its order type can be described by a simple formula.
But this last set is exactly α. Now, because a transitive set of ordinals is again an ordinal, α is an ordinal. Furthermore, there is no injection from α into X, because if there were, then we would get the contradiction that α ∈ α. And finally, α is the least such ordinal with no injection into X. This is true because, since α is an ordinal, for any β < α, β ∈ α so there is an injection from β into X.
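A compact restatement of the construction above, in our own notation (not taken from the article):

```latex
% Compact restatement of the construction above (notation ours):
\[
  \alpha \;=\; \{\, \beta \in \mathrm{Ord} \;:\; \exists\, f\colon \beta \hookrightarrow X \text{ injective} \,\},
  \qquad \aleph(X) \;=\; \alpha,
\]
% so the Hartogs number of X is literally the set of ordinals that inject into X,
% and it is the least ordinal that does not inject into X.
```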
Historical remark
In 1915, Hartogs could use neither von Neumann ordinals nor the replacement axiom, and so his result is one of Zermelo set theory and looks rather different from the modern exposition above. Instead, he considered the set of isomorphism classes of well-ordered subsets of X and the relation in which the class of A precedes that of B if A is isomorphic with a proper initial segment of B. Hartogs showed this to be a well-ordering greater than any well-ordered subset of X. However, the main purpose of his contribution was to show that trichotomy for cardinal numbers implies the (then 11-year-old) well-ordering theorem (and, hence, the axiom of choice).
See also
Successor cardinal
Aleph number
References
Set theory
Cardinal numbers | Hartogs number | [
"Mathematics"
] | 799 | [
"Cardinal numbers",
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Numbers"
] |
2,977,884 | https://en.wikipedia.org/wiki/Conjugation%20of%20isometries%20in%20Euclidean%20space | In a group, the conjugate by g of h is ghg⁻¹.
Translation
If h is a translation, then its conjugation by an isometry can be described as applying the isometry to the translation:
the conjugation of a translation by a translation is the first translation
the conjugation of a translation by a rotation is a translation by a rotated translation vector
the conjugation of a translation by a reflection is a translation by a reflected translation vector
Thus the conjugacy class within the Euclidean group E(n) of a translation is the set of all translations by the same distance.
The smallest subgroup of the Euclidean group containing all translations by a given distance is the set of all translations. So, this is the conjugate closure of a singleton containing a translation.
Thus E(n) is a semidirect product of the orthogonal group O(n) and the subgroup of translations T, and O(n) is isomorphic with the quotient group of E(n) by T:
O(n) ≅ E(n) / T
Thus there is a partition of the Euclidean group into subsets, each consisting of the combinations of one isometry that keeps the origin fixed with all translations.
Each isometry is given by an orthogonal matrix A in O(n) and a vector b, acting as x ↦ Ax + b,
and each subset in the quotient group is given by the matrix A only.
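To make the (A, b) representation and the conjugation rules above concrete, here is a minimal NumPy sketch (our own illustration, not part of the source article); it checks that conjugating a translation by a rotation yields a translation by the rotated vector.

```python
import numpy as np

def compose(iso1, iso2):
    # (A1, b1) after (A2, b2): x -> A1(A2 x + b2) + b1 = (A1 A2) x + (A1 b2 + b1)
    A1, b1 = iso1
    A2, b2 = iso2
    return A1 @ A2, A1 @ b2 + b1

def inverse(iso):
    A, b = iso
    return A.T, -A.T @ b  # valid because A is orthogonal

def conjugate(g, h):
    # returns g h g^-1
    return compose(compose(g, h), inverse(g))

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotation = (R, np.zeros(2))                       # rotation about the origin
translation = (np.eye(2), np.array([1.0, 2.0]))   # translation by (1, 2)

A, b = conjugate(rotation, translation)
print(np.allclose(A, np.eye(2)))                  # True: the conjugate is again a translation
print(np.allclose(b, R @ np.array([1.0, 2.0])))   # True: by the rotated translation vector
```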
Similarly, for the special orthogonal group SO(n) we have
SO(n) ≅ E+(n) / T
Inversion
The conjugate of the inversion in a point by a translation is the inversion in the translated point, etc.
Thus the conjugacy class within the Euclidean group E(n) of inversion in a point is the set of inversions in all points.
Since a combination of two inversions is a translation, the conjugate closure of a singleton containing inversion in a point is the set of all translations and the inversions in all points. This is the generalized dihedral group dih (Rn).
Similarly { I, −I } is a normal subgroup of O(n), and we have:
E(n) / dih (Rn) ≅ O(n) / { I, −I }
For odd n we also have:
O(n) ≅ SO(n) × { I, −I }
and hence not only
O(n) / SO(n) ≅ { I, −I }
but also:
O(n) / { I, −I } ≅ SO(n)
For even n we have:
E+(n) / dih (Rn) ≅ SO(n) / { I, −I }
Rotation
In 3D, the conjugate by a translation of a rotation about an axis is the corresponding rotation about the translated axis. Such a conjugation produces the screw displacement known to express an arbitrary Euclidean motion according to Chasles' theorem.
The conjugacy class within the Euclidean group E(3) of a rotation about an axis is the set of rotations by the same angle about any axis.
The conjugate closure of a singleton containing a rotation in 3D is E+(3).
In 2D it is different in the case of a k-fold rotation: the conjugate closure contains k rotations (including the identity) combined with all translations.
E(2) has quotient group O(2) / Ck and E+(2) has quotient group SO(2) / Ck . For k = 2 this was already covered above.
Reflection
The conjugates of a reflection are reflections with a translated, rotated, and reflected mirror plane. The conjugate closure of a singleton containing a reflection is the whole E(n).
Rotoreflection
The left and also the right coset of a reflection in a plane combined with a rotation by a given angle about a perpendicular axis is the set of all combinations of a reflection in the same or a parallel plane, combined with a rotation by the same angle about the same or a parallel axis, preserving orientation
Isometry groups
Two isometry groups are said to be equal up to conjugacy with respect to affine transformations if there is an affine transformation such that all elements of one group are obtained by taking the conjugates by that affine transformation of all elements of the other group. This applies for example for the symmetry groups of two patterns which are both of a particular wallpaper group type. If we would just consider conjugacy with respect to isometries, we would not allow for scaling, and in the case of a parallelogrammatic lattice, change of shape of the parallelogram. Note however that the conjugate with respect to an affine transformation of an isometry is in general not an isometry, although volume (in 2D: area) and orientation are preserved.
Cyclic groups
Cyclic groups are Abelian, so the conjugate by every element of every element is the latter.
Zmn / Zm ≅ Zn.
Zmn is the direct product of Zm and Zn if and only if m and n are coprime. Thus e.g. Z12 is the direct product of Z3 and Z4, but not of Z6 and Z2.
Dihedral groups
Consider the 2D isometry point group Dn. The conjugates of a rotation are the same and the inverse rotation. The conjugates of a reflection are the reflections rotated by any multiple of the full rotation unit. For odd n these are all reflections, for even n half of them.
This group, and more generally, abstract group Dihn, has the normal subgroup Zm for all divisors m of n, including n itself.
Additionally, Dih2n has two normal subgroups isomorphic with Dihn. They both contain the same group elements forming the group Zn, but each has additionally one of the two conjugacy classes of Dih2n \ Z2n.
In fact:
Dihmn / Zn ≅ Dihn
Dih2n / Dihn ≅ Z2
Dih4n+2 ≅ Dih2n+1 × Z2
References
Euclidean symmetries
Group theory | Conjugation of isometries in Euclidean space | [
"Physics",
"Mathematics"
] | 1,268 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Symmetry"
] |
2,977,910 | https://en.wikipedia.org/wiki/N-skeleton | In mathematics, particularly in algebraic topology, the n-skeleton of a topological space X presented as a simplicial complex (resp. CW complex) refers to the subspace Xn that is the union of the simplices of X (resp. cells of X) of dimensions m ≤ n. In other words, given an inductive definition of a complex, the n-skeleton is obtained by stopping at the n-th step.
These subspaces increase with n. The 0-skeleton is a discrete space, and the 1-skeleton a topological graph. The skeletons of a space are used in obstruction theory, to construct spectral sequences by means of filtrations, and generally to make inductive arguments. They are particularly important when X has infinite dimension, in the sense that the skeletons Xn do not become constant as n increases.
In geometry
In geometry, a k-skeleton of a polytope P (functionally represented as skelk(P)) consists of all elements of P of dimension up to k.
For example:
skel0(cube) = 8 vertices
skel1(cube) = 8 vertices, 12 edges
skel2(cube) = 8 vertices, 12 edges, 6 square faces
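Continuing the cube example, a minimal Python sketch (ours, not from the article) that represents a polytope only by its element counts per dimension and extracts the k-skeleton counts:

```python
# Minimal sketch (ours): the k-skeleton of a cube, represented simply
# as counts of elements in each dimension.
cube = {0: 8, 1: 12, 2: 6}  # vertices, edges, square faces

def skel(polytope, k):
    """Return the element counts of the k-skeleton: all dimensions up to k."""
    return {dim: count for dim, count in polytope.items() if dim <= k}

print(skel(cube, 0))  # {0: 8}
print(skel(cube, 1))  # {0: 8, 1: 12}
print(skel(cube, 2))  # {0: 8, 1: 12, 2: 6}
```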
For simplicial sets
The above definition of the skeleton of a simplicial complex is a particular case of the notion of skeleton of a simplicial set. Briefly speaking, a simplicial set can be described by a collection of sets Xi (i ≥ 0), together with face and degeneracy maps between them satisfying a number of equations. The idea of the n-skeleton is to first discard the sets Xi with i > n and then to complete the collection of the Xi with i ≤ n to the "smallest possible" simplicial set so that the resulting simplicial set contains no non-degenerate simplices in degrees greater than n.
More precisely, the restriction functor that truncates a simplicial set to its degrees at most n
has a left adjoint, denoted skn. (The notations are comparable with the one of image functors for sheaves.) The n-skeleton of a simplicial set K is defined as skn(K).
Coskeleton
Moreover, this restriction functor also has a right adjoint, denoted coskn. The n-coskeleton of a simplicial set K is defined as coskn(K).
For example, the 0-skeleton of K is the constant simplicial set defined by K0. The 0-coskeleton is given by the Čech nerve of K0.
(The boundary and degeneracy morphisms are given by various projections and diagonal embeddings, respectively.)
The above constructions work for more general categories (instead of sets) as well, provided that the category has fiber products. The coskeleton is needed to define the concept of hypercovering in homotopical algebra and algebraic geometry.
References
External links
Algebraic topology
General topology | N-skeleton | [
"Mathematics"
] | 526 | [
"General topology",
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
2,977,958 | https://en.wikipedia.org/wiki/Cyanocarbon | In organic chemistry, cyanocarbons are a group of chemical compounds that contain several cyanide functional groups. Such substances generally are classified as organic compounds, since they are formally derived from hydrocarbons by replacing one or more hydrogen atoms with a cyanide group. One of the simplest members is C(CN)4 (tetracyanomethane, also known as carbon tetracyanide). Organic chemists often refer to cyanides as nitriles.
In general, cyanide is an electronegative substituent. Thus, for example, cyanide-substituted carboxylic acids tend to be stronger than the parents. The cyanide group can also stabilize anions by delocalizing negative charge as revealed by resonance structures.
Definition and examples
Cyanocarbons are organic compounds bearing enough cyano functional groups to significantly alter their chemical properties.
Illustrative cyanocarbons:
Tetracyanoethylene, which reduces to a stable anion [C2(CN)4]−, unlike most derivatives of ethylene.
Pentacyanocyclopentadiene, which forms an air-stable anion, in contrast to cyclopentadiene.
Tetracyanoethylene oxide, an electrophilic epoxide that undergoes ready scission of its C-C bond.
Tetracyanoquinodimethane (TCNQ), which reduces to a stable anion, unlike most quinones.
Cyanoform (tricyanomethane), HC(CN)3.
Pentacyanopropenide, [C3(CN)5]−.
References
Nitriles | Cyanocarbon | [
"Chemistry"
] | 327 | [
"Functional groups",
"Organic compounds",
"Nitriles",
"Organic compound stubs",
"Organic chemistry stubs"
] |
2,977,969 | https://en.wikipedia.org/wiki/Cycloamylose | Cycloamyloses are cyclic α-1,4 linked glucans comprising dozens or hundreds of glucose units. Chemically they are similar to the much smaller cyclodextrins, which are typically composed of 6, 7 or 8 glucose units.
Discovery
Cycloamyloses were discovered as a result of studies of the function of 4-α-glucanotransferase, also known as disproportionating enzyme or D-enzyme (EC 2.4.1.25) isolated from potato.
Synthesis
Upon incubation of D-enzyme with high molecular weight amylose, a product was obtained with decreased ability to form a blue complex with iodine, without reducing or non-reducing ends, and resistant to hydrolysis by glucoamylase (an exoamylase). Takaha and Smith deduced that the product was a cyclic polymer, which they confirmed by mass spectrometry and acid hydrolysis, and showed that it comprised between 17 and several hundred glucose units. It was subsequently shown that D-enzyme could create complex cycloglucans from amylopectin. Similar 4-α-glucanotransferases from bacteria and other organisms have also been shown to produce cycloglucans upon incubation with amylose or amylopectin.
Structure
While the structures of cyclodextrins are planar circles, the structure of cycloamyloses with 10 to 14 glucose units were determined to be circular with strain-induced band-flips and kinks. In contrast the structure of a larger cycloamylose with 26 glucose units was determined to comprise two short left-handed V-amylose helices in antiparallel arrangement.
Applications
Cycloamyloses contain cavities in the helices which are capable of accommodating guest molecules, which suggested applications in chemical technologies. Cycloamylose is used in artificial chaperone technology for the refolding of denatured proteins. Cycloglucans have physicochemical properties that make them useful in food and manufacturing.
References
Polysaccharides
Starch
Macrocycles | Cycloamylose | [
"Chemistry"
] | 458 | [
"Organic compounds",
"Carbohydrates",
"Macrocycles",
"Polysaccharides"
] |
2,978,009 | https://en.wikipedia.org/wiki/Cytochemistry | Cytochemistry is the branch of cell biology dealing with the detection of cell constituents by means of biochemical analysis and visualization techniques. This is the study of the localization of cellular components through the use of staining methods. The term is also used to describe a process of identification of the biochemical content of cells. Cytochemistry is a science of localizing chemical components of cells and cell organelles on thin histological sections by using several techniques like enzyme localization, micro-incineration, micro-spectrophotometry, radioautography, cryo-electron microscopy, X-ray microanalysis by energy-dispersive X-ray spectroscopy, immunohistochemistry and cytochemistry, etc.
Freeze fracture enzyme cytochemistry
Freeze fracture enzyme cytochemistry was initially mentioned in a 1987 study by Pinto de Silva. It is a technique that allows the introduction of cytochemistry into a freeze-fractured cell membrane. Immunocytochemistry is used in this technique to label and visualize the cell membrane's molecules. This technique could be useful in analyzing the ultrastructure of cell membranes. By combining immunocytochemistry with the freeze fracture technique, researchers can identify and better understand the structure and distribution of molecules in the cell membrane.
Origin
Jean Brachet's research in Brussels demonstrated the localization and relative abundance of RNA and DNA in the cells of both animals and plants, and opened the door to research in cytochemistry. The work by Moller and Holter in 1976 on endocytosis, which discussed the relationship between a cell's structure and function, established the need for cytochemical research.
Aims
Cytochemical research aims to study individual cells within a tissue that may contain several cell types. It takes a nondestructive approach to studying localization within the cell. By keeping the cell components intact, researchers are able to study intact cell activity rather than an isolated biochemical activity, whose results may be influenced by a distorted cell membrane and spatial differences.
References
Brighton, Carl T. and Robert M. Hunt (1974). "Mitochondrial calcium and its role in calcification". Clinical Orthopaedics and Related Research 100: 406–416.
Brighton, Carl T. and Robert M. Hunt (1978). "The role of mitochondria in growth plate calcification as demonstrated in a rachitic model". Journal of Bone and Joint Surgery, 60-A: 630–639.
Biochemistry
Cell biology | Cytochemistry | [
"Chemistry",
"Biology"
] | 533 | [
"Biochemistry",
"Cell biology",
"nan"
] |
2,978,014 | https://en.wikipedia.org/wiki/Extended%20Enterprise%20Modeling%20Language | Extended Enterprise Modeling Language (EEML) in software engineering is a modelling language used for enterprise modelling across a number of layers.
Overview
Extended Enterprise Modeling Language (EEML) is a modelling language which combines structural modelling, business process modelling, goal modelling with goal hierarchies and resource modelling. It was intended to bridge the gap between goal modelling and other modelling approaches. According to Johannesson and Söderström (2008) "the process logic in EEML is mainly expressed through nested structures of tasks and decision points. The sequencing of tasks is expressed by the flow relation between decision points. Each task has an input port and the output port being decision points for modeling process logic".
EEML was designed as a simple language, making it easy to update models. In addition to capturing tasks and their interdependencies, models show which roles perform each task, and the tools, services and information they apply.
History
Extended Enterprise Modeling Language (EEML) dates from the late 1990s, developed in the EU project EXTERNAL as an extension of the Action Port Model (APM) by S. Carlsen (1998). The EXTERNAL project aimed to "facilitate inter-organisational cooperation in knowledge intensive industries. The project worked on the hypothesis that interactive process models form a suitable framework for tools and methodologies for dynamically networked organisations. In the project EEML (Extended Enterprise Modelling Language) was first constructed as a common metamodel, designed to enable syntactic and semantic interoperability".
It was further developed in the EU projects Unified Enterprise Modelling Language (UEML) from 2002 to 2003 and the ongoing ATHENA project.
The objectives of the UEML Working group were to "define, to validate and to disseminate a set of core language constructs to support a Unified Language for Enterprise Modelling, named UEML, to serve as a basis for interoperability within a smart organisation or a network of enterprises".
Topics
Modeling domains
The EEML-language is divided into 4 sub-languages, with well-defined links across these languages:
Process modelling
Data modelling
Resource modelling
Goal modelling
Process modelling in EEML, according to Krogstie (2006) "supports the modeling of process logic which is mainly expressed through nested structures of tasks and decision points. The sequencing of the tasks is expressed by the flow relation between decision points. Each task has minimum an input port and an output port being decision points for modeling process logic, Resource roles are used to connect resources of various kinds (persons, organisations, information, material objects, software tools and manual tools) to the tasks. In addition, data modeling (using UML class diagrams), goal modeling and competency modeling (skill requirements and skills possessed) can be integrated with the process models".
Layers
EEML has four layers of interest:
Generic Task Type: This layer identifies the constituent tasks of generic, repetitive processes and the logical dependencies between these tasks.
Specific Task Type: At this layer, we deal with process modelling in another scale, which is more linked to the concretisation, decomposition and specialisation phases. Here process models are expanded and elaborated to facilitate business solutions. From an integration viewpoint, this layer aims at uncovering more efficiently the dependencies between the sub-activities, with regards for the resources required for actual performance.
Manage Task Instances: The purpose of this layer consists in providing constraints but also useful resources (in the form of process templates) to the planning and performance of an enterprise process. The performance of organisational, information, and tool resources in their environment are highlighted through concrete resources allocation management.
Perform Task Instances: Here is covered the actual execution of tasks with regards to issues of empowerment and decentralisation. At this layer, resources are utilised or consumed in an exclusive or shared manner.
These tasks are tied together through another layer called Manage Task Knowledge which allows to achieve a global interaction through the different layers by performing a real consistency between them. According to EEML 2005 Guide, this Manage Task Knowledge can be defined as the collection of processes necessary for innovation, dissemination, and exploitation of knowledge in a co-operating ensemble where interact knowledge seekers and knowledge sources by the means of a shared knowledge base.
Goal modelling
Goal modelling is one of the four EEML modelling domains. A goal expresses the wanted (or unwanted) state of affairs (either current or future) in a certain context. An example of a goal model is depicted below. It shows goals and relationships between them. It is possible to model advanced goal-relationships in EEML by using goal connectors. A goal connector is used when one needs to link several goals.
In goal modelling, to fulfil Goal1 one must achieve two other goals: both Goal2 and Goal3 (goal connector with "and" as the logical relation going out). If Goal2 and Goal3 are two different ways of achieving Goal1, then it should be an "xor" logical relationship. It can be an opposite situation when both Goal2 and Goal3 need to be fulfilled and to achieve them one must fulfil Goal1. In this case Goal2 and Goal3 are linked to a goal connector and this goal connector has a link to Goal1 with an "and" logical relationship.
The table indicates different types of connecting relationships in EEML goal modelling. A goal model can also be interlinked with a process model.
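A minimal, hypothetical Python sketch of the "and"/"xor" goal-connector logic described above; the dictionary representation, goal names and function are ours for illustration and are not part of EEML or any EEML tool.

```python
# Hypothetical sketch (ours): a goal hierarchy with "and"/"xor" connectors,
# in the spirit of EEML goal modelling. Not EEML's actual notation or file format.
goals = {
    "Goal1": ("and", ["Goal2", "Goal3"]),   # Goal1 needs both sub-goals
    "Goal2": ("leaf", []),
    "Goal3": ("xor", ["Goal4", "Goal5"]),   # exactly one alternative fulfils Goal3
    "Goal4": ("leaf", []),
    "Goal5": ("leaf", []),
}

def fulfilled(goal, achieved, model=goals):
    """Check whether a goal is fulfilled given the set of achieved leaf goals."""
    kind, subs = model[goal]
    if kind == "leaf":
        return goal in achieved
    if kind == "and":
        return all(fulfilled(g, achieved, model) for g in subs)
    if kind == "xor":
        return sum(fulfilled(g, achieved, model) for g in subs) == 1
    raise ValueError(f"unknown connector kind: {kind}")

print(fulfilled("Goal1", {"Goal2", "Goal4"}))            # True
print(fulfilled("Goal1", {"Goal2", "Goal4", "Goal5"}))   # False: the xor connector is violated
```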
Goal and process oriented modelling
We can describe process models as models that comprise a set of activities, where an activity can be decomposed into sub-activities. These activities have relationships amongst themselves. A goal describes the expected state of operation in a business enterprise and it can be linked to a whole process model or to a process model fragment, and each activity level in a process model can be considered as a goal.
Goals are related in a hierarchical format where some goals depend on sub-goals for their completion, which means all the sub-goals must be achieved for the main goal to be achieved. There are other goals where only one of the sub-goals needs to be fulfilled for the main goal to be achieved. In goal modelling, there is use of a deontic operator which falls in between the context and the achieved state. Goals apply to tasks, milestones, resource roles and resources as well and can be considered as an action rule for a task. Rules can also be expressed in EEML, although goal modelling requires much more consultation in finding the connections between rules on the different levels. Goal-oriented analysis focuses on the description and evaluation of alternatives and their relationship to the organisational objectives.
Resource modeling
Resources have specific roles during the execution of various processes in an organisation. The following icons represent the various resources required in modelling.
The relations of these resources can be of different types:
a. Is Filled By – this is the assignment relation between roles and resources. It has a cardinality of one-to-many relationship.
b. Is Candidate For – candidate indicates the possible filling of the role by a resource.
c. Has Member – this is a kind of relation between an organisation and a person, denoting that a certain person has membership in the organisation. It has a many-to-many cardinality.
d. Provide Support To – support pattern between resources and roles.
e. Communicates With – Communication pattern between resources and roles.
f. Has Supervision Over – shows which role resource supervises another role or resource.
g. Is Rating Of – describes the relation between skill and a person or organisation.
h. Is required By – this is the primary skill required for this role
i. Has Access to – creating of models with the access rights.
Benefits
From a general point of view, EEML can be used like any other modelling language in numerous cases. However we can highlight the virtual enterprise example, which can be considered as a direct field of application for EEML with regard to Extended Enterprise planning, operation, and management.
Knowledge sharing: Create and maintain a shared understanding of the scope and purpose of the enterprise, as well as viewpoints on how to fulfil the purpose.
Dynamically networked organisations: Make knowledge as available as possible within the organisation.
Heterogeneous infrastructures: Achieve a relevant knowledge sharing process through heterogeneous infrastructures.
Process knowledge management: Integrate the different business processes levels of abstraction.
Motivation: creates enthusiasm and commitment among members of an organisation to follow up on the various actions that are necessary to restructure the enterprise.
EEML can help organisations meet these challenges by modelling all the manufacturing and logistics processes in the extended enterprise. This model allows capturing a rich set of relationships between the organisation, people, processes and resources of the virtual enterprise. It also aims at making people understand, communicate, develop and cultivate solutions to business problems.
According to J. Krogstie (2008), enterprise models can be created to serve various purposes which include:
Human sense making and communication – the main purpose of enterprise modelling is to make sense of the real world aspects of an enterprise in order to facilitate communicate with parties involved.
Computer assisted analysis – the main purpose of enterprise modelling is to gain knowledge about the enterprise through simulation and computation of various parameters.
Model deployment and activation – the main purpose of enterprise modelling is to integrate the model in an enterprise-wide information system and enabling on-line information retrieval and direct work process guidance.
EEML enables Extended Enterprises to build up their operations based on standard processes by allowing modelling of all actors, processes and tasks in the Extended Enterprise, thereby providing a clear description of the Extended Enterprise. Finally, the models developed can be used to measure and evaluate the Extended Enterprise.
See also
i*
Modeling language
Semantic parameterization
Software design
Software development methodology
References
Further reading
Bolchini, D., Paolini, P.: "Goal-Driven Requirements Analysis for Hypermedia-intensive Web Applications", Requirements Engineering Journal, Springer, RE03 Special Issue (9) 2004: 85-103.
Jørgensen, Håvard D.: "Process-Integrated eLearning"
Kramberg, V.: "Goal-oriented Business Processes with WS-BPEL", Master Thesis, University of Stuttgart, 2008.
John Krogstie (2005). EEML2005: Extended Enterprise Modeling Language
John Krogstie (2001). "A Semiotic Approach to Quality in Requirements Specifications" (Proc. IFIP 8.1) IFIP 8.1. Working Conference on Organizational Semiotics.
Lin Liu, Eric Yu. "Designing information systems in social context: a goal and scenario modelling approach"
External links
Description of EEML
GRL web site University of Toronto,
"The Business Motivation Model Business Governance in a Volatile World", Release 1.3, Business Rules Group, 2007.
Business process
Enterprise modelling
modeling languages | Extended Enterprise Modeling Language | [
"Engineering"
] | 2,182 | [
"Systems engineering",
"Enterprise modelling"
] |
2,978,122 | https://en.wikipedia.org/wiki/ISO%207001 | ISO 7001 ("public information symbols") is a standard published by the International Organization for Standardization that defines a set of pictograms and symbols for public information. The latest version, ISO 7001:2023, was published in February 2023.
The set is the result of extensive testing in several countries and different cultures, and the symbols have met the criteria for comprehensibility set up by the ISO. The design process and testing of ISO 7001 symbols is governed by ISO 22727:2007, Graphical symbols — Creation and design of public information symbols — Requirements. Common examples of public information symbols include those representing toilets, car parking, and information, and the International Symbol of Access.
History
ISO 7001 was first released in October 1980, with a single amendment in 1985. The second edition was released in February 1990, with one amendment in 1993. The third edition was released in November 2007 and received four amendments, in 2013, 2015, 2016 and 2017. The use of the symbols of ISO 7001 is recommended by the European standard EN 17210.
Implementation
ISO 7001 sets out some general guidelines for how symbols should be utilized, though large aspects are left up to the decision of the individual or entity designing signage for their facility.
Symbols were created with the goal of being able to stand alone, without any accompanying text. However, text can be used to further aid in communicating the message, particularly in a situation where a custom symbol has been designed for a unique situation not covered by standard ISO 7001 symbols. Specific sizes for symbols are not provided in ISO 7001, though symbols are designed with the goal of being clearly understood regardless of whether they are placed on something as small as a floor plan of a building or as large as a giant sign hanging from a ceiling in a large open space.
While symbols are intended and recommended to be reproduced as presented in ISO 7001, the ISO acknowledges that situations may exist where a symbol should be modified due to national or cultural needs of a particular situation. Though key elements and the intent of the original symbol design must be retained to ensure it will be effective.
No colours are specified in ISO 7001, with the only guidance being to ensure clear contrast between the symbol and the sign background, as well as the environment the sign is in.
There is a clear recommendation against using colors specified in ISO 3864, due to possible confusion with safety signage using those colors. Of explicit concern is the combination of green and white, due to the risk of confusing a green and white 'PI PF 030' direction arrow symbol for an ISO 7010 evacuation route arrow.
To avoid possible confusion with similar safety symbols of ISO 7010, symbols in ISO 7001 do not use the standard prohibition symbol consisting of a red circle with a red slash. Instead, either a red 'slash' or red 'cross' is used. A slash is used when an object is prohibited, and covers the entire symbol. A cross is used in situations where a behavior is prohibited, with the cross placed over the portion of the symbol depicting the behavior that is being prohibited rather than the entire symbol.
The slash and cross can be added to other symbols, such as a baggage cart to indicate 'no baggage carts'. ISO 7001 states that when symbols are designed, they should not have key elements that would be obstructed by the slash as positioned on the template provided in ISO 22727:2007. The slash or cross must be on top of the symbol, and should be red in color.
Symbols
The standard consists of 177 symbols, divided into seven categories: accessibility, public facilities, transport facilities, behaviour of the public, commercial facilities, tourism, cultural and heritage and sporting activities.
Accessibility
All symbol reference numbers in this category are prefixed with "AC", for Accessibility.
Public facilities
All symbol reference numbers in this category are prefixed with "PF", for Public Facilities.
Transportation facilities
All symbol reference numbers in this category are prefixed with "TF", for Transport Facilities.
Behaviour of the public
All symbol reference numbers in this category are prefixed with "BP", for Behaviour of the Public.
Commercial facilities
All symbol reference numbers in this category are prefixed with "CF", for Commercial Facilities.
Tourism, culture and heritage
All symbol reference numbers in this category are prefixed with "TC", for Tourism, Culture and heritage.
Sporting activities
All symbol reference numbers in this category are prefixed with "SA", for Sports Activities.
See also
DOT pictograms - United States version of this standard.
ISO 7010 - ISO Standard for safety symbols.
Notes
References
External links
The international language of ISO graphical symbols - A 2010 document published by the ISO educate about ISO graphical symbol standards ISO 7000 (Symbols for equipment), ISO 7001 (Symbols for public information), ISO 7010 (Symbols for safety signs).
07001
Symbols
Pictograms | ISO 7001 | [
"Mathematics"
] | 992 | [
"Symbols",
"Pictograms"
] |
2,978,244 | https://en.wikipedia.org/wiki/Bezant%C3%A9e | Bezantée, bezantie or bezanty is an ornamentation consisting of roundels. The word derives from bezant, a gold coin from the Byzantine Empire, which was in common European use until circa 1250.
In architecture, bezantée moulding was much used in the Norman period.
In heraldry the word is shorthand for semé of bezants, i.e. strewn (literally "seeded") with bezants. A bezant is a roundel whose tincture is or. In English heraldry, a field sable bezanty often alludes to the Duchy of Cornwall.
An ounce (leopard) bezanty appears as a supporter in the English bearings of St Edmundsbury Borough Council; a bordure bezanty appears in the coat of Berkhamstead Town Council.
References
Heraldry Dictionary
MacKinnon of Dunakin, Charles. Heraldry. 1966. London: Frederick Warne & Co. (Page 60)
Heraldic charges
Architectural elements | Bezantée | [
"Technology",
"Engineering"
] | 210 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
2,978,269 | https://en.wikipedia.org/wiki/Bice | Bice, from the French bis, originally meaning dark-coloured, is a green or blue pigment. In French the terms vert bis and azur bis mean dark green and dark blue respectively. Bice pigments were generally prepared from basic copper carbonates, but sometimes ultramarine or other pigments were used.
Historic usage
In 1522 a stone cross with gilt lead stars was erected at the Bullstake in Canterbury, and painted with bice and gilded by Florence the painter. The bice cost 6 shillings the pound.
Jo Kirby of the National Gallery London notes the occurrence of the pigment bice in three grades in an account of Tudor painting at Greenwich Palace in 1527. In this case, the three grades indicate the use of the mineral azurite rather than a manufactured blue copper carbonate. Similarly, green bice in other 16th-century records may sometimes have been the mineral malachite. John "Paynter", who worked for Bess of Hardwick, used blue bice in 1596.
Ian Bristow, a historian of paint, concluded that the pigment blue bice found in records of British interior-decoration until the first half of the 17th century was azurite. The expensive natural mineral azurite was superseded by manufactured blue verditer.
The color is also referenced in Edith Nesbit's novel The Story of the Treasure Seekers: "...Alice looked up from her painting. She was trying to paint a fairy queen's frock with green bice, and it wouldn't rub. There is something funny about green bice. It never will rub off; no matter how expensive your paintbox is-and even boiling water is very little use. She said, ‘Bother the bice’!..."
References
Pigments
Inorganic pigments
bice
bice | Bice | [
"Chemistry"
] | 372 | [
"Inorganic pigments",
"Inorganic compounds"
] |
2,978,332 | https://en.wikipedia.org/wiki/Cathodic%20arc%20deposition | Cathodic arc deposition or Arc-PVD is a physical vapor deposition technique in which an electric arc is used to vaporize material from a cathode target. The vaporized material then condenses on a substrate, forming a thin film. The technique can be used to deposit metallic, ceramic, and composite films.
History
Industrial use of modern cathodic arc deposition technology originated in the Soviet Union around 1960–1970.
By the late 1970s, Soviet government released the use of this technology to the West.
Among the many designs in the USSR at that time, the design by L. P. Sablev et al. was allowed to be used outside the USSR.
Process
The arc evaporation process begins with the striking of a high current, low voltage arc on the surface of a cathode (known as the target) that gives rise to a small (usually a few micrometres wide), highly energetic emitting area known as a cathode spot. The localised temperature at the cathode spot is extremely high (around 15000 °C), which results in a high velocity (10 km/s) jet of vapourised cathode material, leaving a crater behind on the cathode surface. The cathode spot is only active for a short period of time, then it self-extinguishes and re-ignites in a new area close to the previous crater. This behaviour causes the apparent motion of the arc.
As the arc is basically a current carrying conductor it can be influenced by the application of an electromagnetic field, which in practice is used to rapidly move the arc over the entire surface of the target, so that the total surface is eroded over time.
The arc has an extremely high power density resulting in a high level of ionization (30-100%), multiple charged ions, neutral particles, clusters and macro-particles (droplets). If a reactive gas is introduced during the evaporation process, dissociation, ionization and excitation can occur during interaction with the ion flux and a compound film will be deposited.
One downside of the arc evaporation process is that if the cathode spot stays at an evaporative point for too long it can eject a large amount of macro-particles or droplets. These droplets are detrimental to the performance of the coating as they are poorly adhered and can extend through the coating. Worse still if the cathode target material has a low melting point such as aluminium the cathode spot can evaporate through the target resulting in either the target backing plate material being evaporated or cooling water entering the chamber. Therefore, magnetic fields as mentioned previously are used to control the motion of the arc. If cylindrical cathodes are used the cathodes can also be rotated during deposition. By not allowing the cathode spot to remain in one position too long aluminium targets can be used and the number of droplets is reduced. Some companies also use filtered arcs that use magnetic fields to separate the droplets from the coating flux.
Equipment design
A Sablev-type cathodic arc source, which is the most widely used in the West, consists of a short, cylindrically shaped, electrically conductive target at the cathode with one open end. The target is surrounded by an electrically floating metal ring that works as an arc-confinement ring (a Strel'nitskij shield). The anode for the system can be either the vacuum chamber wall or a discrete anode. Arc spots are generated by a mechanical trigger (or igniter) striking the open end of the target and making a temporary short circuit between the cathode and the anode. After the arc spots are generated, they can be steered by a magnetic field or, in the absence of a magnetic field, move randomly.
The plasma beam from a cathodic arc source contains some larger clusters of atoms or molecules (so-called macro-particles), which prevent it from being useful for some applications without some kind of filtering.
There are many designs for macro-particle filters; the most studied design is based on the work of I. I. Aksenov et al. in the 1970s. It consists of a quarter-torus duct bent at 90 degrees from the arc source, and the plasma is guided out of the duct by the principles of plasma optics.
There are also other designs, such as one that incorporates a straight duct filter built in with a truncated-cone-shaped cathode, as reported by D. A. Karpov in the 1990s. This design remains popular among thin hard-film coaters and researchers in Russia and other former USSR countries.
Cathodic arc sources can be made into a long tubular shape (extended-arc) or a long rectangular shape, but both designs are less popular.
Applications
Cathodic arc deposition is actively used to synthesize extremely hard films to protect the surface of cutting tools and extend their life significantly. A wide variety of thin hard-film, superhard and nanocomposite coatings can be synthesized by this technology, including TiN, TiAlN, CrN, ZrN, AlCrTiN and TiAlSiN.
The technique is also used extensively for carbon-ion deposition to create diamond-like carbon films. Because the ions are blasted from the surface ballistically, it is common for not only single atoms but also larger clusters of atoms to be ejected. Thus, this kind of system requires a filter to remove atom clusters from the beam before deposition.
The DLC film from a filtered arc contains an extremely high percentage of sp3-bonded carbon and is known as tetrahedral amorphous carbon, or ta-C.
A filtered cathodic arc can also be used as a metal ion/plasma source for ion implantation and plasma immersion ion implantation and deposition (PIII&D).
See also
Ion beam deposition
Physical vapor deposition
References
SVC "51st Annual Technical Conference Proceedings" (2008) Society of Vacuum Coaters, ISSN 0737-5921 (previous proceedings available on CD from SVC Publications)
A. Anders, "Cathodic Arcs: From Fractal Spots to Energetic Condensation" (2008) Springer, New York.
R. L. Boxman, D. M. Sanders, and P. J. Martin (editors) "Handbook of Vacuum Arc Science and Technology"(1995) Noyes Publications, Park Ridge, N.J.
Brown, I.G., Annu. Rev. Mat. Sci. 28, 243 (1998).
Sablev et al., US Patent #3,783,231, 01 Jan. 1974
Sablev et al., US Patent #3,793,179, 19 Feb. 1974
D. A. Karpov, "Cathodic arc sources and macroparticle filtering", Surface and Coatings Technology 96 (1997) 22–23
S. Surinphong, "Basic Knowledge about PVD Systems and Coatings for Tools Coating" (1998), in Thai
A. I. Morozov, Reports of the Academy of Sciences of the USSR, 163 (1965) 1363, in Russian
I. I. Aksenov, V. A. Belous, V. G. Padalka, V. M. Khoroshikh, "Transport of plasma streams in a curvilinear plasma-optics system", Soviet Journal of Plasma Physics, 4 (1978) 425
https://www.researchgate.net/publication/273004395_Arc_source_designs
https://www.researchgate.net/publication/234202890_Transport_of_plasma_streams_in_a_curvilinear_plasma-optics_system
Industrial processes
Physical vapor deposition techniques
Thin film deposition
Coatings | Cathodic arc deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,595 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Planes (geometry)",
"Solid state engineering"
] |
2,978,525 | https://en.wikipedia.org/wiki/Pinacotheca | A pinacotheca (a Latin borrowing from Greek πινακοθήκη, from πίναξ 'picture' + θήκη 'repository') was a picture gallery in either ancient Greece or ancient Rome. The name is specifically used for the building containing pictures which formed the left wing of the Propylaea on the Acropolis at Athens, Greece. Though Pausanias speaks of the pictures "which time had not effaced", which seems to point to fresco painting, the fact that there is no trace of preparation for stucco on the walls implies that the paintings were easel pictures. The Romans adopted the term for the room in a private house containing pictures, statues, and other works of art.
In the modern world the word is often used as a name for a public art gallery concentrating on paintings, mostly in Italy (as "Pinacoteca"), such as the Pinacoteca Vaticana of the Vatican Museums (which is usually meant when the plain word is used), the Pinacoteca di Brera in Milan (more often "the Brera" informally), the Pinacoteca Giovanni e Marella Agnelli built on the roof of the former Lingotto Fiat factory in Turin, Italy, with others in Bologna and Siena. In Brazil, there is the Pinacoteca do Estado de São Paulo. In Paris, the Pinacothèque de Paris. In Munich the three main galleries are called the Alte Pinakothek (old masters), Neue Pinakothek (19th century) and Pinakothek der Moderne. The Pinacotheca, Melbourne, was a gallery for avant-garde art from 1967 to 2002. At Hallbergmoos, near Munich Airport, there was the Pinakothek Hallbergmoos (20th and 21st century) between 2010 and 2014.
See also
Glyptotheque
References
Ancient Greek painting
Roman Empire art
Art museums and galleries in Greece
Acropolis of Athens
Types of art museums and galleries
Defunct art museums and galleries
Rooms
History of museums | Pinacotheca | [
"Engineering"
] | 402 | [
"Rooms",
"Architecture"
] |
2,978,799 | https://en.wikipedia.org/wiki/Hartogs%27s%20theorem%20on%20separate%20holomorphicity | In mathematics, Hartogs's theorem is a fundamental result of Friedrich Hartogs in the theory of several complex variables. Roughly speaking, it states that a 'separately analytic' function is continuous. More precisely, if F is a function of n complex variables which is analytic in each variable zi, 1 ≤ i ≤ n, while the other variables are held constant, then F is a continuous function.
A corollary is that the function F is then in fact an analytic function in the n-variable sense (i.e. that locally it has a Taylor expansion). Therefore, 'separate analyticity' and 'analyticity' are coincident notions, in the theory of several complex variables.
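Concretely, "analytic in the n-variable sense" means that near every point a = (a1, …, an) the function is the sum of a convergent multi-variable power series. A standard way of writing this (supplied here for illustration, with c denoting the Taylor coefficients) is

$$
F(z_1, \dots, z_n) = \sum_{k_1, \dots, k_n \ge 0} c_{k_1 \dots k_n} \, (z_1 - a_1)^{k_1} \cdots (z_n - a_n)^{k_n},
$$

valid for all points in a small polydisc around a.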
Starting with the extra hypothesis that the function is continuous (or bounded), the theorem is much easier to prove and in this form is known as Osgood's lemma.
There is no analogue of this theorem for real variables. If we assume that a function f : R² → R
is differentiable (or even analytic) in each variable separately, it is not true that f will necessarily be continuous. A counterexample in two dimensions is given by
f(x, y) = xy / (x² + y²).
If in addition we define f(0, 0) = 0, this function has well-defined partial derivatives in x and y at the origin, but it is not continuous at the origin. (Indeed, the limits along the lines x = y and x = −y are not equal, so there is no way to extend the definition of f to include the origin and have the function be continuous there.)
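As a quick numerical sketch of this discontinuity (an illustrative example, not part of the theorem's standard treatment), the following Python snippet evaluates f along the lines x = y, x = −y and y = 0 as the points approach the origin; the values stay at +0.5, −0.5 and 0 respectively, so no choice of f(0, 0) can make f continuous at the origin.

```python
# Sketch: f(x, y) = x*y / (x**2 + y**2), with f(0, 0) = 0, is differentiable
# in x and in y separately at the origin, yet approaches different values
# along different lines through (0, 0).

def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

for t in (1e-1, 1e-3, 1e-6):
    print(f"t = {t:g}:  f(t, t) = {f(t, t):+.3f},"
          f"  f(t, -t) = {f(t, -t):+.3f},  f(t, 0) = {f(t, 0):+.3f}")

# The printed values stay at +0.500 along x = y, -0.500 along x = -y, and
# +0.000 along the x-axis, so f has no limit at the origin and cannot be
# continuous there, even though each partial function is smooth.
```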
References
Steven G. Krantz. Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.
External links
Several complex variables
Theorems in complex analysis | Hartogs's theorem on separate holomorphicity | [
"Mathematics"
] | 329 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Several complex variables",
"Theorems in complex analysis",
"Mathematical objects",
"Mathematical relations"
] |