id (int64) | url (string) | text (string) | source (string, nullable) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
8,646,255 | https://en.wikipedia.org/wiki/Calcium%20borate | Calcium borate (Ca3(BO3)2) is a calcium salt of boric acid. It can be prepared by reacting calcium metal with boric acid; the resulting precipitate is calcium borate. A hydrated form occurs naturally as the minerals colemanite, nobleite and priceite.
One of its uses is as a binder in some grades of hexagonal boron nitride for hot pressing. Other uses include flame retardant in epoxy molding compounds, a ceramic flux in some ceramic glazes, reactive self-sealing binders in hazardous waste management, additive for insect-resistant polystyrene, fertilizer, and production of boron glasses.
It is also used as a main source of boron oxide in the manufacture of ceramic frits used in ceramic glazes and engobes for wall and floor ceramic tiles.
References
Borates
Calcium compounds
Flame retardants | Calcium borate | [
"Chemistry"
] | 185 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
8,646,876 | https://en.wikipedia.org/wiki/Screed | Screed has three meanings in building construction:
A flat board (screed board, floating screed) or a purpose-made aluminium tool used to smooth and to "true" materials like concrete, stucco and plaster after they have been placed on a surface or to assist in flattening;
A strip of plaster or wood applied to a surface to act as a guide for a screed tool (screed rail, screed strip, screed batten);
The material itself which has been flattened with a screed (screed coat). In the UK, screed has also come to describe a thin, top layer of material (sand and cement, magnesite or calcium sulphate), poured in place on top of the structural concrete or insulation, on top of which other finishing materials can be applied, or the structural material can be left bare to achieve a raw effect.
Screed board
In the United States, a person called a concrete finisher performs the process of screeding, which is the process of cutting off excess wet concrete to bring the top surface of a slab to the proper grade and smoothness. A power concrete screed has a gasoline motor attached, which helps smooth and vibrate concrete as it is flattened. After the concrete is flattened it is smoothed with a concrete float or power trowel. A concrete floor is sometimes called a solid ground floor.
A plasterer also may use a screed to level a wall or ceiling surface in plasterwork.
This sense of screed has been extended to asphalt paving where a free floating screed is part of a machine that spreads the paving material.
Screed rails
A weep screed or sill screed is a screed rail which has drainage holes to allow moisture which penetrated an exterior plaster or stucco coating to drain through the screed.
Liquid and flow screeds
Flowing screeds are made from inert fillers such as sand, with a binder system based on cement or often calcium sulphate. Flow screeds are often preferred to traditional screeds as they are easier and faster to install and provide a similar finish. Flow screed is often used in combination with underfloor heating installation.
Liquid flow screed is self-levelling. No vibration is necessary to remove bubbles and densify the liquid mass.
Due to the easy consolidation, the thickness can sometimes be reduced in comparison to conventional screeds. This minimises heat storage, producing a floor that responds quickly to user demand and thus raises the effectiveness of underfloor heating.
Screed coats
A development in the UK is the delivery, mixing, and pumping of screed from a single vehicle. Where screed jobs previously required a separate pump to administer the screed, these new machines can administer the screed directly from the mixing pan to the floor at a range of up to 60 meters. One example of a material placed this way is granolithic screed.
See also
Screed wire, an alternate name for a ground wire in electrical work
References
Sources
Constructing Architecture – Materials, Processes, Structures: A Handbook; Andrea Deplazes (ed.); Birkhauser, 2005
Concrete
Construction
Floors
Pavements | Screed | [
"Engineering"
] | 669 | [
"Structural engineering",
"Floors",
"Concrete",
"Construction"
] |
8,647,217 | https://en.wikipedia.org/wiki/Transcritical%20cycle | A transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states. In particular, for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour and/or supercritical conditions during the expansion phase. The ultrasupercritical steam Rankine cycle, which uses water as the working fluid, is a widespread transcritical cycle in electricity generation from fossil fuels. Other typical power-generation applications of transcritical cycles are organic Rankine cycles, which are especially suitable for exploiting low-temperature heat sources such as geothermal energy, heat-recovery applications or waste-to-energy plants. With respect to subcritical cycles, the transcritical cycle by definition exploits higher pressure ratios, a feature that ultimately yields higher efficiencies for the majority of working fluids. Compared with supercritical cycles, transcritical cycles can achieve higher specific works because the compression work is relatively small. Transcritical cycles thus have strong potential to produce the most power (measurable in terms of the cycle specific work) with the least expenditure (measurable in terms of the energy spent to compress the working fluid).
While in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid, in transcritical cycles one pressure level is above the critical pressure and the other is below. In the refrigeration field carbon dioxide, CO2, is increasingly considered of interest as refrigerant.
Transcritical conditions of the working fluid
In transcritical cycles, the pressure of the working fluid at the outlet of the pump is higher than the critical pressure, while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature.
During the heating phase, which is typically considered an isobaric process, the working fluid overcomes the critical temperature, moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process, a significant difference between subcritical and transcritical cycles. Due to this significant difference in the heating phase, the heat injection into the cycle is significantly more efficient from a second law perspective, since the average temperature difference between the hot source and the working fluid is reduced.
As a consequence, at fixed hot-source characteristics, the maximum temperature reached by the working fluid can be higher. The expansion process can therefore be accomplished exploiting higher pressure ratios, which yields higher power production. Modern ultrasupercritical Rankine cycles can reach maximum temperatures of up to 620°C by exploiting this optimized heat-introduction process.
Characterization of the power cycle
As in any power cycle, the most important indicator of its performance is the thermal efficiency. The thermal efficiency of a transcritical cycle is computed as:

$\eta = \frac{\dot{W}}{\dot{Q}_{in}}$

where $\dot{Q}_{in}$ is the thermal input of the cycle, provided by either combustion or a heat exchanger, and $\dot{W}$ is the power produced by the cycle.
The power produced is understood as the net of the power produced during the expansion of the working fluid and the power consumed during the compression step.
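As a minimal illustration of this bookkeeping, the Python sketch below computes the net power and thermal efficiency from assumed turbine, pump and heat-input figures; the numbers are hypothetical and not taken from any particular plant.

```python
# Minimal sketch: thermal efficiency of a power cycle from its energy balance.
# All figures below are hypothetical illustrative values.

def thermal_efficiency(w_expansion_kw: float, w_compression_kw: float,
                       q_in_kw: float) -> float:
    """Return eta = W_net / Q_in for a power cycle."""
    w_net = w_expansion_kw - w_compression_kw  # net power produced
    return w_net / q_in_kw

# Hypothetical transcritical-cycle figures: 550 kW from the turbine,
# 50 kW consumed by the pump, 1000 kW of heat input.
eta = thermal_efficiency(550.0, 50.0, 1000.0)
print(f"thermal efficiency = {eta:.2%}")  # -> 50.00%
```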
The typical conceptual configuration of a transcritical cycle employs a single heater, since there is no abrupt phase change when the pressure is above the critical one. In subcritical cycles, instead, the heating process of the working fluid occurs in three different heat exchangers: in economizers the working fluid is heated (while remaining in the liquid phase) up to a condition approaching saturated liquid; evaporators accomplish the evaporation process (typically up to saturated vapour conditions); and in superheaters the working fluid is heated from saturated vapour to superheated vapour. Moreover, Rankine cycles used as bottoming cycles in combined gas-steam plants are always subcritical. They therefore have multiple pressure levels, and hence multiple evaporators, economizers and superheaters, which significantly complicates the heat-injection process of the cycle.
Characterization of the compression process
Along adiabatic and isentropic processes, such as those theoretically associated with pumping in transcritical cycles, the enthalpy difference across both a compression and an expansion is computed as:

$\Delta h = \int_{p_{in}}^{p_{out}} v \, \mathrm{d}p$

Consequently, a working fluid with a lower specific volume (hence higher density) can be compressed spending less mechanical work than one with low density (more gas-like).
In transcritical cycles, the very high maximum pressures and the liquid conditions along the whole compression phase ensure a higher density and a lower specific volume with respect to supercritical counterparts. Considering the different physical phases through which the compression process occurs, transcritical cycles employ pumps (for liquids) while supercritical cycles employ compressors (for gases) during the compression step.
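The effect of fluid density on compression work can be made concrete with the relation above. The sketch below compares the ideal pump work for liquid water (nearly constant specific volume) with an isothermal ideal-gas compression over the same pressure ratio; the property values are assumed round numbers for illustration only.

```python
import math

# Minimal sketch: why compressing a liquid is much cheaper than compressing
# a gas. For a reversible adiabatic process dh = v dp, so a small specific
# volume v means little compression work. Property values are assumed.

P_LOW, P_HIGH = 1.0e5, 2.0e7  # Pa: hypothetical 1 bar -> 200 bar compression

# Liquid water: v is nearly constant (~0.001 m^3/kg), so dh ~= v * dp.
v_liquid = 0.001  # m^3/kg
w_pump = v_liquid * (P_HIGH - P_LOW)  # J/kg

# Ideal gas at 300 K (isothermal, for comparison): w = (R T / M) ln(p2/p1).
R, T, M = 8.314, 300.0, 0.018  # J/(mol K), K, kg/mol (water vapour)
w_gas = (R * T / M) * math.log(P_HIGH / P_LOW)  # J/kg

print(f"pump work ~ {w_pump / 1e3:.0f} kJ/kg")      # ~ 20 kJ/kg
print(f"gas compression ~ {w_gas / 1e3:.0f} kJ/kg")  # ~ 734 kJ/kg
```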
Characterization of the expansion process
In the expansion step of the working fluid in transcritical cycles, as in subcritical ones, the working fluid can be discharged either in wet or dry conditions.
Typical dry expansions are those involving organic or other unconventional working fluids, which are characterized by non-negligible molecular complexities and high molecular weights.
The expansion step occurs in turbines: depending on the application and on the nameplate power produced by the power plant, both axial turbines and radial turbines can be exploited during fluid expansion. Axial turbines favour lower rotational speed and higher power production, while radial turbines are suitable for limited powers produced and high rotational speed.
Organic cycles are appropriate choices for low-enthalpy applications and are characterized by higher average densities across the expanders than those occurring in transcritical steam cycles: for this reason a low blade height is normally designed and the volumetric flow rate is kept to relatively small values. In large-scale steam-cycle applications, on the other hand, the expander blades typically exceed one meter in height, since the fluid density at the outlet of the last expansion stage is significantly low.
In general, the specific work of the cycle is expressed as:

$w = \frac{\dot{W}}{\dot{m}}$

where $\dot{m}$ is the mass flow rate of the working fluid.
Even though the specific work of any cycle is strongly dependent on the actual working fluid considered in the cycle, transcritical cycles are expected to exhibit higher specific works than the corresponding subcritical and supercritical counterparts (i.e., that exploit the same working fluid). For this reason, at fixed boundary conditions, power produced and working fluid, a lower mass flow rate is expected in transcritical cycles than in other configurations.
Applications in power cycles
Ultrasupercritical Rankine cycles
In recent decades, the thermal efficiency of Rankine cycles has increased drastically, especially for large-scale coal-fuelled applications: for these power plants, the adoption of ultrasupercritical layouts was the main factor behind the gain, since the higher pressure ratio ensures higher cycle efficiencies.
The increase in thermal efficiency of power plants fuelled by dirty fuels also became crucial in reducing their specific emissions, both in terms of greenhouse gases and of pollutants such as sulfur dioxide or NOx.
In large-scale applications, ultrasupercritical Rankine cycles employ up to 10 feedwater heaters, five on the high-pressure side and five on the low-pressure side, including the deaerator, helping raise the temperature at the inlet of the boiler up to 300°C and allowing significant regenerative air preheating, thus reducing fuel consumption. Studies on the best-performing configurations of supercritical Rankine cycles (300 bar of maximum pressure, 600°C of maximum temperature and two reheats) show that such layouts can achieve a cycle efficiency higher than 50%, about 6% higher than subcritical configurations.
Organic Rankine cycles
Organic Rankine cycles are innovative power cycles which achieve good performance with low-enthalpy thermal sources and ensure condensation above atmospheric pressure, thus avoiding deaerators and large cross-sectional areas in the heat-rejection units. Moreover, with respect to steam Rankine cycles, ORCs have greater flexibility in handling low power sizes, allowing significant compactness.
Typical applications of ORC cover: waste heat recovery plants, geothermal plants, biomass plants and waste to energy power plants.
Organic Rankine cycles use organic fluids (such as hydrocarbons, perfluorocarbons, chlorofluorocarbons, and many others) as working fluids. Most of them have a critical temperature in the range of 100-200°C, which makes them well suited to transcritical cycles in low-temperature applications.
Considering organic fluids, having a maximum pressure above the critical one can more than double the temperature difference across the turbine, with respect to the subcritical counterpart, and significantly increase both the cycle specific work and cycle efficiency.
Applications in refrigeration cycles
A refrigeration cycle, also known as a heat pump cycle, is a thermodynamic cycle that allows the removal of heat from a low-temperature heat source and the rejection of heat to a high-temperature sink, at the cost of mechanical power consumption. Traditional refrigeration cycles are subcritical, with the high-pressure side (where heat rejection occurs) below the critical pressure.
Innovative transcritical refrigeration cycles, instead, should use a working fluid whose critical temperature is around the ambient temperature. For this reason carbon dioxide is chosen, due to its favourable critical conditions: the critical point of carbon dioxide is 31°C, reasonably in between the hot and cold sources of traditional refrigeration applications and thus suitable for transcritical operation.
In transcritical refrigeration cycles the heat is dissipated through a gas cooler instead of a desuperheater and a condenser like in subcritical cycles. This limits the plant components, plant complexity and costs of the power block.
The advantages of using supercritical carbon dioxide as a working fluid in refrigeration cycles, instead of traditional refrigerant fluids (like HFCs or HFOs), are both economic and environmental. The cost of carbon dioxide is two orders of magnitude lower than that of the average refrigerant working fluid, and its environmental impact is very limited (a GWP of 1 and an ODP of 0); the fluid is neither reactive nor significantly toxic. No other refrigeration working fluid reaches the same favourable environmental characteristics as carbon dioxide.
References
Energy conversion
Power station technology
Thermodynamics | Transcritical cycle | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,180 | [
"Thermodynamics",
"Dynamical systems"
] |
8,647,599 | https://en.wikipedia.org/wiki/Frangible%20nut | The frangible nut is a component used in many industries, but most commonly by NASA, to sever mechanical connections. It is, by definition, an explosively-splittable nut. The bolt remains intact while the nut itself is split into two or more parts.
Space Shuttle
Solid Rocket Booster Holddown System
Frangible nuts secured the solid rocket boosters (SRB) of the Space Shuttle, which were bolted to the mobile launcher platform (MLP) until liftoff. On the Shuttle, they were separated using NASA standard detonators (NSDs) and explosive booster cartridges. The space shuttle used two NSDs and booster cartridges for the frangible nut atop each of the four studs holding each SRB to the MLP. Once detonation occurred, the shuttle lifted free of the MLP. The broken nut and any fragments from detonation were captured by energy absorption material, such as metal foam, enclosed in a blast container to prevent damage to the shuttle. In case of NSD failure, or incomplete clearance of the nut from the bolt, the SRB had ample thrust to break the bolt itself and launch unhindered.
At launch, two pyrotechnic, or explosive, devices "break" a frangible nut into two halves, allowing the stud, which is under high tension, to eject into the hold-down post system and release the space shuttle from the MLP. A number of factors work to slow or interrupt the stud's ejection velocity. At liftoff, a stud not ejected prior to the first space shuttle movement, which occurs approximately 200–250 milliseconds after ignition, becomes bound and/or pinched and results in a hang-up.
Each frangible nut has two recesses 180 degrees apart, where a pyrotechnic device, or booster cartridge, and detonator are installed. At liftoff, each detonator receives a "fire" signal, which in turn initiates the booster cartridges, causing the frangible nut to fracture. Although only one is actually required to fire and break the frangible nut, two booster cartridges/detonators are used for redundancy. The difference in booster cartridge function time between the two sides has been found to decrease the initial stud velocity and has been determined to be a major contributor to stud hang-ups.
The frangible nut has been modified to incorporate a crossover assembly which pyrotechnically "links" the two booster cartridges/detonators in each frangible nut, resulting in detonation of both sides within 50 microseconds or less, versus a typical difference of approximately 250 microseconds experienced prior to this design modification. With the time reduction, a greater initial velocity is achieved, thereby reducing the probability of a stud hang-up. After completion of extensive component qualification and system certification testing to prove the design goal of 50 microseconds or less had been achieved, the crossover system design was approved for flight. The first flight using this new design occurred on STS-126. The crossover system was installed in all eight holddown locations on the solid rocket boosters.
External Tank Separation
Frangible nuts were also used for separation of the two aft structural attachments of the external tank prior to orbital insertion. The attach bolts were driven by the explosive force of the NSDs and a spring into a cavity in the tank strut. The nuts and all residual pieces of the NSDs were caught in a cover assembly within the shuttle.
References
Nuts (hardware)
Spacecraft pyrotechnics | Frangible nut | [
"Engineering"
] | 724 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
8,647,884 | https://en.wikipedia.org/wiki/Comparison%20of%20FTP%20server%20software%20packages |
Graphical
Console/terminal-based
Summary board
Graphical UI based FTP Servers
Terminal/Console based FTP Servers
See also
File Transfer Protocol (FTP)
Comparison of FTP client software
FTPS (FTP over SSL/TLS)
FTP over SSH
SSH File Transfer Protocol (SFTP)
Comparison of SSH servers
Comparison of SSH clients
Notes
FTP servers
FTP servers | Comparison of FTP server software packages | [
"Technology"
] | 81 | [
"Computing-related lists",
"Lists of software"
] |
8,648,006 | https://en.wikipedia.org/wiki/Bessemer%20Gold%20Medal | The Bessemer Gold Medal is awarded annually by the Institute of Materials, Minerals and Mining (IOM3) "for outstanding services to the steel industry, to the inventor or designer of any significant innovation in the process employed in the manufacture of steel, or for innovation in the use of steel in the manufacturing industry or the economy generally". The recipient is expected to prepare and deliver the Bessemer Lecture.
It was established and endowed to the Iron and Steel Institute in 1874 by Sir Henry Bessemer and was first awarded to Isaac Lowthian Bell in 1874. The Iron and Steel Institute merged in 1974 into the Institute of Metals, which in 1993 became part of the Institute of Materials, which in turn became part of the IOM3 in 2002.
Prizewinners
Source: IOM3 archive website and current IOM3 website
IOM3
2020 David Anthony Worsley
2019 J Bolton
2018 I Samarasekera
2017 J Speer
2016 A W Cramb
2015 John Beynon
2014 H Tomono
2013 Prince Philip, Duke of Edinburgh
2013 K Mills
2012 G Honeyman
2011 I Christmas
2010 M Sellars
2009 G Arvedi
2008 T Mukherjee
2007 L Mittal
2006 H Bhadeshia
2005 S I Pettifor
2004 R J Fruehan
2003 J P Birat
2002 R E Dolby
Institute of Materials
2001 M J Pettifor
2000 Terry Gladman
1999 Etham T Turkdogan
1998 R Baker
1997 F Kenneth Iverson
1996 Sir Brian Moffat
1995 P Wright
1994 F B Pickering
1993 H Saito
Institute of Metals
1992 C E H Morris
1991 Frank Fitzgerald
1990 J S Pennington
1989 Gerald R Heffernan
1988 Sir R Scholey
1987 Tae-Joon Park
1986 J.R.D. Tata
1985 Viscount E Davignon
Metals Society
1984 P Metz
1983 I K MacGregor
1982 G W van Stein Callenfels
1981 Sir I McLennan
1980 M Tenenbaum
1979 H O H Haavisto
1978 Karl Brotzmann
1977 H Morrogh
1976 J D Joy
1975 Richard Weck
1974 Sir M Finniston
Iron and Steel Institute
1973 J W Menter
1972 M Morgan
1971 A G Quarrell
1970 P Coheur
1969 Queen Elizabeth II
1968 F D Richardson
1967 E T Judge
1966 John Hugh Chesters
1965 T Sendzimir
1965 N P Allen
1964 H Malcor
1963 F H Saniter
1962 Sir Charles Goodeve
1961 W Barr
1960 Hermann Schenck
1959 B M S Kalling
1958 W F Cartwright
1957 R Durrer
1956 C Sykes
1955 J Chipman
1954 T P Colclough
1953 R Mather
1952 H H Burton
1951 B F Fairless
1950 J Mitchell
1948 W J Dawson
1947 K M Tigerchiold
1947 Sir William J Larke
1946 J S Hollings
1945 Harold Wright
1944 E Lewis
1943 J H Whiteley
1942 E G Grace
1941 T Swinden
1940 Sir Andrew McCance
1939 J Henderson
1938 C H Desch
1937 Colonel N. T. Belaiew
1937 A Mayer
1936 F Clements
1935 A M Portevin
1934 King George V
1933 W H Hatfield
1932 H Louis
1931 Sir Harold Carpenter
1930 W Rosenhain
1930 E Schneider
1929 Sir Charles A Parsons
1928 C M Schwab
1927 Axel Wahlberg
1926 Sir Hugh Bell
1925 T Turner
1924 A Sauveur
1923 W H Maw
1922 K Honda
1921 C Freemont
1920 H Brearley
1919 Federico Giolitti
1918 The Rt Hon Lord Invernairn of Strathnairn
1917 A Lamberton
1916 F W Harbord
1915 P Martin
1914 Edward Riley
1913 A Greiner
1912 J H Darby
1911 H L Le Chatelier
1910 E H Saniter
1909 A Pourcel
1908 B Talbot
1907 J A Brinell
1906 F Osmond
1906 King Edward VII
1905 J O Arnold
1904 A Carnegie
1904 Sir R Hadfield
1903 The Rt Hon Lord Airedale of Gledhow
1902 F A Krupp
1901 J E Stead
1900 Henri de Wendel
1899 H M Queen Victoria
1898 R Prince-Williams
1897 Sir Frederick Abel
1896 H Wedding
1895 H M Howe
1894 John Gjers
1893 J Fritz
1892 A Cooper
1891 The Rt Hon Lord Armstrong
1890 W D Allen
1890 Hon A S Hewitt
1889 J D Ellis
1889 H Schneider
1888 D Adamson
1887 James Riley
1886 Edward Williams
1885 R Akerman
1884 E P Martin
1884 E W Richards
1883 G J Snelus
1883 Sidney Gilchrist Thomas
1882 A L Holley
1881 W Menelaus
1880 Sir J Whitworth
1879 P Cooper
1878 P R von Tunner
1877 J Percy
1876 R F Mushet
1875 Sir C W Siemens
1874 Sir Lowthian Bell
See also
List of engineering awards
References
Chemical engineering awards
Awards established in 1874
Bessemer Gold Medal
British awards
Steel industry of the United Kingdom | Bessemer Gold Medal | [
"Chemistry",
"Engineering"
] | 952 | [
"Bessemer Gold Medal",
"Chemical engineering",
"Chemical engineering awards"
] |
8,648,241 | https://en.wikipedia.org/wiki/Modulation%20error%20ratio | The modulation error ratio or MER is a measure used to quantify the performance of a digital radio (or digital TV) transmitter or receiver in a communications system using digital modulation (such as QAM). A signal sent by an ideal transmitter or received by an ideal receiver would have all constellation points precisely at the ideal locations; however, various imperfections in the implementation (such as noise, low image rejection ratio, phase noise, carrier suppression, distortion, etc.) or in the signal path cause the actual constellation points to deviate from the ideal locations.
Transmitter MER can be measured by specialized equipment, which demodulates the received signal in a similar way to how a real radio demodulator does it. The demodulated and detected signal can be used as a reasonably reliable estimate for the ideal transmitted signal in MER calculation.
Definition
An error vector is a vector in the I-Q plane between the ideal constellation point and the point received by the receiver. The Euclidean distance between the two points is its magnitude.
The modulation error ratio is equal to the ratio of the root mean square (RMS) power (in watts) of the reference vector to the RMS power (in watts) of the error. It is defined in dB as:

$\mathrm{MER\,(dB)} = 10 \log_{10} \left( \frac{P_{signal}}{P_{error}} \right)$

where $P_{error}$ is the RMS power of the error vector, and $P_{signal}$ is the RMS power of the ideal transmitted signal.

MER is also defined as a percentage in a compatible (but reciprocal) way:

$\mathrm{MER\,(\%)} = \sqrt{ \frac{P_{error}}{P_{signal}} } \times 100\%$

with the same definitions.
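A minimal numerical sketch of these two definitions, using synthetic QPSK symbols corrupted by additive Gaussian noise (the constellation, noise level and sample count are invented for illustration):

```python
import numpy as np

def mer_db(ideal: np.ndarray, received: np.ndarray) -> float:
    """MER in dB: 10*log10(P_signal / P_error)."""
    p_signal = np.mean(np.abs(ideal) ** 2)            # RMS power of ideal symbols
    p_error = np.mean(np.abs(received - ideal) ** 2)  # RMS power of error vectors
    return 10.0 * np.log10(p_signal / p_error)

def mer_percent(ideal: np.ndarray, received: np.ndarray) -> float:
    """MER as a percentage (reciprocal convention): sqrt(P_error/P_signal)*100."""
    p_signal = np.mean(np.abs(ideal) ** 2)
    p_error = np.mean(np.abs(received - ideal) ** 2)
    return 100.0 * np.sqrt(p_error / p_signal)

# Hypothetical QPSK symbols with additive complex Gaussian noise:
rng = np.random.default_rng(0)
ideal = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=10_000)
noise = 0.05 * (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))
received = ideal + noise
print(f"MER = {mer_db(ideal, received):.1f} dB")   # ~26 dB at this noise level
print(f"MER = {mer_percent(ideal, received):.1f} %")
```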
MER is closely related to error vector magnitude (EVM), but MER is calculated from the average power of the signal. MER is also closely related to signal-to-noise ratio. MER includes all imperfections including deterministic amplitude imbalance, quadrature error and distortion, while noise is random by nature.
See also
Error vector magnitude
Carrier to Noise Ratio
Signal-to-noise ratio
References
ETSI technical report ETR 290: "Measurement guidelines for DVB systems", Errata 1, May 1997
Quantized radio modulation modes
Radio electronics
Digital radio
Telecommunications | Modulation error ratio | [
"Technology",
"Engineering"
] | 405 | [
"Information and communications technology",
"Radio electronics",
"Telecommunications"
] |
8,648,480 | https://en.wikipedia.org/wiki/Nonel | Nonel is a shock tube detonator designed to initiate explosions, generally for the purpose of demolition of buildings and for use in the blasting of rock in mines and quarries. Nonel is a contraction of "non electric". Instead of electric wires, a hollow plastic tube delivers the firing impulse to the detonator, making it immune to most of the hazards associated with stray electric current.
It consists of a small diameter, three-layer plastic tube coated on the innermost wall with a reactive explosive compound, which, when ignited, propagates a low energy signal, similar to a dust explosion. The reaction travels at approximately 2,000 m/s (6,500 ft/s) along the length of the tubing with minimal disturbance outside of the tube.
Nonel was invented by the Swedish company Nitro Nobel in the 1960s and 1970s, under the leadership of Per-Anders Persson, and launched to the demolitions market in 1973. (Nitro Nobel became a part of Dyno Nobel after being sold to Norwegian Dyno Industrier AS in 1986.)
References
Further reading
Explosive videos
An experimental study of temperature structure of shock relaxation in air-dusty explosive media
Demolition
Detonators
Explosives
Mining equipment
Swedish inventions | Nonel | [
"Chemistry",
"Engineering"
] | 252 | [
"Demolition",
"Mining equipment",
"Construction",
"Explosives",
"Explosions"
] |
11,772,928 | https://en.wikipedia.org/wiki/Weibull%20modulus | The Weibull modulus is a dimensionless parameter of the Weibull distribution. It represents the width of a probability density function (PDF) in which a higher modulus is a characteristic of a narrower distribution of values. Use case examples include biological and brittle material failure analysis, where modulus is used to describe the variability of failure strength for materials.
Definition
The Weibull distribution, represented as a cumulative distribution function (CDF), is defined by:

$F(x) = 1 - e^{-(x/\lambda)^m}$

in which m is the Weibull modulus. The scale parameter $\lambda$ is found during the fit of data to the Weibull distribution and represents the input value below which ~63% of the data is encompassed (since $F(\lambda) = 1 - 1/e \approx 0.632$). As m increases, the CDF distribution more closely resembles a step function at $x = \lambda$, which correlates with a sharper peak in the probability density function (PDF) defined by:

$f(x) = \frac{m}{\lambda} \left( \frac{x}{\lambda} \right)^{m-1} e^{-(x/\lambda)^m}$

Failure analysis often uses this distribution, as a CDF of the probability of failure F of a sample, as a function of applied stress σ, in the form:

$F(\sigma) = 1 - e^{-(\sigma/\sigma_0)^m}$

where $\sigma_0$ is the characteristic strength. The failure stress of the sample, σ, is substituted for the generic property in the first equation. The initial property is assumed to be 0, an unstressed, equilibrium state of the material.
In the plotted figure of the Weibull CDF, it is worth noting that the plotted functions all intersect at a stress value of 50 MPa, the characteristic strength for the distributions, even though the values of the Weibull moduli vary. In the plotted figure of the Weibull PDF, a higher Weibull modulus results in a steeper slope within the plot.
The Weibull distribution can also be multi-modal, in which case there would be multiple reported characteristic values and multiple reported moduli, m. The CDF for a bimodal Weibull distribution has the following form, when applied to materials failure analysis:

$F(\sigma) = \Phi \left[ 1 - e^{-(\sigma/\sigma_{01})^{m_1}} \right] + (1 - \Phi) \left[ 1 - e^{-(\sigma/\sigma_{02})^{m_2}} \right]$

This represents a material which fails by two different modes. In this equation m1 is the modulus for the first mode, and m2 is the modulus for the second mode; Φ is the fraction of the sample set which fails by the first mode. The corresponding PDF is defined by:

$f(\sigma) = \Phi \, \frac{m_1}{\sigma_{01}} \left( \frac{\sigma}{\sigma_{01}} \right)^{m_1 - 1} e^{-(\sigma/\sigma_{01})^{m_1}} + (1 - \Phi) \, \frac{m_2}{\sigma_{02}} \left( \frac{\sigma}{\sigma_{02}} \right)^{m_2 - 1} e^{-(\sigma/\sigma_{02})^{m_2}}$

Examples of a bimodal Weibull PDF and CDF are plotted in the figures of this article with the characteristic strengths being 40 and 120 MPa, the Weibull moduli being 4 and 10, and Φ = 0.5, corresponding to 50% of the specimens failing by each failure mode.
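As a check on the bimodal form, the short sketch below evaluates the mixture CDF given above at a few stresses, using the parameter values quoted in this section (characteristic strengths of 40 and 120 MPa, moduli of 4 and 10, Φ = 0.5):

```python
import numpy as np

def bimodal_weibull_cdf(sigma, s1=40.0, m1=4.0, s2=120.0, m2=10.0, phi=0.5):
    """Mixture-form bimodal Weibull CDF, with the parameters quoted in the text."""
    f1 = 1.0 - np.exp(-(sigma / s1) ** m1)  # failure probability, mode 1
    f2 = 1.0 - np.exp(-(sigma / s2) ** m2)  # failure probability, mode 2
    return phi * f1 + (1.0 - phi) * f2

for stress in (40.0, 80.0, 120.0):  # MPa
    print(f"F({stress:.0f} MPa) = {bimodal_weibull_cdf(stress):.3f}")
# At 40 MPa only mode 1 contributes appreciably (F ~ 0.32); by 120 MPa
# mode 1 has saturated and mode 2 reaches 1 - 1/e (F ~ 0.82).
```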
Linearization of the CDF
The complement of the cumulative Weibull distribution function can be expressed as:

$P = 1 - F = e^{-(\sigma/\sigma_0)^m}$

where P corresponds to the probability of survival of a specimen at a given stress value. Thus, it follows that:

$\frac{1}{P} = e^{(\sigma/\sigma_0)^m}$

where m is the Weibull modulus. If the probability of survival is plotted against the stress, the graph is sigmoidal, as shown in the figure above. Taking advantage of the fact that the exponential is the base of the natural logarithm, the above equation can be rearranged to:

$\ln \left( \frac{1}{P} \right) = \left( \frac{\sigma}{\sigma_0} \right)^m$

which, using the properties of logarithms, can also be expressed as:

$\ln \ln \left( \frac{1}{P} \right) = m \left( \ln \sigma - \ln \sigma_0 \right)$

When the left side of this equation is plotted as a function of the natural logarithm of stress, a linear plot can be created which has a slope of the Weibull modulus, m, and an x-intercept of $\ln \sigma_0$.
Looking at the plotted linearization of the CDFs from above it can be seen that all of the lines intersect the x-axis at the same point because all of the functions have the same value of the characteristic strength. The slopes vary because of the differing values of the Weibull moduli.
Measurement
Standards organizations have created multiple standards for measuring and reporting values of Weibull parameters, along with other statistical analyses of strength data:
ASTM C1239-13: Standard Practice for Reporting Uniaxial Strength Data and Estimating Weibull Distribution Parameters for Advanced Ceramics
ASTM D7846-21: Standard Practice for Reporting Uniaxial Strength Data and Estimating Weibull Distribution Parameters for Advanced Graphites
ISO 20501:2019 Fine Ceramics (Advanced Ceramics, Advanced Technical Ceramics) - Weibull Statistics for Strength Data
ANSI DIN EN 843-5:2007 Advanced Technical Ceramics - Mechanical Properties of Monolithic Ceramics at Room Temperature - Part 5: Statistical Analysis
When applying a Weibull distribution to a set of data, the data points must first be put in ranked order. For the use case of failure analysis, specimens' failure strengths are ranked in ascending order, i.e. from lowest to greatest strength. A probability of failure is then assigned to each failure strength measured; ASTM C1239-13 uses the following formula:

$P_{f,i} = \frac{i - 0.5}{N}$

where $i$ is the rank of the specimen and $N$ is the total number of specimens in the sample. From there, $P_{f,i}$ can be plotted against failure strength to obtain a Weibull CDF. The Weibull parameters, modulus and characteristic strength, can be obtained by fitting or by using the linearization method detailed above.
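A minimal sketch of this procedure, combining the ASTM rank estimator with the linearization from the previous section; the ten strength values are invented for illustration.

```python
import numpy as np

# Estimate the Weibull modulus and characteristic strength from a ranked
# strength data set. The strengths below are hypothetical example data.
strengths = np.sort(np.array(
    [42.0, 48.0, 51.0, 55.0, 58.0, 61.0, 63.0, 67.0, 71.0, 78.0]))  # MPa
n = strengths.size

prob_fail = (np.arange(1, n + 1) - 0.5) / n  # ASTM C1239-13: P_i = (i - 0.5)/N

x = np.log(strengths)                        # ln(sigma)
y = np.log(np.log(1.0 / (1.0 - prob_fail)))  # ln ln(1/(1 - F)) = ln ln(1/P_survival)

m, intercept = np.polyfit(x, y, 1)           # slope is the Weibull modulus
sigma_0 = np.exp(-intercept / m)             # x-intercept gives ln(sigma_0)

print(f"Weibull modulus m ~ {m:.1f}")
print(f"characteristic strength sigma_0 ~ {sigma_0:.0f} MPa")
```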
Example uses from published work
Weibull statistics are often used for ceramics and other brittle materials. They have also been applied to other fields as well such as meteorology where wind speeds are often described using Weibull statistics.
Ceramics and brittle materials
For ceramics and other brittle materials, the maximum stress that a sample can be measured to withstand before failure may vary from specimen to specimen, even under identical testing conditions. This is related to the distribution of physical flaws present in the surface or body of the brittle specimen, since brittle failure processes originate at these weak points. Much work has been done to describe brittle failure with the field of linear elastic fracture mechanics and specifically with the development of the ideas of the stress intensity factor and Griffith Criterion. When flaws are consistent and evenly distributed, samples will behave more uniformly than when flaws are clustered inconsistently. This must be taken into account when describing the strength of the material, so strength is best represented as a distribution of values rather than as one specific value.
Consider strength measurements made on many small samples of a brittle ceramic material. If the measurements show little variation from sample to sample, the calculated Weibull modulus will be high, and a single strength value would serve as a good description of the sample-to-sample performance. It may be concluded that its physical flaws, whether inherent to the material itself or resulting from the manufacturing process, are distributed uniformly throughout the material. If the measurements show high variation, the calculated Weibull modulus will be low; this reveals that flaws are clustered inconsistently, and the measured strength will be generally weak and variable. Products made from components of low Weibull modulus will exhibit low reliability and their strengths will be broadly distributed. With careful manufacturing processes Weibull moduli of up to 98 have been seen for glass fibers tested in tension.
A table is provided with the Weibull moduli for several common materials. However, it is important to note that the Weibull modulus is a fitting parameter from strength data, and therefore the reported value may vary from source to source. It also is specific to the sample preparation and testing method, and subject to change if the analysis or manufacturing process changes.
Organic materials
Studies examining organic brittle materials highlight the consistency and variability of the Weibull modulus within naturally occurring ceramics such as human dentin and abalone nacre. Research on human dentin samples indicates that the Weibull modulus remains stable across different depths or locations within the tooth, with an average value of approximately 4.5 and a range between 3 and 6. Variations in the modulus suggest differences in flaw populations between individual teeth, thought to be caused by random defects introduced during specimen preparation. Speculation exists regarding a potential decrease in the Weibull modulus with age due to changes in flaw distribution and stress sensitivity. Failure in dentin typically initiates at these flaws, which can be intrinsic or extrinsic in origin, arising from factors such as cavity preparation, wear, damage, or cyclic loading.
Studies on the abalone shell illustrate its unique structural adaptations, sacrificing tensile strength perpendicular to its structure to enhance strength parallel to the tile arrangement. The Weibull modulus of abalone nacre samples is determined to be 1.8, indicating a moderate degree of variability in strength among specimens.
Quasi-brittle materials
The Weibull modulus of quasi-brittle materials correlates with the decline in the slope of the energy barrier spectrum, as established in fracture mechanics models. This relationship allows for the determination of both the fracture energy barrier spectrum decline slope and the Weibull modulus, while keeping factors like crack interaction and defect-induced degradation in consideration. Temperature dependence and variations due to crack interactions or stress field interactions are observed in the Weibull modulus of quasi-brittle materials. Damage accumulation leads to a rapid decrease in the Weibull modulus, resulting in a right-shifted distribution with a smaller Weibull modulus as damage increases.
Quality analysis
Weibull analysis is also used in quality control and "life analysis" for products. A higher Weibull modulus allows for companies to more confidently predict the life of their product for use in determining warranty periods.
Other methods of characterization for brittle materials
A further method to determine the strength of brittle materials has been described by the Wikibook contribution Weakest link determination by use of three parameter Weibull statistics.
References
Materials science
Engineering statistics | Weibull modulus | [
"Physics",
"Materials_science",
"Engineering"
] | 1,915 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Engineering statistics"
] |
11,773,868 | https://en.wikipedia.org/wiki/L-selectride | L-selectride (lithium tri-sec-butylborohydride) is an organoboron compound with the chemical formula Li[(sec-C4H9)3BH]. A colorless salt, it is usually dispensed as a solution in THF. As a particularly basic and bulky borohydride, it is used for the stereoselective reduction of ketones.
Use in synthesis
Like other borohydrides, reductions are effected in two steps: delivery of the hydride equivalent to give the lithium alkoxide, followed by hydrolytic workup. A simplified scheme, with R2C=O representing a generic ketone:

Li[(sec-C4H9)3BH] + R2C=O → R2CH-OLi + (sec-C4H9)3B
R2CH-OLi + H2O → R2CH-OH + LiOH
The selectivity of this reagent is illustrated by its reduction of all three methylcyclohexanones to the less stable methylcyclohexanols in >98% yield.
Under certain conditions, L-selectride can selectively reduce enones by conjugate addition of hydride, owing to the greater steric hindrance the bulky hydride reagent experiences at the carbonyl carbon relative to the (also-electrophilic) β-position. L-Selectride can also stereoselectively reduce carbonyl groups in a 1,2-fashion, again due to the steric nature of the hydride reagent.
It reduces ketones to alcohols.
Related compounds
N-Selectride and K-Selectride are related compounds which have sodium and potassium cations, respectively, instead of lithium. These reagents can sometimes be used as alternatives to, for instance, sodium amalgam reductions in inorganic chemistry. Other related reagents include:
Lithium Trisiamylborohydride
Lithium triethylborohydride ("Super hydride")
References
Borohydrides
Organolithium compounds
Reducing agents
Sec-Butyl compounds | L-selectride | [
"Chemistry"
] | 349 | [
"Organolithium compounds",
"Reagents for organic chemistry",
"Redox",
"Reducing agents"
] |
11,774,223 | https://en.wikipedia.org/wiki/Function%20%28engineering%29 | In engineering, a function is interpreted as a specific process, action or task that a system is able to perform.
In engineering design
In the lifecycle of engineering projects, Requirements and Functional specification documents are usually distinguished in sequence. The Requirements document usually specifies the most important attributes of the requested system. In the Design specification documents, the requested functions are frequently physical or software processes and systems.
In products
For advertising and marketing of technical products, the number of functions they can perform is often counted and used for promotion. For example, a calculator capable of the basic mathematical operations of addition, subtraction, multiplication, and division would be called a "four-function" model; when other operations are added, for example for scientific, financial, or statistical calculations, advertisers speak of "57 scientific functions", etc. A wristwatch with stopwatch and timer facilities would similarly claim a specified number of functions. To maximise the claim, trivial operations which do not significantly enhance the functionality of a product may be counted.
References
See also
Process
System
Utility
Engineering concepts | Function (engineering) | [
"Engineering"
] | 217 | [
"nan"
] |
11,774,456 | https://en.wikipedia.org/wiki/Phoma%20draconis | Phoma draconis is a fungal plant pathogen.
See also
List of foliage plant diseases (Agavaceae)
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
draconis
Fungi described in 1983
Fungus species | Phoma draconis | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,774,474 | https://en.wikipedia.org/wiki/MEO%20%28telecommunication%20company%29 | MEO (formerly TMN and PTC) is a mobile and fixed telecommunications service and brand from Altice Portugal (formerly Portugal Telecom), managed by MEO - Serviços de Comunicações e Multimédia. The service was piloted in Lisbon in 2007 and was later extended to Porto and Castelo Branco.
The company was created on September 18, 2000, about nine months after the liberalization of the fixed telecommunications market in Portugal, and was considered the historic fixed-network operator. On December 29, 2014, Portugal Telecom extinguished the subsidiary TMN, which in January of that year had changed its name to "MEO", integrating the subsidiary into PT Comunicações. On the same date, PT Comunicações adopted the name of the extinct subsidiary, "MEO – Serviços de Comunicações e Multimédia".
History
MEO in its current form was founded in 2007 after the separation of PT Comunicações and PT Multimédia (later ZON and now NOS). While PT Multimédia employed coaxial cables, MEO, after the separation, made use of copper cables; the television service supplied by MEO over the copper network is served on the ADSL line. Telecomunicações Móveis Nacionais (TMN), Portugal's first and largest mobile network operator, was later integrated into the MEO brand in 2014, after two of TMN's shareholders, Telefones de Lisboa e Porto (TLP) and Marconi Comunicações Internacionais (the Portuguese operations of the UK-based Marconi Company), were acquired by Portugal Telecom in 1994 and 2002 respectively.
The commercial launch of the ADSL2+ service took place in June 2007. The satellite service began in April 2008, using the Hispasat satellite, soon followed by the FTTH service. The ADSL2+ and FTTH offers reached across Portugal and included broadband Internet services (at up to 400 Mbit/s) as well as a telephone service.
In May 2009, PT Portugal (now Altice) announced, after digital terrestrial television (TDT) transmissions had started, that the triple-play service was also available over fiber optics, with speeds that can reach 400 Mbit/s.
Another service relying on TDT, MEO TDT, was included with 3G modem cards and received the TDT broadcast signal alongside mobile internet. This service included one high-definition (HD) channel and the five main Portuguese channels. MEO TDT also offered some of the features found on the ADSL and fiber-optic service (pause, record, etc.).
In July 2010, PT Portugal announced that MEO had surpassed 700 thousand customers.
In November 2011, MEO achieved one million subscribers. In January 2014, MEO and TMN became a single brand, MEO Serviços de Comunicações e Multimédia S.A..
In 2013, MEO launched a quadruple-play service called M4O which, in addition to the functionalities already referred to, added mobile phone service, following a converged strategy. In July 2014, MEO launched a bundle which also includes mobile internet, called M5O.
Chronology
1991
Created on 22 March to take on the only existing mobile service in Portugal, based on an analogue network launched in 1989 by TLP (Lisbon and Oporto's phone service) and CTT (Portuguese Postal Service/national phone service), both State companies. The network prefix was 0676.
In December, Marconi (Portuguese international phone service, also a State company) bought into the company; ownership became equally split among the three partners.
1992
In March, regulatory agency ICP-Instituto das Comunicações de Portugal (Communications Institute of Portugal) announced the winners of the public bids for two licenses for mobile services through GSM. One winner was TMN; the second winner was a private consortium formed for the bid, called Telecel (later bought by Vodafone).
In May, the first GSM call was placed. Prefix was 0936.
On 8 October the GSM service was commercially launched.
1993
In May, the first roaming call was made.
In October, TMN launched the voice mail service for free to all customers.
1994
TMN was incorporated into Portugal Telecom, the State-run telecom born from the merger of TLP, Marconi and Telecom Portugal (spin-off from CTT).
1995
Inauguration, in February, of the digital network in the Madeira island.
In September, launch of MIMO, the world's first prepaid mobile service.
1996
In April, a new logo was presented.
In June, launch of SPOT, a prepaid tariff for younger customers.
In July, inauguration of the digital network in the Azores.
1998
In April, TMN was the first Portuguese operator to adopt billing by the second as imposed by law.
TMN reached one million customers.
In September, third GSM competitor was launched: Optimus (now NOS).
1999
TMN reached two million customers. It got its second million customers in just one year, as opposed to nine years for the first million.
ICP granted to TMN a license for Fixed phone services, with prefix 1096. TMN would only offer this service to its corporate customers, backed on its parent company's landline network.
2000
Prefix of TMN was changed to 96 as part of an overall restructuring of the national numbering system.
2003
In June, TMN launched a mobile portal, i9 (pronounced innov), on the trail of Vodafone live!, launched in November 2005.
2005
On 28 September TMN introduced a new logo, shown above.
2007
PTC launches the IPTV service in February 2007.
In June 2007, PT presented the new dimension of the MEO service.
2008
MEO Satélite arrives in the national territory on April 2, 2008, for PT subscribers.
Disney Channel arrived at MEO on June 1, 2008.
2009
On April 2, 2009, MEO celebrated its first year and launched its children's television offering.
PT presented MEO Fibra in May 2009, after Clix and ZON.
In October 2009, MEO had more than 500 thousand customers.
2010
In June 2010, MEO launched apps on TV with MEO Interactivo.
In autumn 2010, PT launches Music Box, MEO Videoclube on PC and MEO Jogos.
2011
TMN launches the E tariff, a new prepaid card.
MEO premieres the series Fora da Box, with episodes on channel 54 and Facebook.
MEO reaches 1 million customers and the new MEO Go! is created. TMN cell phones go on sale at PT Bluestore shops (now Lojas MEO).
In December, MEO launches the MEO Like Music concerts, discontinued in 2018.
2012
On February 10, 2012, MEO launches the MEO Kanal communication channel.
In March 2012, TMN changes its slogan from Até Já to Vamos Lá.
In May 2012, MEO launches 4G mobile broadband and reaches more channels on MEO Kanal.
In July 2012, MEO partnered with Cofina (now Medialivre) to launch a television channel in 2013.
In October 2012, MEO was chosen as a consumer-choice brand in Portugal for 2012.
2013
On January 11, 2013, MEO launched a new logo, the second in the brand's history.
On the same day, PT Bluestore shops were rebranded as Lojas MEO.
2014
On January 27, 2014, PT Portugal discontinued the TMN brand and merged it with MEO. On December 29, 2014, PT Portugal extinguished TMN, changing its name to MEO - Serviços de Comunicações e Multimédia, and on the same day PTC merged with the MEO brand.
2015
In January 2015, PT decided to combine MEO and PTC into a single company, now renamed MEO - Serviços de Comunicações e Multimédia.
On July 2, 2015, PT's MEO passed to the Altice group.
2017
From 31 October, the carrier name on the iPhone changed to altice MEO.
2018
On August 8, 2018, MEO reached one and a half million subscribers.
2020
MEO is the provider with the largest share of television subscribers, with 39.8%, followed by NOS with 39.7%.
2023
In June, MEO celebrated 15 years. The protagonists of the iconic 'Comunicado à Nação' campaign are now four 15-year-olds who appear in the same setting and the same futuristic clothes, with the same production team and photographer responsible for the 2008 campaign.
2024
In May 2024, MEO changed its logo for the third time, adopting the current logo.
Marketing
The communication campaign invested in a strong advertising effort, fronted by the Portuguese comedy group Gato Fedorento. In 2015, after 7 years, marketing moved on from Gato Fedorento and Cristiano Ronaldo launched the new campaign for the MEO brand, a sub-division of Altice. In 2018 came Sophia, presented with the new 4K and portable boxes in a joint campaign with Cristiano Ronaldo. The current slogan is MEO Humaniza-te.
MEO Logos
Service
MEO's technology delivers television (IPTV), telephone (VoIP) and internet over fiber optics and ADSL. MEO ADSL integrates a router with a switch, connected to the telephone plug to decode and distribute the signal, and another device for the television called the MEOBox. The two MEOBox models are built by Motorola and Scientific Atlanta, with a processor, an optional hard drive, an HDMI slot, two SCART slots, a digital sound slot and an Ethernet slot.
The MEO Fiber Optic service uses an Optical Network Terminal, that decodes the fiber optic signal and passes it to the router.
Television
MEO offers television content transmission through four platforms: the ADSL network (IPTV), fiber optic (IPTV), satellite (DTH) and the 3G/4G network inherited from mobile communications carrier TMN, added to MEO in January 2014.
MEO ADSL television service includes a basic slate of 120 TV channels. Subscribers can access more than 170 channels if purchasing the “MEO Total” bundle, which included HD channels. FTTH MEO offers bundles distinguished by the speed of data transmission. Just like MEO ADSL, the basic package includes 120 channels.
With IPTV, channels can be purchased through the MEOBox remote control, unlike with the satellite and coax services. Another advantage is its channel-switching speed of around 200 milliseconds.
The IPTV network also enables the customer to play games on the MEOBox and to explore content from the internet and dozens of interactive apps. The programming schedule is available along with a "PIP" (Picture In Picture) view showing other channels onscreen alongside the current selection. It is also possible to record and pause the show being transmitted live, or even to watch what was transmitted in the last 7 days (automatic recordings).
In geographies where fiber-optic or ADSL networks are not available, MEO offers a television service by satellite.
MEO's watch-anywhere TV solution is called MEO Go.
MEO VideoClube
MEO VideoClube is a video-on-demand service that offers a catalog of thousands of Portuguese and international programs (including movies, documentaries and concerts). Additional features available include trailers, synopses, cast and IMDb rating; a favorites list; a 48-hour viewing window; renting HD/3D movies with Dolby surround sound; total control and privacy through a security PIN for "rentals and purchases" and a security PIN to access adult content. MEO VideoClube can be used inside or outside the home on televisions, tablets, smartphones or personal computers through the MEO Go service, and on connected TVs and game consoles.
It is possible to watch movies without an internet connection, using Download & Play, available on a PC through MEO Go. In 2014, the service was refreshed with an improved image, faster navigation and new features with additional content and information, and a more accessible user experience. MEO VideoClube offers multiple payment options including a monthly invoice and the prepaid MEO VideoClube card.
MEO Go
MEO Go allows viewers to watch live TV and video-on-demand content on Windows, Mac and mobile devices such as tablets and smartphones via any 3G/4G broadband or WiFi internet connection. MEO Go offers over 70 live TV channels; automatic recording; thousands of movies; and access to a programming guide (Guia TV) that contains detailed program information and allows scheduling alerts and remote recordings. The free MEO Go app is available for the Android, iOS, Windows Phone and Windows 8 operating systems. The service is available at no extra cost to MEO TV customers, via MEO's home WiFi or via any 3G/4G and WiFi internet access.
Since MEO Go's launch, in November 2011, the service added:
Download movies to watch later without internet access (August 2012)
MEO Go app for Windows 8 (October 2012)
Automatic recording (January 2013)
Tablet (February 2013) and iPhone and iPod Touch (February 2014) apps with a remote control, social network integration and a share-to-TV feature, to send contents from the mobile device to the TV
In 2013 MEO Go had more than 100,000 monthly active users, and more than 500,000 app downloads. Worldwide, it was recognized as one of the most complete and innovative platforms of its kind, winning international awards, including the CSI Awards 2014 and the Stevie Awards 2014.
Internet
The ADSL internet service offers 24 Mbit/s downstream and 1 Mbit/s upstream without traffic limitation, nationally or internationally. The fiber-optic network allows speeds of up to 400 Mbit/s downstream and 100 Mbit/s upstream for a higher data allocation. In 2015 mobile internet was added, in the bundle named M5O.
Telephone
The telephone service offers unlimited free calls to all national fixed networks on the MEO fixed network (formerly PTC). Initial costs are included in the MEO service subscription.
Mobile phone
Mobile phone service is supplied through the TMN networks. Following the demise of the TMN brand, tariffs remained unchanged and the telephone support line (1696) remained the same as well. In 2015 telephone service was included in a quadruple-play pack, named M4O.
Channels
The television channel line-up includes Portuguese and international channels such as:
AXN
Star Channel
Syfy
MTV
Disney Channel
Cartoon Network
Cartoonito
Canal Panda
RTP Memória
National Geographic Channel
FOX News
CNN
Russia Today (suspended)
Al Jazeera
TVCine
Eurosport
BBC News
DW-TV
BBC Earth
Rai Italia
BTV
Sporting TV
Porto Canal
Canal 11
DAZN
Sport TV+
Record
Sponsorship
MEO sponsored all of the "Big Three" of the Primeira Liga (Benfica, Porto and Sporting) from 2005 to 2015, when the sponsorship rights passed to NOS. It later started sponsoring Porto again, as well as Rio Ave and Desportivo das Aves. It is also a sponsor of the Portuguese Football Federation (FPF). It also sponsors athletes such as footballer Cristiano Ronaldo, MotoGP racer Miguel Oliveira and surfer Frederico Morais, and the World Surf League event MEO Rip Curl Pro Portugal.
Sharing of sports TV rights
In July 2016, MEO agreed with NOS and Vodafone Portugal to share the broadcasting rights to FC Porto's live matches, which currently air on Porto Canal, according to an agreement disclosed to the CMVM.
The company had previously held the rights to Primeira Liga matches through the Portuguese channel Sport TV.
MEO Empresas
The service was created by PT in 2014 after the merger of PT Negócios and PT Prime, and offers a range of technological and telecommunications solutions for small and medium-sized enterprises, large companies and institutions, such as the Government of Portugal. It was renamed from PT Empresas to Altice Empresas in January 2020, becoming MEO Empresas in 2023.
Net neutrality dispute
A MEO advertisement for data access was the focus of a discussion beginning in October 2017 in Portugal, the European Union and the United States and relating to net neutrality.
MEO posted their advertisement for Internet services on their own website. On 26 October 2017, Democratic Party U.S. Representative Ro Khanna posted a screenshot of MEO's website to his Twitter feed while stating that their sales model was a violation of net neutrality.
Following Khanna's message, the technology community at Reddit discussed it on 27 October. Net neutrality advocate Cory Doctorow featured the ad as an illustration of a net neutrality violation on Boing Boing on 28 October. Quartz reported that the ad showed a net neutrality violation on 30 October. Tim Wu, the legal scholar who defined the term "net neutrality", commented on 30 October, after reading the Quartz article, that the ad did show a violation of net neutrality. From this point the discussion was far-ranging. By 22 November, MEO had published a response to the attention.
Responses
Many media sources reported that the sales model the image described was a violation of net neutrality.
Some other media sources reported that many people were misunderstanding the image. To clarify, these sources reported that MEO's sales model is aligned with Portuguese and European law, and that the law defines net neutrality in a way that permits MEO's sales model. As a further clarification, sources noted that the MEO ad applies to mobile phone services and does not describe an additional fee on broadband service (cf. EU Regulation 2015/2120).
References
External links
IPTV UK
Altice Portugal
2007 establishments in Portugal
Digital television
Internet service providers of Portugal
Streaming television
Mobile phone companies of Portugal
Television in Portugal | MEO (telecommunication company) | [
"Technology"
] | 3,737 | [
"Multimedia",
"Streaming television"
] |
11,774,498 | https://en.wikipedia.org/wiki/Baum%E2%80%93Connes%20conjecture | In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object.
The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture.
The conjecture is also closely related to index theory, as the assembly map is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program.
The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects.
Formulation
Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism

$$\mu \colon RK^{\Gamma}_{*}(\underline{E\Gamma}) \longrightarrow K_{*}(C^{*}_{r}(\Gamma)),$$

called the assembly map, from the equivariant K-homology with Γ-compact supports of the classifying space $\underline{E\Gamma}$ of proper actions to the K-theory of the reduced C*-algebra $C^{*}_{r}(\Gamma)$ of Γ. The subscript index * can be 0 or 1.
Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism:
Baum–Connes conjecture. The assembly map μ is an isomorphism.
As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems of the reduced C*-algebra, one usually views the conjecture as an "explanation" of the right hand side.
The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982.
In case Γ is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space $B\Gamma$ of Γ.
There is also a more general form of the conjecture, known as the Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a C*-algebra $A$ on which Γ acts by C*-automorphisms. In KK-language it says that the assembly map

$$\mu_{A} \colon RKK^{\Gamma}_{*}(\underline{E\Gamma}, A) \longrightarrow K_{*}(A \rtimes_{r} \Gamma)$$

is an isomorphism, containing the case without coefficients as the case $A = \mathbb{C}$.

Counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. The conjecture with coefficients nevertheless remains an active area of research, since it is, not unlike the classical conjecture, often seen as a statement concerning particular groups or classes of groups.
Examples
Let Γ be the integers $\mathbb{Z}$. Then the left hand side is the K-homology of $B\mathbb{Z}$, which is the circle. The reduced C*-algebra of the integers is, by the commutative Gelfand–Naimark transform (which reduces to the Fourier transform in this case), isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism.
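As a sketch of the computation behind this example (standard facts about the circle; the identification of $C^{*}_{r}(\mathbb{Z})$ with $C(S^{1})$ is via the Fourier transform mentioned above): the K-homology of the circle is

$$K_{0}^{\mathrm{hom}}(S^{1}) \cong \mathbb{Z} \cong K_{1}^{\mathrm{hom}}(S^{1}),$$

while on the analytic side

$$C^{*}_{r}(\mathbb{Z}) \cong C(\widehat{\mathbb{Z}}) = C(S^{1}), \qquad K_{0}(C(S^{1})) \cong \mathbb{Z} \cong K_{1}(C(S^{1})),$$

so both sides agree in each degree, and Kasparov's Poincaré duality identifies the assembly map with this isomorphism.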
Results
The conjecture without coefficients is still open, although the field has received great attention since 1982.
The conjecture is proved for the following classes of groups:
Discrete subgroups of $SO(n,1)$ and $SU(n,1)$.
Groups with the Haagerup property, sometimes called a-T-menable groups. These are groups that admit an isometric action on an affine Hilbert space $H$ which is proper in the sense that $\lVert g_n \cdot v \rVert \to \infty$ for all $v \in H$ and all sequences of group elements $(g_n)$ with $g_n \to \infty$. Examples of a-T-menable groups are amenable groups, Coxeter groups, groups acting properly on trees, and groups acting properly on simply connected cubical complexes.
Groups that admit a finite presentation with only one relation.
Discrete cocompact subgroups of real Lie groups of real rank 1.
Cocompact lattices in $SL(3,\mathbb{R})$, $SL(3,\mathbb{C})$ or $SL(3,\mathbb{Q}_p)$. It was a long-standing problem since the first days of the conjecture to expose a single infinite property (T) group that satisfies it. However, such a group was given by V. Lafforgue in 1998 as he showed that cocompact lattices in $SL(3,\mathbb{R})$ have the property of rapid decay and thus satisfy the conjecture.
Gromov hyperbolic groups and their subgroups.
Among non-discrete groups, the conjecture was shown in 2003 by J. Chabert, S. Echterhoff and R. Nest for the vast class of all almost connected groups (i.e. groups having a cocompact connected component), and all groups of $k$-rational points of a linear algebraic group over a local field $k$ of characteristic zero (e.g. $k = \mathbb{Q}_p$). For the important subclass of real reductive groups, the conjecture had already been shown in 1987 by Antony Wassermann.
Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987.
Injectivity is known for the following classes:
Discrete subgroups of connected Lie groups or virtually connected Lie groups.
Discrete subgroups of p-adic groups.
Bolic groups (a certain generalization of hyperbolic groups).
Groups which admit an amenable action on some compact space.
The simplest example of a group for which it is not known whether it satisfies the conjecture is $SL(3,\mathbb{Z})$.
References
External links
On the Baum-Connes conjecture by Dmitry Matsnev.
C*-algebras
K-theory
Surgery theory
Conjectures
Unsolved problems in mathematics | Baum–Connes conjecture | [
"Mathematics"
] | 1,251 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
11,774,532 | https://en.wikipedia.org/wiki/Phaeosphaeriopsis%20obtusispora | Phaeosphaeriopsis obtusispora is a fungal plant pathogen.
See also
List of foliage plant diseases (Agavaceae)
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporales
Fungus species | Phaeosphaeriopsis obtusispora | [
"Biology"
] | 56 | [
"Fungi",
"Fungus species"
] |
11,774,594 | https://en.wikipedia.org/wiki/Coniothyrium%20henriquesii | Coniothyrium henriquesii is a fungal plant pathogen.
References
Fungal plant pathogens and diseases
Pleosporales
Fungus species | Coniothyrium henriquesii | [
"Biology"
] | 29 | [
"Fungi",
"Fungus species"
] |
11,774,653 | https://en.wikipedia.org/wiki/Alternaria%20panax | Alternaria panax is a fungal plant pathogen, which causes Alternaria blight of ginseng.
References
panax
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Eudicot diseases
Fungi described in 1912
Fungus species | Alternaria panax | [
"Biology"
] | 50 | [
"Fungi",
"Fungus species"
] |
11,774,703 | https://en.wikipedia.org/wiki/Colletotrichum%20trichellum | Colletotrichum trichellum is a fungal plant pathogen. It is known for causing leaf and stem spot in English Ivy.
References
trichellum
Fungal plant pathogens and diseases
Eudicot diseases
Fungi described in 1817
Fungus species | Colletotrichum trichellum | [
"Biology"
] | 49 | [
"Fungi",
"Fungus species"
] |
11,774,737 | https://en.wikipedia.org/wiki/CBM-CFS3 | CBM-CFS3 (Carbon Budget Model of the Canadian Forest Sector) is a Windows-based software modelling framework for stand- and landscape-level forest ecosystem carbon accounting. It is used to calculate forest carbon stocks and stock changes for the past (monitoring) or into the future (projection). It can be used to create, simulate and compare various forest management scenarios in order to assess impacts on carbon. It is compliant with requirements under the Kyoto Protocol and with the Good Practice Guidance for Land Use, Land-Use Change and Forestry (2003) report published by the Intergovernmental Panel on Climate Change (IPCC).
It is the central model of the Government of Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS). The CBM-CFS3 was developed through a collaboration between Natural Resources Canada's Canadian Forest Service (CFS) and the Canadian Model Forest Network, and is currently supported by the CFS. The CBM-CFS3 is distributed at no charge by the Canadian Forest Service through Canada's National Forest Information System web site. Technical support is available by contacting Stephen Kull, Carbon Model Extension Forester, at the CFS.
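To illustrate the kind of stock-change bookkeeping such a model performs, here is a toy sketch in Python. The pool names, transfer rates, and the half-of-decay-to-soil split are hypothetical illustrations only; CBM-CFS3's actual pool structure and transfer rules are far more detailed.

```python
# Toy annual carbon stock-change calculation (illustrative only; not
# CBM-CFS3's pools or parameters). All quantities in tonnes C per hectare.

pools_t0 = {"biomass": 120.0, "dead_organic_matter": 45.0, "soil": 80.0}

def step(pools, growth=2.5, litterfall=1.8, decay=0.6, harvest=0.0):
    """Advance the carbon pools one year; rate arguments are tC/ha/yr."""
    new = dict(pools)
    new["biomass"] += growth - litterfall - harvest
    new["dead_organic_matter"] += litterfall - decay
    new["soil"] += decay * 0.5  # assumed: half of decay transfers to soil,
                                # the rest is emitted to the atmosphere
    return new

pools_t1 = step(pools_t0)
stock_change = sum(pools_t1.values()) - sum(pools_t0.values())
print(f"Net stock change: {stock_change:+.2f} tC/ha/yr")  # positive = sink
```

Projection simply iterates this step over future years under a chosen management scenario, which is how scenario comparisons of the kind described above are made.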
See also
Carbon accounting
References
External links
Canadian Forest Service, Forest Carbon Accounting Web Site
Canadian Forest Service CBM-CFS3 Web Site
Natural Resources Canada Web Site
Canadian Forest Service Web Site
The Canadian Model Forest Network
Good Practice Guidance for Land Use, Land-Use Change and Forestry
Forest models
Climate change in Canada | CBM-CFS3 | [
"Biology",
"Environmental_science"
] | 309 | [
"Environmental modelling",
"Forest models",
"Biological models"
] |
11,774,751 | https://en.wikipedia.org/wiki/Phyllosticta%20concentrica | Phyllosticta concentrica is a fungal plant pathogen.
See also
List of foliage plant diseases (Araliaceae)
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
concentrica
Fungi described in 1876
Taxa named by Pier Andrea Saccardo
Fungus species | Phyllosticta concentrica | [
"Biology"
] | 59 | [
"Fungi",
"Fungus species"
] |
11,774,803 | https://en.wikipedia.org/wiki/Guignardia%20philoprina | Guignardia philoprina is a plant pathogen that causes leaf spot on Araliaceae sp.
References
External links
Photo at the Fungi4Schools page of the British Mycological Society
Fungal plant pathogens and diseases
Botryosphaeriaceae
Fungi described in 1859
Fungus species | Guignardia philoprina | [
"Biology"
] | 60 | [
"Fungi",
"Fungus species"
] |
11,774,843 | https://en.wikipedia.org/wiki/Colletotrichum%20derridis | Colletotrichum derridis is a fungal plant pathogen.
References
derridis
Fungal plant pathogens and diseases
Fungi described in 1950
Fungus species | Colletotrichum derridis | [
"Biology"
] | 32 | [
"Fungi",
"Fungus species"
] |
11,774,908 | https://en.wikipedia.org/wiki/Cercospora%20rhapidicola | Cercospora rhapidicola is a fungal plant pathogen.
References
rhapidicola
Fungal plant pathogens and diseases
Fungus species | Cercospora rhapidicola | [
"Biology"
] | 31 | [
"Fungi",
"Fungus species"
] |
11,774,961 | https://en.wikipedia.org/wiki/Pucciniastrum%20epilobii | Pucciniastrum epilobii is a plant pathogen infecting fuchsias.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Pucciniales
Fungi described in 1861
Fungus species | Pucciniastrum epilobii | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,775,057 | https://en.wikipedia.org/wiki/Aspergillus%20fischeri | Aspergillus fischeri is a species of fungus in the genus Aspergillus. And is widely distribute in soil, grain and canned food world wide. In the other hand Aspergillus fischeri is a BSL-1 plant pathogen.
About 64% of species in genus Aspergillus lack knowned sexual reproduction in their life cycle, causing them were classfied into Fungi imperfecti before, producing the teleomorph name Neosartorya fischeri when the sexual reproduction were discoverd.
But after the abolish of the Fungi imperfecti nomenclature, Aspergillus which is the anamorph name of should be the holomorph name when we discuss this species.
Reproduction
Reproduction in Aspergillus fischeri can occur in three ways: sexual, asexual and parasexual.
Asexual and parasexual reproduction are the two nonsexual modes. The distinction between them lies in the origin of the parent hyphae: when mycelia from different individuals occasionally fuse, the fusion marks the beginning of the parasexual stage. The two haploid nuclei, one from each mother cell, then share one cell, dividing mitotically and undergoing random crossing-over events until the haploid chromosome number is restored.
Sexual reproduction in fungi involves two mating types. According to the number of mating types produced by an individual, fungi are classified as homothallic or heterothallic. Aspergillus fischeri is homothallic, meaning a single individual can produce both mating types.
Pathogenic contrast with Aspergillus fumigatus
Although Aspergillus fumigatus, which is closely related to Aspergillus fischeri, is the main causative agent of aspergillosis, Aspergillus fischeri is regarded as non-pathogenic.
Recent research has examined the differences between these two species, such as disease progression in immunosuppressed rats, and found that under most common stress conditions the survival rate of Aspergillus fischeri is lower than that of Aspergillus fumigatus.
Plastic degradation
Members of Aspergillus are commonly used to study primary and secondary metabolism in fungi, with Aspergillus nidulans serving as a model organism. Because of the wide range of substrates they can degrade, they are sometimes described as cell factories. As a result, scientists have investigated many fungi that can degrade plastic, and have found that Aspergillus fischeri can degrade polycaprolactone (PCL).
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
fischeri
Fungus species | Aspergillus fischeri | [
"Biology"
] | 574 | [
"Fungi",
"Fungus species"
] |
11,775,253 | https://en.wikipedia.org/wiki/Isariopsis%20clavispora | Isariopsis clavispora is a fungal plant pathogen that causes leaf spot on grape.
References
External links
USDA ARS Fungal Database
Mycosphaerellaceae
Fungi described in 1886
Leaf diseases
Fungal grape diseases
Taxa named by Miles Joseph Berkeley
Fungus species | Isariopsis clavispora | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,775,326 | https://en.wikipedia.org/wiki/Monostichella%20coryli | Monostichella coryli is a plant pathogen infecting hazelnuts.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Hazelnut tree diseases
Dermateaceae
Taxa named by John Baptiste Henri Joseph Desmazières
Fungus species | Monostichella coryli | [
"Biology"
] | 57 | [
"Fungi",
"Fungus species"
] |
11,775,351 | https://en.wikipedia.org/wiki/Labrella%20coryli | Labrella coryli is an ascomycete fungus. It is a plant pathogen that causes anthracnose on hazelnut. It was not found in North America prior to 1951.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal tree pathogens and diseases
Hazelnut tree diseases
Enigmatic Ascomycota taxa
Fungus species | Labrella coryli | [
"Biology"
] | 73 | [
"Fungi",
"Fungus species"
] |
11,775,446 | https://en.wikipedia.org/wiki/Ascochyta%20prasadii | Ascochyta prasadii is a plant pathogen that causes leaf spot and stem cankers on hemp.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Eudicot diseases
prasadii
Fungi described in 1968
Fungus species | Ascochyta prasadii | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,775,515 | https://en.wikipedia.org/wiki/Ophiobolus%20anguillides | Ophiobolus anguillides is a plant pathogen that causes stem canker on hemp.
References
Fungal plant pathogens and diseases
Hemp diseases
Pleosporales
Fungus species | Ophiobolus anguillides | [
"Biology"
] | 41 | [
"Fungi",
"Fungus species"
] |
11,775,553 | https://en.wikipedia.org/wiki/Stemphylium%20cannabinum | Stemphylium cannabinum is a plant pathogen that infects hemp.
References
Dobrozrakova, Taisiia Leonidovna; Letova, M.F.; Stepanov, K.M.; Khokhryakov, M.K. (1956). Opredelitel' Bolezni Rastenii [A manual on the determination of plant diseases]: 1–661.
Fungal plant pathogens and diseases
Hemp diseases
Pleosporaceae
Fungus species | Stemphylium cannabinum | [
"Biology"
] | 107 | [
"Fungi",
"Fungus species"
] |
11,775,588 | https://en.wikipedia.org/wiki/Ascochyta%20humuli | Ascochyta humuli is a plant pathogen that causes leaf spot on hops.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Hop diseases
humuli
Fungus species | Ascochyta humuli | [
"Biology"
] | 42 | [
"Fungi",
"Fungus species"
] |
11,775,607 | https://en.wikipedia.org/wiki/Pucciniastrum%20hydrangeae | Pucciniastrum hydrangeae is a plant pathogen infecting hydrangeas.
References
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Pucciniales
Fungi described in 1906
Fungus species | Pucciniastrum hydrangeae | [
"Biology"
] | 44 | [
"Fungi",
"Fungus species"
] |
11,775,685 | https://en.wikipedia.org/wiki/Meliola%20mangiferae | Meliola mangiferae, also described as black mildew, is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Meliolaceae
Fungi described in 1905
Fungus species | Meliola mangiferae | [
"Biology"
] | 49 | [
"Fungi",
"Fungus species"
] |
11,775,724 | https://en.wikipedia.org/wiki/Cochliobolus%20tuberculatus | Cochliobolus tuberculatus is a plant pathogen.
Genomics
Condon et al., 2013 elucidates the pathogen's relationship with other Cochliobolus species.
References
Fungal plant pathogens and diseases
tuberculatus
Fungi described in 1962
Fungus species | Cochliobolus tuberculatus | [
"Biology"
] | 59 | [
"Fungi",
"Fungus species"
] |
11,775,744 | https://en.wikipedia.org/wiki/Physalospora%20disrupta | Physalospora disrupta is a plant pathogen infecting mangoes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Mango tree diseases
Xylariales
Fungus species | Physalospora disrupta | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
11,775,868 | https://en.wikipedia.org/wiki/Network%20allocation%20vector | The network allocation vector (NAV) is a virtual carrier-sensing mechanism used with wireless network protocols such as IEEE 802.11 (Wi-Fi) and IEEE 802.16 (WiMax). The virtual carrier-sensing is a logical abstraction which limits the need for physical carrier-sensing at the air interface in order to save power. The MAC layer frame headers contain a duration field that specifies the transmission time required for the frame, in which time the medium will be busy. The stations listening on the wireless medium read the Duration field and set their NAV, which is an indicator for a station on how long it must defer from accessing the medium.
The NAV may be thought of as a counter, which counts down to zero at a uniform rate. When the counter is zero, the virtual carrier-sensing indication is that the medium is idle; when nonzero, the indication is busy. The medium shall be determined to be busy when the station (STA) is transmitting. In IEEE 802.11, the NAV represents the number of microseconds the sending STA intends to hold the medium busy (maximum of 32,767 microseconds). When the sender sends a Request to Send the receiver waits one SIFS before sending Clear to Send. Then the sender will wait again one SIFS before sending all the data. Again the receiver will wait a SIFS before sending ACK. So NAV is the duration from the first SIFS to the ending of ACK. During this time the medium is considered busy.
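As a rough illustration of the duration bookkeeping just described, the sketch below computes the NAV a station might advertise in an RTS frame for an RTS/CTS/DATA/ACK exchange. The SIFS value, frame lengths, and data rate are illustrative assumptions, and PHY preamble/header overheads are ignored; this is not taken from any standard's text.

```python
# Sketch of NAV duration for an RTS/CTS exchange: the medium is reserved
# from the first SIFS after the RTS through the end of the ACK, matching
# the timeline described above.

SIFS_US = 10  # assumed SIFS for an 802.11b-style PHY; varies by PHY

def frame_airtime_us(length_bytes: int, rate_mbps: float) -> float:
    """Transmission time of a frame body, ignoring PHY preamble/header."""
    return length_bytes * 8 / rate_mbps

def rts_nav_us(cts_len=14, data_len=1500, ack_len=14, rate_mbps=11.0) -> float:
    # Duration field in the RTS covers: SIFS + CTS + SIFS + DATA + SIFS + ACK
    return (3 * SIFS_US
            + frame_airtime_us(cts_len, rate_mbps)
            + frame_airtime_us(data_len, rate_mbps)
            + frame_airtime_us(ack_len, rate_mbps))

print(f"NAV set by RTS: ~{rts_nav_us():.0f} microseconds (cap: 32767)")
```

Stations overhearing the RTS load this value into their NAV counters and defer until the countdown reaches zero.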
Wireless stations are often battery-powered, so to conserve power the stations may enter a power-saving mode. A station decrements its NAV counter until it becomes zero, at which time it is awakened to sense the medium again.
The NAV virtual carrier sensing mechanism is a prominent part of the CSMA/CA MAC protocol used with IEEE 802.11 WLANs. NAV is used in DCF, PCF and HCF.
Media access control
Computer networking | Network allocation vector | [
"Technology",
"Engineering"
] | 418 | [
"Computer networking",
"Computer engineering",
"Computer network stubs",
"Computer science",
"Computing stubs"
] |
11,776,086 | https://en.wikipedia.org/wiki/Drechslera%20avenacea | Drechslera avenacea is a fungal plant pathogen.
Hosts
Hosts are Avena spp., including wild oats, against which it is used as a bioherbicide.
Research
Ghajar et al., 2006 investigates the species and provides culturing parameters.
References
Fungal plant pathogens and diseases
Pleosporaceae
Fungus species | Drechslera avenacea | [
"Biology"
] | 67 | [
"Fungi",
"Fungus species"
] |
11,776,186 | https://en.wikipedia.org/wiki/Ascochyta%20caricae | Ascochyta caricae is a fungal plant pathogen that causes dry rot on papaya.
See also
List of Ascochyta species
References
Ascochyta
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Papaya tree diseases
caricae
Fungi described in 1851
Taxa named by Gottlob Ludwig Rabenhorst
Fungus species | Ascochyta caricae | [
"Biology"
] | 77 | [
"Fungi",
"Fungus species"
] |
11,776,312 | https://en.wikipedia.org/wiki/Corticium%20roseum | Corticium roseum is a species of fungus in the family Corticiaceae. Basidiocarps (fruit bodies) are effused, smooth, corticioid, and pink. The species has a wide, north and south temperate distribution and in Europe is typically found on dead, attached branches of Salix and Populus.
Taxonomy
Corticium roseum was originally described by Persoon in 1794 as part of his new genus Corticium. It was later selected as the type species of the genus. Morphological differences between collections indicated that C. roseum might be a species complex and several new species were described. Molecular research, based on cladistic analysis of DNA sequences, has partly confirmed this. Corticium boreoroseum, C. medioroseum, and C. malagasoroseum are separate species, based on DNA evidence, whilst C. erikssonii and C. lombardiae are synonyms of C. roseum.
References
Fungi described in 1794
Corticiales
Taxa named by Christiaan Hendrik Persoon
Fungus species | Corticium roseum | [
"Biology"
] | 225 | [
"Fungi",
"Fungus species"
] |
11,776,355 | https://en.wikipedia.org/wiki/Tyromyces%20chioneus | Tyromyces chioneus, commonly known as the white cheese polypore, is a species of polypore fungus. A widely distributed fungus, it has a circumpolar distribution, in temperate boreal pine forests, of Asia, Europe, and North America, causes white rot in dead hardwood trees, especially birch.
Taxonomy
The species was first described as Polyporus chioneus by Elias Fries in 1815. It was transferred to the genus Tyromyces by Petter Karsten in 1881.
Tyromyces chioneus is the type species of Tyromyces. The specific epithet chioneus means "snow", referring to its white color. It is commonly known as the "white cheese polypore".
Description
The fruit bodies are semicircular to fan-shaped brackets that measure up to broad by wide, with a thickness of . The upper surface is initially white before aging to yellowish or grayish, and has a texture ranging from smooth to tomentose. The undersurface features white to cream-colored, round to angular pores measuring 3–4 per millimeter. The flesh is soft and fleshy when young, but becomes hard and brittle in age or when dry. It has a mild or indistinct taste, and a pleasant odor.
It has a white spore print, and the spores are smooth, cylindrical, hyaline (translucent), with dimensions of 4–5 by 1.5–2 μm. The basidia are club-shaped, four-spored, and measure 10–15 by 4–5 μm; they have a clamp at their base. The hyphal system is dimitic, consisting of generative and skeletal hyphae. The generative hyphae have clamps and are intricately branched. The skeletal hyphae, in contrast, are thick-walled, rarely branched, and measure 2–4.5 μm in diameter. Although cystidia are absent from the hymenium, there are fused cystidioles (immature cystidia) measuring 15–20 by 4–5 μm.
The species is inedible.
Habitat and distribution
Tyromyces chioneus causes white rot in dead hardwood trees. Its most common host is birch. The species has a circumpolar distribution, in temperate boreal pine forests, including Asia, Europe, and North America. In Greenland, it is common on Betula pubescens.
Chemistry
Cultures of the fungus have been shown to contain a sesquiterpene with anti-HIV activity in laboratory experiments.
References
Fungi described in 1815
Fungi of Europe
Fungi of North America
Fungal tree pathogens and diseases
Inedible fungi
chioneus
Taxa named by Elias Magnus Fries
Fungus species | Tyromyces chioneus | [
"Biology"
] | 567 | [
"Fungi",
"Fungus species"
] |
11,776,982 | https://en.wikipedia.org/wiki/Salinosporamide%20A | Salinosporamide A (Marizomib) is a potent proteasome inhibitor being studied as a potential anticancer agent. It entered phase I human clinical trials for the treatment of multiple myeloma, only three years after its discovery in 2003. This marine natural product is produced by the obligate marine bacteria Salinispora tropica and Salinispora arenicola, which are found in ocean sediment. Salinosporamide A belongs to a family of compounds, known collectively as salinosporamides, which possess a densely functionalized γ-lactam-β-lactone bicyclic core.
History
Salinosporamide A was discovered by William Fenical and Paul Jensen from Scripps Institution of Oceanography in La Jolla, CA. In preliminary screening, a high percentage of the organic extracts of cultured Salinispora strains possessed antibiotic and anticancer activities, which suggests that these bacteria are an excellent resource for drug discovery. Salinispora strain CNB-392 was isolated from a heat-treated marine sediment sample and cytotoxicity-guided fractionation of the crude extract led to the isolation of salinosporamide A. Although salinosporamide A shares an identical bicyclic ring structure with omuralide, it is uniquely functionalized. Salinosporamide A displayed potent in vitro cytotoxicity against HCT-116 human colon carcinoma with an IC50 value of 11 ng/mL. This compound also displayed potent and highly selective activity in the NCI's 60-cell-line panel with a mean GI50 value (the concentration required to achieve 50% growth inhibition) of less than 10 nM and a greater than 4-log LC50 differential between resistant and susceptible cell lines. The greatest potency was observed against NCI-H226 non-small cell lung cancer, SF-539 brain tumor, SK-MEL-28 melanoma, and MDA-MB-435 melanoma (formerly misclassified as breast cancer), all with LC50 values less than 10 nM. Salinosporamide A was tested for its effects on proteasome function because of its structural relationship to omuralide. When tested against purified 20S proteasome, salinosporamide A inhibited proteasomal chymotrypsin-like proteolytic activity with an IC50 value of 1.3 nM. This compound is approximately 35 times more potent than omuralide, which was tested as a positive control in the same assay. Thus, the unique functionalization of the core bicyclic ring structure of salinosporamide A appears to have resulted in a molecule that is a significantly more potent proteasome inhibitor than omuralide.
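For orientation, the stated 35-fold difference together with the measured IC50 implies an approximate value for omuralide (a back-of-the-envelope check; the omuralide IC50 is not stated explicitly above):

$$\mathrm{IC}_{50}(\text{omuralide}) \approx 35 \times \mathrm{IC}_{50}(\text{salinosporamide A}) \approx 35 \times 1.3\,\mathrm{nM} \approx 46\,\mathrm{nM}.$$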
Mechanism of action
Salinosporamide A inhibits proteasome activity by covalently modifying the active site threonine residues of the 20S proteasome.
Biosynthesis
It was originally hypothesized that salinosporamide B was a biosynthetic precursor to salinosporamide A due to their structural similarities.
It was thought that the halogenation of the unactivated methyl group was catalyzed by a non-heme iron halogenase. Recent work using 13C-labeled feeding experiments reveal distinct biosynthetic origins of salinosporamide A and B.
While they share the biosynthetic precursors acetate and presumed β-hydroxycyclohex-2'-enylalanine (3), they differ in the origin of the four-carbon building block that gives rise to their structural differences involving the halogen atom. A hybrid polyketide synthase-nonribosomal peptide synthetase (PKS-NRPS) pathway is most likely the biosynthetic mechanism in which acetyl-CoA and butyrate-derived ethylmalonyl-CoA condense to yield the β-ketothioester (4), which then reacts with (3) to generate the linear precursor (5).
Total synthesis
The first stereoselective synthesis was reported by Rajender Reddy Leleti and E. J. Corey. Several routes to the total synthesis of salinosporamide A have since been reported.
Clinical study
In vitro studies using purified 20S proteasomes showed that salinosporamide A has lower EC50 for trypsin-like (T-L) activity than does bortezomib. In vivo animal model studies show marked inhibition of T-L activity in response to salinosporamide A, whereas bortezomib enhances T-L proteasome activity.
Initial results from early-stage clinical trials of salinosporamide A in relapsed/refractory multiple myeloma patients were presented at the 2011 American Society of Hematology annual meeting. Further early-stage trials of the drug in a number of different cancers are ongoing.
References
External links
Gamma-lactams
Lactones
Antibiotics
Proteasome inhibitors
Experimental cancer drugs
Total synthesis
Secondary alcohols
Cyclohexenes
Chloroethyl compounds
Chlorine-containing natural products | Salinosporamide A | [
"Chemistry",
"Biology"
] | 1,088 | [
"Biotechnology products",
"Antibiotics",
"Chemical synthesis",
"Total synthesis",
"Biocides"
] |
11,777,319 | https://en.wikipedia.org/wiki/C2Cl4O | {{DISPLAYTITLE:C2Cl4O}}
The molecular formula C2Cl4O may refer to:
Tetrachloroethylene oxide
Trichloroacetyl chloride | C2Cl4O | [
"Chemistry"
] | 41 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
11,777,722 | https://en.wikipedia.org/wiki/PhilNITS | The Philippine National Information Technology Standards Foundation, Inc., or PhilNITS, is a non-stock, non-profit, non-government organization that is implementing in the Philippines the Information Technology standards adopted from Japan, with the support of the Department of Trade and Industry (DTI) of the Philippines and the Ministry of Economy, Trade and Industry (METI) of Japan.
History
PhilNITS was initially known as the Japanese IT Standards Exams of the Philippines Foundation, Inc. (JITSE-Phil), and was registered as such with the Securities and Exchange Commission on April 10, 2002, setting up its office at the penthouse of the Prince Building on Rada Street, Legazpi Village, Makati.
A week after its incorporation, the Japan IT Engineers Examination Center (JITEC) represented by its president, Mr. Takao Tominaga, signed a mutual recognition agreement (MRA) with the JITSE-Phil Foundation, represented by its Founding President, Ms. Ma. Corazon M. Akol in ceremonies held at the Makati Shangri-La, Manila Hotel and witnessed by Ambassador Ara of Japan, Secretary Mar Roxas of the Department of Trade and Industry, Chairman Virgilio Pena of Information Technology and E-Commerce Council (now Commission on Information and Communications Technology), Mr. Yoshikai, Deputy Director General of the Ministry of Economy, Trade and Industry, and Mr. Sakai, Commercial Attache of Japan to the Philippines.
DTI started to support JITSE-Phil in 2003 by providing it with its office space at the Oppen Building in Makati, in 2005, at the WDC Building in Cebu City and in 2007, at the Mintrade Bldg. in Davao City.
With the MRA between JITEC and JITSE-Phil, it was able to receive technical support from Japan. JITEC has been providing guidance, training, necessary hardware/software programs, and the documentation required in implementing the Standards on the Fundamentals of IT (FE) and in Software Design and Development (SW).
On May 29, 2003, the Bureau of Product Standards (BPS) of the Department of Trade and Industry (DTI), after due consultation with the National Computer Center (NCC), accepted JITSE as Philippine National Standard PNS 2030:2003, Information Technology Engineers Skills Standards.
On May 30, 2003, after an evaluation of the results of the First Certification Exams conducted by JITSE-Phil on the Fundamentals of IT Engineers (FE), the Ministry of Justice of Japan recognized JITSE as the Philippine Nihon Joho Gijutsu Hyojun Shiken Zaidan. With this official recognition by the Ministry of Justice, the FE Certificate, more popularly called the JITSE Certificate, can be used as a valid document for processing the work visas of IT professionals bound for Japan.
With the Asia IT Initiative Program (AITI) of METI, JITSE-Phil was able to receive technical support from Japan through the Japan External Trade Organization (JETRO), the Association for Overseas Technical Scholarships (AOTS) and the Center of the International Cooperation on Computerization (CICC).
JETRO has provided JITSE-Phil since 2003, through the Japan Expert Service Abroad (JEXSA) Project, technical experts and the training facilities in its offices in Makati, Cebu, and Davao. AOTS has provided Training Courses in the Philippines (in the various offices of JITSE-Phil as well as in some schools in provinces where JITSE-Phil has no Training Center) and have awarded scholarships for training in Japan.
On August 31, 2004, JITSE-Phil changed its name to PhilNITS Foundation to correct the misconception that the standards were being implemented only for the Japanese market, when they were intended for Asia as a whole.
At the ITEE Conference held at the AOTS Yokohama Kenshu Center, JITEC and the six organizations with mutual recognition agreements (MRAs) with JITEC decided to form the Information Technology Professional Examination Council, or ITPEC. The members of ITPEC are: the Japan Information Technology Examination Center (JITEC) of Japan, the Multimedia Technology Enhancement & Operations Sendirian Berhad (METEOR) of Malaysia, the Myanmar Computer Federation (MCF) of Myanmar, the Japan-Mongolian Information Technology Association (JMITA) of Mongolia (now replaced by the National IT Park, NITP), the Philippine National IT Standards Foundation (PhilNITS) of the Philippines, the National Electronics and Computer Technology Center (NECTEC) of Thailand, and the Vietnam Information Technology Examination and Training Support Center (VITEC) of Vietnam. Under this agreement, the members decided to hold a common exam on the same agreed-upon date and time, and to recognize each other's certificates. ITPEC members have been using the same logo to raise public recognition of the examination and have adopted a common marketing strategy. JITEC-IPA expects to establish multilateral mutual recognition agreements and transform ITPEC into a fully Asia-wide organization.
Through Grants from AOTS, PhilNITS has trained 8,174 IT Professionals in the Philippines and has sent 197 scholars to Japan. CICC has provided several Training programs as well as an e Learning System consisting of the hardware (2 servers and 4 terminals), the software (that can accommodate a maximum of 2000 users) and contents consisting of 24 modules developed by JITEC and CICC and 1 module developed by the Thomson Learning Center, donated by Fujitsu to PhilNITS. All the modules are made available to the public, 24 hours a day, 7 days a week through a subscription fee of ₱500.00 per month.
Current Certifications
The ITPEC certification exams (known locally as PhilNITS certification exams) are administered as described below. Currently there are three levels of examination administered by PhilNITS. Topics covered by the exams are technology, strategy and management. IT professionals who pass these certification examinations are certified for life. An IT professional may directly take whichever certification level he or she wants. There is no limit to the number of times the examinations may be taken until they are passed.
ITPEC Fundamentals of IT Passport Exam (IP Exam, Level 1)
Known locally as PhilNITS IP, the Information Technology Passport Exam is for individuals who have the basic IT knowledge that all business workers should commonly possess, and who are doing information-technology-related tasks or trying to utilize IT in their work. The exam duration is 120 minutes, or 2 hours (conducted during a morning schedule). It consists of 100 multiple-choice questions (one answer from four choices), broken down into two types: short questions (one question per item, 88 questions) and medium questions (four questions per item, 12 questions across 3 items).
The pilot examination was conducted on March 28, 2010. The first regular exam was conducted on October 24, 2010. This exam is conducted twice a year, on the last Sunday of April and the last Sunday of October.
ITPEC Fundamentals of IT Engineers Exam (FE Exam, Level 2)
Known locally as PhilNITS FE, this exam is conducted twice a year, on the last Sunday of April and the last Sunday of October. The 300-minute (150 minutes in the morning and 150 minutes in the afternoon) multiple-choice examination is administered in 10 exam centers in the Philippines: University of Baguio for the north Luzon area, Philippine Christian University in Manila, Ateneo de Naga University for the Bicol Region, University of San Carlos in Cebu and Holy Name University in Bohol, Lorma Colleges in La Union and Leyte Academic Center for the Visayas region, and for Mindanao: Capitol University in Cagayan de Oro, Ateneo de Zamboanga University in Zamboanga City and the University of the Immaculate Conception in Davao City.
The Fundamental IT Engineers Exam is for individuals who have the fundamental knowledge and skills required to be an advanced IT human resource, and who possess practical utilization abilities. Those who fail either the morning or afternoon part of the exam are given another chance to take a removal exam, after which they will have to take the entire test again. This means two chances to pass the exam.
ITPEC Applied Information Technology Exam (AP Exam, Level 3)
Known locally as PhilNITS AP, this exam is conducted once a year on the last Sunday of October. The multiple-choice exam runs for 300 minutes (150 minutes in the morning and 150 minutes in the afternoon). This examination is for individuals who have the applied knowledge and skills required to be an advanced IT human resource, and who have established their own direction as an advanced IT human resource. It is ideally taken by people who have at least two years of work experience.
Other projects
Conducting free training courses on IP, FE & AP/SW for teachers and commercial trainers. Usually funded by grants from AOTS, CICC, IPA, METI, and JETRO.
Conducting free summer training of teachers, a joint undertaking with the Philippine Society of IT Educators (PSITE) and the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU). PAASCU is providing the venue and PhilNITS is providing the lecturers (from the PhilNITS Society).
Providing training using the e-learning system donated by Japan.
Guiding and helping in the development of the PhilNITS Society, an organization formed whose only criterion for membership is being PhilNITS-certified, in any exam category.
Conducting software and hardware training. Customized training and assessment also available.
Conducting systems development for outsourced projects. Work is done by bridge software engineers who are proficient in Nihongo (Japanese).
Board
Officers of PhilNITS are:
Ms. Ma. Corazon M. Akol – President
Mr. Peter Que Jr. – VP for Operations
Mr. Shinichiro Kato – VP for Finance
Ms. Flora Capili – Secretary.
See also
ITPEC - East Asia
Open University Malaysia METEOR - Malaysia
NECTEC - Thailand
References
External links
Information technology qualifications
Organizations based in Metro Manila
Science and technology in the Philippines | PhilNITS | [
"Technology"
] | 2,063 | [
"Computer occupations",
"Information technology qualifications"
] |
11,777,993 | https://en.wikipedia.org/wiki/Fuse%20plug | A fuse plug is a collapsible dam installed on spillways in dams to increase the dam's capacity.
The principle behind the fuse plug is that the majority of water that overflows a dam's spillway can be safely dammed except in high flood conditions. The fuse plug may be a sand-filled container, a steel structure or a concrete block. Under normal flow conditions the water will spill over the fuse plug and down the spillway. In high flood conditions, where the water velocity may be so high that the dam itself may be put in danger, the fuse plug simply washes away, and the flood waters safely spill over the dam.
Fuse plugs are used in many dams throughout the world. For example, the Warragamba Dam in New South Wales has fuse plugs that are approximately 14 m high.
References
Dams | Fuse plug | [
"Engineering"
] | 175 | [
"Civil engineering",
"Civil engineering stubs"
] |
11,778,031 | https://en.wikipedia.org/wiki/Sodium%20sulfate%20%28data%20page%29 | This page provides supplementary chemical data on sodium sulfate.
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions. It is highly recommended that you obtain the Safety Data Sheet (SDS) for this chemical from the manufacturer and follow its directions.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Sodium sulfate (data page) | [
"Chemistry"
] | 73 | [
"Chemical data pages",
"nan"
] |
11,778,236 | https://en.wikipedia.org/wiki/OLPC%20XO | The OLPC XO (formerly known as $100 Laptop, Children's Machine, 2B1) is a low cost laptop computer intended to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to "explore, experiment and express themselves" (constructionist learning). The XO was developed by Nicholas Negroponte, a co-founder of MIT's Media Lab, and designed by Yves Behar's Fuseproject company. The laptop is manufactured by Quanta Computer and developed by One Laptop per Child (OLPC), a non-profit 501(c)(3) organization.
The subnotebooks were designed for sale to government education systems, which would then give each primary school child their own laptop. Pricing was set to start at US$188 in 2006, with a stated goal to reach the $100 mark in 2008 and the $50 mark by 2010. When offered for sale in the Give One Get One campaigns of Q4 2007 and Q4 2008, the laptop was sold at $199.
The rugged, low-power computers use flash memory instead of a hard disk drive (HDD), and come with a pre-installed operating system derived from Fedora Linux, with the Sugar graphical user interface (GUI). Mobile ad hoc networking via 802.11s Wi-Fi mesh networking, to allow many machines to share Internet access as long as at least one of them could connect to an access point, was initially announced, but quickly abandoned after proving unreliable.
The latest version of the OLPC XO is the XO-4 Touch, which was introduced in 2012.
History
The first early prototype was unveiled by the project's founder Nicholas Negroponte and then-United Nations Secretary-General Kofi Annan on November 16, 2005, at the World Summit on the Information Society (WSIS) in Tunis, Tunisia. The device shown was a rough prototype using a standard development board. Negroponte estimated that the screen alone required three more months of development. The first working prototype was demonstrated at the project's Country Task Force Meeting on May 23, 2006.
Steve Jobs had offered Mac OS X free of charge for use in the laptop, but according to Seymour Papert, a professor emeritus at MIT who is one of the initiative's founders, the designers wanted an operating system that can be tinkered with: "We declined because it's not open source." Therefore, Linux was chosen.
In 2006, Microsoft suddenly developed an interest in the XO project and wanted the formerly open-source effort to run Windows. Negroponte agreed to provide engineering assistance to Microsoft to facilitate their efforts. During this time, the project mission statement changed to remove mentions of "open source". A number of developers, such as Ivan Krstić and Walter Bender, resigned because of these changes in strategy. The version of Windows that ran on the XO was Windows XP.
Approximately 400 developer boards (Alpha-1) were distributed in mid-2006; 875 working prototypes (Beta 1) were delivered in late 2006; 2400 Beta-2 machines were distributed at the end of February 2007; full-scale production started November 6, 2007. Quanta Computer, the project's contract manufacturer, said in February 2007 that it had confirmed orders for one million units. Quanta indicated that it could ship five million to ten million units that year because seven nations had committed to buy the XO-1 for their schoolchildren: Argentina, Brazil, Libya, Nigeria, Rwanda, Thailand, and Uruguay. Quanta plans to offer machines very similar to the XO-1 on the open market.
The One Laptop Per Child project originally stated that a consumer version of the XO laptop was not planned. In 2007, the project established a website, laptopgiving.org, for outright donations and for a "Give 1 Get 1" offer valid (but only to the United States, its territories, and Canadian addresses) from November 12, 2007 until December 31, 2007. For each computer purchased at a cost of $399, an XO is also sent to a child in a developing nation. OLPC again restarted the G1G1 program through Amazon.com in November 2008, but has since stopped as of December 2008 or 2009.
On May 20, 2008, OLPC announced the next generation of XO, OLPC XO-2 which was thereafter cancelled in favor of the tablet-like designed XO-3. In late 2008, the New York City Department of Education began a project to purchase large numbers of XO computers for use by schoolchildren.
The design received the Community category award of the 2007 Index: Award.
In 2008 the XO was awarded London's Design Museum "Design of the Year", plus two gold, one silver, and one bronze award at the Industrial Design Society of America's International Design Excellence Awards (IDEAs).
Goals
The XO-1 is designed to be low-cost, small, durable, and efficient. It is shipped with a slimmed-down version of Fedora Linux and a custom GUI named Sugar that is intended to help young children collaborate. The XO-1 includes a video camera, a microphone, long-range Wi-Fi, and a hybrid stylus and touchpad. Along with a standard plug-in power supply, human and solar power sources are available, allowing operation far from a commercial power grid. Mary Lou Jepsen has listed the design goals of the device as follows:
Minimal power use, with a design target of 2–3 Watts (W) total
Minimal production cost, with a target of $100 per laptop for production runs of millions of units
A "cool" look, implying innovative styling in its physical appearance
E-book function
Open source and free software provided with the laptop
In keeping with its goals of robustness and low power use, the design of the laptop intentionally omits all motor-driven moving parts; it has no hard disk drive, optical (compact disc (CD) or digital versatile disc (DVD)) media, floppy disk drive, or fan (the device is passively cooled). No Serial ATA interface is needed due to the lack of a hard drive. Storage is via an internal SD card slot. There is also no PC card slot, although Universal Serial Bus (USB) ports are included.
A built-in hand-crank generator was part of the notebook in the original design; however, it is now an optional clamp-on peripheral.
Hardware
Display
1200 × 900 7.5 inch (19 cm) diagonal transflective LCD (200 dpi) that uses 0.1 to 1.0 W depending on mode. The two modes are:
Reflective (backlight off) monochrome mode for low-power use in sunlight. This mode provides very sharp images for high-quality text
Backlit color mode, with an alternation of red, green and blue pixels
XO 1.75 developmental version for XO-3 has an optional touch screen
The first-generation OLPC laptops have a novel low-cost liquid crystal display (LCD).
The electronic visual display is the costliest component in most laptops. In April 2005, Negroponte hired Mary Lou Jepsen, who was interviewing to join the Media Arts and Sciences faculty at the MIT Media Lab in September 2008, as OLPC Chief Technology Officer. Jepsen developed a new display for the first-generation OLPC laptop, inspired by the design of small LCDs used in portable DVD players, which she estimated would cost about $35. In the OLPC XO-1, the screen is estimated to be the second most costly component, after the central processing unit (CPU) and chipset.
Jepsen has described the removal of the filters that color the RGB subpixels as the critical design innovation in the new LCD. Instead of using subtractive color filters, the display uses a plastic diffraction grating and lenses on the rear of the LCD to illuminate each pixel. This grating pattern is stamped using the same technology used to make DVDs. The grating splits the light from the white backlight into a spectrum. The red, green, and blue components are diffracted into the correct positions to illuminate the corresponding pixel with R, G or B. This innovation results in a much brighter display for a given amount of backlight illumination: while the color filters in a regular display typically absorb 85% of the light that hits them, this display absorbs little of that light. Most LCD screens at the time used cold cathode fluorescent lamp backlights which were fragile, difficult or impossible to repair, required a high voltage power supply, were relatively power-hungry, and accounted for 50% of the screens' cost (sometimes 60%). The light-emitting diode (LED) backlight in the XO-1 is easily replaceable, rugged, and low-cost.
The remainder of the LCD uses extant display technology and can be made using extant manufacturing equipment. Even the masks can be made using combinations of extant materials and processes.
When lit primarily from the rear with the white LED backlight, the display shows a color image composed of both RGB and grayscale information. When lit primarily from the front by ambient light, for example from the sun, the display shows a monochromatic (black and white) image composed of just the grayscale information.
"Mode" change occurs by varying the relative amounts backlight and ambient light. With more backlight, a higher chrominance is available and a color image display is seen. As ambient light levels, such as sunlight, exceed the backlight, a grayscale display is seen; this can be useful when reading e-books for an extended time in bright light such as sunlight. The backlight brightness can also be adjusted to vary the level of color seen in the display and to conserve battery power.
In color mode (when lit primarily from the rear), the display does not use the common RGB pixel geometry for liquid crystal computer displays, in which each pixel contains three tall thin rectangles of the primary colors. Instead, the XO-1 display provides one color for each pixel. The colors align along diagonals that run from upper-right to lower left (see diagram on the right). To reduce the color artifacts caused by this pixel geometry, the color component of the image is blurred by the display controller as the image is sent to the screen. Despite the color blurring, the display still has high resolution for its physical size; normal displays put about 588(H) × 441(V) to 882(H) × 662(V) pixels in this amount of physical area and support subpixel rendering for slightly higher perceived resolution. A Philips Research study measured the XO-1 display's perceived color resolution as effectively 984(H) × 738(V). A conventional liquid crystal display with the same number of green pixels (green carries most brightness or luminance information for human eyes) as the OLPC XO-1 would be 693 × 520. Unlike a standard RGB LCD, resolution of the XO-1 display varies with angle. Resolution is greatest from upper-right to lower left, and lowest from upper-left to lower-right. Images which approach or exceed this resolution will lose detail and gain color artifacts. The display gains resolution when in bright light; this comes at the expense of color (as the backlight is overpowered) and color resolution can never reach the full 200 dpi sharpness of grayscale mode because of the blur which is applied to images in color mode.
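The diagonal color layout can be illustrated with a small sketch. This is not OLPC's display-controller code, and the phase of the pattern (which color falls on which diagonal) is an assumption; the point is only that colors repeat along lines of constant x + y, which run from upper right to lower left as described above.

```python
# Sketch of the XO-1's one-color-per-pixel geometry (illustrative only):
# each pixel shows a single color, and the colors repeat along
# anti-diagonals, i.e. along lines of constant x + y.

def xo_pixel_color(x: int, y: int) -> str:
    """Return the single color channel assigned to pixel (x, y)."""
    return "RGB"[(x + y) % 3]  # assumed phase; the real panel may differ

def render_layout(width: int = 9, height: int = 4) -> None:
    """Print the color assignment for a small patch of the screen."""
    for y in range(height):
        print(" ".join(xo_pixel_color(x, y) for x in range(width)))

if __name__ == "__main__":
    render_layout()
    # Each row shifts by one color relative to the row above, so every
    # anti-diagonal carries a single color, matching the description.
```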
Power
DC input, ±11–18 V, maximum 15 W power draw
5-cell rechargeable NiMH battery pack, 3000 mAh minimum, 3050 mAh typical, 80% usable; charge at 0–45 °C (deprecated in 2009)
2-cell rechargeable LiFePO4 battery pack, 2800 mAh minimum, 2900 mAh typical, 100% usable; charge at 0–60 °C
4-cell rechargeable LiFePO4 battery pack, 3100 mAh minimum, 3150 mAh typical, 100% usable; charge at −10–50 °C
External manual power options included a clamp-on crank generator similar to the original built-in one, but they generated a quarter of the power initially hoped for, and fewer than a thousand were produced. A pull-string generator was also designed by Potenco but never mass-produced.
External power options include 110–240 Volt AC and input from an external solar panel. Solar is the predominant alternate power source for schools using XOs.
The laptop's design specification targets about 2 W of power consumption during normal use, far less than the 10 W to 45 W of conventional laptops. With build 656, power use is between 5 and 8 watts as measured on a G1G1 laptop. Future software builds are expected to meet the 2-watt target.
In e-book mode (XO 1.5), all hardware sub-systems except the monochrome dual-touch display are powered down. When the user moves to a different page, the other systems wake up, render the new page on the display, and then go back to sleep. Power use in this e-book mode is estimated to be 0.3 to 0.8 W. The XO 2.0 is planned to consume even less power than earlier versions, less than 1.0 W in full color mode.
Power options include batteries, solar power panels, and human-powered generators, which make the XO self-powered equipment. 10 batteries at once can be charged from the school building power in the XO multi-battery charger. The low power use, combined with these power options are useful in many countries that lack a power infrastructure.
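To connect the battery figures above with the quoted battery life, here is a rough runtime estimate. The nominal pack voltage is an assumption (it is not given in the specification), and real runtimes vary with load, temperature, and conversion losses.

```python
# Rough runtime estimate from the figures above (a sketch, not OLPC data).
# Assumes a nominal LiFePO4 cell voltage of ~3.2 V, so a 2-cell pack at
# ~6.4 V; both values are assumptions for illustration.

NOMINAL_PACK_VOLTAGE_V = 6.4   # assumed 2-cell LiFePO4 pack voltage
CAPACITY_AH = 2.8              # 2800 mAh minimum, 100% usable

def runtime_hours(draw_watts: float) -> float:
    energy_wh = NOMINAL_PACK_VOLTAGE_V * CAPACITY_AH  # about 17.9 Wh
    return energy_wh / draw_watts

for watts in (2.0, 5.0, 8.0):  # design target vs. measured build-656 range
    print(f"{watts:.0f} W draw -> ~{runtime_hours(watts):.1f} h")
# Roughly 9 h at the 2 W target and 2-4 h at the measured 5-8 W, which is
# consistent with the 3-5 hour average battery life quoted earlier.
```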
Networking
Wireless networking using an "Extended Range" 802.11b/g and 802.11s (mesh) Marvell 8388 wireless chip, chosen due to its ability to autonomously forward packets in the mesh even if the CPU is powered off. When connected in a mesh, it is run at a low bitrate (2 Mbit/s) to minimize power use. Despite the wireless chip's minimalism, it supports Wi-Fi Protected Access (WPA). An ARM processor is included.
Dual adjustable antennas for diversity reception.
IEEE 802.11b support will be provided using a Wi-Fi "Extended Range" chip set. Jepsen has said the wireless chip set will be run at a low bit rate, 2 Mbit/s maximum rather than the usual higher speed 5.5 Mbit/s or 11 Mbit/s to minimize power use. The conventional IEEE 802.11b system only handles traffic within a local cloud of wireless devices in a manner similar to an Ethernet network. Each node transmits and receives its own data, but it does not route packets between two nodes that cannot communicate directly. The OLPC laptop will use IEEE 802.11s to form the wireless mesh network.
Whenever the laptop is powered on it can participate in a mobile ad hoc network (MANET), with each node operating in a peer-to-peer fashion with other laptops it can hear, forwarding packets across the cloud. If a computer in the cloud has access to the Internet, either directly or indirectly, then all computers in the cloud are able to share that access. The data rate across this network will not be high; however, similar networks, such as the store-and-forward Motoman project, have supported email services for 1000 schoolchildren in Cambodia, according to Negroponte. The data rate should be sufficient for asynchronous network applications (such as email) to communicate outside the cloud; interactive uses, such as web browsing, or high-bandwidth applications, such as video streaming, should be possible inside the cloud. IP assignment for the meshed network is intended to be automatically configured, so no server administrator or administration of IP addresses is needed.
Building a MANET is still untested under the OLPC's current configuration and hardware environment. Although one goal of the laptop is that all of its software be open source, the source code for this routing protocol is currently closed source. While there are open-source alternatives such as OLSR or B.A.T.M.A.N., none of these options is yet available running at the data-link layer (Layer 2) on the Wi-Fi subsystem's co-processor; this is critical to OLPC's power efficiency scheme. Whether Marvell Technology Group, the producer of the wireless chip set and owner of the current meshing protocol software, will make the firmware open source is still an unanswered question. As of 2011, it has not done so.
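The access-sharing idea, where one laptop reaching an access point gives the whole cloud Internet access, amounts to graph reachability. The following Python sketch is illustrative only: the node names and topology are hypothetical, and this is not the Marvell firmware's actual routing protocol.

```python
# Sketch: every laptop connected (directly or via hops) to a gateway node
# can share that gateway's Internet access.

from collections import deque

def internet_capable(links: dict, gateways: set) -> set:
    """Breadth-first search from the gateway nodes across mesh links."""
    reached, queue = set(gateways), deque(gateways)
    while queue:
        node = queue.popleft()
        for neighbor in links.get(node, set()):
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# Hypothetical topology: only laptop A hears the access point.
links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(internet_capable(links, gateways={"A"}))  # {'A', 'B', 'C', 'D'}
```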
Shell
Yves Behar is the chief designer of the present XO shell. The shell of the laptop is resistant to dirt and moisture, and is constructed with 2 mm thick plastic (50% thicker than typical laptops). It contains a pivoting, reversible display, movable rubber Wi-Fi antennas, and a sealed rubber-membrane keyboard.
Input and ports
Water-resistant membrane keyboard, customized to the locale in which it will be distributed. The multiplication and division symbols are included. The keyboard is designed for the small hands of children.
Five-key cursor-control pad; four directional keys plus Enter
Four "Game Buttons" (functionally PgUp, PgDn, Home, and End) modeled after the PlayStation Controller layout (, , , and ).
Touchpad for mouse control and handwriting input
Built-in color camera, to the right of the display, VGA resolution (640×480)
Built-in stereo speakers
Built-in microphone
Audio based on the AC'97 codec, with jacks for external stereo speakers and microphones, Line-out, and Mic-in
Three external USB 2.0 ports.
More than twenty different keyboards have been laid out to match the standard keyboard of each country in which the laptop is intended to be distributed. Around half of these have been manufactured for prototype machines. Some parts of the world have no standard keyboard representing their language; as Negroponte states, this is "because there's no real commercial interest in making a keyboard". One example of where the OLPC has bridged this gap is in creating an Amharic keyboard for Ethiopia. For several languages, the keyboard is the first ever created for that language.
Negroponte has demanded that the keyboard not contain a caps lock key, which frees up keyboard space for new keys such as a future "view source" key.
Beneath the keyboard was a large area that resembled a very wide touchpad. The capacitive portion of the mousepad was an Alps GlidePoint touchpad, which was in the central third of the sensor and could be used with a finger. The full width was a resistive sensor which, though never supported by software, was intended to be used with a stylus. This unusual feature was eliminated in the CL1A hardware revision because it suffered from erratic pointer motion. Alps Electronics provided both the capacitive and resistive components of the mousepad.
Release history
The first XO prototype, displayed in 2005, had a built-in hand-crank generator for charging the battery. The XO-1 beta, released in early 2007, used a separate hand-crank generator.
The XO-1 was released in late 2007.
Power option: solar panel.
CPU: 433 MHz IA-32 x86 AMD Geode LX-700 at 0.8 watts, with integrated graphics controller
256 MB of Dual (DDR266) 133 MHz DRAM (in 2006 the specification called for 128 MB of RAM)
1024 kB (1 MB) flash ROM with open-source Open Firmware
1024 MB of SLC NAND flash memory (in 2006 the specifications called for 512 MB of flash memory)
Average battery life three hours
The XO 1.5 was released in early 2010.
Via/x86 CPU 4.5 W
Fewer physical parts
Lower power use
Power option: solar panel.
CPU: 400 to 1000 MHz IA-32 x86 VIA C7 at 0.8 watts, with integrated graphics controller
512 to 1024 MB of Dual (DDR266) 133 MHz DRAM
1024 kB (1 MB) flash ROM with open-source Open Firmware
4 GB of SLC NAND flash memory (upgradable, microSD)
Average battery life 3–5 hours (varies with active suspend)
The XO 1.75 began development in 2010, with full production starting in February 2012.
2 watt ARM CPU
Fewer physical parts, 40% lower power use.
Power option: solar panel.
CPU: 400 to 1000 MHz ARM Marvell Armada 610 at 0.8 watts, with integrated graphics controller
1024 to 2048 MB of DDR3 (TBD)
1024 kB (1 MB) flash ROM (TBD) with open-source Open Firmware
4-8 GB of SLC NAND flash memory (upgradable, microSD)
Accelerometer
Average battery life 5–10 hours
The XO 2, previously scheduled for release in 2010, was canceled in favor of the XO 3. It had an elegant, lighter, folding dual touch-screen design. The hardware would have been open source and sold by various manufacturers, with a choice of operating system (Windows XP or Linux) intended outside the United States. The price target in the United States covered two computers, one of which would be donated.
The OLPC XO-3 was scheduled for release in late 2012 but was canceled in favor of the XO-4. It was to feature a single solid multi-touch color screen and a solar panel in the cover or carrying case.
The XO 4 is a refresh of the XO 1 through 1.75 designs with a later ARM CPU and an optional touch screen. This model will not be available for consumer sales. A mini HDMI port allows connecting an external display.
The XO Tablet was designed by third-party Vivitar, rather than OLPC, and based on the Android platform whereas all previous XO models were based on Sugar running on top of Fedora. It is commercially available and has been used in OLPC projects.
Software
Countries are expected to remove and add software to best adapt the laptop to the local laws and educational needs. As supplied by OLPC, all of the software on the laptop will be free and open source. All core software is intended to be localized to the languages of the target countries. The underlying software includes:
A pared-down version of Fedora Linux as the operating system, with students receiving root access (although not normally operating in that mode).
Open Firmware, written in a variant of Forth
A simple custom web browser based upon the Gecko engine used by Mozilla Firefox.
A word processor based on AbiWord.
Email through the web-based Gmail service.
Online chat and VoIP programs.
Python 2.5 is the primary programming language used to develop Sugar "Activities"; a minimal Activity sketch follows this list. Several other interpreted programming languages are included, such as JavaScript, Csound, the eToys version of Squeak, and Turtle Art.
A music sequencer with digital instruments: Jean Piché's TamTam
Audio and video player software: Totem or Helix.
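As a flavour of what a Sugar "Activity" looked like in code, here is a minimal sketch using the PyGTK-era Python API; the module path (sugar.activity) and the set_canvas() call reflect the pre-sugar3 API of the period and are recalled from its documentation rather than verified, so treat this as illustrative:

```python
# Minimal Sugar Activity sketch (pre-sugar3, PyGTK era). Module and
# method names are recalled from period documentation, not verified.
import gtk
from sugar.activity import activity

class HelloActivity(activity.Activity):
    def __init__(self, handle):
        activity.Activity.__init__(self, handle)
        label = gtk.Label("Hello, XO!")
        self.set_canvas(label)   # set_canvas() installs the Activity's main widget
        label.show()
```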
The laptop uses the Sugar graphical user interface, written in Python, on top of the X Window System and the Matchbox window manager. This interface is not based on the typical desktop metaphor but presents an iconic view of programs and documents and a map-like view of nearby connected users. The current active program is displayed in full-screen mode. Much of the core Sugar interface uses icons, bypassing localization issues. Sugar is also defined as having no folders present in the UI.
Jim Gettys, responsible for the laptops' system software, has called for a re-education of programmers, saying that many applications use too much memory or even leak memory. "There seems to be a common fallacy among programmers that using memory is good: on current hardware it is often much faster to recompute values than to have to reference memory to get a precomputed value. A full cache miss can be hundreds of cycles, and hundreds of times the power use of an instruction that hits in the first level cache."
On August 4, 2006, the Wikimedia Foundation announced that static copies of selected Wikipedia articles would be included on the laptops. Jimmy Wales, chair of the Wikimedia Foundation, said that "OLPC's mission goes hand in hand with our goal of distributing encyclopedic knowledge, free of charge, to every person in the world. Not everybody in the world has access to a broadband connection." Negroponte had earlier suggested he would like to see Wikipedia on the laptop. Wales feels that Wikipedia is one of the "killer apps" for this device.
Don Hopkins announced that he is creating a free and open source port of the game SimCity to the OLPC with the blessing of Will Wright and Electronic Arts, and demonstrated SimCity running on the OLPC at the Game Developer's Conference in March 2007. The free and open source SimCity plans were confirmed at the same conference by SJ Klein, director of content for the OLPC, who also asked game developers to create "frameworks and scripting environments—tools with which children themselves could create their own content."
The laptop's security architecture, known as Bitfrost, was publicly introduced in February 2007. No passwords are required for ordinary use of the machine. Programs are assigned certain bundles of rights at install time which govern their access to resources; users can later add more rights. Optionally, the laptops can be configured to request leases from an OLPC XS central server and to stop working when the leases expire; this is designed as a theft-prevention mechanism.
The pre-8.20 software versions were criticized for bad wireless connectivity and other minor issues.
Deployment
The XO-1 is nicknamed ceibalita in Uruguay after the Ceibal project.
Reception and reviews
The hand-crank system for powering the laptop was abandoned by designers shortly after it was announced, and the "mesh" internet-sharing approach performed poorly and was then dropped. Bill Gates of Microsoft criticized the screen quality.
Some critics of the program would have preferred less money being spent on technology and more money being spent on clean water and "real schools". Some supporters worried about the lack of plans for teaching students. The program was based on constructionism, the idea that, given the tools, children would largely figure out how to do things on their own. Others wanted children to learn the Microsoft Windows operating system rather than OLPC's lightweight Linux derivative, on the belief that the children would use Microsoft Windows in their careers; Intel's competing Classmate PC used Microsoft Windows.
The project was known as "the $100 laptop", but the first version cost $130 for a bare-bones machine, and the price rose to $180 in the next revision. The solid-state alternative to a hard drive was sturdy, which meant the laptop could be dropped with a lower risk of breaking (although more laptops were broken than expected), but it was costly, so the machines had limited storage capacity.
See also
Classmate PC
Comparison of netbooks
Computer technology for developing areas
eMate 300
Digital gap
Lemote
Linutop
OLPC XO-3
PlayPower
Sakshat
Sinomanic
VIA pc-1 Initiative
Zonbu
Notes
References
External links
2005 software
Information and communication technologies for development
Linux-based devices
Mobile computers
One Laptop per Child
Subnotebooks
Quanta Computer
Computer-related introductions in 2005 | OLPC XO | [
"Technology"
] | 5,984 | [
"Information and communications technology",
"Information and communication technologies for development"
] |
11,778,686 | https://en.wikipedia.org/wiki/Security%20appliance | A security appliance is any form of server appliance that is designed to protect computer networks from unwanted traffic.
Types of security appliances
Active devices block unwanted traffic. Examples of such devices are firewalls, antivirus scanning devices, and content filtering devices.
Passive devices detect and report on unwanted traffic, such as intrusion detection appliances.
Preventative devices scan networks and identify potential security problems (such as penetration testing and vulnerability assessment appliances).
Unified Threat Management (UTM) appliances combine features together into one system, such as some firewalls, content filtering, web caching etc.
References
Server appliance | Security appliance | [
"Technology"
] | 126 | [
"Computing stubs",
"Computer network stubs"
] |
11,778,797 | https://en.wikipedia.org/wiki/Zymosan | Zymosan is a beta-glucan with repeating glucose units connected by β-1,3-glycosidic linkages. It binds to TLR 2 and Dectin-1 (CLEC7A). Zymosan is a ligand found on the surface of fungi, like yeast.
Zymosan is prepared from yeast cell walls and consists of protein-carbohydrate complexes. It is used to induce experimental sterile inflammation. In macrophages, zymosan-induced responses include the induction of proinflammatory cytokines, arachidonate mobilization, protein phosphorylation, and inositol phosphate formation. Zymosan A also raises cyclin D2 levels, suggesting a role for the latter in macrophage activation besides proliferation. It potentiates acute liver damage after galactosamine injection, suggesting that certain types of nonparenchymal cells other than Kupffer cells are involved in zymosan action.
References
External links
Polysaccharides | Zymosan | [
"Chemistry"
] | 222 | [
"Carbohydrates",
"Polysaccharides"
] |
11,779,453 | https://en.wikipedia.org/wiki/Etest | Etest (previously known as the Epsilometer test) is a way of determining antimicrobial sensitivity by placing a strip impregnated with antimicrobials onto an agar plate. A strain of bacterium or fungus will not grow near a concentration of antibiotic or antifungal if it is sensitive. For some microbial and antimicrobial combinations, the results can be used to determine a minimum inhibitory concentration (MIC). Etest is a proprietary system manufactured by bioMérieux. It is a laboratory test used in healthcare settings to help guide physicians by indicating what concentration of antimicrobial could successfully be used to treat patients' infections.
Use
Etest is a quantitative technique for determining the MIC of microoganisms. It is used for a range of Gram-negative and Gram-positive bacteria such as Pseudomonas, Staphylococcus, and Enterococcus species, as well as fastidious bacteria, such as Neisseria and Streptococcus pneumoniae. It can also be used to determine MICs against certain fungi.
Etest is a pre-prepared, non-porous plastic reagent strip with a predefined gradient of antibiotic covering a continuous concentration range. It is applied to the surface of an agar plate inoculated with the test strain; the antimicrobial gradient is released from the plastic carrier into the agar, forming a stable and continuous gradient beneath and in the vicinity of the strip.
The time taken for a plate to be ready depends on the microorganism being tested and the conditions of the agar plate. The predefined Etest gradient remains stable for at least 18 to 24 hours, a period that covers the critical growth times of many species of fastidious and non-fastidious organisms.
After incubation, the bacterial growth becomes visible and a symmetrical inhibition ellipse centered along the strip is seen. The MIC value, in μg/mL, is read from the scale at the point where the edge of the inhibition ellipse intersects the strip. The plate should not be read if the culture appears mixed or if the lawn of growth is too light or too heavy.
Etest MIC endpoints are usually clear-cut although different growth/inhibition patterns may be seen depending on the antifungal or antibiotic used.
Selection of agar medium
Etest can be used with many different kinds of AST agar medium as long as the medium supports good growth of the test organism and does not interfere with the activity of the antimicrobial agent. However, to maximise reproducibility, the medium chosen should fulfil the basic requirements for a susceptibility test medium. The following AST media are recommended for use with Etest:
Aerobes: Mueller Hinton agar such as MHE (bioMérieux)
Anaerobes: Brucella blood agar with appropriate supplements
These media may require supplemental nutrients to obtain enhanced growth of nutritionally fastidious organisms such as pneumococci, streptococci, Abiotrophia, Haemophilus, gonococci, meningococci and Campylobacter. In general, media recommendations from the Clinical and Laboratory Standards Institute (CLSI) and European Committee on Antimicrobial Susceptibility Testing (EUCAST) are considered appropriate for Etest.
Etest equipment
The Etest family of instruments is designed to simplify the daily use of Etest. Simplex C76, Nema C88, and Retro C80 reduce operator fatigue, save time, and improve the reproducibility of results. Together with the strips, they provide an efficient method for generating on-scale MIC values across 15 doubling dilutions for susceptibility testing of a wide range of drug–bug combinations, including fastidious organisms.
Simplex C76 automates the placement of 1 to 6 different Etest strips to simplify the setup of MIC panels. Application of up to 6 strips for large agar plates or up to 2 strips on small plates takes <12 seconds.
Retro C80 is a rota-plater that simplifies and standardizes the inoculation of small and large agar plates, making Etest easier to read compared to manual streaking.
Nema C88 is a vacuum pen that simplifies the application of Etest strips. The applicator is held like a pen and the evacuation hole is covered with a fingertip to create suction. The suction cup is placed on the strip to lift it and position it onto the agar surface. The strip is released by removing the fingertip from the evacuation hole.
History
The Etest strip was first described in 1988 and was introduced commercially in 1991 by AB BIODISK. bioMérieux acquired AB BIODISK in 2008 and continues to manufacture and market this product range under the mark Etest.
During the 1950s, Hans Ericsson (professor of microbiology at the Karolinska Hospital and Karolinska Institute, Stockholm), the scientific founder of AB BIODISK, developed a method to standardize the disk diffusion test and to improve its reproducibility and reliability for clinical susceptibility predictions. Inhibition zone sizes from disk test results were compared to MIC values obtained with the reference agar dilution procedure. The correlation between zone sizes and MIC values was assessed using regression analysis, and regression lines were used to extrapolate zone interpretive limits corresponding to the MIC breakpoint values that define susceptible, intermediate, and resistant categorical results.
Etest was first presented at the Interscience Conference on Antimicrobial Agents and Chemotherapy (ICAAC) in Los Angeles in 1988 as a novel gradient concept for MIC determinations. In September 1991, Etest was launched globally as a MIC product after receiving the USA Food and Drug Administration (FDA) clearance.
See also
Antibiotic sensitivity testing
References
Microbiology techniques
Antimicrobial resistance | Etest | [
"Chemistry",
"Biology"
] | 1,277 | [
"Microbiology techniques"
] |
11,779,912 | https://en.wikipedia.org/wiki/Marine%20sediment | Marine sediment, or ocean sediment, or seafloor sediment, are deposits of insoluble particles that have accumulated on the seafloor. These particles either have their origins in soil and rocks and have been transported from the land to the sea, mainly by rivers but also by dust carried by wind and by the flow of glaciers into the sea, or they are biogenic deposits from marine organisms or from chemical precipitation in seawater, as well as from underwater volcanoes and meteorite debris.
Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediment. This material comes from several different sources and is highly variable in composition. Seafloor sediment can range in thickness from a few millimetres to several tens of kilometres. Near the surface seafloor sediment remains unconsolidated, but at depths of hundreds to thousands of metres the sediment becomes lithified (turned to rock).
Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Sediment transported from the land accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher. Biogenous oozes accumulate at a rate of about one centimetre per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years.
Sediments from the land are deposited on the continental margins by surface runoff, river discharge, and other processes. Turbidity currents can transport this sediment down the continental slope to the deep ocean floor. The deep ocean floor undergoes its own process of spreading out from the mid-ocean ridge, and then slowly subducts accumulated sediment on the deep floor into the molten interior of the earth. In turn, molten material from the interior returns to the surface of the earth in the form of lava flows and emissions from deep sea hydrothermal vents, ensuring the process continues indefinitely. The sediments provide habitat for a multitude of marine life, particularly of marine microorganisms. Their fossilized remains contain information about past climates, plate tectonics, ocean circulation patterns, and the timing of major extinctions.
Overview
Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediments. This material comes from several different sources and is highly variable in composition, depending on proximity to a continent, water depth, ocean currents, biological activity, and climate. Seafloor sediments (and sedimentary rocks) can range in thickness from a few millimetres to several tens of kilometres. Near the surface, the sea-floor sediments remain unconsolidated, but at depths of hundreds to thousands of metres (depending on the type of sediment and other factors) the sediment becomes lithified.
The various sources of seafloor sediment can be summarized as follows:
Terrigenous sediment is derived from continental sources transported by rivers, wind, ocean currents, and glaciers. It is dominated by quartz, feldspar, clay minerals, iron oxides, and terrestrial organic matter.
Pelagic carbonate sediment is derived from organisms (e.g., foraminifera) living in the ocean water (at various depths, but mostly near surface) that make their shells (a.k.a. tests) out of carbonate minerals such as calcite.
Pelagic silica sediment is derived from marine organisms (e.g., diatoms and radiolaria) that make their tests out of silica (microcrystalline quartz).
Volcanic ash and other volcanic materials are derived from both terrestrial and submarine eruptions.
Iron and manganese nodules form as direct precipitates from ocean-bottom water.
The distributions of some of these materials around the seas are shown in the diagram at the start of this article. Terrigenous sediments predominate near the continents and within inland seas and large lakes. These sediments tend to be relatively coarse, typically containing sand and silt, but in some cases even pebbles and cobbles. Clay settles slowly in nearshore environments, but much of the clay is dispersed far from its source areas by ocean currents. Clay minerals are predominant over wide areas in the deepest parts of the ocean, and most of this clay is terrestrial in origin. Siliceous oozes (derived from radiolaria and diatoms) are common in the south polar region, along the equator in the Pacific, south of the Aleutian Islands, and within large parts of the Indian Ocean. Carbonate oozes are widely distributed in all of the oceans within equatorial and mid-latitude regions. In fact, clay settles everywhere in the oceans, but in areas where silica- and carbonate-producing organisms are prolific, they produce enough silica or carbonate sediment to dominate over clay.
Carbonate sediments are derived from a wide range of near-surface pelagic organisms that make their shells out of carbonate. These tiny shells, and the even tinier fragments that form when they break into pieces, settle slowly through the water column, but they don't necessarily make it to the bottom. While calcite is insoluble in surface water, its solubility increases with depth (and pressure) and at around 4,000 m, the carbonate fragments dissolve. This depth, which varies with latitude and water temperature, is known as the carbonate compensation depth. As a result, carbonate oozes are absent from the deepest parts of the ocean (deeper than 4,000 m), but they are common in shallower areas such as the mid-Atlantic ridge, the East Pacific Rise (west of South America), along the trend of the Hawaiian/Emperor Seamounts (in the northern Pacific), and on the tops of many isolated seamounts.
Texture
Sediment texture can be examined in several ways. The first is grain size. Sediments can be classified by particle size according to the Wentworth scale. Clay sediments are the finest, with a grain diameter of less than 0.004 mm, and boulders are the largest, with grain diameters of 256 mm or more. Among other things, grain size reflects the conditions under which the sediment was deposited. High-energy conditions, such as strong currents or waves, usually result in the deposition of only the larger particles, as the finer ones will be carried away. Lower-energy conditions allow the smaller particles to settle out and form finer sediments.
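To make the class boundaries concrete, here is a minimal Python sketch of a Wentworth-style grain-size classifier; the millimetre boundaries follow the standard Wentworth scale, and only the major classes are included:

```python
# Minimal Wentworth-scale classifier; boundaries are upper limits in mm.
WENTWORTH_CLASSES = [
    (0.004, "clay"),
    (0.0625, "silt"),
    (2.0, "sand"),
    (4.0, "granule"),
    (64.0, "pebble"),
    (256.0, "cobble"),
]

def wentworth_class(diameter_mm):
    """Return the Wentworth class for a grain of the given diameter."""
    for upper_bound, name in WENTWORTH_CLASSES:
        if diameter_mm < upper_bound:
            return name
    return "boulder"  # grains of 256 mm and larger

print(wentworth_class(0.001))  # clay
print(wentworth_class(300.0))  # boulder
```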
Sorting is another way to categorize sediment texture. Sorting refers to how uniform the particles are in terms of size. If all of the particles are of a similar size, such as in beach sand, the sediment is well-sorted. If the particles are of very different sizes, the sediment is poorly sorted, such as in glacial deposits.
A third way to describe marine sediment texture is its maturity, or how long its particles have been transported by water. One indicator of maturity is how round the particles are: the more mature a sediment, the rounder its particles, as a result of abrasion over time. A high degree of sorting can also indicate maturity, because over time the smaller particles will be washed away, and a given amount of energy will move particles of a similar size over the same distance. Lastly, the older and more mature a sediment, the higher its quartz content, at least in sediments derived from rock particles. Quartz is a common mineral in terrestrial rocks, and it is very hard and resistant to abrasion. Over time, particles made from other materials are worn away, leaving only quartz behind. Beach sand is a very mature sediment; it is composed primarily of quartz, and the particles are rounded and of similar size (well-sorted).
Origins
Marine sediments can also be classified by their source of origin. There are four types:
Lithogenous sediments, also called terrigenous sediments, are derived from preexisting rock and come from land via rivers, ice, wind and other processes. They are referred to as terrigenous sediments since most comes from the land.
Biogenous sediments are composed of the remains of marine organisms; they come from organisms such as plankton when their exoskeletons break down.
Hydrogenous sediments come from chemical reactions in the water, and are formed when materials that are dissolved in water precipitate out and form solid particles.
Cosmogenous sediments are derived from extraterrestrial sources, coming from space, filtering in through the atmosphere or carried to Earth on meteorites.
Lithogenous
Lithogenous or terrigenous sediment is primarily composed of small fragments of preexisting rocks that have made their way into the ocean. These sediments can contain the entire range of particle sizes, from microscopic clays to large boulders, and they are found almost everywhere on the ocean floor. Lithogenous sediments are created on land through the process of weathering, where rocks and minerals are broken down into smaller particles through the action of wind, rain, water flow, temperature- or ice-induced cracking, and other erosive processes. These small eroded particles are then transported to the oceans through a variety of mechanisms:
Streams and rivers: Various forms of runoff deposit large amounts of sediment into the oceans, mostly in the form of finer-grained particles. About 90% of the lithogenous sediment in the oceans is thought to have come from river discharge, particularly from Asia. Most of this sediment, especially the larger particles, will be deposited and remain fairly close to the coastline, however, smaller clay particles may remain suspended in the water column for long periods of time and may be transported great distances from the source.
Wind: Windborne (aeolian) transport can take small particles of sand and dust and move them thousands of kilometres from the source. These small particles can fall into the ocean when the wind dies down, or can serve as the nuclei around which raindrops or snowflakes form. Aeolian transport is particularly important near desert areas.
Glaciers and ice rafting: As glaciers grind their way over land, they pick up lots of soil and rock particles, including very large boulders, that get carried by the ice. When the glacier meets the ocean and begins to break apart or melt, these particles get deposited. Most of the deposition will happen close to where the glacier meets the water, but a small amount of material is also transported longer distances by rafting, where larger pieces of ice drift far from the glacier before releasing their sediment.
Gravity: Landslides, mudslides, avalanches, and other gravity-driven events can deposit large amounts of material into the ocean when they happen close to shore.
Waves: Wave action along a coastline will erode rocks and will pull loose particles from beaches and shorelines into the water.
Volcanoes: Volcanic eruptions emit vast amounts of ash and other debris into the atmosphere, where it can then be transported by wind to eventually get deposited in the oceans.
Gastroliths: Another, relatively minor, means of transporting lithogenous sediment to the ocean is gastroliths. Gastrolith means "stomach stone". Many animals, including seabirds, pinnipeds, and some crocodiles, deliberately swallow stones and regurgitate them later. Stones swallowed on land can be regurgitated at sea. The stones can help grind food in the stomach or act as ballast regulating buoyancy.

Mostly these processes deposit lithogenous sediment close to shore. Sediment particles can then be transported farther by waves and currents, and may eventually escape the continental shelf and reach the deep ocean floor.
Composition
Lithogenous sediments usually reflect the composition of whatever materials they were derived from, so they are dominated by the major minerals that make up most terrestrial rock. This includes quartz, feldspar, clay minerals, iron oxides, and terrestrial organic matter. Quartz (silicon dioxide, the main component of glass) is one of the most common minerals found in nearly all rocks, and it is very resistant to abrasion, so it is a dominant component of lithogenous sediments, including sand.
Biogenous
Biogenous sediments come from the remains of living organisms that settle out as sediment when the organisms die. It is the "hard parts" of the organisms that contribute to the sediments; things like shells, teeth or skeletal elements, as these parts are usually mineralized and are more resistant to decomposition than the fleshy "soft parts" that rapidly deteriorate after death.
Macroscopic sediments contain large remains, such as skeletons, teeth, or shells of larger organisms. This type of sediment is fairly rare over most of the ocean, as large organisms do not die in enough of a concentrated abundance to allow these remains to accumulate. One exception is around coral reefs; here there is a great abundance of organisms that leave behind their remains, in particular the fragments of the stony skeletons of corals that make up a large percentage of tropical sand.
Microscopic sediment consists of the hard parts of microscopic organisms, particularly their shells, or tests. Although very small, these organisms are highly abundant and as they die by the billions every day their tests sink to the bottom to create biogenous sediments. Sediments composed of microscopic tests are far more abundant than sediments from macroscopic particles, and because of their small size they create fine-grained, mushy sediment layers. If the sediment layer consists of at least 30% microscopic biogenous material, it is classified as a biogenous ooze. The remainder of the sediment is often made up of clay.
The primary sources of microscopic biogenous sediments are unicellular algae and protozoans (single-celled amoeba-like creatures) that secrete tests of either calcium carbonate (CaCO3) or silica (SiO2). Silica tests come from two main groups: the diatoms (algae) and the radiolarians (protozoans).
Diatoms are particularly important members of the phytoplankton, functioning as small, drifting algal photosynthesizers. A diatom consists of a single algal cell surrounded by an elaborate silica shell that it secretes for itself. Diatoms come in a range of shapes, from elongated, pennate forms, to round, or centric shapes that often have two halves, like a Petri dish. In areas where diatoms are abundant, the underlying sediment is rich in silica diatom tests, and is called diatomaceous earth.
Radiolarians are planktonic protozoans (making them part of the zooplankton), that like diatoms, secrete a silica test. The test surrounds the cell and can include an array of small openings through which the radiolarian can extend an amoeba-like "arm" or pseudopod. Radiolarian tests often display a number of rays protruding from their shells which aid in buoyancy. Oozes that are dominated by diatom or radiolarian tests are called siliceous oozes.
Like the siliceous sediments, the calcium carbonate, or calcareous, sediments are also produced from the tests of microscopic algae and protozoans; in this case the coccolithophores and foraminiferans. Coccolithophores are single-celled planktonic algae about 100 times smaller than diatoms. Their tests are composed of a number of interlocking CaCO3 plates (coccoliths) that form a sphere surrounding the cell. When coccolithophores die, the individual plates sink out and form an ooze. Over time, the coccolithophore ooze lithifies to become chalk. The White Cliffs of Dover in England are composed of coccolithophore-rich ooze that turned into chalk deposits.
Foraminiferans (also referred to as forams) are protozoans whose tests are often chambered, similar to the shells of snails. As the organism grows, it secretes new, larger chambers in which to reside. Most foraminiferans are benthic, living on or in the sediment, but some planktonic species live higher in the water column. When coccolithophores and foraminiferans die, they form calcareous oozes.
Older calcareous sediment layers contain the remains of another type of organism, the discoasters; single-celled algae related to the coccolithophores that also produced calcium carbonate tests. Discoaster tests were star-shaped, and reached sizes of 5-40 μm across. Discoasters went extinct approximately 2 million years ago, but their tests remain in deep, tropical sediments that predate their extinction.
Because of their small size, these tests sink very slowly; a single microscopic test may take about 10–50 years to sink to the bottom! Given that slow descent, a current of only 1 cm/sec could carry the test as much as 15,000 km away from its point of origin before it reaches the bottom. Despite this, the sediments in a particular location are well-matched to the types of organisms and degree of productivity that occurs in the water overhead. This means the sediment particles must be sinking to the bottom at a much faster rate, so they accumulate below their point of origin before the currents can disperse them. Most of the tests do not sink as individual particles; about 99% of them are first consumed by some other organism, and are then aggregated and expelled as large fecal pellets, which sink much more quickly and reach the ocean floor in only 10–15 days. This does not give the particles as much time to disperse, and the sediment below will reflect the production occurring near the surface. The increased rate of sinking through this mechanism has been called the "fecal express".
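As a rough check of the drift figure above, the following Python sketch multiplies the quoted 1 cm/s current by an assumed 50-year sinking time (the upper end of the 10–50 year range); the numbers are illustrative only:

```python
# How far a slowly sinking test could drift laterally before reaching
# the bottom, using the figures quoted above (illustrative values).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def drift_distance_km(current_cm_per_s, sink_time_years):
    seconds = sink_time_years * SECONDS_PER_YEAR
    metres = (current_cm_per_s / 100.0) * seconds
    return metres / 1000.0

print(drift_distance_km(1.0, 50))  # ~15,800 km, matching the ~15,000 km above
```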
Hydrogenous
Seawater contains many different dissolved substances. Occasionally chemical reactions occur that cause these substances to precipitate out as solid particles, which then accumulate as hydrogenous sediment. These reactions are usually triggered by a change in conditions, such as a change in temperature, pressure, or pH, which reduces the amount of a substance that can remain in a dissolved state. There is not a lot of hydrogenous sediment in the ocean compared to lithogenous or biogenous sediments, but there are some interesting forms.
In hydrothermal vents seawater percolates into the seafloor where it becomes superheated by magma before being expelled by the vent. This superheated water contains many dissolved substances, and when it encounters the cold seawater after leaving the vent, these particles precipitate out, mostly as metal sulfides. These particles make up the "smoke" that flows from a vent, and may eventually settle on the bottom as hydrogenous sediment. Hydrothermal vents are distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. Currently there are about 500 known active submarine hydrothermal vent fields, about half visually observed at the seafloor and the other half suspected from water column indicators and/or seafloor deposits.
Manganese nodules are rounded lumps of manganese and other metals that form on the seafloor, generally ranging between 3–10 cm in diameter, although they may sometimes reach up to 30 cm. The nodules form in a manner similar to pearls: there is a central object around which concentric layers are slowly deposited, causing the nodule to grow over time. The composition of the nodules can vary somewhat depending on their location and the conditions of their formation, but they are usually dominated by manganese and iron oxides. They may also contain smaller amounts of other metals such as copper, nickel, and cobalt. The precipitation of manganese nodules is one of the slowest geological processes known; they grow on the order of a few millimetres per million years. For that reason, they only form in areas where there are low rates of lithogenous or biogenous sediment accumulation, because any other sediment deposition would quickly cover the nodules and prevent further nodule growth. Manganese nodules are therefore usually limited to areas in the central ocean, far from significant lithogenous or biogenous inputs, where they can sometimes accumulate in large numbers on the seafloor. Because the nodules contain a number of commercially valuable metals, there has been significant interest in mining them over the last several decades, although most efforts have thus far remained at the exploratory stage. A number of factors have prevented large-scale extraction of nodules, including the high costs of deep-sea mining operations, political issues over mining rights, and environmental concerns surrounding the extraction of these non-renewable resources.
Evaporites are hydrogenous sediments that form when seawater evaporates, leaving the dissolved materials to precipitate into solids, particularly halite (salt, NaCl). In fact, the evaporation of seawater is the oldest form of salt production for human use, and is still carried out today. Large deposits of halite evaporites exist in a number of places, including under the Mediterranean Sea. Beginning around 6 million years ago, tectonic processes closed off the Mediterranean Sea from the Atlantic, and the warm climate evaporated so much water that the Mediterranean was almost completely dried out, leaving large deposits of salt in its place (an event known as the Messinian Salinity Crisis). Eventually the Mediterranean re-flooded about 5.3 million years ago, and the halite deposits were covered by other sediments, but they still remain beneath the seafloor.
Oolites are small, rounded grains formed from concentric layers of precipitation of material around a suspended particle. They are usually composed of calcium carbonate, but they may also form from phosphates and other materials. Accumulation of oolites results in oolitic sand, which is found in its greatest abundance in the Bahamas.
Methane hydrates are another type of hydrogenous deposit with a potential industrial application. All terrestrial erosion products include a small proportion of organic matter derived mostly from terrestrial plants. Tiny fragments of this material plus other organic matter from marine plants and animals accumulate in terrigenous sediments, especially within a few hundred kilometres of shore. As the sediments pile up, the deeper parts start to warm up (from geothermal heat), and bacteria get to work breaking down the contained organic matter. Because this is happening in the absence of oxygen (a.k.a. anaerobic conditions), the by-product of this metabolism is the gas methane (CH4). Methane released by the bacteria slowly bubbles upward through the sediment toward the seafloor. At water depths of 500 m to 1,000 m, and at the low temperatures typical of the seafloor (close to 4 °C), water and methane combine to create a substance known as methane hydrate. Within a few metres to hundreds of metres of the seafloor, the temperature is low enough for methane hydrate to be stable and hydrates accumulate within the sediment. Methane hydrate is flammable because when it is heated, the methane is released as a gas. The methane within seafloor sediments represents an enormous reservoir of fossil fuel energy. Although energy corporations and governments are anxious to develop ways to produce and sell this methane, anyone that understands the climate-change implications of its extraction and use can see that this would be folly.
Cosmogenous
Cosmogenous sediment is derived from extraterrestrial sources, and comes in two primary forms; microscopic spherules and larger meteor debris. Spherules are composed mostly of silica or iron and nickel, and are thought to be ejected as meteors burn up after entering the atmosphere. Meteor debris comes from collisions of meteorites with Earth. These high impact collisions eject particles into the atmosphere that eventually settle back down to Earth and contribute to the sediments. Like spherules, meteor debris is mostly silica or iron and nickel. One form of debris from these collisions are tektites, which are small droplets of glass. They are likely composed of terrestrial silica that was ejected and melted during a meteorite impact, which then solidified as it cooled upon returning to the surface.
Cosmogenous sediment is fairly rare in the ocean and it does not usually accumulate in large deposits. However, it is constantly being added to through space dust that continuously rains down on Earth. About 90% of incoming cosmogenous debris is vaporized as it enters the atmosphere, but it is estimated that 5 to 300 tons of space dust land on the Earth's surface each day.
Composition
Siliceous ooze
Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep-sea sediments, and make up approximately 15% of the ocean floor. Oozes are defined as sediments which contain at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica-based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica (SiO2), as opposed to calcareous oozes, which are made from the calcium carbonate skeletons of organisms such as coccolithophores. Silica (Si) is a bioessential element and is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth and ocean fertility are all factors that affect the opal silica content in seawater and the presence of siliceous oozes.
Calcareous ooze
The term calcareous can be applied to a fossil, sediment, or sedimentary rock which is formed from, or contains a high proportion of, calcium carbonate in the form of calcite or aragonite. Calcareous sediments (limestone) are usually deposited in shallow water near land, since the carbonate is precipitated by marine organisms that need land-derived nutrients. Generally speaking, the farther from land sediments fall, the less calcareous they are. Some areas can have interbedded calcareous sediments due to storms, or changes in ocean currents. Calcareous ooze is a form of calcium carbonate derived from planktonic organisms that accumulates on the sea floor. This can only occur if the ocean is shallower than the carbonate compensation depth. Below this depth, calcium carbonate begins to dissolve in the ocean, and only non-calcareous sediments are stable, such as siliceous ooze or pelagic red clay.
Distribution
Where and how sediments accumulate will depend on the amount of material coming from a source, the distance from the source, the amount of time that sediment has had to accumulate, how well the sediments are preserved, and the amounts of other types of sediments that are also being added to the system.
Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Lithogenous sediment accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher.
Biogenous oozes accumulate at a rate of about 1 cm per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years. As described above, manganese nodules have an incredibly slow rate of accumulation, gaining 0.001 millimetres per thousand years.
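Putting these rates side by side, a short Python sketch computes how long each sediment type would need to build a 1 cm layer, using the approximate figures quoted above:

```python
# Time to accumulate a 1 cm (10 mm) layer at the approximate rates above.
rates_mm_per_kyr = {
    "coarse lithogenous sediment": 1000.0,  # ~1 m per thousand years
    "biogenous ooze": 10.0,                 # ~1 cm per thousand years
    "abyssal clay": 1.0,                    # ~1 mm per thousand years
    "manganese nodule growth": 0.001,       # ~1 mm per million years
}

for name, rate in rates_mm_per_kyr.items():
    years = (10.0 / rate) * 1000.0
    print(f"{name}: ~{years:,.0f} years per centimetre")
# Nodules grow about a thousand times more slowly than clay settles,
# which is why they survive only where little else is deposited.
```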
Marine sediments are thickest near the continental margins where they can be over 10 km thick. This is because the crust near passive continental margins is often very old, allowing for a long period of accumulation, and because there is a large amount of terrigenous sediment input coming from the continents. Near mid-ocean ridge systems where new oceanic crust is being formed, sediments are thinner, as they have had less time to accumulate on the younger crust.
As distance increases from a ridge spreading center the sediments get progressively thicker, increasing by approximately 100–200 m of sediment for every 1000 km distance from the ridge axis. With a seafloor spreading rate of about 20–40 km/million years, this represents a sediment accumulation rate of approximately 100–200 m every 25–50 million years.
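The same arithmetic can be written out explicitly; the spreading rates and thickness gradient below are the ranges given in the paragraph above:

```python
# Crust age and sediment thickness as functions of distance from a ridge,
# using ~100-200 m of sediment per 1000 km and spreading of 20-40 km/Myr.
def crust_age_myr(distance_km, spreading_km_per_myr):
    return distance_km / spreading_km_per_myr

def sediment_thickness_m(distance_km, metres_per_1000_km):
    return (distance_km / 1000.0) * metres_per_1000_km

distance = 1000.0  # km from the ridge axis
for spread, gradient in [(20.0, 100.0), (40.0, 200.0)]:
    age = crust_age_myr(distance, spread)
    thick = sediment_thickness_m(distance, gradient)
    print(f"{distance:.0f} km out: ~{age:.0f} Myr old crust, ~{thick:.0f} m of sediment")
# Output spans 25-50 Myr and 100-200 m, matching the rates stated above.
```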
The diagram at the start of this article shows the distribution of the major types of sediment on the ocean floor. Cosmogenous sediments could potentially end up in any part of the ocean, but they accumulate in such small abundances that they are overwhelmed by other sediment types and thus are not dominant in any location. Similarly, hydrogenous sediments can have high concentrations in specific locations, but these regions are very small on a global scale. So cosmogenous and hydrogenous sediments can mostly be ignored in the discussion of global sediment patterns.
Coarse lithogenous/terrigenous sediments are dominant near the continental margins as land runoff, river discharge, and other processes deposit vast amounts of these materials on the continental shelf. Much of this sediment remains on or near the shelf, while turbidity currents can transport material down the continental slope to the deep ocean floor (abyssal plain). Lithogenous sediment is also common at the poles where thick ice cover can limit primary production, and glacial breakup deposits sediments along the ice edge.
Coarse lithogenous sediments are less common in the central ocean, as these areas are too far from the sources for these sediments to accumulate. Very small clay particles are the exception, and as described below, they can accumulate in areas that other lithogenous sediment will not reach.
The distribution of biogenous sediments depends on their rates of production, dissolution, and dilution by other sediments. Coastal areas display very high primary production, so abundant biogenous deposits might be expected in these regions. However, sediment must be >30% biogenous to be considered a biogenous ooze, and even in productive coastal areas there is so much lithogenous input that it swamps the biogenous materials, and that 30% threshold is not reached. So coastal areas remain dominated by lithogenous sediment, and biogenous sediments will be more abundant in pelagic environments where there is little lithogenous input.
In order for biogenous sediments to accumulate their rate of production must be greater than the rate at which the tests dissolve. Silica is undersaturated throughout the ocean and will dissolve in seawater, but it dissolves more readily in warmer water and lower pressures; that is, it dissolves faster near the surface than in deep water. Silica sediments will therefore only accumulate in cooler regions of high productivity where they accumulate faster than they dissolve. This includes upwelling regions near the equator and at high latitudes where there are abundant nutrients and cooler water.
Oozes formed near the equatorial regions are usually dominated by radiolarians, while diatoms are more common in the polar oozes. Once the silica tests have settled on the bottom and are covered by subsequent layers, they are no longer subject to dissolution and the sediment will accumulate. Approximately 15% of the seafloor is covered by siliceous oozes.
Biogenous calcium carbonate sediments also require production to exceed dissolution for sediments to accumulate, but the processes involved are a little different than for silica. Calcium carbonate dissolves more readily in more acidic water. Cold seawater contains more dissolved CO2 and is slightly more acidic than warmer water. So calcium carbonate tests are more likely to dissolve in colder, deeper, polar water than in warmer, tropical, surface water. At the poles the water is uniformly cold, so calcium carbonate readily dissolves at all depths, and carbonate sediments do not accumulate. In temperate and tropical regions calcium carbonate dissolves more readily as it sinks into deeper water.
The depth at which calcium carbonate dissolves as fast as it accumulates is called the calcium carbonate compensation depth, or calcite compensation depth, or simply the CCD. The lysocline represents the depths where the rate of calcium carbonate dissolution increases dramatically (similar to the thermocline and halocline). At depths shallower than the CCD, carbonate accumulation will exceed the rate of dissolution, and carbonate sediments will be deposited. In areas deeper than the CCD, the rate of dissolution will exceed production, and no carbonate sediments can accumulate. The CCD is usually found at depths of 4–4.5 km, although it is much shallower at the poles where the surface water is cold. Thus calcareous oozes will mostly be found in tropical or temperate waters less than about 4 km deep, such as along the mid-ocean ridge systems and atop seamounts and plateaus.
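In simplified form, the deposition rule reduces to a depth comparison, as in this small Python sketch (the 4,000 m default CCD is the typical figure quoted above; the real CCD varies with latitude and basin chemistry):

```python
# Simplified CCD rule: calcareous ooze accumulates only where the
# seafloor lies above the local carbonate compensation depth.
def calcareous_ooze_accumulates(seafloor_depth_m, ccd_m=4000.0):
    return seafloor_depth_m < ccd_m

print(calcareous_ooze_accumulates(2500.0))  # True: e.g. a mid-ocean ridge flank
print(calcareous_ooze_accumulates(5000.0))  # False: tests dissolve below the CCD
```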
The CCD is deeper in the Atlantic than in the Pacific since the Pacific contains more CO2, making the water more acidic and calcium carbonate more soluble. This, along with the fact that the Pacific is deeper, means that the Atlantic contains more calcareous sediment than the Pacific. All told, about 48% of the seafloor is dominated by calcareous oozes.
Much of the rest of the deep ocean floor (about 38%) is dominated by abyssal clays. This is not so much a result of an abundance of clay formation, but rather the lack of any other types of sediment input. The clay particles are mostly of terrestrial origin, but because they are so small they are easily dispersed by wind and currents, and can reach areas inaccessible to other sediment types. Clays dominate in the central North Pacific, for example. This area is too far from land for coarse lithogenous sediment to reach, it is not productive enough for biogenous tests to accumulate, and it is too deep for calcareous materials to reach the bottom before dissolving.
Because clay particles accumulate so slowly, the clay-dominated deep ocean floor is often home to hydrogenous sediments like manganese nodules. If any other type of sediment was produced here it would accumulate much more quickly and would bury the nodules before they had a chance to grow.
Coastal sediments
Shallow water marine environments are found in areas between the shore and deeper water, such as a reef wall or a shelf break. The water in this environment is shallow and clear, allowing the formation of different sedimentary structures, carbonate rocks, coral reefs, and allowing certain organisms to survive and become fossils.
The sediment itself is often composed of limestone, which forms readily in shallow, warm, calm waters. Shallow marine environments are not exclusively composed of siliciclastic or carbonaceous sediments, however; while the two cannot always coexist, an environment may be composed solely of carbonaceous sediment or entirely of siliciclastic sediment. Within sedimentary rocks composed of carbonaceous sediment, there may also be evaporite minerals. The most common evaporite minerals found within modern and ancient deposits are gypsum, anhydrite, and halite; they can occur as crystalline layers, isolated crystals, or clusters of crystals.
In terms of geologic time, most Phanerozoic sedimentary rock is thought to have been deposited in shallow marine environments, as about 75% of the sedimentary carapace is made up of shallow marine sediments; Precambrian sedimentary rocks are likewise assumed to have been deposited in shallow marine waters unless specifically identified otherwise. This trend is seen in the North American and Caribbean regions. Also, as a result of supercontinent breakup and other shifting tectonic plate processes, the quantity of shallow marine sediment varies greatly across geologic time.
Bioturbation
Bioturbation is the reworking of sediment by animals or plants, including burrowing and the ingestion and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s with Charles Darwin's experiments in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services, including the alteration of nutrients in aquatic sediment and overlying water, shelter to other species in the form of burrows in terrestrial and aquatic ecosystems, and soil production on land.
Bioturbators are ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation; however, these processes differ in what they mix: bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation.
Walruses and salmon are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as polychaetes, ghost shrimp and mud shrimp. The activities of these small invertebrates, which include burrowing and ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure.
Bioirrigation
Bioirrigation is the process of benthic organisms flushing their burrows with overlying water. The resulting exchange of dissolved substances between the porewater and overlying seawater is an important process in the context of the biogeochemistry of the oceans. Coastal aquatic environments often have organisms that destabilize sediment, changing its physical state and thus improving conditions for other organisms and for themselves. These organisms often also cause bioturbation, a term commonly used interchangeably with, or in reference to, bioirrigation.
Bioirrigation comprises two distinct processes, particle reworking and ventilation, carried out by benthic macro-invertebrates (usually ones that burrow). The particle reworking and ventilation occur as the organisms feed (faunal feeding), defecate, burrow, and respire. Bioirrigation is responsible for a large amount of oxidative transport and has a large impact on biogeochemical cycles.
Pelagic sediments
Pelagic sediments, or pelagite, are fine-grained sediments that accumulate as the result of the settling of particles to the floor of the open ocean, far from land. These particles consist primarily of either the microscopic, calcareous or siliceous shells of phytoplankton or zooplankton; clay-size siliciclastic sediment; or some mixture of these. Trace amounts of meteoric dust and variable amounts of volcanic ash also occur within pelagic sediments.
Based upon the composition of the ooze, there are three main types of pelagic sediments: siliceous oozes, calcareous oozes, and red clays.
An extensive body of work on deep-water processes and sediments has been built over the past 150 years since the voyage of HMS Challenger (1872–1876), during which the first systematic study of seafloor sediments was made. For many decades since that pioneering expedition, and through the first half of the twentieth century, the deep sea was considered entirely pelagic in nature.
The composition of pelagic sediments is controlled by three main factors. The first factor is the distance from major landmasses, which affects their dilution by terrigenous, or land-derived, sediment. The second factor is water depth, which affects the preservation of both siliceous and calcareous biogenic particles as they settle to the ocean bottom. The final factor is ocean fertility, which controls the amount of biogenic particles produced in surface waters.
Turbidites
Turbidites are the geologic deposits of a turbidity current, a gravity-driven flow of sediment-laden water responsible for distributing vast amounts of clastic sediment into the deep ocean. Turbidites are deposited in the deep ocean troughs below the continental shelf, or similar structures in deep lakes, by underwater avalanches which slide down the steep slopes of the continental shelf edge. When the material comes to rest in the ocean trough, the sand and other coarse material settles first, followed by mud and eventually the very fine particulate matter. This sequence of deposition creates the Bouma sequences that characterize these rocks.
Turbidites were first recognised in the 1950s and the first facies model was developed by Bouma in 1962. Since that time, turbidites have been one of the better known and most intensively studied deep-water sediment facies. They are now very well known from sediment cores recovered from modern deep-water systems, subsurface (hydrocarbon) boreholes and ancient outcrops now exposed on land. Each new study of a particular turbidite system reveals specific deposit characteristics and facies for that system. The most commonly observed facies have been variously synthesised into a range of facies schemes.
Contourites
A contourite is a sedimentary deposit commonly formed on continental rise to lower slope settings, although they may occur anywhere below storm wave base. Contourites are produced by thermohaline-induced deepwater bottom currents and may be influenced by wind or tidal forces. The geomorphology of contourite deposits is mainly influenced by the deepwater bottom-current velocity, sediment supply, and seafloor topography.
Contourites were first identified in the early 1960s by Bruce Heezen and co-workers at the Woods Hole Oceanographic Institution. Their now seminal paper demonstrated the very significant effects of contour-following bottom currents in shaping sedimentation on the deep continental rise off eastern North America. The deposits of these semi-permanent alongslope currents soon became known as contourites, and the demarcation of slope-parallel, elongate and mounded sediment bodies made up largely of contourites became known as contourite drifts.
Hemipelagic
Hemipelagic sediments, or hemipelagite, are a type of marine sediment consisting of clay- and silt-sized grains that are terrigenous, along with some biogenic material derived from the landmass nearest the deposits or from organisms living in the water. Hemipelagic sediments are deposited on continental shelves and continental rises, and differ from pelagic sediment compositionally. Pelagic sediment is composed primarily of biogenic material from organisms living in the water column or on the seafloor and contains little to no terrigenous material. Terrigenous material includes minerals from the lithosphere such as feldspar or quartz. Volcanism on land, wind-blown sediments, and particulates discharged from rivers can all contribute to hemipelagic deposits. These deposits can be used to characterize climatic changes and identify changes in sediment provenance.
Ecology
Benthos is the community of organisms that live on, in, or near the seafloor, also known as the benthic zone.
Hyperbenthos (hyperbenthic organisms, from the Greek prefix hyper-, "above") live just above the sediment.
Epibenthos (epibenthic organisms, prefix epi-, "on top of") live on top of the sediments.
Endobenthos (endobenthic organisms, prefix endo-, "within") live buried, or burrowing, in the sediment, often in the oxygenated top layer.
Microbenthos
Marine microbenthos are microorganisms that live in the benthic zone of the ocean – near or on the seafloor, or within or on surface seafloor sediments. The word benthos comes from Greek, meaning "depth of the sea". Microbenthos are found everywhere on or about the seafloor of continental shelves, as well as in deeper waters, with greater diversity in or on seafloor sediments. In shallow waters, seagrass meadows, coral reefs and kelp forests provide particularly rich habitats. In photic zones benthic diatoms dominate as photosynthetic organisms. In intertidal zones changing tides strongly control opportunities for microbenthos.
Diatoms form a (disputed) phylum containing about 100,000 recognised species of mainly unicellular algae. Diatoms generate about 20 per cent of the oxygen produced on the planet each year, take in over 6.7 billion metric tons of silicon each year from the waters in which they live, and contribute nearly half of the organic material found in the oceans.
Coccolithophores are minute unicellular photosynthetic protists with two flagella for locomotion. Most of them are protected by a shell covered with ornate circular plates or scales called coccoliths. The coccoliths are made from calcium carbonate. The term coccolithophore derives from the Greek for a seed-carrying stone, referring to their small size and the coccolith stones they carry. Under the right conditions they bloom, like other phytoplankton, and can turn the ocean milky white.
Radiolarians are unicellular predatory protists encased in elaborate globular shells usually made of silica and pierced with holes. Their name comes from the Latin for "radius". They catch prey by extending parts of their body through the holes. As with the silica frustules of diatoms, radiolarian shells can sink to the ocean floor when radiolarians die and become preserved as part of the ocean sediment. These remains, as microfossils, provide valuable information about past oceanic conditions.
Like radiolarians, foraminiferans (forams for short) are single-celled predatory protists, also protected with shells that have holes in them. Their name comes from the Latin for "hole bearers". Their shells, often called tests, are chambered (forams add more chambers as they grow). The shells are usually made of calcite, but are sometimes made of agglutinated sediment particles or chitin, and (rarely) of silica. Most forams are benthic, but about 40 species are planktic. They are widely researched, with well-established fossil records which allow scientists to infer a lot about past environments and climates.
Both foraminifera and diatoms have planktonic and benthic forms, that is, they can drift in the water column or live on sediment at the bottom of the ocean. Either way, their shells end up on the seafloor after they die. These shells are widely used as climate proxies. The chemical composition of the shells is a consequence of the chemical composition of the ocean at the time the shells were formed. Past water temperatures can also be inferred from the ratios of stable oxygen isotopes in the shells, since lighter isotopes evaporate more readily in warmer water, leaving the heavier isotopes in the shells. Information about past climates can be inferred further from the abundance of forams and diatoms, since they tend to be more abundant in warm water.
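These isotope ratios are conventionally reported in delta notation. As a point of reference (the definition below is standard practice, not taken from this article), the oxygen isotope composition of a shell is expressed in per mil relative to a reference standard:

```latex
% Standard delta notation for oxygen isotopes, in per mil relative to a
% reference standard (e.g. VPDB for carbonate shells):
\[
\delta^{18}\mathrm{O} =
\left(
\frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
     {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
- 1
\right) \times 1000
\]
```

Higher shell δ18O values generally correspond to colder water, consistent with the evaporation effect described above.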
The sudden extinction event which killed the dinosaurs 66 million years ago also rendered extinct three-quarters of all other animal and plant species. However, deep-sea benthic forams flourished in the aftermath. In 2020 it was reported that researchers have examined the chemical composition of thousands of samples of these benthic forams and used their findings to build the most detailed climate record of Earth ever.
Some endoliths have extremely long lives. In 2013 researchers reported evidence of endoliths in the ocean floor, perhaps millions of years old, with a generation time of 10,000 years. These are slowly metabolizing and not in a dormant state. Some Actinomycetota found in Siberia are estimated to be half a million years old.
Sediment cores
One example is a sediment core retrieved from Upernavik Fjord around 2018, for which grain-size measurements were made and the top 50 cm was dated with the 210Pb method.
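A minimal sketch of how a 210Pb age can be computed under the simple constant-initial-concentration (CIC) assumption; the activity values are hypothetical, and the method shown is the generic textbook approach, not necessarily the exact model used for the Upernavik core:

```python
import math

PB210_HALF_LIFE_YEARS = 22.3                           # half-life of lead-210
DECAY_CONSTANT = math.log(2) / PB210_HALF_LIFE_YEARS   # lambda, in 1/years

def cic_age(surface_activity: float, activity_at_depth: float) -> float:
    """Age (years) of a sediment layer, assuming excess 210Pb decays
    from a constant initial activity: A(z) = A0 * exp(-lambda * t)."""
    if surface_activity <= 0 or activity_at_depth <= 0:
        raise ValueError("activities must be positive")
    return math.log(surface_activity / activity_at_depth) / DECAY_CONSTANT

# Hypothetical excess 210Pb activities (Bq/kg) at the core top and at depth:
print(round(cic_age(120.0, 30.0), 1))  # -> 44.6 years
```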
Carbon processing
Thinking about ocean carbon and carbon sequestration has shifted in recent years from a structurally-based chemical reactivity viewpoint toward a view that includes the role of the ecosystem in organic carbon degradation rates. This shift in view towards organic carbon and ecosystem involvement includes aspects of the "molecular revolution" in biology, discoveries on the limits of life, advances in quantitative modelling, paleo studies of ocean carbon cycling, novel analytical techniques, and interdisciplinary efforts. In 2020, LaRowe et al. outlined a broad view of this issue that is spread across multiple scientific disciplines related to marine sediments and global carbon cycling.
Evolutionary history
Initially, the Earth was molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust and water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a planetoid with the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans.
By the start of the Archean, about four billion years ago, rocks were often heavily metamorphosed deep-water sediments, such as graywackes, mudstones, volcanic sediments and banded iron formations. Greenstone belts are typical Archean formations, consisting of alternating high- and low-grade metamorphic rocks. High-grade rocks were derived from volcanic island arcs, while low-grade metamorphic rocks represented deep-sea sediments eroded from the neighboring island rocks and deposited in a forearc basin. The earliest-known supercontinent Rodinia assembled about one billion years ago and began to break apart after about 250 million years, during the latter part of the Proterozoic.
The Paleozoic (541 to 252 Ma) started shortly after the breakup of Pannotia and at the end of a global ice age. Throughout the early Paleozoic, the Earth's landmass was broken up into a substantial number of relatively small continents. Toward the end of the era the continents gathered together into a supercontinent called Pangaea, which included most of the Earth's land area. During the Silurian, which started 444 Ma, Gondwana continued a slow southward drift to high southern latitudes. The melting of ice caps and glaciers contributed to a rise in sea levels, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments, forming an unconformity. Other cratons and continent fragments drifted together near the equator, starting the formation of a second supercontinent known as Euramerica.
During the Triassic, deep-ocean sediments were laid down and subsequently disappeared through the subduction of oceanic plates, so very little is known of the Triassic open ocean. The supercontinent Pangaea rifted during the Triassic – especially late in the period – but had not yet separated. The first non-marine sediments in the rift that marks the initial break-up of Pangaea are of Late Triassic age. Because of the limited shoreline of one super-continental mass, Triassic marine deposits are globally relatively rare, despite their prominence in Western Europe, where the Triassic was first studied. In North America, for example, marine deposits are limited to a few exposures in the west. Thus Triassic stratigraphy is mostly based on organisms living in lagoons and hypersaline environments, such as Estheria crustaceans and terrestrial vertebrates.
Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are fossils left behind by digging or burrowing animals. This can be compared to the footprint left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossil to assess the activity that occurred in old sediments. Typically, the deeper the fossil, the better preserved and more sharply defined the specimen.
Important trace fossils from bioturbation have been found in tidal, coastal and deep-sea marine sediments. In addition, sand dune, or eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores, including in long records, although the act of extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods, in particular, are important to the geologic record of bioturbation of eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic, 250 Ma, although bioturbation in other sediments has been seen as far back as 550 Ma.
Research history
The first major study of deep-ocean sediments occurred between 1872 and 1876 with the HMS Challenger expedition, which travelled nearly 70,000 nautical miles sampling seawater and marine sediments. The scientific goals of the expedition were to take physical measurements of the seawater at various depths, as well as taking samples so the chemical composition could be determined, along with any particulate matter or marine organisms that were present. This included taking samples and analysing sediments from the deep ocean floor. Before the Challenger voyage, oceanography had been mainly speculative. As the first true oceanographic cruise, the Challenger expedition laid the groundwork for an entire academic and research discipline.
Earlier theories of continental drift proposed that continents in motion "plowed" through the fixed and immovable seafloor. Later, in the 1960s, the idea that the seafloor itself moves and carries the continents with it as it spreads from a central rift axis was proposed by Harry Hess and Robert Dietz. The phenomenon is known today as plate tectonics. In locations where two plates move apart, at mid-ocean ridges, new seafloor is continually formed during seafloor spreading. In 1968, the oceanographic research vessel Glomar Challenger was launched and embarked on a 15-year-long program, the Deep Sea Drilling Project. This program provided crucial data that supported the seafloor spreading hypothesis by collecting rock samples that confirmed that the farther from the mid-ocean ridge, the older the rock was.
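The age-distance relationship those samples confirmed can be stated compactly. As an illustrative calculation (the numbers below are generic, not from the expedition data), crust formed at a ridge with half-spreading rate v lies at distance d from the axis after time:

```latex
% Crust age grows linearly with distance from the ridge axis:
\[
t = \frac{d}{v}, \qquad \text{e.g.}\quad
t = \frac{1000\ \mathrm{km}}{25\ \mathrm{mm/yr}} = 4\times 10^{7}\ \mathrm{yr}
\]
```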
See also
Bioturbation
Depositional environment
Cosmic dust
Interplanetary dust
Deep biosphere
Great Calcite Belt
Marine clay
Microbially induced sedimentary structure
Oolitic aragonite sand
Organic-rich sedimentary rocks
Redox gradient
Seafloor depth versus age
Sediment-water interface
Sedimentary rock
Sediment transport
Coastal sediment transport
Coastal sediment supply
References
Sources
Marine geology
Oceanography
Sediments
Sedimentary rocks | Marine sediment | [
"Physics",
"Environmental_science"
] | 11,407 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
11,780,267 | https://en.wikipedia.org/wiki/Donetsk%20National%20Technical%20University | Donetsk National Technical University (DonNTU, formerly Donetsk Polytechnic Institute and other names) is the largest and oldest higher education establishment in Donbas, founded in 1921. In its early years, it was attended by Nikita Khrushchev.
Following the loss of Ukrainian government control over Donetsk in 2014 during the war in Donbas, the University was evacuated to Pokrovsk.
A small group of collaborationists among the faculty has claimed to continue operating as the Donetsk National Technical University on the campus in Russian-occupied Donetsk, as a project aimed at legitimizing the so-called Donetsk People's Republic; however, the diplomas it issued were not recognized even in Russia itself, and local students were officially enrolled as correspondence students at minor provincial Russian universities. Following the full-scale Russian invasion of Ukraine, the university appears to have been discontinued.
On 28 February 2024, the university building in Pokrovsk was partially destroyed by a Russian missile attack.
Structure
Donetsk National Technical University (DonNTU) is the first higher education establishment in the Donbas Region and one of the first technical universities in Ukraine. Some 27,000 students study at 7 faculties, majoring in 60 specialities. Among the university's academics are 28 corresponding members and academicians of the engineering academies and 18 honorary researchers and professors. A number of DonNTU scientists are honorary and full members of foreign organizations and academies. The work of some professors and students has been supported by the Soros Fund.
DPI team
From 1987 to 1996, the university fielded a team in the popular comedic game-show KVN. The team was made up of students at the university, a number of whom would later become prominent within Ukrainian society, including Ismail Abdullaiev.
Collaboration
DonNTU has more than 70 collaboration agreements with universities all over the world.
There is an office of the Siemens company at the university.
At the three engineering faculties (German, French, and English) students are trained in the appropriate foreign language.
A Polish faculty has been established.
Thirty professors from foreign universities are Honorary Doctors of DonNTU.
The university has a reading room sponsored by the Goethe Institute, Germany.
Donetsk National Technical University is a member of the EAU (European Association of Universities).
DonNTU is a member of
UICEE – The International Engineering Education Centre sponsored by UNESCO, Melbourne, Australia
EAIE – European Association of International Education
EAAU – Euro-Asian Association of Universities
SEFI – European Association of Engineering Education
IGIP – International Association of Engineering Education (Austria)
COFRAMA – French council for the development of management links with the countries of the CIS and Russia (Lyon, France)
PRELUDE – International association for research and links with universities (Belgium)
CEUME – Consortium of Management Education in Ukraine (the US, Poland)
URAN – Ukrainian Educational and Research Network, sponsored by NATO and the German Research Network
DonNTU is a participant in the following international programmes:
TEMPUS-TACIS NCD-JEP 23125-2002 – European Studies
DAAD Eastern Partnerships (Germany)
Stipend of the International Board of the Ministry of Education and Science (the German Aerodynamics Center)
BWTZ-Programm (Germany, the Ministry of Science)
INTAS – Publishing House (Germany)
BMEU/CEUME Business - Management - Education (USA, Poland)
The Józef Mianowski Science Development Fund (Poland)
Students Exchange Programmes AIESEC (Poland)
Grant from the Ministry of Education and Sports, Poland
Grant from the Ministry of Education and Science (Russia)
Programme Dnipro (France)
Grant of the Special School of Social Works, Construction and Industry (ESTP), (France)
Grant from the government of the Czech Republic
SIDA – Master and Bachelor programme Sandwich (Sweden)
Gallery
References
External links
Official website of the University
Website of Russian collaborationists, offline since March 2022
Universities and colleges in Donetsk
Universities and colleges established in 1921
1921 establishments in Ukraine
Technical universities and colleges in Ukraine
National universities in Ukraine
Institutions with the title of National in Ukraine
Schools of mines | Donetsk National Technical University | [
"Engineering"
] | 835 | [
"Schools of mines",
"Engineering universities and colleges"
] |
11,780,506 | https://en.wikipedia.org/wiki/Bull%20Questar | In information technology, Questar computer terminals are a line of largely 3270-compatible text-only dumb terminals manufactured by Groupe Bull and widely used in France and some other markets. The terminals combine standard 3270 emulation with a number of Questar-specific features. The terminals have been most successful with users who already operate compatible Bull mainframe systems, and have achieved far less market penetration as plug-compatible replacements for IBM 3270 terminals.
The Bull Questar 400 was a licensed version of the Convergent Technologies NGEN.
There was also the Bull Questar M, in fact a Micral series 80, a Z80-based microcomputer.
References
Block-oriented terminal
Groupe Bull | Bull Questar | [
"Technology"
] | 140 | [
"Computing stubs",
"Computer hardware stubs"
] |
11,780,880 | https://en.wikipedia.org/wiki/RespOrg | A RespOrg, or responsible organization, is a company that maintains the registration for individual toll-free telephone numbers in the North American Numbering Plan by means of the distributed Service Management System/800 database.
RespOrgs were established in 1993 as part of a Federal Communications Commission order instituting toll-free number portability. A RespOrg (pronounced as though it were a single word, something like "ressporg") can be a long-distance company, reseller, end user or an independent that offers an outsourced service.
Operation
The initial implementation of toll-free calling was primitive. In the 1950s, a collect call or a call to a Zenith number had to be completed manually by a telephone operator. By 1967, a direct-dial 1-800-number could be provided using Wide Area Telephone Service (WATS), but each prefix was tied to a specific geographic destination and each number was installed with special fixed-rate trunks which were priced beyond the reach of most small businesses. There was no means to select between rival carriers and little room for vanity numbering; a subscriber would need three separate numbers to be reachable from Canada, US interstate and US intrastate.
A "data base communication call processing method" patented by Roy P. Weber of Bell Labs, and implemented by AT&T in 1982, broke the link between individual telephone numbers and a specific trunk, city, or carrier. A toll-free number was merely an index into a large, distributed database; any number could be reassigned geographically anywhere by changing its database record. A call could be routed to one of multiple locations based on the call origin, load balancing between multiple call centers, times, or days. While this data was originally maintained by telephone companies, the breakup of the Bell System in the 1980s and the introduction of toll-free number portability in 1993 required an independent operator to maintain the SMS/800 database.
If the Service Management System is thought of as a central registry controlling routing for all toll-free numbers, the RespOrgs are its registrars. Many RespOrgs are telephone companies or long-distance carriers; a toll-free number provided by a carrier is bundled with RespOrg service adequate to send all calls through that one carrier to a single local destination number.
A large subscriber with more complex requirements could use an independent RespOrg to direct calls for an individual number to multiple carriers for least-cost routing or to provide disaster recovery. A number that reaches multiple call centers via multiple carriers can be configured to avoid any single point of failure; any change to a number's routing can be propagated throughout the network in fifteen minutes. An independent RespOrg may also hold an advantage in obtaining vanity phonewords by reserving recently disconnected numbers for its clients in the first few seconds after they become available.
The function of RespOrgs in North American telephony is analogous to that of an individual registrar in the Internet's Domain Name System.
Regulatory framework
Every toll-free telephone number is managed individually by a RespOrg. There are approximately 350 RespOrg services, ranging in size from large incumbent local exchange carriers (ILECs) to small companies that control only a few numbers. All RespOrgs operate under the same tariff and are required to follow specific guidelines for this process. The guidelines are maintained by a national industry group known as the SMS/800 Number Administration Committee (SNAC), a committee of the Alliance for Telecommunications Industry Solutions. Membership is open to any RespOrg.
In the United States, according to the regulations of the Federal Communications Commission, the end-user has the right to select a RespOrg and have their numbers transferred to their control. This process is called "porting" or "change of RespOrg" and requires a signed letter of authorization from the end-user.
In theory, regulations prevent hoarding, brokering and warehousing of numbers by both RespOrgs and subscribers. In practice, some RespOrgs do abuse the system by stockpiling millions of toll-free numbers for advertising purposes, because enforcement of the regulations has been weak and sporadic. This situation has led to the periodic creation of additional toll-free area codes to prevent exhaustion of the SMS/800 available number pool.
See also
Toll-free telephone numbers in the United States
References
Network access
Telephone numbers | RespOrg | [
"Mathematics",
"Engineering"
] | 891 | [
"Telephone numbers",
"Network access",
"Mathematical objects",
"Electronic engineering",
"Numbers"
] |
11,780,930 | https://en.wikipedia.org/wiki/Roof%20edge%20protection | Roof edge protection is fall protection equipment most commonly used during the construction of commercial buildings or residential housing. They can be used along with timber, steel, or concrete structures. It often consists of a toe board, a main guard rail and an intermediate rail.". Roof edge protection can take the form of personal fall arrest systems (PFAS), fall restraint systems, guardrail systems, warning line systems, safety monitors, or ladders. Since construction is one of the most dangerous professions in the world, roof edge protection offers much-needed protection against falls from heights which is one of the primary causes of fatalities for workers.
History
With a combination of the Industrial Revolution taking place, the United States receiving a huge influx of immigrants, and investments in commercial and residential buildings, the construction industry grew rapidly. However, this came at the cost of many fatalities and injuries since men were required to work without any standardized safety precautions and their employers were not incentivized to provide any safety equipment.
In 1877, the state of Massachusetts began implementing safety and health legislation. Even though the first safety laws primarily concentrated on the working conditions and safety practices within factories and other workplaces, they paved the way for efforts geared towards roof edge protection as construction skyrocketed in U.S. cities. In the following years, equipment such as guards was developed to protect workers from falling from edges. By 1913, the National Council for Industrial Safety was established in order to collect data and information on industrial deaths, and this provided the government with evidence of the need for injury and fatality prevention at work.
By the 1920s, body belts were the standard for fall protection, but there was a risk of the belt slipping over the user's shoulders in the event of a fall. Users also had to manually tie and retie lines for the body belt. At this time, steel guardrails were also present as highway guardrails, and this became common practice in manufacturing and industrial settings as well. In the building of the Golden Gate Bridge, the danger and difficulty of the construction project also allowed safety to take center stage. Civil engineer Joseph Baermann Strauss insisted on safety measures such as hardhats, glare-free goggles, and most notably, a safety net under the developing bridge to prevent fatalities from falling. In the 1940s, a better alternative to the body belt, the safety harness, was created. By the 1970s, the Occupational Safety and Health Administration (OSHA) was established and began issuing standard updates for fall protection in the construction industry. In 1994, OSHA also issued the Subpart M Fall Protection Standard, which required roof edge protection to be in place where employees were working six feet or more above a lower level. Since then, OSHA has periodically introduced and updated its requirements and recommendations for fall protection to meet the changing circumstances within the construction industry.
Legal requirements
Edge protection is often required to meet strict technical safety standards set by government authorities. Depending on occupational safety requirements in any given country or region, the quality of edge protection used on building and construction sites can be examined by government inspectors.
Many US federal and state laws require building contractors to implement measures to prevent workers falling from heights. These laws often expressly require the use of edge protection or harness systems. In the UK, edge protection guidelines are set out by the Edge Protection Federation's Code of Practice, which itself has been written to comply with BS EN 13374.
See also
Construction site safety
Occupational safety
Roofer
Fall protection
References
External links
Fall Protection
Dalton Roofing
Safety codes
Construction safety
Edge Protection | Roof edge protection | [
"Technology",
"Engineering"
] | 720 | [
"Structural engineering",
"Structural system",
"Construction",
"Construction safety",
"Roofs"
] |
11,781,050 | https://en.wikipedia.org/wiki/ITerating | ITerating was a Wiki-based software guide, where users could find, compare and give reviews to software products. As of January 2021 the domain is listed as being for sale and the website no longer on-line. Founded in October 2005, and based in New York, ITerating was created by CEO Nicolas Vandenberghe, who saw that there was an industry need for a comprehensive resource to help evaluate software solutions.
The site aimed to be a reference guide for the IT industry and included reviews, ratings, articles, and detailed product feature comparisons. ITerating used Semantic Web tools (including RDF, the Resource Description Framework) to combine user edits with Web service feeds from other sites.
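As an illustration of the kind of RDF merging described here, the sketch below uses the Python rdflib library to combine a user-contributed statement with one pulled from an external feed into a single graph. The vocabulary and URIs are hypothetical; ITerating's actual schema is not documented in this article:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical vocabulary for a software guide, purely for illustration.
EX = Namespace("http://example.org/software-guide#")

g = Graph()
product = URIRef("http://example.org/products/some-wiki-engine")

# A statement contributed by a user edit...
g.add((product, RDF.type, EX.SoftwareProduct))
g.add((product, EX.userRating, Literal(4.5)))
# ...merged with a statement harvested from an external web-service feed.
g.add((product, EX.latestVersion, Literal("2.1.0")))

# Both sources now live in one queryable graph, serialized as Turtle.
print(g.serialize(format="turtle"))
```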
Designed for use by developers and industry consultants, ITerating allowed users to contribute to categories such as Software Engineering Tools; Website Design & Tools; Website Software Tools; Website & Communication Applications & Social Networking; or to create their own category if one did not yet exist.
Wiki Matrix
ITerating announced the addition of a Feature Matrix in June 2007, which allowed users to dynamically create customized, side-by-side feature comparisons of software solutions.
References
Online databases
Computing websites
Software companies based in New York (state)
Defunct software companies of the United States | ITerating | [
"Technology"
] | 256 | [
"Computing websites"
] |
11,781,302 | https://en.wikipedia.org/wiki/Astronomy%20Letters | Astronomy Letters (Russian: Pis’ma v Astronomicheskii Zhurnal) is a Russian peer-reviewed scientific journal. The journal covers research on all aspects of astronomy and astrophysics, including high energy astrophysics, cosmology, space astronomy, theoretical astrophysics, radio astronomy, extra galactic astronomy, stellar astronomy, and investigation of the Solar system.
Pis’ma v Astronomicheskii Zhurnal is translated into English by MAIK Nauka/Interperiodica, which is also the official publisher. However, beginning in 2006, access and distribution outside of Russia have been handled through Springer Science+Business Media. Both English and Russian versions are published simultaneously.
Astronomy Letters was established in 1994 and was initially published bimonthly; since 1999, it has been published monthly. The editor-in-chief is Rashid A. Sunyaev (Space Research Institute).
Abstracting and indexing
Astronomy Letters is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.075.
See also
Astronomy Reports
References
External links
Website on Springer (English)
Pis'ma v Astronomicheskii Zhurnal (English)
Astronomy journals
English-language journals
Academic journals established in 1994
Astronomy in Russia
Springer Science+Business Media academic journals
Nauka academic journals | Astronomy Letters | [
"Astronomy"
] | 275 | [
"Astronomy journals",
"Astronomy journal stubs",
"Works about astronomy",
"Astronomy stubs"
] |
320,308 | https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group | The North American Network Operators' Group (NANOG) is a forum for the coordination and dissemination of information to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and a mailing list for Internet service providers. The main method of communication is the NANOG mailing list (known informally as NANOG-l), a free mailing list to which anyone may subscribe or post.
History
NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues. At the February 1994 regional tech meeting in San Diego, the group revised its charter to include a broader base of network service providers and subsequently adopted NANOG as its new name. NANOG was organized by Merit Network, a non-profit Michigan organization, from 1994 through 2011, when it was transferred to NewNOG.
Funding
Funding for NANOG originally came from the National Science Foundation as part of two projects Merit undertook in partnership with NSF and other organizations: the NSFNET Backbone Service and the Routing Arbiter project. Subsequently, all NANOG funds came from conference registration fees and donations from vendors and, starting in 2011, membership dues.
Meetings
NANOG meetings are held three times each year and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also lightning talks, where speakers can give brief presentations (no longer than 10 minutes) submitted on very short notice. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. In addition to the conferences, NANOG On the Road events offer single-day networking events.
NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee.
See also
Internet network operators' group
References
External links
Routing Arbiter Project
CIO official website
Ripe Labs - Network operations
Computer networking
Electronic mailing lists
Internet Network Operators' Groups
History of the Internet | North American Network Operators' Group | [
"Technology",
"Engineering"
] | 447 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
320,320 | https://en.wikipedia.org/wiki/Speculum%20%28medicine%29 | A speculum (Latin for 'mirror'; : specula or speculums) is a medical tool for investigating body orifices, with a form dependent on the orifice for which it is designed. In old texts, the speculum may also be referred to as a diopter or dioptra. Like an endoscope, a speculum allows a view inside the body; endoscopes, however, tend to have optics while a speculum is intended for direct vision.
History
Vaginal and anal specula were used by the ancient Greeks and Romans, and speculum artifacts have been found in Pompeii. The modern vaginal speculum consists of a hollow cylinder with a rounded end that is divided into two hinged parts, somewhat like the beak of a duck. This speculum is inserted into the vagina to dilate it for examination of the vagina and cervix.
The modern vaginal speculum was developed by J. Marion Sims, a plantation doctor in Lancaster County, United States. Between 1845 and 1849, Sims performed dozens of surgeries, without anesthesia, on at least 12 enslaved women. In these experiments, Sims developed a technique to repair vesicovaginal fistulas and in the process invented the duckbill speculum. These experiments, and the development of the modern specula, led some to regard Sims as the "father of modern gynaecology."
By the 1860s, specula were integrated into criminal justice practices in the UK. In Great Britain, examinations of the cervix were made mandatory for all women convicted of prostitution by the country's Contagious Disease Act.
In the 19th century, the vaginal speculum became a cultural symbol of the tenuous relationship between women and their physicians. Many practitioners had moral concerns about the instrument and generally avoided it, preferring to diagnose vaginal conditions through symptoms or by palpating the abdomen. As late as 1910, physicians believed the vaginal speculum to be inferior to the "educated touch."
These concerns continued into the early 20th century as the speculum became commonplace in gynecology practices. Often, nurses played a major role in ensuring the proper use of the speculum during medical exams. The 1946 and 1956 editions of a multi-volume gynecology text for nurses required that nurses remain present during examination to protect both the patient and physician from "blackmail by designing persons."
Today, 85% of gynecologists are women. As a result of this demographic shift, the procedures around speculum use have also changed.
Construction
Specula have been made of glass or metal. They were generally made of stainless steel and sterilized between uses, but particularly in the 21st century, many — especially those used in emergency departments and doctor's offices — are made of plastic, and are disposable, single-use items. Those used in surgical suites are still commonly made of stainless steel.
Types
Specula come in a variety of shapes based on their purpose, and a variety of sizes; in any case the cylinder or bill(s) of the instrument allow the operator a direct vision of the area of interest and the possibility to introduce instruments for further interventions such as a biopsy.
Vaginal
The most common specula used in gynecologic practice are varying sizes of bivalved vaginal speculum; the two bills are hinged and are "closed" when the speculum is inserted to facilitate its entry and "opened" in its final position where they can be arrested by a screw mechanism, so that the operator is freed from keeping the bills apart.
The dilating vaginal speculum (also known as the Veda-scope), a cylindrical speculum introduced in 2001 and invented by Clemens van der Weegen, inflates the vagina with filtered air. The device has two main functions: a) to take a normal Pap smear with a cervical brush or a cytology brush; and b) to act as an internal colposcope, so that the operator can pivot the Veda-scope to view any part of the vaginal barrel and cervix, aided by an internal light source that can illuminate the vaginal wall and cervix with multi-coloured light filters, which can detect pre-cancerous cells with the aid of acetic acid solution and iodine solution. It also has a facility to attach a digital camera for viewing and recording.
A specialized form of vaginal speculum is the weighted speculum, which consists of a broad half tube which is bent at about a 90-degree angle, with the channel of the tube on the exterior side of the angle. One end of the tube has a roughly spherical metal weight surrounding the channel of the speculum. A weighted speculum is placed in the vagina during vaginal surgery with the patient in the lithotomy position. The weight holds the speculum in place and frees the surgeon's hands for other tasks.
A vaginal speculum is also used in fertility treatments, particularly artificial insemination, and allows the vaginal cavity to be opened and observed thereby facilitating the deposit of semen into the vagina.
Cylindrical shape
One bill
Two bills (bivalved)
Three bills
Rectal
Vaginal specula are also used for anal surgery, although several other forms of anal specula exist. One form, the anoscope, resembles a tube that has a removable bullet-shaped insert. When the anoscope is inserted into the anus, the insert dilates the anus to the diameter of the tube. The insert is then removed, leaving the tube to allow examination of the lower rectum and anus.
This style of anal speculum is one of the oldest designs for surgical instruments still in use, with examples dating back many centuries. The sigmoidoscope can be further advanced into the lower intestinal tract and requires an endoscopic set-up.
Tubal shape
One bill
Two bills
Three bills
Nasal
Nasal specula have two relatively flat bills with a handle. The instrument is hinged so that when the handles are squeezed together the bills spread laterally, allowing examination.
Additionally, the Thudichum nasal speculum is commonly used in the outpatient examination of the nose.
Aural
Ear or aural specula resemble a funnel, and come in a variety of sizes.
Eyelid
For ophthalmic surgery such as cataract surgery, a speculum designed to retract the eyelids is used.
Oral
In veterinary medicine, a McPherson speculum can be used for oral examination. The speculum helps keep the mouth open during the exam and helps avoid biting injuries.
Non-medical use
Specula are used for sexual pleasure, both vaginally and anally.
See also
References
External links
Gynaecology
Medical equipment | Speculum (medicine) | [
"Biology"
] | 1,423 | [
"Medical equipment",
"Medical technology"
] |
320,323 | https://en.wikipedia.org/wiki/Causeway | A causeway is a track, road or railway on the upper point of an embankment across "a low, or wet place, or piece of water". It can be constructed of earth, masonry, wood, or concrete. One of the earliest known wooden causeways is the Sweet Track in the Somerset Levels, England, which dates from the Neolithic age. Timber causeways may also be described as both boardwalks and bridges.
Etymology
When first used, the word causeway appeared in a form such as "causey way", making clear its derivation from the earlier form "causey". This word seems to have come from the same source by two different routes. It derives ultimately from the Latin for heel, calx, and most likely comes from the trampling technique used to consolidate earthworks.
Originally, the construction of a causeway used earth that had been trodden upon to compact and harden it as much as possible, one layer at a time, often by slaves or flocks of sheep. Today, this work is done by machines. The same technique would have been used for road embankments, raised river banks, sea banks and fortification earthworks.
The second derivation route is simply the hard, trodden surface of a path. The name by this route came to be applied to any firmly surfaced road. It is now little-used except in dialect and in the names of roads which were originally notable for their solidly made surface. The 1911 Encyclopædia Britannica states: "causey, a mound or dam, which is derived, through the Norman-French (cf. modern chaussée), from the late Latin via calciata, a road stamped firm with the feet (calcare, to tread)."
The word is comparable in both meanings with the French chaussée, from a form of which it reached English by way of Norman French. The French adjective chaussé carries the meaning of having been given a hardened surface and is used to mean either paved or shod. As a noun, chaussée is used on the one hand for a metalled carriageway, and on the other for an embankment with or without a road.
Other languages have a noun with similar dual meaning. In Welsh, it is sarn. The Welsh is relevant here, as the language also has a verb, sarnu, meaning to trample. The trampling and ramming technique for consolidating earthworks was used in fortifications, and there is a comparable, outmoded form of wall construction used in such work and known as pisé, a word derived not from trampling but from ramming or tamping. The Welsh word translates directly to the English word 'causeway'; it is possible that, with Welsh being a lineal linguistic descendant of the original native British tongues, the English word derives from the Welsh.
A transport corridor that is carried instead on a series of arches, perhaps approaching a bridge, is a viaduct; a short stretch of viaduct is called an overpass. The distinction between the terms causeway and viaduct becomes blurred when flood-relief culverts are incorporated, though generally a causeway refers to a roadway supported mostly by earth or stone, while a bridge supports a roadway between piers (which may be embedded in embankments). Some low causeways across shore waters become inaccessible when covered at high tide.
History
The Aztec city-state of Tenochtitlan had causeways supporting roads and aqueducts. One of the oldest engineered roads yet discovered is the Sweet Track in England. Built in 3807 or 3806 BC, the track was a walkway consisting mainly of planks of oak laid end-to-end, supported by crossed pegs of ash, oak, and lime, driven into the underlying peat.
In East Africa, the Husuni Kubwa (the "Great Fort"), situated outside the town of Kilwa, was an early 14th-century sultan's palace and emporium that contained causeways and platforms at the entrance of the harbour, made from blocks of reef and coral nearly a meter high. These acted as breakwaters, allowing mangroves to grow, which is one of the ways the breakwater can be spotted from a distance. Some parts of the causeway are made from the bedrock, but usually the bedrock was used as a base. Coral stone was also used to build up the causeways, with sand and lime cementing the cobbles together. However, some of the stones were left loose.
In Scotland, the skirmish known as Clense the Calsey, or Cleanse the Causeway, took place in the High Street of Edinburgh in 1520.
In the 18th century, Dahomey lacked an effective navy; hence it built causeways for naval purposes starting in 1774.
Engineering
The modern embankment may be constructed within a cofferdam: two parallel steel sheet pile or concrete retaining walls, anchored to each other with steel cables or rods. This construction may also serve as a dyke that keeps two bodies of water apart, such as bodies with a different water level on each side, or with salt water on one side and fresh water on the other. This may also be the primary purpose of a structure, the road providing a hardened crest for the dike, slowing erosion in the event of an overflow. It also provides access for maintenance, as well as, perhaps, a public service.
Examples
Notable causeways include those that connect Singapore and Malaysia (the Johor-Singapore Causeway), Bahrain and Saudi Arabia (25-km long King Fahd Causeway) and Venice to the mainland, all of which carry roadways and railways. In the Netherlands there are a number of prominent dikes which also double as causeways, including the Afsluitdijk, Brouwersdam, and Markerwaarddijk. In the Republic of Panama a causeway connects the islands of Perico, Flamenco, and Naos to Panama City on the mainland. It also serves as a breakwater for ships entering the Panama Canal.
Causeways are also common in Florida, where low bridges may connect several human-made islands, often with a much higher bridge (or part of a single bridge) in the middle so that taller boats may pass underneath safely. Causeways are most often used to connect the barrier islands with the mainland. In the case of the Courtney Campbell Causeway, however, the mainland (Hillsborough County) is connected by a causeway to a peninsula (Pinellas County). A well-known causeway is the NASA Causeway connecting the town of Titusville on the Florida mainland to the rocket-launching facility at the Kennedy Space Center on Merritt Island.
The Churchill Barriers in Orkney are some of the most notable sets of causeways in Europe. Constructed in waters up to 18 metres deep, the four barriers link five islands on the eastern side of the natural harbour at Scapa Flow. They were built during World War II as military defences for the harbour, on the orders of Winston Churchill.
The Estrada do Istmo connecting the islands of Taipa and Coloane in Macau was initially built as a causeway. The sea on both sides of the causeway then became shallower as a result of silting, and mangroves began to conquer the area. Later, land reclamation took place on both sides of the road and the area has subsequently been named Cotai and become home to several casino complexes.
Specific causeways around the world
Various causeways in the world:
Adam's Bridge, historic causeway which existed until 1480 CE
Afsluitdijk, Netherlands.
Canso Causeway, Nova Scotia, Canada
Churchill Barriers, 4 causeways in Orkney, Scotland
Colaba Causeway, Mumbai, India
Courtney Campbell Causeway, Tampa Bay, Florida, United States
Gaoji Causeway, Xiamen, China
Hindenburgdamm, 11 km rail link between the island of Sylt and the German mainland
Johor–Singapore Causeway
King Fahd Causeway, Bahrain, Saudi Arabia
Lake Cuitzeo Causeway, Michoacán, Mexico (19.9380°N 101.1547°W)
Lake Pontchartrain Causeway, Metairie, Louisiana, Southern; Mandeville, Louisiana, Northern, United States
Lucin Cutoff railroad causeway across the Great Salt Lake in Utah, USA
MacArthur Causeway, Florida, United States
Mahim Causeway, Mumbai, India
Pulaski Skyway
Robert Moses Causeway, Bay Shore, Long Island, New York, United States
Rømødæmningen, 9 km link between Rømø and the Danish mainland
Sanibel Causeway, Sanibel, Florida, United States
Sloedam, Zeeland, Netherlands
Swarkestone causeway, Derby, England, United Kingdom
The Causeway, Bermuda
The Causeway, Western Australia
Venice
Yolo Causeway, California, USA
Ponte Conde de Linhares, Panjim, Goa, India. The 3.2 km route was the longest causeway in Asia at the time of its completion in 1634.
Cherkasy Dam, Ukraine
Cayo Santa María Causeway, Cuba
Cayo Coco Causeway, Cuba
Disadvantages
Unlike tunnels or bridges, causeways do not permit shipping to pass through the strait they cross, which can cause problems. In some cases, causeways were built with "gates" or other facilities to permit shipping to pass through.
Ecological consequences
Causeways affect currents and may therefore be involved in beach erosion or changed deposition patterns; this effect has been a problem at the Hindenburgdamm in northern Germany. During hurricane seasons, the winds and rains of approaching tropical storms, as well as waves generated by the storm in the surrounding bodies of water, make traversing causeways problematic at best and extremely dangerous during the fiercest parts of the storms. For this reason (and related reasons, such as the need to minimize traffic jams on both the causeway and the roads approaching it), emergency evacuation of island residents is a high priority for local, regional, and even national authorities.
Causeways can separate populations of wildlife, putting further pressure on endangered species.
Causeways can cause a mineral imbalance between portions of a body of water. For example, a causeway built in the Great Salt Lake has caused the northern half of the lake to have much higher salinity, to the point that the two halves show a major color imbalance. Furthermore, the difference in salinity has become so severe that native brine shrimp cannot survive in much of the waters, with the northern part being too salty and the southern part being insufficiently salty.
Gallery
See also
Causey Arch, County Durham, England
Causewayed enclosure
Kūlgrinda
Sacbe
References
Further reading
External links
Buildings and structures by type
Transport buildings and structures | Causeway | [
"Engineering"
] | 2,174 | [
"Buildings and structures by type",
"Architecture"
] |
320,340 | https://en.wikipedia.org/wiki/Needle%20and%20syringe%20programmes | A needle and syringe programme (NSP), also known as needle exchange program (NEP), is a social service that allows injection drug users (IDUs) to obtain clean and unused hypodermic needles and associated paraphernalia at little or no cost. It is based on the philosophy of harm reduction that attempts to reduce the risk factors for blood-borne diseases such as HIV/AIDS and hepatitis.
History
Needle-exchange programmes can be traced back to informal activities undertaken during the 1970s. The idea is likely to have been rediscovered in multiple locations. The first government-approved initiative (Netherlands) was undertaken in the early to mid-1980s, followed closely by initiatives in the United Kingdom and Australia by 1986. While the initial programme was motivated by an outbreak of hepatitis B, the AIDS pandemic motivated the rapid adoption of these programmes around the world.
Operation
Needle and syringe programs operate differently in different parts of the world; the first NSPs in Europe and Australia gave out sterile equipment to drug users, having begun in the context of the early AIDS epidemic. The United States took a far more reluctant approach, typically requiring IDUs to bring in already-used needles to exchange for sterile ones; under this "one-for-one" system, the same number of syringes must be returned as are handed out.
According to Santa Cruz County, California, exchange staff interviewed by Santa Cruz Local in 2019, it is common practice not to count the number of exchanged needles exactly, but rather to estimate the number based on a container's volume. Holyoke, Massachusetts, also uses the volume system. The United Nations Office on Drugs and Crime for South Asia suggests visual estimation or asking the client how many they brought back. The volume-based method left potential for gaming the system, and one exchange agency in Vancouver devoted significant effort to doing so.
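A toy version of the volume-based counting described above: staff credit a client with an estimated number of syringes based on how full a sharps container appears, rather than counting items. Every number here is an illustrative assumption, not a figure from any program:

```python
# Illustrative assumptions: a 5-litre sharps container and an assumed
# packing density for used 1 ml syringes. Real programs calibrate these.
CONTAINER_CAPACITY_ML = 5000
SYRINGES_PER_LITRE = 140

def estimate_returned(fill_fraction: float) -> int:
    """Estimate returned syringes from how full the container looks."""
    if not 0.0 <= fill_fraction <= 1.0:
        raise ValueError("fill_fraction must be between 0 and 1")
    litres = CONTAINER_CAPACITY_ML * fill_fraction / 1000
    return round(litres * SYRINGES_PER_LITRE)

# A container judged to be about three-quarters full:
print(estimate_returned(0.75))  # -> 525 syringes credited to the client
```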
Some programs, such as Columbus Public Health in Ohio, weigh the returned sharps rather than counting them.
The practices and policies vary between needle and syringe program sites. In addition to exchange, there is a model called "needs-based" where the syringes are handed out without requiring any to be returned.
According to a report published in 1994, Montreal's CACTUS exchange, which had a one-for-one-plus-one policy with a limit of 15 needles, had a return rate of 75-80% between 1991 and 1993.
An exchange in Boulder, Colorado, implemented a one-for-one with four starter needles and reported an exchange rate of 89.1% in 1992.
In the United States, where the one-for-one system still dominates, some 25% of injecting drug users are living with HIV. In Australia, which hands out equipment for free to anyone needing it (only charging a small fee for some more expensive equipment, like wheel filters and higher-quality tourniquets), only 1% of the IDU population was HIV-positive as of 2015, compared to over 20% in the late 1980s, when NSP programs began to spread nationally and became accessible to most of the population.
International experience
Programs providing sterile needles and syringes currently operate in 87 countries around the world. A comprehensive 2004 study by the World Health Organization (WHO) found a "compelling case that NSPs substantially and cost effectively reduce the spread of HIV among IDUs and do so without evidence of exacerbating injecting drug use at either the individual or societal level." WHO's findings have also been supported by the American Medical Association (AMA), which in 2000 adopted a position strongly supporting NSPs when combined with addiction counseling.
Australia
The Melbourne, Australia, inner-city suburbs of Richmond and Abbotsford are locations in which the use and dealing of heroin has been concentrated. The Burnet Institute research organisation completed the 2013 'North Richmond Public Injecting Impact Study' in collaboration with the Yarra Drug and Health Forum and North Richmond Community Health Centre and recommended 24-hour access to sterile injecting equipment due to the ongoing "widespread, frequent and highly visible" nature of illicit drug use in the areas. Between 2010 and 2012, a four-fold increase in the levels of inappropriately discarded injecting equipment was documented for the two suburbs. In the surrounding City of Yarra, an average of 1,550 syringes per month was collected from public syringe disposal bins in 2012. Paul Dietze stated, "We have tried different measures and the problem persists, so it's time to change our approach".
On 28 May 2013, the Burnet Institute stated that it recommended 24-hour access to sterile injecting equipment in the Melbourne suburb of Footscray after the area's drug culture continued to grow after more than ten years of intense law enforcement efforts. The institute's research concluded that public injecting behaviour is frequent in the area and injecting paraphernalia has been found in carparks, parks, footpaths, and drives. Furthermore, people who inject drugs have broken into syringe disposal bins to reuse discarded equipment.
A study commissioned by the Australian Government revealed that for every A$1 invested in NSPs in Australia, $4 was saved in direct healthcare costs, and if productivity and economic benefits are included, the programs returned a staggering $27 for every $1 invested. The study notes that over a longer time horizon than that considered (10 years) the cost-benefit ratio grows even further. In terms of infections averted and lives saved, the study finds that, between 2000 and 2009, 32,000 HIV infections and 96,667 hepatitis C infections were averted, and approximately 140,000 disability-adjusted life years were gained.
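The reported ratios lend themselves to a back-of-envelope restatement; the annual spend used below is a hypothetical figure chosen for illustration, not a number from the study:

```python
# Restating the study's reported benefit-cost ratios (4:1 for direct
# healthcare savings, 27:1 including productivity and economic benefits).
# The A$10M spend is a made-up example figure, not from the study.
DIRECT_RATIO = 4.0
TOTAL_RATIO = 27.0

def projected_benefit(spend_aud: float, ratio: float) -> float:
    """Benefit implied by a given spend at a reported benefit-cost ratio."""
    return spend_aud * ratio

spend = 10_000_000  # hypothetical annual NSP spend
print(projected_benefit(spend, DIRECT_RATIO))  # A$40M direct healthcare savings
print(projected_benefit(spend, TOTAL_RATIO))   # A$270M including productivity
```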
United Kingdom
From the 1980s, Maggie Telfer from the Bristol Drugs Project advocated for needle exchanges to be established in the United Kingdom. The British public body, the National Institute for Health and Care Excellence (NICE), introduced a recommendation in April 2014 due to an increase in the number of young people who inject steroids at UK needle exchanges. NICE previously published needle exchange guidelines in 2009, in which needle and syringe services were not advised for people under 18, but the organisation's director Professor Mike Kelly explained that a "completely different group" of people were presenting at programmes. In the updated guidance, NICE recommended the provision of specialist services for "rapidly increasing numbers of steroid users", and that needles should be provided to people under the age of 18—a first for NICE—following reports of 15-year-old steroid injectors seeking to develop their muscles.
United States
The first program in the United States to be operated at public expense was established in Tacoma, Washington, in November 1988. The Centers for Disease Control and Prevention and the National Institutes of Health confirm that needle exchange is an effective strategy for the prevention of HIV. The NIH estimated in 2002 that in the United States, 15–20% of injection drug users have HIV and at least 70% have hepatitis C. The Centers for Disease Control (CDC) reports that one-fifth of all new HIV infections and the vast majority of hepatitis C infections are the result of injection drug use. The United States Department of Health and Human Services reported that 7% of HIV infections in 2018 (about 2,400 cases) were among drug users.
Portland, Oregon, was among the first cities in the nation to expend public funds on an NSP, which opened in 1989. It is also one of the longest-running programmes in the country. Despite the word "exchange" in the programme name, the Portland needle exchange operated by Multnomah County hands out syringes to addicts who do not present any to exchange. The exchange programme reports that 70% of its users are transients who experience "homelessness or unstable housing". It was reported that during the fiscal year 2015–2016, the county dispensed 2,478,362 syringes and received 2,394,460 back, a shortfall of 83,902 needles. In 2016, it was reported that the Cleveland needle exchange program sees "mostly white suburban kids ages 18 to 25".
San Francisco
Since the full sanction of syringe exchange programs (SEP) by then-Mayor Frank Jordan in 1993, the San Francisco Department of Public Health has been responsible for the management of syringe access and the proper disposal of these devices in the city. This sanction, which was originally executed as a state of emergency to address the HIV epidemic, allowed SEPs to provide sterile syringes, take back used devices, and operate as a service for health education to support individuals struggling with substance use disorders. It was approximated that from July 1, 2017, to December 31, 2017, only 1,672,000 of the 3,030,000 distributed needles (about 55%) were returned to the designated sites. In April 2018, acting Mayor Mark Farrell allocated $750,000 towards the removal of abandoned needles littering the streets of San Francisco.
General characteristics
As of 2011, at least 221 programmes operated in the US. Most (91%) were legally authorized to operate; 38.2% were managed by their local health authorities. The CDC reported in 1993 that the most significant expense for NSPs is personnel, representing 66% of the budget.
More than 36 million syringes were distributed annually, mostly through large urban programmes operating a stationary site. More generally, US NEPs distribute syringes through a variety of methods including mobile vans, delivery services and backpack/pedestrian routes that include secondary (peer-to-peer) exchange.
Funding
In the United States, a ban on federal funding for needle exchange programs began in 1988, when Republican North Carolina Senator Jesse Helms led Congress to enact a prohibition on the use of federal funds to encourage drug abuse. The ban was briefly lifted in 2009, reinstated in 2010, and partially lifted again in 2015. Currently, federal funds still cannot be used for the purchase of needles and syringes or other injecting paraphernalia by needle exchange programs, though they can be used for training and other program support in the case of a declared public health emergency. In the period between 2010 and 2011 when no ban was in place, at least three programmes were able to obtain federal funds and two-thirds reported planning to pursue such funding. A 1997 study estimated that while the funding ban was in effect, it "may have led to HIV infection among thousands of IDUs, their sexual partners, and their children." US NEPs continue to be funded through a mixture of state and local government funds, supplemented by private donations. The funding ban was effectively lifted for every aspect of the exchanges except the needles themselves in the omnibus spending bill passed in December 2015 and signed by President Obama. This change was first suggested by Kentucky Republicans Hal Rogers and Mitch McConnell, according to their spokespeople.
Legal aspects
Many states criminalized needle possession without a prescription, arresting people as they left underground needle exchange efforts. In some jurisdictions, such as New York, needle exchange activists challenged the laws in court, with judges ruling that their actions were justified by a "necessity defense", which permits breaking a law to prevent imminent harm. In other jurisdictions where syringe possession without a prescription remained illegal, physician-based prescription programmes have shown promise. Epidemiological research demonstrating that syringe access programmes are both effective and cost-effective helped to change state and local NEP-operation laws, as well as the status of syringe possession more broadly. For example, between 1989 and 1992, three exchanges in New York City tagged syringes to help demonstrate rates of return prior to the legalization of the approach.
By 2012, legal syringe exchange programmes existed in at least 35 states. In some settings, syringe possession and purchase is decriminalized, while in others, authorized NEP clients are exempt from certain drug paraphernalia laws. However, despite the legal changes, gaps between the formal law and its enforcement on the street mean that many programmes continue to face law enforcement interference, and covert programmes continue to exist within the U.S.
Colorado allows covert syringe exchange programmes to operate. Current Colorado laws leave room for interpretation over whether a prescription is required to purchase syringes. Based on such laws, the majority of pharmacies do not sell syringes without a prescription and police arrest people who possess syringes without a prescription. The Boulder County health department reports that between January 2012 and March 2012, the group received over 45,000 dirty needles and distributed around 45,200 sterile syringes.
As of 2017, NSPs are illegal in 15 states. NSPs are prohibited by local regulations in cities in Orange County, California, even though state law does not disallow them.
Law enforcement
Conflict with law enforcement
Removal of legal barriers to the operation of NEPs and other syringe access initiatives has been identified as an important part of a comprehensive approach to reducing HIV transmission among IDUs. Legal barriers include both "law on the books" and "law on the streets", i.e., the actual practices of law enforcement officers, which may or may not reflect relevant law. Changes in syringe and drug control policy can be ineffective in reducing such barriers if police continue to treat syringe possession as a crime or participation in NEP as evidence of criminal activity.
Although most US NEPs operate legally, many report some form of police interference. In a 2009 national survey of 111 US NEP managers, 43% reported at least monthly client harassment, 31% reported at least monthly unauthorized confiscation of clients' syringes, 12% reported at least monthly client arrest en route to or from the NEP, and 26% reported uninvited police appearances at program sites at least every six months. In multivariate modeling, the legal status of the program (operating legally vs illegally) and the jurisdiction's syringe regulation environment were not associated with frequency of police interference.
A detailed 2011 analysis of NEP client experiences in Los Angeles suggested that as many as 7% of clients report negative encounters with security officers in any given month. Given that syringes are not prohibited in the jurisdiction and their confiscation can only occur as part of an otherwise authorized arrest, almost 40% of those who reported syringe confiscation were not arrested. This raises concerns about extrajudicial confiscation of personal property. Approximately 25% of the encounters detailed by respondents involved private security personnel, rather than local police.
Similar findings have emerged internationally. For example, despite instituting laws protecting syringe access and possession and adopting NEPs, IDUs and sex workers in Mexico's Northern Border regions report frequent syringe confiscation by law enforcement personnel. In this region as well as elsewhere, reports of syringe confiscation are correlated with increases in risky behaviors, such as groin injecting, public injection and utilization of pharmacies. These practices translate to risk for HIV and other blood-borne diseases.
Racial gradient
NEPs serving predominantly IDUs of color may be almost four times more likely to report frequent client arrest en route to or from the program and almost four times more likely to report unauthorized syringe confiscation. A 2005 study in Philadelphia found that African-Americans accessing the city's legally operated exchange decreased at more than twice the rate of white individuals after the initiation of a police anti-drug operation. These and other findings illustrate a possible mechanism by which racial disparities in law enforcement can translate into disparities in HIV transmission. The majority (56%) of respondents reported not documenting adverse police events; those who did were 2.92 times more likely to report unauthorized syringe confiscation. These findings suggest that systematic surveillance and interventions are needed to address police interference.
Causes
Police interference with legal NEP operations may be partially explained by gaps in training. A study of police officers in an urban police department four years after the decriminalization of syringe purchase and possession in the US state of Rhode Island suggested that up to a third of police officers were not aware that the law had changed. This knowledge gap parallels other areas of public health law, underscoring pervasive gaps in dissemination.
Even police officers with accurate knowledge of the law, however, reported intention to confiscate syringes from drug users as a way to address problematic substance use. Police also reported anxiety about accidental needle sticks and acquiring communicable diseases from IDUs, but were not trained or equipped to deal with this occupational risk; this anxiety was intertwined with negative attitudes towards syringe access initiatives.
Training and interventions to address law enforcement barriers
US NEPs have successfully trained police, especially when framed as addressing police occupational safety and human resources concerns. Preliminary evidence also suggests that training can shift police knowledge and attitudes regarding NEPs specifically and public health-based approaches towards problematic drug use in general.
According to a 2011 survey, 20% of US NEPs reported training police during the previous year. Covered topics included the public health rationale behind NEPs (71%), police occupational health (67%), needle stick injury (62%), NEPs' legal status (57%), and harm reduction philosophy (67%). On average, training was seen as moderately effective, but only four programmes reported conducting any formal evaluation. Assistance with training police was identified by 72% of respondents as the key to improving police relations.
Advocacy
Organizations including the NIH, the CDC, the American Bar Association, the American Medical Association, the American Psychological Association, the World Health Organization and many others have endorsed low-threshold programmes including needle exchange.
Needle exchange programmes have faced opposition on both political and moral grounds, from advocacy groups including the National District Attorneys Association (NDAA), Drug Watch International, The Heritage Foundation, and Drug Free Australia, and from religious organizations such as the Catholic Church.
In the United States, NEPs have proliferated despite a lack of public acceptance. Internationally, needle exchange is widely accepted.
Research
Disease transmission
Two 2010 'reviews of reviews' by a team originally led by Norah Palmateer that examined systematic reviews and meta-analyses on the topic found insufficient evidence that NSP prevents transmission of the hepatitis C virus, tentative evidence that it prevents transmission of HIV, and sufficient evidence that it reduces self-reported risky injecting behaviour. In a comment Palmateer warned politicians not to use her team's review of reviews as a justification to close existing programmes or to hinder the introduction of new needle-exchange schemes. The weak evidence on the programmes' disease prevention effectiveness is due to inherent design limitations of the reviewed primary studies and should not be interpreted as the programmes lacking preventive effects.
The second of the Palmateer team's 'review of reviews' scrutinised 10 previous formal reviews of needle exchange studies, and after critical appraisal only four reviews were considered rigorous enough to meet the inclusion criteria. Those were done by the teams of Gibson (2001), Wodak and Cooney (2004), Tilson (2007) and Käll (2007). The Palmateer team judged that their conclusion in favour of NSP effectiveness was not consistent with the results from the HIV studies they reviewed.
The Wodak and Cooney review had, from 11 studies of what they determined as demonstrating acceptable rigour, found 6 that were positive regarding the effectiveness of NSPs in preventing HIV, 3 that were negative and 2 inconclusive. However a review by Käll et al. disagreed with the Wodak and Cooney review, reclassifying the studies on NSP effectiveness to 3 positive, 3 negative and 5 inconclusive. The US Institute of Medicine evaluated the conflicting evidence of both Drs Wodak and Käll in their Geneva session and concluded that although multicomponent HIV prevention programmes that include needle and syringe exchange reduced intermediate HIV risk behavior, evidence regarding the effect of needle and syringe exchange alone on HIV incidence was limited and inconclusive, given "myriad design and methodological issues noted in the majority of studies." Four studies that associated needle exchange with reduced HIV prevalence failed to establish a causal link, because they were designed as population studies rather than assessing individuals.
NEPs successfully serve as one component of HIV prevention strategies. Multi-component HIV prevention programmes that include NSE reduce drug-related HIV risk behaviors and enhance the impact of harm reduction services.
Tilson (2007) concluded that only comprehensive packages of services in multi-component prevention programmes can be effective in reducing drug-related HIV risks. In such packages, it is unclear what the relative contribution of needle exchange may be to reductions in risk behavior and HIV incidence.
Multiple examples can be cited showing the relative ineffectiveness of needle exchange programmes alone in stopping the spread of blood-borne disease. Many needle exchange programmes do not make any serious effort to treat drug addiction. For example, David Noffs of the Life Education Center wrote, "I have visited sites around Chicago where people who request info on quitting their habit are given a single sheet on how to go cold turkey—hardly effective treatment or counseling."
A 2013 systematic review found support for the use of NEPs to prevent and treat HIV and HCV infection. A 2014 systematic review and meta-analysis found evidence that NEPs were effective in reducing HIV transmission among injection drug users, but that other harm reduction programmes have probably also contributed to the decrease in HIV incidence. NEPs appear to be as effective in low- and middle-income countries as in high-income ones.
Worker training
Lemon and Shah presented a 2013 paper at the International Congress of Psychiatrists that highlighted lack of training for needle exchange workers and also showed the workers performing a range of tasks beyond contractual obligations, for which they had little support or training. It also showed how needle exchange workers were a common first contact for distressed drug users. Perhaps the most concerning finding was that workers were not legally allowed to provide naloxone should it be needed.
Drug use
According to a 2022 study by Vanderbilt University economist Analisa Packham, syringe exchange programs reduce HIV rates by 18.2 percent but lead to greater drug use. Syringe exchange programmes increased drug-related mortality rates by 11.7 percent and opioid-related mortality rates by 21.6 percent.
Arguments for and against
Needle disposal
NSPs Do Not Increase Litter: Broad Arguments
Activist groups claim there is no way to ensure that syringes dispensed by SEPs will be properly disposed of. Peer-reviewed studies suggest that there are fewer improperly disposed syringes in cities with needle exchange programs than in cities without. Other studies of similar design find that syringe exchange program drop boxes were associated with an overall decrease in improper syringe disposal (over 98%) and that improper disposal increases with distance from exchange sites. Ethnographic studies find evidence that criminal drug possession laws further serve to increase improper needle disposal, and that decreasing the severity of possession laws may positively impact proper syringe disposal. This corroborates the CDC's own guidelines on syringe disposal, which claim "Studies have found that syringe litter is more likely in areas without SSPs".
NSPs Do Increase Litter: Broad Arguments
On the other hand, there is data to suggest SEPs do increase improper syringe disposal. Opposition groups contribute their own proof through photographic evidence of increased needle litter. Additionally, opponents argue that programs which do not mandate a 1:1 needle exchange encourage the more convenient improper discarding of needles when the programs are not open or are not accepting needle returns. Many programs also allow unlimited access to needles, which opponents argue increases litter to a much higher degree by increasing the total number of needles in circulation. Portland residents in areas where syringe acquisition is unlimited claim to be "drowning in needles" and picking up upwards of 100 per week. Opposition groups also argue that government action to increase the number of syringe disposal boxes has been slow.
NSPs that strictly adhere to a one-for-one policy and do not furnish starter syringes or needles do not increase the number of syringes in circulation.
The few studies that specifically evaluated the effects of NEPs produced "modest" evidence of no impact on improper needle discards and injection frequency and "weak" evidence on lack of impact on numbers of drug users, high-risk user networks and crime trends.
Some NSPs hand out needles without an expectation of used syringes being returned. One NSP in Portland, Oregon, hands out syringes without question. Neighbors near the NSP routinely find discarded syringes, and the neighborhood organization to which they belong, the University Park neighborhood association, wants the needle handout operation to stop. A local resident visited an NSP in Chico, California, and was handed 100 syringes without question. The City Council in Chico is discussing banning the operation.
A 2003 Australian bi-partisan Federal Parliamentary inquiry published recommendations, registering concern about the lack of accountability of Australia's needle exchanges, and lack of a national program to track needle stick injuries. Community concern about discarded needles and needle stick injury led Australia to allocate $17.5 million in 2003/4 to investigating retractable technology for syringes.
Treatment program enrollment
IDUs risk multiple health problems from non-sterile injecting practices, drug complications and associated lifestyle choices. Unrelated health problems such as diabetes may be neglected because of drug dependence. IDUs are typically reluctant to use conventional health services. Such reluctance and neglect imply poorer health and increased use of emergency services, creating added costs. Harm reduction based health care centres, also known as targeted or low-threshold health care outlets for IDUs, have been established to address this issue.
NSP staff facilitate connections among people who use drugs and medical facilities, thereby exposing them to voluntary physical, psychological and emotional treatment programmes.
Social services for addicts can be organized around needle exchanges, increasing their accessibility.
Cost effectiveness
As of 2011, the CDC estimated that every HIV infection prevented through a needle exchange program saves an estimated US$178,000 or more. Separately, it reported an overall reduction of 30 percent or more in HIV cases among IDUs.
Proponents
Proponents of harm reduction argue that the provision of a needle exchange provides a social benefit in reducing health costs and also provides a safe means to dispose of used syringes. For example, in the United Kingdom, proponents of SEPs assert that, along with other programmes, they have reduced the spread of HIV among intravenous drug users. These supposed benefits have led to an expansion of these programmes in most jurisdictions that have introduced them, increasing geographical coverage and operating hours. Vending machines that automatically dispense injecting equipment have been successfully introduced.
Other promoted benefits of these programmes include providing a first point of contact for formal drug treatment, access to health and counselling service referrals, the provision of up-to-date information about safe injecting practices, access to contraception and sexual health services and providing a means for data collection from users about their behaviour and/or drug use patterns. SEP outlets in some settings offer basic primary health care. These are known as 'targeted primary health care outlets', because they primarily target people who inject drugs, and/or 'low-threshold health care outlets', because they reduce common barriers to health care found at conventional health care outlets. Clients frequently visit SEP outlets for help accessing sterile injecting equipment. These visits are used opportunistically to offer other health care services.
A clinical trial of needle exchange found that needle exchange did not cause an increase in drug injection.
California Environmental Quality Act (CEQA)
Within California, those opposed to syringe exchange programs have frequently invoked the California Environmental Quality Act (CEQA) as a means to bar syringe exchange programs from operating, citing the environmental impact of improper syringe disposal. Most notably, SEP opposition arose within Santa Cruz and Orange County, whose only syringe exchange program, the Orange County Needle Exchange Program (OCNEP), was blocked from operating in October 2019 by an Orange County lawsuit which charged the program with creating hazardous conditions and litter for residents. OCNEP contends that public needle litter still exists after the shutdown of its program.
Legislation in California signed by governor Gavin Newsom in 2021, AB-1344, aimed to block the use of CEQA to challenge SEPs. The provision states that "Needle and syringe exchange services application submissions, authorizations, and operations performed pursuant to this chapter shall be exempt from review under the California Environmental Quality Act, Division 13 (commencing with Section 21000) of the Public Resources Code."
The provision was passed on the basis of curtailing the opioid epidemic. There is no part of the bill which explicitly addresses the environmental concerns of the plaintiffs.
Scope
In a 1993 mortality study among 415 injection drug users in the Philadelphia area, 28 died over four years: 5 from HIV-related causes, 7 from overdose, 5 from homicide, 4 from heart disease, 3 from renal failure, 2 from liver disease, 1 from suicide and 1 from cancer.
Community issues
NSP effectiveness studies usually focused on addict health effects; the United States National District Attorneys Association argues that they neglect effects on the broader community.
NSPs may concentrate drug activity into communities in which they operate. Only a small number of short-term studies considered whether NSPs have such effects. To the extent that this happens, they may negatively affect property values, increase localized crime rates and damage broader perceptions about the host community. In 1987 in the Platzspitz park in Zürich "...authorities chose to allow illegal drug use and sales at the park, in an effort to contain Zürich's growing drug problem. Police were not allowed to enter the park or make arrests. Clean needles were given out to addicts as part of the Zürich Intervention Pilot Project, or ZIPP-AIDS program. However, lack of control over what went on in the park caused a multitude of problems. Drug dealers and users arrived from all over Europe, and crime became rampant as dealers fought for control and addicts (who numbered up to 20,000) stole to support their habit."
In Australia, which is considered a leading proponent of harm reduction, a survey showed that one-third of the public believed that NSPs encouraged drug use, and 20% believed that NSPs dispensed drugs.
Diversion
NPR interviewed the syringe exchange program Prevention Point Philadelphia, in Philadelphia, United States, and some of its clients. Prevention Point allows anyone presenting syringes to exchange them for the same quantity without limitation, and this has led to drug addicts selling clean syringes to other drug addicts to make drug money. Some drug dealers use the needle exchange to obtain large quantities of needles to sell or give to their drug buyers.
Some participants interviewed by The Baltimore Sun in February 2000 revealed that they sell some of the new syringes obtained from the exchange in order to make drug money, and that the exchange did not always stop needle sharing among drug addicts.
See also
Supervised injection site
References
Addiction medicine
Drug culture
Drug paraphernalia
Drug safety
Harm reduction
Medical hygiene
Infection-control measures
Medical waste
Prevention of HIV/AIDS
Public policy
Public services | Needle and syringe programmes | [
"Chemistry",
"Biology"
] | 6,393 | [
"Medical waste",
"Drug safety"
] |
320,351 | https://en.wikipedia.org/wiki/Hypodermic%20needle | A hypodermic needle (from Greek ὑπο- (hypo- = under), and δέρμα (derma = skin)) is a very thin, hollow tube with one sharp tip. It is one of a category of medical tools which enter the skin, called sharps. It is commonly used with a syringe, a hand-operated device with a plunger, to inject substances into the body (e.g., saline solution, solutions containing various drugs or liquid medicines) or extract fluids from the body (e.g., blood). Large-bore hypodermic intervention is especially useful in catastrophic blood loss or treating shock.
A hypodermic needle is used for rapid delivery of liquids, or when the injected substance cannot be ingested, either because it would not be absorbed (as with insulin), or because it would harm the liver. It is also useful to deliver certain medications that cannot be delivered orally due to vomiting. There are many possible routes for an injection, with intramuscular (into a muscle) and intravenous (into a vein) being the most common. A hypodermic syringe can retain liquid and blood for years after its last use, so great care should be taken to use a new syringe every time.
The hypodermic needle also serves an important role in research environments where sterile conditions are required. The hypodermic needle significantly reduces contamination during inoculation of a sterile substrate. The hypodermic needle reduces contamination for two reasons: First, its surface is extremely smooth, which prevents airborne pathogens from becoming trapped between irregularities on the needle's surface, which would subsequently be transferred into the media (e.g. agar) as contaminants; second, the needle's surface is extremely sharp, which significantly reduces the diameter of the hole remaining after puncturing the membrane and consequently prevents microbes larger than this hole from contaminating the substrate.
History
Early use and experimentation
The ancient Greeks and Romans knew injection as a method of medicinal delivery from observations of snakebites and poisoned weapons. There are also references to "anointing" and "inunction" in the Old Testament as well as the works of Homer, but injection as a legitimate medical tool was not truly explored until the 17th century.
Christopher Wren performed the earliest confirmed experiments with crude hypodermic needles, performing intravenous injection into dogs in 1656. These experiments consisted of using animal bladders (as the syringe) and goose quills (as the needle) to administer drugs such as opium intravenously to dogs. Wren and others' main interest was to learn if medicines traditionally administered orally would be effective intravenously. In the 1660s, Johann Daniel Major of Kiel and Johann Sigismund Elsholtz of Berlin were the first to experiment with injections in humans.
19th-century development
The 19th century saw the development of medicines that were effective in small doses, such as opiates and strychnine. This spurred a renewed interest in direct, controlled application of medicine. "Some controversy surrounds the question of priority in hypodermic medication." Irish physician Francis Rynd is generally credited with the first successful injection in 1844, in the Meath Hospital in Dublin, Ireland.
Alexander Wood's main contribution was the all-glass syringe in 1851, which allowed the user to estimate dosage based on the levels of liquid observed through the glass. Wood used hypodermic needles and syringes primarily for the application of localized, subcutaneous injection (localized anesthesia) and therefore was not as interested in precise dosages.
Simultaneous to Wood's work in Edinburgh, Charles Pravaz of Lyon also experimented with sub-dermal injections in sheep using a syringe of his own design. Pravaz designed a hypodermic needle measuring 3 cm (1.18 in) long and 5 mm (0.2 in) in diameter; it was made entirely of silver.
Charles Hunter, a London surgeon, is credited with the coining of the term "hypodermic" to describe subcutaneous injection in 1858. The name originates from two Greek words: hypo, "under", and derma, "skin". Furthermore, Hunter is credited with acknowledging the systemic effects of injection after noticing that a patient's pain was alleviated regardless of the injection's proximity to the pained area. Hunter and Wood were involved in a lengthy dispute over not only the origin of the modern hypodermic needle, but also because of their disagreement as to the medicine's effect once administered.
Modern improvements
Dr. Francis Rynd used the first hollow needle as a hypodermic syringe on Ms. Margaret Cox in Ireland on 3 June 1844. Dr. Wood can be largely credited with the popularization and acceptance of injection as a medical technique, as well as the widespread use and acceptance of the hypodermic needle. The basic technology of the hypodermic needle has stayed largely unchanged since the 19th century, but as the years progressed and medical and chemical knowledge improved, small refinements have been made to increase safety and efficacy, with needles being designed and tailored for very particular uses. Hypodermic needles remain essential to large volume administration or exchange in settings of trauma or dialysis. The trend of needle specification for use began in the 1920s, particularly for the administration of insulin to diabetics.
The onset of World War II spurred the early development of partially disposable syringes for the administration of morphine and penicillin on the battlefield. Development of the fully disposable hypodermic needle was spurred on in the 1950s for several reasons. The Korean War created blood shortages and in response disposable, sterile syringes were developed for collecting blood. The widespread immunization against polio during the period required the development of a fully disposable syringe system.
The 1950s also saw the rise and recognition of cross-contamination from used needles. This led to the development of the first fully disposable plastic syringe by New Zealand pharmacist Colin Murdoch in 1956. This period also marked a shift in interest from needle specifications to general sterility and safety.
The 1980s saw the rise of the HIV epidemic and with it renewed concern over the safety of cross-contamination from used needles. New safety controls were designed on disposable needles to ensure the safety of medical workers in particular. These controls were implemented on the needles themselves, such as retractable needles, but also in the handling of used needles, particularly in the use of hard-surface disposal receptacles found in every medical office today.
By 2008, all-plastic needles were in production and in limited use. One version was made of Vectra (plastic) aromatic liquid crystal polymer tapered from 1.2 mm at the hub to 0.72 mm at the tip (equivalent to 22 gauge metal needle), with an ID/OD ratio of 70%.
Manufacture
Hypodermic needles are normally made from a stainless-steel or niobium tube through a process known as tube drawing, where the tube is drawn through progressively smaller dies to make the needle. The end of the needle is bevelled to create a sharp pointed tip, letting the needle easily penetrate the skin.
Gauge
The main system for measuring the diameter of a hypodermic needle is the Birmingham gauge (also known as the Stubs Iron Wire Gauge); the French gauge is used mainly for catheters. Various needle lengths are available for any given gauge. Needles in common medical use range from 7 gauge (the largest) to 34 (the smallest). 21-gauge needles are most commonly used for drawing blood for testing purposes, and 16- or 17-gauge needles are most commonly used for blood donation, as the larger luminal cross-sectional area results in lower fluid shear, reducing harm to red blood cells while also allowing more blood to be collected in a shorter time.
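To illustrate why donation needles are larger-bore, a minimal sketch follows; the diameters are nominal Birmingham-gauge values from standard tables (approximate, and using outer rather than luminal diameter, so the ratio is only indicative):

```python
# Approximate nominal outer diameters (mm) for a few Birmingham-gauge sizes.
# Values are ballpark figures from standard gauge tables, not a spec sheet.
NOMINAL_OD_MM = {16: 1.65, 17: 1.47, 21: 0.82, 34: 0.19}

def relative_flow(gauge_a: int, gauge_b: int) -> float:
    """Ratio of laminar flow through needle A versus needle B of equal length,
    using the Hagen-Poiseuille scaling (flow proportional to radius**4).
    Outer diameter stands in for lumen diameter, so this is only a rough bound."""
    r_a = NOMINAL_OD_MM[gauge_a] / 2
    r_b = NOMINAL_OD_MM[gauge_b] / 2
    return (r_a / r_b) ** 4

# A 16-gauge donation needle passes on the order of 16x the flow of a
# 21-gauge sampling needle at the same pressure drop.
print(f"16G vs 21G: ~{relative_flow(16, 21):.0f}x the flow")
```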
Although reusable needles remain useful for some scientific applications, disposable needles are far more common in medicine. Disposable needles are embedded in a plastic or aluminium hub that attaches to the syringe barrel by means of a press-fit or twist-on fitting. These are sometimes referred to as "Luer Lock" connections, referring to the trademark Luer-Lok. The male and female luer lock and hub—produced by pharmaceutical equipment manufacturers—are two of the most critical parts of disposable hypodermic needles.
Use by non-specialists
Hypodermic needles are usually used by medical professionals (dentists, phlebotomists, physicians, pharmacists, nurses, paramedics), but they are sometimes used by patients themselves. This is most common with type 1 diabetics, who may require several insulin injections a day. It also occurs with patients who have asthma or other severe allergies. Such patients may need to take desensitization injections or they may need to carry injectable medicines to use for first aid in case of a severe allergic reaction. In the latter case, such patients often carry a syringe loaded with epinephrine (e.g. EpiPen), diphenhydramine (e.g. Benadryl), or dexamethasone. Rapid injection of one of these drugs may stop a severe allergic reaction.
Multiple sclerosis patients may also treat themselves by injection; several MS therapies, including various interferon preparations, are designed to be self-administered by subcutaneous or intramuscular injection.
Transgender people may also inject their own hormone replacement therapy, using either intramuscular injection or subcutaneous injection methods.
Hypodermic needles are also used for erotic piercing.
Phobia
It is estimated that anywhere from 3.5% to 10% of the world's population may have a phobia of needles (trypanophobia), and it is much more common in children, ages 5–17. Topical anesthetics can be used to desensitize the area where the injection will take place to reduce pain and discomfort. For children, various techniques may be effective at reducing distress or pain related to needles. Techniques include distraction, hypnosis, combined cognitive behavioral therapy, and breathing techniques.
References
External links
The Needle Phobia Page
Needle Phobia and Dental Injections
Drug delivery devices
Drug paraphernalia | Hypodermic needle | [
"Chemistry"
] | 2,189 | [
"Pharmacology",
"Drug delivery devices"
] |
320,364 | https://en.wikipedia.org/wiki/Trampling%20%28sexual%20practice%29 | Trampling is a sexual activity that involves being trampled underfoot by another person or persons. Trampling is common enough to support a subgenre of trampling pornography.
Because trampling can be used to produce pain, the trampling fetish for some adherents is closely linked to sadomasochistic fetishism.
A similar fetish involves imagining oneself as tiny under another's feet, or at normal size but trampled by a giant person. This is known as "giant/giantess fetishism" or macrophilia. It is not the same as trampling.
Trampling is usually done barefooted, in socks, nylons, or shoes. The trampler will predominantly walk, jump and stomp on the person's back and chest.
See also
Crush fetish
Foot fetishism
Shoe fetishism
Erotic asphyxiation
Breast fetishism
Buttocks fetishism
References
BDSM terminology
Foot fetishism | Trampling (sexual practice) | [
"Biology"
] | 200 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
320,453 | https://en.wikipedia.org/wiki/Black%20Stone | The Black Stone () is a rock set into the eastern corner of the Kaaba, the ancient building in the center of the Grand Mosque in Mecca, Saudi Arabia. It is revered by Muslims as an Islamic relic which, according to Muslim tradition, dates back to the time of Adam and Eve.
The stone was venerated at the Kaaba in pre-Islamic pagan times. According to Islamic tradition, it was set intact into the Kaaba's wall by the Islamic prophet Muhammad in 605 CE, five years before his first revelation. Since then, it has been broken into fragments and is now cemented into a silver frame in the side of the Kaaba. Its physical appearance is that of a fragmented dark rock, polished smooth by the hands of pilgrims. It has often been described as a meteorite.
Muslim pilgrims circle the Kaaba as a part of the tawaf ritual during the hajj and many try to stop to kiss the Black Stone, emulating the kiss that Islamic tradition records that it received from Muhammad. While the Black Stone is revered, Islamic theologians emphasize that it has no divine significance and that its importance is historical in nature.
Physical description
The Black Stone was originally a single piece of rock but today consists of several pieces that have been cemented together. They are surrounded by a silver frame which is fastened by silver nails to the Kaaba's outer wall. The fragments are themselves made up of smaller pieces which have been combined to form the seven or eight fragments visible today. The Stone's exposed face measures about 20 cm (8 in) by 16 cm (6 in). Its original size is unclear and the recorded dimensions have changed considerably over time, as the pieces have been rearranged in their cement matrix on several occasions. In the 10th century, an observer described the Black Stone as being one cubit (46 cm or 18 in) long. By the early 17th century, it was recorded as measuring 140 by 122 cm (4 ft 7 in by 4 ft 0 in). According to Ali Bey in the 18th century, it was described as 110 cm (3 ft 7 in) high, and Muhammad Ali Pasha reported it as being 76 cm (2 ft 6 in) long by 46 cm (1 ft 6 in) wide.
The Black Stone is attached to the east corner of the Kaaba, known as al-Rukn al-Aswad (the 'Corner of the Black Stone'). The choice of the east corner may have had ritual significance; it faces the rain-bringing east wind (al-qabul) and the direction from which Canopus rises.
The silver frame around the Black Stone and the black kiswah or cloth enveloping the Kaaba were for centuries maintained by the Ottoman Sultans in their role as Custodian of the Two Holy Mosques. The frames wore out over time due to the constant handling by pilgrims and were periodically replaced. Worn-out frames were brought back to Istanbul, where they are still kept as part of the sacred relics in the Topkapı Palace.
Appearance of the Black Stone
The Black Stone was described by European travellers to Arabia in the 19th and early 20th centuries, who visited the Kaaba disguised as pilgrims. Swiss traveller Johann Ludwig Burckhardt visited Mecca in 1814, and provided a detailed description in his 1829 book Travels in Arabia.
Visiting the Kaaba in 1853, Richard Francis Burton also recorded a description of the Stone.
Ritter von Laurin, the Austrian consul-general in Egypt, was able to inspect a fragment of the Stone removed by Muhammad Ali in 1817 and reported that it had a pitch-black exterior and a silver-grey, fine-grained interior in which tiny cubes of a bottle-green material were embedded. There are reportedly a few white or yellow spots on the face of the Stone, and it is officially described as being white with the exception of the face.
History and tradition
The Black Stone was held in reverence well before Islam. It had long been associated with the Kaaba, which was built in the pre-Islamic period and was a site of pilgrimage of Nabataeans who visited the shrine once a year to perform their pilgrimage. The Kaaba held 360 idols of the Meccan gods. The Semitic cultures of the Middle East had a tradition of using unusual stones to mark places of worship, while bowing, worshiping and praying to such sacred objects is also described in the Tanakh as idolatrous and was the subject of prophetic rebuke. The meteorite-origin theory of the Black Stone has seen it likened by some writers to the meteorite which was placed and worshipped in the Greek Temple of Artemis.
The Kaaba has been associated with fertility rites of Arabia. Some New Age writers remark on the apparent similarity of the Black Stone and its frame to the external female genitalia. However, the silver frame was placed on the Black Stone to secure the fragments, after the original stone was broken.
A "red stone" was associated with the deity of the south Arabian city of Ghaiman, and there was a "white stone" in the Kaaba of al-Abalat (near the city of Tabala, south of Mecca). Worship at that time period was often associated with stone reverence, mountains, special rock formations, or distinctive trees. The Kaaba marked the location where the sacred world intersected with the profane, and the embedded Black Stone was a further symbol of this as an object as a link between heaven and earth. Aziz Al-Azmeh claims that the divine name ar-Rahman (one of the names of God in Islam and cognate to one of the Jewish names of God Ha'Rachaman, both meaning "the Merciful One" or "the Gracious One") was used for astral gods in Mecca and might have been associated with the Black Stone. Muhammad is said to have called the stone "the right hand of al-Rahman".
Muhammad
According to Islamic belief, Muhammad is credited with setting the Black Stone in the current place in the wall of the Kaaba. A story found in Ibn Ishaq's Sirah Rasul Allah tells how the clans of Mecca renovated the Kaaba following a major fire which had partly destroyed the structure. The Black Stone had been temporarily removed to facilitate the rebuilding work. The clans could not agree on which one of them should have the honour of setting the Black Stone back in its place.
They decided to wait for the next man to come through the gate and ask him to make the decision. That person was 35-year-old Muhammad, five years before his prophethood. He asked the elders of the clans to bring him a cloth and put the Black Stone in its centre. Each of the clan leaders held the corners of the cloth and carried the Black Stone to the right spot. Then, Muhammad set the stone in place, satisfying the honour of all of the clans. After his Conquest of Mecca in 630, Muhammad is said to have ridden round the Kaaba seven times on his camel, touching the Black Stone with his stick in a gesture of reverence.
Desecrations
The Stone has suffered repeated desecrations and damage over the course of time. It is said to have been struck and smashed to pieces by a stone fired from a catapult during the Umayyad Caliphate's siege of Mecca in 683. The fragments were rejoined by Abd Allah ibn al-Zubayr using a silver ligament. In January 930, it was stolen by the Qarmatians, who carried the Black Stone away to their base in Hajar (modern Eastern Arabia). According to Ottoman historian Qutb al-Din, writing in 1857, the Qarmatian leader Abu Tahir al-Jannabi set the Black Stone up in his own mosque, the Masjid al-Dirar, with the intention of redirecting the hajj away from Mecca. This failed, as pilgrims continued to venerate the spot where the Black Stone had been.
According to the historian al-Juwayni, the Stone was returned twenty-three years later, in 952. The Qarmatians held the Black Stone for ransom, and forced the Abbasids to pay a huge sum for its return. It was wrapped in a sack and thrown into the Friday Mosque of Kufa, accompanied by a note saying "By command we took it, and by command we have brought it back." Its abduction and removal caused further damage, breaking the stone into seven pieces. Its abductor, Abu Tahir, is said to have met a terrible fate; according to Qutb al-Din, "the filthy Abu Tahir was afflicted with a gangrenous sore, his flesh was eaten away by worms, and he died a most terrible death." To protect the shattered stone, the custodians of the Kaaba commissioned a pair of Meccan goldsmiths to build a silver frame to surround it, and it has been enclosed in a similar frame ever since.
In the 11th century, a man allegedly sent by the Fatimid caliph al-Hakim bi-Amr Allah attempted to smash the Black Stone but was killed on the spot, having caused only slight damage. In 1674, according to Johann Ludwig Burckhardt, someone allegedly smeared the Black Stone with excrement so that "every one who kissed it retired with a sullied beard". On the accusation of a boy, a Persian of unknown faith was suspected of the sacrilege, and the Sunnis of Mecca are said to "have turned the circumstance to their own advantage" by assaulting and beating Persians and forbidding them from the Hajj until the ban was overturned by the order of Muhammad Ali. The explorer Sir Richard Francis Burton pointed out, regarding the alleged desecration, that "it is scarcely necessary to say that a Shi'a, as well as a Sunni, would look upon such an action with lively horror", and that the real culprit was "some Jew or Christian, who risked his life to gratify a furious bigotry".
Ritual role
The Black Stone plays a central role in the ritual of istilam, when pilgrims kiss the Black Stone, touch it with their hands or raise their hands towards it while repeating the takbir, "God is Greatest". They perform this in the course of walking seven times around the Kaaba in a counterclockwise direction (tawaf), emulating the actions of Muhammad. At the end of each circuit, they perform istilam and may approach the Black Stone to kiss it at the end of tawaf. In modern times, large crowds make it practically impossible for everyone to kiss the stone, so it is currently acceptable to point in the direction of the Stone on each of their seven circuits around the Kaaba. Some even say that the Stone is best considered simply as a marker, useful in keeping count of the ritual circumambulations that one has performed.
Writing in Dawn in Madinah: A Pilgrim's Passage, Muzaffar Iqbal described his experience of venerating the Black Stone during a pilgrimage to Mecca.
The Black Stone and the Kaaba's opposite corner, the Rukn al-Yamani, are both often perfumed by the mosque's custodians. This can cause problems for pilgrims in the state of ihram ('consecration'), who are forbidden from using scented products and will require a fidyah (donation) as a penance if they touch either.
Meaning and symbolism
One tradition holds that the Black Stone was placed by Adam in the original Kaaba.
Muslims believe that the stone was originally pure and dazzling white, but has since turned black because of the sins of the people who touch it.
According to a prophetic tradition, "Touching them both (the Black Stone and ) is an expiation for sins." Adam's altar and the stone were said to have been lost during Noah's Flood and forgotten. Ibrahim (Abraham) was said to have later found the Black Stone at the original site of Adam's altar when the angel Jibrail revealed it to him. Ibrahim ordered his son Ismael – who in Muslim belief is an ancestor of Muhammad – to build a new temple, the Kaaba, into which the stone was to be embedded.
Another tradition says that the Black Stone was originally an angel that had been placed by God in the Garden of Eden to guard Adam. The angel was absent when Adam ate the forbidden fruit and was punished by being turned into a jewel – the Black Stone. God granted it the power of speech and placed it at the top of Abu Qubays, a mountain in the historic region of Khurasan, before moving the mountain to Mecca. When Ibrahim took the Black Stone from Abu Qubays to build the Kaaba, the mountain asked Ibrahim to intercede with God so that it would not be returned to Khurasan and would stay in Mecca.
Another tradition holds that it was brought down to Earth by "an angel from heaven".
According to some scholars, the Black Stone was the same stone that Islamic tradition describes as greeting Muhammad before his prophethood. This led to a debate about whether the Black Stone's greeting comprised actual speech or merely a sound, and following that, whether the stone was a living creature or an inanimate object. Whichever was the case, the stone was held to be a symbol of prophethood.
A hadith records that, when the second Caliph Umar ibn al-Khattab (580–644) came to kiss the stone, he said in front of all assembled: "No doubt, I know that you are a stone and can neither harm anyone nor benefit anyone. Had I not seen Allah's Messenger [Muhammad] kissing you, I would not have kissed you." In the hadith collection Kanz al-Ummal, it is recorded that Ali responded to Umar, saying, "This stone (Hajar Aswad) can indeed benefit and harm.[...] Allah says in Quran that he created human beings from the progeny of Adam and made them witness over themselves and asked them, 'Am I not your creator?' Upon this, all of them confirmed it. Thus Allah wrote this confirmation. And this stone has a pair of eyes, ears and a tongue and it opened its mouth upon the order of Allah, who put that confirmation in it and ordered to witness it to all those worshippers who come for Hajj."
Muhammad Labib al-Batanuni, writing in 1911, commented that the pre-Islamic practice of venerating stones (including the Black Stone) arose not because such stones are "sacred for their own sake, but because of their relation to something holy and respected". The Indian Islamic scholar Muhammad Hamidullah also summed up the meaning of the Black Stone.
In recent years several literalist views of the Black Stone have emerged. A small minority accepts as literally true a hadith, usually taken as allegorical, which asserts that "the Stone will appear on the Day of Judgement () with eyes to see and a tongue to speak, and give evidence in favour of all who kissed it in true devotion, but speak out against whoever indulged in gossip or profane conversations during his circumambulation of the Kaaba".
Scientific origins
The nature of the Black Stone has been much debated. It has been described variously as basalt stone, an agate, a piece of natural glass or—most popularly—a stony meteorite. , the curator of the Austro-Hungarian imperial collection of minerals, published the first comprehensive analysis of the Black Stone in 1857, in which he favoured a meteoritic origin for the stone. Robert Dietz and John McHone proposed in 1974 that the Black Stone was actually an agate, judging from its physical attributes and a report by an Arab geologist that the stone contained clearly discernible diffusion banding characteristic of agates.
A significant clue to its nature is provided by an account of the stone's recovery in 951 CE, after it had been stolen 21 years earlier. According to a chronicler, the stone was identified by its ability to float in water, which would rule out the Black Stone being an agate, a basalt lava, or a stony meteorite, though it would be compatible with it being glass or pumice.
Elsebeth Thomsen of the University of Copenhagen proposed a different hypothesis in 1980. She suggested that the Black Stone may be a glass fragment, or impactite, from the impact of a fragmented meteorite that fell 6,000 years ago at Wabar, a site in the Rub' al Khali desert east of Mecca. A 2004 scientific analysis of the Wabar site suggests that the impact event happened much more recently than first thought and might have occurred within the last 200–300 years.
The meteoritic hypothesis is viewed by geologists as doubtful. The British Natural History Museum suggests that it may be a pseudometeorite; in other words, a terrestrial rock mistakenly attributed to a meteoritic origin.
The Black Stone has never been analysed with modern scientific techniques and its origins remain the subject of speculation.
See also
Qibla, direction in which Muslims pray
List of individual rocks
References
Citations
Bibliography
Grunebaum, G. E. von (1970). Classical Islam: A History 600 A.D.–1258 A.D.. Aldine Publishing Company.
Sheikh Safi-ur-Rahman al-Mubarkpuri (2002). Ar-Raheeq Al-Makhtum (The Sealed Nectar): Biography of the Prophet. Dar-us-Salam Publications.
Elliott, Jeri (1992). Your Door to Arabia.
Mohamed, Mamdouh N. (1996). Hajj to Umrah: From A to Z. Amana Publications.
Time-Life Books (1988). Time Frame AD 600–800: The March of Islam.
Islamic pilgrimages
Stones
Sacred rocks
Hajj
Kaaba
Meteorites in culture
Hajj terminology
Temple of Artemis | Black Stone | [
"Physics"
] | 3,631 | [
"Stones",
"Physical objects",
"Matter"
] |
320,468 | https://en.wikipedia.org/wiki/Affine%20variety | In algebraic geometry, an affine algebraic set is the set of the common zeros over an algebraically closed field of some family of polynomials in the polynomial ring k[x_1, ..., x_n]. An affine variety, or affine algebraic variety, is an affine algebraic set such that the ideal generated by the defining polynomials is prime.
Some texts use the term variety for any algebraic set, and irreducible variety for an algebraic set whose defining ideal is prime (an affine variety in the above sense).
In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field k in which the coefficients are considered from the algebraically closed field K (containing k) over which the common zeros are considered (that is, the points of the affine algebraic set lie in K^n). In this case, the variety is said to be defined over k, and the points of the variety that belong to k^n are said to be k-rational or rational over k. In the common case where k is the field of real numbers, a k-rational point is called a real point. When the field is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by x^n + y^n = 1 has no rational points for any integer n greater than two.
Introduction
An affine algebraic set is the set of solutions in an algebraically closed field k of a system of polynomial equations with coefficients in k. More precisely, if f_1, ..., f_m are polynomials with coefficients in k, they define an affine algebraic set V(f_1, ..., f_m) = {x in k^n : f_1(x) = ... = f_m(x) = 0}.
An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible.
If X is an affine algebraic set, and I = I(X) is the ideal of all polynomials that are zero on X, then the quotient ring R = k[x_1, ..., x_n]/I is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring R are also called the regular functions or the polynomial functions on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in other words (see #Structure sheaf), it is the space of global sections of the structure sheaf of X.
The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance relies on the large number of its equivalent definitions (see Dimension of an algebraic variety).
Examples
The complement of a hypersurface in an affine variety V (that is, the complement of the zero set of some polynomial f) is affine. Its defining equations are obtained by saturating, with respect to f, the defining ideal of V. The coordinate ring is thus the localization k[V][f^{-1}].
In particular, the affine line with the origin removed, A^1 − {0}, is affine.
On the other hand, the affine plane with the origin removed, A^2 − {0}, is not an affine variety; cf. Hartogs' extension theorem.
The subvarieties of codimension one in the affine space are exactly the hypersurfaces, that is the varieties defined by a single polynomial.
The normalization of an irreducible affine variety is affine; the coordinate ring of the normalization is the integral closure of the coordinate ring of the variety. (Similarly, the normalization of a projective variety is a projective variety.)
Rational points
For an affine variety V over an algebraically closed field K, and a subfield k of K, a k-rational point of V is a point of V that lies in k^n; that is, a point of V whose coordinates are elements of k. The collection of k-rational points of an affine variety V is often denoted V(k). Often, if the base field is the complex numbers C, points that are R-rational (where R is the real numbers) are called real points of the variety, and Q-rational points (Q the rational numbers) are often simply called rational points.
For instance, (1, 0) is a Q-rational and an R-rational point of the variety defined by x^2 + y^2 − 1 = 0, as it lies on the variety and all its coordinates are integers. The point (√2/2, √2/2) is a real point of this variety that is not Q-rational, and (2, i√3) is a point of it that is not R-rational. This variety is called a circle, because the set of its R-rational points is the unit circle. It has infinitely many Q-rational points, namely the points
((1 − t^2)/(1 + t^2), 2t/(1 + t^2)),
where t is a rational number.
The circle of equation x^2 + y^2 = 3 is an example of an algebraic curve of degree two that has no Q-rational point. This can be deduced from the fact that, modulo 4, the sum of two squares cannot be 3.
It can be proved that an algebraic curve of degree two with a -rational point has infinitely many other -rational points; each such point is the second intersection point of the curve and a line with a rational slope passing through the rational point.
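As a sketch of this secant-line construction for the circle x^2 + y^2 = 1 with base point (−1, 0) (the choice of base point here is illustrative):

```latex
% Lines of rational slope t through the rational point (-1, 0):
%   y = t(x + 1).
% Substituting into x^2 + y^2 = 1:
\begin{align*}
x^2 + t^2 (x + 1)^2 &= 1 \\
(1 + t^2)\, x^2 + 2 t^2 x + (t^2 - 1) &= 0 .
\end{align*}
% One root is x = -1 (the base point). By Vieta's formulas, the product
% of the two roots is (t^2 - 1)/(1 + t^2), so the second intersection
% point is
\[
x = \frac{1 - t^2}{1 + t^2}, \qquad y = t(x + 1) = \frac{2t}{1 + t^2},
\]
% a Q-rational point of the circle whenever the slope t is rational.
```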
The complex variety defined by x^2 + y^2 = −1 has no R-rational points, but has many complex points.
If V is an affine variety defined over the complex numbers C, the R-rational points of V can be drawn on a piece of paper or by graphing software.
Singular points and tangent space
Let V be an affine variety defined by the polynomials f_1, ..., f_r, and let a = (a_1, ..., a_n) be a point of V.
The Jacobian matrix J_V(a) of V at a is the matrix of the partial derivatives ∂f_i/∂x_j evaluated at a.
The point a is regular if the rank of J_V(a) equals the codimension of V, and singular otherwise.
If a is regular, the tangent space to V at a is the affine subspace of k^n defined by the linear equations sum over j of (∂f_i/∂x_j)(a) (x_j − a_j) = 0, for i = 1, ..., r.
If the point is singular, the affine subspace defined by these equations is also called a tangent space by some authors, while other authors say that there is no tangent space at a singular point.
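A small worked instance of these definitions (the cuspidal cubic is a stand-in example, not one taken from the original text):

```latex
% V = V(f) in the affine plane, with f = y^2 - x^3 (codimension 1).
\[
J_V(x, y) = \begin{pmatrix} \partial f/\partial x & \partial f/\partial y \end{pmatrix}
          = \begin{pmatrix} -3x^2 & 2y \end{pmatrix} .
\]
% At a = (0, 0): the rank of J_V(a) is 0 < 1 = codim V, so the origin
% is a singular point of V (the cusp).
% At a = (1, 1): the rank is 1 = codim V, so the point is regular, and
% the tangent line there is
\[
-3(x - 1) + 2(y - 1) = 0 .
\]
```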
A more intrinsic definition, which does not use coordinates is given by Zariski tangent space.
The Zariski topology
The affine algebraic sets of k^n form the closed sets of a topology on k^n, called the Zariski topology. This follows from the facts that V(I) ∪ V(J) = V(IJ) and V(I) ∩ V(J) = V(I + J) (in fact, an arbitrary intersection of affine algebraic sets is an affine algebraic set, being the zero set of the sum of the corresponding ideals).
The Zariski topology can also be described by way of basic open sets: the Zariski-open sets are unions of sets of the form U_f = {x in k^n : f(x) ≠ 0} for polynomials f. These basic open sets are the complements in k^n of the closed sets V(f), the zero loci of a single polynomial. If k is Noetherian (for instance, if k is a field or a principal ideal domain), then the polynomial ring k[x_1, ..., x_n] is Noetherian by the Hilbert basis theorem, so every ideal of it is finitely generated and every open set is a finite union of basic open sets.
If V is an affine subvariety of kn the Zariski topology on V is simply the subspace topology inherited from the Zariski topology on kn.
Geometry–algebra correspondence
The geometric structure of an affine variety is linked in a deep way to the algebraic structure of its coordinate ring. Let I and J be ideals of k[V], the coordinate ring of an affine variety V. Let I(V) be the set of all polynomials in k[x_1, ..., x_n] that vanish on V, and let √I denote the radical of the ideal I, the set of polynomials f for which some power of f is in I. The reason that the base field is required to be algebraically closed is that affine varieties automatically satisfy Hilbert's nullstellensatz: for an ideal J in k[x_1, ..., x_n], where k is an algebraically closed field, I(V(J)) = √J.
Radical ideals (ideals that are their own radical) of k[V] correspond to algebraic subsets of V. Indeed, for radical ideals I and J, I ⊆ J if and only if V(J) ⊆ V(I). Hence V(I) = V(J) if and only if I = J. Furthermore, the function taking an affine algebraic set W and returning I(W), the set of all functions that also vanish on all points of W, is the inverse of the function assigning an algebraic set to a radical ideal, by the nullstellensatz. Hence the correspondence between affine algebraic sets and radical ideals is a bijection. The coordinate ring of an affine algebraic set is reduced (nilpotent-free), as an ideal I in a ring R is radical if and only if the quotient ring R/I is reduced.
Prime ideals of the coordinate ring correspond to affine subvarieties. An affine algebraic set V(I) can be written as the union of two other algebraic sets if and only if I = JK for proper ideals J and K not equal to I (in which case V(I) = V(J) ∪ V(K)). This is the case if and only if I is not prime. Affine subvarieties are precisely those whose coordinate ring is an integral domain. This is because an ideal is prime if and only if the quotient of the ring by the ideal is an integral domain.
Maximal ideals of k[V] correspond to points of V. If I and J are radical ideals, then V(I) ⊆ V(J) if and only if J ⊆ I. As maximal ideals are radical, maximal ideals correspond to minimal algebraic sets (those that contain no proper algebraic subsets), which are points in V. If V is an affine variety with coordinate ring R, this correspondence becomes explicit through the map (a_1, ..., a_n) ↦ (x̄_1 − a_1, ..., x̄_n − a_n), where x̄_i denotes the image in the quotient algebra R of the polynomial x_i. An algebraic subset is a point if and only if the coordinate ring of the subset is a field, as the quotient of a ring by a maximal ideal is a field.
The following table summarises this correspondence, for algebraic subsets of an affine variety and ideals of the corresponding coordinate ring:

| Type of algebraic set | Type of ideal | Type of coordinate ring |
|---|---|---|
| affine algebraic subset | radical ideal | reduced ring |
| affine subvariety | prime ideal | integral domain |
| point | maximal ideal | field |
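A minimal worked instance of this dictionary, using the parabola as a stand-in example:

```latex
% X = V(y - x^2) in the affine plane, with coordinate ring
\[
k[X] = k[x, y]/(y - x^2) \cong k[x],
\]
% an integral domain, so the defining ideal is prime and X is a
% subvariety. The point (a, a^2) of X corresponds to the maximal ideal
\[
\mathfrak{m}_a = (\, \overline{x} - a, \; \overline{y} - a^2 \,) \subset k[X],
\]
% and k[X]/\mathfrak{m}_a \cong k is a field, as the table predicts.
```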
Products of affine varieties
A product of affine varieties can be defined using the isomorphism A^(n+m) = A^n × A^m, then embedding the product in this new affine space. Let A^n and A^m have coordinate rings k[x_1, ..., x_n] and k[y_1, ..., y_m] respectively, so that their product A^(n+m) has coordinate ring k[x_1, ..., x_n, y_1, ..., y_m]. Let V = V(f_1, ..., f_N) be an algebraic subset of A^n and W = V(g_1, ..., g_M) an algebraic subset of A^m. Then each f_i is a polynomial in k[x_1, ..., x_n], and each g_j is in k[y_1, ..., y_m]. The product of V and W is defined as the algebraic set V × W = V(f_1, ..., f_N, g_1, ..., g_M) in A^(n+m). The product is irreducible if each V, W is irreducible.
The Zariski topology on A^(n+m) is not the topological product of the Zariski topologies on the two spaces. Indeed, the product topology is generated by products of the basic open sets U_f and U_g. Hence, polynomials that are in k[x_1, ..., x_n, y_1, ..., y_m] but cannot be obtained as a product of a polynomial in k[x_1, ..., x_n] with a polynomial in k[y_1, ..., y_m] will define algebraic sets that are closed in the Zariski topology on A^(n+m) but not in the product topology; an example is sketched below.
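A standard illustration of this failure, sketched here:

```latex
% In A^2 = A^1 x A^1, the diagonal
\[
\Delta = V(x - y) \subset \mathbb{A}^2
\]
% is Zariski-closed. It is not closed in the product topology: proper
% Zariski-closed subsets of A^1 are finite, so product-closed sets are
% finite unions of points, horizontal lines, vertical lines, and the
% whole plane; the diagonal is none of these.
```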
Morphisms of affine varieties
A morphism, or regular map, of affine varieties is a function between affine varieties that is polynomial in each coordinate: more precisely, for affine varieties V ⊆ k^n and W ⊆ k^m, a morphism from V to W is a map φ : V → W of the form φ = (f_1, ..., f_m), where each f_i is a polynomial in x_1, ..., x_n. These are the morphisms in the category of affine varieties.
There is a one-to-one correspondence between morphisms of affine varieties over an algebraically closed field k and homomorphisms of coordinate rings of affine varieties over k going in the opposite direction. Because of this, along with the fact that there is a one-to-one correspondence between affine varieties over k and their coordinate rings, the category of affine varieties over k is dual to the category of coordinate rings of affine varieties over k. The category of coordinate rings of affine varieties over k is precisely the category of finitely-generated, nilpotent-free algebras over k.
More precisely, for each morphism of affine varieties, there is a homomorphism between the coordinate rings (going in the opposite direction), and for each such homomorphism, there is a morphism of the varieties associated to the coordinate rings. This can be shown explicitly: let V ⊆ k^n and W ⊆ k^m be affine varieties with coordinate rings k[V] and k[W] respectively. Indeed, a homomorphism between polynomial rings factors uniquely through the quotient ring k[V], and a homomorphism out of k[W] is determined uniquely by the images of y_1, ..., y_m. Hence, each homomorphism from k[W] to k[V] corresponds uniquely to a choice of image for each y_i. Then given any morphism φ = (f_1, ..., f_m) from V to W, a homomorphism φ* : k[W] → k[V] can be constructed that sends y_i to the equivalence class of f_i in k[V].
Similarly, for each homomorphism of the coordinate rings, a morphism of the affine varieties can be constructed in the opposite direction. Mirroring the paragraph above, a homomorphism h : k[W] → k[V] sends each y_i to a polynomial f_i in k[V]. This corresponds to the morphism of varieties φ : V → W defined by φ(x) = (f_1(x), ..., f_m(x)).
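A concrete illustration of this correspondence (a standard example, chosen here for definiteness): the parametrization of the cuspidal cubic.

```latex
% The map phi : A^1 -> X = V(y^2 - x^3), phi(t) = (t^2, t^3), is a
% morphism: (t^3)^2 - (t^2)^3 = 0 for all t, so the image lies in X.
% The corresponding k-algebra homomorphism goes the other way:
\[
\varphi^{\#} : k[x, y]/(y^2 - x^3) \longrightarrow k[t], \qquad
\overline{x} \mapsto t^2, \quad \overline{y} \mapsto t^3 ,
\]
% well defined because y^2 - x^3 maps to t^6 - t^6 = 0.
```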
Structure sheaf
Equipped with the structure sheaf described below, an affine variety is a locally ringed space.
Given an affine variety X with coordinate ring A, the sheaf O_X of k-algebras is defined by letting O_X(U) be the ring of regular functions on U.
Let D(f) = { x | f(x) ≠ 0 } for each f in A. They form a base for the topology of X, and so O_X is determined by its values on the open sets D(f). (See also: sheaf of modules#Sheaf associated to a module.)
The key fact, which relies on the Hilbert nullstellensatz in an essential way, is the following: O_X(D(f)) = A[f^{-1}], the localization of A at the powers of f.
Proof: The inclusion ⊃ is clear. For the opposite, let g be in the left-hand side, and let J be the set of all h in A such that hg lies in A; it is an ideal. If x is in D(f), then, since g is regular near x, there is some open affine neighborhood D(h) of x such that g lies in A[h^{-1}]; that is, h^m g is in A for some m, and thus x is not in V(J). In other words, V(J) is contained in the zero set of f, and thus the Hilbert nullstellensatz implies f is in the radical of J; i.e., some power f^n satisfies f^n g ∈ A, so g lies in A[f^{-1}].
The claim, first of all, implies that X is a "locally ringed" space, since
the stalk O_{X,x} is the direct limit of the rings A[f^{-1}] over the f with f(x) ≠ 0, which is the local ring A_{m_x}, where m_x is the maximal ideal of functions vanishing at x. Secondly, the claim implies that O_X is a sheaf; indeed, it says if a function is regular (pointwise) on D(f), then it must be in the coordinate ring of D(f); that is, "regular-ness" can be patched together.
Hence, is a locally ringed space.
Serre's theorem on affineness
A theorem of Serre gives a cohomological characterization of an affine variety; it says an algebraic variety X is affine if and only if H^i(X, F) = 0 for any i > 0 and any quasi-coherent sheaf F on X. (cf. Cartan's theorem B.) This makes the cohomological study of an affine variety trivial, in sharp contrast to the projective case, in which cohomology groups of line bundles are of central interest.
Affine algebraic groups
An affine variety over an algebraically closed field is called an affine algebraic group if it has:
A multiplication μ : G × G → G, which is a regular morphism that follows the associativity axiom—that is, such that μ(μ(f, g), h) = μ(f, μ(g, h)) for all points f, g and h in G;
An identity element e such that μ(e, g) = μ(g, e) = g for every g in G;
An inverse morphism, a regular bijection ι : G → G such that μ(ι(g), g) = μ(g, ι(g)) = e for every g in G.
Together, these define a group structure on the variety. The above morphisms are often written using ordinary group notation: μ(f, g) can be written as f + g, f⋅g, or fg; the inverse ι(g) can be written as −g or g^(−1). Using the multiplicative notation, the associativity, identity and inverse laws can be rewritten as: f(gh) = (fg)h, ge = eg = g and gg^(−1) = g^(−1)g = e.
The most prominent example of an affine algebraic group is GL_n(k), the general linear group of degree n. This is the group of linear transformations of the vector space k^n; if a basis of k^n is fixed, this is equivalent to the group of invertible n × n matrices with entries in k. It can be shown that any affine algebraic group is isomorphic to a subgroup of some GL_n(k). For this reason, affine algebraic groups are often called linear algebraic groups.
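One way to see that GL_n is itself an affine variety, sketched via the standard construction:

```latex
% Realize GL_n as a closed subvariety of affine (n^2 + 1)-space, with
% coordinates x_{ij} (the matrix entries) and one extra coordinate y:
\[
\mathrm{GL}_n \;\cong\; V\bigl(\det(x_{ij}) \cdot y - 1\bigr)
  \;\subset\; \mathbb{A}^{\,n^2 + 1} .
\]
% The equation forces y = det(x_{ij})^{-1}, hence invertibility; the
% coordinate ring is the localization
\[
k[x_{11}, \ldots, x_{nn}, y]/(\det(x_{ij})\, y - 1)
  \;\cong\; k[x_{11}, \ldots, x_{nn}][\det{}^{-1}] .
\]
```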
Affine algebraic groups play an important role in the classification of finite simple groups, as the groups of Lie type are all sets of F_q-rational points of an affine algebraic group, where F_q is a finite field.
Generalizations
If an author requires the base field of an affine variety to be algebraically closed (as this article does), then irreducible affine algebraic sets over non-algebraically closed fields are a generalization of affine varieties. This generalization notably includes affine varieties over the real numbers.
An affine variety plays a role of a local chart for algebraic varieties; that is to say, general algebraic varieties such as projective varieties are obtained by gluing affine varieties. Linear structures that are attached to varieties are also (trivially) affine varieties; e.g., tangent spaces, fibers of algebraic vector bundles.
An affine variety is a special case of an affine scheme, a locally-ringed space that is isomorphic to the spectrum of a commutative ring (up to an equivalence of categories). Each affine variety has an affine scheme associated to it: if is an affine variety in with coordinate ring then the scheme corresponding to is the set of prime ideals of The affine scheme has "classical points", which correspond with points of the variety (and hence maximal ideals of the coordinate ring of the variety), and also a point for each closed subvariety of the variety (these points correspond to prime, non-maximal ideals of the coordinate ring). This creates a more well-defined notion of the "generic point" of an affine variety, by assigning to each closed subvariety an open point that is dense in the subvariety. More generally, an affine scheme is an affine variety if it is reduced, irreducible, and of finite type over an algebraically closed field
Notes
See also
Algebraic variety
Affine scheme
Representations on coordinate rings
References
The original article was written as a partial human translation of the corresponding French article.
Milne, James S. Lectures on Étale cohomology
Algebraic geometry | Affine variety | [
"Mathematics"
] | 3,581 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
320,469 | https://en.wikipedia.org/wiki/Projective%20variety | In algebraic geometry, a projective variety is an algebraic variety that is a closed subvariety of a projective space. That is, it is the zero-locus in projective n-space of some finite family of homogeneous polynomials that generate a prime ideal, the defining ideal of the variety.
A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial.
If X is a projective variety defined by a homogeneous prime ideal I, then the quotient ring k[x_0, x_1, ..., x_n]/I
is called the homogeneous coordinate ring of X. Basic invariants of X such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring.
Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on X.
A salient feature of projective varieties are the finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties. Hilbert schemes parametrize closed subschemes of with prescribed Hilbert polynomial. Hilbert schemes, of which Grassmannians are special cases, are also projective schemes in their own right. Geometric invariant theory offers another approach. The classical approaches include the Teichmüller space and Chow varieties.
A particularly rich theory, reaching back to the classics, is available for complex projective varieties, i.e., when the polynomials defining X have complex coefficients. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. For example, the theory of holomorphic vector bundles (more generally coherent analytic sheaves) on X coincide with that of algebraic vector bundles. Chow's theorem says that a subset of projective space is the zero-locus of a family of holomorphic functions if and only if it is the zero-locus of homogeneous polynomials. The combination of analytic and algebraic methods for complex projective varieties lead to areas such as Hodge theory.
Variety and scheme structure
Variety structure
Let k be an algebraically closed field. The basis of the definition of projective varieties is projective space , which can be defined in different, but equivalent ways:
as the set of all lines through the origin in k^(n+1) (i.e., all one-dimensional vector subspaces of k^(n+1));
as the set of tuples (x_0, ..., x_n) in k^(n+1), with not all x_i zero, modulo the equivalence relation (x_0, ..., x_n) ~ (λx_0, ..., λx_n) for any nonzero λ in k. The equivalence class of such a tuple is denoted by [x_0 : ... : x_n]. This equivalence class is the general point of projective space. The numbers x_0, ..., x_n are referred to as the homogeneous coordinates of the point.
A projective variety is, by definition, a closed subvariety of projective n-space, where closed refers to the Zariski topology. In general, closed subsets of the Zariski topology are defined to be the common zero-locus of a finite collection of homogeneous polynomial functions. Given a polynomial f, the condition
f([x_0 : ... : x_n]) = 0
does not make sense for arbitrary polynomials, but only if f is homogeneous, i.e., if the degrees of all the monomials (whose sum is f) are the same, say d. In this case, f(λx_0, ..., λx_n) = λ^d f(x_0, ..., x_n), so the vanishing of
f(x_0, ..., x_n)
is independent of the choice of representative of the equivalence class.
Therefore, projective varieties arise from homogeneous prime ideals I of k[x_0, ..., x_n], by setting X to be the set of points of projective space at which all the polynomials of I vanish.
Moreover, the projective variety X is an algebraic variety, meaning that it is covered by open affine subvarieties and satisfies the separation axiom. Thus, the local study of X (e.g., singularity) reduces to that of an affine variety. The explicit structure is as follows. The projective space is covered by the standard open affine charts
U_i = { [x_0 : ... : x_n] : x_i ≠ 0 },
which themselves are affine n-spaces with the coordinate ring
k[y_1^(i), ..., y_n^(i)], where y_j^(i) = x_j / x_i.
Say i = 0 for the notational simplicity and drop the superscript (0). Then X ∩ U_0 is a closed subvariety of U_0 defined by the ideal of k[y_1, ..., y_n] generated by the dehomogenizations f(1, y_1, ..., y_n)
for all f in I. Thus, X is an algebraic variety covered by (n+1) open affine charts X ∩ U_i.
Note that X is the closure of the affine variety X ∩ U_0 in projective space. Conversely, starting from some closed (affine) variety V in U_0, the closure of V in projective space is the projective variety called the projective completion of V. If I defines V, then the defining ideal of this closure is the homogeneous ideal of k[x_0, ..., x_n] generated by the homogenizations x_0^(deg f) f(x_1/x_0, ..., x_n/x_0)
for all f in I.
For example, if V is an affine curve in the affine plane, then its projective completion in the projective plane is obtained by homogenizing its defining equation.
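A sketch with a stand-in curve (the specific curve in the original is not recoverable, so the parabola is used here):

```latex
% Affine curve V : y = x^2 in the affine plane. Homogenizing with a
% new variable z gives its projective completion in P^2:
\[
y z = x^2 \qquad \text{in coordinates } [x : y : z].
\]
% Setting z = 1 recovers V; setting z = 0 forces x = 0, so the
% completion adds the single point [0 : 1 : 0] at infinity.
```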
Projective schemes
For various applications, it is necessary to consider more general algebro-geometric objects than projective varieties, namely projective schemes. The first step towards projective schemes is to endow projective space with a scheme structure, in a way refining the above description of projective space as an algebraic variety; i.e., projective n-space is a scheme that is a union of (n + 1) copies of the affine n-space k^n. More generally, projective space over a ring A is the union of the affine schemes
Spec A[x_0/x_i, ..., x_n/x_i], for 0 ≤ i ≤ n,
in such a way that the variables match up as expected. The set of closed points of this scheme, for algebraically closed fields k, is then the projective space in the usual sense.
An equivalent but streamlined construction is given by the Proj construction, which is an analog of the spectrum of a ring, denoted "Spec", which defines an affine scheme. For example, if A is a ring, then projective n-space over A is Proj A[x_0, ..., x_n].
If R is a quotient of k[x_0, ..., x_n] by a homogeneous ideal I, then the canonical surjection induces the closed immersion Proj R → Proj k[x_0, ..., x_n].
Compared to projective varieties, the condition that the ideal I be a prime ideal was dropped. This leads to a much more flexible notion: on the one hand the topological space may have multiple irreducible components. Moreover, there may be nilpotent functions on X.
Closed subschemes of projective n-space correspond bijectively to the homogeneous ideals I of k[x_0, ..., x_n] that are saturated; i.e., I : (x_0, ..., x_n) = I. This fact may be considered as a refined version of the projective Nullstellensatz.
We can give a coordinate-free analog of the above. Namely, given a finite-dimensional vector space V over k, we let
P(V) = Proj(Sym(V*)),
where Sym(V*) is the symmetric algebra of the dual vector space of V. It is the projectivization of V; i.e., it parametrizes lines in V. There is a canonical surjective map from V minus the origin to P(V), which is defined using the chart described above. One important use of the construction is this. A divisor D on a projective variety X corresponds to a line bundle L. One then sets
|D| = P(Γ(X, L));
it is called the complete linear system of D.
Projective space over any scheme S can be defined as a fiber product of schemes
P^n_S = P^n_Z ×_(Spec Z) S.
If O(1) is the twisting sheaf of Serre on P^n_Z, we let O(1) also denote its pullback to P^n_S; that is, O(1) = g*(O(1)) for the canonical map g : P^n_S → P^n_Z.
A scheme X → S is called projective over S if it factors as a closed immersion
X → P^n_S
followed by the projection to S.
A line bundle (or invertible sheaf) L on a scheme X over S is said to be very ample relative to S if there is an immersion (i.e., an open immersion followed by a closed immersion)
i : X → P^n_S
for some n so that O(1) pulls back to L. Then an S-scheme X is projective if and only if it is proper and there exists a very ample sheaf on X relative to S. Indeed, if X is proper, then an immersion corresponding to the very ample line bundle is necessarily closed. Conversely, if X is projective, then the pullback of O(1) under the closed immersion of X into a projective space is very ample. That "projective" implies "proper" is deeper: the main theorem of elimination theory.
Relation to complete varieties
By definition, a variety is complete if it is proper over k. The valuative criterion of properness expresses the intuition that in a proper variety, there are no points "missing".
There is a close relation between complete and projective varieties: on the one hand, projective space and therefore any projective variety is complete. The converse is not true in general. However:
A smooth curve C is projective if and only if it is complete. This is proved by identifying C with the set of discrete valuation rings of the function field k(C) over k. This set has a natural Zariski topology called the Zariski–Riemann space.
Chow's lemma states that for any complete variety X, there is a projective variety Z and a birational morphism Z → X. (Moreover, through normalization, one can assume this projective variety is normal.)
Some properties of a projective variety follow from completeness. For example,
Γ(X, O_X) = k
for any projective variety X over k. This fact is an algebraic analogue of Liouville's theorem (any holomorphic function on a connected compact complex manifold is constant). In fact, the similarity between complex analytic geometry and algebraic geometry on complex projective varieties goes much further than this, as is explained below.
Quasi-projective varieties are, by definition, those which are open subvarieties of projective varieties. This class of varieties includes affine varieties. Affine varieties are almost never complete (or projective). In fact, a projective subvariety of an affine variety must have dimension zero. This is because only the constants are globally regular functions on a projective variety.
Examples and basic invariants
By definition, any homogeneous ideal in a polynomial ring yields a projective scheme (the ideal is required to be prime to give a variety). In this sense, examples of projective varieties abound. The following list mentions various classes of projective varieties which are noteworthy since they have been studied particularly intensely. The important class of complex projective varieties, i.e., the case k = C, is discussed further below.
The product of two projective spaces is projective. In fact, there is the explicit immersion (called the Segre embedding)
P^n × P^m → P^(nm + n + m).
As a consequence, the product of projective varieties over k is again projective. The Plücker embedding exhibits a Grassmannian as a projective variety. Flag varieties, such as the quotient of the general linear group modulo the subgroup of upper triangular matrices, are also projective, which is an important fact in the theory of algebraic groups.
Homogeneous coordinate ring and Hilbert polynomial
As the prime ideal P defining a projective variety X is homogeneous, the homogeneous coordinate ring
R = k[x_0, x_1, ..., x_n] / P
is a graded ring, i.e., can be expressed as the direct sum of its graded components:
R = R_0 ⊕ R_1 ⊕ R_2 ⊕ ...
There exists a polynomial P such that dim R_n = P(n) for all sufficiently large n; it is called the Hilbert polynomial of X. It is a numerical invariant encoding some extrinsic geometry of X. The degree of P is the dimension r of X and its leading coefficient times r! is the degree of the variety X. The arithmetic genus of X is (−1)^r (P(0) − 1) when X is smooth.
For example, the homogeneous coordinate ring of P^n is k[x_0, ..., x_n] and its Hilbert polynomial is P(t) = binomial(t + n, n); its arithmetic genus is zero.
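A quick check of these invariants for the projective plane (n = 2):

```latex
% R = k[x_0, x_1, x_2]; its degree-t graded piece is spanned by the
% monomials of degree t in three variables, so
\[
\dim_k R_t = \binom{t + 2}{2} = \frac{(t + 1)(t + 2)}{2} ,
\]
% i.e. the Hilbert polynomial is P(t) = (t^2 + 3t + 2)/2. Its degree
% is 2 = dim P^2; the leading coefficient 1/2 times 2! gives degree 1;
% and (-1)^2 (P(0) - 1) = 1 - 1 = 0 is the arithmetic genus.
```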
If the homogeneous coordinate ring R is an integrally closed domain, then the projective variety X is said to be projectively normal. Note, unlike normality, projective normality depends on R, the embedding of X into a projective space. The normalization of a projective variety is projective; in fact, it's the Proj of the integral closure of some homogeneous coordinate ring of X.
Degree
Let X be a projective variety. There are at least two equivalent ways to define the degree of X relative to its embedding. The first way is to define it as the cardinality of the finite set
X ∩ H_1 ∩ ... ∩ H_d,
where d is the dimension of X and the H_i are hyperplanes in "general positions". This definition corresponds to an intuitive idea of a degree. Indeed, if X is a hypersurface, then the degree of X is the degree of the homogeneous polynomial defining X. The "general positions" can be made precise, for example, by intersection theory; one requires that the intersection is proper and that the multiplicities of irreducible components are all one.
The other definition, which is mentioned in the previous section, is that the degree of X is the leading coefficient of the Hilbert polynomial of X times (dim X)!. Geometrically, this definition means that the degree of X is the multiplicity of the vertex of the affine cone over X.
Let X and Y be closed subschemes of projective space, of pure dimensions, that intersect properly (they are in general position). If m_i denotes the multiplicity of an irreducible component Z_i in the intersection (i.e., intersection multiplicity), then the generalization of Bézout's theorem says:
deg(X) · deg(Y) = sum over i of m_i deg(Z_i).
The intersection multiplicity m_i can be defined as the coefficient of Z_i in the intersection product in the Chow ring of projective space.
In particular, if H is a hypersurface not containing X, then
deg(X) · deg(H) = sum over i of m_i deg(Z_i),
where Zi are the irreducible components of the scheme-theoretic intersection of X and H with multiplicity (length of the local ring) mi.
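A minimal illustration of the formula, with a transverse intersection so that every multiplicity m_i equals 1:

```latex
% Two conics in P^2: X = V(x^2 - yz) and H = V(y^2 - xz), each of
% degree 2, so Bezout predicts 2 * 2 = 4 intersection points.
% On the affine chart z = 1:
\[
y = x^2, \quad x = y^2 \;\Longrightarrow\; x = x^4
\;\Longrightarrow\; x \in \{0, 1, \omega, \omega^2\}, \qquad \omega^3 = 1 ,
\]
% giving the four points [0:0:1], [1:1:1], [w:w^2:1], [w^2:w:1];
% a direct check shows there are no further points on the line z = 0.
```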
A complex projective variety can be viewed as a compact complex manifold; the degree of the variety (relative to the embedding) is then the volume of the variety as a manifold with respect to the metric inherited from the ambient complex projective space. A complex projective variety can be characterized as a minimizer of the volume (in a sense).
The ring of sections
Let X be a projective variety and L a line bundle on it. Then the graded ring
R(X, L) = the direct sum, over n ≥ 0, of H^0(X, L^⊗n)
is called the ring of sections of L. If L is ample, then Proj of this ring is X. Moreover, if X is normal and L is very ample, then R(X, L) is the integral closure of the homogeneous coordinate ring of X determined by L; i.e., of the coordinate ring of the embedding of X for which O(1) pulls back to L.
For applications, it is useful to allow for divisors (or Q-divisors), not just line bundles; assuming X is normal, the resulting ring is then called a generalized ring of sections. If K_X is a canonical divisor on X, then the generalized ring of sections
R(X, K_X)
is called the canonical ring of X. If the canonical ring is finitely generated, then Proj of the ring is called the canonical model of X. The canonical ring or model can then be used to define the Kodaira dimension of X.
Projective curves
Projective schemes of dimension one are called projective curves. Much of the theory of projective curves is about smooth projective curves, since the singularities of curves can be resolved by normalization, which consists in taking locally the integral closure of the ring of regular functions. Smooth projective curves are isomorphic if and only if their function fields are isomorphic. The study of finite extensions of the field of rational functions in one variable over a finite field,
or equivalently of smooth projective curves over a finite field, is an important branch in algebraic number theory.
A smooth projective curve of genus one is called an elliptic curve. As a consequence of the Riemann–Roch theorem, such a curve can be embedded as a closed subvariety in the projective plane. In general, any (smooth) projective curve can be embedded in projective 3-space (for a proof, see Secant variety#Examples). Conversely, any smooth closed curve in the projective plane of degree three has genus one by the genus formula and is thus an elliptic curve.
A smooth complete curve of genus greater than or equal to two is called a hyperelliptic curve if there is a finite morphism of degree two to the projective line.
Projective hypersurfaces
Every irreducible closed subset of projective n-space of codimension one is a hypersurface; i.e., the zero set of some homogeneous irreducible polynomial.
Abelian varieties
Another important invariant of a projective variety X is the Picard group Pic(X) of X, the set of isomorphism classes of line bundles on X. It is isomorphic to H^1(X, O_X^*) and is therefore an intrinsic notion (independent of embedding). For example, the Picard group of projective n-space is isomorphic to Z via the degree map. The kernel of the degree map is not only an abstract abelian group, but there is a variety called the Jacobian variety of X, Jac(X), whose points equal this group. The Jacobian of a (smooth) curve plays an important role in the study of the curve. For example, the Jacobian of an elliptic curve E is E itself. For a curve X of genus g, Jac(X) has dimension g.
Varieties, such as the Jacobian variety, which are complete and have a group structure are known as abelian varieties, in honor of Niels Abel. In marked contrast to affine algebraic groups such as , such groups are always commutative, whence the name. Moreover, they admit an ample line bundle and are thus projective. On the other hand, an abelian scheme may not be projective. Examples of abelian varieties are elliptic curves, Jacobian varieties and K3 surfaces.
Projections
Let E be a linear subspace of projective n-space; i.e., E is the common zero set of some linearly independent linear functionals s_1, ..., s_r. Then the projection from E is the (well-defined) morphism from the complement of E to P^(r−1):
The geometric description of this map is as follows:
We view P^(r−1) as a linear subspace of P^n disjoint from E. Then, for any x not in E, the projection sends x to the intersection of P^(r−1) with the smallest linear space containing E and x (called the join of E and x).
The projection is given by x ↦ [s_1(x) : ... : s_r(x)], where the s_i are viewed as the homogeneous coordinates on P^(r−1).
For any closed subscheme Z in P^n disjoint from E, the restriction of the projection to Z is a finite morphism.
Projections can be used to cut down the dimension in which a projective variety is embedded, up to finite morphisms. Start with some projective variety X in P^n. If n > dim X, the projection from a point not on X gives a morphism from X to P^(n−1). Moreover, this is a finite map to its image. Thus, iterating the procedure, one sees there is a finite map
This result is the projective analog of Noether's normalization lemma. (In fact, it yields a geometric proof of the normalization lemma.)
The same procedure can be used to show the following slightly more precise result: given a projective variety X over a perfect field, there is a finite birational morphism from X to a hypersurface H in projective (d + 1)-space, where d is the dimension of X. In particular, if X is normal, then it is the normalization of H.
Duality and linear system
While a projective n-space parameterizes the lines through the origin in an affine (n + 1)-space, its dual parametrizes the hyperplanes on the projective space, as follows. Fix a field k. By the dual projective n-space, we mean a projective n-space
equipped with the construction:
, a hyperplane on
where is an L-point of for a field extension L of k and
For each L, the construction is a bijection between the set of L-points of and the set of hyperplanes on . Because of this, the dual projective space is said to be the moduli space of hyperplanes on .
A line in is called a pencil: it is a family of hyperplanes on parametrized by .
If V is a finite-dimensional vector space over k, then, for the same reason as above, is the space of hyperplanes on . An important case is when V consists of sections of a line bundle. Namely, let X be an algebraic variety, L a line bundle on X and a vector subspace of finite positive dimension. Then there is a map:
determined by the linear system V, where B, called the base locus, is the intersection of the divisors of zero of nonzero sections in V (see Linear system of divisors#A map determined by a linear system for the construction of the map).
Cohomology of coherent sheaves
Let X be a projective scheme over a field (or, more generally over a Noetherian ring A). Cohomology of coherent sheaves on X satisfies the following important theorems due to Serre:
The cohomology group H^p(X, F) is a finite-dimensional k-vector space for any p.
There exists an integer n_0 (depending on F; see also Castelnuovo–Mumford regularity) such that H^p(X, F(n)) = 0 for all n ≥ n_0 and p > 0, where F(n) is the twisting with a power of a very ample line bundle.
These results are proven by reducing to the case of projective space, using the isomorphism
H^p(X, F) = H^p of the projective space with coefficients in F, where F on the right-hand side is viewed as a sheaf on the projective space by extension by zero. The result then follows by a direct computation for the twisted structure sheaves O(n), n any integer, and the case of an arbitrary coherent sheaf reduces to this case without much difficulty.
As a corollary to 1. above, if f is a projective morphism from a noetherian scheme to the spectrum of a noetherian ring, then the higher direct image of a coherent sheaf is coherent. The same result holds for proper morphisms f, as can be shown with the aid of Chow's lemma.
Sheaf cohomology groups H^i on a noetherian topological space vanish for i strictly greater than the dimension of the space. Thus the quantity, called the Euler characteristic of F,
χ(F) = sum over i of (−1)^i dim H^i(X, F)
is a well-defined integer (for X projective). One can then show χ(F(n)) = P(n) for some polynomial P over rational numbers. Applying this procedure to the structure sheaf O_X, one recovers the Hilbert polynomial of X. In particular, if X is irreducible and has dimension r, the arithmetic genus of X is given by
(−1)^r (χ(O_X) − 1),
which is manifestly intrinsic; i.e., independent of the embedding.
The arithmetic genus of a hypersurface of degree d in P^n is binomial(d − 1, n). In particular, a smooth curve of degree d in P^2 has arithmetic genus (d − 1)(d − 2)/2. This is the genus formula.
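Checking the formula against the statements made above about plane curves:

```latex
% Smooth plane curve of degree d: g = (d - 1)(d - 2)/2.
\[
d = 3 \;\Rightarrow\; g = \frac{2 \cdot 1}{2} = 1
\quad \text{(a plane cubic is an elliptic curve, as asserted earlier);}
\]
\[
d = 1, 2 \;\Rightarrow\; g = 0
\quad \text{(lines and conics are rational curves).}
\]
```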
Smooth projective varieties
Let X be a smooth projective variety where all of its irreducible components have dimension n. In this situation, the canonical sheaf ωX, defined as the sheaf of Kähler differentials of top degree (i.e., algebraic n-forms), is a line bundle.
Serre duality
Serre duality states that for any locally free sheaf F on X,
H^i(X, F) ≅ H^(n−i)(X, F^∨ ⊗ ω_X)′,
where the superscript prime refers to the dual space and F^∨ is the dual sheaf of F.
A generalization to projective, but not necessarily smooth schemes is known as Verdier duality.
Riemann–Roch theorem
For a (smooth projective) curve X, H^2 and higher cohomology vanish for dimensional reasons, and the space of the global sections of the structure sheaf is one-dimensional. Thus the arithmetic genus of X is the dimension of H^1(X, O_X). By definition, the geometric genus of X is the dimension of H^0(X, ω_X). Serre duality thus implies that the arithmetic genus and the geometric genus coincide. They will simply be called the genus of X.
Serre duality is also a key ingredient in the proof of the Riemann–Roch theorem. Since X is smooth, there is an isomorphism of groups
D ↦ O(D)
from the group of (Weil) divisors modulo principal divisors to the group of isomorphism classes of line bundles. A divisor corresponding to ω_X is called the canonical divisor and is denoted by K. Let l(D) be the dimension of H^0(X, O(D)). Then the Riemann–Roch theorem states: if g is the genus of X,
l(D) − l(K − D) = deg D − g + 1
for any divisor D on X. By the Serre duality, this is the same as:
χ(O(D)) = deg D − g + 1,
which can be readily proved. A generalization of the Riemann–Roch theorem to higher dimension is the Hirzebruch–Riemann–Roch theorem, as well as the far-reaching Grothendieck–Riemann–Roch theorem.
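Two standard consequences, sketched; both follow directly from the stated formula:

```latex
% Riemann-Roch: l(D) - l(K - D) = deg D - g + 1.
% (1) Take D = 0. Then l(0) = 1 (only the constants are global regular
%     functions), so 1 - l(K) = 1 - g, giving l(K) = g.
% (2) Take D = K. Then l(K) - l(0) = deg K - g + 1, so
\[
g - 1 = \deg K - g + 1 \;\Longrightarrow\; \deg K = 2g - 2 .
\]
```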
Hilbert schemes
Hilbert schemes parametrize all closed subvarieties of a projective scheme X in the sense that the points (in the functorial sense) of H correspond to the closed subschemes of X. As such, the Hilbert scheme is an example of a moduli space, i.e., a geometric object whose points parametrize other geometric objects. More precisely, the Hilbert scheme parametrizes closed subvarieties whose Hilbert polynomial equals a prescribed polynomial P. It is a deep theorem of Grothendieck that there is a scheme H over k such that, for any k-scheme T, there is a bijection between morphisms from T to H and closed subschemes of X × T that are flat over T with Hilbert polynomial P.
The closed subscheme of X × H that corresponds to the identity map of H is called the universal family.
For the Hilbert polynomial of an r-plane, the Hilbert scheme is called the Grassmannian of r-planes in projective space and, if X is a projective scheme, the Fano scheme of r-planes on X.
Complex projective varieties
In this section, all algebraic varieties are complex algebraic varieties. A key feature of the theory of complex projective varieties is the combination of algebraic and analytic methods. The transition between these theories is provided by the following link: since any complex polynomial is also a holomorphic function, any complex variety X yields a complex analytic space, denoted X^an. Moreover, geometric properties of X are reflected by those of X^an. For example, the latter is a complex manifold if and only if X is smooth; it is compact if and only if X is proper over C.
Relation to complex Kähler manifolds
Complex projective space is a Kähler manifold. This implies that, for any projective algebraic variety X, is a compact Kähler manifold. The converse is not in general true, but the Kodaira embedding theorem gives a criterion for a Kähler manifold to be projective.
In low dimensions, there are the following results:
(Riemann) A compact Riemann surface (i.e., compact complex manifold of dimension one) is a projective variety. By the Torelli theorem, it is uniquely determined by its Jacobian.
(Chow-Kodaira) A compact complex manifold of dimension two with two algebraically independent meromorphic functions is a projective variety.
GAGA and Chow's theorem
Chow's theorem provides a striking way to go the other way, from analytic to algebraic geometry. It states that every analytic subvariety of a complex projective space is algebraic. The theorem may be interpreted to saying that a holomorphic function satisfying certain growth condition is necessarily algebraic: "projective" provides this growth condition. One can deduce from the theorem the following:
Meromorphic functions on the complex projective space are rational.
If an algebraic map between algebraic varieties is an analytic isomorphism, then it is an (algebraic) isomorphism. (This part is a basic fact in complex analysis.) In particular, Chow's theorem implies that a holomorphic map between projective varieties is algebraic. (Consider the graph of such a map.)
Every holomorphic vector bundle on a projective variety is induced by a unique algebraic vector bundle.
Every holomorphic line bundle on a projective variety is a line bundle of a divisor.
Chow's theorem can be shown via Serre's GAGA principle. Its main theorem states:
Let X be a projective scheme over C. Then the functor associating the coherent sheaves on X to the coherent sheaves on the corresponding complex analytic space X^an is an equivalence of categories. Furthermore, the natural maps
H^i(X, F) → H^i(X^an, F^an)
are isomorphisms for all i and all coherent sheaves F on X.
Complex tori vs. complex abelian varieties
The complex manifold associated to an abelian variety A over C is a compact complex Lie group. These can be shown to be of the form
C^g / L
and are also referred to as complex tori. Here, g is the dimension of the torus and L is a lattice (also referred to as period lattice).
According to the uniformization theorem already mentioned above, any torus of dimension 1 arises from an abelian variety of dimension 1, i.e., from an elliptic curve. In fact, the Weierstrass elliptic function attached to L satisfies a certain differential equation, and as a consequence it defines a closed immersion of C/L into the projective plane.
There is a p-adic analog, the p-adic uniformization theorem.
For higher dimensions, the notions of complex abelian varieties and complex tori differ: only polarized complex tori come from abelian varieties.
Kodaira vanishing
The fundamental Kodaira vanishing theorem states that for an ample line bundle L on a smooth projective variety X of dimension n over a field of characteristic zero,
H^i(X, L ⊗ ω_X) = 0
for i > 0, or, equivalently by Serre duality, H^i(X, L^(−1)) = 0 for i < n.
Related notions
Multi-projective variety
Weighted projective variety, a closed subvariety of a weighted projective space
See also
Algebraic geometry of projective spaces
Adequate equivalence relation
Hilbert scheme
Lefschetz hyperplane theorem
Minimal model program
Notes
References
R. Vakil, Foundations Of Algebraic Geometry
External links
The Hilbert Scheme by Charles Siegel - a blog post
varieties Ch. 1
Algebraic geometry
Algebraic varieties
Projective geometry | Projective variety | [
"Mathematics"
] | 6,015 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
320,498 | https://en.wikipedia.org/wiki/Computer-mediated%20communication | Computer-mediated communication (CMC) is defined as any human communication that occurs through the use of two or more electronic devices. While the term has traditionally referred to those communications that occur via computer-mediated formats (e.g., instant messaging, email, chat rooms, online forums, social network services), it has also been applied to other forms of text-based interaction such as text messaging. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software.
Forms
Computer-mediated communication can be broken down into two forms: synchronous and asynchronous. Synchronous computer-mediated communication refers to communication that occurs in real-time. All parties are engaged in the communication simultaneously; however, they are not necessarily all in the same location. Examples of synchronous communication are video chats and audio calls. On the other hand, asynchronous computer-mediated communication refers to communication that takes place when the parties engaged are not communicating in unison. In other words, the sender does not receive an immediate response from the receiver. Most forms of computer-mediated technology are asynchronous. Examples of asynchronous communication are text messages and emails.
Scope
Scholars from a variety of fields study phenomena that can be described under the umbrella term of computer-mediated communication (CMC) (see also Internet studies). For example, many take a sociopsychological approach to CMC by examining how humans use "computers" (or digital media) to manage interpersonal interaction, form impressions and maintain relationships. These studies have often focused on the differences between online and offline interactions, though contemporary research is moving towards the view that CMC should be studied as embedded in everyday life. Another branch of CMC research examines the use of paralinguistic features such as emoticons, pragmatic rules such as turn-taking and the sequential analysis and organization of talk, and the various sociolects, styles, registers or sets of terminology specific to these environments (see Leet). The study of language in these contexts is typically based on text-based forms of CMC, and is sometimes referred to as "computer-mediated discourse analysis".
The way humans communicate in professional, social, and educational settings varies widely, depending not only upon the environment but also upon the method of communication, which in this case is through computers or other information and communication technologies (ICTs). The study of communication to achieve collaboration—common work products—is termed computer-supported collaboration and includes only some of the concerns of other forms of CMC research.
Popular forms of CMC include e-mail, video, audio or text chat (text conferencing including "instant messaging"), bulletin board systems, list-servs, and MMOs. These settings are changing rapidly with the development of new technologies. Weblogs (blogs) have also become popular, and the exchange of RSS data has better enabled users to each "become their own publisher".
Characteristics
Communication occurring within a computer-mediated format has an effect on many different aspects of an interaction. Some of those that have received attention in the scholarly literature include impression formation, deception, group dynamics, disclosure reciprocity, disinhibition and especially relationship formation.
CMC is examined and compared to other communication media through a number of aspects thought to be universal to all forms of communication, including (but not limited to) synchronicity, persistence or "recordability", and anonymity. The association of these aspects with different forms of communication varies widely. For example, instant messaging is intrinsically synchronous but not persistent, since one loses all the content when one closes the dialog box unless one has a message log set up or has manually copy-pasted the conversation. E-mail and message boards, on the other hand, are low in synchronicity since response time varies, but high in persistence since messages sent and received are saved. Properties that separate CMC from other media also include transience, its multimodal nature, and its relative lack of governing codes of conduct. CMC is able to overcome physical and social limitations of other forms of communication and therefore allow the interaction of people who are not physically sharing the same space.
Technology can be a powerful communication tool when communication is defined as a learning process that needs a sender and a receiver. According to Nicholas Jankowski in his book The Contours of Multimedia, a third party, such as software, acts in the middle between a sender and receiver. The sender interacts with this third party in order to send a message. The receiver interacts with it as well, creating an additional interaction with the medium itself along with the one initially intended between sender and receiver.
The medium in which people choose to communicate influences the extent to which people disclose personal information. CMC is marked by higher levels of self-disclosure in conversation as opposed to face-to-face interactions. Self-disclosure is any verbal communication of personally relevant information, thoughts, and feelings which establishes and maintains interpersonal relationships. This is due in part to visual anonymity and the absence of nonverbal cues, which reduce concern for losing positive face. According to Walther's (1996) hyperpersonal communication model, computer-mediated communication is valuable for providing better communication and better first impressions. Moreover, Ramirez and Zhang (2007) indicate that computer-mediated communication allows for more closeness and attraction between two individuals than face-to-face communication does. Online impression management, self-disclosure, attentiveness, expressivity, composure and other skills contribute to competence in computer-mediated communication. In fact, there is a considerable correspondence of skills in computer-mediated and face-to-face interaction, even though there is great diversity of online communication tools.
Anonymity, and in part privacy and security, depend more on the context and the particular program being used or web page being visited. However, most researchers in the field acknowledge the importance of considering the psychological and social implications of these factors alongside the technical "limitations".
Language learning
CMC is widely discussed in language learning because CMC provides opportunities for language learners to practice their language. For example, Warschauer conducted several case studies on using email or discussion boards in different language classes. Warschauer claimed that information and communications technology "bridge the historic divide between speech...and writing". Thus, considerable interest has arisen in second-language (L2) reading and writing research due to the booming of the Internet. In the learning process, students, especially children, need cognitive learning, but they also need social interaction, which meets their psychological needs. Although technology has a powerful effect in assisting English language learners, it cannot be a comprehensive approach that covers every aspect of the learning process.
Benefits
The nature of CMC means that it is easy for individuals to engage in communication with others regardless of time, location, or other spatial constraints to communication. In that CMC allows for individuals to collaborate on projects that would otherwise be impossible due to such factors as geography, it has enhanced social interaction not only between individuals but also in working life. In addition, CMC can also be useful for allowing individuals who might be intimidated due to factors like character or disabilities to participate in communication. By allowing an individual to communicate in a location of their choosing, a CMC call allows a person to engage in communication with minimal stress. Making an individual comfortable through CMC also plays a role in self-disclosure, which allows a communicative partner to open up more easily and be more expressive. When communicating through an electronic medium, individuals are less likely to engage in stereotyping and are less self-conscious about physical characteristics. The role that anonymity plays in online communication can also encourage some users to be less defensive and form relationships with others more rapidly.
Disadvantages
While computer-mediated communication can be beneficial, technological mediation can also inhibit the communication process. Unlike face-to-face communication, nonverbal cues such as tone and physical gestures, which assist in conveying the message, are lost through computer-mediated communication. As a result, the message being communicated is more vulnerable to being misunderstood due to a wrong interpretation of tone or word meaning. Moreover, according to Dr. Sobel-Lojeski of Stony Brook University and Professor Westwell of Flinders University, the virtual distance that is fundamental to computer-mediated communication can create a psychological and emotional sense of detachment, which can contribute to feelings of social isolation.
Crime
Cybersex trafficking and other cyber crimes involve computer-mediated communication. Cybercriminals can carry out the crimes in any location where they have a computer or tablet with a webcam or a smartphone with an internet connection. They also rely on social media networks, videoconferences, pornographic video sharing websites, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of these crimes are sent to authorities annually. New laws and police procedures are needed to combat crimes involving CMC.
See also
Emotions in virtual communication
Internet relationship
Discourse community
References
Further reading
External links
Applied linguistics
Information systems
Internet culture
| Computer-mediated communication | [
"Technology"
] | 1,918 | [
"Computer-mediated communication",
"Information technology",
"Computing and society",
"Information systems"
] |
320,578 | https://en.wikipedia.org/wiki/Tarski%27s%20theorem%20about%20choice | In mathematics, Tarski's theorem, proved by Alfred Tarski in 1924, states that in ZF the theorem "For every infinite set A, there is a bijective map between the sets A and A × A" implies the axiom of choice. The opposite direction was already known, thus the theorem and axiom of choice are equivalent.
Tarski told Jan Mycielski that when he tried to publish the theorem in Comptes Rendus de l'Académie des Sciences de Paris, Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest.
Proof
The goal is to prove that the axiom of choice is implied by the statement "for every infinite set A: |A| = |A × A|".
It is known that the well-ordering theorem is equivalent to the axiom of choice; thus it is enough to show that the statement implies that for every set B there exists a well-order.
Since the collection of all ordinals such that there exists a surjective function from B to the ordinal is a set, there exists an infinite ordinal β such that there is no surjective function from B to β.
We assume without loss of generality that the sets B and β are disjoint.
By the initial assumption, |B ∪ β| = |(B ∪ β) × (B ∪ β)|, thus there exists a bijection f : B ∪ β → (B ∪ β) × (B ∪ β).
For every x ∈ B, it is impossible that β × {x} ⊆ f[B], because otherwise we could define a surjective function from B to β.
Therefore, there exists at least one ordinal γ ∈ β such that f(γ) ∈ β × {x}, so the set Sx = {γ ∈ β : f(γ) ∈ β × {x}} is not empty.
We can define a new function: g(x) = min Sx.
This function is well defined since Sx is a non-empty set of ordinals, and so has a minimum.
For every x, y ∈ B with x ≠ y, the sets Sx and Sy are disjoint.
Therefore, we can define a well order on B: for every x, y ∈ B we define x ≤ y ⇔ g(x) ≤ g(y); this is a well order since the image of g, that is, g[B], is a set of ordinals and therefore well ordered.
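The construction at the heart of the proof can be summarized in one display; a compact LaTeX sketch, using the notation B, β, f, Sx and g introduced above:
```latex
% Sketch of the well-order built in the proof (notation as above).
\[
  S_x = \{\, \gamma \in \beta : f(\gamma) \in \beta \times \{x\} \,\} \neq \emptyset,
  \qquad
  g(x) = \min S_x ,
  \qquad
  x \preceq y \iff g(x) \le g(y).
\]
% The sets S_x are pairwise disjoint, so g is injective, and g[B] is a
% set of ordinals; hence \preceq is a well-order on B.
```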
References
Axiom of choice
Cardinal numbers
Set theory
Theorems in the foundations of mathematics
| Tarski's theorem about choice | [
"Mathematics"
] | 416 | [
"Mathematical theorems",
"Cardinal numbers",
"Foundations of mathematics",
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Mathematical axioms",
"Axiom of choice",
"Numbers",
"Axioms of set theory",
"Mathematical problems",
"Theorems in the foundations of mathematics... |
320,638 | https://en.wikipedia.org/wiki/Zero%20morphism | In category theory, a branch of mathematics, a zero morphism is a special kind of morphism exhibiting properties like the morphisms to and from a zero object.
Definitions
Suppose C is a category, and f : X → Y is a morphism in C. The morphism f is called a constant morphism (or sometimes left zero morphism) if for any object W in C and any g, h : W → X, fg = fh. Dually, f is called a coconstant morphism (or sometimes right zero morphism) if for any object Z in C and any g, h : Y → Z, gf = hf. A zero morphism is one that is both a constant morphism and a coconstant morphism.
A category with zero morphisms is one where, for every two objects A and B in C, there is a fixed morphism 0AB : A → B, and this collection of morphisms is such that for all objects X, Y, Z in C and all morphisms f : Y → Z, g : X → Y, the following compositions agree:
f ∘ 0XY = 0XZ = 0YZ ∘ g
The morphisms 0XY necessarily are zero morphisms and form a compatible system of zero morphisms.
If C is a category with zero morphisms, then the collection of 0XY is unique.
Defining "zero morphism" and the phrase "a category with zero morphisms" separately is unfortunate terminology, but note that if each hom-set has a unique "zero morphism", then the category "has zero morphisms" in the sense defined above.
Examples
In the category of groups (or of modules), a zero morphism is a homomorphism f : G → H that maps all of G to the identity element of H. The zero object in the category of groups is the trivial group {1}, which is unique up to isomorphism; every zero morphism factors through it, as f : G → {1} → H. By contrast, the category of sets does not have zero morphisms: a constant map is a constant (left zero) morphism but, in general, not a coconstant one.
Related concepts
If C has a zero object 0, given two objects X and Y in C, there are canonical morphisms f : X → 0 and g : 0 → Y. Then, gf is a zero morphism in MorC(X, Y). Thus, every category with a zero object is a category with zero morphisms given by the composition 0XY : X → 0 → Y.
If a category has zero morphisms, then one can define the notions of kernel and cokernel for any morphism in that category.
References
Notes
Morphisms
0 (number) | Zero morphism | [
"Mathematics"
] | 473 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Category theory",
"Mathematical relations",
"Morphisms"
] |
320,733 | https://en.wikipedia.org/wiki/Impedance%20matching | In electrical engineering, impedance matching is the practice of designing or adjusting the input impedance or output impedance of an electrical device for a desired value. Often, the desired value is selected to maximize power transfer or minimize signal reflection. For example, impedance matching typically is used to improve power transfer from a radio transmitter via the interconnecting transmission line to the antenna. Signals on a transmission line will be transmitted without reflections if the transmission line is terminated with a matching impedance.
Techniques of impedance matching include transformers, adjustable networks of lumped resistance, capacitance and inductance, or properly proportioned transmission lines. Practical impedance-matching devices will generally provide best results over a specified frequency band.
The concept of impedance matching is widespread in electrical engineering, but is relevant in other applications in which a form of energy, not necessarily electrical, is transferred between a source and a load, such as in acoustics or optics.
Theory
Impedance is the opposition by a system to the flow of energy from a source. For constant signals, this impedance can also be constant. For varying signals, it usually changes with frequency. The energy involved can be electrical, mechanical, acoustic, magnetic, electromagnetic, or thermal. The concept of electrical impedance is perhaps the most commonly known. Electrical impedance, like electrical resistance, is measured in ohms. In general, impedance (symbol: Z) has a complex value; this means that loads generally have a resistance component (symbol: R) which forms the real part and a reactance component (symbol: X) which forms the imaginary part.
In simple cases (such as low-frequency or direct current power transmission) the reactance may be negligible or zero; the impedance can be considered a pure resistance, expressed as a real number. In the following summary we will consider the general case when resistance and reactance are both significant, and the special case in which the reactance is negligible.
Maximum power transfer matching
Complex conjugate matching is used when maximum power transfer is required, namely
Zload = Zsource*
where a superscript * indicates the complex conjugate. A conjugate match is different from a reflection-less match when either the source or load has a reactive component.
If the source has a reactive component, but the load is purely resistive, then matching can be achieved by adding a reactance of the same magnitude but opposite sign to the load. This simple matching network, consisting of a single element, will usually achieve a perfect match at only a single frequency. This is because the added element will either be a capacitor or an inductor, whose impedance in both cases is frequency dependent, and will not, in general, follow the frequency dependence of the source impedance. For wide bandwidth applications, a more complex network must be designed.
Power transfer
Whenever a source of power with a fixed output impedance such as an electric signal source, a radio transmitter or a mechanical sound source (e.g., a loudspeaker) operates into a load, the maximum possible power is delivered to the load when the impedance of the load (load impedance or input impedance) is equal to the complex conjugate of the impedance of the source (that is, its internal impedance or output impedance). For two impedances to be complex conjugates their resistances must be equal, and their reactances must be equal in magnitude but of opposite signs. In low-frequency or DC systems (or systems with purely resistive sources and loads) the reactances are zero, or small enough to be ignored. In this case, maximum power transfer occurs when the resistance of the load is equal to the resistance of the source (see maximum power theorem for a mathematical proof).
Impedance matching is not always necessary. For example, if delivering a high voltage (to reduce signal degradation or to reduce power consumption) is more important than maximizing power transfer, then impedance bridging or voltage bridging is often used.
In older audio systems (reliant on transformers and passive filter networks, and based on the telephone system), the source and load resistances were matched at 600 ohms. One reason for this was to maximize power transfer, as there were no amplifiers available that could restore lost signal. Another reason was to ensure correct operation of the hybrid transformers used at central exchange equipment to separate outgoing from incoming speech, so these could be amplified or fed to a four-wire circuit. Most modern audio circuits, on the other hand, use active amplification and filtering and can use voltage-bridging connections for greatest accuracy. Strictly speaking, impedance matching only applies when both source and load devices are linear; however, matching may be obtained between nonlinear devices within certain operating ranges.
Impedance-matching devices
Adjusting the source impedance or the load impedance, in general, is called "impedance matching". There are three ways to improve an impedance mismatch, all of which are called "impedance matching":
Devices intended to present an apparent load to the source of Zload = Zsource* (complex conjugate matching). Given a source with a fixed voltage and fixed source impedance, the maximum power theorem says this is the only way to extract the maximum power from the source.
Devices intended to present an apparent load of Zload = Zline (complex impedance matching), to avoid echoes. Given a transmission line source with a fixed source impedance, this "reflectionless impedance matching" at the end of the transmission line is the only way to avoid reflecting echoes back to the transmission line.
Devices intended to present an apparent source resistance as close to zero as possible, or presenting an apparent source voltage as high as possible. This is the only way to maximize energy efficiency, and so it is used at the beginning of electrical power lines. Such an impedance bridging connection also minimizes distortion and electromagnetic interference; it is also used in modern audio amplifiers and signal-processing devices.
There are a variety of devices used between a source of energy and a load that perform "impedance matching". To match electrical impedances, engineers use combinations of transformers, resistors, inductors, capacitors and transmission lines. These passive (and active) impedance-matching devices are optimized for different applications and include baluns, antenna tuners (sometimes called ATUs or roller-coasters, because of their appearance), acoustic horns, matching networks, and terminators.
Transformers
Transformers are sometimes used to match the impedances of circuits. A transformer converts alternating current at one voltage to the same waveform at another voltage. The power input to the transformer and output from the transformer is the same (except for conversion losses). The side with the lower voltage is at low impedance (because this has the lower number of turns), and the side with the higher voltage is at a higher impedance (as it has more turns in its coil).
One example of this method involves a television balun transformer. This transformer allows interfacing a balanced line (300-ohm twin-lead) and an unbalanced line (75-ohm coaxial cable such as RG-6). To match the impedances, both cables must be connected to a matching transformer with a turns ratio of 2:1. In this example, the 300-ohm line is connected to the transformer side with more turns; the 75-ohm cable is connected to the transformer side with fewer turns. The formula for calculating the transformer turns ratio for this example is:
turns ratio = √(300/75) = √4 = 2
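As a quick numeric check of this example, a minimal Python sketch (the helper name is ours):
```python
import math

def turns_ratio(z1: float, z2: float) -> float:
    """Turns ratio N1/N2 needed to match impedance z1 to z2.

    Impedance scales as the square of the turns ratio, so N1/N2 = sqrt(z1/z2).
    """
    return math.sqrt(z1 / z2)

print(turns_ratio(300.0, 75.0))  # 2.0, the 2:1 balun of the example
```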
Resistive network
Resistive impedance matches are easiest to design and can be achieved with a simple L pad consisting of two resistors. Power loss is an unavoidable consequence of using resistive networks, and they are only (usually) used to transfer line level signals.
Stepped transmission line
Most lumped-element devices can match a specific range of load impedances. For example, in order to match an inductive load into a real impedance, a capacitor needs to be used. If the load impedance becomes capacitive, the matching element must be replaced by an inductor. In many cases, there is a need to use the same circuit to match a broad range of load impedance and thus simplify the circuit design. This issue was addressed by the stepped transmission line, where multiple, serially placed, quarter-wave dielectric slugs are used to vary a transmission line's characteristic impedance. By controlling the position of each element, a broad range of load impedances can be matched without having to reconnect the circuit.
Filters
Filters are frequently used to achieve impedance matching in telecommunications and radio engineering. In general, it is not theoretically possible to achieve perfect impedance matching at all frequencies with a network of discrete components. Impedance matching networks are designed with a definite bandwidth, take the form of a filter, and use filter theory in their design.
Applications requiring only a narrow bandwidth, such as radio tuners and transmitters, might use a simple tuned filter such as a stub. This would provide a perfect match at one specific frequency only. Wide bandwidth matching requires filters with multiple sections.
L-section
A simple electrical impedance-matching network requires one capacitor and one inductor. In the figure to the right, R1 > R2, however, either R1 or R2 may be the source and the other the load. One of X1 or X2 must be an inductor and the other must be a capacitor. One reactance is in parallel with the source (or load), and the other is in series with the load (or source). If a reactance is in parallel with the source, the effective network matches from high to low impedance.
The analysis is as follows. Consider a real source impedance of R1 and real load impedance of R2. If a reactance X1 is in parallel with the source impedance, the combined impedance can be written as:
Z = jX1R1 / (R1 + jX1)
If the imaginary part of the above impedance is canceled by the series reactance, the real part is
R2 = R1 / (1 + (R1/X1)²)
Solving for X1,
X1 = ±R1/Q,
and the series reactance that cancels the remaining imaginary part is
X2 = ∓QR2,
where Q = √(R1/R2 − 1).
Note that X1, the reactance in parallel, is negative because it is typically a capacitor. This gives the L-network the additional feature of harmonic suppression, since it is a low-pass filter too.
The inverse connection (impedance step-up) is simply the reverse—for example, reactance in series with the source. The magnitude of the impedance ratio is limited by reactance losses such as the Q of the inductor. Multiple L-sections can be wired in cascade to achieve higher impedance ratios or greater bandwidth. Transmission line matching networks can be modeled as infinitely many L-sections wired in cascade. Optimal matching circuits can be designed for a particular system using Smith charts.
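As a worked illustration of the equations above, a minimal Python sketch (the function name and the low-pass shunt-capacitor/series-inductor convention are our assumptions):
```python
import math

def l_match(r1: float, r2: float, freq_hz: float):
    """L-section matching a real source resistance r1 down to a real load
    resistance r2 (r1 > r2), per the equations above:
    Q = sqrt(r1/r2 - 1), |X1| = r1/Q (shunt C), |X2| = Q*r2 (series L)."""
    if r1 <= r2:
        raise ValueError("expects r1 > r2; swap ports for the step-up case")
    q = math.sqrt(r1 / r2 - 1.0)
    x1 = r1 / q                        # magnitude of the shunt reactance
    x2 = q * r2                        # magnitude of the series reactance
    w = 2 * math.pi * freq_hz
    return q, 1.0 / (w * x1), x2 / w   # Q, shunt C (farads), series L (henries)

# Example: match 50 ohm down to 10 ohm at 10 MHz.
q, c, l = l_match(50.0, 10.0, 10e6)
print(f"Q={q:.2f}, C={c*1e12:.0f} pF, L={l*1e9:.0f} nH")  # Q=2.00, C=637 pF, L=318 nH
```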
Power factor correction
Power factor correction devices are intended to cancel the reactive and nonlinear characteristics of a load at the end of a power line. This causes the load seen by the power line to be purely resistive. For a given true power required by a load this minimizes the true current supplied through the power lines, and minimizes power wasted in the resistance of those power lines. For example, a maximum power point tracker is used to extract the maximum power from a solar panel and efficiently transfer it to batteries, the power grid or other loads.
The maximum power theorem applies to its "upstream" connection to the solar panel, so it emulates a load resistance equal to the solar panel source resistance. However, the maximum power theorem does not apply to its "downstream" connection. That connection is an impedance bridging connection; it emulates a high-voltage, low-resistance source to maximize efficiency.
On the power grid the overall load is usually inductive. Consequently, power factor correction is most commonly achieved with banks of capacitors. It is only necessary for correction to be achieved at one single frequency, the frequency of the supply. Complex networks are only required when a band of frequencies must be matched and this is the reason why simple capacitors are all that is usually required for power factor correction.
Transmission lines
In RF connections, impedance matching is desirable, because otherwise reflections may be created at the end of the mismatched transmission line. The reflection may cause frequency-dependent loss.
In electrical systems involving transmission lines (such as radio and fiber optics)—where the length of the line is long compared to the wavelength of the signal (the signal changes rapidly compared to the time it takes to travel from source to load)— the impedances at each end of the line may be matched to the transmission line's characteristic impedance () to prevent reflections of the signal at the ends of the line. In radio-frequency (RF) systems, a common value for source and load impedances is 50 ohms. A typical RF load is a quarter-wave ground plane antenna (37 ohms with an ideal ground plane).
The general form of the voltage reflection coefficient for a wave moving from medium 1 to medium 2 is given by
Γ = (Z2 − Z1) / (Z2 + Z1)
while the voltage reflection coefficient for a wave moving from medium 2 to medium 1 is
Γ′ = (Z1 − Z2) / (Z1 + Z2) = −Γ
so the reflection coefficient is the same (except for sign), no matter from which direction the wave approaches the boundary.
There is also a current reflection coefficient, which is the negative of the voltage reflection coefficient. If the wave encounters an open at the load end, positive voltage and negative current pulses are transmitted back toward the source (negative current means the current is going the opposite direction). Thus, at each boundary there are four reflection coefficients (voltage and current on one side, and voltage and current on the other side). All four are the same, except that two are positive and two are negative. The voltage reflection coefficient and current reflection coefficient on the same side have opposite signs. Voltage reflection coefficients on opposite sides of the boundary have opposite signs.
Because they are all the same except for sign it is traditional to interpret the reflection coefficient as the voltage reflection coefficient (unless otherwise indicated). Either end (or both ends) of a transmission line can be a source or a load (or both), so there is no inherent preference for which side of the boundary is medium 1 and which side is medium 2. With a single transmission line it is customary to define the voltage reflection coefficient for a wave incident on the boundary from the transmission line side, regardless of whether a source or load is connected on the other side.
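A short Python sketch of this sign convention (names are ours):
```python
def reflection_coefficient(z1: complex, z2: complex) -> complex:
    """Voltage reflection coefficient for a wave moving from medium 1 to 2."""
    return (z2 - z1) / (z2 + z1)

z_line, z_load = 50.0, 75.0
print(reflection_coefficient(z_line, z_load))  # 0.2, wave incident from the line
print(reflection_coefficient(z_load, z_line))  # -0.2, same magnitude, opposite sign
```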
Single-source transmission line driving a load
Load-end conditions
In a transmission line, a wave travels from the source along the line. Suppose the wave hits a boundary (an abrupt change in impedance). Some of the wave is reflected back, while some keeps moving onwards. (Assume there is only one boundary, at the load.)
Let
Vi and Ii be the voltage and current that is incident on the boundary from the source side.
Vt and It be the voltage and current that is transmitted to the load.
Vr and Ir be the voltage and current that is reflected back toward the source.
On the line side of the boundary the voltage and current are V = Vi + Vr and I = Ii − Ir, and on the load side V = Vt and I = It, where Vi, Ii, Vt, It, Vr and Ir are phasors.
At a boundary, voltage and current must be continuous, therefore
Vi + Vr = Vt and Ii − Ir = It.
All these conditions are satisfied by
Vr = ΓL Vi, Ir = ΓL Ii, Vt = (1 + ΓL) Vi,
where ΓL = (ZL − Z0) / (ZL + Z0) is the reflection coefficient going from the transmission line to the load.
Source-end conditions
At the source end of the transmission line, there may be waves incident both from the source and from the line; a reflection coefficient for each direction may be computed with
ΓS = (ZS − Z0) / (ZS + Z0),
where ZS is the source impedance. The waves incident from the line are the reflections from the load end. If the source impedance matches the line, reflections from the load end will be absorbed at the source end. If the transmission line is not matched at both ends, reflections from the load will be re-reflected at the source and re-re-reflected at the load end ad infinitum, losing energy on each transit of the transmission line. This can cause a resonance condition and strongly frequency-dependent behavior. In a narrow-band system this can be desirable for matching, but is generally undesirable in a wide-band system.
Source-end impedance
Zin = Z0 (1 + ΓL T²) / (1 − ΓL T²)
where T is the one-way transfer function (from either end to the other) when the transmission line is exactly matched at source and load. T accounts for everything that happens to the signal in transit (including delay, attenuation and dispersion). If there is a perfect match at the load, ΓL = 0 and Zin = Z0.
Transfer function
VL = ½ (1 − ΓS)(1 + ΓL) T VS / (1 − ΓS ΓL T²)
where VS is the open circuit (or unloaded) output voltage from the source.
Note that if there is a perfect match at both ends
ΓL = 0
and
ΓS = 0
and then
VL = ½ T VS.
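A sketch tying these expressions together in Python (names are ours; it assumes the forms of Zin and VL reconstructed above, with T the one-way transfer function):
```python
import cmath

def gamma(z_term: complex, z0: complex) -> complex:
    """Reflection coefficient of a termination z_term on a line of impedance z0."""
    return (z_term - z0) / (z_term + z0)

def line_response(z_s: complex, z_l: complex, z0: complex, t: complex):
    """Source-end impedance and V_L/V_S for a line with one-way transfer
    function t (e.g. a lossless delay t = exp(-1j*beta*length))."""
    g_s, g_l = gamma(z_s, z0), gamma(z_l, z0)
    z_in = z0 * (1 + g_l * t**2) / (1 - g_l * t**2)
    v_ratio = 0.5 * (1 - g_s) * (1 + g_l) * t / (1 - g_s * g_l * t**2)
    return z_in, v_ratio

# Matched at both ends: Zin = Z0 and |V_L/V_S| = 1/2, as stated above.
z_in, v = line_response(50, 50, 50, cmath.exp(-1j * 1.234))
print(z_in, abs(v))  # (50+0j) 0.5
```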
Electrical examples
Telephone systems
Telephone systems also use matched impedances to minimise echo on long-distance lines. This is related to transmission-line theory. Matching also enables the telephone hybrid coil (2- to 4-wire conversion) to operate correctly. As the signals are sent and received on the same two-wire circuit to the central office (or exchange), cancellation is necessary at the telephone earpiece so excessive sidetone is not heard. All devices used in telephone signal paths are generally dependent on matched cable, source and load impedances. In the local loop, the impedance chosen is 600 ohms (nominal). Terminating networks are installed at the exchange to offer the best match to their subscriber lines. Each country has its own standard for these networks, but they are all designed to approximate about 600 ohms over the voice frequency band.
Loudspeaker amplifiers
Audio amplifiers typically do not match impedances, but provide an output impedance that is lower than the load impedance (such as < 0.1 ohm in typical semiconductor amplifiers), for improved speaker damping. For vacuum tube amplifiers, impedance-changing transformers are often used to get a low output impedance, and to better match the amplifier's performance to the load impedance. Some tube amplifiers have output transformer taps to adapt the amplifier output to typical loudspeaker impedances.
The output transformer in vacuum-tube-based amplifiers has two basic functions:
Separation of the AC component (which contains the audio signals) from the DC component (supplied by the power supply) in the anode circuit of a vacuum-tube-based power stage. A loudspeaker should not be subjected to DC current.
Reducing the output impedance of power pentodes (such as the EL34) in a common-cathode configuration.
The impedance of the loudspeaker on the secondary coil of the transformer will be transformed to a higher impedance on the primary coil in the circuit of the power pentodes by the square of the turns ratio, which forms the impedance scaling factor.
The output stage in common-drain or common-collector semiconductor-based end stages with MOSFETs or power transistors has a very low output impedance. If they are properly balanced, there is no need for a transformer or a large electrolytic capacitor to separate AC from DC current.
Non-electrical examples
Acoustics
Similar to electrical transmission lines, an impedance matching problem exists when transferring sound energy from one medium to another. If the acoustic impedance of the two media are very different most sound energy will be reflected (or absorbed), rather than transferred across the border. The gel used in medical ultrasonography helps transfer acoustic energy from the transducer to the body and back again. Without the gel, the impedance mismatch in the transducer-to-air and the air-to-body discontinuity reflects almost all the energy, leaving very little to go into the body.
The bones in the middle ear function as a series of levers, which matches mechanical impedance between the eardrum (which is acted upon by vibrations in air) and the fluid-filled inner ear.
Horns in loudspeaker systems are used like transformers in electrical circuits to match the impedance of the transducer to the impedance of the air. This principle is used in both horn loudspeakers and musical instruments. Because most driver impedances are poorly matched to the impedance of free air at low frequencies, loudspeaker enclosures are designed to both match impedance and minimize destructive phase cancellations between output from the front and rear of a speaker cone. The loudness of sound produced in air from a loudspeaker is directly related to the ratio of the diameter of the speaker to the wavelength of the sound being produced: larger speakers can produce lower frequencies at a higher level than smaller speakers. Elliptical speakers are a complex case, acting like large speakers lengthwise and small speakers crosswise. Acoustic impedance matching (or the lack of it) affects the operation of a megaphone, an echo and soundproofing.
Optics
A similar effect occurs when light (or any electromagnetic wave) hits the interface between two media with different refractive indices. For non-magnetic materials, the refractive index is inversely proportional to the material's characteristic impedance. An optical or wave impedance (that depends on the propagation direction) can be calculated for each medium, and may be used in the transmission-line reflection equation
Γ = (Z2 − Z1) / (Z2 + Z1)
to calculate reflection and transmission coefficients for the interface. For non-magnetic dielectrics, this equation is equivalent to the Fresnel equations. Unwanted reflections can be reduced by the use of an anti-reflection optical coating.
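A numeric illustration in Python (the function name is ours), using the inverse relation between impedance and refractive index at normal incidence:
```python
def power_reflectance(n1: float, n2: float) -> float:
    """Fraction of power reflected at normal incidence between media of
    refractive indices n1 and n2. Since Z is proportional to 1/n for
    non-magnetic media, the reflection equation gives r = (n1-n2)/(n1+n2)."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

print(power_reflectance(1.0, 1.5))  # ~0.04: about 4% reflected at an air-glass surface
```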
Mechanics
If a body of mass m collides elastically with a second body, maximum energy transfer to the second body will occur when the second body has the same mass m. In a head-on collision of equal masses, the energy of the first body will be completely transferred to the second body (as in Newton's cradle for example). In this case, the masses act as "mechanical impedances", which must be matched to maximize energy transfer.
If m1 and m2 are the masses of the moving and stationary bodies, and P is the momentum of the system (which remains constant throughout the collision), the energy of the second body after the collision will be:
E2 = 2 m2 P² / (m1 + m2)²
which is analogous to the power-transfer equation.
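A quick Python sketch of this formula (names are ours), showing that energy transfer peaks when the masses are matched:
```python
def energy_transferred(m1: float, m2: float, p: float) -> float:
    """Energy of the initially stationary mass m2 after an elastic head-on
    collision, with total momentum p: E2 = 2*m2*p**2 / (m1 + m2)**2."""
    return 2.0 * m2 * p**2 / (m1 + m2) ** 2

# Sweeping m2 for m1 = 1 shows the maximum at m2 == m1 (complete transfer).
for m2 in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(m2, energy_transferred(1.0, m2, 1.0))  # peaks at m2 = 1.0 with E2 = 0.5
```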
If we cannot change the masses of bodies, then we can match their impedance with a lever. Imagine a large ball dropping to the ground, and a small ball lying on the ground. The large ball hits the short end of a lever, and the small ball is launched from the long end of the lever. If the lever arm lengths satisfy ℓ2/ℓ1 = √(m1/m2), where ℓ1 is the short arm struck by the large ball of mass m1 and ℓ2 is the long arm carrying the small ball of mass m2, then all energy would be transferred to the small ball if collisions are elastic. This is roughly how the middle ear works (see above).
These principles are useful in the application of highly energetic materials (explosives). If an explosive charge is placed on a target, the sudden release of energy causes compression waves to propagate through the target radially from the point-charge contact. When the compression waves reach areas of high acoustic impedance mismatch (such as the opposite side of the target), tension waves reflect back and create spalling. The greater the mismatch, the greater the effect of creasing and spalling will be. A charge initiated against a wall with air behind it will do more damage to the wall than a charge initiated against a wall with soil behind it.
See also
Ringing (signal)
Standing wave ratio
Line isolation transformer
Notes
References
Further reading
External links
Impedance Matching with the Smith Chart
Electronic design
Electronics concepts
Filter theory
| Impedance matching | [
"Engineering"
] | 4,835 | [
"Telecommunications engineering",
"Electronic design",
"Filter theory",
"Electronic engineering",
"Design"
] |
320,737 | https://en.wikipedia.org/wiki/Frankincense | Frankincense, also known as olibanum, is an aromatic resin used in incense and perfumes, obtained from trees of the genus Boswellia in the family Burseraceae. The word is from Old French franc encens ('high-quality incense'). There are several species of Boswellia that produce true frankincense: Boswellia sacra (syn. B. bhaw-dajiana, syn. B. carteri), B. frereana, B. serrata (B. thurifera), and B. papyrifera. Resin from each is available in various grades, which depend on the time of harvesting. The resin is hand-sorted for quality.
Etymology
The English word frankincense derives from the Old French expression franc encens, meaning 'true incense', maybe with the sense of 'high-quality incense'. The adjective franc in Old French meant 'noble, true', in this case perhaps 'pure'; although franc is ultimately derived from the tribal name of the Franks, it is not a direct reference to them in the word frankincense.
The word for frankincense in the Koine Greek of the New Testament, líbanos, is cognate with the name of Lebanon; the same can be said of the corresponding terms in Arabic, Phoenician, and Hebrew. This is postulated to be because both derive from the word for 'white' and because the spice route went via Mount Lebanon.
Olibanum is derived from the Latin olibanum or the Greek líbanos. The leading "o" may have come from the Greek article o- or the Arabic article al-. The resin has also been known by many other names.
Description
The trees start producing resin at about eight to 10 years old. Tapping is done two to three times per year with the final taps producing the best tears because of their higher aromatic terpene, sesquiterpene and diterpene content. Generally speaking, the more opaque resins are the best quality. Today 90 percent of the world's production of frankincense comes from the Horn of Africa, predominantly from the border communities on the Somalia–Ethiopia border.
The main species in trade are:
Boswellia frereana grows in northern Somalia.
Boswellia occulta: Somalia. For a long time Somali harvesters considered Boswellia occulta to be the same species as Boswellia carteri even though their shapes are different, and sold resins from both species as the same thing. However, a 2019 study made clear that the chemical compositions of their essential oils are completely different.
Boswellia sacra: Somalia, South Arabia.
Boswellia bhaw-dajiana (older spelling Boswellia bhau-dajiana): a synonym of Boswellia sacra.
Boswellia carteri (older spelling Boswellia carterii): It was long considered an independent species, but in the 1980s it was determined to be a synonym of Boswellia sacra.
Boswellia serrata (synonym Boswellia thurifera): India.
Boswellia papyrifera: Ethiopia, Eritrea, Sudan.
Recent studies indicate that frankincense tree populations are declining, partly from overexploitation. Heavily tapped trees produce seeds that germinate at only 16% while seeds of trees that had not been tapped germinate at more than 80%. In addition, burning, grazing, and attacks by the longhorn beetle have reduced the tree population. Clearing of frankincense woodlands for conversion to agriculture is also a major threat.
Chemical composition
These are some of the chemical compounds present in frankincense:
acid resin (6%), soluble in alcohol and having the formula C20H32O4
gum (similar to gum arabic) 30–36%
3-acetyl-beta-boswellic acid (Boswellia sacra)
alpha-boswellic acid (Boswellia sacra)
incensole acetate, C21H34O3
phellandrene
olibanic acid
Among various plants in the genus Boswellia, only Boswellia sacra, Boswellia serrata and Boswellia papyrifera have been confirmed to contain significant amounts of boswellic acids.
History
Frankincense has been traded on the Somali and Arabian Peninsula for more than 5,000 years. Greek historian Herodotus wrote in The History that frankincense was harvested from trees in southern Arabia. He reported that the gum was dangerous to harvest because of winged snakes that guard the trees and that the smoke from burning storax would drive the snakes away. Pliny the Elder also mentioned frankincense in his Naturalis Historia.
Frankincense, which was used in the Roman Empire prior to the spread of Christianity, was reintroduced to Western Europe possibly by Frankish Crusaders and other Western Europeans on their journeys to the Eastern Roman Empire, where it was commonly used in church services. Although named frankincense, the name refers to the quality of incense brought to Western Europe, not to the Franks themselves.
Southern Arabia was an exporter of frankincense in antiquity, with some of it being traded as far as China. The 13th-century Chinese writer and customs inspector Zhao Rugua wrote that ruxiang or xunluxiang (frankincense) comes from the three Dashi states (the Caliphate of the Arab Muslims) of Maloba (Murbat), Shihe (Shihr), and Nufa (Dhofar), from the depths of the remotest mountains; the trunk of the tree is notched with a hatchet, upon which the resin flows out, and, when hardened, turns into incense, which is gathered and made into lumps; it is transported on elephants to the Dashi ports, then on ship to Sanfoqi; which is why it was known as a product of Sanfoqi.
In Christian tradition, frankincense is one of the gifts given by the Biblical Magi to Jesus at his nativity as described in the Gospel of Matthew.
Production
Thousands of tons of frankincense are traded every year to be used in religious ceremonies as incense in thuribles and by makers of perfumes, natural medicines, and essential oils.
Somalia
In Somalia, frankincense is harvested in the Bari and Sanaag regions: mountains lying at the northwest of Erigavo; El Afweyn District; Cal Madow mountain range, a westerly escarpment that runs parallel to the coast; Cal Miskeed, including Hantaara and Habeeno plateau and a middle segment of the frankincense-growing escarpment; Karkaar mountains or eastern escarpment, which lies at the eastern fringe of the frankinscence escarpment.
Oman
In Dhofar, Oman, frankincense species grow north of Salalah. It was traded in the ancient coastal city of Sumhuram, now Khor Rori, and Al-Baleed, an ancient port. In 2000, UNESCO inscribed the sites as a World Heritage Site Land of Frankincense.
Ecological status
In 1998, the International Union for Conservation of Nature warned that one of the primary frankincense species, Boswellia sacra, is "near threatened". Frankincense trees are not covered by the Convention on International Trade in Endangered Species of Wild Fauna and Flora, but experts argue that Boswellia species meet the criteria for protection. In a 2006 study, an ecologist at Wageningen University & Research claimed that, by the late-1990s, Boswellia papyrifera trees in Eritrea were becoming hard to find. In 2019, a new paper predicted a 50% reduction in Boswellia papyrifera within the next two decades. This species, found mainly in Ethiopia, Eritrea, and Sudan, accounts for about two-thirds of global frankincense production. The paper warns that all Boswellia species are threatened by habitat loss and overexploitation. Most Boswellia grow in harsh, arid regions beset by poverty and conflict. Harvesting and selling the tree's resin is one of the few sources of income for the inhabitants, resulting in overtapping.
Research
Limited clinical studies have provided weak evidence for the use of frankincense resin in certain disease conditions, but the inconsistent, low quality of research remains inconclusive for determining any effect.
Uses
The Egyptians cleansed body cavities in the mummification process with frankincense and natron. In Persian medicine, it is used for diabetes, gastritis and stomach ulcer. The oil is used in Abrahamic religions to cleanse a house or building of bad or evil energy, including in exorcisms, and to bless one's being (like the bakhoor commonly found in Persian Gulf cultures, whose fumes are spread toward the body).
The incense offering occupied a prominent position in the sacrificial legislation of the ancient Hebrews. The Book of Exodus (30:34–38) prescribes frankincense, blended with equal amounts of three aromatic spices, to be ground and burnt in the sacred altar before the Ark of the Covenant in the wilderness Tabernacle, where it was meant to be a holy offering—not to be enjoyed for its fragrance. Scholars have identified frankincense as what the Book of Jeremiah (6:20) relates was imported from Sheba during the 6th century BC Babylonian captivity. Frankincense is mentioned in the New Testament as one of the three gifts (with gold and myrrh) that the magi "from the East" presented to the Christ Child ().
In traditional Chinese medicine, frankincense (ruxiang), along with myrrh (moyao), is considered to have anti-bacterial properties and blood-moving uses. It can be used topically or orally, and appears in both the surgical and internal medicine branches of traditional Chinese medicine. It is used to relieve pain, remove blood stasis, promote blood circulation and treat deafness, stroke, locked jaw, and abnormalities in women's menstruation.
Essential oil
The essential oil of frankincense is produced by steam distillation of the tree resin. The oil's chemical components are 75% monoterpenes, sesquiterpenes, and ketones. Contrary to some commercial claims, steam-distilled frankincense oils do not contain the insufficiently volatile boswellic acids (triterpenoids), although these may be present in solvent extractions. The chemistry of the essential oil is mainly monoterpenes and sesquiterpenes, such as alpha-pinene, limonene, alpha-thujene, and beta-pinene, with small amounts of diterpenoid components being the upper limit in terms of molecular weight.
Essential oils can be diluted and applied to skin or the fragrance can be inhaled.
See also
Trade
Land of Frankincense (Frankincense Trail), site in Oman
Incense trade route, a large network around the Mediterranean and beyond
Nabataeans, a trader tribe
Literature
Desi Sangye Gyatso, author of a Tibetan herbal
Historia Plantarum (Theophrastus book)
Similar plants and products
Elemi, resin or tree
Myrrh, resin
Palo santo (Bursera graveolens), tree
Agarwood
Benzoin (resin)
Copal
References
Further reading
External links
Incense material
Resins
Plant products | Frankincense | [
"Physics",
"Chemistry"
] | 2,373 | [
"Natural products",
"Resins",
"Unsolved problems in physics",
"Incense material",
"Materials",
"Plant products",
"Amorphous solids",
"Matter"
] |
320,757 | https://en.wikipedia.org/wiki/Groove%20%28music%29 | In music, groove is the sense of an effect ("feel") of changing pattern in a propulsive rhythm or sense of "swing". In jazz, it can be felt as a quality of persistently repeated rhythmic units, created by the interaction of the music played by a band's rhythm section (e.g. drums, electric bass or double bass, guitar, and keyboards). Groove is a significant feature of popular music, and can be found in many genres, including salsa, rock, soul, funk, and fusion.
From a broader ethnomusicological perspective, groove has been described as "an unspecifiable but ordered sense of something that is sustained in a distinctive, regular and attractive way, working to draw the listener in." Musicologists and other scholars have analyzed the concept of "groove" since around the 1990s. They have argued that a "groove" is an "understanding of rhythmic patterning" or "feel" and "an intuitive sense" of "a cycle in motion" that emerges from "carefully aligned concurrent rhythmic patterns" that stimulates dancing or foot-tapping on the part of listeners. The concept can be linked to the sorts of ostinatos that generally accompany fusions and dance musics of African derivation (e.g. African-American, Afro-Cuban, Afro-Brazilian, etc.).
The term is often applied to musical performances that make one want to move or dance, and enjoyably "groove" (a word that also has sexual connotations). The expression "in the groove" (as in the jazz standard) was widely used from around 1936 to 1945, at the height of the swing era, to describe top-notch jazz performances. In the 1940s and 1950s, groove commonly came to denote musical "routine, preference, style, [or] source of pleasure."
Description
Musicians' perspectives
Like the term "swing", which is used to describe a cohesive rhythmic "feel" in a jazz context, the concept of "groove" can be hard to define. Marc Sabatella's article Establishing The Groove argues that "groove is a completely subjective thing." He claims that "one person may think a given drummer has a great feel, while another person may think the same drummer sounds too stiff, and another may think he is too loose." Similarly, a bass educator states that while "groove is an elusive thing" it can be defined as "what makes the music breathe" and the "sense of motion in the context of a song".
In a musical context, general dictionaries define a groove as "a pronounced, enjoyable rhythm" or the act of "creat[ing], danc[ing] to, or enjoy[ing] rhythmic music". Steve Van Telejuice defines the "groove" in this sense as the point in a song or performance when "even the people who can't dance wanna feel like dancing..." due to the effect of the music.
Bernard Coquelet argues that the "groove is the way an experienced musician will play a rhythm compared with the way it is written (or would be written)" by playing slightly "before or after the beat". Coquelet claims that the "notion of groove actually has to do with aesthetics and style"; "groove is an artistic element, that is to say human,...and "it will evolve depending on the harmonic context, the place in the song, the sound of the musician's instrument, and, in interaction with the groove of the other musicians", which he calls "collective" groove". Minute rhythmic variations by the rhythm section members such as the bass player can dramatically change the feel as a band plays a song, even for a simple singer-songwriter groove.
Theoretical analysis
UK musicologist Richard Middleton (1999) notes that while "the concept of groove" has "long [been] familiar in musicians' own usage", musicologists and theorists have only more recently begun to analyze this concept. Middleton states that a groove "... marks an understanding of rhythmic patterning that underlies its role in producing the characteristic rhythmic 'feel' of a piece". He notes that the "feel created by a repeating framework" is also modified with variations. "Groove", in terms of pattern-sequencing, is also known as "shuffle note"—where there is deviation from exact step positions.
When the musical slang phrase "Being in the groove" is applied to a group of improvisers, this has been called "an advanced level of development for any improvisational music group", which is "equivalent to Bohm and Jaworski's descriptions of an evoked field", which systems dynamics scholars claim are "forces of unseen connection that directly influence our experience and behaviour". Peter Forrester and John Bailey argue that the "chances of achieving this higher level of playing" (i.e., attain a "groove") are improved when the musicians are "open to other's musical ideas", "finding ways of complementing other participant's musical ideas", and "taking risks with the music".
Turry and Aigen cite Feld's definition of groove as "an intuitive sense of style as process, a perception of a cycle in motion, a form or organizing pattern being revealed, a recurrent clustering of elements through time". Aigen states that "when [a] groove is established among players, the musical whole becomes greater than the sum of its parts, enabling a person [...] to experience something beyond himself which he[/she] cannot create alone (Aigen 2002, p.34)".
Jeff Pressing's 2002 article claimed that a "groove or feel" is "a cognitive temporal phenomenon emerging from one or more carefully aligned concurrent rhythmic patterns, characterized by...perception of recurring pulses, and subdivision of structure in such pulses,...perception of a cycle of time, of length 2 or more pulses, enabling identification of cycle locations, and...effectiveness of engaging synchronizing body responses (e.g. dance, foot-tapping)".
Neuroscientific perspectives
The "groove" has been cited as an example of sensory-motor coupling between neural systems. Sensory-motor coupling is the coupling or integration of the sensory system and motor system. Sensorimotor integration is not a static process. For a given stimulus, there is no one single motor command. "Neural responses at almost every stage of a sensorimotor pathway are modified at short and long timescales by biophysical and synaptic processes, recurrent and feedback connections, and learning, as well as many other internal and external variables". Recent research has shown that at least some styles of modern groove-oriented rock music are characterized by an "aesthetics of exactitude" and the strongest groove stimulation could be observed for drum patterns without microtiming deviations.
Use in different genres
Jazz
In some more traditional styles of jazz, the musicians often use the word "swing" to describe the sense of rhythmic cohesion of a skilled group. However, since the 1950s, musicians from the organ trio and latin jazz subgenres have also used the term "groove". Jazz flute player Herbie Mann talks a lot about "the groove." Mann "locked into a Brazilian groove in the early '60s, then moved into a funky, soulful groove in the late '60s and early '70s. By the mid-'70s he was making hit disco records, still cooking in a rhythmic groove." He describes his approach to finding the groove as follows: "All you have to do is find the waves that are comfortable to float on top of." Mann argues that the "epitome of a groove record" is "Memphis Underground or Push Push", because the "rhythm section [is] locked all in one perception."
Reggae
In Jamaican reggae, dancehall, and dub music, the creole term "riddim" is used to describe the rhythm patterns created by the drum pattern or a prominent bassline. In other musical contexts a "riddim" would be called a "groove" or beat. One of the widely copied "riddims", Real Rock, was recorded in 1967 by Sound Dimension. "It was built around a single, emphatic bass note followed by a rapid succession of lighter notes. The pattern repeated over and over hypnotically. The sound was so powerful that it gave birth to an entire style of reggae meant for slow dancing called rub a dub."
R&B
The "groove" is also associated with funk performers, such as James Brown's drummers Clyde Stubblefield and Jabo Starks, and with soul music. "In the 1950s, when 'funk' and 'funky' were used increasingly as adjectives in the context of soul music—the meaning being transformed from the original one of a pungent odor to a re-defined meaning of a strong, distinctive groove." As "[t]he soul dance music of its day, the basic idea of funk was to create as intense a groove as possible."
When a drummer plays a groove that "is very solid and with a great feel...", this is referred to informally as being "in the pocket"; when a drummer "maintains this feel for an extended period of time, never wavering, this is often referred to as a deep pocket."
Hip hop
A concept similar to "groove" or "swing" is also used in other African-American genres such as hip hop. The rhythmic groove that jazz artists call a sense of “swing” is sometimes referred to as having "flow" in the hip hop scene. "Flow is as elemental to hip hop as the concept of swing is to jazz". Just as the jazz concept of "swing" involves performers deliberately playing behind or ahead of the beat, the hip-hop concept of flow is about "funking with one's expectations of time"—that is, the rhythm and pulse of the music. "Flow is not about what is being said so much as how one is saying it".
Groove metal
In the 1990s the term "groove" was used to describe a form of thrash metal called groove metal, which is based around the use of mid-tempo thrash riffs and detuned power chords played with heavy syncopation. "Speed wasn’t the main point anymore, it was what Pantera singer Phil Anselmo called the 'power groove.' Riffs became unusually heavy without the need of growling or the extremely distorted guitars of death metal, rhythms depended more on a heavy groove."
With heavy metal, the term "groove" can also be associated with stoner metal, sludge metal, doom metal and death metal genres as well as djent.
Jam/improvisational rock
See also
Groove (drumming)
Rare groove
Tempo rubato
References
Further reading
Busse, W. G. (2002): Toward Objective Measurement and Evaluation of Jazz Piano Performance Via MIDI-Based Groove Quantize Templates. Music Perception 19, 443–461.
Clark, Mike, and Paul Jackson (1992) Rhythm Combination, realisation Setsuro Tsukada. Video recording, 1 cassette (VHS). Video Workshop Series. [N.p.]: Atoss.
Klingmann, Heinrich (2010): Improvising with a Groove – Pedagogic Steps Towards an Elusive Task, Lecture at the 2nd IASJ Jazz Education Conference, Corfu 2010
Pressing, Jeff (2002): "Black Atlantic Rhythm. Its Computational and Transcultural Foundations." Music Perception 19, 285–310.
Prögler, J. A. (1995): "Searching for Swing. Participatory Discrepancies in the Jazz Rhythm Section." Ethnomusicology 39, 21- 54.
PopScriptum (2010): The Groove Issue
list of literature on groove
African-American music
Jazz techniques
Jazz terminology
Musical techniques
Popular music
Rhythm and meter | Groove (music) | [
"Physics"
] | 2,478 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
320,819 | https://en.wikipedia.org/wiki/Cantor%20function | In mathematics, the Cantor function is an example of a function that is continuous, but not absolutely continuous. It is a notorious counterexample in analysis, because it challenges naive intuitions about continuity, derivative, and measure. Though it is continuous everywhere and has zero derivative almost everywhere, its value still goes from 0 to 1 as its argument reaches from 0 to 1. Thus, in one sense the function seems very much like a constant one which cannot grow, and in another, it does indeed monotonically grow.
It is also called the Cantor ternary function, the Lebesgue function, Lebesgue's singular function, the Cantor–Vitali function, the Devil's staircase, the Cantor staircase function, and the Cantor–Lebesgue function. Georg Cantor (1884) introduced the Cantor function and mentioned that Scheeffer pointed out that it was a counterexample to an extension of the fundamental theorem of calculus claimed by Harnack. The Cantor function was discussed and popularized by Scheeffer (1884), Lebesgue (1904) and Vitali (1905).
Definition
To define the Cantor function c : [0,1] → [0,1], let x be any number in [0,1] and obtain c(x) by the following steps:
Express x in base 3, using digits 0, 1, 2.
If the base-3 representation of x contains a 1, replace every digit strictly after the first 1 with 0.
Replace any remaining 2s with 1s.
Interpret the result as a binary number. The result is c(x).
For example:
1/4 has the ternary representation 0.02020202... There are no 1s so the next stage is still 0.02020202... This is rewritten as 0.01010101... This is the binary representation of 1/3, so c(1/4) = 1/3.
1/5 has the ternary representation 0.01210121... The digits after the first 1 are replaced by 0s to produce 0.01000000... This is not rewritten since it has no 2s. This is the binary representation of 1/4, so c(1/5) = 1/4.
200/243 has the ternary representation 0.21102 (or 0.211012222...). The digits after the first 1 are replaced by 0s to produce 0.21. This is rewritten as 0.11. This is the binary representation of 3/4, so c(200/243) = 3/4.
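The digit algorithm above can be checked mechanically; a minimal Python sketch (names are ours), using exact rational arithmetic:
```python
from fractions import Fraction

def cantor(x: Fraction, digits: int = 40) -> Fraction:
    """Cantor function c(x) for 0 <= x < 1 via the ternary-digit algorithm:
    expand x in base 3, truncate after the first 1, map remaining 2s to 1s,
    and read the result in base 2 (truncated to `digits` ternary digits)."""
    result, seen_one = Fraction(0), False
    for k in range(1, digits + 1):
        if seen_one:
            break               # digits after the first 1 are replaced by 0
        x *= 3
        d = int(x)              # next ternary digit: 0, 1 or 2
        x -= d
        if d == 1:
            seen_one = True     # keep this 1, drop everything after it
            bit = 1
        else:
            bit = d // 2        # 0 -> 0, 2 -> 1
        result += Fraction(bit, 2 ** k)
    return result

print(float(cantor(Fraction(1, 4))))      # 0.3333... ~ 1/3
print(float(cantor(Fraction(1, 5))))      # 0.25 = 1/4 (exact)
print(float(cantor(Fraction(200, 243))))  # 0.75 = 3/4 (exact)
```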
Equivalently, if C is the Cantor set on [0,1], then the Cantor function c : [0,1] → [0,1] can be defined as
c(x) = Σk ak/2^k whenever x = Σk 2ak/3^k ∈ C with ak ∈ {0,1}, and
c(x) = sup{ c(y) : y ≤ x, y ∈ C } for x ∈ [0,1] ∖ C.
This formula is well-defined, since every member of the Cantor set has a unique base 3 representation that only contains the digits 0 or 2. (For some members of C, the ternary expansion is repeating with trailing 2's and there is an alternative non-repeating expansion ending in 1. For example, 1/3 = 0.13 = 0.02222...3 is a member of the Cantor set). Since c(0) = 0 and c(1) = 1, and c is monotonic on C, it is clear that 0 ≤ c(x) ≤ 1 also holds for all x ∈ [0,1] ∖ C.
Properties
The Cantor function challenges naive intuitions about continuity and measure; though it is continuous everywhere and has zero derivative almost everywhere, c(x) goes from 0 to 1 as x goes from 0 to 1, and takes on every value in between. The Cantor function is the most frequently cited example of a real function that is uniformly continuous (precisely, it is Hölder continuous of exponent α = log 2/log 3) but not absolutely continuous. It is constant on intervals of the form (0.x1x2x3...xn022222..., 0.x1x2x3...xn200000...), and every point not in the Cantor set is in one of these intervals, so its derivative is 0 outside of the Cantor set. On the other hand, it has no derivative at any point in an uncountable subset of the Cantor set containing the interval endpoints described above.
The Cantor function can also be seen as the cumulative probability distribution function of the 1/2-1/2 Bernoulli measure μ supported on the Cantor set: c(x) = μ([0, x]). This probability distribution, called the Cantor distribution, has no discrete part. That is, the corresponding measure is atomless. This is why there are no jump discontinuities in the function; any such jump would correspond to an atom in the measure.
However, no non-constant part of the Cantor function can be represented as an integral of a probability density function; integrating any putative probability density function that is not almost everywhere zero over any interval will give positive probability to some interval to which this distribution assigns probability zero. In particular, as has been pointed out, the function is not the integral of its derivative even though the derivative exists almost everywhere.
The Cantor function is the standard example of a singular function.
The Cantor function is also a standard example of a function with bounded variation but, as mentioned above, is not absolutely continuous. However, every absolutely continuous function is continuous with bounded variation.
The Cantor function is non-decreasing, and so in particular its graph defines a rectifiable curve. It can be shown that the arc length of its graph is 2. Note that the graph of any nondecreasing function f such that f(0) = 0 and f(1) = 1 has length not greater than 2. In this sense, the Cantor function is extremal.
Lack of absolute continuity
Because the Lebesgue measure of the uncountably infinite Cantor set is 0, for any positive ε < 1 and δ, there exists a finite sequence of pairwise disjoint sub-intervals with total length < δ over which the Cantor function cumulatively rises more than ε.
In fact, for every δ > 0 there are finitely many pairwise disjoint intervals (xk, yk) (1 ≤ k ≤ M) with Σk (yk − xk) < δ and Σk (c(yk) − c(xk)) = 1.
Alternative definitions
Iterative construction
Below we define a sequence {fn} of functions on the unit interval that converges to the Cantor function.
Let f0(x) = x.
Then, for every integer n ≥ 0, the next function fn+1(x) will be defined in terms of fn(x) as follows:
Let fn+1(x) = (1/2) fn(3x), when 0 ≤ x ≤ 1/3;
Let fn+1(x) = 1/2, when 1/3 ≤ x ≤ 2/3;
Let fn+1(x) = 1/2 + (1/2) fn(3x − 2), when 2/3 ≤ x ≤ 1.
The three definitions are compatible at the end-points 1/3 and 2/3, because fn(0) = 0 and fn(1) = 1 for every n, by induction. One may check that fn converges pointwise to the Cantor function defined above. Furthermore, the convergence is uniform. Indeed, separating into three cases, according to the definition of fn+1, one sees that
max |fn+1(x) − fn(x)| ≤ (1/2) max |fn(x) − fn−1(x)|, for n ≥ 1.
If f denotes the limit function, it follows that, for every n ≥ 0,
max |f(x) − fn(x)| ≤ 2^(1−n) max |f1(x) − f0(x)|.
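A direct Python transcription of this recursion (names are ours) exhibits the geometric convergence:
```python
def f(n: int, x: float) -> float:
    """n-th function of the sequence above, starting from f0(x) = x."""
    if n == 0:
        return x
    if x <= 1 / 3:
        return 0.5 * f(n - 1, 3 * x)
    if x <= 2 / 3:
        return 0.5
    return 0.5 + 0.5 * f(n - 1, 3 * x - 2)

# Successive iterates differ by at most half the previous difference,
# so f10 is already within 2**-10 of the Cantor function everywhere.
xs = [i / 1000 for i in range(1001)]
print(max(abs(f(11, x) - f(10, x)) for x in xs))  # < 2**-11
```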
Fractal volume
The Cantor function is closely related to the Cantor set. The Cantor set C can be defined as the set of those numbers in the interval [0, 1] that do not contain the digit 1 in their base-3 (triadic) expansion, except if the 1 is followed by zeros only (in which case the tail 1000... can be replaced by 0222... to get rid of any 1). It turns out that the Cantor set is a fractal with (uncountably) infinitely many points (zero-dimensional volume), but zero length (one-dimensional volume). Only the D-dimensional volume (in the sense of a Hausdorff measure) takes a finite value, where D = log 2/log 3 is the fractal dimension of C. We may define the Cantor function alternatively as the D-dimensional volume of sections of the Cantor set:
c(x) = H^D(C ∩ [0, x])
Self-similarity
The Cantor function possesses several symmetries. For 0 ≤ x ≤ 1, there is a reflection symmetry
c(x) = 1 − c(1 − x)
and a pair of magnifications, one on the left and one on the right:
c(x/3) = c(x)/2
and
c((2 + x)/3) = (1 + c(x))/2
The magnifications can be cascaded; they generate the dyadic monoid. This is exhibited by defining several helper functions. Define the reflection as
r(x) = 1 − x
The first self-symmetry can be expressed as
r ∘ c = c ∘ r
where the symbol ∘ denotes function composition. That is, (r ∘ c)(x) = r(c(x)) = 1 − c(x) and likewise for the other cases. For the left and right magnifications, write the left-mappings
LD(x) = x/3
and
LC(y) = y/2
Then the Cantor function obeys
LC ∘ c = c ∘ LD
Similarly, define the right mappings as
RD(x) = (2 + x)/3
and
RC(y) = (1 + y)/2
Then, likewise,
RC ∘ c = c ∘ RD
The two sides can be mirrored one onto the other, in that
LD = r ∘ RD ∘ r
and likewise,
LC = r ∘ RC ∘ r
These operations can be stacked arbitrarily. Consider, for example, the sequence of left-right moves Adding the subscripts C and D, and, for clarity, dropping the composition operator in all but a few places, one has:
Arbitrary finite-length strings in the letters L and R correspond to the dyadic rationals, in that every dyadic rational can be written as both y = n/2^m for integers n and m and as a finite-length string of bits y = 0.b1b2b3…bm with bi ∈ {0,1}. Thus, every dyadic rational is in one-to-one correspondence with some self-symmetry of the Cantor function.
Some notational rearrangements can make the above slightly easier to express. Let gA denote the composition of left-right moves given by a binary string A of the letters L and R. Function composition extends this to a monoid, in that gA gB = gAB, where AB is just the ordinary concatenation of such strings. The dyadic monoid M is then the monoid of all such finite-length left-right moves. Writing γ ∈ M as a general element of the monoid, there is a corresponding self-symmetry of the Cantor function: c ∘ γD = γC ∘ c.
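These symmetry relations can be checked numerically. The snippet below is our own check, reusing the cantor() sketch from the previous section; it verifies the two magnification identities at random points.

```python
import random

for _ in range(5):
    x = random.random()
    assert abs(cantor(x / 3) - cantor(x) / 2) < 1e-6              # c o L_D = L_C o c
    assert abs(cantor((2 + x) / 3) - (1 + cantor(x)) / 2) < 1e-6  # c o R_D = R_C o c
print("both self-symmetries hold to within 1e-6")
```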
The dyadic monoid itself has several interesting properties. It can be viewed as a finite number of left-right moves down an infinite binary tree; the infinitely distant "leaves" on the tree correspond to the points on the Cantor set, and so, the monoid also represents the self-symmetries of the Cantor set. In fact, a large class of commonly occurring fractals are described by the dyadic monoid; additional examples can be found in the article on de Rham curves. Other fractals possessing self-similarity are described with other kinds of monoids. The dyadic monoid is itself a sub-monoid of the modular group PSL(2, Z).
Note that the Cantor function bears more than a passing resemblance to Minkowski's question-mark function. In particular, it obeys the exact same symmetry relations, although in an altered form.
Generalizations
Let y = Σk=1…∞ bk 2^(−k)
be the dyadic (binary) expansion of the real number 0 ≤ y ≤ 1 in terms of binary digits bk ∈ {0,1}. This expansion is discussed in greater detail in the article on the dyadic transformation. Then consider the function Cz(y) = Σk=1…∞ bk z^k.
For z = 1/3, the inverse of the function x = 2 C1/3(y) is the Cantor function. That is, y = y(x) is the Cantor function. In general, for any z < 1/2, Cz(y) looks like the Cantor function turned on its side, with the width of the steps getting wider as z approaches zero.
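A short sketch of this generalization (our own code, following the definition above): the binary digits of y are extracted one at a time and re-weighted by powers of z. For z = 1/3 this turns binary digits into ternary digits 0 and 2, which is exactly the inverse of the Cantor function.

```python
def C(z, y, digits=40):
    """Evaluate C_z(y) = sum of b_k * z**k over the binary digits b_k of y."""
    total = 0.0
    for k in range(1, digits + 1):
        y *= 2
        b = int(y)        # k-th binary digit of y
        y -= b
        total += b * z**k
    return total

y = 1/3                   # the Cantor function sends 1/4 to 1/3
print(2 * C(1/3, y))      # ~0.25, recovering x from y = c(x)
```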
As mentioned above, the Cantor function is also the cumulative distribution function of a measure on the Cantor set. Different Cantor functions, or Devil's Staircases, can be obtained by considering different atom-less probability measures supported on the Cantor set or other fractals. While the Cantor function has derivative 0 almost everywhere, current research focuses on the question of the size of the set of points where the upper right derivative is distinct from the lower right derivative, causing the derivative to not exist. This analysis of differentiability is usually given in terms of fractal dimension, with the Hausdorff dimension the most popular choice. This line of research was started in the 1990s by Darst, who showed that the Hausdorff dimension of the set of non-differentiability of the Cantor function is the square of the dimension of the Cantor set, (log 2/log 3)^2. Subsequently Falconer showed that this squaring relationship holds for all Ahlfors-regular, singular measures. Later, Troscheit obtained a more comprehensive picture of the set where the derivative does not exist for more general normalized Gibbs measures supported on self-conformal and self-similar sets.
Hermann Minkowski's question mark function loosely resembles the Cantor function visually, appearing as a "smoothed out" form of the latter; it can be constructed by passing from a continued fraction expansion to a binary expansion, just as the Cantor function can be constructed by passing from a ternary expansion to a binary expansion. The question mark function has the interesting property of having vanishing derivatives at all rational numbers.
See also
Dyadic transformation
Weierstrass function, a function that is continuous everywhere but differentiable nowhere.
Notes
References
Reprinted in: E. Zermelo (Ed.), Gesammelte Abhandlungen Mathematischen und Philosophischen Inhalts, Springer, New York, 1980.
External links
Cantor ternary function at Encyclopaedia of Mathematics
Cantor Function by Douglas Rivers, the Wolfram Demonstrations Project.
Fractals
Measure theory
Special functions
Georg Cantor
De Rham curves | Cantor function | [
"Mathematics"
] | 2,655 | [
"Functions and mappings",
"Mathematical analysis",
"Special functions",
"Mathematical objects",
"Fractals",
"Combinatorics",
"Mathematical relations"
] |
320,861 | https://en.wikipedia.org/wiki/Constant%20function | In mathematics, a constant function is a function whose (output) value is the same for every input value.
Basic properties
As a real-valued function of a real-valued argument, a constant function has the general form y(x) = c or just y = c. For example, the function y(x) = 4 is the specific constant function where the output value is c = 4. The domain of this function is the set of all real numbers. The image of this function is the singleton set {4}. The independent variable x does not appear on the right side of the function expression and so its value is "vacuously substituted"; namely y(0) = 4, y(−2.7) = 4, y(π) = 4, and so on. No matter what value of x is input, the output is 4.
The graph of the constant function y = c is a horizontal line in the plane that passes through the point (0, c). In the context of a polynomial in one variable x, the constant function is called a non-zero constant function because it is a polynomial of degree 0, and its general form is f(x) = c, where c is nonzero. This function has no intersection point with the x-axis, meaning it has no root (zero). On the other hand, the polynomial f(x) = 0 is the identically zero function. It is the (trivial) constant function and every x is a root. Its graph is the x-axis in the plane. Its graph is symmetric with respect to the y-axis, and therefore a constant function is an even function.
In the context where it is defined, the derivative of a function is a measure of the rate of change of function values with respect to change in input values. Because a constant function does not change, its derivative is 0. This is often written: (c)′ = 0. The converse is also true. Namely, if y′(x) = 0 for all real numbers x, then y is a constant function. For example, given the constant function y(x) = −√2, the derivative of y is the identically zero function y′(x) = (−√2)′ = 0.
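A quick symbolic check of this property (an illustration of ours, using the sympy library):

```python
import sympy as sp

x = sp.Symbol('x')
print(sp.diff(sp.Integer(4), x))   # 0: the derivative of the constant y(x) = 4 vanishes
print(sp.integrate(0, x))          # 0: an antiderivative of the zero function
                                   #    (the general antiderivative is a constant)
```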
Other properties
For functions between preordered sets, constant functions are both order-preserving and order-reversing; conversely, if f is both order-preserving and order-reversing, and if the domain of f is a lattice, then f must be constant.
Every constant function whose domain and codomain are the same set X is a left zero of the full transformation monoid on X, which implies that it is also idempotent.
The graph of a constant function has zero slope or gradient.
Every constant function between topological spaces is continuous.
A constant function factors through the one-point set, the terminal object in the category of sets. This observation is instrumental for F. William Lawvere's axiomatization of set theory, the Elementary Theory of the Category of Sets (ETCS).
For any non-empty set X, every set Y is isomorphic to the set of constant functions from X to Y. For each element y in Y, there is a unique constant function f : X → Y such that f(x) = y for all x in X. Conversely, if a function f : X → Y satisfies f(x) = f(x′) for all x, x′ in X, then f is by definition a constant function.
As a corollary, the one-point set is a generator in the category of sets.
Every set X is canonically isomorphic to the function set X^1, or hom set hom(1, X) in the category of sets, where 1 is the one-point set. Because of this, and the adjunction between Cartesian products and hom in the category of sets (so there is a canonical isomorphism between functions of two variables and functions of one variable valued in functions of another (single) variable, hom(X × Y, Z) ≅ hom(X, hom(Y, Z))), the category of sets is a closed monoidal category with the Cartesian product of sets as tensor product and the one-point set as tensor unit. In the isomorphisms 1 × X ≅ X ≅ X × 1 natural in X, the left and right unitors are the projections mapping the ordered pairs (∗, x) and (x, ∗) respectively to the element x, where ∗ is the unique point in the one-point set.
A function on a connected set is locally constant if and only if it is constant.
References
Herrlich, Horst and Strecker, George E., Category Theory, Heldermann Verlag (2007).
External links
Elementary mathematics
Elementary special functions
Polynomial functions | Constant function | [
"Mathematics"
] | 788 | [
"Elementary mathematics"
] |
320,873 | https://en.wikipedia.org/wiki/Rocketdyne | Rocketdyne is an American rocket engine design and production company headquartered in Canoga Park, in the western San Fernando Valley of suburban Los Angeles, in southern California.
Rocketdyne was founded as a division of North American Aviation in 1955 and was later part of Rockwell International from 1967 until 1996 and Boeing from 1996 to 2005. In 2005, Boeing sold the Rocketdyne division to United Technologies Corporation, becoming Pratt & Whitney Rocketdyne as part of Pratt & Whitney. In 2013, Rocketdyne was sold to GenCorp, Inc., which merged it with Aerojet to form Aerojet Rocketdyne.
History
After World War II, North American Aviation (NAA) was contracted by the Defense Department to study the German V-2 missile and adapt its engine to Society of Automotive Engineers (SAE) measurements and U.S. construction details. NAA also used the same general concept of separate burner/injectors from the V-2 engine design to build a much larger engine for the Navaho missile project (1946–1958). This work was considered unimportant in the 1940s and funded at a very low level, but the start of the Korean War in 1950 changed priorities. NAA had begun to use the Santa Susana Field Laboratory (SSFL) high in the Simi Hills around 1947 for the Navaho's rocket engine testing. At that time the site was much further away from major populated areas than the early test sites NAA had been using within Los Angeles.
Navaho ran into continual difficulties and was canceled in 1958 when the Chrysler Corporation Missile Division's Redstone missile design (essentially an improved V-2) had caught up in development. However the Rocketdyne engine, known as the A-5 or NAA75-110, proved to be considerably more reliable than the one developed for Redstone, so the missile was redesigned with the A-5 even though the resulting missile had much shorter range.
As the missile entered production, NAA spun off Rocketdyne in 1955 as a separate division, and built its new plant in the then small Los Angeles suburb of Canoga Park, in the San Fernando Valley near and below its Santa Susana Field Laboratory.
In 1967, NAA, with its Rocketdyne and Atomics International divisions, merged with the Rockwell Corporation to form North American Rockwell, becoming in 1973 Rockwell International.
Thor, Delta, Atlas
Rocketdyne's next major development was its first all-new design, the S-3D, which had been developed in parallel to the V-2 derived A series. The S-3 was used on the Army's Jupiter missile design, essentially a development of the Redstone, and was later selected for the competitor Air Force Thor missile. An even larger design, the LR89/LR105, was used on the Atlas missile. The Thor had a short military career, but it was used as a satellite launcher through the 1950s and 60s in a number of different versions. One, Thor Delta, became the baseline for the current Delta series of space launchers, although since the late 1960s the Delta has had almost nothing in common with the Thor. Although the original S-3 engine was used on some Delta versions, most use its updated RS-27 design, originally developed as a single engine to replace the three-engine cluster on the Atlas.
The Atlas also had a short military career as a deterrent weapon, but the Atlas rocket family descended from it became an important orbital launcher for many decades, both for the Project Mercury crewed spacecraft, and in the much-employed Atlas-Agena and Atlas-Centaur rockets. The Atlas V is still in manufacture and use.
NASA
Rocketdyne also became the major supplier for NASA's development efforts, supplying all of the major engines for the Saturn rocket, and potentially, the huge Nova rocket designs. Rocketdyne's H-1 engine was used by the Saturn I booster main stage. Five F-1 engines powered the Saturn V's S-IC first stage, while five J-2 engines powered its S-II second stage, and one J-2 its S-IVB third stage. By 1965, Rocketdyne built the vast majority of United States rocket engines, excepting those of the Titan rocket (built by Aerojet), and its payroll had grown to 65,000. This sort of growth appeared to be destined to continue in the 1970s when Rocketdyne won the contract for the RS-25 Space Shuttle Main Engine (SSME), but the rapid downturn in other military and civilian contracts led to downsizing of the company. North American Aviation, largely a spacecraft manufacturer, and also tied almost entirely to the Space Shuttle, merged with the Rockwell Corporation in 1967 to form the North American Rockwell company, which became Rockwell International in 1973, with Rocketdyne as a major division.
Downsizing
During continued downsizing in the 1980s and 1990s, Rockwell International shed several parts of the former North American Rockwell corporation. The aerospace entities of Rockwell International, including the former NAA and Rocketdyne, were sold to Boeing in 1996. Rocketdyne became part of Boeing's Defense division. In February 2005, Boeing reached an agreement to sell what was by then referred to as "Rocketdyne Propulsion & Power" to Pratt & Whitney of United Technologies Corporation. The transaction was completed on August 2, 2005. Boeing retained ownership of Rocketdyne's Santa Susana Field Lab.
GenCorp, Inc. purchased Pratt & Whitney Rocketdyne in 2013 from United Technologies Corporation, and merged it with Aerojet to form Aerojet Rocketdyne.
Facilities and operations
Canoga Park, California
Rocketdyne maintained division headquarters and rocket engine manufacturing facilities at Canoga Park from 1955 until 2014.
North American Aviation's rocket development activities began with engine tests near the Los Angeles Airport. In 1948, NAA began testing liquid rocket engines in the Simi Hills, at the site which would later become the Santa Susana Field Laboratory. The company sought a location for a manufacturing plant near the Simi Hills testing site. In 1954, North American Aviation purchased 56 acres of land within the current Warner Center area then deeded the property to the Air Force. The Air Force, in turn, designated the site Air Force Plant No. 56 and contracted with Rocketdyne to build and operate the facility. NAA completed construction of the main manufacturing building and designated Rocketdyne as a new company division in November 1955.
Rocketdyne's success resulted in the addition of buildings within a growing footprint. At its peak, the Rocketdyne Canoga facility comprised some 27 different buildings over 119 acres of land, including over one million square feet of manufacturing area plus 516,000 square feet of office space. The Canoga plant grew into areas both east and southeast of the original location. In 1960, Rocketdyne opened a headquarters building at the southeast corner of Victory Boulevard and Canoga Avenue. A pedestrian tunnel underneath Victory Boulevard east of Canoga Avenue provided access between buildings to the South (including the Headquarters) and those located to the North of the street. (The tunnel was removed in 1973.)
The Canoga plant shrank over time via piecemeal property sales and building demolitions into the 2000s. With the completion of the Apollo program in 1969, Rocketdyne ended the leases of several facilities and returned the headquarters offices to the Canoga Main building. In 1973, Rocketdyne repurchased the Air Force Plant No. 56 property, thereby ending the government designation. The Space Shuttle program ended in 2011, and further reductions followed. Pratt and Whitney retained ownership of the Canoga property when Rocketdyne was sold to Aerojet in 2013; the remaining property measured roughly 47 acres with buildings and structures comprising a total of 770,000 square feet.
Rocketdyne played a key role in the United States space program and the development of propulsion systems. Ten years after being established, the Canoga plant produced the vast majority of the United States' liquid rocket engines (except those of the Titan rocket, which were built by Aerojet). Through the end of the twentieth century, Rocketdyne products powered all major engines for the Saturn program and every space program in the United States.
Six specific periods of liquid rocket engine development and manufacturing programs took place at the Canoga plant: Atlas (1954-late 1960s), Thor (1961-1975), Jupiter (1955-1962), Saturn (1961-1975); Apollo (1961-1972); Space Shuttle (1981-2011). Key rocket engine technologies were advanced at the Rocketdyne Canoga plant: gimbaling of rocket engines, introduction of engine injector baffling plates for improved combustion stability, tubular regenerative cooling, "stage and a half" engine configuration first used on Atlas, thrust chamber ignition using pyrophoric chemicals and electrically controlled starting sequences.
Aerojet Rocketdyne moved its office and manufacturing operations to the DeSoto campus in 2014. Demolition and site clearing of the former Rocketdyne facility in Canoga Park commenced in August 2016. As of February 2019, the future land use of the site has not been announced.
McGregor, Texas
Rocketdyne's Solid Propulsion Operations business unit was engaged in the development, testing and production of solid rocket engines at McGregor, Texas for nearly twenty years.
The Rocket Fuels Division of Phillips Petroleum Company began using the former Bluebonnet Ordnance Plant in 1952. In 1958, Phillips and Rocketdyne entered a partnership to form Astrodyne Incorporated. In 1959, Rocketdyne purchased full ownership of the company and renamed it Solid Propulsion Operations (later designated the Solid Rocket Division). The purchase led Rocketdyne to invest in facilities and research at McGregor towards diversification into other propellant types and rocket engines. Notably, Rocketdyne installed a facility capable of testing engines having up to three million pounds of thrust.
The Solid Propulsion Operations initially used ammonium nitrate-based propellants in the manufacture of gas generators used to start aircraft jet engines and the turbopumps of the Rocketdyne H-1 rocket engine, and in the manufacture of Jet Assisted Take Off (JATO) rocket engines. Ullage motors were developed for the Saturn V Space Vehicle. The group also built solid propellant boosters providing for the zero-length launching of North American F-100 Super Sabre and Lockheed F-104 Starfighter aircraft. The motor provided a takeoff thrust of 130,000 lbf for 4 seconds, accelerating the aircraft to 275 miles per hour and 4 g before separating and dropping away from the jet.
In 1959, the group began using ammonium perchlorate oxidizer combined with carboxyl-terminated polybutadiene (CTPB) binder to produce solid propellants marketed under the trade name "Flexadyne." For the next nineteen years, Rocketdyne used the formulation in the production of solid rocket motors for three major missile systems: the AIM-7 Sparrow III, AGM-45 Shrike, and the AIM-54 Phoenix. Rocketdyne transferred operation of the McGregor plant to Hercules Inc. in 1978. A portion of the former Bluebonnet Ordnance Plant is now used by SpaceX as their Rocket Development and Test Facility.
Neosho, Missouri
A rocket engine manufacturing plant was operated by Rocketdyne over a twelve-year period at Neosho, Missouri. The plant was constructed by the U.S. Air Force within a 2,000-acre portion of Fort Crowder, a decommissioned World War II training base. The Rocketdyne division of North American Aviation operated the site, employing approximately 1,250 workers beginning in 1956. The plant primarily produced the MA-5 booster, sustainer and vernier rocket engines, H-1 engines and components for the F-1 and J-2 rocket engines. The P4-1 (a.k.a. LR64) engine was also manufactured for the AQM-37A target drone. The engines and components were evaluated at an on-site test area located approximately one mile from the plant. Rocketdyne closed the plant in 1968. The plant has been used by several different companies for the refurbishment of jet aircraft engines. The citizens of Neosho have placed a commemorative monument dedicated to the men and women of Rocketdyne Neosho "whose tireless efforts and relentless pursuit of quality resulted in the world's finest liquid rocket engines."
Nevada Field Laboratory
Rocketdyne established and operated a 120,000 acre rocket engine test and development facility near Reno, Nevada from 1962 until 1970. The Nevada Field Laboratory had three active open-air test facilities and two administrative areas. The test facilities were used for the Gemini and Apollo space programs, the annular aerospike engine and the early (proposal-stage) development of the Space Shuttle main engine.
Power generation
In addition to its primary business of building rocket engines, Rocketdyne has developed power generation and control systems. These included early nuclear power generation experiments, radioisotope thermoelectric generators (RTG), and solar power equipment, including the main power system for the International Space Station.
In the Boeing sale to Pratt & Whitney, the Power Systems division of Rocketdyne was transferred to Hamilton Sundstrand, another subsidiary of United Technologies Corporation.
List of engines
Some of the engines developed by Rocketdyne are:
Rocketdyne A1 to A6 (LOX/Alcohol) Used on Redstone
Rocketdyne A7 (LOX/Alcohol) Used on Jupiter-C
Rocketdyne 16NS-1,000
Rocketdyne Kiwi Nuclear rocket engine
Rocketdyne M-34
Rocketdyne MA-2
Rocketdyne MA-3
Rocketdyne Megaboom modular sled rocket
Rocketdyne P
Rocketdyne LR64
Rocketdyne LR70
Rocketdyne LR79 family:
XLR83-NA-1 - Navaho G-26
XLR89-1 - Atlas A
LR79-7 - Thor, Delta, Thor-Able, Thor-Agena A, Thor Agena B, Thor Agena D, Thor-Burner
S-3D - Jupiter
XLR89-1 - Atlas A, B, C
XLR71-NA-1 - Navaho II
B-2C - Atlas A
XLR89-5 - Atlas D
S-3 - Juno II, Saturn A-2
MB-3-1 - Delta A, B, C, Thor Ablestar
LR89-5 - Atlas E, F
H-1 - Saturn I/IB
MB-3 Press Mod - Sea Horse
LR89-7 - Atlas LV-3C, Atlas Agena, Atlas Centaur, Atlas F/Agena D, Atlas H, Atlas G, Atlas I
MB-3-3 - H-I
RZ.2 - Europa
H-1c - Saturn IB-A, IB-B
H-1b - Saturn B-1, Saturn A-2, Saturn IB, Saturn IB-C, Saturn IB-CE, Saturn IB-D, Saturn INT-11, Saturn INT-12, Saturn INT-13, Saturn INT-14, Saturn INT-15.
RS-27 - N-I, N-II, Delta 1000, Delta 4000, Delta 5000, Delta 2000, Delta 3000
MB-3-J - N
RS-27A - Delta 6925, Delta 6920-8, Delta 6925-8, Delta 6920-10, Delta 8930
RS-27C - Barbarian MDD, Delta 7925
RS-56-OBA - Atlas II, IIA, IIAS
Rocketdyne LR-101 Vernier engine used by Atlas, Thor and Delta
Rocketdyne LR105 family:
S-4 - Super-Jupiter
XLR105-5 - Atlas Able, Atlas B, Atlas C, Atlas LV-3C, Atlas D, Atlas-Agena, Atlas LV-3B
LR105-3
LR105-5 - Atlas LV-3C, Atlas E, Atlas Agena B, Atlas F, Atlas Agena D, Atlas Centaur D, Atlas SLV-3
LR105-7 - Atlas Agena D, Atlas F/Agena D, Atlas H, Atlas G, Atlas I
RS-56-OSA - Atlas II, IIA, IIAS
Rocketdyne Aeolus
Rocketdyne XRS-2200, linear aerospike engine, tested for X-33
Rocketdyne RS-2200, linear aerospike engine, intended for Venturestar
Rocketdyne E-1 (RP-1/LOX) Backup design for the Titan I
Rocketdyne F-1 (RP-1/LOX) Used by the Saturn V.
Rocketdyne H-1 (RP-1/LOX) Used by the Saturn I and IB
Rocketdyne J-2 (LH2/LOX) Used by both the Saturn V and Saturn IB.
Rocketdyne RS-25 Space Shuttle Main Engine (SSME) (LH2/LOX) The main engine for the Space Shuttle, also used on the Space Launch System
Rocketdyne RS-27A (RP-1/LOX) Used by the Delta II/III and Atlas ICBM
Rocketdyne RS-56 (RP-1/LOX) Used by the Atlas II first stage
Rocketdyne RS-68 (LH2/LOX) Used by the Delta IV first stage
Rocketdyne XLR46-NA-2, intended for the North American NA-247 interceptor proposal
Gallery
See also
Rocketdyne engines
Aerojet Rocketdyne
Pratt & Whitney Rocketdyne
Atomics International Division
Santa Susana Field Laboratory
References
External links
Rocketdyne internet archives (unofficial)
GenCorp, Inc.: Rocketdyne Acquisition presentation
Rocketry
Aerospace companies of the United States
Former defense companies of the United States
Technology companies based in Greater Los Angeles
Manufacturing companies based in Los Angeles
Canoga Park, Los Angeles
Simi Hills
North American Aviation
Boeing mergers and acquisitions
Aerojet Rocketdyne Holdings
United Technologies
American companies established in 1955
Manufacturing companies established in 1955
Technology companies established in 1955
Manufacturing companies disestablished in 2005
Technology companies disestablished in 2005
1955 establishments in California
2005 disestablishments in California
Defunct manufacturing companies based in Greater Los Angeles
History of the San Fernando Valley
1967 mergers and acquisitions
1996 mergers and acquisitions
2005 mergers and acquisitions | Rocketdyne | [
"Engineering"
] | 3,784 | [
"Rocketry",
"Aerospace engineering"
] |
320,997 | https://en.wikipedia.org/wiki/Symmedian | In geometry, symmedians are three particular lines associated with every triangle. They are constructed by taking a median of the triangle (a line connecting a vertex with the midpoint of the opposite side), and reflecting the line over the corresponding angle bisector (the line through the same vertex that divides the angle there in half). The angle formed by the symmedian and the angle bisector has the same measure as the angle between the median and the angle bisector, but it is on the other side of the angle bisector.
The three symmedians meet at a triangle center called the Lemoine point. Ross Honsberger has called its existence "one of the crown jewels of modern geometry".
Isogonality
Many times in geometry, if we take three special lines through the vertices of a triangle, or cevians, then their reflections about the corresponding angle bisectors, called isogonal lines, will also have interesting properties. For instance, if three cevians of a triangle intersect at a point P, then their isogonal lines also intersect at a point, called the isogonal conjugate of P.
The symmedians illustrate this fact.
In the diagram, the medians (in black) intersect at the centroid G.
Because the symmedians (in red) are isogonal to the medians, the symmedians also intersect at a single point, L.
This point is called the triangle's symmedian point, or alternatively the Lemoine point or Grebe point.
The dotted lines are the angle bisectors; the symmedians and medians are symmetric about the angle bisectors (hence the name "symmedian.")
Construction of the symmedian
Let ABC be a triangle. Construct a point D by intersecting the tangents from B and C to the circumcircle. Then AD is the symmedian of triangle ABC.
first proof. Let the reflection of AD across the angle bisector of ∠BAC meet BC at M′. Then a computation of the ratio BM′/M′C, using the equal tangent lengths DB = DC and the law of sines, shows that BM′ = M′C, so that M′ is the midpoint of BC and AD is the reflection of the median AM′.
second proof. Define D′ as the isogonal conjugate of D. It is easy to see that the reflection of BD about the bisector of ∠B is the line through B parallel to AC. The same is true for CD, and so, ABD′C is a parallelogram. AD′ is clearly the median, because a parallelogram's diagonals bisect each other, and AD is its reflection about the bisector of ∠A.
third proof. Let ω be the circle with center D passing through B and C, and let O be the circumcenter of triangle ABC. Say lines AB, AC intersect ω at P, Q, respectively. Since ∠ABC = ∠AQP, triangles ABC and AQP are similar. Since the angles ∠PDB, ∠BDC and ∠CDQ sum to 180° (a short angle chase using the tangent-chord angle ∠DBC = ∠BAC),
we see that PQ is a diameter of ω and hence passes through D. Let M be the midpoint of BC. Since D is the midpoint of PQ, the similarity implies that ∠BAM = ∠QAD, from which the result follows.
fourth proof. Let E be the midpoint of the arc BC. Since BE = CE, AE is the angle bisector of ∠BAC. Let M be the midpoint of BC, and it follows that D is the inverse of M with respect to the circumcircle. From that, we know that the circumcircle is an Apollonian circle with foci M, D. So AE is the bisector of angle ∠MAD, and we have achieved our wanted result.
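The construction can also be verified numerically. The sketch below is our own check with numpy (the triangle coordinates are arbitrary): it computes the intersection D of the tangents at B and C and confirms that A, D and the symmedian point (barycentric a² : b² : c²) are collinear.

```python
import numpy as np

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

# circumcenter O from |O - A|^2 = |O - B|^2 = |O - C|^2
M = 2 * np.array([B - A, C - A])
O = np.linalg.solve(M, np.array([B @ B - A @ A, C @ C - A @ A]))

# the tangent at a point P of the circumcircle has normal O - P;
# intersect the tangents at B and C to get D
T = np.array([O - B, O - C])
D = np.linalg.solve(T, np.array([(O - B) @ B, (O - C) @ C]))

# symmedian point from barycentric coordinates a^2 : b^2 : c^2
w = np.array([a * a, b * b, c * c])
K = (w[0] * A + w[1] * B + w[2] * C) / w.sum()

u, v = K - A, D - A
print(u[0] * v[1] - u[1] * v[0])   # ~0: A, K, D collinear, so AD is a symmedian
```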
Tetrahedra
The concept of a symmedian point extends to (irregular) tetrahedra. Given a tetrahedron ABCD, two planes P and Q through AB are isogonal conjugates if they form equal angles with the planes ABC and ABD. Let M be the midpoint of the side CD. The plane containing the side AB that is isogonal to the plane ABM is called a symmedian plane of the tetrahedron. The symmedian planes can be shown to intersect at a point, the symmedian point. This is also the point that minimizes the squared distance from the faces of the tetrahedron.
References
External links
Symmedian and Antiparallel at cut-the-knot
Symmedian and 2 Antiparallels at cut-the-knot
Symmedian and the Tangents at cut-the-knot
An interactive Java applet for the symmedian point
Straight lines defined for a triangle | Symmedian | [
"Mathematics"
] | 836 | [
"Line (geometry)",
"Straight lines defined for a triangle"
] |
320,998 | https://en.wikipedia.org/wiki/Lemoine%20point | In geometry, the Lemoine point, Grebe point or symmedian point is the intersection of the three symmedians (medians reflected at the associated angle bisectors) of a triangle.
Ross Honsberger called its existence "one of the crown jewels of modern geometry".
In the Encyclopedia of Triangle Centers the symmedian point appears as the sixth point, X(6). For a non-equilateral triangle, it lies in the open orthocentroidal disk punctured at its own center, and could be any point therein.
The symmedian point of a triangle with side lengths a, b and c has homogeneous trilinear coordinates a : b : c.
An algebraic way to find the symmedian point is to express the triangle by three linear equations in two unknowns given by the Hesse normal forms of the corresponding lines. The solution of this overdetermined system found by the least squares method gives the coordinates of the point. It also solves the optimization problem to find the point with a minimal sum of squared distances from the sides.
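A minimal sketch of this computation (ours, using numpy; the triangle is arbitrary): each side line is written in Hesse normal form n·x = d, and the least-squares solution of the resulting 3×2 system minimizes the sum of squared distances to the sides.

```python
import numpy as np

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])

rows, rhs = [], []
for P, Q in ((B, C), (C, A), (A, B)):                 # the three side lines
    t = Q - P
    n = np.array([-t[1], t[0]]) / np.linalg.norm(t)   # unit normal to the side
    rows.append(n)
    rhs.append(n @ P)                                 # Hesse normal form: n.x = n.P

K, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(K)   # the symmedian point; here (14/11, 12/11) ~ (1.2727, 1.0909)
```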
The Gergonne point of a triangle is the same as the symmedian point of the triangle's contact triangle.
The symmedian point of a triangle can be constructed in the following way: let the tangent lines of the circumcircle of triangle ABC through B and C meet at A′, and analogously define B′ and C′; then A′B′C′ is the tangential triangle of triangle ABC, and the lines AA′, BB′ and CC′ intersect at the symmedian point of triangle ABC. It can be shown that these three lines meet at a point using Brianchon's theorem. Line AA′ is a symmedian, as can be seen by drawing the circle with center A′ through B and C.
The French mathematician Émile Lemoine proved the existence of the symmedian point in 1873, and Ernst Wilhelm Grebe published a paper on it in 1847. Simon Antoine Jean L'Huilier had also noted the point in 1809.
For the extension to an irregular tetrahedron see symmedian.
Notes
References
External links
Triangle centers | Lemoine point | [
"Physics",
"Mathematics"
] | 415 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
321,017 | https://en.wikipedia.org/wiki/Allen%20Brain%20Atlas | The Allen Mouse and Human Brain Atlases are projects within the Allen Institute for Brain Science which seek to combine genomics with neuroanatomy by creating gene expression maps for the mouse and human brain. They were initiated in September 2003 with a $100 million donation from Paul G. Allen and the first atlas went public in September 2006.
To date, seven brain atlases have been published: Mouse Brain Atlas, Human Brain Atlas, Developing Mouse Brain Atlas, Developing Human Brain Atlas, Mouse Connectivity Atlas, Non-Human Primate Atlas, and Mouse Spinal Cord Atlas. There are also three related projects with data banks: Glioblastoma, Mouse Diversity, and Sleep. It is the hope of the Allen Institute that their findings will help advance various fields of science, especially those surrounding the understanding of neurobiological diseases. The atlases are free and available for public use online.
History
In 2001, Paul Allen gathered a group of scientists, including James Watson and Steven Pinker, to discuss the future of neuroscience and what could be done to enhance neuroscience research (Jones 2009). During these meetings David Anderson from the California Institute of Technology proposed the idea that a three-dimensional atlas of gene expression in the mouse brain would be of great use to the neuroscience community. The project was set in motion in 2003 with a 100 million dollar donation by Allen through the Allen Institute for Brain Science.
The project used a technique for mapping gene expression developed by Gregor Eichele and colleagues at the Max Planck Institute for Biophysical Chemistry in Goettingen, Germany. The technique uses colorimetric in situ hybridization to map gene expression. The project set a 3-year goal of finishing the project and making it available to the public.
An initial release of the first atlas, the mouse brain atlas, occurred in December 2004. Subsequently, more data for this atlas was released in stages. The final genome-wide data set was released in September 2006. However, the final release of the atlas was not the end of the project; the Atlas is still being improved upon. Also, other projects including the human brain atlas, developing mouse brain, developing human brain, mouse connectivity, non-human primate atlas, and the mouse spinal cord atlas are being developed through the Allen Institute for Brain Science in conjunction with the Allen Mouse Brain Atlas.
Goals for the project
The overarching goal and motto for all Allen Institute projects is "fueling discovery". The project strives to fulfill this goal and advance science in a few ways. First, they create brain atlases to better understand the connections between genes and brain functioning. They aim to advance the research and knowledge about neurobiological conditions such as Parkinson's, Alzheimer's, and Autism with their mapping of gene expression throughout the brain.
The Brain Atlas projects also follow the "Allen Institute" motto with their open release of data and findings. This policy is also related to another goal of the Institute: collaborative and multidisciplinary research. Thus, any scientist from any discipline is able to look at the findings and take them into account while designing their own experiments. Also available to the public is the Brain Explorer application.
Research techniques
The Allen Institute for Brain Science uses a project-based philosophy for their research. Each brain atlas focuses on its own project, made up of its own team of researchers. To complete an atlas, each research team collects and synthesizes brain scans, medical data, genetic information and psychological data. With this information, they are able to construct the 3-D biochemical architecture of the brain and figure out which proteins are expressed in certain parts of the brain. To gather the needed data, scientists at the Allen Institute use various techniques. One technique involves the use of postmortem brains and brain scanning technology to discover where in the brain genes are turned on and off. Another technique, called in situ hybridization, or ISH, is used to view gene expression patterns as in situ hybridization images.
Within the Brain Atlases, these 3-D ISH digital images and graphs reveal, in color, the regions where a given gene is expressed. In the Brain Explorer, any gene can be searched for and selected, resulting in the in situ image appearing in an easily manipulated and explored fashion. Part of the creation of this anatomy-centred database of gene expression includes aligning ISH data for each gene with a three-dimensional coordinate space through registration with a reference atlas created for the project.
Contributions to neuroscience
The different types of cells in the central nervous system originate from varying gene expression. A map of gene expression in the brain allows researchers to correlate forms and functions. The Allen Brain Atlas lets researchers view the areas of differing expression in the brain which enables the viewing of neural connections throughout the brain. Viewing these pathways through differing gene expression as well as functional imaging techniques permits researchers to correlate between gene expression, cell types, and pathway function in relation to behaviors or phenotypes.
Even though the majority of research has been done in mice, 90% of genes in mice have a counterpart in humans. This makes the Atlas particularly useful for modeling neurological diseases. The gene expression patterns in normal individuals provide a standard for comparing and understanding altered phenotypes. Extending information learned from mouse diseases will help improve the understanding of human neurological disorders. The atlas can show which genes and particular areas are affected in neurological disorders; the action of a gene in a disease can be evaluated in conjunction with general expression patterns and this data could shed light on the role of the particular gene in the disorder.
Brain explorer
The Allen Brain Atlas website contains a downloadable 3-D interactive Brain explorer. The explorer is essentially a search engine for locations of gene expression; this is particularly useful in finding regions that express similar genes. Users can delineate networks and pathways using this application by connecting regions that co-express a certain gene. The explorer uses a multicolor scale and contains multiple planes of the brain that let viewers see differences in density and expression level. The images are a composite of many averaged samples so it is useful when comparing to individuals with abnormally low gene expression.
Atlases
Mouse Brain
The Allen Mouse Brain Atlas is a comprehensive genome-wide map of the adult mouse brain that reveals where each gene is expressed. The mouse brain atlas was the original project of the Allen Brain Atlas and was finished in 2006. The purpose of the atlas is to aid in the development of neuroscience research. The hope of the project is that it will allow scientists to gain a better understanding of brain diseases and disorders such as autism and depression.
Human Brain
The Allen Human Brain Atlas was made public in May 2010. It was the first anatomically and genomically comprehensive three-dimensional human brain map. The atlas was created to enhance research in many neuroscience research fields including neuropharmacology, human brain imaging, human genetics, neuroanatomy, genomics and more. The atlas is also geared toward furthering research into mental health disorders and brain injuries such as Alzheimer's disease, autism, schizophrenia and drug addiction.
Developing Mouse Brain
The Allen Developing Mouse Brain Atlas is an atlas which tracks gene expression throughout the development of a C57BL/6 mouse brain. The project began in 2008 and is currently ongoing. The atlas is based on magnetic resonance imaging (MRI). It traces the growth, white matter, connectivity, and development of the C57BL/6 mouse brain from embryonic day 12 to postnatal day 80.
This atlas enhances the ability of neuroscientists to study how pollutants and genetic mutations affect the development of the brain. Thus, the atlas may be used to determine what toxins pose special threats to children and pregnant mothers.
Mouse Brain Connectivity
The Allen Mouse Brain Connectivity Atlas was launched in November 2011. Unlike other atlases from the Allen Institute, this atlas focuses on the identification of neural circuitry that govern behavior and brain function. This neural circuitry is responsible for functions like behavior and perception. This map will allow scientists to further understand how the brain works and what causes brain diseases and disorders, such as Parkinson's disease and depression.
Mouse Spinal Cord
Unveiled in July 2008, the Allen Mouse Spinal Cord Atlas was the first genome-wide map of the mouse spinal cord ever constructed. The spinal cord atlas is a map of genome wide gene expression in the spinal cord of adult and juvenile C57 black mice. The initial unveiling included data for 2,000 genes and an anatomical reference section. A plan for the future includes expanding the amount of data to about 20,000 genes spanning the full length of the spinal cord.
The aim of the spinal cord atlas is to enhance research in the treatment of spinal cord injury, diseases, and disorders such as Lou Gehrig's diseases and spinal muscular atrophy. The project was funded by an array of donors including the Allen Research Institute, Paralyzed Veterans of America Research Foundation, the ALS Association, Wyeth Research, PEMCO Insurance, National Multiple Sclerosis Society, International Spinal Research Trust, and many other organizations, foundations, corporate and private donors.
See also
List of neuroscience databases
EMAGE, the e-Mouse Atlas of Gene Expression
References
Pawel K. Olszewski, "Analysis of the network of feeding neuroregulators using the Allen Brain Atlas", Neuroscience of Behavior, 1 January 2009.
Robert Lee Hotz, "Probing the Brain's Mysteries" The Wall Street Journal, 24 January 2012.
Allan Jones, "The Allen Brain Atlas: 5 years and beyond", Nature, 2009.
External links
Genomics
Neuroscience projects
Biological databases
Open science | Allen Brain Atlas | [
"Biology"
] | 1,944 | [
"Bioinformatics",
"Biological databases"
] |
321,157 | https://en.wikipedia.org/wiki/Model%20checking | In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash).
In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure.
Overview
Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.
An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke, E. A. Emerson, by J. P. Queille, and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.
Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets.
The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite-state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.
Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula φ, and a structure M with initial state s, decide if M, s ⊨ φ. If M is finite, as it is in hardware, model checking reduces to a graph search.
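For a finite structure and a simple safety property, the graph search can be made concrete. The toy checker below is our own illustration (the state machine is hypothetical): it explores the reachable states breadth-first and reports whether a state labelled with the bad proposition can be reached.

```python
from collections import deque

transitions = {                       # a hypothetical 4-state machine
    'idle':  ['busy'],
    'busy':  ['idle', 'wait'],
    'wait':  ['busy', 'crash'],
    'crash': [],
}
labels = {'crash': {'crash'}}         # atomic propositions holding in each state

def check_safety(initial, bad_prop):
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if bad_prop in labels.get(s, set()):
            return False              # a counterexample state is reachable
        for t in transitions[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

print(check_safety('idle', 'crash'))  # False: 'crash' is reachable from 'idle'
```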
Symbolic model checking
Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDD) or other related data structures, the model-checking method is symbolic.
Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.
Example
One example of such a system requirement:
Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:
Here, □ should be read as "always", ◊ as "eventually", U as "until" and the other symbols are standard logical symbols: ∨ for "or", ∧ for "and" and ¬ for "not".
Techniques
Model-checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem.
Symbolic algorithms avoid ever explicitly constructing the graph for the FSM; instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan, as well as of Olivier Coudert and Jean-Christophe Madre, and the development of open-source BDD manipulation libraries such as CUDD and BuDDy.
Bounded model-checking algorithms unroll the FSM for a fixed number of steps, , and check whether a property violation can occur in or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of until all possible violations have been ruled out (cf. Iterative deepening depth-first search).
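The unrolling itself is easy to illustrate without a SAT solver. The sketch below is ours (real bounded model checkers encode the same search as a propositional formula): it enumerates all paths of length at most k through the toy machine from the previous sketch and reports the depth of the first violation, if any.

```python
def bmc(transitions, labels, initial, bad_prop, k):
    frontier = [initial]
    for depth in range(k + 1):
        if any(bad_prop in labels.get(s, set()) for s in frontier):
            return depth              # violation found within `depth` steps
        frontier = [t for s in frontier for t in transitions[s]]
    return None                       # no violation in k or fewer steps

print(bmc(transitions, labels, 'idle', 'crash', k=2))  # None: bound too small
print(bmc(transitions, labels, 'idle', 'crash', k=3))  # 3: counterexample at depth 3
```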
Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables and to only consider Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion.
Counterexample-guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it. When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.
Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems.
First-order logic
Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered:
Given a finite interpretation, for instance, one described as a relational database, decide whether the interpretation is a model of the formula.
This problem is in the circuit class AC0. It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables.
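As a concrete instance (our own toy example), the fixed sentence ∀x ∃y E(x, y) — "every element has an outgoing edge" — can be checked directly against a finite relation:

```python
domain = {1, 2, 3}
E = {(1, 2), (2, 3), (3, 1)}          # a hypothetical edge relation

# evaluate "for all x there exists y with E(x, y)" over the domain
holds = all(any((x, y) in E for y in domain) for x in domain)
print(holds)                           # True: this interpretation is a model
```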
Tools
Here is a list of significant model-checking tools:
Afra: a model checker for Rebeca which is an actor-based language for modeling concurrent and reactive systems
Alloy (Alloy Analyzer)
BLAST (Berkeley Lazy Abstraction Software Verification Tool)
CADP (Construction and Analysis of Distributed Processes) a toolbox for the design of communication protocols and distributed systems
CPAchecker: an open-source software model checker for C programs, based on the CPA framework
ECLAIR: a platform for the automatic analysis, verification, testing, and transformation of C and C++ programs
FDR2: a model checker for verifying real-time systems modelled and specified as CSP Processes
FizzBee: an easier to use alternative to TLA+, that uses Python-like specification language, that has both behavioral modeling like TLA+ and probabilistic modeling like PRISM
ISP code level verifier for MPI programs
Java Pathfinder: an open-source model checker for Java programs
Libdmc: a framework for distributed model checking
mCRL2 Toolset, Boost Software License, Based on ACP
NuSMV: a new symbolic model checker
PAT: an enhanced simulator, model checker and refinement checker for concurrent and real-time systems
Prism: a probabilistic symbolic model checker
Roméo: an integrated tool environment for modelling, simulation, and verification of real-time systems modelled as parametric, time, and stopwatch Petri nets
SPIN: a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion
Storm: A model checker for probabilistic systems.
TAPAs: a tool for the analysis of process algebra
TAPAAL: an integrated tool environment for modelling, validation, and verification of Timed-Arc Petri Nets
TLA+ model checker by Leslie Lamport
UPPAAL: an integrated tool environment for modelling, validation, and verification of real-time systems modelled as networks of timed automata
Zing – experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system. Zing is currently being used for developing drivers for Windows.
See also
References
Further reading
J. A. Bergstra, A. Ponse and S. A. Smolka, editors.
(this is also a very good introduction and overview of model checking) | Model checking | [
"Mathematics"
] | 2,144 | [
"Mathematical logic",
"Logic in computer science"
] |
321,181 | https://en.wikipedia.org/wiki/Couch%20potato | A couch potato is a person who spends most of his or her free time sitting or lying on a couch. This stereotype often refers to a lazy and overweight person who watches a lot of television. Generally speaking, the term refers to a lifestyle in which children or adults don't get enough physical activity.
History
The actual term "couch potato" was first coined in 1976 by Tom Iacino, a friend of American underground comics artist Robert Armstrong. In the early 1980s, Armstrong registered the term as a trademark with the U.S. government; he also co-authored a book with Jack Mingo, called The Official Couch Potato Handbook, which delves into the lives of couch potatoes.
The term eventually entered common American vocabulary, generally defining one who unceasingly watches television. The phrase was entered into the Oxford English Dictionary in 1993.
Health
Some studies have said that the "couch potato lifestyle" is a serious health hazard to its practitioners; in the United Kingdom, a plan of the Prime Minister's Strategy Unit made attempts "to combat the couch potato culture" in order to improve the U.K.'s international sporting performance.
Studies presented at the 2003 meeting of the American College of Sports Medicine suggested that there could be a genetic basis for the "couch potato lifestyle".
Research suggests that being a couch potato could make a person a decade older biologically than someone who is physically active.
Popular culture
Various activities have been designed for the couch potato, including a type of investment portfolio ("Couch Potato Portfolio") and fantasy football leagues.
Greyhound dogs, which are well known for their sprinting ability but otherwise require little exercise, are sometimes called "forty-five mile per hour couch potatoes" by adoption and rescue agencies.
Music artist "Weird Al" Yankovic's song "Couch Potato" (a parody of "Lose Yourself" by Eminem) describes him watching hours upon hours of television, "until [his] legs are numb, [his] eyes bloodshot."
The phrase has coined the spin-off mouse potato (or sometimes computer potato), meaning one who spends too much time in front of a computer.
Couch Potatoes was the name of a game show hosted by Double Dare host Marc Summers.
Couch Potato was a Sunday morning kids TV show aired on the ABC in Australia in the 1990s.
References
External links
CouchPotato – A personalized tv-show guide for couch potatoes
On Mirror Neurons or Why It Is Okay to be a Couch Potato
Stereotypes
Human behavior | Couch potato | [
"Biology"
] | 509 | [
"Behavior",
"Human behavior"
] |
321,363 | https://en.wikipedia.org/wiki/Fiasco%20%28novel%29 | Fiasco () is a science fiction novel by Polish author Stanisław Lem, first published in a German translation in 1986. The book, published in Poland the following year and translated into English by Michael Kandel in the same year, is a further elaboration of Lem's skepticism: in Lem's opinion, the difficulty in communication with extraterrestrial intelligence (the main theme of the novel) is more likely cultural disparity rather than spatial distance. It was nominated for the Arthur C. Clarke Award.
The novel was written on order from publisher S. Fischer Verlag around the time Lem was emigrating from Poland due to the introduction of martial law. Lem stated that this was the only occasion he wrote something upon publisher's request, accepting an advance for a nonexistent novel.
Plot summary
At a base on Saturn's moon Titan, a young spaceship pilot Parvis sets out in a strider (a mecha-like machine) to find several missing people, among them Pirx (the spaceman appearing in Lem's Tales of Pirx the Pilot). Parvis ventures to the dangerous geyser region, where the others were lost. Unfortunately, he suffers an accident. Seeing no way to get out of the machine and return to safety, he triggers a built-in cryogenic device.
An expedition is sent to a distant star in order to make first contact with a civilization that may have been detected. It is set more than a century after the prologue, when a starship is built in Titan's orbit. This future society is described as globally unified and peaceful with high regard for success. During starship preparations, the geyser region is cleared, and the frozen bodies are discovered. They are exhumed and taken aboard, to be awakened, if possible, during the voyage. However, only one of them can be revived (or more precisely, pieced together from the organs of several of them) with a high likelihood of success. The identity of the man is unclear; it has been narrowed to two men (whose last names begin with 'P'). It is never revealed whether he is in fact Pirx or Parvis (and he seems to have amnesia). In his new life, he adopts the name Tempe.
The explorer spaceship Eurydika (Eurydice) first travels to a black hole near Beta Harpiae to perform maneuvers to minimize the effects of time dilation. Before closing on the event horizon, the Eurydice launches the Hermes, a smaller explorer ship, which continues to Beta Harpiae.
Approaching the planet Quinta, which exhibits signs of harboring intelligent life, the crew of the Hermes attempts to establish contact with the denizens of the planet, who, contrary to the expectations of the mission's crewmen, are strangely unwilling to communicate. The crew reaches the conclusion that there is a Cold War-like state on the planet's surface and throughout the planetary system, halting the locals' industrial development.
The crew of the Hermes assumes that the Quintan civilisation is inevitably doomed to collapse in mutually assured destruction. They try to force the aliens into contact by means of an event that the aliens' governments cannot hide: staging the implosion of their moon. Surprisingly, just before impact, several of the deployed rockets are destroyed by Quintan missiles, undermining the symmetry of the implosion and causing fragments of the moon to be thrown clear, some impacting the planet's surface.
However, even this cataclysm does not drive the locals to engage with their alien visitors, so the crewmen deploy a device working as a giant lens or laser, capable of displaying images (but also concentrating beams to the point of being a powerful weapon). Following a suggestion by Tempe, they show the Quintans a "fairy tale" by projecting a cartoon onto Quinta's clouds. At last, the Quintans contact the Hermes and make arrangements for a meeting. The humans do not trust the Quintans, so to gauge the Quintans' intentions, they send a smaller replica of the Hermes which is destroyed shortly before landing. The humans retaliate by firing their laser on the ice ring around the planet, shattering it and sending chunks falling on the planet.
Finally, the Quintans are forced to receive an 'ambassador', who is Tempe; the Quintans are warned that the projecting device will be used to destroy the planet if the man should fail to report back his continued safety. After landing, Tempe discovers that there is no trace of anyone at the landing site. After investigating a peculiar structure nearby, he finds a strange-looking mound, which he opens with a small shovel. To his horror, he notices that in his distracted state he has allowed the allotted time to expire without signaling his crewmates. As the planet is engulfed by fiery destruction at the hands of those who were sent to establish contact with its denizens, Tempe finally realizes what the Quintans are. However, he has no time to share his discovery with the others.
Interpretation
The book is the fifth in Lem's series of pessimistic first contact scenarios, after Eden, Solaris, The Invincible, and His Master's Voice.
According to critic Paul Delany:
References
External links
1986 novels
Novels by Stanisław Lem
Novels set on Titan (moon)
1986 science fiction novels
Fiction about black holes
Novels about extraterrestrial life
Polish novels
Polish science fiction novels
S. Fischer Verlag books
Pirx
Science fiction about first contact | Fiasco (novel) | [
"Physics"
] | 1,159 | [
"Black holes",
"Unsolved problems in physics",
"Fiction about black holes"
] |
321,365 | https://en.wikipedia.org/wiki/Railway%20track | A railway track (UIC terminology) or railroad track, also known as permanent way or "P Way" (Indian English), is the structure on a railway or railroad consisting of the rails, fasteners, sleepers (railroad ties in American English) and ballast (or slab track), plus the underlying subgrade. It enables trains to move by providing a dependable, low-friction surface on which their wheels can roll. Early tracks were constructed with wooden or cast-iron rails, and wooden or stone sleepers. Since the 1870s, rails have almost universally been made from steel.
Historical development
The first railway in Britain was the Wollaton wagonway, built in 1603 between Wollaton and Strelley in Nottinghamshire. It used wooden rails and was the first of about 50 wooden-railed tramways built over the subsequent 164 years. These early wooden tramways typically used rails of oak or beech, attached to wooden sleepers with iron or wooden nails. Gravel or small stones were packed around the sleepers to hold them in place and provide a walkway for the people or horses that moved wagons along the track. The rails were usually about long and were not joined; instead, adjacent rails were laid on a common sleeper. The straight rails could be angled at these joints to form primitive curved track.
The first iron rails laid in Britain were at the Darby Ironworks in Coalbrookdale in 1767.
When steam locomotives were introduced, starting in 1804, the track then in use proved too weak to carry the additional weight. Richard Trevithick's pioneering locomotive at Pen-y-darren broke the plateway track and had to be withdrawn. As locomotives became more widespread in the 1810s and 1820s, engineers built rigid track formations, with iron rails mounted on stone sleepers, and cast-iron chairs holding them in place. This proved to be a mistake, and was soon replaced with flexible track structures that allowed a degree of elastic movement as trains passed over them.
Structure
Traditional track structure
Traditionally, tracks are constructed using flat-bottomed steel rails laid on and spiked or screwed into timber or pre-stressed concrete sleepers (known as ties in North America), with crushed stone ballast placed beneath and around the sleepers.
Most modern railroads with heavy traffic use continuously welded rails that are attached to the sleepers with base plates that spread the load. When concrete sleepers are used, a plastic or rubber pad is usually placed between the rail and the tie plate. Rail is usually attached to the sleeper with resilient fastenings, although cut spikes are widely used in North America. For much of the 20th century, rail track used softwood timber sleepers and jointed rails, and a considerable amount of this track remains on secondary and tertiary routes.
In North America and Australia, flat-bottomed rails were typically fastened to the sleepers with dog spikes through a flat tie plate. In Britain and Ireland, bullhead rails were carried in cast-iron chairs which were spiked to the sleepers. In 1936, the London, Midland and Scottish Railway pioneered the conversion to flat-bottomed rail in Britain, though earlier lines had made some use of it.
Jointed rails were used at first because contemporary technology did not offer any alternative. However, the intrinsic weakness in resisting vertical loading results in the ballast becoming depressed and a heavy maintenance workload is imposed to prevent unacceptable geometrical defects at the joints. The joints also needed to be lubricated, and wear at the fishplate (joint bar) mating surfaces needed to be rectified by shimming. For this reason jointed track is not financially appropriate for heavily operated railroads.
Timber sleepers are of many available timbers, and are often treated with creosote, chromated copper arsenate, or other wood preservatives. Pre-stressed concrete sleepers are often used where timber is scarce and where tonnage or speeds are high. Steel is used in some applications.
Track ballast is usually stone crushed to particular specifications. Its purpose is to support the sleepers and allow some adjustment of their position while allowing free drainage.
Ballastless track
A disadvantage of traditional track structures is the heavy demand for maintenance, particularly surfacing (tamping) and lining to restore the desired track geometry and smoothness of vehicle running. Weakness of the subgrade and drainage deficiencies also lead to heavy maintenance costs. This can be overcome by using ballastless track. In its simplest form this consists of a continuous slab of concrete (like a highway structure) with the rails supported directly on its upper surface (using a resilient pad).
There are a number of proprietary systems; variations include a continuous reinforced concrete slab and the use of pre-cast pre-stressed concrete units laid on a base layer. Many permutations of design have been put forward.
However, ballastless track has a high initial cost, and in the case of existing railroads the upgrade to such requires closure of the route for a long period. Its whole-life cost can be lower because of the reduction in maintenance. Ballastless track is usually considered for new very high speed or very high loading routes, in short extensions that require additional strength (e.g. railway stations), or for localised replacement where there are exceptional maintenance difficulties, for example in tunnels. Most rapid transit lines and rubber-tyred metro systems use ballastless track.
Continuous longitudinally supported track
Early railways (c. 1840s) experimented with continuous bearing railtrack, in which the rail was supported along its length, with examples including Brunel's baulk road on the Great Western Railway, as well as use on the Newcastle and North Shields Railway, on the Lancashire and Yorkshire Railway to a design by John Hawkshaw, and elsewhere. Continuous-bearing designs were also promoted by other engineers. The system was tested on the Baltimore and Ohio railway in the 1840s, but was found to be more expensive to maintain than rail with cross sleepers.
This type of track still exists on some bridges on Network Rail where the timber baulks are called waybeams or longitudinal timbers. Generally the speed over such structures is low.
Later applications of continuously supported track include Balfour Beatty's 'embedded slab track', which uses a rounded rectangular rail profile (BB14072) embedded in a slipformed (or pre-cast) concrete base (development 2000s). The 'embedded rail structure', used in the Netherlands since 1976, initially used a conventional UIC 54 rail embedded in concrete, and later developed (late 1990s) to use a 'mushroom' shaped SA42 rail profile; a version for light rail using a rail supported in an asphalt concrete–filled steel trough has also been developed (2002).
Modern ladder track can be considered a development of baulk road. Ladder track utilizes sleepers aligned along the same direction as the rails with rung-like gauge restraining cross members. Both ballasted and ballastless types exist.
Rail
Modern track typically uses hot-rolled steel with a profile of an asymmetrical rounded I-beam. Unlike some other uses of iron and steel, railway rails are subject to very high stresses and have to be made of very high-quality steel alloy. It took many decades to improve the quality of the materials, including the change from iron to steel. The stronger the rails and the rest of the trackwork, the heavier and faster the trains the track can carry.
Other profiles of rail include: bullhead rail; grooved rail; flat-bottomed rail (Vignoles rail or flanged T-rail); bridge rail (inverted U–shaped used in baulk road); and Barlow rail (inverted V).
North American railroads until the mid- to late-20th century used rails long so they could be carried in gondola cars (open wagons), often long; as gondola sizes increased, so did rail lengths.
According to the Railway Gazette International the planned-but-cancelled 150-kilometre rail line for the Baffinland Iron Mine, on Baffin Island, would have used older carbon steel alloys for its rails, instead of more modern, higher performance alloys, because modern alloy rails can become brittle at very low temperatures.
Iron-topped wooden rails
Early North American railroads used iron on top of wooden rails as an economy measure but gave up this method of construction after the iron came loose, began to curl, and intruded into the floors of the coaches, leading early railroaders to refer to them as "snake heads".
The Deeside Tramway in North Wales used this form of rail. It opened around 1870 and closed in 1947, with long sections still using these rails. It was one of the last uses of iron-topped wooden rails.
Rail classification (weight)
Rail is graded by its linear density, that is, its mass over a standard length. Heavier rail can support greater axle loads and higher train speeds without sustaining damage than lighter rail, but at a greater cost. In North America and the United Kingdom, rail is graded in pounds per yard (usually shown as pound or lb), so 130-pound rail would weigh . The usual range is . In Europe, rail is graded in kilograms per metre and the usual range is . The heaviest mass-produced rail was , rolled for the Pennsylvania Railroad.
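Because the specific figures in this passage were lost in transcription, a short conversion sketch may help relate the two grading systems. It is a minimal example assuming only the exact definitions of the pound and the yard; the 130 lb/yd and 60 kg/m sample values are illustrative.

```python
# Convert rail linear density between pounds per yard and kilograms per metre.
# 1 lb = 0.45359237 kg and 1 yd = 0.9144 m, so 1 lb/yd is about 0.496 kg/m.

LB_PER_YD_TO_KG_PER_M = 0.45359237 / 0.9144  # ~0.49605

def lb_yd_to_kg_m(lb_per_yd: float) -> float:
    """Rail grade in pounds per yard -> kilograms per metre."""
    return lb_per_yd * LB_PER_YD_TO_KG_PER_M

def kg_m_to_lb_yd(kg_per_m: float) -> float:
    """Rail grade in kilograms per metre -> pounds per yard."""
    return kg_per_m / LB_PER_YD_TO_KG_PER_M

if __name__ == "__main__":
    print(f"130 lb/yd rail = {lb_yd_to_kg_m(130):.1f} kg/m")  # ~64.5 kg/m
    print(f"60 kg/m rail = {kg_m_to_lb_yd(60):.1f} lb/yd")    # ~121.0 lb/yd
```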
Rail lengths
The rails used in rail transport are produced in sections of fixed length. Rail lengths are made as long as possible, as the joints between rails are a source of weakness. Throughout the history of rail production, lengths have increased as manufacturing processes have improved.
Timeline
The following are lengths of single sections produced by steel mills, without any thermite welding. Shorter rails may be welded with flashbutt welding, but the following rail lengths are unwelded.
(1767) Richard Reynolds laid the first iron rails at Coalbrookdale.
(1825) Stockton and Darlington Railway
(1830) Liverpool and Manchester Railway. Fish-belly rails at , laid mostly on stone blocks
(1831) long and weighing , reached Philadelphia; the first use of the flanged T-rail in the United States
(1880) United States to suit gondola wagons
(1928) London, Midland and Scottish Railway
(1950) British Rail
(1900) – steel works weighing machine for rails (steelyard balance)
(1940s) – double 39 ft
(1953) Australia
Welding of rails into longer lengths was first introduced around 1893, making train rides quieter and safer. With the introduction of thermite welding after 1899, the process became less labour-intensive, and ubiquitous.
(1895) Hans Goldschmidt developed exothermic welding
(1899) the Essen Tramway became the first railway to use thermite welding, which also suited track circuits
(1904) George Pellissier welded the Holyoke Street Railway, first to use the process in the Americas
(1935) Charles Cadwell developed non-ferrous exothermic welding
(1950) welded – (4 x )
Modern production techniques allowed the production of longer unwelded segments.
(2007) Corus (now British Steel (2016–present))
(2011) Tata Steel Europe
(2011) Voestalpine,
(2011) Jindal
Multiples
Newer longer rails tend to be made as simple multiples of older shorter rails, so that old rails can be replaced without cutting. Some cutting is still required, however, since the rail on the outside of a sharp curve must be slightly longer than the rail on the inside.
Boltholes
Rails can be supplied pre-drilled with boltholes for fishplates or without where they will be welded into place. There are usually two or three boltholes at each end.
Joining rails
Rails are produced in fixed lengths and need to be joined end-to-end to make a continuous surface on which trains may run. The traditional method of joining the rails is to bolt them together using metal fishplates (jointbars in the US), producing jointed track. For more modern usage, particularly where higher speeds are required, the lengths of rail may be welded together to form continuous welded rail (CWR).
Jointed track
Jointed track is made using lengths of rail, usually about long (in the UK) and long (in North America), bolted together using perforated steel plates known as fishplates (UK) or joint bars (North America).
Fishplates are usually long, used in pairs either side of the rail ends and bolted together (usually four, but sometimes six bolts per joint). The bolts have alternating orientations so that in the event of a derailment and a wheel flange striking the joint, only some of the bolts will be sheared, reducing the likelihood of the rails misaligning with each other and worsening the derailment. This technique is not applied universally; European practice is to have all the bolt heads on the same side of the rail.
Small gaps which function as expansion joints are deliberately left between the rail ends to allow for expansion of the rails in hot weather. European practice was to have the rail joints on both rails adjacent to each other; North American practice is to stagger them. Because of these small gaps, when trains pass over jointed tracks they make a "clickety-clack" sound, and in time the rail ends are deflected downwards. Unless it is well-maintained, jointed track does not have the ride quality of welded rail and is not suitable for high speed trains. However, jointed track is still used in many countries on lower-speed lines and sidings, and is used extensively in poorer countries due to the lower construction cost and the simpler equipment required for its installation and maintenance.
A major problem of jointed track is cracking around the bolt holes, which can lead to breaking of the rail head (the running surface). This was the cause of the Hither Green rail crash which caused British Railways to begin converting much of its track to continuous welded rail.
Insulated joints
Where track circuits exist for signalling purposes, insulated block joints are required. These compound the weaknesses of ordinary joints. Specially-made glued joints, where all the gaps are filled with epoxy resin, increase the strength again.
As an alternative to the insulated joint, audio frequency track circuits can be employed using a tuned loop formed in approximately of the rail as part of the blocking circuit. Some insulated joints are unavoidable within turnouts.
Another alternative is an axle counter, which can reduce the number of track circuits and thus the number of insulated rail joints required.
Continuous welded rail
Most modern railways use continuous welded rail, sometimes referred to as ribbon rails or seamless rails. In this form of track, the rails are welded together by utilising flash butt welding to form one continuous rail that may be several kilometres long. Because there are few joints, this form of track is very strong, gives a smooth ride, and needs less maintenance; trains can travel on it at higher speeds and with less friction. Welded rails are more expensive to lay than jointed tracks, but have much lower maintenance costs. The first welded track was used in Germany in 1924, and welded rail has become common on main lines since the 1950s.
The preferred process of flash butt welding involves an automated track-laying machine running a strong electric current through the touching ends of two unjoined rails. The ends become white hot due to electrical resistance and are then pressed together forming a strong weld. Thermite welding is used to repair or splice together existing continuous welded rail segments. This manual process requires a reaction crucible and form to contain the molten iron.
North American practice is to weld segments of rail at a rail facility and load it on a special train to carry it to the job site. This train is designed to carry many segments of rail which are placed so they can slide off their racks to the rear of the train and be attached to the ties (sleepers) in a continuous operation.
If not restrained, rails would lengthen in hot weather and shrink in cold weather. To provide this restraint, the rail is prevented from moving in relation to the sleeper by use of clips or anchors. Attention needs to be paid to compacting the ballast effectively, including under, between, and at the ends of the sleepers, to prevent the sleepers from moving. Anchors are more common for wooden sleepers, whereas most concrete or steel sleepers are fastened to the rail by special clips that resist longitudinal movement of the rail. There is no theoretical limit to how long a welded rail can be. However, if longitudinal and lateral restraint are insufficient, the track could become distorted in hot weather and cause a derailment. Distortion due to heat expansion is known in North America as sun kink, and elsewhere as buckling. In extremely hot weather, special inspections are required to monitor sections of track known to be problematic. In North American practice, extreme temperature conditions will trigger slow orders to allow for crews to react to buckling or "sun kinks" if encountered. The German railway company Deutsche Bahn is starting to paint rails white to lower the peak temperatures reached on summer days.
After new segments of rail are laid, or defective rails replaced (welded-in), the rails can be artificially stressed if the temperature of the rail during laying is cooler than what is desired. The stressing process involves either heating the rails, causing them to expand, or stretching the rails with hydraulic equipment. They are then fastened (clipped) to the sleepers in their expanded form. This process ensures that the rail will not expand much further in subsequent hot weather. In cold weather the rails try to contract, but because they are firmly fastened, cannot do so. In effect, stressed rails are a bit like a piece of stretched elastic firmly fastened down. In extremely cold weather, rails are heated to prevent "pull aparts".
Continuous welded rails, complete with fastenings, are laid at a temperature known as "rail neutral temperature" that is approximately midway between the extremes experienced at that location. This installation procedure is intended to prevent tracks from buckling in summer heat or pulling apart in the winter cold. In North America, because broken rails are typically detected by interruption of the current in the signaling system, they are seen as less of a potential hazard than undetected heat kinks.
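As a rough illustration of why the neutral temperature matters, the following sketch computes both the free thermal expansion of an unrestrained jointed rail and the longitudinal stress locked into fully restrained welded rail. The material constants and temperatures are assumed, representative values for rail steel, not figures from this article.

```python
# Thermal behaviour of rail steel: free expansion (jointed track) versus
# constrained thermal stress (continuous welded rail).
# Representative constants for rail steel (assumed, not from the article):
ALPHA = 1.2e-5   # coefficient of linear expansion, per degree C
E = 200e9        # Young's modulus, Pa

def free_expansion_mm(length_m: float, delta_t: float) -> float:
    """Length change of an unrestrained rail, in millimetres."""
    return ALPHA * length_m * delta_t * 1000.0

def thermal_stress_mpa(rail_temp: float, neutral_temp: float) -> float:
    """Longitudinal stress in fully restrained CWR, in MPa.
    Positive = compression (hotter than neutral), negative = tension."""
    return E * ALPHA * (rail_temp - neutral_temp) / 1e6

if __name__ == "__main__":
    # An 18 m jointed rail warming by 30 C wants to grow about 6.5 mm,
    # which is why small expansion gaps are left at the joints.
    print(f"{free_expansion_mm(18, 30):.1f} mm of free expansion")
    # CWR clipped at a neutral temperature of 25 C and heated to 45 C carries
    # roughly 48 MPa of compression, the driver of "sun kink" buckling.
    print(f"{thermal_stress_mpa(45, 25):.0f} MPa of compression")
```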
Joints are used in the continuous welded rail when necessary, usually for signal circuit gaps. Instead of a joint that passes straight across the rail, the two rail ends are sometimes cut at an angle to give a smoother transition. In extreme cases, such as at the end of long bridges, a breather switch (referred to in North America and Britain as an expansion joint) gives a smooth path for the wheels while allowing the end of one rail to expand relative to the next rail.
Sleepers
A sleeper (tie or crosstie) is a rectangular object on which the rails are supported and fixed. The sleeper has two main roles: to transfer the loads from the rails to the track ballast and the ground underneath, and to hold the rails to the correct width apart (to maintain the rail gauge). They are generally laid transversely to the rails.
Fixing rails to sleepers
Various methods exist for fixing the rail to the sleeper. Historically, rails were spiked directly on to ties, a practice that gave way to baseplates fitted between the rails and sleepers; subsequently, spikes were replaced by sprung steel clips, such as Pandrol clips, which fix the rail to the baseplates.
Portable track
Sometimes rail tracks are designed to be portable and moved from one place to another as required. During construction of the Panama Canal, tracks were moved around excavation works. The track gauge was , and the rolling stock was full size. Portable tracks have often been used in open pit mines. In 1880 in New York City, sections of heavy portable track (along with much other improvised technology) helped in the move of the ancient obelisk in Central Park to its final location from the dock where it was unloaded from the cargo ship SS Dessoug.
Cane railways often had permanent tracks for the main lines, with portable tracks serving the canefields themselves. These tracks were narrow gauge (for example, ) and the portable track came in straights, curves, and turnouts, rather like on a model railway.
Decauville was a source of many portable light rail tracks, also used for military purposes.
The permanent way is so called because temporary way tracks were often used in the construction of that permanent way.
Layout
The geometry of the tracks is three-dimensional by nature, but the standards that express the speed limits and other regulations in the areas of track gauge, alignment, elevation, curvature and track surface are usually expressed in two separate layouts for horizontal and vertical.
Horizontal layout is the track layout on the horizontal plane. This involves the layout of three main track types: tangent track (straight line), curved track, and track transition curve (also called transition spiral or spiral) which connects between a tangent and a curved track.
Vertical layout is the track layout on the vertical plane including the concepts such as crosslevel, cant and gradient.
A sidetrack is a railroad track other than siding that is auxiliary to the main track. The word is also used as a verb (without object) to refer to the movement of trains and railcars from the main track to a siding, and in common parlance to refer to giving in to distractions apart from a main subject. Sidetracks are used by railroads to order and organise the flow of rail traffic.
Gauge
During the early days of rail, there was considerable variation in the gauge used by different systems, and in the UK during the railway building boom of the 1840s Brunel's broad gauge of was in competition with what was referred to at the time as the 'narrow' gauge of . Eventually the gauge won the battle and became the standard gauge, with the term 'narrow gauge' henceforth used for gauges narrower than the new standard. About 60% of the world's railways use a gauge of , known as standard or international gauge. Gauges wider than standard gauge are called broad gauge; narrower, narrow gauge. Some stretches of track are dual gauge, with three (or sometimes four) parallel rails in place of the usual two, to allow trains of two different gauges to use the same track.
Gauge can safely vary over a range. For example, U.S. federal safety standards allow standard gauge to vary from to for operation up to .
Maintenance
Track needs regular maintenance to remain in good order, especially when high-speed trains are involved. Inadequate maintenance may lead to a "slow order" (North American terminology, or temporary speed restriction in the United Kingdom) being imposed to avoid accidents (see Slow zone). Track maintenance was at one time hard manual labour, requiring teams of labourers, or trackmen (US: gandy dancers; UK: platelayers; Australia: fettlers or packers) under the supervision of a skilled ganger, who used lining bars to correct irregularities in horizontal alignment (line) of the track, and tamping and jacks to correct vertical irregularities (surface). Currently, maintenance is facilitated by a variety of specialised machines.
The surface of the head of each of the two rails can be maintained by using a railgrinder.
Common maintenance jobs include changing sleepers, lubricating and adjusting switches, tightening loose track components, and surfacing and lining track to keep straight sections straight and curves within maintenance limits. The process of sleeper and rail replacement can be automated by using a track renewal train.
Spraying ballast with herbicide to prevent weeds growing through and redistributing the ballast is typically done with a special weed killing train.
Over time, ballast is crushed or moved by the weight of trains passing over it, periodically requiring relevelling ("tamping") and eventually to be cleaned or replaced. If this is not done, the tracks may become uneven, causing swaying, rough riding and possibly derailments. An alternative to tamping is to lift the rails and sleepers and reinsert the ballast beneath. For this, specialist "stoneblower" trains are used.
Rail inspections utilize nondestructive testing methods to detect internal flaws in the rails. This is done by using specially equipped HiRail trucks, inspection cars, or in some cases, handheld inspection devices.
Rails must be replaced before the railhead profile wears to a degree that may trigger a derailment. Worn mainline rails usually have sufficient life remaining to be used on a branch line, siding or stub afterwards and are "cascaded" to those applications.
The environmental conditions along railroad track create a unique railway ecosystem. This is particularly so in the United Kingdom, where steam locomotives are only used on special services and vegetation has not been trimmed back so thoroughly. This creates a fire risk in prolonged dry weather.
In the UK, the cess is used by track repair crews to walk to a work site, and as a safe place to stand when a train is passing. This helps when doing minor work, while needing to keep trains running, by not needing a Hi-railer or transport vehicle blocking the line to transport crew to get to the site.
Bed and foundation
Railway tracks are generally laid on a bed of stone track ballast or track bed, which in turn is supported by prepared earthworks known as the track formation. The formation comprises the subgrade and a layer of sand or stone dust (often sandwiched in impervious plastic), known as the blanket, which restricts the upward migration of wet clay or silt. There may also be layers of waterproof fabric to prevent water penetrating to the subgrade. The track and ballast form the permanent way. The foundation may refer to the ballast and formation, i.e. all man-made structures below the tracks.
Some railroads are using asphalt pavement below the ballast in order to keep dirt and moisture from moving into the ballast and spoiling it. The fresh asphalt also serves to stabilize the ballast so it does not move around so easily.
Additional measures are required where the track is laid over permafrost, such as on the Qingzang Railway in Tibet. For example, transverse pipes through the subgrade allow cold air to penetrate the formation and prevent that subgrade from melting.
Geosynthetic reinforcement
Geosynthetics are used to reduce or replace traditional layers in trackbed construction and rehabilitation worldwide to improve track support and reduce track maintenance costs. Reinforcement geosynthetics, such as geocells (which rely on 3D soil confinement mechanisms), have demonstrated efficacy in stabilizing soft subgrade soils and reinforcing substructural layers to limit progressive track degradation. Reinforcement geosynthetics increase soil bearing capacity, limit ballast movement and degradation, and reduce differential settlement that affects track geometry. They also reduce construction time and cost, while reducing environmental impact and carbon footprint. The increased use of geosynthetic reinforcement solutions is supported by new high-performance geocell materials (e.g., NPA, Novel Polymeric Alloy), published research, case-study projects, and international standards (ISO, ASTM, CROW/SBRCURnet).
The hybrid use of high-performance geogrids at the subgrade and high-performance geocells in the upper subbase/subballast layer has been shown to increase the reinforcement factor beyond the sum of their separate effects, and is particularly effective in attenuating heaving of expansive subgrade clay soils. A field test project on a section of Amtrak's NE Corridor suffering from clay mud-pumping demonstrated that the hybrid solution improved the track quality index (TQI), significantly reduced track geometry degradation, and lowered track surface maintenance by a factor of 6.7 using high-performance NPA geocells. Geosynthetic reinforcement is also used to stabilize railway embankments, which must be robust enough to withstand repeated cyclical loading. Geocells can utilize recycled marginal or poorly graded granular material to create stable embankments, making railway construction more economical and sustainable.
Buses
Some buses can use tracks. This concept came out of Germany and was called . The first such track, the O-Bahn Busway, was built in Adelaide, Australia.
See also
Degree of curvature
Difference between train and tram rails
Exothermic welding
Gauntlet track
Glossary of rail terminology (including US/UK and other regional/national differences)
Green track
Maglev
Minimum railway curve radius
Monorail
Permanent way (history)
Rack railway
Rail profile
Roll way, part of the track of a rubber-tyred metro
Rubber-tyred metro
Street running
Subgrade
Tie plate
TGV track construction
Tramway (industrial)
Tramway track
References
Bibliography
Pike, J. (2001), Track, Sutton Publishing.
External links
Table of North American tee rail (flat bottom) sections
ThyssenKrupp handbook, Vignoles rail
ThyssenKrupp handbook, Light Vignoles rail
Track Details in photographs
"Drawing of England Track Laying in Sections at 200 yards an hour" Popular Mechanics, December 1930
illustrated description of the construction and maintenance of the railway
Railway technical
Railway track layouts
Structural steel
Rail infrastructure | Railway track | [
"Engineering"
] | 5,921 | [
"Structural engineering",
"Structural steel"
] |
321,371 | https://en.wikipedia.org/wiki/Cottaging | Cottaging is a gay slang term, originating from the United Kingdom, referring to anonymous sex between men in a public lavatory (a "cottage" or "tea-room"), or cruising for sexual partners with the intention of having sex elsewhere. The term has its roots in self-contained English toilet blocks resembling small cottages in their appearance; in the English cant language of Polari this became a double entendre by gay men referring to sexual encounters.
The word "cottage", usually meaning a small, cosy, countryside home, is documented as having been in use during the Victorian era to refer to a public toilet and by the 1960s its use in this sense had become an exclusively homosexual slang term. This usage is predominantly British, though the term is occasionally used with the same meaning in other parts of the world. Among gay men in the United States, lavatories used for this purpose are called tea rooms.
Locations
Cottages were and are located in places heavily used by many people such as bus stations, railway stations, airports and university campuses. Often, glory holes are drilled in the walls between cubicles in popular cottages. Foot signals—tapping a foot, sliding a foot slightly under the divider between stalls, attracting the attention of the occupant of the next stall—are used to signify that one wishes to connect with the person in the next cubicle. In some heavily used cottages, an etiquette develops and one person may function as a lookout to warn if non-cottagers are coming.
Since the 1980s, more individuals in authority have become more aware of the existence of cottages in places under their jurisdiction and have reduced the height of or even removed doors from the cubicles of popular cottages, or extended the walls between the cubicles to the floor to prevent foot signalling.
Cottages as meeting places
Before the gay liberation movement, many, if not most, gay and bisexual men at the time were closeted and there were almost no public gay social groups for those under legal drinking age. As such, cottages were among the few places where men too young to get into gay bars could meet others whom they knew to be gay.
The internet brought significant changes to cottaging, which was previously an activity engaged in by men with other men, often in silence with no communication beyond the markings of a cubicle wall. Today, an online community is being established in which men exchange details of locations, discussing aspects such as when it receives the highest traffic, when it is safest and to facilitate sexual encounters by arranging meeting times. The term cybercottage is used by some gay and bisexual men who use the role-play and nostalgia of cottaging in a virtual space or as a notice board to arrange real life anonymous sexual encounters.
Laud Humphreys' Tearoom Trade, published in 1970, was a sociological analysis of the social space that public "restrooms" (as toilets are euphemistically known in the US) offer for anonymous sex, and of the men (closeted, gay, or straight) who sought to fulfill sexual desires that their wives, religion, or social lives could not. The study, praised on one side for its innovation and criticized on the other for having outed "straight" men and risked their privacy, brought to light the multidimensionality of public restrooms and the complexity of homosexual sex among self-identifying straight men.
Legal status
Sexual acts in public lavatories are outlawed by many jurisdictions. It is likely that the element of risk involved in cottaging makes it an attractive activity to some.
Historically, in the United Kingdom, public gay sex often resulted in a charge and conviction of gross indecency, an offence only pertaining to sexual acts committed by males and particularly applied to homosexual activity. Anal penetration was a separate and much more serious crime that came under the definition of buggery. Buggery was a capital offence between 1533 and 1861 under UK law, although it rarely resulted in a death sentence. Importuning was an offer of sexual gratification between men, often for money. The Sexual Offences Act 1967 permitted sex between consenting men over 21 years of age when conducted in private, but the act specifically excluded public lavatories from being "private". The Sexual Offences Act 2003 replaced this aspect with the offence of "Sexual activity in a public lavatory" which includes solo masturbation.
In some of the cases where people were brought to court for cottaging, the issue of entrapment arose. Since the offences were public but often carried out behind closed lavatory doors, the police sometimes found it easier to use undercover police officers, who would frequent toilets posing as homosexuals in an effort to entice other men to approach them for sex. These men would then be arrested for importuning or soliciting and in some cases indecent assault.
Timeline of historic cases
Cultural response
After the murder of playwright Joe Orton by his boyfriend Kenneth Halliwell in 1967, Orton's diaries were published and included explicit accounts of cottaging in London toilets. The diaries were the basis of the 1987 film Prick Up Your Ears and the play of the same name.
The film Get Real was based on the 1992 play What's Wrong with Angry?, which features schoolboys cottaging as a key theme.
The 1992 play Porcelain by Singaporean-born playwright Chay Yew describes cottaging as a backdrop of violence between a gay Asian man and his white lover in a Bethnal Green lavatory.
The Chinese film East Palace, West Palace, released in 1996, is centred on cottaging activity in Beijing.
The modern dance company, DV8, staged a piece in 2003 called Men Who Have Sex With Men (MSM), which explicitly portrayed the theme of cottaging.
Nicholas de Jongh's play Plague Over England was based on the arrest and conviction of John Gielgud for cottaging and premièred in 2008.
The 2017 video game The Tearoom by independent developer Robert Yang simulates cottaging practices set in a public restroom in 1962 Mansfield, Ohio.
See also
Cruising for sex
Gay bathhouse
Dogging
References
Citations
Sources
LGBTQ culture in the United Kingdom
Gay culture
Toilets
Casual sex
Gay slang
Urinals | Cottaging | [
"Biology"
] | 1,286 | [
"Excretion",
"Toilets"
] |
321,373 | https://en.wikipedia.org/wiki/High-level%20assembler | A high-level assembler in computing is an assembler for assembly language that incorporates features found in high-level programming languages.
The earliest high-level assembler was probably Burroughs' Executive Systems Problem Oriented Language (ESPOL) in about 1960, which provided an ALGOL-like syntax around explicitly-specified Burroughs B5000 machine instructions. This was followed by Niklaus Wirth's PL360 in 1968; this replicated the Burroughs facilities, with which he was familiar, on an IBM System/360. More recent high-level assemblers are Borland's Turbo Assembler (TASM), Netwide Assembler (NASM), Microsoft's Macro Assembler (MASM), IBM's High Level Assembler (HLASM) for z/Architecture systems, Alessandro Ghignola's Linoleum, X# used in Cosmos and Ziron.
High-level assemblers typically provide instructions that directly assemble one-to-one into low-level machine code as in any assembler, plus control statements such as IF, WHILE, REPEAT...UNTIL, and FOR, macros, and other enhancements. This allows the use of high-level control statement abstractions wherever maximal speed or minimal space is not essential; low-level statements that assemble directly to machine code can be used to produce the fastest or shortest code. The end result is assembly source code that is far more readable than standard assembly code while preserving the efficiency inherent with using assembly language.
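As a sketch of what such an expansion might look like, the following Python function lowers a structured WHILE loop into flat labels, a compare, and jumps. The x86-style mnemonics, label scheme, and expansion strategy are illustrative assumptions, not the actual output of MASM, TASM, or HLA.

```python
# Minimal sketch of how a high-level assembler might expand a structured
# WHILE loop into labels, a compare, and conditional/unconditional jumps.
# The x86-style mnemonics and the expansion scheme are illustrative only.

_label_counter = 0

def new_label(prefix: str) -> str:
    """Generate a fresh, unique label for the expanded code."""
    global _label_counter
    _label_counter += 1
    return f"@@{prefix}_{_label_counter}"

def expand_while(compare: str, exit_jump: str, body: list[str]) -> list[str]:
    """Expand WHILE (compare; leave on exit_jump) { body } into flat code."""
    top, done = new_label("while"), new_label("wend")
    expanded = [f"{top}:", f"    {compare}", f"    {exit_jump} {done}"]
    expanded += [f"    {instr}" for instr in body]
    expanded += [f"    jmp {top}", f"{done}:"]
    return expanded

if __name__ == "__main__":
    # Rough equivalent of a MASM-like ".WHILE cx != 0 { dec cx / inc ax }"
    for line in expand_while("cmp cx, 0", "je", ["dec cx", "inc ax"]):
        print(line)
```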
High-level assemblers generally provide information-hiding facilities and the ability to call functions and procedures using a high-level-like syntax (i.e., the assembler automatically produces code to push parameters on the call stack rather than the programmer having to manually write the code to do this).
High-level assemblers also provide data abstractions normally found in high-level languages. Examples include: data structures, unions, classes, and sets. Some high-level assemblers (e.g., TASM and High Level Assembly (HLA)) support object-oriented programming.
References
The Art of Assembly Language, Randall Hyde
HAL70, Hamish Dewar – a high-level assembly language for Interdata series 70 minicomputers.
Webster site with information and links on HLA and assembler
High-level | High-level assembler | [
"Technology"
] | 502 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
321,382 | https://en.wikipedia.org/wiki/Energy%20flow%20%28ecology%29 | Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in, together with energy from the sun, and converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient or inefficient that ecosystem is. This decrease in efficiency occurs because organisms need to perform cellular respiration to survive, and energy is lost as heat when cellular respiration is performed. That is also why there are fewer tertiary consumers than there are producers.
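A minimal sketch of this bookkeeping, using the 10% transfer figure quoted above; the starting energy of 10,000 units and the level names are illustrative values, not data from any particular ecosystem.

```python
# Energy remaining at each trophic level under the "10% rule" described above.

def energy_per_level(initial: float, efficiency: float, levels: int) -> list[float]:
    """Energy reaching each successive trophic level."""
    return [initial * efficiency ** i for i in range(levels)]

if __name__ == "__main__":
    names = ["producers", "primary consumers",
             "secondary consumers", "tertiary consumers"]
    for name, energy in zip(names, energy_per_level(10_000.0, 0.10, 4)):
        print(f"{name:>20}: {energy:8.1f} units")
    # 10000 -> 1000 -> 100 -> 10: the successive losses are why trophic
    # pyramids narrow toward the top.
```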
Primary production
A producer is any organism that performs photosynthesis. Producers are important because they convert energy from the sun into a storable and usable chemical form of energy, glucose, as well as oxygen. The producers themselves can use the energy stored in glucose to perform cellular respiration. Or, if the producer is consumed by herbivores in the next trophic level, some of the energy is passed on up the pyramid. The glucose stored within producers serves as food for consumers, and so it is only through producers that consumers are able to access the sun’s energy. Some examples of primary producers are algae, mosses, and other plants such as grasses, trees, and shrubs.
Chemosynthetic bacteria perform a process similar to photosynthesis, but instead of energy from the sun they use energy stored in chemicals like hydrogen sulfide. This process, referred to as chemosynthesis, usually occurs deep in the ocean at hydrothermal vents that produce heat and chemicals such as hydrogen, hydrogen sulfide and methane. Chemosynthetic bacteria can use the energy in the bonds of the hydrogen sulfide and oxygen to convert carbon dioxide to glucose, releasing water and sulfur in the process. Organisms that consume the chemosynthetic bacteria can take in the glucose and use oxygen to perform cellular respiration, similar to herbivores consuming producers.
One of the factors that controls primary production is the amount of energy that enters the producer(s), which can be measured using productivity. Only one percent of solar energy enters the producer; the rest bounces off or passes through. Gross primary productivity is the amount of energy the producer actually gets. Generally, 60% of the energy that enters the producer goes to the producer’s own respiration. The net primary productivity is the amount the plant retains after the energy used for cellular respiration is subtracted. Another factor controlling primary production is organic/inorganic nutrient levels in the water or soil that the producer is living in.
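Using the figures quoted above (about 1% of incident solar energy captured, about 60% of that spent on respiration), the bookkeeping can be made explicit; the solar input value below is arbitrary.

```python
# Net primary productivity from the fractions given in the paragraph above:
# capture ~1% of solar input (gross primary productivity), then subtract
# the ~60% of GPP spent on the producer's own respiration.

def net_primary_productivity(solar_input: float,
                             capture_fraction: float = 0.01,
                             respiration_fraction: float = 0.60) -> float:
    gpp = solar_input * capture_fraction       # gross primary productivity
    return gpp * (1.0 - respiration_fraction)  # net primary productivity

if __name__ == "__main__":
    solar = 1_000_000.0  # arbitrary energy units of sunlight reaching producers
    print(f"NPP = {net_primary_productivity(solar):.0f} units")  # 4000 units
```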
Secondary production
Secondary production is the use of energy stored in plants converted by consumers to their own biomass. Different ecosystems have different levels of consumers, all ending with one top consumer. Most energy is stored in organic matter of plants, and as the consumers eat these plants they take up this energy. This energy in the herbivores and omnivores is then consumed by carnivores. There is also a large amount of energy that is in primary production and ends up being waste or litter, referred to as detritus. The detrital food chain includes a large number of microbes, macroinvertebrates, meiofauna, fungi, and bacteria. These organisms are consumed by omnivores and carnivores and account for a large amount of secondary production. Secondary consumers can vary widely in how efficient they are in consuming. The efficiency of energy being passed on to consumers is estimated to be around 10%. Energy flow through consumers differs in aquatic and terrestrial environments.
In aquatic environments
Heterotrophs contribute to secondary production, which depends on primary productivity and net primary products. Secondary production is the energy that herbivores and decomposers use, and it thus depends on primary productivity. Herbivores and decomposers consume carbon from two main organic sources in aquatic ecosystems: autochthonous and allochthonous. Autochthonous carbon comes from within the ecosystem and includes aquatic plants, algae and phytoplankton. Allochthonous carbon from outside the ecosystem is mostly dead organic matter from the terrestrial ecosystem entering the water. In stream ecosystems, approximately 66% of annual energy input can be washed downstream. The remaining amount is consumed and lost as heat.
In terrestrial environments
Secondary production is often described in terms of trophic levels, and while this can be useful in explaining relationships it overemphasizes the rarer interactions. Consumers often feed at multiple trophic levels. Energy transferred above the third trophic level is relatively unimportant. The assimilation efficiency relates the amount of food a consumer has eaten to how much of it the consumer assimilates and how much is expelled as feces or urine. While a portion of the energy is used for respiration, another portion of the energy goes towards biomass in the consumer. There are two major food chains: in the primary food chain, energy comes from autotrophs and is passed on to consumers; in the second, carnivores eat the herbivores or decomposers that consume the autotrophic energy. Consumers are broken down into primary consumers, secondary consumers and tertiary consumers. Carnivores have a much higher assimilation of energy, about 80%, and herbivores have a much lower efficiency of approximately 20 to 50%. Energy in a system can be affected by animal emigration/immigration. The movements of organisms are significant in terrestrial ecosystems. Energetic consumption by herbivores in terrestrial ecosystems has a low range of ~3-7%. The flow of energy is similar in many terrestrial environments. The fluctuation in the amount of net primary product consumed by herbivores is generally low. This is in large contrast to aquatic environments of lakes and ponds where grazers have a much higher consumption of around ~33%. Ectotherms and endotherms have very different assimilation efficiencies.
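A small sketch of the assimilation-efficiency calculation described above; the ingested and egested amounts are made-up values chosen only to fall within the quoted carnivore and herbivore ranges.

```python
# Assimilation efficiency: the fraction of ingested food energy that is
# assimilated rather than expelled as feces or urine. Sample figures are
# illustrative, matching the ~80% (carnivore) and 20-50% (herbivore)
# efficiencies quoted in the paragraph above.

def assimilation_efficiency(ingested: float, egested: float) -> float:
    """(ingested - egested) / ingested, as a fraction."""
    return (ingested - egested) / ingested

if __name__ == "__main__":
    print(f"carnivore: {assimilation_efficiency(100.0, 20.0):.0%}")  # 80%
    print(f"herbivore: {assimilation_efficiency(100.0, 65.0):.0%}")  # 35%
```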
Detritivores
Detritivores consume organic material that is decomposing and are in turn consumed by carnivores. Predator productivity is correlated with prey productivity. This confirms that the primary productivity in an ecosystem affects all subsequent productivity.
Detritus is a large portion of organic material in ecosystems. Organic material in temperate forests is mostly made up of dead plants, approximately 62%.
In an aquatic ecosystem, leaf matter that falls into streams gets wet and begins to leach organic material. This happens rather quickly and will attract microbes and invertebrates. The leaves can be broken down into large pieces called coarse particulate organic matter (CPOM). The CPOM is rapidly colonized by microbes. Meiofauna is extremely important to secondary production in stream ecosystems. Microbes breaking down and colonizing this leaf matter are very important to the detritivores. The detritivores make the leaf matter more edible by releasing compounds from the tissues; it ultimately helps soften them. As leaves decay, nitrogen will decrease since cellulose and lignin in the leaves are difficult to break down. Thus the colonizing microbes bring in nitrogen in order to aid in the decomposition. Leaf breakdown can depend on initial nitrogen content, season, and species of trees. Tree species vary in when their leaves fall, so leaf breakdown happens at different times, creating what is called a mosaic of microbial populations.
Species effects and diversity in an ecosystem can be analyzed through their performance and efficiency. In addition, secondary production in streams can be heavily influenced by detritus falling into them; in one study of litter removal and exclusion, the biomass and abundance of benthic fauna decreased by 47–50%.
Energy flow across ecosystems
Research has demonstrated that primary producers fix carbon at similar rates across ecosystems. Once carbon has been introduced into a system as a viable source of energy, the mechanisms that govern the flow of energy to higher trophic levels vary across ecosystems. Among aquatic and terrestrial ecosystems, patterns have been identified that can account for this variation and have been divided into two main pathways of control: top-down and bottom-up. The acting mechanisms within each pathway ultimately regulate community and trophic level structure within an ecosystem to varying degrees. Bottom-up controls involve mechanisms that are based on resource quality and availability, which control primary productivity and the subsequent flow of energy and biomass to higher trophic levels. Top-down controls involve mechanisms that are based on consumption by consumers. These mechanisms control the rate of energy transfer from one trophic level to another as herbivores or predators feed on lower trophic levels.
Aquatic vs terrestrial ecosystems
Much variation in the flow of energy is found within each type of ecosystem, creating a challenge in identifying variation between ecosystem types. In a general sense, the flow of energy is a function of primary productivity with temperature, water availability, and light availability. For example, among aquatic ecosystems, higher rates of production are usually found in large rivers and shallow lakes than in deep lakes and clear headwater streams. Among terrestrial ecosystems, marshes, swamps, and tropical rainforests have the highest primary production rates, whereas tundra and alpine ecosystems have the lowest. The relationships between primary production and environmental conditions have helped account for variation within ecosystem types, allowing ecologists to demonstrate that energy flows more efficiently through aquatic ecosystems than terrestrial ecosystems due to the various bottom-up and top-down controls in play.
Bottom-up
The strength of bottom-up controls on energy flow are determined by the nutritional quality, size, and growth rates of primary producers in an ecosystem. Photosynthetic material is typically rich in nitrogen (N) and phosphorus (P) and supplements the high herbivore demand for N and P across all ecosystems. Aquatic primary production is dominated by small, single-celled phytoplankton that are mostly composed of photosynthetic material, providing an efficient source of these nutrients for herbivores. In contrast, multi-cellular terrestrial plants contain many large supporting cellulose structures of high carbon but low nutrient value. Because of this structural difference, aquatic primary producers have less biomass per photosynthetic tissue stored within the aquatic ecosystem than in the forests and grasslands of terrestrial ecosystems. This low biomass relative to photosynthetic material in aquatic ecosystems allows for a more efficient turnover rate compared to terrestrial ecosystems. As phytoplankton are consumed by herbivores, their enhanced growth and reproduction rates sufficiently replace lost biomass and, in conjunction with their nutrient dense quality, support greater secondary production.
Additional factors impacting primary production include inputs of N and P, which occur at a greater magnitude in aquatic ecosystems. These nutrients are important in stimulating plant growth and, when passed to higher trophic levels, stimulate consumer biomass and growth rate. If either of these nutrients is in short supply, it can limit overall primary production. Within lakes, P tends to be the greater limiting nutrient, while both N and P limit primary production in rivers. Due to these limiting effects, nutrient inputs can potentially alleviate the limitations on net primary production of an aquatic ecosystem. Allochthonous material washed into an aquatic ecosystem introduces N and P as well as energy in the form of carbon molecules that are readily taken up by primary producers. Greater inputs and increased nutrient concentrations support greater net primary production rates, which in turn support greater secondary production.
Top-down
Top-down mechanisms exert greater control on aquatic primary producers due to the role of consumers within an aquatic food web. Among consumers, herbivores can mediate the impacts of trophic cascades by bridging the flow of energy from primary producers to predators in higher trophic levels. Across ecosystems, there is a consistent association between herbivore growth and producer nutritional quality. However, in aquatic ecosystems, primary producers are consumed by herbivores at a rate four times greater than in terrestrial ecosystems. Although this topic is highly debated, researchers have attributed the distinction in herbivore control to several theories, including producer to consumer size ratios and herbivore selectivity.
Modeling of top-down controls on primary producers suggests that the greatest control on the flow of energy occurs when the size ratio of consumer to primary producer is the highest. The size distribution of organisms found within a single trophic level in aquatic systems is much narrower than that of terrestrial systems. On land, the consumer size ranges from smaller than the plant it consumes, such as an insect, to significantly larger, such as an ungulate, while in aquatic systems, consumer body size within a trophic level varies much less and is strongly correlated with trophic position. As a result, the size difference between producers and consumers is consistently larger in aquatic environments than on land, resulting in stronger herbivore control over aquatic primary producers.
Herbivores can potentially control the fate of organic matter as it is cycled through the food web. Herbivores tend to select nutritious plants while avoiding plants with structural defense mechanisms. Like support structures, defense structures are composed of nutrient-poor, high-carbon cellulose. Access to nutritious food sources enhances herbivore metabolism and energy demands, leading to greater removal of primary producers. In aquatic ecosystems, phytoplankton are highly nutritious and generally lack defense mechanisms. This results in greater top-down control because consumed plant matter is quickly released back into the system as labile organic waste. In terrestrial ecosystems, primary producers are less nutritionally dense and are more likely to contain defense structures. Because herbivores prefer nutritionally dense plants and avoid plants or plant parts with defense structures, a greater amount of plant matter is left unconsumed within the ecosystem. Herbivore avoidance of low-quality plant matter may be why terrestrial systems exhibit weaker top-down control on the flow of energy.
See also
References
Further reading
Ecology terminology
Energy
Environmental science
Ecological economics | Energy flow (ecology) | [
"Physics",
"Biology",
"Environmental_science"
] | 3,169 | [
"Ecology terminology",
"Physical quantities",
"Energy (physics)",
"Energy",
"nan"
] |
321,438 | https://en.wikipedia.org/wiki/Perfect%20information | In economics, perfect information (sometimes referred to as "no hidden information") is a feature of perfect competition. With perfect information in a market, all consumers and producers have complete and instantaneous knowledge of all market prices, their own utility, and own cost functions.
In game theory, a sequential game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred, including the "initialization event" of the game (e.g. the starting hands of each player in a card game).
Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions, payoffs, strategies and "types". A game with perfect information may or may not have complete information.
Games where some aspect of play is hidden from opponents – such as the cards in poker and bridge – are examples of games with imperfect information.
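To make the distinction concrete, an extensive-form game can be modelled as a tree whose decision nodes are grouped into information sets: a game has perfect information exactly when every information set contains a single node. The sketch below illustrates this under that modelling assumption; the class and function names are illustrative, not a standard game-theory API.

```python
# Minimal sketch: an extensive-form game tree whose decision nodes carry an
# information-set label. Perfect information = every information set is a
# singleton. Terminal payoff nodes are omitted for brevity; names are
# illustrative assumptions, not a standard library API.
from dataclasses import dataclass, field

@dataclass
class Node:
    player: int                    # which player moves at this node
    info_set: str                  # label of the node's information set
    children: list = field(default_factory=list)

def has_perfect_information(root: Node) -> bool:
    """True iff no two decision nodes share an information set."""
    counts = {}
    stack = [root]
    while stack:
        node = stack.pop()
        counts[node.info_set] = counts.get(node.info_set, 0) + 1
        stack.extend(node.children)
    return all(n == 1 for n in counts.values())

# Chess-like: every node is its own information set -> perfect information.
chess_like = Node(1, "root", [Node(2, "after-e4"), Node(2, "after-d4")])
# Poker-like: player 2 cannot tell the two nodes apart -> imperfect information.
poker_like = Node(1, "deal", [Node(2, "p2-hidden"), Node(2, "p2-hidden")])

print(has_perfect_information(chess_like))  # True
print(has_perfect_information(poker_like))  # False
```

In the poker-like tree, player 2's two nodes share one information set because the dealt card is hidden, which is precisely what makes the information imperfect.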
Examples
Chess is an example of a game with perfect information, as each player can see all the pieces on the board at all times. Other games with perfect information include tic-tac-toe, Reversi, checkers, and Go.
Academic literature has not produced consensus on a standard definition of perfect information; in particular, it is disputed whether games with chance but no secret information, and games with simultaneous moves, should count as games of perfect information.
Games which are sequential (players alternate in moving) and which have chance events (with known probabilities to all players) but no secret information, are sometimes considered games of perfect information. This includes games such as backgammon and Monopoly. But there are some academic papers which do not regard such games as games of perfect information because the results of chance themselves are unknown prior to them occurring.
Games with simultaneous moves are generally not considered games of perfect information. This is because each player holds information which is secret, and must play a move without knowing the opponent's secret information. Nevertheless, some such games are symmetrical, and fair. An example of a game in this category includes rock paper scissors.
See also
Extensive form game
Information asymmetry
Partial knowledge
Screening game
Signaling game
References
Further reading
Fudenberg, D. and Tirole, J. (1993) Game Theory, MIT Press. (see Chapter 3, sect 2.2)
Gibbons, R. (1992) A primer in game theory, Harvester-Wheatsheaf. (see Chapter 2)
Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey, Wiley & Sons (see Chapter 3, section 2)
The Economics of Groundhog Day by economist D.W. MacKenzie, using the 1993 film Groundhog Day to argue that perfect information, and therefore perfect competition, is impossible.
Watson, J. (2013) Strategy: An Introduction to Game Theory, W.W. Norton and Co.
Game theory
Perfect competition
Board game terminology | Perfect information | [
"Mathematics"
] | 590 | [
"Game theory"
] |
321,451 | https://en.wikipedia.org/wiki/Artificial%20world | An artificial world may refer to:
Megastructure, large man-made structures, especially those located in outer space.
Artificial planet, specific type of megastructure
See also
Megastructures | Artificial world | [
"Technology"
] | 44 | [
"Exploratory engineering",
"Megastructures"
] |
321,459 | https://en.wikipedia.org/wiki/Religion%20and%20circumcision | Religious circumcision is generally performed shortly after birth, during childhood, or around puberty as part of a rite of passage. Circumcision for religious reasons is most frequently practiced in Judaism and Islam.
Abrahamic religions
Judaism
Christianity
Ancient church
Modern Christianity
Circumcision is considered a customary practice among Oriental Christian denominations such as the Coptic, Ethiopian, and Eritrean Orthodox churches, and the practice is near-universal in the Ethiopian Orthodox Church. Some Christian churches in South Africa oppose circumcision, viewing it as a pagan ritual, while others, including the Nomiya church in Kenya, require it. It is common in Cameroon, the Democratic Republic of the Congo, Eritrea, Ghana, Liberia, and Nigeria. Circumcision is also widely practiced among Christian communities in the Anglosphere, Oceania, South Korea, the Philippines, and the Middle East, including Syria, Lebanon, Jordan, Palestine, Israel, and North Africa. It is rare among Christians in Europe, East Asia, and India, and Christians in the East and West Indies (excluding the Philippines) do not practice it.
The Lutheran Church and the Greek Orthodox Church celebrate the Circumcision of Christ on 1 January, while Orthodox churches following the Julian calendar celebrate it on 14 January. All Orthodox churches consider it a "Great Feast". In much of Western Christianity, the Feast of the Circumcision of Christ has been replaced by other commemorations, such as the Solemnity of Mary in the Roman Catholic Church or the Feast of the Holy Name of Jesus in the Lutheran Churches. Exceptions include most Traditionalist Catholics, who to varying degrees reject the Novus Ordo and other changes following Vatican II and who maintain the feast as a holy day of obligation.
According to scholar Heather L. Armstrong of the University of Southampton, about half of Christian males worldwide are circumcised, with most of them located in Africa, Anglosphere countries (with notable prevalence in the United States), and the Philippines. Many Christians have been circumcised for reasons such as family preference, or for medical or cultural reasons.
Roman Catholic Church
The Roman Catholic Church denounced religious circumcision for its members in Cantate Domino, written during the 11th session of the Council of Florence in 1442, warning of loss of salvation for converts who observe it. This decision was based on the belief that baptism had superseded circumcision, and may also have been a response to Coptic Christians, who continued to practice circumcision.
Origen stated in his work Contra Celsum that circumcision "was discontinued by Jesus, who desired that His disciples should not practise it."
Pope Pius XII taught that circumcision is only "[morally] permissible if, in accordance with therapeutic principles, it prevents a disease that cannot be countered in any other way."
On another occasion, he stated:
The Church has been viewed as maintaining a neutral position on the practice of cultural circumcision, due to its policy of inculturation, although some Catholic scholars argue that the Church condemns it, writing that "elective male infant circumcision not only violates the proper application of the time-honored principle of totality, but even fits the ethical definition of mutilation, which is gravely sinful."
Fr. John J. Dietzen, a priest and columnist, argued that paragraph number 2297 from the Catholic Catechism (Respect for bodily integrity) makes the practice of elective and neonatal circumcision immoral. John Paul Slosar and Daniel O'Brien counter that the therapeutic benefits of neonatal circumcision are inconclusive, but that recent findings that circumcision may prevent disease put the practice outside the realm of paragraph 2297. They claim that the "Respect for bodily integrity" paragraph applies in the context of kidnapping, hostage-taking or torture, and that if circumcision were included, any removal of tissue or follicle could be considered a violation of moral law. The proportionality of harm versus benefit of medical procedures, as defined by Directives 29 and 33 of the Ethical and Religious Directives for Catholic Health Care Services (National Conference of Catholic Bishops), has been interpreted both to support and to reject circumcision. These arguments represent the conscience of the individual writers, and not official doctrine. The most recent statement from the Church was that of Pope Emeritus Benedict XVI:
The Church of Antioch sent Barnabas on a mission with Paul, which became known as the Apostle's first missionary journey . . . Together with Paul, he then went to the so-called Council of Jerusalem where after a profound examination of the question, the Apostles with the Elders decided to discontinue the practice of circumcision so that it was no longer a feature of the Christian identity (cf. Acts 15: 1-35). It was only in this way that, in the end, they officially made possible the Church of the Gentiles, a Church without circumcision; we are children of Abraham simply through faith in Christ.
Latter Day Saints
Passages from scriptures connected with the Latter Day Saint movement (Mormons) explain that the "law of circumcision is done away" by Christ and thus unnecessary.
Druze
Circumcision is widely practiced by the Druze as a cultural tradition with no religious significance. No special interval is specified: Druze infants are usually circumcised shortly after birth, but some remain uncircumcised until age ten or older. Some Druze do not circumcise their male children, and refuse to observe this "common Muslim practice".
Islam
The origin of circumcision in Islam is a matter of religious and scholarly debate. It is mentioned in some hadith and the sunnah, but not in the Quran, though perhaps it is implied by the command to "follow the way of Ibrahim, the true in Faith". In the time of Muhammad, circumcision was carried out by Pagan Arabian tribes, and by the Jewish tribes of Arabia for religious reasons. This was attested by al-Jahiz and by the Jewish historian Flavius Josephus.
The four schools of Islamic jurisprudence have different views on circumcision. Some state that it is recommended, others that it is permissible but not binding, while others regard it as a legal obligation. According to Shafi‘i and Hanbali jurists, male circumcision is obligatory for Muslims, while Hanafi jurists consider it recommended. Some Salafis have argued that circumcision is required in Islam to provide ritual cleanliness, based on the covenant with Abraham.
Whereas Jewish circumcision is closely bound by ritual timing and tradition, Islam states no fixed age for circumcision. In Muslim communities, boys are often circumcised in late childhood or early adolescence, and the age and procedures used vary by family, region, and country. In some Muslim-majority countries, circumcision is performed after boys have learned to recite the Quran from start to finish. In Malaysia and other regions, the boy usually undergoes the operation between the ages of ten and twelve; it is thus a puberty rite, serving to introduce him into the adult world. The procedure is sometimes semi-public, accompanied with music, special foods, and much festivity.
Islam has no equivalent of a Jewish mohel. Circumcisions are usually carried out in health facilities or hospitals, and performed by trained medical practitioners. The circumciser can be either male or female, and is not required to be a Muslim, and circumcision is not required of converts to Islam.
Indian religions
Hindu canons make no reference to circumcision, and both Hinduism and Buddhism appear to have a neutral view on it. However, Hinduism discourages non-medical circumcision, holding that the body is made by almighty God and that nobody has the right to alter it without the consent of the person undergoing it. Certain Hindu gurus consider it to be directly against nature and God's design.
Sikh infants are not circumcised, and Sikhism criticizes the practice. Bhagat Kabir, for example, criticizes circumcision in a hymn of the Guru Granth Sahib.
Africa
In West Africa, infant circumcision had religious significance as a rite of passage or otherwise in the past; today, in many traditional societies, including some non-Muslim Nigerian ones, it has become medicalised and is simply performed in infancy as a cultural norm, without any particular conscious cultural significance. Among the Urhobo of southern Nigeria, however, it remains symbolic of a boy entering into manhood: the ritual expression Omo te Oshare ("the boy is now man") constitutes a rite of passage from one age set to another.
In East Africa, specifically in Kenya among various so-classified Bantu and Nilotic peoples, such as the Maragoli and Idakho of the Luhya super-ethnic group, the Kikuyu, Kalenjin and Maasai, circumcision is a rite of passage observed collectively by a number of boys every few years, and boys circumcised at the same time are taken to be members of a single age set.
Authority derives from the age-group and the age-set. Prior to circumcision a natural leader or Olaiguenani is selected; he leads his age-group through a series of rituals until old age, sharing responsibility with a select few, of whom the ritual expert (Oloiboni) is the ultimate authority. Masai youths are not circumcised until they are mature, and a new age-set is initiated together at regular intervals of twelve to fifteen years. The young warriors (Il-Murran) remain initiates for some time, using blunt arrows to hunt small birds which are stuffed and tied to a frame to form a head-dress. Traditionally, among the Luhya, boys of certain age-sets, typically between 8 and 18 years of age would, under the leadership of specific men engage in various rites leading up to the day of circumcision. After circumcision, they would live apart from the rest of society for a certain number of days. Not even their mothers nor sisters would be allowed to see them.
The Xhosa people of the Eastern Cape in South Africa have a circumcision ritual that forms part of the transition to manhood. It is called the Abakwetha ("a group learning"): a group, normally of five young men aged between 16 and 20, goes off for three months and lives in a special hut (sutu), and the circumcision is the climax of the ritual. Nelson Mandela describes his experience of undergoing this ritual in his autobiography, Long Walk to Freedom. Traditional circumcisions are often performed in unsterile conditions where no anesthetic is administered; improper treatment of the wound can lead to sepsis and dehydration, which has in the past led to the deaths of initiates.
Among some West African animist groups, such as the Dogon and Dowayo, circumcision represents a removal of "feminine" aspects of the male, turning boys into fully masculine males.
Ancient Egypt
Sixth Dynasty (2345–2181 BC) tomb artwork in Egypt is thought to be the oldest documentary evidence of circumcision. The most ancient depiction is a bas-relief from the necropolis at Saqqara (c. 2400 BC) with the inscription "Hold him and do not allow him to faint". The oldest written account, by an Egyptian named Uha, in the 23rd century BC, describes a mass circumcision and boasts of his ability to stoically endure the pain: "When I was circumcised, together with one hundred and twenty men ... there was none thereof who hit out, there was none thereof who was hit, and there was none thereof who scratched and there was none thereof who was scratched."
Circumcision in ancient Egypt was thought to be a rite of passage from childhood to adulthood. The alteration of the body and ritual of circumcision was supposed to give access to ancient mysteries reserved for the initiated. The content of those mysteries are unclear but are likely to be myths, prayers, and incantations central to Egyptian religion. The Egyptian Book of the Dead, for example, tells of the sun god Ra performing a self-circumcision, whose blood created two minor guardian deities. Circumcisions were performed by priests in a public ceremony, using a stone blade. It is thought to have been more popular among society's upper echelons, although it was not universal and those lower down the social order also had the procedure.
Asia
In early 2007 it was announced that rural aidpost orderlies in the East Sepik Province of Papua New Guinea were to undergo training in circumcision with a view to introducing the procedure as a means of prophylaxis against HIV/AIDS, which was becoming a significant problem in the country.
Neither the Avesta nor the Zoroastrian Pahlavi texts mention circumcision. Traditionally, Zoroastrians do not practice circumcision. Circumcision is not required in Yazidism, but is practised by some Yazidis due to regional customs.
Circumcision is forbidden in Mandaeism, and the sign of the Jews given to Abraham by God, circumcision, is considered abhorrent. According to the Mandaean doctrine a circumcised man cannot serve as a priest.
Circumcision in South Korea is largely the result of American cultural and military influence following the Korean War.
The origin of circumcision (tuli) in the Philippines is uncertain. One newspaper article speculates that it is due to the influence of Western colonisation. However, Antonio de Morga's 17th-century History of the Philippine Islands documents its existence in pre-Colonial Philippines, owing it to Islamic influence.
Circumcision is not a religious practice of the Bahá'í Faith, and leaves that decision to the parents.
Like Judaism, the religion of Samaritanism requires ritual circumcision on the eighth day of life.
Oceania
Circumcision is part of initiation rites in some Pacific Island, and Australian aboriginal traditions in areas such as Arnhem Land, where the practice was introduced by Makassan traders from Sulawesi. Circumcision ceremonies among certain Australian aboriginal societies are noted for their painful nature, including subincision for some aboriginal peoples in the Western Desert.
In the Pacific, ritual circumcision is nearly universal in the Melanesian islands of Fiji and Vanuatu; participation in the traditional land diving on Pentecost Island is reserved for those who have been circumcised. Circumcision is also commonly practised in the Polynesian islands of Samoa, Tonga, Niue, and Tikopia. In Samoa, it is accompanied by a celebration.
See also
History of circumcision
Prevalence of circumcision
References
External links
Circumcision
Rites of passage
24th-century BC establishments | Religion and circumcision | [
"Biology"
] | 3,409 | [
"Behavior",
"Religious practices",
"Human behavior"
] |
321,481 | https://en.wikipedia.org/wiki/Temporal%20logic | In logic, temporal logic is any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time (for example, "I am always hungry", "I will eventually be hungry", or "I will be hungry until I eat something"). It is sometimes also used to refer to tense logic, a modal logic-based system of temporal logic introduced by Arthur Prior in the late 1950s, with important contributions by Hans Kamp. It has been further developed by computer scientists, notably Amir Pnueli, and logicians.
Temporal logic has found an important application in formal verification, where it is used to state requirements of hardware or software systems. For instance, one may wish to say that whenever a request is made, access to a resource is eventually granted, but it is never granted to two requestors simultaneously. Such a statement can conveniently be expressed in a temporal logic.
Motivation
Consider the statement "I am hungry". Though its meaning is constant in time, the statement's truth value can vary in time. Sometimes it is true, and sometimes false, but never simultaneously true and false. In a temporal logic, a statement can have a truth value that varies in time—in contrast with an atemporal logic, which applies only to statements whose truth values are constant in time. This treatment of truth-value over time differentiates temporal logic from computational verb logic.
Temporal logic always has the ability to reason about a timeline. So-called "linear-time" logics are restricted to this type of reasoning. Branching-time logics, however, can reason about multiple timelines. This permits in particular treatment of environments that may act unpredictably.
To continue the example, in a branching-time logic we may state that "there is a possibility that I will stay hungry forever", and that "there is a possibility that eventually I am no longer hungry". If we do not know whether or not I will ever be fed, these statements can both be true.
History
Although Aristotle's logic is almost entirely concerned with the theory of the categorical syllogism, there are passages in his work that are now seen as anticipations of temporal logic, and may imply an early, partially developed form of first-order temporal modal bivalent logic. Aristotle was particularly concerned with the problem of future contingents, where he could not accept that the principle of bivalence applies to statements about future events, i.e. that we can presently decide if a statement about a future event is true or false, such as "there will be a sea battle tomorrow".
There was little development for millennia, Charles Sanders Peirce noted in the 19th century:
Surprisingly for Peirce, the first system of temporal logic was constructed, as far as we know, in the first half of the 20th century. Although Arthur Prior is widely known as a founder of temporal logic, the first formalization of such a logic was provided in 1947 by the Polish logician Jerzy Łoś. In his work Podstawy Analizy Metodologicznej Kanonów Milla (The Foundations of a Methodological Analysis of Mill’s Methods) he presented a formalization of Mill's canons. In Łoś' approach, emphasis was placed on the time factor. Thus, to reach his goal, he had to create a logic that could provide means for the formalization of temporal functions. The logic could be seen as a byproduct of Łoś' main aim, although it was the first positional logic that, as a framework, was used later for Łoś' inventions in epistemic logic. The logic itself has a syntax very different from Prior's tense logic, which uses modal operators. The language of Łoś' logic instead uses a realization operator, specific to positional logic, which binds an expression to the specific context in which its truth value is considered. In Łoś' work this context was only temporal, thus expressions were bound to specific moments or intervals of time.
In the following years, research on temporal logic by Arthur Prior began. He was concerned with the philosophical implications of free will and predestination. According to his wife, he first considered formalizing temporal logic in 1953. The results of his research were first presented at a conference in Wellington in 1954. The system Prior presented was similar syntactically to Łoś' logic, although not until 1955 did he explicitly refer to Łoś' work, in the last section of Appendix 1 of Prior's Formal Logic.
Prior gave lectures on the topic at the University of Oxford in 1955–6, and in 1957 published a book, Time and Modality, in which he introduced a propositional modal logic with two temporal connectives (modal operators), F and P, corresponding to "sometime in the future" and "sometime in the past". In this early work, Prior considered time to be linear. In 1958 however, he received a letter from Saul Kripke, who pointed out that this assumption is perhaps unwarranted. In a development that foreshadowed a similar one in computer science, Prior took this under advisement, and developed two theories of branching time, which he called "Ockhamist" and "Peircean". Between 1958 and 1965 Prior also corresponded with Charles Leonard Hamblin, and a number of early developments in the field can be traced to this correspondence, for example Hamblin implications. Prior published his most mature work on the topic, the book Past, Present, and Future in 1967. He died two years later.
Along with tense logic, Prior constructed a few systems of positional logic, which inherited their main ideas from Łoś. Work in positional temporal logics was continued by Nicholas Rescher in the 60s and 70s. In such works as Note on Chronological Logic (1966), On the Logic of Chronological Propositions (1968), Topological Logic (1968), and Temporal Logic (1971) he researched connections between Łoś' and Prior's systems. Moreover, he proved that Prior's tense operators could be defined using a realization operator in specific positional logics. Rescher, in his work, also created more general systems of positional logics. Although the first ones were constructed for purely temporal uses, he proposed the term topological logics for logics that were meant to contain a realization operator but had no specific temporal axioms—like the clock axiom.
The binary temporal operators Since and Until were introduced by Hans Kamp in his 1968 Ph.D. thesis, which also contains an important result relating temporal logic to first-order logic—a result now known as Kamp's theorem.
Two early contenders in formal verification were linear temporal logic, a linear-time logic by Amir Pnueli, and computation tree logic (CTL), a branching-time logic by Mordechai Ben-Ari, Zohar Manna and Amir Pnueli. An almost equivalent formalism to CTL was suggested around the same time by E. M. Clarke and E. A. Emerson. The fact that the second logic can be decided more efficiently than the first does not reflect on branching- and linear-time logics in general, as has sometimes been argued. Rather, Emerson and Lei show that any linear-time logic can be extended to a branching-time logic that can be decided with the same complexity.
Łoś's positional logic
Łoś’s logic was published as his 1947 master’s thesis Podstawy Analizy Metodologicznej Kanonów Milla (The Foundations of a Methodological Analysis of Mill’s Methods). His philosophical and formal concepts could be seen as continuations of those of the Lviv–Warsaw School of Logic, as his supervisor was Jerzy Słupecki, disciple of Jan Łukasiewicz. The paper was not translated into English until 1977, although Henryk Hiż presented in 1951 a brief, but informative, review in the Journal of Symbolic Logic. This review contained core concepts of Łoś’s work and was enough to popularize his results among the logical community. The main aim of this work was to present Mill's canons in the framework of formal logic. To achieve this goal the author researched the importance of temporal functions in the structure of Mill's concept. Having that, he provided his axiomatic system of logic that would fit as a framework for Mill's canons along with their temporal aspects.
Syntax
The language of the logic first published in Podstawy Analizy Metodologicznej Kanonów Milla (The Foundations of a Methodological Analysis of Mill’s Methods) consisted of:
first-order logic operators ‘¬’, ‘∧’, ‘∨’, ‘→’, ‘≡’, ‘∀’ and ‘∃’
realization operator U
functional symbol δ
propositional variables p1,p2,p3,...
variables denoting time moments t1,t2,t3,...
variables denoting time intervals n1,n2,n3,...
The set of terms (denoted by S) is constructed as follows:
variables denoting time moments or intervals are terms
if τ ∈ S and n is a time interval variable, then δ(τ, n) ∈ S
The set of formulas (denoted by For) is constructed as follows:
all first-order logic formulas are valid
if τ ∈ S and p is a propositional variable, then U(τ, p) ∈ For
if A ∈ For, then ¬A ∈ For
if A ∈ For and B ∈ For, then A ∧ B, A ∨ B, A → B, A ≡ B ∈ For
if A ∈ For and υ is a propositional, moment or interval variable, then ∀υ A ∈ For and ∃υ A ∈ For
Original Axiomatic System
Prior's tense logic (TL)
The sentential tense logic introduced in Time and Modality has four (non-truth-functional) modal operators (in addition to all usual truth-functional operators in first-order propositional logic).
P: "It was the case that..." (P stands for "past")
F: "It will be the case that..." (F stands for "future")
G: "It always will be the case that..."
H: "It always was the case that..."
These can be combined if we let π be an infinite path:
FGφ: "At a certain point, φ is true at all future states of the path"
GFφ: "φ is true at infinitely many states on the path"
From P and F one can define G and H, and vice versa:
Gφ ≡ ¬F¬φ and Hφ ≡ ¬P¬φ; conversely, Fφ ≡ ¬G¬φ and Pφ ≡ ¬H¬φ.
Syntax and semantics
A minimal syntax for TL is specified with the following BNF grammar:
φ ::= a | ¬φ | (φ ∧ φ) | Gφ | Hφ
where a is some atomic formula.
Kripke models are used to evaluate the truth of sentences in TL. A pair (T, <) of a set T and a binary relation < on T (called "precedence") is called a frame. A model is given by a triple (T, <, V) of a frame and a valuation V that assigns to each pair (a, t) of an atomic formula and a time value some truth value. The notion "φ is true in a model M = (T, <, V) at time t" is abbreviated M ⊨ φ[t]. With this notation,
Given a class F of frames, a sentence φ of TL is
valid with respect to F if for every model M = (T, <, V) with (T, <) in F and for every t in T, M ⊨ φ[t]
satisfiable with respect to F if there is a model M = (T, <, V) with (T, <) in F such that for some t in T, M ⊨ φ[t]
a consequence of a sentence ψ with respect to F if for every model M = (T, <, V) with (T, <) in F and for every t in T, if M ⊨ ψ[t], then M ⊨ φ[t]
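To make these semantics concrete, here is a minimal sketch that evaluates TL formulas over a finite frame. The tuple encoding of formulas and all names are illustrative assumptions, and a finite frame only approximates the general definition above.

```python
# Minimal sketch: evaluating tense-logic formulas on a finite Kripke frame.
# Formulas are nested tuples with string atoms, e.g. ("G", "rain").
# Encoding and names are illustrative assumptions.

def holds(phi, t, T, prec, V):
    """True iff phi is true at time t in the model (T, prec, V)."""
    if isinstance(phi, str):               # atomic formula
        return V(phi, t)
    op, *args = phi
    if op == "not":
        return not holds(args[0], t, T, prec, V)
    if op == "and":
        return all(holds(a, t, T, prec, V) for a in args)
    if op == "G":                          # true at all later times
        return all(holds(args[0], s, T, prec, V) for s in T if prec(t, s))
    if op == "H":                          # true at all earlier times
        return all(holds(args[0], s, T, prec, V) for s in T if prec(s, t))
    raise ValueError(f"unknown operator: {op}")

# F and P are the defined duals: F phi = not G not phi, P phi = not H not phi.
def F(phi): return ("not", ("G", ("not", phi)))
def P(phi): return ("not", ("H", ("not", phi)))

# Example: times 0..3 with the usual order; "rain" is true only at time 2.
T = range(4)
prec = lambda s, t: s < t
V = lambda a, t: a == "rain" and t == 2

print(holds(F("rain"), 0, T, prec, V))      # True: rain at some later time
print(holds(("G", "rain"), 2, T, prec, V))  # False: no rain at time 3
```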
Many sentences are only valid for a limited class of frames. It is common to restrict the class of frames to those with a relation < that is transitive, antisymmetric, reflexive, trichotomic, irreflexive, total, dense, or some combination of these.
A minimal axiomatic logic
Burgess outlines a logic that makes no assumptions on the relation <, but allows for meaningful deductions, based on the following axiom schema:
φ, where φ is a tautology of first-order logic
G(φ→ψ)→(Gφ→Gψ)
H(φ→ψ)→(Hφ→Hψ)
φ→GPφ
φ→HFφ
with the following rules of deduction:
given φ→ψ and φ, deduce ψ (modus ponens)
given a tautology φ, infer Gφ
given a tautology φ, infer Hφ
One can derive the following rules:
Becker's rule: given φ→ψ, deduce Tφ→Tψ, where T is a tense: any sequence made of G, H, F, and P.
Mirroring: given a theorem φ, deduce its mirror statement φ§, which is obtained by replacing G by H (and so F by P) and vice versa.
Duality: given a theorem φ, deduce its dual statement φ*, which is obtained by interchanging ∧ with ∨, G with F, and H with P.
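Mirroring and duality are purely syntactic transformations, so they are straightforward to mechanize. A minimal sketch, assuming formulas are encoded as nested tuples (the representation and names are illustrative):

```python
# Minimal sketch: mirroring and duality as syntactic transformations on
# tense-logic formulas encoded as nested tuples, e.g. ("G", ("and", "p", "q")).
# The encoding is an illustrative assumption.

MIRROR = {"G": "H", "H": "G", "F": "P", "P": "F"}
DUAL = {"G": "F", "F": "G", "H": "P", "P": "H", "and": "or", "or": "and"}

def mirror(phi):
    """Replace G by H (and F by P) and vice versa, recursively."""
    if isinstance(phi, str):
        return phi
    op, *args = phi
    return (MIRROR.get(op, op), *(mirror(a) for a in args))

def dual(phi):
    """Interchange 'and' with 'or', G with F, and H with P, recursively."""
    if isinstance(phi, str):
        return phi
    op, *args = phi
    return (DUAL.get(op, op), *(dual(a) for a in args))

phi = ("G", ("and", "p", ("F", "q")))
print(mirror(phi))  # ('H', ('and', 'p', ('P', 'q')))
print(dual(phi))    # ('F', ('or', 'p', ('G', 'q')))
```

Both maps are involutions: applying mirror (or dual) twice returns the original formula, reflecting the symmetry of the derived rules.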
Translation to predicate logic
Burgess gives a Meredith translation M from statements in TL into statements in first-order logic with one free variable x₀ (representing the present moment). This translation is defined recursively as follows:
M(a) = a*(x₀)
M(¬φ) = ¬M(φ)
M(φ ∧ ψ) = M(φ) ∧ M(ψ)
M(Gφ) = ∀x₁(x₀ < x₁ → M(φ)⁺)
M(Hφ) = ∀x₁(x₁ < x₀ → M(φ)⁺)
where φ⁺ is the sentence φ with all variable indices incremented by 1 and a* is a one-place predicate defined by a*(t) ↔ V(a, t).
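Read as an algorithm, the translation is a structural recursion over the formula. The sketch below builds the first-order sentence as a string; the encoding, the helper names, and the treatment of G and H as the primitive tenses are assumptions for illustration.

```python
# Minimal sketch: the recursive translation from tense logic into first-order
# logic with free variable x0. Formulas are nested tuples with string atoms;
# the string encoding and names are illustrative assumptions.
import re

def bump(s):
    """Increment every variable index (x0 -> x1, x1 -> x2, ...) in a string."""
    return re.sub(r"x(\d+)", lambda m: f"x{int(m.group(1)) + 1}", s)

def translate(phi):
    if isinstance(phi, str):                   # atomic formula a
        return f"{phi}*(x0)"
    op, *args = phi
    if op == "not":
        return f"¬{translate(args[0])}"
    if op == "and":
        return f"({translate(args[0])} ∧ {translate(args[1])})"
    if op == "G":                              # at all later times
        return f"∀x1(x0 < x1 → {bump(translate(args[0]))})"
    if op == "H":                              # at all earlier times
        return f"∀x1(x1 < x0 → {bump(translate(args[0]))})"
    raise ValueError(f"unknown operator: {op}")

print(translate(("G", ("not", "p"))))
# ∀x1(x0 < x1 → ¬p*(x1))
```

Incrementing the indices before binding the fresh quantifier is what keeps nested tenses from capturing each other's variables.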
Temporal operators
Temporal logic has two kinds of operators: logical operators and modal operators. Logical operators are the usual truth-functional operators (¬, ∨, ∧, →). The modal operators used in linear temporal logic and computation tree logic are defined as follows.
Xφ (or Nφ, "next"): φ has to hold at the next state.
Fφ ("finally", or eventually): φ has to hold at some future state.
Gφ ("globally", or always): φ has to hold at all future states.
φUψ ("until"): ψ holds at some future state, and φ has to hold at every state until then.
φRψ ("release"): ψ has to hold up to and including the first state where φ holds, and forever if φ never holds.
Aφ ("all"): φ has to hold on all paths starting from the current state.
Eφ ("exists"): there is at least one path starting from the current state on which φ holds.
Alternate symbols:
operator R is sometimes denoted by V
The operator W is the weak until operator: fWg is equivalent to fUg ∨ Gf.
Unary operators are well-formed formulas whenever their argument is well-formed. Binary operators are well-formed formulas whenever their two arguments are well-formed.
In some logics, some operators cannot be expressed. For example, the N operator cannot be expressed in the temporal logic of actions.
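To illustrate how until, weak until, and globally relate, here is a minimal sketch that evaluates them over finite traces; finite traces are a simplification of the usual infinite-trace semantics, and all names are illustrative assumptions.

```python
# Minimal sketch: Globally, Until and Weak-until over a finite trace, where a
# trace is a list of states and f, g are predicates on states. Finite traces
# simplify the usual infinite-trace semantics; names are illustrative.

def G(f, trace):
    """Globally: f holds at every position of the trace."""
    return all(f(s) for s in trace)

def U(f, g, trace):
    """Until: g holds at some position, and f holds at every earlier one."""
    for s in trace:
        if g(s):
            return True
        if not f(s):
            return False
    return False  # g never held

def W(f, g, trace):
    """Weak until: f U g, or else f holds forever (here: to the end)."""
    return U(f, g, trace) or G(f, trace)

trace = [0, 1, 2, 3]
f = lambda s: s < 10   # true everywhere on this trace
g = lambda s: s == 99  # never true on this trace

print(U(f, g, trace))  # False: g never happens
print(W(f, g, trace))  # True: f holds throughout, matching fUg ∨ Gf
```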
Temporal logics
Temporal logics include:
Some systems of positional logic
Linear temporal logic (LTL) temporal logic without branching timelines
Computation tree logic (CTL) temporal logic with branching timelines
Interval temporal logic (ITL)
Temporal logic of actions (TLA)
Signal temporal logic (STL)
Timestamp temporal logic (TTL)
Property specification language (PSL)
CTL*, which generalizes LTL and CTL
Hennessy–Milner logic (HML)
Modal μ-calculus, which includes as a subset HML and CTL*
Metric temporal logic (MTL)
Metric interval temporal logic (MITL)
Timed propositional temporal logic (TPTL)
Truncated Linear Temporal Logic (TLTL)
Hyper temporal logic (HyperLTL)
A variation, closely related to temporal or chronological or tense logics, are modal logics based upon "topology", "place", or "spatial position".
See also
HPO formalism
Kripke structure
Automata theory
Chomsky grammar
State transition system
Duration calculus (DC)
Hybrid logic
Modal logic
Temporal logic in finite-state verification
Reo Coordination Language
Research Materials: Max Planck Society Archive
Notes
References
Mordechai Ben-Ari, Zohar Manna, Amir Pnueli: The Temporal Logic of Branching Time. POPL 1981: 164–176
Amir Pnueli: The Temporal Logic of Programs FOCS 1977: 46–57
Venema, Yde, 2001, "Temporal Logic," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
E. A. Emerson and Chin-Laung Lei, "Modalities for model checking: branching time logic strikes back", in Science of Computer Programming 8, pp. 275–306, 1987.
E. A. Emerson, "Temporal and modal logic", Handbook of Theoretical Computer Science, Chapter 16, the MIT Press, 1990
A Practical Introduction to PSL, Cindy Eisner, Dana Fisman
preprint. Historical perspective on how seemingly disparate ideas came together in computer science and engineering. (The mention of Church in the title of this paper is a reference to a little-known 1957 paper, in which Church proposed a way to perform hardware verification.)
Further reading
External links
Stanford Encyclopedia of Philosophy: "Temporal Logic"—by Anthony Galton.
Temporal Logic by Yde Venema, formal description of syntax and semantics, questions of axiomatization. Treating also Kamp's dyadic temporal operators (since, until)
Notes on games in temporal logic by Ian Hodkinson, including a formal description of first-order temporal logic
CADP – provides generic model checkers for various temporal logic
PAT is a powerful free model checker, LTL checker, simulator and refinement checker for CSP and its extensions (with shared variable, arrays, wide range of fairness).
Philosophy of time | Temporal logic | [
"Physics"
] | 3,457 | [
"Spacetime",
"Philosophy of time",
"Physical quantities",
"Time"
] |
321,544 | https://en.wikipedia.org/wiki/Interrobang | The interrobang (), also known as the interabang (often represented by any of the following: ?!, !?, ?!?,?!!, !?? or !?!), is an unconventional punctuation mark intended to combine the functions of the question mark (also known as the interrogative point) and the exclamation mark (also known in the jargon of printers and programmers as a "bang"). The glyph is a ligature of these two marks and was first proposed in 1962 by Martin K. Speckter.
Application
A sentence ending with an interrobang states a question in an excited manner, expresses excitement, disbelief or confusion in the form of a question, or asks a rhetorical question.
For example:
You call that a hat‽
Are you out of your mind‽
Writers using informal language may use several alternating question marks and exclamation marks for even more emphasis. However, this is regarded as poor style in formal writing.
History
Historically, writers have used multiple consecutive punctuation marks to end a sentence expressing both surprise and question.
Invention
American Martin K. Speckter (June 14, 1915 – February 14, 1988) conceptualized the interrobang in 1962. As the head of an advertising agency, Speckter believed that advertisements would look better if copywriters conveyed surprised rhetorical questions using a single mark. He proposed the concept of a single punctuation mark in an article in the magazine TYPEtalks. Speckter solicited possible names for the new character from readers. Contenders included exclamaquest and exclarotive, but he settled on interrobang. He chose the name to reference the punctuation marks that inspired it: interrogatio is Latin for "rhetorical question" or "cross-examination"; bang is printers' slang for the exclamation mark. Graphic treatments for the new mark were also submitted in response to the article.
Early interest
In 1965, Richard Isbell created the Americana typeface for American Type Founders and included the interrobang as one of the characters. In 1968, an interrobang key was available on some Remington typewriters. In the 1970s, replacement interrobang keycaps and typefaces were available for some Smith-Corona typewriters.
The interrobang was in vogue for much of the 1960s; the word interrobang appeared in some dictionaries, and the mark was used in magazine and newspaper articles.
Continued support
Most fonts do not include the interrobang, but it has not disappeared. Lucida Grande, the default font for many UI elements of legacy versions of Apple's OS X operating system, includes the interrobang, and Microsoft provides several versions of the interrobang in the Wingdings 2 character set (on the right bracket and tilde keys on US keyboard layouts), included with Microsoft Office. It was accepted into Unicode and is included in several fonts, including Lucida Sans Unicode, Arial Unicode MS, and Calibri, the default font in the Office 2007, 2010, and 2013 suites.
Upside-down interrobang
An upside-down interrobang (combining ¿ and ¡, Unicode character: ⸘), suitable for starting phrases in Spanish, Galician and Asturian—which use inverted question and exclamation marks—is called an "inverted interrobang" or a gnaborretni (interrobang spelled backwards), but the latter is rarely used. In current practice, interrobang-like emphatic ambiguity in Hispanic languages is usually achieved by including both sets of punctuation marks one inside the other (¿¡De verdad!? or ¡¿De verdad?! [Really!?]). Older usage, still official but not widespread, recommended mixing the punctuation marks: ¡Verdad? or ¿Verdad!
Codepoint
The symbol is encoded in Unicode at codepoint U+203D (‽). The inverted interrobang is at codepoint U+2E18 (⸘). Single-character versions of the double-glyph versions are also available, at codepoints U+2048 (⁈) and U+2049 (⁉).
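A quick way to confirm these codepoints from a script; a minimal sketch using Python's standard unicodedata module:

```python
# Print the interrobang and related characters with their Unicode names.
import unicodedata

for cp in (0x203D, 0x2E18, 0x2048, 0x2049):
    ch = chr(cp)
    print(f"U+{cp:04X} {ch} {unicodedata.name(ch)}")
```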
Examples of use
The State Library of New South Wales, in Australia, uses an interrobang as its logo, as does the educational publishing company Pearson, which thus intends to convey "the excitement and fun of learning".
Chief Judge Frank H. Easterbrook used an interrobang in the 2012 United States Seventh Circuit opinion Robert F. Booth Trust v. Crowley.
Australian Federal Court Justice Michael Wigney used an interrobang in the first paragraph of his 2018 judgment in Faruqi v Latham [2018] FCA 1328 (defamation proceedings between former Federal Opposition Leader Mark Latham and political campaigner and writer Osman Faruqi).
In chess, an interrobang is used to represent a dubious move, one that is questionable but possibly has merits. (See also the evaluation symbols ?! (dubious move) and !? (interesting move).)
See also
Irony mark (⸮)
Inverted question and exclamation marks (¿¡)
Interrabang — Italian film
Interbang — Italian television series
Interrobang — album by Switchfoot
References
External links
National Punctuation Day Reignites: Interrobang Passion
99 Percent Invisible podcast episode and article about the interrobang
Typographical symbols
Punctuation
Symbols introduced in 1962 | Interrobang | [
"Mathematics"
] | 1,108 | [
"Symbols",
"Typographical symbols"
] |
321,614 | https://en.wikipedia.org/wiki/Supervised%20injection%20site | Supervised injection sites (SIS) or drug consumption rooms (DCRs) are a health and social response to drug-related problems. They are fixed or mobile spaces where people who use drugs are provided with sterile drug use equipment and can use illicit drugs under the supervision of trained staff. They are usually located in areas where there is an open drug scene and where injecting in public places is common. The primary target group for DCR services are people who engage in risky drug use.
The geographical distribution of DCRs is uneven, both at the international and regional levels. In 2022, there were over 100 DCRs operating globally, with services in Belgium, Denmark, France, Germany, Greece, Luxembourg, the Netherlands, Norway, Portugal and Spain, as well as in Switzerland, Australia, Canada, Mexico and the USA.
Primarily, DCRs aim to prevent drug-related overdose deaths, reduce the acute risks of disease transmission through unhygienic injecting, and connect people who use drugs with addiction treatment and other health and social services. They can also aim to minimise public nuisance.
Proponents say they save lives and connect users to services, while opponents believe they promote drug use and attract crime to the community around the site. Supervised injection sites are part of a harm reduction approach towards drug problems.
Terminology
Supervised injection sites are also known as overdose prevention centers (OPCs), supervised injection facilities, safe consumption rooms, safe injection sites, safe injection rooms, fix rooms, fixing rooms, safer injection facilities (SIF), drug consumption facilities (DCF), drug consumption rooms (DCRs), medically supervised injecting centres (MSICs) and medically supervised injecting rooms (MSIRs).
Facilities
Australia
The legality of supervised injection is handled on a state-by-state basis. New South Wales trialed a supervised injection site in Sydney in 2001, which was made permanent in 2010. After several years of community activism, Victoria agreed to open a supervised injection site in Melbourne's North Richmond neighbourhood in 2018 on a trial basis. In 2020 the trial was extended for a further three years, and the site remains open as of 2024.
A second site for Melbourne's CBD was approved and was to be placed in a building on Flinders Street which had previously housed Yooralla. However, as of 2024, the site has been rejected by Premier Jacinta Allan, who cited disagreements over location and preferred to set up a new community health and pharmacotherapy centre instead.
Europe
During the 1990s legal facilities emerged in cities in Switzerland, Germany and the Netherlands. In the first decade of 2000, facilities opened in Spain, Luxembourg, and Norway.
Whereas injection facilities in Europe often evolved from something else, such as various social and medical outreach services or homeless shelters, the degree and quality of actual supervision varies. The history of the European centers also means that there has been little or no systematic collection of the data needed for a proper evaluation of the scheme's effectiveness. At the beginning of 2009 there were 92 facilities operating in 61 cities, including 30 cities in the Netherlands, 16 cities in Germany and 8 cities in Switzerland. Denmark passed a law allowing municipalities to run "fix rooms" in 2012, and by the end of 2013 there were three open.
As of July 2022, according to the European Monitoring Centre for Drugs and Drug Addiction, Belgium has one facility, Denmark five, France two, Germany 25, Greece one, Luxembourg two, the Netherlands 25, Norway two, Portugal two, Spain 13, and Switzerland 14.
Ireland
Ireland has legislation (as of May 2017) to permit the opening of a service, the Misuse of Drugs (Supervised Injecting Facilities) Bill 2017; however, the project has been halted by planning concerns.
Netherlands
The first professionally staffed service where drug injection was accepted emerged in the Netherlands during the early 1970s as part of the "alternative youth service" provided by the St. Paul's church in Rotterdam. At its peak it had two centers that combined an informal meeting place with a drop-in center providing basic health care, food and a laundering service. One of the centers was also a pioneer in providing needle-exchange. Its purpose was to improve the psychosocial function and health of its clients. The centers received some support from law enforcement and local government officials, although they were not officially sanctioned until 1996.
Switzerland
The first modern supervised consumption site was opened in Bern, Switzerland in June, 1986. Part of a project combatting HIV, the general concept of the café was a place where simple meals and beverages would be served, and information on safe sex, safe drug use, condoms and clean needles provided. Social workers providing counselling and referrals were also present. An injection room was not originally conceived, however, drug users began to use the facility for this purpose, and this soon became the most attractive aspect of the café. After discussions with the police and legislature, the café was turned into the first legally sanctioned drug consumption facility provided that no one under the age of 18 was admitted.
United Kingdom
The United Kingdom opened one (officially unsanctioned) facility in Glasgow in September 2020. It was opened by Peter Krykant, a local drugs worker; however, lack of funding and support led to its closure in May 2021. In nine months of operation, 894 injection events were recorded at the facility and volunteers reported attending to nine overdose events, seven opioid overdoses, and two involving powder cocaine; but there were no fatalities.
In 2023, the Lord Advocate—Scotland's chief legal officer—announced that the Crown Office and Procurator Fiscal Service would institute a policy of not criminally prosecuting those using approved supervised drug consumption sites. Police Scotland have also confirmed they will exercise discretion in not prosecuting those using such a facility. An official facility is planned to open in Glasgow in October 2024.
Latin America
The first site in Latin America opened in Bogotá, Colombia, in October 2024.
North America
Canada
There are 39 government-authorized SCS in Canada as of July 2019: 7 in Alberta, 9 in British Columbia, 19 in Ontario, and 4 in Quebec. An exemption to controlled substances law under the Canadian Criminal Code is granted inside the facilities, but drug possession remains illegal outside the facility and there is no buffer zone around the facility. Canada's first SCS, Insite, in the Downtown Eastside of Vancouver, commenced operation in 2003.
In August 2020, ARCHES Lethbridge in Lethbridge, Alberta, the largest SCS in North America, closed shortly after Alberta revoked its grant for misuse of grant funds. Shortly after opening in February 2018, ARCHES Lethbridge found itself repeatedly requiring police intervention and/or emergency medical services for opioid-related issues; indeed, three weeks after its closure, the city noted a 36% decline in opioid-related EMS requests.
The average per-capita operating cost of government-sanctioned sites is reported to be CAD $600 per unique client, with the exception of ARCHES Lethbridge, which had a disproportionately high cost of CAD $3,200 per unique client.
In September 2020, a group in Lethbridge, Alberta, led by an ARCHES employee started hosting an unauthorized SCS in public places in a tent. The group had neither authorization to operate an SCS nor a permit to pitch a tent in the park. The organizer was issued citations for the tent, and the Lethbridge Police Service advised that users of the unauthorized SCS would be arrested for drug possession, because exemptions do not apply to unauthorized sites. The opening of this illegal drug consumption tent was controversial and became a subject of discussion at a City Council meeting.
United States
Clandestine injection sites have existed for years. A New England Journal of Medicine study from July 2020 reports that an illegal supervised consumption site has been operating in an "undisclosed" city in the U.S. since 2014, where over 10,000 doses of illegal drugs have been injected over a five-year period. Supervised consumption sites with some degree of official sanction from a state or local government have been contemplated, but are rare due to the federal regulation of drugs and the explicit opposition of federal law enforcement to any form of decriminalization.
Local governments in Seattle, Boston, Vermont, Delaware, and Portland, Oregon have considered opening safe injection sites as well. Plans to open an injection site in Somerville, Massachusetts in 2020 were delayed by the COVID-19 pandemic.
The governors of California and Vermont both vetoed supervised consumption site bills in 2022, and Pennsylvania's Senate voted for a ban on them in 2023.
Denver (2018)
In November, 2018, Denver city council approved a pilot program for a safe injection site with a 12-to-1 vote. The Drug Enforcement Administration's Denver field office and the United States Attorney's office for the District of Colorado issued a statement together on the proposed site stating that "the operation of such sites is illegal under federal law. 21 U.S.C. Sec. 856 prohibits the maintaining of any premises for the purpose of using any controlled substance."
New York City (2021)
The first government-authorized supervised injection sites in the US (operated by OnPoint NYC) began operating in New York City in November 2021.
A peer-reviewed study of the first two months of the OPC's operation has been published in JAMA. News media have been allowed access to the OPC sites as well.
Public criticism of the New York City OPCs has so far been limited. One problem brought up by the leadership of the Metropolitan Transportation Authority is how use migrates from the centers to nearby New York City Subway stations when the OPCs are closed. In response, Mayor Eric Adams called for the centers to be funded to operate continuously.
Though sanctioned by the city, the sites arguably remain illegal under federal law, and rely on non-enforcement by federal officials to keep operating. The United States Department of Justice, during the Presidency of Joe Biden, has signaled some openness and stated that it is "evaluating supervised consumption sites, including discussions with state and local regulators about appropriate guardrails for such sites, as part of an overall approach to harm reduction and public safety."
Pennsylvania
An organization called Safehouse was hoping to open a safe consumption site in Philadelphia in February 2020 with the support of the city government. Immediate neighbors strongly objected to the site, and the owner of the first proposed location withdrew a lease offer under pressure. United States Attorney William McSwain sued to stop the Safehouse project, losing in district court in October 2019, but winning an injunction in January 2021 from a 3-judge panel of the United States Court of Appeals for the Third Circuit. Safehouse said its proposed operation was "a legitimate medical intervention, not illicit drug dens" and claimed protection under the Free Exercise Clause because "religious beliefs compel them to save lives at the heart of one of the most devastating overdose crises in the country".
In May 2023, the Pennsylvania Senate passed a bill, by a 41-9 vote, to ban supervised injection sites anywhere within the state; it is pending House approval. Pennsylvania governor Josh Shapiro expressed support for the bill.
San Francisco, California
For 11 months between January and December 2022, drug users consumed drugs within a center established by the health department. The center "morphed" from a service linking clients to social services into a de facto drug usage site.
Virtual overdose monitoring services / non physical site
Virtual overdose monitoring services are similar to safe consumption rooms. These programs use phone lines or smartphone apps to monitor clients while they use drugs, contacting emergency services if the caller becomes unresponsive. These services include the National Overdose Response Service in Canada and Never Use Alone in the US, as well as the smartphone apps Canary and Brave.
Evaluations
In the late 1990s there were a number of studies available on consumption rooms in Germany, Switzerland and the Netherlands. "The reviews concluded that the rooms contributed to improved public and client health and reductions in public nuisance but stressed the limitations of the evidence and called for further and more comprehensive evaluation studies into the impact of such services." To that end, the two non-European injecting facilities, Australia's Sydney Medically Supervised Injecting Centre (MSIC) and Canada's Vancouver Insite Supervised Injection Site have had more rigorous research designs as a part of their mandate to operate.
The NSW state government has provided extensive funding for ongoing evaluations of the Sydney MSIC, with a formal comprehensive evaluation produced in 2003, 18 months after the centre was opened. Later evaluations studied various aspects of the operation: service provision (2005), community attitudes (2006), referral and client health (2007), and a fourth (2007) on service operation and overdose-related events. Other evaluations of drug-related crime in the area were completed in 2006, 2008 and 2010, the SAHA International cost-effectiveness evaluation in 2008 and a final independent KPMG evaluation in 2010.
The Vancouver Insite facility was evaluated during the first three years of its operation by researchers from the BC Center for Excellence in HIV/AIDS with published and some unpublished reports available. In March 2008 a final report was released that evaluated the performance of the Vancouver Insite against its stated objectives.
Some posit that safe injection sites help reduce the number of improperly discarded needles in public, as found in a 2018 report by the Canadian Mental Health Association. Prior to the establishment of a supervised injection site in Vesterbro, Copenhagen, Denmark, in 2012, up to 10,000 syringes were found on its streets each week. Within a year of the supervised injection site opening, this number fell to below 1,000.
There has been some attempt to standardise evaluation reporting across supervised injection sites in a type of core outcome set, developed by researchers from the United States funded by the Drug Policy Alliance; however, the intermediary process by which this consensus set was generated is unpublished.
The Expert Advisory Committee found that Insite had referred clients such that it had contributed to an increased use of detoxification services and increased engagement in treatment. Insite had encouraged users to seek counseling. Funding has been supplied by the Canadian government for detoxification rooms above Insite.
SIS sites and social disorder
A longitudinal study, the Urban Social Issues Study (USIS), conducted between January 2018 and February 2019, undertaken by University of Lethbridge professor Em M. Pijl and commissioned by the City of Lethbridge, Alberta, Canada, explored "any unintended consequences" of supervised consumption services (SCS) within the "surrounding community". The USIS study was undertaken in response to a drug crisis in Lethbridge that impacted "many neighbourhoods in many different ways." Researchers studied the "perceptions and observations of social disorder by business owners and operators" in a neighborhood where SCS was introduced. The report cautioned that drug abuse-related antisocial behavior, in Lethbridge in particular and in cities in general, has increased as the "quantity and type of drugs in circulation" increases. As the use of crystal meth eclipses the use of opiates, users exhibit more "erratic behavior"; crystal meth and other "uppers" also "require more frequent use" than "downers" like opiates. The report also notes that not all social disorder in communities that have an SCS can be "unequivocally and entirely attributed" to the SCS, partly because of the "ongoing drug epidemic." Other variables that explain increased antisocial behaviour include an increase in the number of people gathering outdoors as part of seasonal trends with warmer temperatures.
Philadelphia's WPVI-TV Action News team traveled to Toronto, Canada in 2018 to make first-hand field observations of several safe consumption sites already in operation. A drug user interviewed by the reporter said she visits the site to obtain supplies but does not stay, instead using them to inject drugs elsewhere, and acknowledged that the site attracts drug users and drug dealers. A neighbor interviewed by the reporter said there was drug use before, but that it has increased since the site opened.
WPVI-TV's Chad Pradelli narrated the news team's observation as: "Over the two days we sat outside several of Toronto's safe injection facilities, we witnessed prevalent drug use out front, drug deals, and even violence. We watched as one man harassed several people passing by on the sidewalk, even putting one in a chokehold. One guy decided to fight back and security arrived."
Sydney, Australia
The Sydney MSIC client survey conducted in 2005 found that public injecting (defined as injecting in a street, park, public toilet or car), a high-risk practice with both health and public amenity impacts, was reported as the main alternative to injecting at the MSIC by 78% of clients; 49% of clients indicated on registration that they would have resorted to public injection had the MSIC not been available that day. From this, the evaluators calculated that the centre had averted a total of 191,673 public injections.
Vancouver, Canada
Observations before and after the opening of the Insite facility in Vancouver, British Columbia, Canada indicated a reduction in public injecting. "Self-reports" of INSITE users and "informal observations" at INSITE, Sydney and some European SISs suggest that SISs "can reduce rates of public self-injection."
Alberta, Canada
In response to the opioid epidemic in the province of Alberta, Alberta Health Services (AHS), Alberta Health, Indigenous Relations, Justice and Solicitor General (including the Office of the Chief Medical Examiner), and the College of Physicians and Surgeons of Alberta met to discuss potential solutions. In the November 2016 Alberta Health report that resulted from that meeting, the introduction of supervised consumption services, along with numerous other responses to the crisis, was listed as a viable solution. The 2016 Alberta Health report stated that SIS "reduce overdose deaths, improve access to medical and social supports, and are not found to increase drug use and criminal activity."
According to a January 2020 Edmonton Journal editorial, by 2020 Alberta had seven SIS with a "100-per-cent success rate at reversing the more than 4,300 overdoses" that occurred from November 2017, when the first SIS opened in the province, until August 2019.
Calgary: Safeworks Supervised Consumption Services (SCS)
Safeworks, located at the Sheldon M. Chumir Health Centre, operated for several months as a temporary facility before becoming fully operational on April 30, 2018, with services available 24 hours a day, 7 days a week. From its initial launch on October 30, 2017 to March 31, 2019, 71,096 people used its services, and staff "responded to a total of 954 overdoses." In one month alone, "848 unique individuals" made 5,613 visits to the SCS. Its program is monitored by the Province of Alberta in partnership with the Institute of Health Economics.
In the City of Lethbridge's commissioned 2020 102-page report, the author noted that "Calgary's Sheldon Chumir SCS has received considerable negative press" about "rampant" social disorder around the SCS, which sits in a mixed residential and commercial neighbourhood. According to a May 2019 Calgary Herald article, the 250-metre radius around the Safeworks safe consumption site, located within the Sheldon M. Chumir Centre, has seen a major spike in crime since its opening, and a police report described the area as having become "ground zero for drug, violent and property crimes in the downtown." Within this zone, 2018 police statistics showed a 276% increase in call volume for drug-related matters and a 29% overall increase relative to the three-year average. In May 2019, the Calgary Herald reported that Health Canada had announced in February 2019 its approval for Safeworks to operate for another year, conditional on addressing neighborhood safety issues, drug debris and public disorder. There had been a plan for a mobile safe consumption site intended to operate in Forest Lawn, Calgary, Alberta; however, in response to the statistics at the permanent site at the Sheldon M. Chumir Centre, community leaders withdrew their support.
By September 2019, the number of overdose treatments at Safeworks had spiked. The staff were overwhelmed, and 13.5% of them took psychological leave. Staff had dealt with 134 overdose reversals in 2019, 300% more than in the same period of the previous year; the centre's director reported an average of one overdose reversal every other day.
Lethbridge: ARCHES (Closed August 2020)
In response to the mounting death toll of drug overdoses in Lethbridge, the city opened its first SCS in February 2018. The controversial SCS, known as ARCHES, was once the busiest SCS in North America.
The province defunded ARCHES after an audit ordered by the government reported misuse and mismanagement of public monies. Around 70% of ARCHES' funding came from the province, and the site shut down on August 31, 2020 after the funding was revoked. The audit found "funding misappropriation, non-compliance with grant agreement [and] inappropriate governance and organizational operations." The Alberta government requested that the site be investigated for possible criminal misuse of funds. Shortly afterwards, Lethbridge Police Service announced that the funds, which had previously been reported as missing, had been present and accounted for in bank accounts belonging to the SCS. Acting Inspector Pete Christos stated that the initial auditors did not have the means to determine whether money was missing, and confirmed that, during police interviews with ARCHES staff, all spent funds had been accounted for. Police Chief Shahin Mehdizadeh told reporters that the Alberta Justice Specialized Prosecutions Branch supported the police's findings and was not recommending criminal charges.
The City of Lethbridge commissioned a report that included an Urban Social Issues Study (USIS) examining unintended consequences of the SCS site in Lethbridge. The research found that in smaller cities such as Lethbridge, social disorder in communities with an SCS may be more noticeable. The report's author, University of Lethbridge's Em M. Pijl, said that news media tended to focus on the "personal experiences of business owners and residents who work and/or live near an SCS", which contrasts with "scholarly literature that demonstrates a lack of negative neighbourhood impacts related to SCSs."
Impact on community levels of overdose
Over a nine-year period the Sydney MSIC managed 3,426 overdose-related events without a single fatality, while Vancouver's Insite managed 336 overdose events in 2007, also without a fatality.
The 2010 MSIC evaluators found that over 9 years of operation it had made no discernible impact on heroin overdoses at the community level with no improvement in overdose presentations at hospital emergency wards.
Research by injecting room evaluators in 2007 presented statistical evidence that there had been later reductions in ambulance callouts during injecting room hours, but made no mention of sniffer-dog policing, introduced to the drug hot-spots around the injecting room a year after it opened.
Site experience of overdose
While overdoses are managed on-site at Vancouver, Sydney and the facility near Madrid, German consumption rooms are forced to call an ambulance because naloxone may be administered only by doctors. A study of German consumption rooms indicated that an ambulance was called in 71% of emergencies and naloxone administered in 59% of cases. The facilities in Sydney and Frankfurt report that 2.2–8.4% of emergencies result in hospitalization.
Vancouver's Insite yielded 13 overdoses per 10,000 injections shortly after commencement, but by 2009 this had more than doubled to 27 per 10,000. The Sydney MSIC recorded 96 overdoses per 10,000 injections for those using heroin. Commenting on the high overdose rates in the Sydney MSIC, the evaluators suggested that:
"In this study of the Sydney injecting room there were 9.2 (sic) heroin overdoses per 1000 heroin injections in the centre. This rate of overdose is higher than amongst heroin injectors generally. The injecting room clients seem to have been a high-risk group with a higher rate of heroin injections than others not using the injection room facilities. They were more often injecting on the streets and they appear to have taken greater risks and used more heroin whilst in the injecting room."
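The differing denominators in these figures can obscure the comparison; normalising to a common rate (a simple arithmetic sketch in Python, using only the numbers quoted above) shows the per-1000 figure is consistent with the per-10,000 figure:

    # Normalise the quoted overdose rates to a common denominator.
    sydney_per_1000 = 9.2                        # evaluators' figure, per 1,000
    sydney_per_10000 = round(sydney_per_1000 * 10)   # 92, close to the 96 quoted
    insite_early, insite_2009 = 13, 27           # Insite, per 10,000 injections
    print(sydney_per_10000)                      # 92
    print(round(insite_2009 / insite_early, 2))  # 2.08: "more than doubled"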
People living with HIV/AIDS
The results of a research project undertaken at the Dr. Peter Centre (DPC), a 24-bed residential HIV/AIDS care facility located in Vancouver, were published in the Journal of the International AIDS Society in March 2014, stating that the provision of supervised injection services at the facility improved health outcomes for DPC residents. The DPC considers the incorporation of such services as central to a "comprehensive harm reduction strategy" and the research team concluded, through interviews with 13 residents, that "the harm reduction policy altered the structural-environmental context of healthcare services and thus mediated access to palliative and supportive care services", in addition to creating a setting in which drug use could be discussed honestly. Highly active antiretroviral therapy (HAART) medication adherence and survival are cited as two improved health outcomes.
Crime
The Sydney MSIC was judged by its evaluators to have caused no increase in crime and not to have caused a 'honey-pot effect' of drawing users and drug dealers to the Kings Cross area.
Observations before and after the opening of Insite indicated no increases in drug dealing or petty crime in the area. There was no evidence that the facility influenced drug use in the community, but concerns that Insite 'sends the wrong message' to non-users could not be addressed from existing data. The European experience has been mixed.
Financial impropriety by SCS service providers
An audit of the Lethbridge ARCHES SCS by accounting firm Deloitte, ordered by the Alberta provincial government, found the SCS had $1.6 million in unaccounted funds between 2017 and 2018; additionally, it found that $342,943 of grant funds had been expended on senior executive compensation despite the grant agreement allowing only $80,000. Beyond this, an additional $13,000 was spent on parties, staff retreats, entertainment and gift cards, along with numerous other inappropriate expenditures.
The Lethbridge Police Service and Alberta Justice Specialized Prosecutions Branch later contradicted these findings, stating that all funds were present and accounted for in accounts belonging to the agency. When asked why these funds had previously been reported as missing, LPS Acting Inspector Pete Christos stated that the initial auditors did not have the means to investigate the agency's finances, and that all spending had been accounted for during the criminal probe.
Premier Jason Kenney did not dispute the results of the investigation, but declined to reinstate funding, claiming that the site's management had lost the confidence of his government.
Community perception
The Expert Advisory Committee for Vancouver's Insite found that health professionals, local police, the local community and the general public have positive or neutral views of the service, with opposition decreasing over time.
Predicted cost effectiveness
The cost of running Insite is CA$3 million per annum. Mathematical modelling showed benefit-to-cost ratios ranging from 1.5 to 4.02 for each dollar spent. However, the Expert Advisory Committee expressed reservations about the certainty of Insite's cost effectiveness until proper longitudinal studies had been undertaken: the mathematical models for HIV transmissions foregone had not been locally validated, and the modelling of lives saved by the facility had likewise not been validated.
See also
References
External links
Drug culture
Drug overdose
Drug policy
Drug safety
Epidemiology
Harm reduction
Medical emergencies
Medical ethics
Medical hygiene
Infection-control measures
Medical waste
Medicine in society
Prevention of HIV/AIDS
Substance dependence
Types of health care facilities | Supervised injection site | [
"Chemistry",
"Biology",
"Environmental_science"
] | 5,747 | [
"Medical waste",
"Environmental social science",
"Epidemiology",
"Drug safety"
] |
321,652 | https://en.wikipedia.org/wiki/Term%20logic | In logic and formal semantics, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to formal logic that began with Aristotle and was developed further in ancient history mostly by his followers, the Peripatetics. It was revived after the third century CE by Porphyry's Isagoge.
Term logic revived in medieval times, first in Islamic logic by Alpharabius in the tenth century, and later in Christian Europe in the twelfth century with the advent of new logic, remaining dominant until the advent of predicate logic in the late nineteenth century.
However, even if eclipsed by newer logical systems, term logic still plays a significant role in the study of logic. Rather than radically breaking with term logic, modern logics typically expand it.
Aristotle's system
Aristotle's logical work is collected in the six texts that are collectively known as the Organon. Two of these texts in particular, namely the Prior Analytics and De Interpretatione, contain the heart of Aristotle's treatment of judgements and formal inference, and it is principally this part of Aristotle's works that is about term logic. Modern work on Aristotle's logic builds on the tradition started in 1951 with the establishment by Jan Łukasiewicz of a revolutionary paradigm. Łukasiewicz's approach was reinvigorated in the early 1970s by John Corcoran and Timothy Smiley – which informs modern translations of Prior Analytics by Robin Smith in 1989 and Gisela Striker in 2009.
The Prior Analytics represents the first formal study of logic, where logic is understood as the study of arguments. An argument is a series of true or false statements which lead to a true or false conclusion. In the Prior Analytics, Aristotle identifies valid and invalid forms of arguments called syllogisms. A syllogism is an argument that consists of at least three sentences: at least two premises and a conclusion. Although Aristotle does not call them "categorical sentences", tradition does; he deals with them briefly in the Analytics and more extensively in On Interpretation. Each proposition (statement that is a thought of the kind expressible by a declarative sentence) of a syllogism is a categorical sentence which has a subject and a predicate connected by a verb. The usual way of connecting the subject and predicate of a categorical sentence as Aristotle does in On Interpretation is by using a linking verb e.g. P is S. However, in the Prior Analytics Aristotle rejects the usual form in favour of three of his inventions:
P belongs to S
P is predicated of S
P is said of S
Aristotle does not explain why he introduces these innovative expressions but scholars conjecture that the reason may have been that it facilitates the use of letters instead of terms avoiding the ambiguity that results in Greek when letters are used with the linking verb. In his formulation of syllogistic propositions, instead of the copula ("All/some... are/are not..."), Aristotle uses the expression, "... belongs to/does not belong to all/some..." or "... is said/is not said of all/some..." There are four different types of categorical sentences: universal affirmative (A), universal negative (E), particular affirmative (I) and particular negative (O).
A - A belongs to every B
E - A belongs to no B
I - A belongs to some B
O - A does not belong to some B
A method of symbolization that originated and was used in the Middle Ages greatly simplifies the study of the Prior Analytics.
Following this tradition then, let:
a = belongs to every
e = belongs to no
i = belongs to some
o = does not belong to some
Categorical sentences may then be abbreviated as follows:
AaB = A belongs to every B (Every B is A)
AeB = A belongs to no B (No B is A)
AiB = A belongs to some B (Some B is A)
AoB = A does not belong to some B (Some B is not A)
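To make the medieval notation concrete, here is a minimal sketch in Python (the function name and the set-theoretic reading are illustrative choices, not part of the historical notation): each form is read extensionally, with AaB meaning that the set B is contained in the set A.

    # Extensional reading of the four forms: AaB says B is a subset of A.
    def holds(form, A, B):
        """A, B are Python sets; form is one of the vowels a, e, i, o."""
        if form == 'a':                  # A belongs to every B
            return B <= A
        if form == 'e':                  # A belongs to no B
            return A.isdisjoint(B)
        if form == 'i':                  # A belongs to some B
            return not A.isdisjoint(B)
        if form == 'o':                  # A does not belong to some B
            return not (B <= A)
        raise ValueError(form)

    mortal = {"Socrates", "Callias", "Bucephalus"}
    man = {"Socrates", "Callias"}
    print(holds('a', mortal, man))       # True: mortal belongs to every man
    print(holds('o', man, mortal))       # True: man does not belong to some mortal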
From the viewpoint of modern logic, only a few types of sentences can be represented in this way.
Basics
The fundamental assumption behind the theory is that the formal model of a proposition is composed of two logical symbols called terms – hence the name "two-term theory" or "term logic" – and that the reasoning process is in turn built from propositions:
The term is a part of speech representing something, but which is not true or false in its own right, such as "man" or "mortal". As originally conceived, all terms would be drawn from one of ten categories enumerated by Aristotle in his Organon, classifying all objects and qualities within the domain of logical discourse.
The formal model of proposition consists of two terms, one of which, the "predicate", is "affirmed" or "denied" of the other, the "subject", and which is capable of truth or falsity.
The syllogism is an inference in which one proposition (the "conclusion") follows of necessity from two other propositions (the "premises").
A proposition may be universal or particular, and it may be affirmative or negative. Traditionally, the four kinds of propositions are:
A-type: Universal and affirmative ("All philosophers are mortal")
E-type: Universal and negative ("All philosophers are not mortal")
I-type: Particular and affirmative ("Some philosophers are mortal")
O-type: Particular and negative ("Some philosophers are not mortal")
This was called the fourfold scheme of propositions (see types of syllogism for an explanation of the letters A, I, E, and O in the traditional square). Aristotle's original square of opposition, however, does not lack existential import.
Term
A term (Greek ὅρος horos) is the basic component of the proposition. The original meaning of the horos (and also of the Latin terminus) is "extreme" or "boundary". The two terms lie on the outside of the proposition, joined by the act of affirmation or denial.
For early modern logicians like Arnauld (whose Port-Royal Logic was the best-known text of his day), it is a psychological entity like an "idea" or "concept". Mill considers it a word. To assert "all Greeks are men" is not to say that the concept of Greeks is the concept of men, or that the word "Greeks" is the word "men". A proposition cannot be built from real things or ideas, but it is not just meaningless words either.
Proposition
In term logic, a "proposition" is simply a form of language: a particular kind of sentence, in which the subject and predicate are combined, so as to assert something true or false. It is not a thought, nor an abstract entity. The word "propositio" is from the Latin, meaning the first premise of a syllogism. Aristotle uses the word premise (protasis) as a sentence affirming or denying one thing or another (Posterior Analytics 1. 1 24a 16), so a premise is also a form of words.
However, as in modern philosophical logic, it means that which is asserted by the sentence. Writers before Frege and Russell, such as Bradley, sometimes spoke of the "judgment" as something distinct from a sentence, but this is not quite the same. As a further confusion the word "sentence" derives from the Latin, meaning an opinion or judgment, and so is equivalent to "proposition".
The logical quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus every philosopher is mortal is affirmative, since the mortality of philosophers is affirmed universally, whereas no philosopher is mortal is negative by denying such mortality in particular.
The quantity of a proposition is whether it is universal (the predicate is affirmed or denied of all subjects or of "the whole") or particular (the predicate is affirmed or denied of some subject or a "part" thereof). In case where existential import is assumed, quantification implies the existence of at least one subject, unless disclaimed.
Singular terms
For Aristotle, the distinction between singular and universal is a fundamental metaphysical one, and not merely grammatical. A singular term for Aristotle is primary substance, which can only be predicated of itself: (this) "Callias" or (this) "Socrates" are not predicable of any other thing, thus one does not say every Socrates one says every human (De Int. 7; Meta. D9, 1018a4). It may feature as a grammatical predicate, as in the sentence "the person coming this way is Callias". But it is still a logical subject.
He contrasts universal (katholou) secondary substance, genera, with primary substance, particular (kath' hekaston) specimens. The formal nature of universals, in so far as they can be generalized "always, or for the most part", is the subject matter of both scientific study and formal logic.
The essential feature of the syllogism is that, of the four terms in the two premises, one must occur twice. Thus
All Greeks are men
All men are mortal.
The subject of one premise must be the predicate of the other, and so it is necessary to eliminate from the logic any terms which cannot function both as subject and predicate, namely singular terms.
However, in a popular 17th-century version of the syllogism, Port-Royal Logic, singular terms were treated as universals:
All men are mortals
All Socrates are men
All Socrates are mortals
This is clearly awkward, a weakness exploited by Frege in his devastating attack on the system.
The famous syllogism "Socrates is a man ...", is frequently quoted as though from Aristotle, but in fact, it is nowhere in the Organon. Sextus Empiricus in his Hyp. Pyrrh (Outlines of Pyrronism) ii. 164 first mentions the related syllogism "Socrates is a human being, Every human being is an animal, Therefore, Socrates is an animal."
The three figures
Depending on the position of the middle term, Aristotle divides the syllogism into three kinds: syllogism in the first, second, and third figure. If the Middle Term is subject of one premise and predicate of the other, the premises are in the First Figure. If the Middle Term is predicate of both premises, the premises are in the Second Figure. If the Middle Term is subject of both premises, the premises are in the Third Figure.
Symbolically, the Three Figures may be represented as follows:

First figure: the middle term is subject of one premise and predicate of the other (AxB, ByC; therefore AzC).
Second figure: the middle term is predicate of both premises (MxN, MyX; therefore NzX).
Third figure: the middle term is subject of both premises (PxS, RyS; therefore PzR).
The fourth figure
In Aristotelian syllogistic (Prior Analytics, Bk I Caps 4-7), syllogisms are divided into three figures according to the position of the middle term in the two premises. The fourth figure, in which the middle term is the predicate in the major premise and the subject in the minor, was added by Aristotle's pupil Theophrastus and does not occur in Aristotle's work, although there is evidence that Aristotle knew of fourth-figure syllogisms.
Syllogism in the first figure
In the Prior Analytics translated by A. J. Jenkins as it appears in volume 8 of the Great Books of the Western World, Aristotle says of the First Figure: "... If A is predicated of all B, and B of all C, A must be predicated of all C." In the Prior Analytics translated by Robin Smith, Aristotle says of the first figure: "... For if A is predicated of every B and B of every C, it is necessary for A to be predicated of every C."
Taking a = is predicated of all = is predicated of every, and using the symbolical method used in the Middle Ages, then the first figure is simplified to:
If AaB
and BaC
then AaC.
Or what amounts to the same thing:
AaB, BaC; therefore AaC
When the four syllogistic propositions, a, e, i, o are placed in the first figure, Aristotle comes up with the following valid forms of deduction for the first figure:
AaB, BaC; therefore, AaC
AeB, BaC; therefore, AeC
AaB, BiC; therefore, AiC
AeB, BiC; therefore, AoC
In the Middle Ages, for mnemonic reasons they were called "Barbara", "Celarent", "Darii" and "Ferio" respectively.
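As an illustrative check (a brute-force sketch, not a historical method), the four first-figure moods can be verified valid under the extensional reading above by enumerating all assignments of subsets of a small universe to the three terms:

    # Verify Barbara, Celarent, Darii and Ferio by exhausting all ways of
    # assigning subsets of a 3-element universe to the terms A, B, C.
    from itertools import combinations

    def subsets(u):
        return [frozenset(c) for r in range(len(u) + 1)
                for c in combinations(u, r)]

    holds = {'a': lambda A, B: B <= A,          # A belongs to every B
             'e': lambda A, B: not (A & B),     # A belongs to no B
             'i': lambda A, B: bool(A & B),     # A belongs to some B
             'o': lambda A, B: not (B <= A)}    # A does not belong to some B

    first_figure = {'Barbara':  ('a', 'a', 'a'),   # AaB, BaC; AaC
                    'Celarent': ('e', 'a', 'e'),   # AeB, BaC; AeC
                    'Darii':    ('a', 'i', 'i'),   # AaB, BiC; AiC
                    'Ferio':    ('e', 'i', 'o')}   # AeB, BiC; AoC

    SUBS = subsets({1, 2, 3})
    for name, (p1, p2, con) in first_figure.items():
        valid = all(holds[con](A, C)
                    for A in SUBS for B in SUBS for C in SUBS
                    if holds[p1](A, B) and holds[p2](B, C))
        print(name, valid)                      # each prints True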
The difference between the first figure and the other two figures is that the syllogism of the first figure is complete while that of the second and third is not. This is important in Aristotle's theory of the syllogism for the first figure is axiomatic while the second and third require proof. The proof of the second and third figure always leads back to the first figure.
Syllogism in the second figure
This is what Robin Smith says in English that Aristotle said in Ancient Greek: "... If M belongs to every N but to no X, then neither will N belong to any X. For if M belongs to no X, neither does X belong to any M; but M belonged to every N; therefore, X will belong to no N (for the first figure has again come about)."
The above statement can be simplified by using the symbolical method used in the Middle Ages:
If MaN
but MeX
then NeX.
For if MeX
then XeM
but MaN
therefore XeN.
When the four syllogistic propositions, a, e, i, o are placed in the second figure, Aristotle comes up with the following valid forms of deduction for the second figure:
MaN, MeX; therefore NeX
MeN, MaX; therefore NeX
MeN, MiX; therefore NoX
MaN, MoX; therefore NoX
In the Middle Ages, for mnemonic reasons they were called respectively "Camestres", "Cesare", "Festino" and "Baroco".
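The reduction quoted above can be replayed mechanically. The following sketch (data layout and function names are ad hoc) applies simple conversion of the e-proposition and the first-figure mood Celarent to derive NeX from MaN and MeX:

    # Replay Aristotle's reduction of the mood MaN, MeX; NeX to the
    # first figure, using simple conversion of e-propositions.
    def e_convert(prop):
        p, rel, s = prop                 # a proposition is a (P, rel, S) triple
        assert rel == 'e'                # only "belongs to no" converts simply here
        return (s, rel, p)

    def celarent(major, minor):
        # first figure: XeM, MaN; therefore XeN
        x, rel1, m1 = major
        m2, rel2, n = minor
        assert rel1 == 'e' and rel2 == 'a' and m1 == m2
        return (x, 'e', n)

    premise1 = ('M', 'a', 'N')           # M belongs to every N
    premise2 = ('M', 'e', 'X')           # M belongs to no X
    step1 = e_convert(premise2)          # ('X', 'e', 'M')
    step2 = celarent(step1, premise1)    # ('X', 'e', 'N'), by the first figure
    print(e_convert(step2))              # ('N', 'e', 'X'): NeX, as required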
Syllogism in the third figure
Aristotle says in the Prior Analytics, "... If one term belongs to all and another to none of the same thing, or if they both belong to all or none of it, I call such figure the third." Referring to universal terms, "... then when both P and R belongs to every S, it results of necessity that P will belong to some R."
Simplifying:
If PaS
and RaS
then PiR.
When the four syllogistic propositions, a, e, i, o are placed in the third figure, Aristotle develops six more valid forms of deduction:
PaS, RaS; therefore PiR
PeS, RaS; therefore PoR
PiS, RaS; therefore PiR
PaS, RiS; therefore PiR
PoS, RaS; therefore PoR
PeS, RiS; therefore PoR
In the Middle Ages, for mnemonic reasons, these six forms were called respectively: "Darapti", "Felapton", "Disamis", "Datisi", "Bocardo" and "Ferison".
Table of syllogisms
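The table itself does not survive in this copy of the text. As a sketch of what it summarises, the following brute-force enumeration (using the extensional a/e/i/o reading from earlier, with no existential import, so the third-figure moods Darapti and Felapton do not appear) recovers the valid moods of the three figures:

    # Enumerate the moods of the three figures that are valid under the
    # extensional reading, with no existential import; twelve moods print.
    from itertools import combinations, product

    def subsets(u):
        return [frozenset(c) for r in range(len(u) + 1)
                for c in combinations(u, r)]

    holds = {'a': lambda A, B: B <= A, 'e': lambda A, B: not (A & B),
             'i': lambda A, B: bool(A & B), 'o': lambda A, B: not (B <= A)}

    SUBS = subsets({1, 2, 3})
    figures = {1: lambda T1, T2, T3: ((T1, T2), (T2, T3)),   # AxB, ByC; AzC
               2: lambda T1, T2, T3: ((T2, T1), (T2, T3)),   # MxN, MyX; NzX
               3: lambda T1, T2, T3: ((T1, T2), (T3, T2))}   # PxS, RyS; PzR

    for fig, shape in figures.items():
        for p1, p2, con in product('aeio', repeat=3):
            if all(holds[con](T1, T3)
                   for T1 in SUBS for T2 in SUBS for T3 in SUBS
                   for prem1, prem2 in [shape(T1, T2, T3)]
                   if holds[p1](*prem1) and holds[p2](*prem2)):
                print(f"figure {fig}: {p1}{p2}{con}")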
Decline of term logic
Term logic began to decline in Europe during the Renaissance, when logicians like Rodolphus Agricola Phrisius (1444–1485) and Ramus (1515–1572) began to promote place logics. The logical tradition called Port-Royal Logic, or sometimes "traditional logic", saw propositions as combinations of ideas rather than of terms, but otherwise followed many of the conventions of term logic. It remained influential, especially in England, until the 19th century. Leibniz created a distinctive logical calculus, but nearly all of his work on logic remained unpublished and unremarked until Louis Couturat went through the Leibniz Nachlass around 1900, publishing his pioneering studies in logic.
19th-century attempts to algebraize logic, such as the work of Boole (1815–1864) and Venn (1834–1923), typically yielded systems highly influenced by the term-logic tradition. The first predicate logic was that of Frege's landmark Begriffsschrift (1879), little read before 1950, in part because of its eccentric notation. Modern predicate logic as we know it began in the 1880s with the writings of Charles Sanders Peirce, who influenced Peano (1858–1932) and even more, Ernst Schröder (1841–1902). It reached fruition in the hands of Bertrand Russell and A. N. Whitehead, whose Principia Mathematica (1910–13) made use of a variant of Peano's predicate logic.
Term logic also survived to some extent in traditional Roman Catholic education, especially in seminaries. Medieval Catholic theology, especially the writings of Thomas Aquinas, had a powerfully Aristotelean cast, and thus term logic became a part of Catholic theological reasoning. For example, Joyce's Principles of Logic (1908; 3rd edition 1949), written for use in Catholic seminaries, made no mention of Frege or of Bertrand Russell.
Revival
Some philosophers have complained that predicate logic:
Is unnatural in a sense, in that its syntax does not follow the syntax of the sentences that figure in our everyday reasoning. It is, as Quine acknowledged, "Procrustean," employing an artificial language of function and argument, quantifier, and bound variable.
Suffers from theoretical problems, probably the most serious being empty names and identity statements.
Even academic philosophers entirely in the mainstream, such as Gareth Evans, have written as follows:
"I come to semantic investigations with a preference for homophonic theories; theories which try to take serious account of the syntactic and semantic devices which actually exist in the language ...I would prefer [such] a theory ... over a theory which is only able to deal with [sentences of the form "all A's are B's"] by "discovering" hidden logical constants ... The objection would not be that such [Fregean] truth conditions are not correct, but that, in a sense which we would all dearly love to have more exactly explained, the syntactic shape of the sentence is treated as so much misleading surface structure" (Evans 1977)
Boole’s acceptance of Aristotle
George Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to Laws of Thought. Corcoran also wrote a point-by-point comparison of Prior Analytics and Laws of Thought. According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by:
providing it with mathematical foundations involving equations;
extending the class of problems it could treat – from assessing validity to solving equations; and
expanding the range of applications it could handle – e.g. from propositions having only two terms to those having arbitrarily many.
More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotle's logic to formulas in the form of equations – by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic – another revolutionary idea – involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle".
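Boole's equational reading can be illustrated directly (a sketch; the 0/1 encoding is the standard textbook rendering of Boole's elective symbols, not a quotation from Laws of Thought). "Every B is A" becomes b(1 - a) = 0, and Barbara follows by checking all 0/1 assignments:

    # Boole's equational reading of Barbara, checked over 0/1 values:
    # "every B is A" becomes b*(1 - a) == 0 for the elective symbols.
    from itertools import product

    barbara = all(c * (1 - a) == 0                  # all C are A
                  for a, b, c in product((0, 1), repeat=3)
                  if b * (1 - a) == 0               # all B are A
                  and c * (1 - b) == 0)             # all C are B
    print(barbara)                                  # True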
See also
Converse (logic)
Obversion
Propositional calculus
Stoic logic
Syntax–semantics interface
Traditional grammar
Transposition (logic)
Notes
References
Bochenski, I. M., 1951. Ancient Formal Logic. North-Holland.
Louis Couturat, 1961 (1901). La Logique de Leibniz. Hildesheim: Georg Olms Verlagsbuchhandlung.
Gareth Evans, 1977, "Pronouns, Quantifiers and Relative Clauses," Canadian Journal of Philosophy.
Peter Geach, 1976. Reason and Argument. University of California Press.
Hammond and Scullard, 1992. The Oxford Classical Dictionary. Oxford University Press, .
Joyce, George Hayward, 1949 (1908). Principles of Logic, 3rd ed. Longmans. A manual written for use in Catholic seminaries. Authoritative on traditional logic, with many references to medieval and ancient sources. Contains no hint of modern formal logic. The author lived 1864–1943.
Jan Łukasiewicz, 1951. Aristotle's Syllogistic, from the Standpoint of Modern Formal Logic. Oxford Univ. Press.
William Calvert Kneale and Martha Kneale, 1962. The Development of Logic. Oxford [England] Clarendon Press. Reviews Aristotelean logic and its influences up to modern times.
. Chapter 2 presents a modern overview, with a bibliography.
John Stuart Mill, 1904. A System of Logic, 8th ed. London.
Parry and Hacker, 1991. Aristotelian Logic. State University of New York Press.
Arthur Prior
1962: Formal Logic, 2nd ed. Oxford Univ. Press. While primarily devoted to modern formal logic, contains much on term and medieval logic.
1976: The Doctrine of Propositions and Terms. Peter Geach and A. J. P. Kenny, eds. London: Duckworth.
Willard Quine, 1986. Philosophy of Logic 2nd ed. Harvard Univ. Press.
Rose, Lynn E., 1968. Aristotle's Syllogistic. Springfield: Clarence C. Thomas.
Sommers, Fred
1970: "The Calculus of Terms," Mind 79: 1-39. Reprinted in Englebretsen, G., ed., 1987. The new syllogistic New York: Peter Lang.
1982: The logic of natural language. Oxford University Press.
1990: "Predication in the Logic of Terms," Notre Dame Journal of Formal Logic 31: 106–26.
and Englebretsen, George, 2000: An invitation to formal reasoning. The logic of terms. Aldershot UK: Ashgate. .
Szabolcsi, Lorne, 2008. Numerical Term Logic. Lewiston: Edwin Mellen Press.
External links
Aristotle's term logic online-This online program provides a platform for experimentation and research on Aristotelian logic.
Annotated bibliographies:
Fred Sommers.
George Englebretsen.
PlanetMath: Aristotelian Logic.
Interactive Syllogistic Machine for Term Logic A web based syllogistic machine for exploring fallacies, figures, terms, and modes of syllogisms.
Logic
Concepts in epistemology
Philosophy of language
Concepts in logic
Semantics
Concepts in metaphysics
History of logic
Mathematical logic
Formal semantics (natural language)
Philosophical logic
Philosophy of logic | Term logic | [
"Mathematics"
] | 4,915 | [
"Mathematical logic"
] |
321,671 | https://en.wikipedia.org/wiki/Tessellation | A tessellation or tiling is the covering of a surface, often a plane, using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellation can be generalized to higher dimensions and a variety of geometries.
A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern (an aperiodic set of prototiles). A tessellation of space, also known as a space filling or honeycomb, can be defined in the geometry of higher dimensions.
A real physical tessellation is a tiling made of materials such as cemented ceramic squares or hexagons. Such tilings may be decorative patterns, or may have functions such as providing durable and water-resistant pavement, floor, or wall coverings. Historically, tessellations were used in Ancient Rome and in Islamic art such as in the Moroccan architecture and decorative geometric tiling of the Alhambra palace. In the twentieth century, the work of M. C. Escher often made use of tessellations, both in ordinary Euclidean geometry and in hyperbolic geometry, for artistic effect. Tessellations are sometimes employed for decorative effect in quilting. Tessellations form a class of patterns in nature, for example in the arrays of hexagonal cells found in honeycombs.
History
Tessellations were used by the Sumerians (about 4000 BC) in building wall decorations formed by patterns of clay tiles.
Decorative mosaic tilings made of small squared blocks called tesserae were widely employed in classical antiquity, sometimes displaying geometric patterns.
In 1619, Johannes Kepler made an early documented study of tessellations. He wrote about regular and semiregular tessellations in his Harmonices Mundi; he was possibly the first to explore and to explain the hexagonal structures of honeycomb and snowflakes.
Some two hundred years later in 1891, the Russian crystallographer Yevgraf Fyodorov proved that every periodic tiling of the plane features one of seventeen different groups of isometries. Fyodorov's work marked the unofficial beginning of the mathematical study of tessellations. Other prominent contributors include Alexei Vasilievich Shubnikov and Nikolai Belov in their book Colored Symmetry (1964), and Heinrich Heesch and Otto Kienzle (1963).
Etymology
In Latin, tessella is a small cubical piece of clay, stone, or glass used to make mosaics. The word "tessella" means "small square" (from tessera, square, which in turn is from the Greek word τέσσερα for four). It corresponds to the everyday term tiling, which refers to applications of tessellations, often made of glazed clay.
Overview
Tessellation in two dimensions, also called planar tiling, is a topic in geometry that studies how shapes, known as tiles, can be arranged to fill a plane without any gaps, according to a given set of rules. These rules can be varied. Common ones are that there must be no gaps between tiles, and that no corner of one tile can lie along the edge of another. The tessellations created by bonded brickwork do not obey this rule. Among those that do, a regular tessellation has both identical regular tiles and identical regular corners or vertices, having the same angle between adjacent edges for every tile. There are only three shapes that can form such regular tessellations: the equilateral triangle, square and the regular hexagon. Any one of these three shapes can be duplicated infinitely to fill a plane with no gaps.
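The arithmetic behind this fact is simple to check directly; the following Python sketch tests which regular n-gons have an interior angle dividing 360°:

    # A regular n-gon tiles the plane exactly when its interior angle,
    # 180*(n - 2)/n degrees, divides 360; only n = 3, 4, 6 qualify.
    for n in range(3, 13):
        interior = 180 * (n - 2) / n
        if (360 / interior).is_integer():
            print(n, int(360 / interior), "around each vertex")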
Many other types of tessellation are possible under different constraints. For example, there are eight types of semi-regular tessellation, made with more than one kind of regular polygon but still having the same arrangement of polygons at every corner. Irregular tessellations can also be made from other shapes such as pentagons, polyominoes and in fact almost any kind of geometric shape. The artist M. C. Escher is famous for making tessellations with irregular interlocking tiles, shaped like animals and other natural objects. If suitable contrasting colours are chosen for the tiles of differing shape, striking patterns are formed, and these can be used to decorate physical surfaces such as church floors.
More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries. These tiles may be polygons or any other shapes. Many tessellations are formed from a finite number of prototiles in which all tiles in the tessellation are congruent to the given prototiles. If a geometric shape can be used as a prototile to create a tessellation, the shape is said to tessellate or to tile the plane. The Conway criterion is a sufficient, but not necessary, set of rules for deciding whether a given shape tiles the plane periodically without reflections: some tiles fail the criterion, but still tile the plane. No general rule has been found for determining whether a given shape can tile the plane or not, which means there are many unsolved problems concerning tessellations.
Mathematically, tessellations can be extended to spaces other than the Euclidean plane. The Swiss geometer Ludwig Schläfli pioneered this by defining polyschemes, which mathematicians nowadays call polytopes. These are the analogues to polygons and polyhedra in spaces with more dimensions. He further defined the Schläfli symbol notation to make it easy to describe polytopes. For example, the Schläfli symbol for an equilateral triangle is {3}, while that for a square is {4}. The Schläfli notation makes it possible to describe tilings compactly. For example, a tiling of regular hexagons has three six-sided polygons at each vertex, so its Schläfli symbol is {6,3}.
Other methods also exist for describing polygonal tilings. When the tessellation is made of regular polygons, the most common notation is the vertex configuration, which is simply a list of the number of sides of the polygons around a vertex. The square tiling has a vertex configuration of 4.4.4.4, or 4^4. The tiling of regular hexagons is noted 6.6.6, or 6^3.
In mathematics
Introduction to tessellations
Mathematicians use some technical terms when discussing tilings. An edge is the intersection between two bordering tiles; it is often a straight line. A vertex is the point of intersection of three or more bordering tiles. Using these terms, an isogonal or vertex-transitive tiling is a tiling where every vertex point is identical; that is, the arrangement of polygons about each vertex is the same. The fundamental region is a shape such as a rectangle that is repeated to form the tessellation. For example, a regular tessellation of the plane with squares has a meeting of four squares at every vertex.
The sides of the polygons are not necessarily identical to the edges of the tiles. An edge-to-edge tiling is any polygonal tessellation where adjacent tiles only share one full side, i.e., no tile shares a partial side or more than one side with any other tile. In an edge-to-edge tiling, the sides of the polygons and the edges of the tiles are the same. The familiar "brick wall" tiling is not edge-to-edge because the long side of each rectangular brick is shared with two bordering bricks.
A normal tiling is a tessellation for which every tile is topologically equivalent to a disk, the intersection of any two tiles is a connected set or the empty set, and all tiles are uniformly bounded. This means that a single circumscribing radius and a single inscribing radius can be used for all the tiles in the whole tiling; the condition disallows tiles that are pathologically long or thin.
A monohedral tiling is a tessellation in which all tiles are congruent; it has only one prototile. A particularly interesting type of monohedral tessellation is the spiral monohedral tiling. The first spiral monohedral tiling was discovered by Heinz Voderberg in 1936; the Voderberg tiling has a unit tile that is a nonconvex enneagon. The Hirschhorn tiling, published by Michael D. Hirschhorn and D. C. Hunt in 1985, is a pentagon tiling using irregular pentagons: regular pentagons cannot tile the Euclidean plane as the internal angle of a regular pentagon, 3π/5, is not a divisor of 2π.
An isohedral tiling is a special variation of a monohedral tiling in which all tiles belong to the same transitivity class, that is, all tiles are transforms of the same prototile under the symmetry group of the tiling. If a prototile admits a tiling, but no such tiling is isohedral, then the prototile is called anisohedral and forms anisohedral tilings.
A regular tessellation is a highly symmetric, edge-to-edge tiling made up of regular polygons, all of the same shape. There are only three regular tessellations: those made up of equilateral triangles, squares, or regular hexagons. All three of these tilings are isogonal and monohedral.
A semi-regular (or Archimedean) tessellation uses more than one type of regular polygon in an isogonal arrangement. There are eight semi-regular tilings (or nine if the mirror-image pair of tilings counts as two). These can be described by their vertex configuration; for example, a semi-regular tiling using squares and regular octagons has the vertex configuration 4.8^2 (each vertex has one square and two octagons). Many non-edge-to-edge tilings of the Euclidean plane are possible, including the family of Pythagorean tilings, tessellations that use two (parameterised) sizes of square, each square touching four squares of the other size. An edge tessellation is one in which each tile can be reflected over an edge to take up the position of a neighbouring tile, such as in an array of equilateral or isosceles triangles.
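A necessary condition for any vertex configuration is that the listed interior angles sum to exactly 360°. A small sketch (configurations chosen for illustration) checks this:

    # Necessary check for a vertex configuration: the interior angles of
    # the listed regular polygons must sum to exactly 360 degrees.
    def interior(n):
        return 180 * (n - 2) / n

    for config in ([4, 4, 4, 4], [3, 3, 3, 3, 3, 3], [6, 6, 6],
                   [4, 8, 8], [3, 12, 12], [5, 5, 5]):
        print(config, sum(interior(n) for n in config) == 360)
    # all True except [5, 5, 5], whose angles sum to only 324 degrees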
Wallpaper groups
Tilings with translational symmetry in two independent directions can be categorized by wallpaper groups, of which 17 exist. It has been claimed that all seventeen of these groups are represented in the Alhambra palace in Granada, Spain. Although this is disputed, the variety and sophistication of the Alhambra tilings have interested modern researchers. Of the three regular tilings two are in the p6m wallpaper group and one is in p4m. Tilings in 2-D with translational symmetry in just one direction may be categorized by the seven frieze groups describing the possible frieze patterns. Orbifold notation can be used to describe wallpaper groups of the Euclidean plane.
Aperiodic tilings
Penrose tilings, which use two different quadrilateral prototiles, are the best known example of tiles that forcibly create non-periodic patterns. They belong to a general class of aperiodic tilings, which use tiles that cannot tessellate periodically. The recursive process of substitution tiling is a method of generating aperiodic tilings. One class that can be generated in this way is the rep-tiles; these tilings have unexpected self-replicating properties. Pinwheel tilings are non-periodic, using a rep-tile construction; the tiles appear in infinitely many orientations. It might be thought that a non-periodic pattern would be entirely without symmetry, but this is not so. Aperiodic tilings, while lacking in translational symmetry, do have symmetries of other types, by infinite repetition of any bounded patch of the tiling and in certain finite groups of rotations or reflections of those patches. A substitution rule, such as can be used to generate Penrose patterns using assemblies of tiles called rhombs, illustrates scaling symmetry. A Fibonacci word can be used to build an aperiodic tiling, and to study quasicrystals, which are structures with aperiodic order.
Wang tiles are squares coloured on each edge, and placed so that abutting edges of adjacent tiles have the same colour; hence they are sometimes called Wang dominoes. A suitable set of Wang dominoes can tile the plane, but only aperiodically. This is known because any Turing machine can be represented as a set of Wang dominoes that tile the plane if, and only if, the Turing machine does not halt. Since the halting problem is undecidable, the problem of deciding whether a Wang domino set can tile the plane is also undecidable.
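As an illustrative sketch (not the construction used in the undecidability proof), here is a brute-force search that checks whether a tile set can cover a small square patch with matching edges, a necessary but not sufficient condition for tiling the plane:

    # Wang tiles as (top, right, bottom, left) colour 4-tuples; abutting
    # edges must carry the same colour. This only asks whether a set can
    # cover an n x n patch, which does not by itself settle the plane.
    def fits(tile, row, col, grid):
        t, r, b, l = tile
        if col > 0 and grid[row][col - 1][1] != l:   # left neighbour's right edge
            return False
        if row > 0 and grid[row - 1][col][2] != t:   # upper neighbour's bottom edge
            return False
        return True

    def tiles_patch(tileset, n, grid=None, pos=0):
        if grid is None:
            grid = [[None] * n for _ in range(n)]
        if pos == n * n:
            return True
        row, col = divmod(pos, n)
        for tile in tileset:
            if fits(tile, row, col, grid):
                grid[row][col] = tile
                if tiles_patch(tileset, n, grid, pos + 1):
                    return True
                grid[row][col] = None
        return False

    toy_set = [(0, 1, 0, 1), (1, 0, 1, 0)]           # trivially periodic toy set
    print(tiles_patch(toy_set, 4))                   # True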
Truchet tiles are square tiles decorated with patterns so they do not have rotational symmetry; in 1704, Sébastien Truchet used a square tile split into two triangles of contrasting colours. These can tile the plane either periodically or randomly.
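A minimal sketch of a random Truchet tiling, using '/' and '\' as stand-ins for the two orientations of Truchet's diagonally split square tile (the seed is arbitrary):

    # A random Truchet pattern: each cell holds one of two orientations.
    import random

    random.seed(0)                                   # for repeatability
    for _ in range(8):
        print(''.join(random.choice('/\\') for _ in range(16)))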
An einstein tile is a single shape that forces aperiodic tiling. The first such tile, dubbed a "hat", was discovered in 2023 by David Smith, a hobbyist mathematician. The discovery is under professional review and, upon confirmation, will be credited as solving a longstanding mathematical problem.
Tessellations and colour
Sometimes the colour of a tile is understood as part of the tiling; at other times arbitrary colours may be applied later. When discussing a tiling that is displayed in colours, to avoid ambiguity, one needs to specify whether the colours are part of the tiling or just part of its illustration. This affects whether tiles with the same shape, but different colours, are considered identical, which in turn affects questions of symmetry. The four colour theorem states that for every tessellation of a normal Euclidean plane, with a set of four available colours, each tile can be coloured in one colour such that no tiles of equal colour meet at a curve of positive length. The colouring guaranteed by the four colour theorem does not generally respect the symmetries of the tessellation. To produce a colouring that does, it is necessary to treat the colours as part of the tessellation. Here, as many as seven colours may be needed, as demonstrated in the image at left.
Tessellations with polygons
Next to the various tilings by regular polygons, tilings by other polygons have also been studied.
Any triangle or quadrilateral (even non-convex) can be used as a prototile to form a monohedral tessellation, often in more than one way. Copies of an arbitrary quadrilateral can form a tessellation with translational symmetry and 2-fold rotational symmetry with centres at the midpoints of all sides. For an asymmetric quadrilateral this tiling belongs to wallpaper group p2. As fundamental domain we have the quadrilateral. Equivalently, we can construct a parallelogram subtended by a minimal set of translation vectors, starting from a rotational centre. We can divide this by one diagonal, and take one half (a triangle) as fundamental domain. Such a triangle has the same area as the quadrilateral and can be constructed from it by cutting and pasting.
If only one shape of tile is allowed, tilings exist with convex N-gons for N equal to 3, 4, 5, and 6. For N = 5, see Pentagonal tiling; for N = 6, see Hexagonal tiling; for N = 7, see Heptagonal tiling; and for N = 8, see Octagonal tiling.
With non-convex polygons, there are far fewer limitations in the number of sides, even if only one shape is allowed.
Polyominoes are examples of tiles that are either convex or non-convex, for which various combinations, rotations, and reflections can be used to tile a plane. For results on tiling the plane with polyominoes, see Polyomino § Uses of polyominoes.
Voronoi tilings
Voronoi or Dirichlet tilings are tessellations where each tile is defined as the set of points closest to one of the points in a discrete set of defining points. (Think of geographical regions where each region is defined as all the points closest to a given city or post office.) The Voronoi cell for each defining point is a convex polygon. The Delaunay triangulation is a tessellation that is the dual graph of a Voronoi tessellation. Delaunay triangulations are useful in numerical simulation, in part because among all possible triangulations of the defining points, Delaunay triangulations maximize the minimum of the angles formed by the edges. Voronoi tilings with randomly placed points can be used to construct random tilings of the plane.
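The defining rule, that each point is labelled by its nearest site, takes only a few lines to demonstrate (sites and grid size are arbitrary):

    # Discrete Voronoi tessellation: label each grid point with the name
    # of its nearest site (squared distance avoids any need for sqrt).
    sites = {'A': (2, 3), 'B': (12, 4), 'C': (7, 12)}   # the "post offices"

    def nearest(x, y):
        return min(sites, key=lambda s: (x - sites[s][0]) ** 2
                                        + (y - sites[s][1]) ** 2)

    for y in range(15):
        print(''.join(nearest(x, y) for x in range(20)))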
Tessellations in higher dimensions
Tessellation can be extended to three dimensions. Certain polyhedra can be stacked in a regular crystal pattern to fill (or tile) three-dimensional space, including the cube (the only Platonic polyhedron to do so), the rhombic dodecahedron, the truncated octahedron, and triangular, quadrilateral, and hexagonal prisms, among others. Any polyhedron that fits this criterion is known as a plesiohedron, and may possess between 4 and 38 faces. Naturally occurring rhombic dodecahedra are found as crystals of andradite (a kind of garnet) and fluorite.
Tessellations in three or more dimensions are called honeycombs. In three dimensions there is just one regular honeycomb, which has eight cubes at each polyhedron vertex. Similarly, in three dimensions there is just one quasiregular honeycomb, which has eight tetrahedra and six octahedra at each polyhedron vertex. However, there are many possible semiregular honeycombs in three dimensions. Uniform honeycombs can be constructed using the Wythoff construction.
The Schmitt-Conway biprism is a convex polyhedron with the property of tiling space only aperiodically.
A Schwarz triangle is a spherical triangle that can be used to tile a sphere.
Tessellations in non-Euclidean geometries
It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometry. A uniform tiling in the hyperbolic plane (that may be regular, quasiregular, or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).
A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells. In three-dimensional (3-D) hyperbolic space there are nine Coxeter group families of compact convex uniform honeycombs, generated as Wythoff constructions, and represented by permutations of rings of the Coxeter diagrams for each family.
In art
In architecture, tessellations have been used to create decorative motifs since ancient times. Mosaic tilings often had geometric patterns. Later civilisations also used larger tiles, either plain or individually decorated. Some of the most decorative were the Moorish wall tilings of Islamic architecture, using Girih and Zellige tiles in buildings such as the Alhambra and La Mezquita.
Tessellations frequently appeared in the graphic art of M. C. Escher; he was inspired by the Moorish use of symmetry in places such as the Alhambra when he visited Spain in 1936. Escher made four "Circle Limit" drawings of tilings that use hyperbolic geometry. For his woodcut "Circle Limit IV" (1960), Escher prepared a pencil and ink study showing the required geometry. Escher explained that "No single component of all the series, which from infinitely far away rise like rockets perpendicularly from the limit and are at last lost in it, ever reaches the boundary line."
Tessellated designs often appear on textiles, whether woven, stitched in, or printed. Tessellation patterns have been used to design interlocking motifs of patch shapes in quilts.
Tessellations are also a main genre in origami (paper folding), where pleats are used to connect molecules, such as twist folds, together in a repeating fashion.
In manufacturing
Tessellation is used in manufacturing industry to reduce the wastage of material (yield losses) such as sheet metal when cutting out shapes for objects such as car doors or drink cans.
Tessellation is apparent in the mudcrack-like cracking of thin films – with a degree of self-organisation being observed using micro and nanotechnologies.
In nature
The honeycomb is a well-known example of tessellation in nature with its hexagonal cells.
In botany, the term "tessellate" describes a checkered pattern, for example on a flower petal, tree bark, or fruit. Flowers including the fritillary, and some species of Colchicum, are characteristically tessellate.
Many patterns in nature are formed by cracks in sheets of materials. These patterns can be described by Gilbert tessellations, also known as random crack networks. The Gilbert tessellation is a mathematical model for the formation of mudcracks, needle-like crystals, and similar structures. The model, named after Edgar Gilbert, allows cracks to form starting from being randomly scattered over the plane; each crack propagates in two opposite directions along a line through the initiation point, its slope chosen at random, creating a tessellation of irregular convex polygons. Basaltic lava flows often display columnar jointing as a result of contraction forces causing cracks as the lava cools. The extensive crack networks that develop often produce hexagonal columns of lava. One example of such an array of columns is the Giant's Causeway in Northern Ireland. Tessellated pavement, a characteristic example of which is found at Eaglehawk Neck on the Tasman Peninsula of Tasmania, is a rare sedimentary rock formation where the rock has fractured into rectangular blocks.
Other natural patterns occur in foams; these are packed according to Plateau's laws, which require minimal surfaces. Such foams present a problem in how to pack cells as tightly as possible: in 1887, Lord Kelvin proposed a packing using only one solid, the bitruncated cubic honeycomb with very slightly curved faces. In 1993, Denis Weaire and Robert Phelan proposed the Weaire–Phelan structure, which uses less surface area to separate cells of equal volume than Kelvin's foam.
In puzzles and recreational mathematics
Tessellations have given rise to many types of tiling puzzle, from traditional jigsaw puzzles (with irregular pieces of wood or cardboard) and the tangram, to more modern puzzles that often have a mathematical basis. For example, polyiamonds and polyominoes are figures of regular triangles and squares, often used in tiling puzzles. Authors such as Henry Dudeney and Martin Gardner have made many uses of tessellation in recreational mathematics. For example, Dudeney invented the hinged dissection, while Gardner wrote about the "rep-tile", a shape that can be dissected into smaller copies of the same shape. Inspired by Gardner's articles in Scientific American, the amateur mathematician Marjorie Rice found four new tessellations with pentagons. Squaring the square is the problem of tiling an integral square (one whose sides have integer length) using only other integral squares. An extension is squaring the plane, tiling it by squares whose sizes are all natural numbers without repetitions; James and Frederick Henle proved that this was possible.
Examples
See also
Discrete global grid
Honeycomb (geometry)
Space partitioning
Explanatory footnotes
References
Sources
External links
Tegula (open-source software for exploring two-dimensional tilings of the plane, sphere and hyperbolic plane; includes databases containing millions of tilings)
Wolfram MathWorld: Tessellation (good bibliography, drawings of regular, semiregular and demiregular tessellations)
Dirk Frettlöh and Edmund Harriss. "Tilings Encyclopedia" (extensive information on substitution tilings, including drawings, people, and references)
Tessellations.org (how-to guides, Escher tessellation gallery, galleries of tessellations by other artists, lesson plans, history)
(list of web resources including articles and galleries)
Mosaic
Symmetry | Tessellation | [
"Physics",
"Mathematics"
] | 5,219 | [
"Tessellation",
"Euclidean plane geometry",
"Geometry",
"Planes (geometry)",
"Symmetry"
] |
321,787 | https://en.wikipedia.org/wiki/UTF-7 | UTF-7 (7-bit Unicode Transformation Format) is an obsolete variable-length character encoding for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.
UTF-7 (according to its RFC) is not a true "Unicode Transformation Format", as the definition can only encode code points in the BMP (the first 65536 Unicode code points, which do not include emoji and many other characters). However, if a UTF-7 translator converts to/from UTF-16, then it can (and probably does) encode each surrogate half as though it were a 16-bit code point, and thus can encode all code points. It is unclear whether other UTF-7 software (such as translators to UTF-32 or UTF-8) supports this.
UTF-7 has never been an official standard of the Unicode Consortium. It is known to have security issues, which is why software has been changed to disable its use. It is prohibited in HTML 5.
Motivation
MIME, the modern standard for e-mail formats, forbids encoding of headers using byte values above the ASCII range. Although MIME allows encoding the message body in various character sets (broader than ASCII), the underlying transmission infrastructure (SMTP, the main E-mail transfer standard) is still not guaranteed to be 8-bit clean. Therefore, a non-trivial content transfer encoding has to be applied in case of doubt. Unfortunately, Base64 has the disadvantage of making even ASCII characters unreadable in non-MIME clients. On the other hand, UTF-8 combined with quoted-printable produces a very size-inefficient format requiring 6–9 bytes for non-ASCII characters from the BMP and 12 bytes for characters outside the BMP.
Provided certain rules are followed during encoding, UTF-7 can be sent in e-mail without using an underlying MIME transfer encoding, but still must be explicitly identified as the text character set. In addition, if used within e-mail headers such as "Subject:", UTF-7 must be contained in MIME encoded words identifying the character set. Since encoded words force use of either quoted-printable or Base64, UTF-7 was designed to avoid using the = sign as an escape character to avoid double escaping when it is combined with quoted-printable (or its variant, the RFC 2047/1522 "Q"-encoding of headers).
UTF-7 is generally not used as a native representation within applications as it is very awkward to process. Despite its size advantage over the combination of UTF-8 with either quoted-printable or Base64, the now defunct Internet Mail Consortium recommended against its use.
8BITMIME has also been introduced, which reduces the need to encode message bodies in a 7-bit format.
A modified form of UTF-7 (sometimes dubbed 'mUTF-7') was used in the Internet Message Access Protocol (IMAP) e-mail retrieval protocol, version 4 rev 1, for "international" mailbox names.
The following version, IMAP version 4 rev 2, uses UTF-8 instead.
Description
UTF-7 was first proposed as an experimental protocol in RFC 1642, A Mail-Safe Transformation Format of Unicode. This RFC has been made obsolete by RFC 2152, an informational RFC which never became a standard. As RFC 2152 clearly states, the RFC "does not specify an Internet standard of any kind". Despite this, RFC 2152 is quoted as the definition of UTF-7 in the IANA's list of charsets. Neither is UTF-7 a Unicode Standard. The Unicode Standard 5.0 only lists UTF-8, UTF-16 and UTF-32.
There is also a modified version, specified in RFC 2060, which is sometimes identified as UTF-7.
Some characters can be represented directly as single ASCII bytes. The first group is known as "direct characters" and contains 62 alphanumeric characters and 9 symbols: ' ( ) , - . / : ?. The direct characters are safe to include literally. The other main group, known as "optional direct characters", contains all other printable characters in the range U+0020–U+007E except ~ \ + and space (the characters \ and ~ being excluded due to being redefined in "variants of ASCII" such as JIS-Roman). Using the optional direct characters reduces size and enhances human readability but also increases the chance of breakage by things like badly designed mail gateways and may require extra escaping when used in encoded words for header fields.
Space, tab, carriage return and line feed may also be represented directly as single ASCII bytes. However, if the encoded text is to be used in e-mail, care is needed to ensure that these characters are used in ways that do not require further content transfer encoding to be suitable for e-mail. The plus sign (+) may be encoded as +-.
Other characters must be encoded in UTF-16 (hence U+10000 and higher would be encoded into two surrogates), and then in modified Base64. The start of these blocks of modified Base64-encoded UTF-16 is indicated by a + sign. The end is indicated by any character not in the modified Base64 set. If the character after the modified Base64 is a - (ASCII hyphen-minus) then it is consumed by the decoder and decoding resumes with the next character. Otherwise decoding resumes with the character after the Base64.
Examples
"Hello, World!" is encoded as "Hello, World+ACE-"
"1 + 1 = 2" is encoded as "1 +- 1 +AD0- 2"
"£1" is encoded as "+AKM-1". The Unicode code point for the pound sign is U+00A3 which converts into modified Base64 as in the table below. There are two bits left over, which are padded to 0.
Algorithm for encoding and decoding
Encoding
First, an encoder must decide which characters to represent directly in ASCII form, which + have to be escaped as +-, and which to place in blocks of Unicode characters. The expansion cost of UTF-7 can be high: for example, the character sequence U+10FFFF U+0077 U+10FFFF is 9 bytes in UTF-8, but 17 bytes in UTF-7. (At worst, treating every codepoint as a sequence in its own right produces the maximum expansion of 5x, e.g. when encoding @@ as +AEA-+AEA-.) Each Unicode sequence must be encoded using the following procedure, then surrounded by the appropriate delimiters.
Using the £† (U+00A3 U+2020) character sequence as an example:
Express the characters' Unicode (UTF-16) numbers in binary: 0x00A3 0x2020 → 0000000010100011 0010000000100000
Concatenate the binary sequences: 0000000010100011 0010000000100000 → 00000000101000110010000000100000
Regroup the binary into groups of six bits, starting from the left: 00000000101000110010000000100000 → 000000 001010 001100 100000 001000 00
If the last group has fewer than six bits, add trailing zeros: 000000 001010 001100 100000 001000 000000
Replace each group of six bits with a Base64 code: 000000 001010 001100 100000 001000 000000 → AKMgIA
Decoding
First an encoded data must be separated into plain ASCII text chunks (including +es followed by a dash) and nonempty Unicode blocks as mentioned in the description section. Once this is done, each Unicode block must be decoded with the following procedure (using the result of the encoding example above as our example)
Express each Base64 code as the bit sequence it represents: AKMgIA → 000000 001010 001100 100000 001000 000000
Regroup the binary into groups of sixteen bits, starting from the left: 000000 001010 001100 100000 001000 000000 → 0000000010100011 0010000000100000 0000
If there is an incomplete group at the end containing only zeros, discard it (if the incomplete group contains any ones, the code is invalid): 0000000010100011 0010000000100000
Each group of 16 bits is a character's Unicode (UTF-16) number and can be expressed in other forms: 0000 0000 1010 0011 ≡ 0x00A3 ≡ 163 in decimal
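The decoding procedure above can be sketched in a few lines of code. The function below is a hypothetical illustration (not a library routine) handling a single BMP-only Base64 block, i.e. the payload between + and -:

```python
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_block(payload: str) -> str:
    """Decode one modified-Base64 UTF-7 block, e.g. 'AKMgIA'."""
    # Step 1: each Base64 code becomes the 6-bit group it represents.
    bits = "".join(format(B64.index(c), "06b") for c in payload)
    # Steps 2-3: regroup into 16-bit units; a trailing group of fewer
    # than 16 bits must contain only zeros and is discarded.
    rem = len(bits) % 16
    if rem:
        if "1" in bits[-rem:]:
            raise ValueError("invalid UTF-7 block")
        bits = bits[:-rem]
    # Step 4: each 16-bit group is a character's UTF-16 code unit.
    return "".join(chr(int(bits[i:i + 16], 2)) for i in range(0, len(bits), 16))

print(decode_block("AKMgIA"))  # £†
```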
Byte order mark
A byte order mark (BOM) is an optional special byte sequence at the very start of a stream or file that, without being data itself, indicates the encoding used for the data that follows; it can be used in the absence of metadata that denotes the encoding. For a given encoding scheme, it's that scheme's representation of Unicode code point U+FEFF.
While it's typically a single, fixed byte sequence, in UTF-7 four variations may appear, because the last 2 bits of the 4th byte of the UTF-7 encoding of U+FEFF belong to the following character, resulting in 4 possible bit patterns and therefore 4 different possible bytes in the 4th position. See the UTF-7 entry in the table of Unicode byte order marks.
Security
UTF-7 allows multiple representations of the same source string. In particular, ASCII characters can be represented as part of Unicode blocks. As such, if standard ASCII-based escaping or validation processes are used on strings that may be later interpreted as UTF-7, then Unicode blocks may be used to slip malicious strings past them. To mitigate this problem, systems should perform decoding before validation and should avoid attempting to autodetect UTF-7.
Older versions of Internet Explorer can be tricked into interpreting the page as UTF-7. This can be used for a cross-site scripting attack as the < and > marks can be encoded as +ADw- and +AD4- in UTF-7, which most validators let through as simple text.
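For illustration, the following sketch (again using Python's utf_7 codec) shows why a byte-level filter scanning for < and > would pass a UTF-7-encoded script tag:

```python
payload = "<script>alert(1)</script>"
encoded = payload.encode("utf-7")
print(encoded)
# b'+ADw-script+AD4-alert(1)+ADw-/script+AD4-'
# No literal '<' or '>' bytes appear, so a naive ASCII filter sees none;
# a browser that decodes the page as UTF-7 recovers the markup anyway.
assert encoded.decode("utf-7") == payload
```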
UTF-7 is considered obsolete, at least for Microsoft software (.NET), with code paths previously supporting it intentionally broken (to prevent security issues) in .NET 5, in 2020.
See also
Comparison of Unicode encodings
References
Character encoding
Unicode Transformation Formats | UTF-7 | [
"Technology"
] | 2,107 | [
"Natural language and computing",
"Character encoding"
] |
321,790 | https://en.wikipedia.org/wiki/Cunningham%20chain | In mathematics, a Cunningham chain is a certain sequence of prime numbers. Cunningham chains are named after mathematician A. J. C. Cunningham. They are also called chains of nearly doubled primes.
Definition
A Cunningham chain of the first kind of length n is a sequence of prime numbers (p1, ..., pn) such that pi+1 = 2pi + 1 for all 1 ≤ i < n. (Hence each term of such a chain except the last is a Sophie Germain prime, and each term except the first is a safe prime).
It follows that pi = 2^(i−1) · (p1 + 1) − 1,
or, by setting a = (p1 + 1)/2 (the number a is not part of the sequence and need not be a prime number), we have pi = 2^i · a − 1.
Similarly, a Cunningham chain of the second kind of length n is a sequence of prime numbers (p1, ..., pn) such that pi+1 = 2pi − 1 for all 1 ≤ i < n.
It follows that the general term is pi = 2^(i−1) · (p1 − 1) + 1.
Now, by setting a = p1 − 1, we have pi = 2^(i−1) · a + 1.
Cunningham chains are also sometimes generalized to sequences of prime numbers (p1, ..., pn) such that pi+1 = api + b for all 1 ≤ i < n for fixed coprime integers a and b; the resulting chains are called generalized Cunningham chains.
A Cunningham chain is called complete if it cannot be further extended, i.e., if the previous and the next terms in the chain are not prime numbers.
Examples
Examples of complete Cunningham chains of the first kind include these:
2, 5, 11, 23, 47 (The next number would be 95, but that is not prime.)
3, 7 (The next number would be 15, but that is not prime.)
29, 59 (The next number would be 119, but that is not prime.)
41, 83, 167 (The next number would be 335, but that is not prime.)
89, 179, 359, 719, 1439, 2879 (The next number would be 5759, but that is not prime.)
Examples of complete Cunningham chains of the second kind include these:
2, 3, 5 (The next number would be 9, but that is not prime.)
7, 13 (The next number would be 25, but that is not prime.)
19, 37, 73 (The next number would be 145, but that is not prime.)
31, 61 (The next number would be 121 = 112, but that is not prime.)
Cunningham chains are now considered useful in cryptographic systems since "they provide two concurrent suitable settings for the ElGamal cryptosystem ... [which] can be implemented in any field where the discrete logarithm problem is difficult."
Largest known Cunningham chains
It follows from Dickson's conjecture and the broader Schinzel's hypothesis H, both widely believed to be true, that for every k there are infinitely many Cunningham chains of length k. There are, however, no known direct methods of generating such chains.
There are computing competitions for the longest Cunningham chain or for the one built up of the largest primes, but unlike the breakthrough of Ben J. Green and Terence Tao – the Green–Tao theorem, that there are arithmetic progressions of primes of arbitrary length – there is no general result known on large Cunningham chains to date.
q# denotes the primorial 2 × 3 × 5 × 7 × ... × q.
The longest known Cunningham chain of either kind is of length 19, discovered by Jaroslaw Wroblewski in 2014.
Congruences of Cunningham chains
Let the odd prime p1 be the first prime of a Cunningham chain of the first kind. The first prime is odd, thus p1 ≡ 1 (mod 2). Since each successive prime in the chain is pi+1 = 2pi + 1, it follows that pi ≡ 2^i − 1 (mod 2^i). Thus, p2 ≡ 3 (mod 4), p3 ≡ 7 (mod 8), and so forth.
The above property can be informally observed by considering the primes of a chain in base 2. (Note that, as with all bases, multiplying by the base "shifts" the digits to the left; e.g. in decimal we have 314 × 10 = 3140.) When we consider in base 2, we see that, by multiplying by 2, the least significant digit of becomes the secondmost least significant digit of . Because is odd—that is, the least significant digit is 1 in base 2–we know that the secondmost least significant digit of is also 1. And, finally, we can see that will be odd due to the addition of 1 to . In this way, successive primes in a Cunningham chain are essentially shifted left in binary with ones filling in the least significant digits. For example, here is a complete length 6 chain which starts at 141361469:
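A minimal sketch of this observation in Python (the naive trial-division primality test and the field widths are illustrative only):

```python
def is_prime(n: int) -> bool:
    # Naive trial division; adequate for numbers of this size.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

p = 141361469  # first prime of the length-6 chain mentioned above
while is_prime(p):
    print(f"{p:>10}  {p:033b}")
    p = 2 * p + 1  # recurrence for a chain of the first kind
```

Each printed binary value is the previous one shifted left with a 1 filling in the least significant digit, and the loop stops after six primes because the next candidate is composite.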
A similar result holds for Cunningham chains of the second kind. From the observation that p1 ≡ 1 (mod 2) and the relation pi+1 = 2pi − 1 it follows that pi ≡ 1 (mod 2^i). In binary notation, the primes in a Cunningham chain of the second kind end with a pattern "0...01", where, for each i, the number of zeros in the pattern for pi+1 is one more than the number of zeros for pi. As with Cunningham chains of the first kind, the bits left of the pattern shift left by one position with each successive prime.
Similarly, because pi = 2^(i−1) · (p1 + 1) − 1 it follows that pi ≡ 2^(i−1) − 1 (mod p1). But, by Fermat's little theorem, 2^(p1−1) ≡ 1 (mod p1), so p1 divides pp1 (i.e. taking i = p1 gives pi ≡ 2^(p1−1) − 1 ≡ 0 (mod p1)). Thus, no Cunningham chain can be of infinite length.
See also
Primecoin, which uses Cunningham chains as a proof-of-work system
Bi-twin chain
Primes in arithmetic progression
References
External links
The Prime Glossary: Cunningham chain
Primecoin discoveries (primes.zone): online database of primecoin findings with list of records and visualization
PrimeLinks++: Cunningham chain
-- the first term of the lowest complete Cunningham chains of the first kind of length n, for 1 ≤ n ≤ 14
-- the first term of the lowest complete Cunningham chains of the second kind with length n, for 1 ≤ n ≤ 15
Prime numbers | Cunningham chain | [
"Mathematics"
] | 1,230 | [
"Prime numbers",
"Mathematical objects",
"Numbers",
"Number theory"
] |
321,801 | https://en.wikipedia.org/wiki/Multiply%20perfect%20number | In mathematics, a multiply perfect number (also called multiperfect number or pluperfect number) is a generalization of a perfect number.
For a given natural number k, a number n is called k-perfect (or k-fold perfect) if the sum of all positive divisors of n (the divisor function, σ(n)) is equal to kn; a number is thus perfect if and only if it is 2-perfect. A number that is k-perfect for a certain k is called a multiply perfect number. As of 2014, k-perfect numbers are known for each value of k up to 11.
It is unknown whether there are any odd multiply perfect numbers other than 1. The first few multiply perfect numbers are:
1, 6, 28, 120, 496, 672, 8128, 30240, 32760, 523776, 2178540, 23569920, 33550336, 45532800, 142990848, 459818240, ... .
Example
The sum of the divisors of 120 is
1 + 2 + 3 + 4 + 5 + 6 + 8 + 10 + 12 + 15 + 20 + 24 + 30 + 40 + 60 + 120 = 360
which is 3 × 120. Therefore 120 is a 3-perfect number.
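A short sketch of this check in Python (a naive O(√n) divisor sum, adequate for small n):

```python
def sigma(n: int) -> int:
    """Sum of all positive divisors of n."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:       # avoid double-counting a square root
                total += n // d
        d += 1
    return total

for n in (6, 28, 120, 30240):
    k, r = divmod(sigma(n), n)
    print(n, f"{k}-perfect" if r == 0 else "not multiply perfect")
# 6 and 28 are 2-perfect, 120 is 3-perfect, 30240 is 4-perfect
```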
Smallest known k-perfect numbers
The smallest known k-perfect numbers for k ≤ 11 begin 1, 6, 120, 30240 and 14182439040 (for k = 1, 2, 3, 4, 5).
Properties
It can be proven that:
For a given prime number p, if n is p-perfect and p does not divide n, then pn is (p + 1)-perfect. This implies that an integer n is a 3-perfect number divisible by 2 but not by 4, if and only if n/2 is an odd perfect number, of which none are known.
If 3n is 4k-perfect and 3 does not divide n, then n is 3k-perfect.
Odd multiply perfect numbers
It is unknown whether there are any odd multiply perfect numbers other than 1. However if an odd k-perfect number n exists where k > 2, then it must satisfy the following conditions:
The largest prime factor is ≥ 100129
The second largest prime factor is ≥ 1009
The third largest prime factor is ≥ 101
Bounds
In little-o notation, the number of multiply perfect numbers less than x is o(x^ε) for all ε > 0.
The number of k-perfect numbers n for n ≤ x is less than c·x^(c′·log log log x / log log x), where c and c′ are constants independent of k.
Under the assumption of the Riemann hypothesis, the following inequality is true for all k-perfect numbers n, where k > 3:
k < e^γ · log log n,
where γ is Euler's gamma constant. This can be proven using Robin's theorem.
The number of divisors τ(n) of a k-perfect number n satisfies the inequality τ(n) > e^(k−1); this follows because k = σ(n)/n equals the sum of 1/d over the divisors d of n, which is less than 1 + ln τ(n).
The number of distinct prime factors ω(n) of n satisfies ω(n) ≥ k, since k = σ(n)/n is less than the product of p/(p − 1) over the primes p dividing n, and that product is at most ω(n) + 1.
If the distinct prime factors of n are p1, ..., pr, then:
k < p1/(p1 − 1) · p2/(p2 − 1) · ... · pr/(pr − 1).
Specific values of k
Perfect numbers
A number n with σ(n) = 2n is perfect.
Triperfect numbers
A number n with σ(n) = 3n is triperfect. There are only six known triperfect numbers and these are believed to comprise all such numbers:
120, 672, 523776, 459818240, 1476304896, 51001180160
If there exists an odd perfect number m (a famous open problem) then 2m would be triperfect, since σ(2m) = σ(2) σ(m) = 3×2m. An odd triperfect number must be a square number exceeding 10^70 and have at least 12 distinct prime factors, the largest exceeding 10^5.
Variations
Unitary multiply perfect numbers
A similar extension can be made for unitary perfect numbers. A positive integer n is called a unitary multi k-perfect number if σ*(n) = kn where σ*(n) is the sum of its unitary divisors. (A divisor d of a number n is a unitary divisor if d and n/d share no common factors.)
A unitary multiply perfect number is simply a unitary multi k-perfect number for some positive integer k. Equivalently, unitary multiply perfect numbers are those n for which n divides σ*(n). A unitary multi 2-perfect number is naturally called a unitary perfect number. In the case k > 2, no example of a unitary multi k-perfect number is yet known. It is known that if such a number exists, it must be even and greater than 10^102 and must have more than forty-four odd prime factors. This problem is probably very difficult to settle. The concept of unitary divisor was originally due to R. Vaidyanathaswamy (1931), who called such a divisor a block factor. The present terminology is due to E. Cohen (1960).
The first few unitary multiply perfect numbers are:
1, 6, 60, 90, 87360
Bi-unitary multiply perfect numbers
A positive integer n is called a bi-unitary multi k-perfect number if σ**(n) = kn where σ**(n) is the sum of its bi-unitary divisors. This concept is due to Peter Hagis (1987). A bi-unitary multiply perfect number is simply a bi-unitary multi k-perfect number for some positive integer k. Equivalently, bi-unitary multiply perfect numbers are those n for which n divides σ**(n). A bi-unitary multi 2-perfect number is naturally called a bi-unitary perfect number, and a bi-unitary multi 3-perfect number is called a bi-unitary triperfect number.
A divisor d of a positive integer n is called a bi-unitary divisor of n if the greatest common unitary divisor (gcud) of d and n/d equals 1. This concept is due to D. Surynarayana (1972). The sum of the (positive) bi-unitary divisors of n is denoted by σ**(n).
Peter Hagis (1987) proved that there are no odd bi-unitary multiperfect numbers other than 1. Haukkanen and Sitaramaiah (2020) found all bi-unitary triperfect numbers of the form 2^a·u where 1 ≤ a ≤ 6 and u is odd, and partially settled the case a = 7. Further, they completely settled the case a = 8.
The first few bi-unitary multiply perfect numbers are:
1, 6, 60, 90, 120, 672, 2160, 10080, 22848, 30240
References
Sources
See also
Hemiperfect number
External links
The Multiply Perfect Numbers page
The Prime Glossary: Multiply perfect numbers
Arithmetic dynamics
Divisor function
Perfect numbers | Multiply perfect number | [
"Mathematics"
] | 1,370 | [
"Recreational mathematics",
"Perfect numbers",
"Arithmetic dynamics",
"Number theory",
"Dynamical systems"
] |
321,827 | https://en.wikipedia.org/wiki/Thermosetting%20polymer | In materials science, a thermosetting polymer, often called a thermoset, is a polymer that is obtained by irreversibly hardening ("curing") a soft solid or viscous liquid prepolymer (resin). Curing is induced by heat or suitable radiation and may be promoted by high pressure or mixing with a catalyst. Heat is not necessarily applied externally, and is often generated by the reaction of the resin with a curing agent (catalyst, hardener). Curing results in chemical reactions that create extensive cross-linking between polymer chains to produce an infusible and insoluble polymer network.
The starting material for making thermosets is usually malleable or liquid prior to curing, and is often designed to be molded into the final shape. It may also be used as an adhesive. Once hardened, a thermoset cannot be melted for reshaping, in contrast to thermoplastic polymers which are commonly produced and distributed in the form of pellets, and shaped into the final product form by melting, pressing, or injection molding.
Chemical process
Curing a thermosetting resin transforms it into a plastic, or elastomer (rubber) by crosslinking or chain extension through the formation of covalent bonds between individual chains of the polymer. Crosslink density varies depending on the monomer or prepolymer mix, and the mechanism of crosslinking:
Acrylic resins, polyesters and vinyl esters with unsaturated sites at the ends or on the backbone are generally linked by copolymerisation with unsaturated monomer diluents, with cure initiated by free radicals generated from ionizing radiation or by the photolytic or thermal decomposition of a radical initiator – the intensity of crosslinking is influenced by the degree of backbone unsaturation in the prepolymer;
Epoxy functional resins can be homo-polymerized with anionic or cationic catalysts and heat, or copolymerised through nucleophilic addition reactions with multifunctional crosslinking agents which are also known as curing agents or hardeners. As reaction proceeds, larger and larger molecules are formed and highly branched crosslinked structures develop, the rate of cure being influenced by the physical form and functionality of epoxy resins and curing agents – elevated temperature postcuring induces secondary crosslinking of backbone hydroxyl functionality which condense to form ether bonds;
Polyurethanes form when isocyanate resins and prepolymers are combined with low- or high-molecular weight polyols, with strict stoichiometric ratios being essential to control nucleophilic addition polymerisation – the degree of crosslinking and resulting physical type (elastomer or plastic) is adjusted from the molecular weight and functionality of isocyanate resins, prepolymers, and the exact combinations of diols, triols and polyols selected, with the rate of reaction being strongly influenced by catalysts and inhibitors; polyureas form virtually instantaneously when isocyanate resins are combined with long-chain amine functional polyether or polyester resins and short-chain diamine extenders – the amine-isocyanate nucleophilic addition reaction does not require catalysts. Polyureas also form when isocyanate resins come into contact with moisture;
Phenolic, amino, and furan resins are all cured by polycondensation involving the release of water and heat, with cure initiation and polymerisation exotherm control influenced by curing temperature, catalyst selection or loading and processing method or pressure – the degree of pre-polymerisation and level of residual hydroxymethyl content in the resins determine the crosslink density.
Polybenzoxazines are cured by an exothermal ring-opening polymerisation without releasing any chemical, which translates in near zero shrinkage upon polymerisation.
Thermosetting polymer mixtures based on thermosetting resin monomers and pre-polymers can be formulated and applied and processed in a variety of ways to create distinctive cured properties that cannot be achieved with thermoplastic polymers or inorganic materials.
Properties
Thermosetting plastics are generally stronger than thermoplastic materials due to the three-dimensional network of bonds (crosslinking), and are also better suited to high-temperature applications up to the decomposition temperature since they keep their shape as strong covalent bonds between polymer chains cannot be broken easily. The higher the crosslink density and aromatic content of a thermoset polymer, the higher the resistance to heat degradation and chemical attack. Mechanical strength and hardness also improve with crosslink density, although at the expense of brittleness. They normally decompose before melting.
Hard, plastic thermosets may undergo permanent or plastic deformation under load. Elastomers are soft and springy or rubbery: they can be deformed, and revert to their original shape when the load is released.
Conventional thermoset plastics or elastomers cannot be melted and re-shaped after they are cured. This usually prevents recycling for the same purpose, except as filler material. New developments involving thermoset epoxy resins, which form crosslinked networks on controlled and contained heating, permit repeated reshaping, like silica glass, by reversible covalent bond exchange reactions on reheating above the glass transition temperature. There are also thermoset polyurethanes shown to have transient properties and which can thus be reprocessed or recycled.
Fiber-reinforced materials
When compounded with fibers, thermosetting resins form fiber-reinforced polymer composites, which are used in the fabrication of factory-finished structural composite OEM or replacement parts, and as site-applied, cured and finished composite repair and protection materials. When used as the binder for aggregates and other solid fillers, they form particulate-reinforced polymer composites, which are used for factory-applied protective coating or component manufacture, and for site-applied and cured construction, or maintenance purposes.
Materials
Epoxy resin used as the matrix component in many fiber reinforced plastics such as glass-reinforced plastic and graphite-reinforced plastic; casting; electronics encapsulation; construction; protective coatings; adhesives; sealing and joining.
Polyimides and Bismaleimides used in printed circuit boards and in body parts of modern aircraft, aerospace composite structures, as a coating material and for glass reinforced pipes.
Cyanate esters or polycyanurates for electronics applications with need for dielectric properties and high glass temperature requirements in aerospace structural composite components.
Polyester resin fiberglass systems: sheet molding compounds and bulk molding compounds; filament winding; wet lay-up lamination; repair compounds and protective coatings.
Polyurethanes: insulating foams, mattresses, coatings, adhesives, car parts, print rollers, shoe soles, flooring, synthetic fibers, etc. Polyurethane polymers are formed by combining two bi- or higher functional monomers/oligomers.
Polyurea/polyurethane hybrids used for abrasion resistant waterproofing coatings.
Vulcanized rubber.
Bakelite, a phenol-formaldehyde resin used in electrical insulators and plasticware.
Duroplast, a light but strong material similar to Bakelite, formerly used in the manufacture of the Trabant automobile and currently used for household objects
Urea-formaldehyde foam used in plywood, particleboard and medium-density fibreboard.
Melamine resin used on worktop surfaces and some plastic dishes.
Diallyl-phthalate (DAP) used in high temperature and mil-spec electrical connectors and other components. Usually glass filled.
Epoxy novolac resins used for printed circuit boards, electrical encapsulation, adhesives and coatings for metal.
Benzoxazines, used alone or hybridised with epoxy and phenolic resins, for structural prepregs, liquid molding and film adhesives for composite construction, bonding and repair.
Mold or mold runners (the black plastic part in integrated circuits or semiconductors).
Furan resins used in the manufacture of sustainable biocomposite construction, cements, adhesives, coatings and casting/foundry resins.
Silicone resins used for thermoset polymer matrix composites and as ceramic matrix composite precursors.
Thiolyte, an electrical insulating thermoset phenolic laminate material.
Vinyl ester resins used for wet lay-up laminating, molding and fast setting industrial protection and repair materials.
Applications
Application/process uses and methods for thermosets include protective coating, seamless flooring, civil engineering construction grouts for jointing and injection, mortars, foundry sands, adhesives, sealants, castings, potting, electrical insulation, encapsulation, solid foams, wet lay-up laminating, pultrusion, gelcoats, filament winding, pre-pregs, and molding.
Specific methods of molding thermosets are:
Reactive injection moulding (used for objects such as milk bottle crates)
Extrusion molding (used for making pipes, threads of fabric and insulation for electrical cables)
Compression molding (used to shape SMC and BMC thermosetting plastics)
Spin casting (used for producing fishing lures and jigs, gaming miniatures, figurines, emblems as well as production and replacement parts)
See also
Fusion bonded epoxy coating
Thermoset polymer matrix
Vulcanization
References
Polymer chemistry | Thermosetting polymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,030 | [
"Materials science",
"Polymer chemistry"
] |
321,831 | https://en.wikipedia.org/wiki/Abundant%20number | In number theory, an abundant number or excessive number is a positive integer for which the sum of its proper divisors is greater than the number. The integer 12 is the first abundant number. Its proper divisors are 1, 2, 3, 4 and 6 for a total of 16. The amount by which the sum exceeds the number is the abundance. The number 12 has an abundance of 4, for example.
Definition
An abundant number is a natural number n for which the sum of divisors satisfies σ(n) > 2n, or, equivalently, the sum of proper divisors (or aliquot sum) satisfies s(n) > n.
The abundance of a natural number n is the integer σ(n) − 2n (equivalently, s(n) − n).
Examples
The first 28 abundant numbers are:
12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60, 66, 70, 72, 78, 80, 84, 88, 90, 96, 100, 102, 104, 108, 112, 114, 120, ... .
For example, the proper divisors of 24 are 1, 2, 3, 4, 6, 8, and 12, whose sum is 36. Because 36 is greater than 24, the number 24 is abundant. Its abundance is 36 − 24 = 12.
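A minimal sketch classifying numbers by their aliquot sum, and reporting the abundance where it is positive:

```python
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (naive O(sqrt n) version)."""
    total, d = 1 if n > 1 else 0, 2   # 1 is a proper divisor of any n > 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

for n in (12, 21, 24, 28, 945):
    s = aliquot_sum(n)
    kind = "abundant" if s > n else "perfect" if s == n else "deficient"
    print(n, kind, f"(abundance {s - n})" if s > n else "")
# 12 and 24 are abundant, 21 is deficient, 28 is perfect,
# and 945 is the smallest odd abundant number.
```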
Properties
The smallest odd abundant number is 945.
The smallest abundant number not divisible by 2 or by 3 is 5391411025 whose distinct prime factors are 5, 7, 11, 13, 17, 19, 23, and 29. An algorithm given by Iannucci in 2005 shows how to find the smallest abundant number not divisible by the first k primes. If A(k) represents the smallest abundant number not divisible by the first k primes, then for all ε > 0 we have
(1 − ε)·(k ln k)^(2−ε) < ln A(k) < (1 + ε)·(k ln k)^(2+ε)
for sufficiently large k.
Every multiple of a perfect number (except the perfect number itself) is abundant. For example, every multiple 6n of 6 with n > 1 is abundant because its proper divisors include 3n, 2n, n and 1, which sum to 6n + 1 > 6n.
Every multiple of an abundant number is abundant. For example, every multiple 20n of 20 (including 20 itself) is abundant because its proper divisors include 10n, 5n, 4n and 2n, which sum to 21n > 20n.
Consequently, infinitely many even and odd abundant numbers exist.
Furthermore, the set of abundant numbers has a non-zero natural density. Marc Deléglise showed in 1998 that the natural density of the set of abundant numbers and perfect numbers is between 0.2474 and 0.2480.
An abundant number which is not the multiple of an abundant number or perfect number (i.e. all its proper divisors are deficient) is called a primitive abundant number.
An abundant number whose abundance is greater than any lower number is called a highly abundant number, and one whose relative abundance (i.e. s(n)/n) is greater than any lower number is called a superabundant number.
Every integer greater than 20161 can be written as the sum of two abundant numbers. The largest even number that is not the sum of two abundant numbers is 46.
An abundant number which is not a semiperfect number is called a weird number. An abundant number with abundance 1 is called a quasiperfect number, although none have yet been found.
Every abundant number is a multiple of either a perfect number or a primitive abundant number.
Related concepts
Numbers whose sum of proper factors equals the number itself (such as 6 and 28) are called perfect numbers, while numbers whose sum of proper factors is less than the number itself are called deficient numbers. The first known classification of numbers as deficient, perfect or abundant was by Nicomachus in his Introductio Arithmetica (circa 100 AD), which described abundant numbers as like deformed animals with too many limbs.
The abundancy index of n is the ratio σ(n)/n. Distinct numbers n1, n2, ... (whether abundant or not) with the same abundancy index are called friendly numbers.
The sequence (ak) of least numbers n such that σ(n) > kn, in which a2 = 12 corresponds to the first abundant number, grows very quickly.
The smallest odd integer with abundancy index exceeding 3 is 1018976683725 = 3^3 × 5^2 × 7^2 × 11 × 13 × 17 × 19 × 23 × 29.
If p = (p1, ..., pn) is a list of primes, then p is termed abundant if some integer composed only of primes in p is abundant. A necessary and sufficient condition for this is that the product of pi/(pi − 1) be > 2.
References
External links
The Prime Glossary: Abundant number
Arithmetic dynamics
Divisor function
Integer sequences | Abundant number | [
"Mathematics"
] | 946 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Arithmetic dynamics",
"Combinatorics",
"Numbers",
"Number theory",
"Dynamical systems"
] |
321,843 | https://en.wikipedia.org/wiki/Deficient%20number | In number theory, a deficient number or defective number is a positive integer for which the sum of divisors of is less than . Equivalently, it is a number for which the sum of proper divisors (or aliquot sum) is less than . For example, the proper divisors of 8 are , and their sum is less than 8, so 8 is deficient.
Denoting by σ(n) the sum of divisors, the value 2n − σ(n) is called the number's deficiency. In terms of the aliquot sum s(n), the deficiency is n − s(n).
Examples
The first few deficient numbers are
1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 21, 22, 23, 25, 26, 27, 29, 31, 32, 33, 34, 35, 37, 38, 39, 41, 43, 44, 45, 46, 47, 49, 50, ...
As an example, consider the number 21. Its divisors are 1, 3, 7 and 21, and their sum is 32. Because 32 is less than 42, the number 21 is deficient. Its deficiency is 2 × 21 − 32 = 10.
Properties
Since the aliquot sums of prime numbers equal 1, all prime numbers are deficient. More generally, all odd numbers with one or two distinct prime factors are deficient. It follows that there are infinitely many odd deficient numbers. There are also an infinite number of even deficient numbers, as all powers of two 2^n have aliquot sum 2^n − 1 (which is less than 2^n).
More generally, all prime powers p^n are deficient, because their only proper divisors are 1, p, p^2, ..., p^(n−1), which sum to (p^n − 1)/(p − 1), which is at most p^n − 1.
All proper divisors of deficient numbers are deficient. Moreover, all proper divisors of perfect numbers are deficient.
There exists at least one deficient number in the interval [n, n + (log n)^2] for all sufficiently large n.
Related concepts
Closely related to deficient numbers are perfect numbers with σ(n) = 2n, and abundant numbers with σ(n) > 2n.
Nicomachus was the first to subdivide numbers into deficient, perfect, or abundant, in his Introduction to Arithmetic (circa 100 CE). However, he applied this classification only to the even numbers.
See also
Almost perfect number
Amicable number
Sociable number
Superabundant number
Notes
References
External links
The Prime Glossary: Deficient number
Arithmetic dynamics
Divisor function
Integer sequences | Deficient number | [
"Mathematics"
] | 512 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Arithmetic dynamics",
"Combinatorics",
"Numbers",
"Number theory",
"Dynamical systems"
] |
321,869 | https://en.wikipedia.org/wiki/Coding%20theory | Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
There are four types of coding:
Data compression (or source coding)
Error control (or channel coding)
Cryptographic coding
Line coding
Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination.
Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.
History of coding theory
In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.
The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth.
Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance.
In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3.
Source coding
The aim of source coding is to take the source data and make it smaller.
Definition
Data can be seen as a random variable X, where each value x appears with probability P[X = x].
Data are encoded by strings (words) over an alphabet Σ.
A code is a function
C : X → Σ* (or Σ+, if the empty string is not part of the alphabet).
C(x) is the code word associated with x.
The length of the code word C(x) is written as l(C(x)).
The expected length of a code C is l(C) = Σx l(C(x)) P[X = x].
The concatenation of code words is C(x1, x2, ..., xk) = C(x1) C(x2) ... C(xk).
The code word of the empty string is the empty string itself: C(ε) = ε.
Properties
C is non-singular if it is injective, i.e. different values x map to different code words.
C is uniquely decodable if its extension to sequences of values (by concatenation of code words) is injective.
C is instantaneous (a prefix code) if no code word C(x) is a proper prefix of another code word C(x′).
Principle
Entropy of a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information.
Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding.
Various techniques used by source coding schemes try to achieve the limit of entropy of the source: C(x) ≥ H(x), where H(x) is the entropy of the source (bitrate), and C(x) is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source.
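As a small illustration of this bound, consider a toy three-symbol source and a made-up prefix code for it (the probabilities and code are assumptions for the example, not from any standard):

```python
from math import log2

probs = {"a": 0.5, "b": 0.25, "c": 0.25}
code = {"a": "0", "b": "10", "c": "11"}   # an instantaneous (prefix) code

H = -sum(p * log2(p) for p in probs.values())    # source entropy: 1.5 bits
L = sum(probs[x] * len(code[x]) for x in probs)  # expected length: 1.5 bits
print(H, L)  # this code meets the entropy bound C(x) >= H(x) with equality
```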
Example
Facsimile transmission uses a simple run length code. Source coding removes all data superfluous to the need of the transmitter, decreasing the bandwidth required for transmission.
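A facsimile-style run-length code can be sketched in a few lines (the (symbol, run-length) pair representation is illustrative; real fax codes map run lengths to variable-length bit patterns):

```python
from itertools import groupby

def rle_encode(bits: str):
    """Replace each run of identical symbols with a (symbol, length) pair."""
    return [(b, len(list(g))) for b, g in groupby(bits)]

def rle_decode(runs) -> str:
    return "".join(b * n for b, n in runs)

scanline = "0" * 7 + "1" * 4 + "0" * 10 + "1" * 2
runs = rle_encode(scanline)
print(runs)  # [('0', 7), ('1', 4), ('0', 10), ('1', 2)]
assert rle_decode(runs) == scanline
```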
Channel coding
The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words and can correct or at least detect many errors. While not mutually exclusive, performance in these areas is a trade-off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches.
CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disk.
Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we will examine the three repetitions bit by bit and take a majority vote. The twist on this is that we do not merely send the bits in order. We interleave them. The block of data bits is first divided into 4 smaller blocks. Then we cycle through the block and send one bit from the first, then the second, etc. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code, this may not appear effective. However, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used.
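The repeat-and-vote idea can be sketched as follows; this is a toy model of the simple repeat code described above, not of the cross-interleaved Reed–Solomon coding actually used on CDs:

```python
def encode(block: str) -> str:
    return block * 3  # send the whole data block three times

def decode(received: str) -> str:
    n = len(received) // 3
    copies = received[:n], received[n:2 * n], received[2 * n:]
    # Majority vote, bit position by bit position (3 votes, so no ties).
    return "".join(max("01", key="".join(votes).count) for votes in zip(*copies))

sent = encode("1011")                 # '101110111011'
corrupted = "0011" + "1011" + "1010"  # one flipped bit in two of the copies
print(decode(corrupted))              # '1011' — both errors corrected
```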
Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise, present in the telephone network and also modeled better as a continuous disturbance. Cell phones are subject to rapid fading. The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again there are a class of channel codes that are designed to combat fading.
Linear codes
The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched.
Algebraic coding theory is basically divided into two major types of codes:
Linear block codes
Convolutional codes
It analyzes the following three properties of a code – mainly:
Code word length
Total number of valid code words
The minimum distance between two valid code words, using mainly the Hamming distance, sometimes also other distances like the Lee distance
Linear block codes
Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property.
Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n,m,dmin) where
n is the length of the codeword, in symbols,
m is the number of source symbols that will be used for encoding at once,
dmin is the minimum Hamming distance for the code.
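For a small code, dmin can be found by brute force over all pairs of code words. The sketch below uses the binary (3,2) even-parity code as an example:

```python
from itertools import combinations

def hamming_distance(u: str, v: str) -> int:
    return sum(a != b for a, b in zip(u, v))

code = ["000", "011", "101", "110"]  # (3,2) binary even-parity code
dmin = min(hamming_distance(u, v) for u, v in combinations(code, 2))
print(dmin)  # 2: every single-bit error is detected, but none can be corrected
```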
There are many types of linear block codes, such as
Cyclic codes (e.g., Hamming codes)
Repetition codes
Parity codes
Polynomial codes (e.g., BCH codes)
Reed–Solomon codes
Algebraic geometric codes
Reed–Muller codes
Perfect codes
Locally recoverable code
Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12) Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r − 1, 2^r − 1 − r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes.
Another code property is the number of neighbors that a single codeword may have.
Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.
Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping, one of the best-known shaping codes.
Convolutional codes
The idea behind a convolutional code is to make every codeword symbol be the weighted sum of the various input message symbols. This is like convolution used in LTI systems to find the output of a system, when you know the input and impulse response.
So, in general, the output of a convolutional encoder is found as the convolution of the input bits with the states of the encoder's registers.
Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. In many cases, they generally offer greater simplicity of implementation over a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normally XOR gates. The decoder can be implemented in software or firmware.
The Viterbi algorithm is the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load. They rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low noise environments.
Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices.
Cryptographic coding
Cryptography or cryptographic coding is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that block adversaries; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
Line coding
A line code (also called digital baseband modulation or digital baseband transmission method) is a code chosen for use within a communications system for baseband transmission purposes. Line coding is often used for digital data transport.
Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of a digital data on a transmission link is called line encoding. The common types of line encoding are unipolar, polar, bipolar, and Manchester encoding.
Other applications of coding theory
Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected and that multiple signals can be sent on the same channel.
Another application of codes, used in some mobile phone systems, is code-division multiple access (CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones. When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as a low-level noise.
Another general class of codes are the automatic repeat-request (ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplest wide area network protocols use ARQ. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet. Is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP.
Group testing
Group testing uses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in the Second World War when the United States Army Air Forces needed to test its soldiers for syphilis.
Analog coding
Information is encoded analogously in the neural networks of brains, in analog signal processing, and analog electronics. Aspects of analog coding include analog error correction,
analog data compression and analog encryption.
Neural coding
Neural coding is a neuroscience-related field concerned with how sensory and other information is represented in the brain by networks of neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information, and that neurons follow the principles of information theory and compress information, and detect and correct
errors in the signals that are sent throughout the brain and wider nervous system.
See also
Coding gain
Covering code
Error correction code
Folded Reed–Solomon code
Group testing
Hamming distance, Hamming weight
Lee distance
List of algebraic coding theory topics
Spatial coding and MIMO in multiple antenna research
Spatial diversity coding is spatial coding that transmits replicas of the information signal along different spatial paths, so as to increase the reliability of the data transmission.
Spatial interference cancellation coding
Spatial multiplex coding
Timeline of information theory, data compression, and error correcting codes
Notes
References
Elwyn R. Berlekamp (2014), Algebraic Coding Theory, World Scientific Publishing (revised edition), .
MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003.
Vera Pless (1982), Introduction to the Theory of Error-Correcting Codes, John Wiley & Sons, Inc., .
Randy Yates, A Coding Theory Tutorial.
Error detection and correction | Coding theory | [
"Mathematics",
"Engineering"
] | 3,600 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |