id | url | text | source | categories | token_count
|---|---|---|---|---|---|
597,244 | https://en.wikipedia.org/wiki/Carbon%20star | A carbon star (C-type star) is typically an asymptotic giant branch star, a luminous red giant, whose atmosphere contains more carbon than oxygen. The two elements combine in the upper layers of the star, forming carbon monoxide, which consumes most of the oxygen in the atmosphere, leaving carbon atoms free to form other carbon compounds, giving the star a "sooty" atmosphere and a strikingly ruby red appearance. There are also some dwarf and supergiant carbon stars, with the more common giant stars sometimes being called classical carbon stars to distinguish them.
In most stars (such as the Sun), the atmosphere is richer in oxygen than carbon. Ordinary stars not exhibiting the characteristics of carbon stars but cool enough to form carbon monoxide are therefore called oxygen-rich stars.
Carbon stars have quite distinctive spectral characteristics, and they were first recognized by their spectra by Angelo Secchi in the 1860s, a pioneering time in astronomical spectroscopy.
Spectra
By definition carbon stars have dominant spectral Swan bands from the molecule C2. Many other carbon compounds may be present at high levels, such as CH, CN (cyanogen), C3 and SiC2. Carbon is formed in the core and circulated into its upper layers, dramatically changing the layers' composition. In addition to carbon, S-process elements such as barium, technetium, and zirconium are formed in the shell flashes and are "dredged up" to the surface.
When astronomers developed the spectral classification of the carbon stars, they had considerable difficulty when trying to correlate the spectra to the stars' effective temperatures. The trouble was with all the atmospheric carbon hiding the absorption lines normally used as temperature indicators for the stars.
Carbon stars also show a rich spectrum of molecular lines at millimeter wavelengths and submillimeter wavelengths. In the carbon star CW Leonis more than 50 different circumstellar molecules have been detected. This star is often used to search for new circumstellar molecules.
Secchi
Carbon stars were discovered as early as the 1860s, when spectral classification pioneer Angelo Secchi established his Secchi class IV for the carbon stars; in the late 1890s these were reclassified as N-class stars.
Harvard
Within the new Harvard classification, the N class was later supplemented by an R class for less deeply red stars that share the characteristic carbon bands of the spectrum. Later correlation of this R-to-N scheme with conventional spectra showed that the R-N sequence runs approximately in parallel with about G7 to M10 in terms of stellar temperature.
Morgan–Keenan C system
The later N classes correspond less well to their M-type counterparts, because the Harvard classification was based not only on temperature but also on carbon abundance; it therefore soon became clear that this kind of carbon star classification was incomplete. Instead, a new dual-number star class C was introduced to deal with both temperature and carbon abundance. Such a spectrum, measured for Y Canum Venaticorum, was determined to be C54, where 5 refers to temperature-dependent features and 4 to the strength of the C2 Swan bands in the spectrum. (C54 is very often alternatively written C5,4.) This Morgan–Keenan C system classification replaced the older R-N classifications from 1960 to 1993.
The Revised Morgan–Keenan system
The two-dimensional Morgan–Keenan C classification failed to fulfill the creators' expectations:
it failed to correlate to temperature measurements based on infrared,
originally two-dimensional, it was soon extended with suffixes (CH, CN, j and other features), making it impractical for en-masse analyses of the carbon star populations of other galaxies,
and it gradually became apparent that the old R and N stars were actually two distinct types of carbon stars, a distinction of real astrophysical significance.
A new revised Morgan–Keenan classification was published in 1993 by Philip Keenan, defining the classes: C-N, C-R and C-H. Later the classes C-J and C-Hd were added. This constitutes the established classification system used today.
Astrophysical mechanisms
Carbon stars can be explained by more than one astrophysical mechanism. Classical carbon stars are distinguished from non-classical ones on the grounds of mass, with classical carbon stars being the more massive.
In the classical carbon stars, those belonging to the modern spectral types C-R and C-N, the abundance of carbon is thought to be a product of helium fusion, specifically the triple-alpha process within a star, which giants reach near the end of their lives in the asymptotic giant branch (AGB). These fusion products have been brought to the stellar surface by episodes of convection (the so-called third dredge-up) after the carbon and other products were made. Normally this kind of AGB carbon star fuses hydrogen in a hydrogen burning shell, but in episodes separated by 10⁴–10⁵ years, the star transforms to burning helium in a shell, while the hydrogen fusion temporarily ceases. In this phase, the star's luminosity rises, and material from the interior of the star (notably carbon) moves up. Since the luminosity rises, the star expands so that the helium fusion ceases, and the hydrogen shell burning restarts. During these shell helium flashes, the mass loss from the star is significant, and after many shell helium flashes, an AGB star is transformed into a hot white dwarf and its atmosphere becomes material for a planetary nebula.
The non-classical kinds of carbon stars, belonging to the types C-J and C-H, are believed to be binary stars, where one star is observed to be a giant star (or occasionally a red dwarf) and the other a white dwarf. The star presently observed to be a giant star accreted carbon-rich material when it was still a main-sequence star from its companion (that is, the star that is now the white dwarf) when the latter was still a classical carbon star. That phase of stellar evolution is relatively brief, and most such stars ultimately end up as white dwarfs. These systems are now being observed a comparatively long time after the mass transfer event, so the extra carbon observed in the present red giant was not produced within that star. This scenario is also accepted as the origin of the barium stars, which are also characterized as having strong spectral features of carbon molecules and of barium (an s-process element). Sometimes the stars whose excess carbon came from this mass transfer are called "extrinsic" carbon stars to distinguish them from the "intrinsic" AGB stars which produce the carbon internally. Many of these extrinsic carbon stars are not luminous or cool enough to have made their own carbon, which was a puzzle until their binary nature was discovered.
The enigmatic hydrogen-deficient carbon stars (HdC), belonging to the spectral class C-Hd, seem to have some relation to the R Coronae Borealis variables (RCB), but are not variable themselves and lack certain infrared radiation typical of RCBs. Only five HdCs are known, and none is known to be binary, so their relation to the non-classical carbon stars is not known.
Other less convincing theories, such as CNO cycle unbalancing and core helium flash have also been proposed as mechanisms for carbon enrichment in the atmospheres of smaller carbon stars.
Other characteristics
Most classical carbon stars are variable stars of the long period variable types.
Observing carbon stars
Due to the insensitivity of night vision to red light and the slow adaptation of the red-sensitive eye rods to starlight, astronomers making magnitude estimates of red variable stars, especially carbon stars, have to know how to deal with the Purkinje effect in order not to underestimate the magnitude of the observed star.
Generation of interstellar dust
Owing to its low surface gravity, as much as half (or more) of the total mass of a carbon star may be lost by way of powerful stellar winds. The star's remnants, carbon-rich "dust" similar to graphite, therefore become part of the interstellar dust. This dust is believed to be a significant factor in providing the raw materials for the creation of subsequent generations of stars and their planetary systems. The material surrounding a carbon star may blanket it to the extent that the dust absorbs all visible light.
Silicon carbide outflow from carbon stars was accreted in the early solar nebula and survived in the matrices of relatively unaltered chondritic meteorites. This allows for direct isotopic analysis of the circumstellar environment of 1-3 M☉ carbon stars. Stellar outflow from carbon stars is the source of the majority of presolar silicon carbide found in meteorites.
Other classifications
Other types of carbon stars include:
CCS – Cool Carbon Star
CEMP – Carbon-Enhanced Metal-Poor
CEMP-no – Carbon-Enhanced Metal-Poor star with no enhancement of elements produced by the r-process or s-process nucleosynthesis
CEMP-r – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by r-process nucleosynthesis
CEMP-s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by s-process nucleosynthesis
CEMP-r/s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by both r-process and s-process nucleosynthesis
CGCS – Cool Galactic Carbon Star
Use as standard candles
Classical carbon stars are very luminous, especially in the near-infrared, so they can be detected in nearby galaxies. Because of the strong absorption features in their spectra, carbon stars are redder in the near-infrared than oxygen-rich stars are, and they can be identified by their photometric colors. While individual carbon stars do not all have the same luminosity, a large sample of carbon stars will have a luminosity probability density function (PDF) with nearly the same median value, in similar galaxies. So the median value of that function can be used as a standard candle for the determination of the distance to a galaxy. The shape of the PDF may vary depending upon the average metallicity of the AGB stars within a galaxy, so it is important to calibrate this distance indicator using several nearby galaxies for which the distances are known through other means.
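Once the median absolute magnitude of carbon stars has been calibrated, the distance determination reduces to a standard distance-modulus calculation. The sketch below only illustrates that arithmetic; the calibration value and the sample of apparent magnitudes are hypothetical placeholders, not published measurements.

```python
import statistics

def carbon_star_distance_mpc(apparent_j_mags, median_absolute_j=-6.2):
    """Estimate a galaxy's distance from the median apparent J-band
    magnitude of its carbon stars, assuming a calibrated median absolute
    magnitude (the default value here is a placeholder, not a real calibration).

    Returns the distance in megaparsecs.
    """
    m_median = statistics.median(apparent_j_mags)   # median apparent magnitude
    mu = m_median - median_absolute_j               # distance modulus mu = m - M
    distance_pc = 10 ** ((mu + 5) / 5)              # mu = 5*log10(d_pc) - 5
    return distance_pc / 1e6

# Hypothetical sample of carbon-star J magnitudes observed in a nearby galaxy
sample = [18.3, 18.5, 18.4, 18.6, 18.2, 18.7, 18.4]
print(f"Estimated distance: {carbon_star_distance_mpc(sample):.2f} Mpc")
```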
See also
S-type star, similar, but not as extreme
Technetium star, another type of chemically peculiar star
Marc Aaronson, American astronomer and researcher of carbon stars
La Superba, one of the more well known carbon stars
LL Pegasi, which has so much soot in it that it has created a spiral trail of smoke extending light years into space
References
External links
List of 110 carbon stars. Includes HD number; secondary identification for most; position in right ascension and declination; magnitude; spectrum; magnitude range (for variable stars); period (of variability cycle).
Star types | Carbon star | Astronomy | 2,225 |
28,367,322 | https://en.wikipedia.org/wiki/N-flake | An n-flake, polyflake, or Sierpinski n-gon, is a fractal constructed starting from an n-gon. This n-gon is replaced by a flake of smaller n-gons, such that the scaled polygons are placed at the vertices, and sometimes in the center. This process is repeated recursively to result in the fractal. Typically, there is also the restriction that the n-gons must touch yet not overlap.
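One common way to realize this recursive replacement numerically is to center each scaled copy on the line from the parent polygon's center to one of its vertices, so that the copies touch at the parent's vertices, and then recurse. The sketch below generates the centers of the sub-polygons at a given depth; the function name, the unit circumradius, and the choice of scale factor passed in are illustrative assumptions, not part of any standard library.

```python
import math

def nflake_centers(n, scale, depth, center=(0.0, 0.0), radius=1.0, with_center=False):
    """Return the centers of the scaled n-gons after `depth` recursive
    replacements, starting from an n-gon of the given circumradius."""
    if depth == 0:
        return [center]
    points = []
    cx, cy = center
    sub_radius = radius * scale
    # One scaled copy toward each vertex of the current n-gon; its outer vertex
    # coincides with the parent's vertex, so copies touch but do not overlap.
    for k in range(n):
        angle = 2 * math.pi * k / n
        vx = cx + (radius - sub_radius) * math.cos(angle)
        vy = cy + (radius - sub_radius) * math.sin(angle)
        points += nflake_centers(n, scale, depth - 1, (vx, vy), sub_radius, with_center)
    if with_center:  # some variants also keep a copy in the middle
        points += nflake_centers(n, scale, depth - 1, center, sub_radius, with_center)
    return points

# Sierpinski triangle: 3 copies scaled by 1/2, three levels deep -> 27 sub-triangles
print(len(nflake_centers(3, 0.5, 3)))
```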
In two dimensions
The most common variety of n-flake is two-dimensional (in terms of its topological dimension) and is formed of polygons. The four most common special cases are formed with triangles, squares, pentagons, and hexagons, but it can be extended to any polygon. Its boundary is the von Koch curve of varying types – depending on the n-gon – and infinitely many Koch curves are contained within. The fractals occupy zero area yet have an infinite perimeter.
The formula for the scale factor r of any n-flake is:

$$ r = \frac{1}{2\left(1 + \sum_{k=1}^{\lfloor n/4 \rfloor} \cos\frac{2\pi k}{n}\right)} $$

where the cosine is evaluated in radians and n is the number of sides of the n-gon. The Hausdorff dimension of an n-flake is $\frac{\log m}{\log(1/r)}$, where m is the number of polygons in each individual flake and r is the scale factor.
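Assuming the scale-factor formula above, both the scale factor and the Hausdorff dimension can be computed directly. The following sketch reproduces the values quoted below for the triangle, pentaflake, and hexaflake cases; the function names are illustrative.

```python
import math

def scale_factor(n):
    """Scale factor r for an n-flake (copies at the vertices, touching
    but not overlapping), per the formula above."""
    s = sum(math.cos(2 * math.pi * k / n) for k in range(1, n // 4 + 1))
    return 1.0 / (2.0 * (1.0 + s))

def hausdorff_dimension(m, r):
    """Hausdorff dimension of a self-similar set of m copies scaled by r."""
    return math.log(m) / math.log(1.0 / r)

for n, m in [(3, 3), (5, 6), (6, 7)]:   # triangle, pentaflake, hexaflake (with center copy)
    r = scale_factor(n)
    print(n, round(r, 4), round(hausdorff_dimension(m, r), 4))
# Expected dimensions: 3 -> 1.585, 5 -> 1.8617, 6 -> 1.7712
```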
Sierpinski triangle
The Sierpinski triangle is an n-flake formed by successive flakes of three triangles. Each flake is formed by placing triangles scaled by 1/2 in each corner of the triangle they replace. Its Hausdorff dimension is equal to log 3/log 2 ≈ 1.585. This value is obtained because each iteration has 3 triangles that are scaled by 1/2.
Vicsek fractal
If a Sierpinski 4-gon were constructed from the given definition, the scale factor would be 1/2 and the fractal would simply be a square. A more interesting alternative, the Vicsek fractal, rarely called a quadraflake, is formed by successive flakes of five squares scaled by 1/3. Each flake is formed either by placing a scaled square in each corner and one in the center or one on each side of the square and one in the center. Its Hausdorff dimension is equal to log 5/log 3 ≈ 1.4650. This value is obtained because each iteration has 5 squares that are scaled by 1/3. The boundary of the Vicsek fractal is a Type 1 quadratic Koch curve.
Pentaflake
A pentaflake, or sierpinski pentagon, is formed by successive flakes of six regular pentagons.
Each flake is formed by placing a pentagon in each corner and one in the center. Its Hausdorff dimension is equal to log 6/log(1 + φ) ≈ 1.8617, where φ = (1 + √5)/2 is the golden ratio. This value is obtained because each iteration has 6 pentagons that are scaled by 1/(1 + φ). The boundary of a pentaflake is the Koch curve of 72 degrees.
There is also a variation of the pentaflake that has no central pentagon. Its Hausdorff dimension equals log 5/log(1 + φ) ≈ 1.6723. This variation still contains infinitely many Koch curves, but they are somewhat more visible.
Concentric patterns of pentaflake boundary shaped tiles can cover the plane, with the central point being covered by a third shape formed of segments of 72-degree Koch curve, also with 5-fold rotational and reflective symmetry.
Hexaflake
A hexaflake is formed by successive flakes of seven regular hexagons. Each flake is formed by placing a scaled hexagon in each corner and one in the center. Each iteration has 7 hexagons that are scaled by 1/3. Therefore the hexaflake has 7^(n−1) hexagons in its nth iteration, and its Hausdorff dimension is equal to log 7/log 3 ≈ 1.7712. The boundary of a hexaflake is the standard Koch curve of 60 degrees and infinitely many Koch snowflakes are contained within. Also, the projection of the Cantor cube onto the plane orthogonal to its main diagonal is a hexaflake.
The hexaflake has been applied in the design of antennas and optical fibers.
Like the pentaflake, there is also a variation of the hexaflake, called the Sierpinski hexagon, that has no central hexagon. Its Hausdorff dimension equals log 6/log 3 ≈ 1.6309. This variation still contains infinitely many Koch curves of 60 degrees.
Polyflake
n-flakes of higher polygons also exist, though they are less common and usually do not have a central polygon. (If a central polygon is included, the scale factor differs between odd and even values of n.) Some examples are shown below: the 7-flake through 12-flake. While it may not be obvious, these higher polyflakes still contain infinitely many Koch curves, but the angle of the Koch curves decreases as n increases. Their Hausdorff dimensions are slightly more difficult to calculate than those of lower n-flakes because their scale factor is less obvious. However, the Hausdorff dimension is always less than two but no less than one. An interesting n-flake is the ∞-flake, because as the value of n increases, an n-flake's Hausdorff dimension approaches 1.
In three dimensions
n-flakes can be generalized to higher dimensions, in particular to a topological dimension of three. Instead of polygons, regular polyhedra are iteratively replaced. However, while there are an infinite number of regular polygons, there are only five regular, convex polyhedra. Because of this, three-dimensional n-flakes are also called platonic solid fractals. In three dimensions, the fractals' volume is zero.
Sierpinski tetrahedron
A Sierpinski tetrahedron is formed by successive flakes of four regular tetrahedra. Each flake is formed by placing a tetrahedron scaled by 1/2 in each corner. Its Hausdorff dimension is equal to log 4/log 2, which is exactly equal to 2. On every face there is a Sierpinski triangle and infinitely many are contained within.
Hexahedron flake
A hexahedron, or cube, flake defined in the same way as the Sierpinski tetrahedron is simply a cube and is not interesting as a fractal. However, there are two pleasing alternatives. One is the Menger sponge, where every cube is replaced by a three-dimensional ring of cubes. Its Hausdorff dimension is log 20/log 3 ≈ 2.7268.
Another hexahedron flake can be produced in a manner similar to the Vicsek fractal extended to three dimensions. Every cube is divided into 27 smaller cubes and the center cross is retained, which is the opposite of the Menger sponge, where the cross is removed. However, it is not the Menger sponge complement. Its Hausdorff dimension is log 7/log 3 ≈ 1.7712, because a cross of 7 cubes, each scaled by 1/3, replaces each cube.
Octahedron flake
An octahedron flake, or Sierpinski octahedron, is formed by successive flakes of six regular octahedra. Each flake is formed by placing an octahedron scaled by 1/2 in each corner. Its Hausdorff dimension is equal to log 6/log 2 ≈ 2.5849. On every face there is a Sierpinski triangle and infinitely many are contained within.
Dodecahedron flake
A dodecahedron flake, or Sierpinski dodecahedron, is formed by successive flakes of twenty regular dodecahedra. Each flake is formed by placing a dodecahedron scaled by 1/(2 + φ) in each corner. Its Hausdorff dimension is equal to log 20/log(2 + φ) ≈ 2.3296.
Icosahedron flake
An icosahedron flake, or Sierpinski icosahedron, is formed by successive flakes of twelve regular icosahedra. Each flake is formed by placing an icosahedron scaled by 1/(1 + φ) in each corner. Its Hausdorff dimension is equal to log 12/log(1 + φ) ≈ 2.5819.
See also
List of fractals by Hausdorff dimension
References
External links
Quadraflakes, Pentaflakes, Hexaflakes and more – includes Mathematica code to generate these fractals
Javascript for covering the plane with 5-fold symmetric Pentaflake tiles.
Fractals
Fractal curves | N-flake | Mathematics | 1,772 |
40,333,670 | https://en.wikipedia.org/wiki/3-Hydroxyoctanoic%20acid | 3-Hydroxyoctanoic acid is a beta-hydroxy acid that is naturally produced in humans, other animals, and plants.
3-Hydroxyoctanoic acid is the primary endogenous agonist of hydroxycarboxylic acid receptor 3 (HCA3), a G protein-coupled receptor encoded by the human gene HCAR3. In plants, it is a signalling chemical emitted by the orchid Cymbidium floribundum and recognized by Japanese honeybees (Apis cerana japonica).
References
Fatty acids
Beta hydroxy acids
Cymbidium | 3-Hydroxyoctanoic acid | Chemistry | 123 |
37,811,633 | https://en.wikipedia.org/wiki/Sky%20City%20%28Changsha%29 | Sky City (), or Sky City One, was a planned skyscraper in the city of Changsha, Hunan, in south-central China. The prospective builder, Broad Sustainable Building, estimated it would take just 90 days to construct. Including the 120 days required for prefabrication before on-site work commenced, the total time needed was 210 days. Pre-construction activities were halted in August 2013 after government regulators required additional approvals.
Broad Sustainable Building had intended to build a skyscraper, but the local government wanted the world's tallest building, hence the current plans. The company has constructed 20 buildings in China using the same method and has several franchise partners globally. It has a planned helipad at a height of above ground.
Plans for the building's construction are stalled, and the foundations for the planned building are being used as a fish farm.
On 8 June 2016, it was reported by the People's Daily that the project had been dropped due to protests over environmental damage to the Daze Lake wetland. The People's Daily said, "Central China's Hunan province finally announced a halt on its ambitious plan to build the world's tallest tower within one part of its rare wetland area. The Daze Lake wetland is the location where the world's next tallest tower was originally scheduled to be built. This wetland is now listed as one of the 20 waters to be permanently protected and will follow non-construction zone policies. It is a pristine wetland hailed as the last wetland in Changsha where many rare bird species take habitat. Shelter of these birds would be largely disrupted by the building of this tower." However Broad Group declared: "We will definitely build the tower" in 2016.
Building layout
Had it been built, Sky City would have been the tallest building in the world, with 202 floors, until the Jeddah Tower in Jeddah is complete. The construction plan calls for it to be built from pre-fabricated units constructed on site in an unprecedentedly short period of 90 days. BSB's plan is to assemble 95% of the building in its factory before any excavation takes place at the construction site. The fabrication process is due to take around six months before the actual construction begins.
According to the plan, the building's 202 stories will have a hotel accommodating 1000 guests, a hospital, 5 schools, and offices. Of the total space available, nearly 83% will be for residential purposes, housing up to 17,000 people. Another 5% will be for the hotel housing 1000, while 3% each will be dedicated to schools, hospitals, offices and shops. There will be 10 fire escape routes, which will evacuate a given floor within 15 minutes; the building will be fire-resistant for up to three hours. It will also have 17 helipads. Sport facilities will include six basketball courts and 10 tennis courts. Plans include preserving some green space around the building.
For transportation there will be 104 high-speed elevators installed. The safety of these potential elevators has been questioned because they take several minutes to get from bottom to top. The 5000 residential properties will be able to accommodate 17,400 residents. The proposed building will have total floor space of 1.2 million m2 (13 million sq ft). The main building will have 1.05 million m2 (11 million sq ft) of this area, with a basement of and a 3 to 7 floor-high annex of . The total capacity of the building will be about 30,000.
The four-layered glass used for the building's windows will keep the temperature of the building constant between . The air indoors will be filtered to be up to 20 times cleaner than the air outside. The lamps used in the building will be made of LEDs. The builders have claimed to be working with some of the same architects who designed the Burj Khalifa, such as Adrian Smith.
Structural features
The project is planned to consume 270,000 tons of steel. For its assembly factory, sustainable building technology and independent research and development will be required. The main advantages of the building will be its earthquake resistance, energy saving, cleanliness, durability, and materials, which consist of recycled building materials, non-aldehyde / non-lead / non-asbestos building materials, etc. The technology at the core of the whole steel structure is modular construction. The building would have of insulated walls and quadruple glazing, contributing greatly to energy efficiency.
Although structural details are not available, outside architects have expressed doubts that a modular design would have the stiffness on lower floors to withstand the wind loads imposed by such a height, without unacceptable amounts of sway, or that the building could be built without high-strength concrete, whose curing time would preclude such a rapid construction rate. Most of the construction and production of materials will take place on site so as to benefit the local economy of Changsha.
Cost
As currently planned, Sky City would cost RMB 9 billion ($1.46 billion) to build. A cost estimate of $1,500 per square meter of floor space would make Sky City considerably cheaper than the similarly tall Burj Khalifa ($4,500 per square meter). Broad Sustainable Group has purchased of land, for a cost of 390 million yuan ($63 million).
Schedule
The start date for the project has been moved back several times due to delays in getting government approval to start construction, and amidst multiple conflicting announcements about whether the project has received final approvals. In October 2012, the group announced that they had received approval from the local government and that construction would begin in November, but announced a new build time of 210 days. This meant completion of the project in June 2013. On November 16, 2012, Juliet Jiang, senior vice president of Broad Group, said in an interview that the company would adhere to its previous time table of building five floors a day and completing the building in a 90-day time frame. She also said the building was still waiting for government approval. Later that month the company announced January 2013 as the start for the construction.
The final architectural renderings had been completed and the project was to be approved by the central government in early December 2012. Construction was scheduled to begin at some point in 2013 and plans still called for the 90 day timetable to complete.
On 14 May 2013, TreeHugger reported that the project had received governmental approval and was set to break ground in June 2013.
On 17 June 2013, Broad Sustainable Building Group Chairman Zhang Yue said that construction would commence in August 2013 with the first four months spent for prefabrication and the next three months for installation onsite. The expected date of completion was March 2014.
On 20 July 2013, pictures of the groundbreaking ceremony, at which Zhang Yue arrived by helicopter along with several dignitaries, started to circulate on Chinese websites and SkyscraperCity. These reports suggest that China State Construction Engineering is the main contractor, that the building was expected to be complete by April, and that it would open in May or June 2014.
On 25 July 2013, it was reported that construction was halted by the authorities because the building did not receive adequate permission. On August 14, 2013, China Daily USA reported that the actual onsite assembly won't start until April 2014, according to Wang Shuguang, general manager of Broad Group's US operation. On September 4, 2013, China.com.cn reported that the project had begun environmental assessment, the first of a number of stages in obtaining official planning approval for the project. By October 30, the building was in the final approval phases, according to Broad Group.
There appear to be no reliable reports that even offsite prefabrication has started. The company has recently confirmed that it is still actively seeking to develop the tower, pending government approvals.
In February 2015, the developer attracted attention when Mini Sky City, also located in Changsha, topped out. The 204-metre tower was built in two bursts: the first twenty floors went up in a week before work was halted by red tape, and the second phase of thirty-seven storeys was completed in twelve working days. The developer has reiterated his determination to build the full-size Sky City and stated that construction would start by early 2016; however, buildings of 350 m need to be approved at the national level in Beijing, so the official start date is unclear.
In July 2015, it was reported that no work had been done on the site for more than two years. The excavated foundation pits were being used by local villagers as a fish farm. The developer stated that there was no further news on securing approval for the building.
Furthermore, the project was scrapped in June 2016 due to environmental concerns. The no-construction zone will span a total of 199.5 square kilometers.
In 2017, Sky City renderings showed that a different site had been chosen near a waterfront area.
Criticism
The project faces a great deal of skepticism due to the nature of its claims, and doubts have been expressed about Broad Group's ability to complete such a project in such a short time. There is also speculation that it is a marketing ploy rather than an actual project. The head of structures for WSP Middle East (the company behind The Shard in London), Bart Leclercq, jokingly said that he would give up structural engineering if the project were completed on time. Even so, he and others have commended the project for its use of innovative prefabrication techniques.
Doubts were cast by critics over the ability of the Broad Group to achieve such a grand scale project, such as one from 2012 which mentioned Broad's construction of two buildings only, neither of which reached over 30 floors. A tower of such height requires stiffness, typically requiring enormous amounts of concrete and steel and a lengthy period of time for curing concrete. The ability of the engineers to understand the complexity of a project this size has also come into question. (The later Mini Sky City is 57 stories, and took 19 days of actual build time, see above.)
Another factor that may have been overlooked in the designs, according to engineers, is the wind factor. A building with such a height and shape will need to deal with large horizontal forces, but a wind strategy seems to be absent from the Broad Group's blueprint on the tower. Because of a lack of stiffness, winds would generate a huge draft around the building and would cause it to sway, making it potentially unstable.
Other concerns by critics have related to the tower's ability to deal with emergencies. In case of a fire, there might not be means to douse it or to evacuate people as fire rescue crews are generally not equipped to reach such heights. Also, the elevators might be too slow to transport people who need emergency treatment to the hospital in time (though the building is planned to include a hospital). Finally, the structure may also cause subsidence in the local soil.
While Dr. Sang Dae Kim, chairman of the Council on Tall Buildings and Urban Habitat, has claimed that towers up to 2 kilometers are possible with modern technology, he has said that the floor to elevator ratio at that height would be impractical.
References
External links
Sky City Videos
BBC News: Chairman Zhang's flatpack skyscrapers
Skyrise Cities: http://skyrisecities.com/news/2016/06/environmental-concerns-halt-plans-worlds-tallest-building
Unbuilt buildings and structures
Unbuilt skyscrapers
Skyscrapers in Changsha
Prefabricated buildings
Proposed arcologies | Sky City (Changsha) | Technology,Engineering | 2,353 |
32,960,675 | https://en.wikipedia.org/wiki/Laser-assisted%20water%20condensation | Laser-assisted water condensation is an experimental technique for artificially causing rainfall. This technique was developed in 2011 by scientists from the University of Geneva. It is related to cloud seeding.
The technique works by using laser pulses to create nitric acid particles in the clouds, which cause condensation.
References
Weather modification | Laser-assisted water condensation | Engineering | 64 |
6,252,791 | https://en.wikipedia.org/wiki/Marconi%20Plaza | Marconi Plaza is an urban park square located in South Philadelphia, Philadelphia, Pennsylvania, United States. The plaza was named to recognize the 20th-century cultural identity of the surrounding Italian American enclave neighborhood in Philadelphia, and it became the designated location of the annual Columbus Day Parade.
Marconi Plaza has two main halves, east and west, which are divided in the middle by Broad Street. It is located at the most southern end of the city and within the northern border of the Sports Complex Special Services District and the southern border of Lower Moyamensing. The park plaza is accessible via the Oregon Avenue station of the Broad Street subway.
Boundaries of the Marconi Plaza neighborhood:
The urban park plaza itself, from which the neighborhood derives its name (Marconi East and "Marco" Marconi West), is a rectangular park. The Roman-styled plaza is divided in the center by Broad Street and is bordered by 13th Street, 15th Street, Bigler Street, and Oregon Avenue.
History
The plaza design is credited to the strong influence of renowned architect Paul Philippe Cret in 1904 as part of his participation in the Art Jury reviewing the preliminary plans presented by landscape architects the Olmsted Brothers, who were then charged with a modified design to complete the work.
The Plaza later served as the grand pre-entrance for the 1926 Sesquicentennial Exposition, leading visitors south along a tree-lined Southern Boulevard Parkway (a landscaped segment of South Broad Street) to the exhibition grounds that started at Packer Avenue and continued to League Island Park. This neighborhood twin park is mirrored on both sides of Broad Street and became property of the Fairmount Park system. It held the common name of Oregon Plaza until October 18, 1937, when it was officially named Marconi Plaza in honor of the Nobel Prize laureate Guglielmo Marconi, the inventor of radio.
The F. Amadee Bregy School was added to the National Register of Historic Places in 1988.
Architecture
The original design of the Plaza was a two level terrace with pathways, marble trims, urns, influenced by landscaped architecture modeling after Roman gardens and English gardens. The east and west plaza reflected the same winding pathways, leading to a raised stepped terrace surrounded by stone railings and entrance sculptures of large urns, with two small "reflecting" pools of water facing Broad Street at the center point, which at that time was cut away from the curbline, forming half circles open to traffic on both the east and west. This accent was used in 1926 to position a large Liberty Bell at the center of the street, permitting traffic to circle around.
Over the years, many of the fine details have been erased, including the half circled indented curbline on either side of Broad Street at the center. This location also had, on both sides of the plaza, two reflecting pools of water. The pools were filled in to provide the foundation for the two statues that were later erected to support the cultural history of the immigrant Italian community and respond to Anti-Italianism.
The park currently has about 25% tree cover and is furnished with park benches, open areas for two tot lots, a baseball field, a basketball court, and a country cottage style enclosed bocce court. The sidewalk border surrounding the park is densely lined with large maple trees 30–50 feet tall.
Public art
A bronze statue of Guglielmo Marconi, sculpted by Saleppichi Giancarlo, was erected on the east plaza in 1975 through the efforts of the Italo-American community, organized as the "Marconi Memorial Association" and headed by Dr. Frank P. DiDio. The statue was dedicated on April 25, 1980, to commemorate the 106th anniversary of the birthday of the world-famous Italian scientist and inventor.
A marble statue of Christopher Columbus was erected on the west plaza in 1976. This work was originally located along Belmont Avenue in Fairmount Park, having been unveiled on October 12, 1876, for Philadelphia's Centennial Exposition. Thought to be the work of Emanuele Caroni, it is said to be the first publicly funded monument to Christopher Columbus in the United States. It was purchased for $18,000 with money raised by Italian-Americans and the Columbus Monument Association, through the efforts of Alonzo Viti of Philadelphia and his brothers. The statue's initial installation began an annual tradition for the colony of mostly Italian Americans in South Philadelphia to march each year on Columbus Day to the statue in Fairmount Park. The journey was found to be too exhausting and in 1920 the celebration changed locations.
Controversy surrounding Christopher Columbus statue
2018
The words "Italian-Americans against racism" were painted on the pavement in front of the statue as part of a series of protest events on Columbus Day.
2020
During the aftermath of the George Floyd protests and greater Black Lives Matter (BLM) movement in June, statues depicting Christopher Columbus as well as other historical figures had become a target for vandalism and city sanctioned removal nationwide. Some members of the Italian-American community of South Philadelphia assembled in Marconi Plaza, believing that the Italian immigrant-created Columbus statue would be destroyed; some of them were armed with weapons and surrounded the statue. However, in the wake of the George Floyd protests, outsider far-right counterprotesters who were not from the neighborhood began to show up. Jim Kenney released a statement on twitter; city officials have since declared that the statue will remain on site for the time being and on June 17 city workers boarded it up with a wooden box to protect it. On June 15 conservative WPHT radio talk show host Dom Giordano interviewed a South Philadelphia resident who defended the statue in a segment called "The 'Gravy Seals' Speak Out". When questioned as to why the man would defend the statue, he is quoted as saying "...it's more than just a statue, that statue was donated from the community and was paid for by the community. So it represents Italian heritage, even though the history may be blemished on Columbus himself. It's still recognized as an Italian heritage symbol, so we feel like we're being attacked. Because you know - they took down Rizzo, they took down the mural, now they're gonna take down this and they're probably going to stop the parade..." On June 23 a second wave of violence broke out in the otherwise quiet Marconi plaza, when a group of around 50 protesters met a group of around 100 counter-protesters. The latter group was heard chanting "U.S.A" before a brawl ensued and a man from each side was detained.
On June 24 it was announced the city would request permission from the Philadelphia Art Commission to remove the statue, with public feedback collected online and an official hearing set for July 22. On August 12, the Philadelphia Art Commission issued an order to remove the statue from Marconi Plaza and to place it in temporary storage. This followed an endorsement of a city proposal, two weeks prior, by the Philadelphia Historical Commission, to remove the statue, citing public safety and susceptibility of damage to the statue as a result of the George Floyd protests.
Marconi East: Residential
Mollbore Terraces of Marconi: The 1930s Mollbore Terrace was a unique urban change from the densely lined row houses that characterized most of South Philadelphia. The design included front porches and a rear yard with an access service roadway for trash pick-up. Three separate Mollbore Terrace sections were constructed east of the plaza within the boundaries of 13th Street to 7th Street, and from Oregon Ave to Johnston Street. The layout departed from the standard street grid, offsetting the numbered streets that permitted placing a "mini-public-square" of green space for houses to face inward on all four sides and directions. The center large rectangular common parks space was originally designated as a "Terrace" that included pathways, grass and trees with an octagon-shaped wading pool at the west end and a raised octagon sand pit platform with a flag pole at the east end.
Marconi West: Residential
Roman Terraces of Marconi: The Greco-Roman–accented homes west of the plaza from 15th to 19th street, using the same concept but on a smaller scale, include two oval-shaped terrace streets at Smedley and Colorado. The terrace at Colorado Street became well known citywide for its annual decorations and street lighting during the Christmas holidays from 1950 to 2000.
Moyamensing Avenue Parkway of Marconi: This main angular dual street, with an approximately 50-foot landscaped center median and tree-lined roadway, crosses the standard street grid and was designed as an alternative roadway access to the 1926 Sesquicentennial Exposition. It begins at Oregon Avenue, which once held a headhouse entrance for the 1926 Expo, and runs through to the intersection of 20th Street, Penrose Avenue and Packer Avenue. An architectural design for a grand public square like the squares of Center City Philadelphia (inspired by the Benjamin Franklin Parkway) was planned at the parkway's end point of Penrose Avenue, which was viewed by city planners as the significant southern gateway to the city. The 1926 square was never developed.
Boundaries
In 2002, the City of Philadelphia legislated boundaries of the Sports Complex Special Service District. The residential communities defined included Marconi Plaza. The Special District established an overlay providing the basis for a new definition to Marconi East as community 3 and Marconi West as community 4.
See also
List of parks in Philadelphia
References
University of Pennsylvania Library, South Philadelphia Neighborhoods - See Map: Neighborhoods south of Passyunk Avenue and Mifflin Street
External links
Sports Complex Special District
Friends of Marconi Plaza
Sports Complex Community Boundaries
Statues of Marconi and Columbus
Philly History 1906-1926 Plaza and landscaping South of Oregon Avenue concept design by Olmsted Brothers
Neighborhoods in Philadelphia
Little Italys in the United States
Italian-American culture in Pennsylvania
Italian-American culture in Philadelphia
Landscape architecture
Landscape
Cultural landscapes
South Philadelphia
Municipal parks in Philadelphia
Cultural depictions of Christopher Columbus
Cultural depictions of Guglielmo Marconi | Marconi Plaza | Engineering | 2,038 |
15,317,831 | https://en.wikipedia.org/wiki/Algebraic%20reconstruction%20technique | The algebraic reconstruction technique (ART) is an iterative reconstruction technique used in computed tomography. It reconstructs an image from a series of angular projections (a sinogram). Gordon, Bender and Herman first showed its use in image reconstruction; whereas the method is known as Kaczmarz method in numerical linear algebra.
An advantage of ART over other reconstruction methods (such as filtered backprojection) is that it is relatively easy to incorporate prior knowledge into the reconstruction process.
ART can be considered as an iterative solver of a system of linear equations $Ax = b$, where:
$A$ is a sparse $m \times n$ matrix whose values represent the relative contribution of each output pixel to different points in the sinogram ($m$ being the number of individual values in the sinogram, and $n$ being the number of output pixels);
$x$ represents the pixels in the generated (output) image, arranged as a vector; and
$b$ is a vector representing the sinogram. Each projection (row) in the sinogram is made up of a number of discrete values, arranged along the transverse axis. $b$ is made up of all of these values, from each of the individual projections.
Given a real or complex $m \times n$ matrix $A$ and a real or complex vector $b$, respectively, the method computes an approximation of the solution of the linear system of equations $Ax = b$ as in the following formula,

$$ x^{k+1} = x^{k} + \lambda_k \, \frac{b_i - \langle a_i, x^{k} \rangle}{\lVert a_i \rVert^2} \, a_i $$

where $i = (k \bmod m) + 1$, $a_i$ is the i-th row of the matrix $A$, and $b_i$ is the i-th component of the vector $b$.
$\lambda_k$ is an optional relaxation parameter, typically in the range $0 < \lambda_k \leq 1$. The relaxation parameter is used to slow the convergence of the system. This increases computation time, but can improve the signal-to-noise ratio of the output. In some implementations, the value of $\lambda_k$ is reduced with each successive iteration.
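A minimal dense-matrix sketch of the update rule above (essentially the Kaczmarz iteration) is shown below. The function name, the tiny example system, and the fixed sweep count are illustrative assumptions; a production CT reconstruction would use a sparse system matrix, a real projection geometry, and proper stopping criteria.

```python
import numpy as np

def art(A, b, iterations=10, lam=1.0, x0=None):
    """Algebraic reconstruction technique (Kaczmarz-style sweeps).

    A   : (m, n) system matrix (row i = contribution of each pixel to ray i)
    b   : (m,) measured sinogram values
    lam : relaxation parameter, typically 0 < lam <= 1
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = np.einsum('ij,ij->i', A, A)       # ||a_i||^2 for each row
    for _ in range(iterations):                    # full sweeps over all rays
        for i in range(m):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x             # b_i - <a_i, x>
            x += lam * (residual / row_norms[i]) * A[i]
    return x

# Tiny illustrative system (not a real CT geometry)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
print(art(A, A @ x_true, iterations=50))           # converges toward x_true
```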
A further development of the ART algorithm is the simultaneous algebraic reconstruction technique (SART) algorithm.
References
Medical imaging
Radiography | Algebraic reconstruction technique | Technology | 367 |
2,358,470 | https://en.wikipedia.org/wiki/Moxi%20%28DVR%29 | Moxi was a line of high-definition digital video recorders produced by Moxi Digital, Digeo, and Arris International. Moxi was originally released only to cable operators, but in December 2008 it was released as a retail product. Moxi was removed from the market in November 2011. The former retail product, the Moxi HD DVR, provided a high-definition user interface with support for either two or three CableCARD TV tuners. Arris also offered a companion appliance, the Moxi Mate, which could stream live or recorded TV from a Moxi HD DVR.
History
Digeo was founded in 1999 (originally under the name Broadband Partners, Inc.) by Microsoft co-founder Paul Allen, with headquarters in Kirkland, Washington. In the same year, Rearden Steel was started by Steve Perlman, founder of WebTV, under a veil of secrecy. In 2000, Rearden Steel was renamed to Moxi Digital while unveiling a line of media centers designed to bridge the gap between personal computers and televisions. Digeo, Inc. purchased Moxi Digital in 2002. Digeo kept its own name but adopted Moxi as its product family name. Its Palo Alto offices and most of Moxi Digital's staff were kept. Digeo also adopted most of the Moxi hardware (originally focused on satellite consumer electronics), as well as some of the Linux extensions, which were merged into Digeo's own Linux-based infrastructure and cable-specific hardware with Digeo's Emmy award-winning user interface, known as Moxi Menu.
On September 22, 2009, the assets of Digeo, Inc. were purchased by the Arris International. Arris announced it would continue to develop and market the Moxi product line to both retail customers and cable operators.
Retail DVR products
The Moxi HD DVR was a high-definition digital video recorder (DVR) with both three-tuner and two-tuner models available, though the two-tuner model was produced only briefly before being updated. It was designed for use with cable television and supported multi-stream CableCARDs, as well as channel scanning for unencrypted channels. Multi-room viewing was supported using a small (and less expensive) companion device called a Moxi Mate. The Moxi product line was released to retail in December 2008 after many years of being available only to cable operators. From 2009, Arris offered multi-room packages. Retail sales were suspended early in 2012.
DVR hardware
The Moxi HD DVR was a Broadcom BCM7400-based set-top box designed to work with a Multi-stream CableCARD. Moxi features were added to the Arris Moxi Gateway and Moxi Player, for sale to cable companies only.
The hardware features two or three HD tuners, allowing users to record two or three shows at the same time, depending on the model; 500 GB storage, which equates to 75 hours of 1080 HD recording or 300 hours of SD (480i) recording; and Dolby Digital surround sound.
The Moxi HD DVR was compatible with eSATA external hard drives certified for DVR use. External drives allowed users to extend their hard drive space. Drives up to 6.5TB were supported.
Moxi HD DVRs supported 480i, 480p, 720p, 1080i, and 1080p 24 and 30 Hz TV resolutions.
Moxi Mate hardware
The Moxi Mate was the multi-room extender for the Moxi HD DVR, released in August 2009. It was a set-top box that connected with the Moxi HD DVR over a home network to let users watch TV in other rooms. The Moxi Mate could play media files available from the home network or the Internet using the same interface as the HD DVR.
Awards
Emmy Awards
2005 Advanced Media Technology Emmy
The Moxi Media Center was recognized for Outstanding Achievement in Advanced Media Technology for the Creation of Non-Traditional Programs or Platforms.
See also
Hauppauge MediaMVP
Dreambox
DBox2
Monsoon HAVA
HDHomeRun
Slingbox
TiVo digital video recorders
LocationFree Player
Home theater PC
Telly (home entertainment server)
References
External links
Official Moxi web site
Official Digeo web site
Digital television
Digital video recorders
Entertainment companies of the United States
Interactive television
Mass media companies of the United States | Moxi (DVR) | Technology | 898 |
1,832,436 | https://en.wikipedia.org/wiki/Phase%20synchronization | Phase synchronization is the process by which two or more cyclic signals tend to oscillate with a repeating sequence of relative phase angles.
Phase synchronization is usually applied to two waveforms of the same frequency with identical phase angles in each cycle. However, it can also be applied if there is an integer relationship between the frequencies, such that the cyclic signals share a repeating sequence of phase angles over consecutive cycles. These integer relationships are called Arnold tongues, which follow from bifurcation of the circle map.
One example of phase synchronization of multiple oscillators can be seen in the behavior of Southeast Asian fireflies. At dusk, the flies begin to flash periodically with random phases and a Gaussian distribution of native frequencies. As night falls, the flies, sensitive to one another's behavior, begin to synchronize their flashing. After some time all the fireflies within a given tree (or even a larger area) will begin to flash simultaneously in a burst.
Thinking of the fireflies as biological oscillators, we can define the phase to be 0° during the flash and ±180° exactly halfway until the next flash. Thus, when they begin to flash in unison, they synchronize in phase.
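This mutual adjustment of phases can be illustrated with the Kuramoto model (listed under "See also"), in which each oscillator's phase is pulled toward the mean phase of the population. The sketch below uses arbitrary coupling, frequency, and time-step values; it illustrates the phenomenon of phase synchronization and is not a model of real fireflies.

```python
import numpy as np

def kuramoto(n=100, coupling=1.5, dt=0.01, steps=5000, seed=0):
    """Simulate n coupled phase oscillators and return the final order
    parameter r (r ~ 0: incoherent phases, r ~ 1: phase synchronized)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)          # native frequencies (Gaussian spread)
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        # each oscillator is pulled toward the mean phase psi
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

print(f"order parameter after coupling: {kuramoto():.2f}")   # close to 1 => in sync
```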
One way to keep a local oscillator "phase synchronized" with a remote transmitter uses a phase-locked loop.
See also
Algebraic connectivity
Coherence (physics)
Kuramoto model
Synchronization (alternating current)
References
Sync by S. H. Strogatz (2002).
Synchronization - A universal concept in nonlinear sciences by A. Pikovsky, M. Rosenblum, J. Kurths (2001)
External links
A tutorial on calculating Phase locking and Phase synchronization in Matlab.
Wave mechanics
Synchronization | Phase synchronization | Physics,Engineering | 373 |
229,073 | https://en.wikipedia.org/wiki/Congestion%20pricing | Congestion pricing or congestion charges is a system of surcharging users of public goods that are subject to congestion through excess demand, such as through higher peak charges for use of bus services, electricity, metros, railways, telephones, and road pricing to reduce traffic congestion; airlines and shipping companies may be charged higher fees for slots at airports and through canals at busy times. This pricing strategy regulates demand, making it possible to manage congestion without increasing supply.
According to the economic theory behind congestion pricing, the objective of this policy is to use the price mechanism to cover the social cost of an activity where users otherwise do not pay for the negative externalities they create (such as driving in a congested area during peak demand). By setting a price on an over-consumed product, congestion pricing encourages the redistribution of the demand in space or in time, leading to more efficient outcomes.
Singapore was the first country to introduce congestion pricing on its urban roads, in 1975; the scheme was refined in 1998. Since then, it has been implemented in cities including London, Stockholm, Milan, Gothenburg, and the central business district of Manhattan in New York City. It was also considered in Washington, D.C. and San Francisco prior to the COVID-19 pandemic. Greater awareness of the harms of pollution and emissions of greenhouse gases in the context of climate change has recently created greater interest in congestion pricing.
Implementation of congestion pricing has reduced traffic congestion in urban areas, reduced pollution, reduced asthma, and increased home values, but has also sparked criticism and public discontent. Critics maintain that congestion pricing is not equitable, places an economic burden on neighboring communities, and adversely affects retail businesses and general economic activity.
There is a consensus among economists that congestion pricing in crowded transportation networks, and subsequent use of the proceeds to lower other taxes, makes the average citizen better off. Economists disagree over how to set tolls, how to cover common costs, what to do with any excess revenues, whether and how "losers" from tolling previously free roads should be compensated, and whether to privatize highways.
Description
Congestion pricing is a concept from market economics regarding the use of pricing mechanisms to charge the users of public goods for the negative externalities generated by the peak demand in excess of available supply. Its economic rationale is that, at a price of zero, demand exceeds supply, causing a shortage, and that the shortage should be corrected by charging the equilibrium price rather than shifting it down by increasing the supply. Usually this means increasing prices during certain periods of time or at the places where congestion occurs; or introducing a new usage tax or charge when peak demand exceeds available supply in the case of a tax-funded public good provided free at the point of usage.
According to the economic theory behind congestion pricing, the objective of this policy is the use of the price mechanism to make users more aware of the costs that they impose upon one another when consuming during the peak demand, and that they should pay for the additional congestion they create, thus encouraging the redistribution of the demand in space or in time, or shifting it to the consumption of a substitute public good; for example, switching from private transport to public transport.
This pricing mechanism has been used in several public utilities and public services for setting higher prices during congested periods, as a means to better manage the demand for the service, and whether to avoid expensive new investments just to satisfy peak demand, or because it is not economically or financially feasible to provide additional capacity to the service. Congestion pricing has been widely used by telephone and electric utilities, metros, railways and autobus services, and has been proposed for charging internet access. It also has been extensively studied and advocated by mainstream transport economists for ports, waterways, airports and road pricing, though actual implementation is rather limited due to the controversial issues subject to debate regarding this policy, particularly for urban roads, such as undesirable distribution effects, the disposition of the revenues raised, and the social and political acceptability of the congestion charge.
Congestion pricing is one of a number of alternative demand side (as opposed to supply side) strategies offered by economists to address traffic congestion. Congestion is considered a negative externality by economists. An externality occurs when a transaction causes costs or benefits to a third party, often, although not necessarily, from the use of a public good: for example, if manufacturing or transportation cause air pollution imposing costs on others when making use of public air. Congestion pricing is an efficiency pricing strategy that requires the users to pay more for that public good, thus increasing the welfare gain or net benefit for society.
Nobel laureate William Vickrey is considered by some to be the father of congestion pricing, as he first proposed adding a distance- or time-based fare system for the New York City Subway in 1952. In the road transportation arena these theories were extended by Maurice Allais and by Gabriel Roth, who was instrumental in the first designs and upon whose World Bank recommendation the first system was put in place in Singapore. The idea was also considered by the Smeed Report, published by the British Ministry of Transport in 1964, but its recommendations were rejected by successive British governments.
The transport economics rationale for implementing congestion pricing on roads, described as "one policy response to the problem of congestion", was summarized in testimony to the United States Congress Joint Economic Committee in 2003: "congestion is considered to arise from the mispricing of a good; namely, highway capacity at a specific place and time. The quantity supplied (measured in lane-miles) is less than the quantity demanded at what is essentially a price of zero. If a good or service is provided free of charge, people tend to demand more of it—and use it more wastefully—than they would if they had to pay a price that reflected its cost. Hence, congestion pricing is premised on a basic economic concept: charge a price in order to allocate a scarce resource to its most valuable use, as evidenced by users' willingness to pay for the resource".
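The underlying arithmetic can be shown with a stylized worked example: with a linear demand curve and a travel cost that rises with traffic volume, the unpriced equilibrium occurs where willingness to pay equals the average (private) cost, while the efficient toll equals the gap between marginal social cost and private cost at the optimal volume. All numbers and functional forms below are invented for illustration only.

```python
# Stylized Pigouvian congestion toll: all curves and numbers are illustrative.
def demand_price(q):            # willingness to pay for the q-th trip ($)
    return 10.0 - 0.01 * q

def private_cost(q):            # average cost each driver experiences ($)
    return 2.0 + 0.004 * q

def marginal_social_cost(q):    # d(q * private_cost)/dq = 2 + 0.008*q
    return 2.0 + 0.008 * q

# Unpriced equilibrium: demand_price(q) == private_cost(q)
q_unpriced = (10.0 - 2.0) / (0.01 + 0.004)            # ~571 trips
# Efficient volume: demand_price(q) == marginal_social_cost(q)
q_optimal = (10.0 - 2.0) / (0.01 + 0.008)             # ~444 trips
toll = marginal_social_cost(q_optimal) - private_cost(q_optimal)

# At the optimum, the price drivers face (cost + toll) equals willingness to pay
assert abs(demand_price(q_optimal) - (private_cost(q_optimal) + toll)) < 1e-9

print(f"unpriced volume ~ {q_unpriced:.0f}, efficient volume ~ {q_optimal:.0f}")
print(f"efficient toll ~ ${toll:.2f} per trip")
```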
As applied to traffic, there are technically two types of congestion pricing. Cordon or area pricing defines the boundaries of an affected area -- typically an area of dense travel demand such as a city center -- and charges for personal vehicles to cross its boundaries. Lane or facility pricing charges for access to a single facility, such as a segment of road or bridge. In practice, the term "congestion pricing" is often used to refer to cordon pricing but not facility pricing, as this is a newer idea.
Roads
Practical implementations of road congestion pricing are found almost exclusively in urban areas, because traffic congestion is common in and around city centers. Congestion pricing can be fixed (the same at all times of day and days of the week), variable (set in advance to be higher at typically high-traffic times), or dynamic (varying according to actual conditions).
As congestion pricing has been increasing worldwide, the schemes implemented have been classified into four different types: cordon area around a city center; area wide congestion pricing; city center toll ring; and corridor or single facility congestion pricing.
Cordon area and area wide
Cordon area congestion pricing is a fee or tax paid by users to enter a restricted area, usually within a city center, as part of a demand management strategy to relieve traffic congestion within that area. The economic rationale for this pricing scheme is based on the externalities or social costs of road transport, such as air pollution, noise, traffic accidents, environmental and urban deterioration, and the extra costs and delays imposed by traffic congestion upon other drivers when additional users enter a congested road.
The first implementation of such a scheme was the Singapore Area Licensing Scheme in 1975, introduced together with a comprehensive package of road pricing measures, stringent car ownership rules and improvements in mass transit. Thanks to advances in electronic toll collection, electronic detection and video surveillance technology, collecting congestion fees has become easier. Singapore upgraded its system in 1998, and similar pricing schemes were implemented in Rome in 2001; in London in 2003, with an extension in 2007; and in Stockholm in 2006, first as a seven-month trial and then on a permanent basis. In January 2008 Milan began a one-year trial program called Ecopass, which charged higher-polluting vehicles (those meeting only lower emission standards) and exempted cleaner and alternative-fuel vehicles. The Ecopass program was extended until December 31, 2011, and on January 16, 2012, was replaced by Area C, a trial program that converted the scheme from a pollution charge into a congestion charge. The Gothenburg congestion tax, implemented in January 2013, was modeled after the Stockholm scheme.
Singapore and Stockholm charge a congestion fee every time a user crosses the cordon, while London charges a daily fee for any vehicle driving on a public road within the congestion charge zone, regardless of how many times the user crosses the cordon. Stockholm caps the maximum daily tax, while in Singapore the charge is based on a pay-as-you-use principle, with rates set according to traffic conditions at the pricing points and reviewed quarterly. Through this policy, the Land Transport Authority (LTA) reports that electronic road pricing "has been effective in maintaining an optimal speed range of 45 to 65 km/h for expressways and 20 to 30 km/h for arterial roads".
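A minimal sketch of a periodic rate review consistent with the speed bands quoted above is shown below. The S$0.50 step, the use of a single average speed and the threshold logic are illustrative assumptions, not the LTA's actual review procedure.

```python
# Illustrative quarterly ERP rate review based on the speed bands quoted above.
# The increment size and the use of a single average speed are assumptions.

SPEED_BANDS = {"expressway": (45, 65), "arterial": (20, 30)}  # km/h, from the stated targets

def review_rate(road_type: str, current_rate: float, observed_speed_kmh: float,
                step: float = 0.50) -> float:
    """Return an illustrative next-quarter charge at one pricing point (S$)."""
    low, high = SPEED_BANDS[road_type]
    if observed_speed_kmh < low:      # too congested -> raise the charge
        return current_rate + step
    if observed_speed_kmh > high:     # road under-used -> lower, but never below zero
        return max(0.0, current_rate - step)
    return current_rate               # within the target band -> leave unchanged

# Example: an expressway gantry averaging 40 km/h with a S$2.00 charge
print(review_rate("expressway", 2.00, 40))   # -> 2.5
```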
Singapore
In an effort to improve the pricing mechanism and to introduce real-time variable pricing, Singapore's LTA, together with IBM, ran a pilot from December 2006 to April 2007 with a traffic estimation and prediction tool (TrEPS), which uses historical traffic data and real-time feeds of flow conditions from several sources to predict levels of congestion up to an hour in advance. By accurately estimating prevailing and emerging traffic conditions, this technology is expected to allow variable pricing, together with improved overall traffic management, including advance information to alert drivers about conditions ahead and the prices being charged at that moment.
In 2010 the Land Transport Authority began exploring Global Navigation Satellite System (GNSS) technology as an option for a second-generation ERP. The LTA's objective is to evaluate whether the latest technologies available on the market are accurate and effective enough to be used as a congestion charging tool, especially given Singapore's dense urban environment. Implementation of such a system is not expected in the short term.
London
A proposal by former Mayor of London Ken Livingstone would have resulted in a new pricing structure based on potential CO2 emission rates by October 2008. Livingstone's successor as Mayor of London, Boris Johnson, announced in July 2008 that the new CO2 charging structure would not be implemented. Johnson decided to remove the 2007 Western Extension from the congestion charging zone beginning on January 4, 2011, to increase the basic charge, and to introduce an automated payment system called Congestion Charging Auto Pay (CC Auto Pay), which charges vehicles according to the number of charging days they travel within the charging zone each month, with the drivers of these vehicles paying a reduced daily charge. In November 2012 Transport for London (TfL) presented a proposal to abolish the Greener Vehicle Discount, and the Ultra Low Emission Discount (ULED) went into effect on 1 July 2013, limiting free access to the congestion charge zone to selected vehicles. The scheme has been criticized because, during its first ten years, gross revenue reached about £2.6 billion but only £1.2 billion was invested, meaning that about 54% of gross revenues was spent on operating the system and on administrative expenses.
A new toxicity charge, known as the T-charge, was introduced on 23 October 2017. Older and more polluting cars and vans that do not meet Euro 4 standards have to pay an extra £10 charge within the Congestion Charge Zone (CCZ). On 8 April 2019, the T-charge was superseded by the Ultra Low Emission Zone (ULEZ).
Milan
The Ecopass pollution charge ended on December 31, 2011, and was replaced by the Area C scheme, which went into effect on January 16, 2012, initially as an 18-month pilot program. Area C is a conventional congestion pricing scheme covering the same geographic area as Ecopass. Vehicles entering the charging zone incur a charge regardless of their pollution level, although residents inside the area have 40 free entries per year and pay a discounted charge thereafter. Electric vehicles, public utility vehicles, police and emergency vehicles, buses and taxis are exempt from the charge. Hybrid electric and bi-fuel natural gas vehicles (CNG and LPG) were initially exempted until January 1, 2013, an exemption later extended until December 31, 2016.
The scheme was made permanent in March 2013. All net earnings from Area C are invested to promote sustainable mobility and policies to reduce air pollution, including the redevelopment, protection and development of public transport, "soft mobility" (pedestrians, cycling, Zone 30) and systems to rationalize the distribution of goods.
Stockholm
On 1 January 2016, congestion taxes were increased in the inner-city parts of Stockholm, and the congestion tax was extended to the Essingeleden motorway. This was the first increase in the tax since it was introduced permanently in 2007.
The congestion tax was introduced at the access and exit ramps of two interchanges on Essingeleden in order to reduce traffic jams in peak periods; with shorter traffic jams on Essingeleden, the surrounding roads were also expected to have shorter tailbacks. The transport agencies involved expected traffic on Essingeleden to fall by some 10% in peak hours. One week after the tax began to be charged, traffic on the motorway had decreased by 22% compared with a normal day in mid-December.
The tax increase was implemented not only to improve accessibility and the environment, but also to help develop the infrastructure: the additional funds will help finance the extension of the Stockholm metro. As the Stockholm congestion tax varies by time of day, the largest increase applied to the two busiest rush-hour periods, 7:30 to 8:29 and 16:00 to 17:29, when the charge rose from SEK 20 to SEK 30. The objective was to steer traffic towards other times of the day and towards public transport, and in this way reduce congestion in the inner-city area. The maximum amount levied was also raised to SEK 105 per vehicle per day.
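The sketch below illustrates how such a time-of-day schedule with a daily cap can be evaluated per vehicle. Only the SEK 30 rush-hour rate, the rush-hour windows and the SEK 105 cap are taken from the text; the remaining daytime band and its SEK 20 rate are placeholder assumptions.

```python
from datetime import time

# Illustrative time-of-day schedule loosely modelled on the figures mentioned above.
SCHEDULE = [
    (time(7, 30), time(8, 29), 30),   # morning rush (from the text)
    (time(16, 0), time(17, 29), 30),  # afternoon rush (from the text)
    (time(6, 30), time(18, 29), 20),  # assumed daytime rate outside rush hours
]
DAILY_CAP = 105  # SEK per vehicle per day (from the text)

def charge_for_passage(t: time) -> int:
    for start, end, fee in SCHEDULE:  # first matching band wins
        if start <= t <= end:
            return fee
    return 0                          # nights and other untolled periods

def daily_tax(passages: list) -> int:
    return min(DAILY_CAP, sum(charge_for_passage(t) for t in passages))

# Four rush-hour passages would cost SEK 120 but are capped at SEK 105.
print(daily_tax([time(7, 45), time(16, 30), time(17, 0), time(8, 0)]))
```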
Norway
Several cities in Norway have tolled entrances to the more central urban areas, the first being Bergen in 1986.
Starting with Trondheim in 2010, and later in Kristiansand, Bergen and Oslo, time-differentiated fees were introduced, so that passages during rush hours (in Oslo 06:30 to 09:00 and 15:00 to 17:00) cost more. As of 2020 the price is typically NOK 28 (€2.37) per passage, but driving into Oslo's inner city and out again means passing five toll stations, which costs NOK 126 (€10.66).
New York City
Congestion pricing in New York City was implemented in 2025. Most vehicles entering Manhattan south of 60th Street are charged a fee that varies throughout the day.
Old town centres
Around Europe several relatively small cities, such as Durham, England; Znojmo, Czech Republic; Riga, Latvia; and Valletta, Malta, have implemented congestion pricing to reduce traffic crowding, parking problems and pollution, particularly during the peak tourism season.
Durham introduced charges in October 2002, reducing vehicle traffic by 85% after a year; prior to this 3,000 daily vehicles had shared the streets with 17,000 pedestrians.
Valletta has reduced the number of vehicles entering the city each day from 10,000 to 7,900, freeing up 400 parking places in the center. Car stays of more than eight hours by non-residents have dropped by 60%, but there has been a marked 34% increase in non-residents' cars visiting the city for an hour or less.
Rejected proposals
Hong Kong conducted a pilot test of an electronic congestion pricing system between 1983 and 1985 with positive results. However, public opposition to the policy stalled its permanent implementation.
In 2002 Edinburgh, United Kingdom, initiated an implementation process; a referendum was conducted in 2005, with a majority of 74.4% rejecting the proposal.
Councils from across the West Midlands in the United Kingdom, including Birmingham and Coventry, rejected the idea of imposing congestion pricing schemes on the area in 2008, despite promises from central government of transport project funding in exchange for the implementation of a road pricing pilot scheme.
In 2007, New York City shelved a proposal for a three-year pilot program for implementation in Manhattan, and a new proposal was denied in 2008, with the potential federal grants being reallocated to other American cities.
Greater Manchester, United Kingdom, considered a scheme with two cordons, one covering the main urban core of the Greater Manchester Urban Area and another covering Manchester city centre. The measure was supported by the government, but three local authorities (Bury, Trafford and Stockport) rejected it, and the support of two-thirds of Greater Manchester's 10 local councils was needed for it to be implemented. A comprehensive transport investment package for Manchester, which included the congestion pricing element, was released for further public consultation and put to a referendum in December 2008. On 12 December 2008 the scheme was overwhelmingly rejected in the public referendum, with majorities against it in all 10 council areas.
Current proposals
United States
In August 2007, the United States Department of Transportation selected five metropolitan areas to initiate congestion pricing demonstration projects under the Urban Partnerships Congestion Initiative, sharing US$1 billion of federal funding. The projects under this initiative were the Golden Gate Bridge in San Francisco; State Route 520, serving downtown Seattle and communities to its east; Interstate 95 between Miami and Fort Lauderdale; Interstate 35W, serving downtown Minneapolis; and a variable-rate parking meter system in Chicago together with the Metro ExpressLanes in Los Angeles County, which replaced New York City after it left the program in 2008.
San Francisco transport authorities began a feasibility study in 2006 to evaluate the introduction of congestion pricing. The charge would be combined with other traffic reduction measures, with the money raised funding public transit improvements and bicycle and pedestrian enhancements. Initial pricing scenarios were presented at public meetings in December 2008, and the final study results, announced in November 2010, proposed modified alternatives based on the public's feedback; the updated proposal called for a six-month to one-year trial in 2015.
Governor Andrew Cuomo reintroduced a congestion pricing proposal for New York City in 2017 in response to the New York City Subway's state of emergency, a proposal that Mayor Bill de Blasio opposed. A commission organized in late 2017 to investigate the feasibility of congestion pricing found that such a scheme could benefit New York City. Cuomo's congestion pricing plan was approved in March 2019, though congestion pricing in New York City would not go into effect until 2022 at the earliest; the city's congestion pricing zone would be the first in North America. The Federal Highway Administration gave its final approval on June 26, 2023, allowing the MTA to begin setting toll rates for the proposed congestion zone. Implementation was scheduled for 30 June 2024, but in an announcement on 5 June 2024 Governor Kathy Hochul indefinitely postponed the plan. In November 2024, Hochul announced that the toll would go forward at a reduced rate, with a planned implementation on 5 January 2025.
China
In September 2011, local officials announced plans to introduce congestion pricing in Beijing. No details were provided regarding the magnitude of the congestion charges or the charge zone. The measure was initially proposed in 2010 and was recommended by the World Bank. A similar scheme was proposed for the city of Guangzhou, Guangdong province, in early 2010, and the city opened a public discussion on whether to introduce congestion charges. An online survey conducted by two local news outlets found that 84.4% of respondents opposed the charges.
In December 2015, the Beijing Municipal Commission of Transport announced plans to introduce congestion charges in 2016. According to the city's motor vehicle emission control plan for 2013–2017, the congestion charge will be a real-time variable pricing scheme based on actual traffic flows and emissions data, allowing the fee to vary by vehicle type, time of day and district. Dongcheng and Xicheng are among the districts most likely to implement the congestion charge first. Vehicle emissions account for 31% of the city's smog sources, according to the Beijing Environmental Protection Bureau. The local government has already implemented several policies to address air quality and congestion, such as a driving restriction scheme based on the last digit of license plates. A vehicle quota system was also introduced in 2011, awarding new car licenses through a lottery, with a ceiling of 6 million units set by the city authority for 2017. In May 2016, the Beijing city legislature announced it would consider levying traffic congestion charges by 2020 as part of a package of measures to reform the vehicle quota system. The city's environmental and transport departments are working together on a congestion pricing proposal.
Brazil
In January 2012, the federal government of Brazil enacted the Urban Mobility Law, which authorizes municipalities to implement congestion pricing to reduce traffic flows. The law also seeks to encourage the use of public transportation and reduce air pollution. Under the law, revenues from congestion charges must be used exclusively for urban infrastructure for public transportation and non-motorized modes (such as walking and cycling), and to finance public subsidies for transit fares. The law went into effect in April 2013.
In April 2012, one of the committees of the São Paulo city council approved a bill to introduce congestion pricing within the same area as the existing road space rationing by the last digit of the license plate, which has been in force since 1996. The proposed charge is R$4 per day, and it is expected to reduce traffic by 30% and raise about R$2.5 billion per year, most of which would go to expanding the São Paulo Metro system and bus corridors. The bill still needs approval by two other committees before going to a final vote in the city council. Since 1995, 11 bills to introduce congestion pricing have been presented in the city council. Opinion surveys have shown that the initiative is highly unpopular: a survey by Veja magazine found that 80% of drivers are against congestion pricing, and another survey by Exame magazine found that only 1% of São Paulo's residents support the initiative, while 30% consider extending the metro system a better solution to traffic congestion. São Paulo's strategic urban development plan "SP 2040", approved in November 2012, proposes implementing congestion pricing by 2025, when the density of metro and bus corridors is expected to reach 1.25 km/km2. The plan also requires ample consultation, and even a referendum, before implementation begins.
Thailand
In October 2024, Thailand's Ministry of Transport announced plans for a 40-50 Baht congestion charge for motorists who enter streets in inner Bangkok. The funds would be used to subsidize a 20 Baht fare for all railway lines in Greater Bangkok. The plans were supported by Governor of Bangkok Chadchart Sittipunt, who advocated for an expansion of Bangkok's transit network, including electric train and bus service along with pedestrian infrastructure.
Urban corridors and toll rings
Congestion pricing has also been implemented on urban freeways. Between 2004 and 2005, Santiago de Chile implemented the first 100% non-stop urban tolls for a freeway passing through a downtown area, charging by distance traveled. Congestion pricing has been used there since 2007 during rush hours in order to maintain reasonable speeds within the city core.
Norway pioneered electronic urban tolling in the main corridors of its three major cities: Bergen (1986), Oslo (1990) and Trondheim (1991). In Bergen, cars can only enter the central area using a toll road, so the effect is similar to a congestion charge. Though initially intended only to raise revenue to finance road infrastructure, the urban toll ring in Oslo had an unintended congestion pricing effect, as traffic decreased by around 5%. The Trondheim Toll Scheme also has congestion pricing effects, as charges vary by time of day. The Norwegian authorities pursued authorization to implement congestion charges in cities, and legislation was approved by Parliament in 2001. In October 2011 the Norwegian government announced the introduction of rules allowing congestion charging in cities, intended to cut greenhouse gas and air pollutant emissions and relieve traffic congestion. Norwegian authorities have since implemented urban charging schemes, operating both on motorways and for access to downtown areas, in five additional cities or municipalities: Haugesund, Kristiansand, Namsos, Stavanger and Tønsberg.
The Norwegian electronic toll collection system is called AutoPASS and is part of the joint venture EasyGo.
Single facilities
Urban
Congestion pricing has also been applied to specific roadways. The first schemes of this kind allowed users of low-occupancy or single-occupancy vehicles to use high-occupancy vehicle (HOV) lanes if they paid a toll. This arrangement is known as high-occupancy toll (HOT) lanes, and it has been introduced mainly in the United States and Canada. The first practical implementation was California's private toll 91 Express Lanes in Orange County in 1995, followed in 1996 by Interstate 15 in San Diego. The concept has been controversial, and HOT schemes have been called "Lexus lanes" by critics who see the pricing scheme as a perk for the rich. According to the Texas A&M Transportation Institute, by 2012 the United States had 722 corridor-miles of HOV lanes and 294 corridor-miles of HOT/Express lanes in operation, with a further 163 corridor-miles of HOT/Express lanes under construction.
Congestion pricing in the form of tolls that vary by time of day has also been implemented on bridges and tunnels providing access to the central business districts of several major cities; in most cases a toll already existed. Dynamic pricing is relatively rare compared with variable pricing. One example of dynamic tolling is Interstate 66 in the Washington, D.C., metropolitan area, where tolls can rise sharply at times of severe congestion. On average, however, round-trip prices are much lower: $11.88 (2019), $5.04 (2020), $4.75 (2021).
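A minimal sketch of a dynamic-tolling update loop in the spirit of facilities such as I-66 is shown below: the toll rises when measured speeds fall below a free-flow target and falls when demand eases. The target speed, gain, floor and ceiling are illustrative assumptions, not the operator's actual algorithm.

```python
# Illustrative dynamic-toll update; all thresholds, gains and bounds are assumptions.

def update_toll(current_toll: float, measured_speed_mph: float,
                target_speed_mph: float = 55.0, gain: float = 0.25,
                floor: float = 2.0, ceiling: float = 45.0) -> float:
    """Recompute the toll for the next pricing interval (e.g. every few minutes)."""
    shortfall = target_speed_mph - measured_speed_mph   # positive when congested
    new_toll = current_toll + gain * shortfall
    return round(min(ceiling, max(floor, new_toll)), 2)

toll = 8.00
for speed in (50, 42, 35, 48, 60):       # successive speed readings in mph
    toll = update_toll(toll, speed)
    print(speed, toll)                   # the toll climbs as congestion worsens
```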
In March 2001, the Port Authority of New York and New Jersey (PANYNJ) implemented a discount on regular toll fees during off-peak hours for vehicles paying electronically with an E-ZPass issued in New York State. This discounted toll was implemented at several tunnels and bridges connecting New York City and New Jersey, including the George Washington Bridge, Lincoln Tunnel and Holland Tunnel, and at some other bridges administered by PANYNJ. Since March 2008, qualifying low-emission automobiles with a fuel economy of at least 45 miles per gallon have been eligible for a Port Authority Green Pass, which provides a 50% discount off the regular full toll during off-peak hours.
In January 2009, variable tolls were implemented on the Sydney Harbour Bridge, two weeks after the bridge upgraded to 100% free-flow electronic toll collection. The highest fees are charged during the morning and afternoon peak periods; a toll 25% lower applies during the shoulder periods; and a toll lower than the previously existing one is charged at night, on weekends and on public holidays. This was Australia's first road congestion pricing scheme, and it has had only a very minor effect on traffic levels, reducing them by 0.19%.
In July 2010 congestion tolls were implemented at the San Francisco–Oakland Bay Bridge. The Bay Bridge congestion pricing scheme charges a higher toll from 5 a.m. to 10 a.m. and from 3 p.m. to 7 p.m., Monday through Friday, and a separate toll applies on weekends; at all other times on weekdays the toll remained at its previous level. According to the Bay Area Toll Authority, fewer users are driving during the peak hours and more vehicles are crossing the Bay Bridge before and after the periods in which the congestion toll applies. The agency also reported that commute delays in the first six months dropped by an average of 15 percent compared with 2009. When the congestion tolls were proposed, the agency expected the scheme to produce a 20 to 30 percent drop in commute traffic.
Non-urban
Autoroute A1 in northern France is one of the few cases of congestion pricing implemented outside urban areas. It is an expressway connecting Paris to Lille, and since 1992 congestion prices have been applied during weekends with the objective of spreading demand for the trip back to Paris across Sunday afternoons and evenings.
Research
Measurement of effects
In a road network, congestion can be considered a specific measure of the time delay in a journey, or time lost through traffic jams. Delays can be caused by some combination of traffic density, road capacity, and the delaying effects of other road users and of traffic management schemes such as traffic lights, junctions and street works. Congestion can be measured as the extra journey time needed to traverse a congested route compared with the same route free of such interference. In the public mind, however, this technical definition of congestion as a measure of delay is often confused with, and used interchangeably with, traffic density.
To measure the true effects of any traffic management scheme it is normally necessary to establish a baseline, or "do nothing" case, which estimates the effects on the network without any changes other than normal trends and expected local changes. Notably, this was not done for the London Congestion Charging Scheme, which has led to claims that it is not possible to determine the extent of the scheme's actual influence. Regardless of the scheme's impact, Transport for London (TfL) estimated in a retrospective analysis that there would already have been a significant reduction in traffic as a consequence of parking policies, and increased congestion as a result of traffic management and other interventions that had the effect of reducing highway capacity. In 2006, the last year before the zone was expanded, TfL observed that traffic flows were lower than in any recent year, while network traffic speeds were also lower than in any recent year.
In 2013, ten years since its implementation, TfL reported that the congestion charging scheme resulted in a 10% reduction in traffic volumes from baseline conditions, and an overall reduction of 11% in vehicle kilometres in London between 2000 and 2012. Despite these gains, traffic speeds have also been getting progressively slower over the past decade, particularly in central London. TfL explains that the historic decline in traffic speeds is most likely due to interventions that have reduced the effective capacity of the road network in order to improve the urban environment, increase road safety and prioritise public transport, pedestrian and cycle traffic, as well as an increase in road works by utilities and general development activity since 2006. TfL concludes that while levels of congestion in central London are close to pre-charging levels, the effectiveness of the congestion charge in reducing traffic volumes means that conditions would be worse without the Congestion Charging scheme.
New York City Congestion Pricing Environmental Assessment Study
In May 2023, the Metropolitan Transportation Authority (MTA) finalized and published the Environmental Assessment (EA) of the congestion pricing program, which included a 30-day public comment period ending on June 12, 2023. The EA included a comprehensive regional study of 22 million people taking 28.8 million journeys per average weekday in a 28-county area covering New York, New Jersey and Connecticut. As part of the regional study, the MTA and the other sponsoring organizations combined data from local studies with county data and modeled the patterns expected if congestion pricing were implemented. On June 22, 2023, the Federal Highway Administration (FHWA) published its Finding of No Significant Impact (FONSI) decision for the project, finding that the EA had addressed public input, considered the impacts, and appropriately mitigated adverse effects. The EA showed potential improvements in regional air quality, as some drivers would shift from driving to transit within the New York metropolitan region.
The EA reports seven "alternative" scenarios for the congestion pricing project, ranging from full implementation of congestion pricing (known as "Scenario A") to a scenario in which no action is taken. Relative to the "no action" alternative, Scenario A yields a 7.1% to 9.2% reduction in daily vehicle miles traveled, a 15.4% to 19.9% reduction in the number of vehicles entering the congestion zone, and an estimated $1.02 to $1.48 billion in net revenue for the program. Nitrogen oxide emissions were estimated to drop by 9.54% within the zone and by approximately 5.96% in New York County (Manhattan). Other pollutants were also expected to fall: carbon emissions by 11.48%, and fine particulate matter (PM2.5 and PM10) by 11.37% and 12.16% respectively.
The study also projects an increase in air pollutants in neighboring areas outside the congestion zone. In particular, Bergen County in New Jersey was estimated to see a 0.63% increase in nitrogen oxide emissions if congestion tolls were introduced, and residents of the Bronx a 0.09% increase; other areas with similar projected increases include Richmond and Putnam counties. The EA also indicated modest changes in the number of vehicles on some highways under congestion pricing, but little change in vehicle miles traveled. For a 2045 projection, the EA forecast a 10.4% reduction in crossings from Brooklyn into the congestion zone, which includes the Brooklyn–Queens Expressway (BQE) and connected entryways such as the Manhattan and Brooklyn Bridges, and a 5.4% decrease in vehicles on FDR Drive. These reductions represent 16,000 to 42,000 fewer people accessing the congestion zone in a private automobile on an average weekday.
The EA reported little significant change in congestion as measured in vehicle miles traveled; FDR Drive, for instance, was projected to experience a 0.2% increase in vehicle miles traveled. Nonetheless, some officials remained optimistic, noting that congestion pricing was expected to reduce traffic in areas such as the BQE by around 7-10%.
Academic debate and concerns
Even the transport economists who advocate congestion pricing have anticipated several practical limitations, concerns and controversial issues regarding the actual implementation of this policy. As summarized by Cervero:
True social-cost pricing of metropolitan travel has proven to be a theoretical ideal that so far has eluded real-world implementation. The primary obstacle is that except for professors of transportation economics and a cadre of vocal environmentalists, few people are in favor of considerably higher charges for peak-period travel. Middle-class motorists often complain they already pay too much in gasoline taxes and registration fees to drive their cars, and that to pay more during congested periods would add insult to injury. In the United States, few politicians are willing to champion the cause of congestion pricing for fear of reprisal from their constituents. Critics also argue that charging more to drive is elitist policy, pricing the poor off of roads so that the wealthy can move about unencumbered. It is for all these reasons that peak-period pricing remains a pipe dream in the minds of many.
Both Button and Small et al. have identified the following issues:
The real-world demand functions for urban road travel are more complex than the theoretical functions used in transport economics analysis. Congestion pricing was developed as a first-best solution, based on the assumption that the optimal price of road space equals the marginal cost price if all other goods in the economy are also marginal-cost priced. In the real world this is not true; thus, actual implementations of congestion pricing are only proxies or second-best solutions. Based on the economic principles behind congestion pricing, the optimal congestion charge should make up for the difference between the average cost paid by the driver and the marginal cost imposed on other drivers (such as extra delay) and on society as a whole (such as air pollution). The practical challenge of setting optimal link-based tolls is daunting, given that neither the demand functions nor the link-specific speed-flow curves can be known precisely. Transport economists therefore recognize that, in practice, setting the right price for the congestion charge is a matter of trial and error.
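The first-best rule described above can be written compactly. The notation below is generic textbook shorthand (AC for the average private cost per trip at flow q, e for other external costs such as pollution, P for inverse demand), not taken from any particular cited source.

```latex
MSC(q) = AC(q) + q\,\frac{dAC(q)}{dq} + e(q), \qquad
\tau^{*} = MSC(q^{*}) - AC(q^{*}) = q^{*}\,\frac{dAC(q^{*})}{dq} + e(q^{*})
```

where the optimal flow q* satisfies P(q*) = MSC(q*). In words, the toll charges each driver for the delay and pollution costs imposed on everyone else, costs the driver would otherwise ignore.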
Inequality issues: a main concern is the possibility of undesirable distributional repercussions, given the diversity of road users. Use of the tolled road depends on the user's level of income: where some cannot afford to pay the congestion charge, the policy is likely to privilege the middle class and the rich, and users who shift to a less-preferred alternative are also made worse off. The less wealthy are the more likely to switch to public transit. Road space rationing is another strategy generally viewed as more equitable than congestion pricing; however, high-income users can always avoid such travel restrictions by owning a second car, and users with relatively inelastic demand (such as a worker who needs to transport tools to a job site) are relatively more affected.
There are difficulties in deciding how to allocate the revenues raised, a controversial issue among scholars. The revenues can be used to improve public transport (as in London) or to invest in new road infrastructure (as in Oslo). Some academics argue that revenues should be returned as direct transfer payments to former road users. Congestion pricing is not intended to increase public revenues or to become just another tax; however, this is precisely one of the main concerns of road users and taxpayers.
One alternative, aimed at avoiding the inequality and revenue allocation issues, is to ration peak-period travel through mobility rights or revenue-neutral credit-based congestion pricing. This system would be similar to the existing emissions trading of carbon credits. Metropolitan-area or city residents, or taxpayers, would be issued mobility rights or congestion credits and would have the option of using these themselves, or of trading or selling them to anyone willing to continue traveling by automobile beyond their personal quota. This trading system would allow the direct benefits to accrue to the users who shift to public transportation or reduce their peak-hour travel, rather than to the government.
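A toy sketch of the credit mechanism described above is given below; the account names, quota sizes and trade price are hypothetical and only illustrate how surplus credits would flow from light users to heavy users instead of to the government.

```python
# Illustrative tradable mobility-credit ledger; all names and numbers are hypothetical.

class CreditAccount:
    def __init__(self, owner: str, credits: int):
        self.owner, self.credits = owner, credits

def drive_peak(account: CreditAccount, trips: int) -> None:
    """Spend credits for peak-period trips; travel beyond the quota requires buying credits."""
    if account.credits < trips:
        raise ValueError(f"{account.owner} must buy credits before travelling")
    account.credits -= trips

def trade(seller: CreditAccount, buyer: CreditAccount, amount: int, price: float) -> float:
    """Transfer credits and return the payment owed to the seller."""
    if seller.credits < amount:
        raise ValueError("seller does not hold enough credits")
    seller.credits -= amount
    buyer.credits += amount
    return amount * price

transit_rider, car_commuter = CreditAccount("Alice", 20), CreditAccount("Bob", 20)
drive_peak(car_commuter, 18)                        # Bob nearly exhausts his quota
payment = trade(transit_rider, car_commuter, 10, 2.50)
print(car_commuter.credits, payment)                # 12 credits left, 25.0 paid to Alice
```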
Public controversy
Experience from the few cities where congestion pricing has been implemented shows that social and political acceptability is key. Public discontent with congestion pricing, or rejection of congestion pricing proposals, is due mainly to the inequality issues, the economic burden on neighboring communities, the effect on retail businesses and the economic activity in general, and the fears that the revenues will become just another tax.
Congestion pricing remains highly controversial with the public both before and after implementation. This has in part been resolved through referendums, such as after the seven-month trial period in Stockholm; however, this raises the question of where the boundary of the referendum franchise should lie, since it is often people living outside the urban area who have to pay the tax, while the benefit accrues to those who live within the area. In Stockholm the referendum produced a majority in favour within the city border (where the votes counted), but not outside it.
Some concerns have also been expressed regarding the effects of cordon area congestion pricing on economic activity and land use, as the benefits are usually evaluated only from an urban transportation perspective. In small cities, however, congestion pricing schemes have been used with the main objectives of improving urban quality and preserving historical heritage.
The effects of a charge on business have been disputed: some reports describe shops and businesses in London being heavily affected by the cost of the charge, both through lost sales and increased delivery costs, while others show businesses supporting the charge six months after implementation. Some reports show that business activity within the charge zone was higher in both productivity and profitability and that the charge had a "broadly neutral impact" on the London-wide economy, while others claim an average drop in business of 25% following the 2007 extension.
Other criticism has been raised concerning the environmental effects on neighborhoods bordering the congestion zone, with critics claiming that congestion pricing would create "parking lots" and add traffic and pollution to those neighborhoods, and that it imposes a regressive tax on some commuters. Stockholm's trial of congestion pricing, however, showed a reduction in traffic in areas outside the congestion zone. Other opponents argue that the pricing could become a tax on middle- and lower-class residents, since those citizens would be affected the most financially. The installation of cameras for tracking purposes may also raise civil liberties concerns.
Effects
A 2019 study of congestion pricing in Stockholm between 2006 and 2010 found that, in the absence of congestion pricing, Stockholm's air would have been 5 to 15 percent more polluted over that period and that young children would have suffered substantially more asthma attacks. A 2020 study that analyzed driving restrictions in Beijing estimated that implementing congestion pricing would reduce total traffic, increase traffic speed, reduce pollution, reduce greenhouse gas emissions, reduce traffic accidents, and increase tax revenues. A 2020 study of London found that congestion pricing (introduced in 2003) led to reductions in pollution and in driving, but increased pollution from diesel vehicles (which were exempt from the congestion charge). A 2021 study found that congestion pricing reduced emissions by shortening commuting distances and reducing housing sizes.
A 2013 study found that after congestion pricing was implemented in Seattle, drivers reported greater satisfaction with the routes covered by congestion pricing and reported lower stress.
A 2016 study found that more people used public transportation as congestion pricing in Singapore increased. Another 2016 study found that real estate prices dropped by 19% within the cordoned-off areas of Singapore where congestion pricing was in place, relative to areas outside the zone.
Waterways
Panama Canal booking system and auction
The Panama Canal has a limited capacity determined by the operational times and cycles of the existing locks, further constrained by the trend towards larger (close to Panamax-sized) vessels, which take more time to transit the locks and navigational channels, and by the need for periodic maintenance of the aging canal, which forces periodic shutdowns of the waterway. Demand, meanwhile, has been growing with the rapid growth of international trade, and many users require a guarantee of a certain level of service. Despite gains in efficiency, the Panama Canal Authority (ACP) estimated that the canal would reach its maximum sustainable capacity between 2009 and 2012. The long-term solution to the congestion problem was the expansion of the canal through a new third set of locks; work started in 2007 and the expanded canal entered commercial operation in June 2016. The new locks allow the transit of larger, Post-Panamax ships, which have a greater cargo capacity than the original locks can handle.
Considering the high operational costs of the vessels, the long queues that occur during the high season (sometimes up to a week's delay), and the high value of some of the cargo transported through the canal, the ACP implemented a congestion pricing scheme to better manage the scarce capacity available and to increase the level of service offered to shipping companies. The scheme gave users two choices: (1) transit in order of arrival on a first-come, first-served basis, as the canal has historically operated, or (2) booked service for a fee, that is, a congestion charge.
The booked service offered two options. The Transit Booking System, available online, allows customers who do not want to wait in the queue to pay an additional 15% over the regular tolls, guaranteeing a specific day for transit and a crossing of the canal in 18 hours or less; the ACP sells 24 of these daily slots up to 365 days in advance. The second option is high-priority transit: since 2006 the ACP has made a 25th slot available, sold through the Transit Slot Auction to the highest bidder. The main customers of the Transit Booking System are cruise ships, container ships, vehicle carriers and non-containerized cargo vessels.
The highest toll for high-priority passage paid through the Transit Slot Auction was charged to a tanker in August 2006, allowing it to bypass a 90-ship queue awaiting the end of maintenance works on the Gatun locks and thus avoid a seven-day delay; the normal fee would have been far lower.
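The sketch below illustrates the basic allocation step of such a slot auction: the extra daily slot goes to the highest bidder that clears a reserve price. The reserve price and bid figures are hypothetical, and the real ACP auction rules are more detailed.

```python
from __future__ import annotations

# Illustrative award of the 25th daily slot to the highest qualifying bidder.
def award_slot(bids: dict[str, float], reserve: float) -> tuple[str, float] | None:
    """Return (vessel, winning bid) or None if no bid meets the reserve price."""
    qualifying = {vessel: bid for vessel, bid in bids.items() if bid >= reserve}
    if not qualifying:
        return None
    winner = max(qualifying, key=qualifying.get)
    return winner, qualifying[winner]

bids = {"tanker A": 220_000.0, "container ship B": 180_000.0, "car carrier C": 95_000.0}
print(award_slot(bids, reserve=100_000.0))   # ('tanker A', 220000.0)
```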
Airports
Many airports face severe congestion, with runway capacity the scarcest resource. Congestion pricing schemes, including slot auctions similar to the Panama Canal's, have been proposed to mitigate this problem, but implementation has been piecemeal. The first scheme began in 1968, when higher peak-hour landing fees were introduced for aircraft with 25 seats or fewer at Newark, Kennedy and LaGuardia airports in New York City. As a result of the higher charges, general aviation activity during peak periods decreased by 30%. These fees were applied until deregulation of the industry, but higher fees for general aviation were kept to discourage such operations at New York's busiest airports. In 1988 a higher landing fee for smaller aircraft was adopted at Boston's Logan Airport; with this measure much of the general aviation traffic abandoned Logan for secondary airports. In both US cases the pricing scheme was challenged in court: in the Boston case the judge ruled in favor of general aviation users due to the lack of alternative airports, while in the New York case the judge dismissed the suit because "the fee was a justified means of relieving congestion".
Congestion pricing has also been implemented for scheduled airline services. The British Airports Authority (BAA) was a pioneer in implementing congestion pricing for all types of commercial aviation. In 1972 it implemented the first peak pricing policy, with surcharges varying by season and time of day, and by 1976 it had raised these peak charges. London Heathrow had seven pricing structures between 1976 and 1984. In this case it was the US carriers that went to international arbitration in 1988 and won their case.
In 1991, Athens Airport charged a 25% higher landing fee for aircraft arriving between 11:00 and 17:00 during the summer high tourism season, and Hong Kong charges an additional flat fee on top of the basic weight-based charge. In 1991–92 peak pricing was implemented at London's main airports, Heathrow, Gatwick and Stansted; airlines were charged different landing fees for peak and off-peak operations depending on the weight of the aircraft. For a Boeing 757, for example, the peak landing fee was about 2.5 times the off-peak fee at all three airports; for a Boeing 747 the differential was even higher, as the older 747 carries a higher noise charge. Though related to runway congestion, the main objective of these peak charges at the major British airports was to raise revenue for investment.
See also
Automobile costs
Braess's paradox
Deadweight loss
Downs–Thomson paradox
Electricity pricing
Low-emission zone
Energy demand management (congestion pricing applied to electric utilities)
GNSS road pricing
Induced demand
Jevons paradox
Lewis–Mogridge position
Pareto efficiency
Road pricing
Road space rationing
Tax incidence
Tragedy of the commons
Transport economics
Transportation demand management
Variable pricing
Vehicle miles traveled tax
Water pricing
References
Bibliography
(See Chapter 9: Optimizing Traffic Congestion)
(See Chapter 6: The Master-Planned Transit Metropolis: Singapore)
(See Chapter 4: Pricing and 4-3: Congestion Pricing in Practice)
External links
Electronic toll collection
Intelligent transportation systems
Pricing
Road traffic management
Transport economics
Transportation planning
| Congestion pricing | Technology | 10,032 |
47,993,889 | https://en.wikipedia.org/wiki/Iron%20tris%28dimethyldithiocarbamate%29 | Iron tris(dimethyldithiocarbamate) is the coordination complex of iron with dimethyldithiocarbamate with the formula Fe(S2CNMe2)3 (Me = methyl). It is marketed as a fungicide.
Synthesis, structure, bonding
Iron tris(dithiocarbamate)s are typically prepared by salt metathesis reactions.
Iron tris(dimethyldithiocarbamate) is an octahedral coordination complex of iron(III) with D3 symmetry.
Spin crossover (SCO) was first observed in 1931 by Cambi et al., who discovered anomalous magnetic behavior for the tris(N,N-dialkyldithiocarbamato)iron(III) complexes. The spin states of these complexes are sensitive to the nature of the amine substituents.
Reactions
Iron tris(dithiocarbamate)s react with nitric oxide to give the nitrosyl complex Fe(S2CNR2)2(NO).
This efficient chemical trapping reaction provides a means to detect NO.
Reflecting the strongly donating properties of dithiocarbamate ligands, iron tris(dithiocarbamate)s oxidize at relatively mild potentials to give isolable iron(IV) derivatives [Fe(S2CNR2)3]+.
Iron tris(dithiocarbamate)s react with hydrochloric acid to give the pentacoordinate chloride Fe(S2CNR2)2Cl.
Safety
The U.S. Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for ferbam exposure in the workplace at 15 mg/m3 over an 8-hour workday. The U.S. National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday. At levels of 800 mg/m3, ferbam is immediately dangerous to life and health.
See also
Zinc dimethyldithiocarbamate - a related dimethyldithiocarbamate complex of zinc
Nickel bis(dimethyldithiocarbamate) - a related dimethyldithiocarbamate complex of nickel
Iron tris(diethyldithiocarbamate) - a related diethyldithiocarbamate complex of iron
References
Fungicides
Dithiocarbamates
Iron(III) compounds
Iron complexes | Iron tris(dimethyldithiocarbamate) | Chemistry,Biology | 502 |
24,624,589 | https://en.wikipedia.org/wiki/Immunoglobulin%20C2-set%20domain | The basic structure of immunoglobulin (Ig) molecules is a tetramer of two light chains and two heavy chains linked by disulphide bonds. There are two types of light chains: kappa and lambda, each composed of a constant domain (CL) and a variable domain (VL). There are five types of heavy chains: alpha, delta, epsilon, gamma and mu, all consisting of a variable domain (VH) and three (in alpha, delta and gamma) or four (in epsilon and mu) constant domains (CH1 to CH4). Ig molecules are highly modular proteins, in which the variable and constant domains have clear, conserved sequence patterns. The domains in Ig and Ig-like molecules are grouped into four types: V-set (variable; ), C1-set (constant-1; ), C2-set (constant-2; ) and I-set (intermediate; ). Structural studies have shown that these domains share a common core Greek-key beta-sandwich structure, with the types differing in the number of strands in the beta-sheets as well as in their sequence patterns.
Immunoglobulin-like domains that are related in both sequence and structure can be found in several diverse protein families. Ig-like domains are involved in a variety of functions, including cell–cell recognition, cell-surface receptors, muscle structure and the immune system.
C2-set domains are Ig-like domains resembling the antibody constant domain. They are found primarily in the mammalian T-cell surface antigens CD2 (Cluster of Differentiation 2), CD4 and CD80, as well as in vascular (VCAM) and intercellular (ICAM) cell adhesion molecules.
CD2 mediates T-cell adhesion via its ectodomain and signal transduction via its 117-amino-acid cytoplasmic tail. CD2 displays structural and functional similarities with African swine fever virus (ASFV) LMW8-DR, a protein involved in cell–cell adhesion and immune response modulation, suggesting a possible role in the pathogenesis of ASFV infection. CD4 is the primary receptor for HIV-1. CD4 has four immunoglobulin-like domains in its extracellular region that share the same structure but can differ in sequence. Certain extracellular domains may be involved in dimerisation.
Human proteins containing this domain
CD2
CD4
VCAM1
References
Protein domains
Single-pass transmembrane proteins | Immunoglobulin C2-set domain | Biology | 532 |
2,650,504 | https://en.wikipedia.org/wiki/Rho1%20Sagittarii | {{DISPLAYTITLE:Rho1 Sagittarii}}
Rho1 Sagittarii, Latinized from ρ1 Sagittarii, is a single, variable star in the southern constellation of Sagittarius. It has a white hue and is visible to the naked eye with an apparent visual magnitude that fluctuates around 3.93. The distance to this star is approximately 127 light years based on parallax, and it is drifting further away with a radial velocity of +1.2 km/s. It is positioned near the ecliptic and so it can be occulted by the Moon.
This object has a stellar classification of A9IV, matching a subgiant star that is evolving away from the main sequence. It is a low amplitude Delta Scuti variable, ranging from 3.94 to 3.90 magnitude with a period of 0.05 days. The star is 893 million years old and is spinning with a projected rotational velocity of 68 km/s. It has 1.9 times the mass of the Sun and 3.3 times the Sun's radius. The star is radiating 31 times the luminosity of the Sun from its photosphere at an effective temperature of 7,469 K.
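The quoted radius, effective temperature and luminosity are mutually consistent under the Stefan–Boltzmann law; the check below assumes a nominal solar effective temperature of 5,772 K.

```latex
\frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4}
= (3.3)^{2}\left(\frac{7469}{5772}\right)^{4} \approx 31
```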
References
A-type subgiants
Delta Scuti variables
Sagittarius (constellation)
| Rho1 Sagittarii | Astronomy | 311 |
2,352,659 | https://en.wikipedia.org/wiki/Philips%20circle%20pattern | The Philips circle pattern (also referred to as the Philips pattern or PTV Circle pattern) refers to a family of related electronically generated complex television station colour test cards. The content and layout of the original colour circle pattern was designed by Danish engineer (1939–2011) in the Philips TV & Test Equipment laboratory in Amager (moved to Brøndby Municipality in 1989) near Copenhagen under supervision of chief engineer Erik Helmer Nielsen in 1966–67, largely building on their previous work with the monochrome PM5540 pattern. The first piece of equipment, the PM5544 colour pattern generator, which generates the pattern, was made by Finn Hendil and his group in 1968–69. The same team would also develop the Spanish TVE colour test card in 1973.
Since the widespread introduction of the original PM5544 from the early-1970s, the Philips Pattern has become one of the most commonly used test cards, with only the SMPTE and EBU colour bars as well as the BBC's Test Card F coming close to its usage.
The Philips circle pattern was later incorporated into other test pattern generators from Philips itself, as well as test pattern generators from various other manufacturers. Equipment from Philips and succeeding companies which generate the circle pattern are the PM5544, PM5534, PM5535, PM5644, PT5210, PT5230 and PT5300. Other related (non circle pattern) test card generators by Philips are the PM5400 (TV serviceman) family, PM5515/16/18, PM5519, PM5520 (monochrome), PM5522 (PAL), PM5540 (monochrome), PM5547, PM5552 and PM5631.
Operation
Unlike previous test card approaches, in which a live camera or monoscope captured a printed card, the Philips PM5544 generates the test pattern entirely with electronic circuits, with separate paths for the Y, R-Y and B-Y colour components, allowing engineers to reliably test and adjust transmitters and receivers for signal disturbances and colour separation, for instance for PAL broadcasts.
In simple terms, the displayed pattern provides reference levels of black, white and colour saturation, to which a receiver can be set. Displayed image geometry (image centering, correct proportions of the circle, etc.) can also be corrected. More technical adjustments are also possible.
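The split into a luminance path and two colour-difference paths can be illustrated with a short calculation. The sketch below uses the standard 0.299/0.587/0.114 luma weights of standard-definition television and the 75%-amplitude, 100%-saturation bar values implied by the colour bar described in the feature list; it is an illustration of the signal components, not the PM5544's internal circuitry.

```python
# Y, R-Y and B-Y values for the six 75%-amplitude, 100%-saturation colour bars.
EBU_BARS_75 = {                      # normalised R, G, B
    "yellow":  (0.75, 0.75, 0.00),
    "cyan":    (0.00, 0.75, 0.75),
    "green":   (0.00, 0.75, 0.00),
    "magenta": (0.75, 0.00, 0.75),
    "red":     (0.75, 0.00, 0.00),
    "blue":    (0.00, 0.00, 0.75),
}

def colour_difference(r: float, g: float, b: float):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # standard SD luma weights
    return round(y, 3), round(r - y, 3), round(b - y, 3)   # Y, R-Y, B-Y

for name, rgb in EBU_BARS_75.items():
    y, ry, by = colour_difference(*rgb)
    print(f"{name:8s} Y={y:5.3f}  R-Y={ry:+.3f}  B-Y={by:+.3f}")
```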
Main technical features of the test card:
Circle with b/w and colour information
Square wave – repeating black and white blocks at 75% amplitude (the same amplitude as the colour bar);
Colour bar – yellow, cyan, green, magenta, red and blue with 100% saturation and 75% amplitude (EBU colour bars);
Crossed lines – at the centre of the circle, they allow to check for proper interlace;
Definition lines – sine wave gratings with TV line frequencies (see the sketch after the feature list) corresponding to:
0.8, 1.8, 2.8, 3.8 and 4.8 MHz (PAL-B/G);
1.5, 2.5, 3.5, 4.0, 4.5 and 5.25 MHz (PAL-I);
0.8, 1.8, 2.8, 3.8, 4.8 and 5.63 MHz (PAL-D/K);
0.5, 1.0, 2.0, 3.0 and 4.0 MHz (NTSC/PAL-M/PAL-N);
0.8, 1.8, 2.8, 1.8 and 0.8 MHz (SECAM);
Staircase – greyscale with 6 levels (can display up to 11);
White/black step with needle pulse;
Colour step – red on yellow background colours, 75% amplitude.
To the left of the circle:
Vertical bar – line alternating positive and negative R-Y signal;
Vertical bars – positive and negative R-Y signal;
Two rectangles – G-Y signal.
To the right of the circle:
Vertical bar – line alternating positive and negative B-Y signal;
Vertical bars – positive and negative B-Y signal;
Two rectangles – G-Y signal.
Background:
Grid – made from 14 horizontal x 19 vertical lines;
Background Level – adjustable between 0 and 80% amplitude;
B/W border castellations.
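The definition-line frequencies listed above translate directly into a number of sine-wave cycles drawn across the visible part of one TV line. The sketch below assumes the nominal 52 µs active-line duration of 625-line systems (525-line systems use roughly 52.7 µs); it is a rough illustration, not a description of the generator's circuitry.

```python
# Cycles per active line for the PAL-B/G gratings listed above.
ACTIVE_LINE_US = 52.0   # assumed active-line time in microseconds (625-line systems)

def cycles_across_line(freq_mhz: float, active_us: float = ACTIVE_LINE_US) -> float:
    return freq_mhz * active_us          # MHz x microseconds = cycles

for f in (0.8, 1.8, 2.8, 3.8, 4.8):
    print(f"{f} MHz -> {cycles_across_line(f):.0f} cycles per active line")
```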
Pattern variations
4:3 (original)
While the basic specifications of the pattern normally remain consistent, there are often small variations depending on the brand and type of generator used to produce it, as well as how the broadcaster has chosen to configure it. Some television stations have included a digital clock and/or date, as well as the station logo or ID, inside the circle. This practice was common in Asia and some parts of Europe, as well as in South Africa.
SECAM
The Philips circle pattern is geared towards the PAL colour-coding system, but SECAM versions do exist (for example, it was used by TVP in Poland, MTV in Hungary and TDF in France without side bars, and by ERT in Greece, VTV in Vietnam and Télé Sahel in Niger with side bars). The most obvious difference is the absence of the PAL-specific test features (the two normally invisible outermost vertical bars). Less noticeable is the change to the multiburst gratings, which instead run at 0.8, 1.8, 2.8, 1.8 and 0.8 MHz due to the lower luminance bandwidth in the SECAM system.
NTSC
Likewise, there are 525-lines NTSC versions of the pattern. One of the NTSC variants, used in Philippines, Taiwan, Haiti and Japan (by NHK, with the multiburst gratings slightly modified for NTSC-J), has a modified square wave near the top of the circle at 300 kHz and the multiburst gratings at 0.5, 1.0, 2.0, 3.0 and 4.0 MHz. (WNYW's configuration simply removed the side colour bars.) A second variation, used by CBC Montreal in Quebec, Canada, had different gratings and added extra colour bars.
PAL-M
In addition to the 525-line NTSC pattern, a PAL-M version of the pattern was also offered for the Brazilian market. Although no public transmissions are known to exist, the pattern is identical to the NTSC version but also includes achromatic fields adjacent to the side bars.
PAL-N
Though no surviving equipment or captures/recordings are known, a version of the pattern existed for the PAL-N system. It is expected to resemble the PAL-B/G pattern, but with the gratings of the NTSC version.
BBC Test Card G
Test card G was a quasi-Philips pattern developed by the BBC. It is realised by the physical modification of standard PM5544 generators and differs from the original as follows:
Colour bar saturation - 95% (changed from 75%)
Colour bar contrast - 75% (changed from 100%)
Colour bar set-up - 25% (changed from 0%)
Multiburst gratings (see PAL-I listing above)
Multiburst amplitude - 71.4% (changed from 100%)
The above specifications were incorporated back into standard Philips generators such as the PM5534I/00 and the PM5644I/00.
16:9 (widescreen)
The widescreen version of the Philips circle pattern was designed in 1991. It is only known to have been used in PAL regions. It retains the signals present in the original and features additional elements to test signal and picture quality, including television line resolution gratings, corner circles, and markers for correct overscan and image centering.
Several different types of hardware are known to generate it, including the PM5644 (widescreen versions only), PT5230 and PT5300 (with appropriate pattern generator modules installed) and the PM5420.
There are two major variations of the 16:9 circle pattern. The original 1991 pattern contains high frequency components which were useful for testing widescreen televisions in factories, specifically 450/400 TVL diagonal lines and a sixth 5.8 MHz grating. This, however, made it suboptimal for public broadcasts, as these components exceeded the bandwidth of most PAL transmission systems. All pattern components of the PT5230/PT5300 version were within the bandwidth of PAL B/G.
Widescreen circle patterns were used by broadcasters such as RAI (Italy), BRT (Belgium), RTL-TVI (Belgium/Luxembourg), Ned3 (Netherlands), TVE (Spain) and KNR TV (Greenland).
NTSC
Although no public transmissions are known to exist, an NTSC widescreen version of the PM5644 and later models was available.
High definition
An HD (1080p) version of the Philips circle pattern was developed for the PT5300 via the PT8612 HD Signal Generator add-on. It was never formally integrated into the PT8612 thus was not offered for sale.
A 1250 line HD-MAC version of the Philips circle pattern also exists.
Squared version
A variation of the PM5544/34 pattern has been recorded where the circle generator is bypassed or faulty. This reveals the full contents of the central pattern elements, which are normally cropped. Anecdotally, this pattern has been referred to as the PM5538; however, no Philips pattern generator carried that designation. It was used in some parts of the Middle East, for example by Dubai 33 in the UAE and the Jordan Radio and Television Corporation (JRTV) in Jordan.
Mainland Chinese variants
Starting from the 1980s, China Central Television and some provincial mainland Chinese broadcasters began using a heavily modified version of the PM5544 called the GB2097 inspection chart. Later, another modification, anecdotally called PM5549 (the Philips PM5549 was an unrelated product) began to be used at the headends of some mainland Chinese cable television providers.
Physical equipment
PM5544
The design of the original PM5544 is fairly complicated, with an array of analogue signal generators generating each component of the pattern continuously. Digital circuitry is used to sequence the outputs from each module into the final pattern. The circle is internally generated as a square and cropped according to coordinates held in a 264x252 grid covering half of the circle.
The original used magnetic core to store data for the circle, essentially a very small core rope memory. Suitable ROM chips were not available at the time. Four-fold symmetry was used to minimise the memory requirements. Later versions replaced the core with ROM.
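As an illustration of the four-fold symmetry trick described above, the following Python sketch reproduces a circular crop mask from a lookup table covering only one quadrant. It is a conceptual sketch rather than a description of the actual PM5544 firmware, and the grid dimensions and radius are arbitrary example values.

def quarter_circle_table(radius: int) -> list[int]:
    """For each row of one quadrant, the x coordinate of the circle edge."""
    return [round((radius ** 2 - y ** 2) ** 0.5) for y in range(radius + 1)]

def circle_mask(width: int, height: int, radius: int) -> list[list[bool]]:
    """Build a full crop mask by mirroring the stored quadrant four ways."""
    cx, cy = width // 2, height // 2
    edge = quarter_circle_table(radius)      # stored once, like the ROM table
    mask = [[False] * width for _ in range(height)]
    for dy, x_edge in enumerate(edge):
        for dx in range(x_edge + 1):
            for sx in (-1, 1):               # mirror horizontally...
                for sy in (-1, 1):           # ...and vertically
                    y, x = cy + sy * dy, cx + sx * dx
                    if 0 <= y < height and 0 <= x < width:
                        mask[y][x] = True
    return mask

if __name__ == "__main__":
    m = circle_mask(40, 21, 9)
    print("\n".join("".join("#" if inside else "." for inside in row) for row in m))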
The original PM5544 was not capable of generating a composite video signal by itself. At the time it was introduced three additional pieces of equipment were required:
a PM5554 PAL colour encoder,
a PM5555 PAL subcarrier generator,
a PM5530 sync generator.
Over the years the physical implementation of this supporting equipment was refined, with later PM5544's only requiring two extra pieces of equipment:
a PM5545 colour encoder,
a PM5532 sync generator.
Eventually all the supporting equipment could be replaced by a PM5638 which fitted into a single 1RU unit.
The physical configuration of the PM5544 depends upon its purpose. A common application was in TV factories where it was typically used in its most basic configuration with no optional extras. When used for broadcasting it was usually fitted with the PM5543 text generator which allowed broadcasters to display text in the upper and lower black boxes.
It was available in 4 versions (PAL-G, PAL-M, PAL-N & NTSC) and did not have the option of an in-pattern clock (first introduced in PM5534).
PM5534
In the late-1970s, Philips introduced the PM5534, which replaced the original PM5544. It was fundamentally a very similar design, using a mixture of analogue and digital circuitry to generate the pattern; however, it no longer required an external sync generator and colour encoder, reducing the rack footprint from 6RU/12RU to 3RU.
The PM5534 was also available as a variant with component video outputs (the PM5535) and had two pattern-affecting options: the PM8503 text generator (station ID) and the PM8504 clock generator.
The PM5534 was available in 6 different versions: PAL-G, PAL-I, PAL-M, PAL-N, SECAM and NTSC.
PM5644
Some time during the late-1980s Philips introduced a new design of colour pattern generator bearing the model number PM5644. The PM5644 further improves upon the PM5534 by reducing the overall size to just 1RU.
It also differs from the previous PM5544 and PM5534 in that its pattern-generating circuitry is entirely digital, with patterns stored in EPROM chips, allowing easy customisation. The patterns for the PM5644 (and later generators) were compiled using an array of in-house tools, with the pattern elements defined as vectors in MS-DOS batch files.
Many variations of the PM5644 are known to exist, each with different purposes and capabilities:
4:3 models
The earliest design shares the chassis and the sync module with the PM5631 colour generator and has 576 KB of pattern ROM (can be increased to 4.5 MB for multiple / more complex patterns). The pattern produced by these units is nearly identical to the PM5544. Differences are typically a result of design constraints of the hardware or software generating the test cards.
Variations exist for every video standard: PM5644G/00 (PAL-B/G), PM5644I/00 (PAL-I), PM5644M/00 (NTSC), PM5644L/00 (SECAM), PM5644P/00 (PAL-M), PM5644N/00 (PAL-N). This model is able to replace every type of PM55xx pattern generator.
Indian head model(s)
A variant generating a 625-line version of the "Indian head" pattern is known to exist.
FuBK models
Two variations bearing the model numbers PM5644G/50 (PAL B/G) and PM5644G/70 (YCbCr) were available programmed with the FuBK pattern.
16:9 models
The earliest known PM5644 16:9 hardware is the PM5644G/90 and PM5644G/924 which use the same chassis and PCB as the 4:3 models, however, both are programmed with the well-known 16:9 circle pattern alongside several other simple patterns. They generate an anamorphic signal but do not support PALplus encoding.
The last known design has controls and a display on the front panel and is labelled PM5644 PALplus test pattern generator and bears the model number PM5644/85. No other variations of this hardware are presently known. This design also generates the well known 16:9 colour circle pattern but unlike the previously mentioned G/90 and G/924 models, it is capable of encoding a PALplus signal. It also is capable of generating the 4:3 pattern of the original PM5544.
Custom models
The original PM5644 was accompanied by a service offered by Philips whereby customers could have the pattern customised. The most common type of pattern modification was a simple logo inserted in-place of the top station ID box. These models usually carry a three digit model suffix starting with '9' i.e. PM5644/9xx.
PM5655
In the mid-1990s Philips completed their final VITS generator/inserter, the PM5655. Some versions of it have been found which generate both Philips and FuBK patterns (4:3 and 16:9). It is not presently publicly known if this is standard or special functionality. The appearance of its text differs significantly from that of the PM5644, and experimentation with physical equipment suggests that it was likely responsible for at least one public transmission (from Nederland 3 in the Netherlands), which could be exactly recreated.
PT5210/PT5230/PT5300
Around 1997, with the PM5644 nearing end of life, Philips began work on the final generation of hardware that would generate the circle pattern. The first in this series was the PT5210, which, through the PT8601 analogue pattern generator module, was able to generate a single complex pattern, though only by special order. At least one circle pattern configuration is known.
Around this time Philips exited the TV test equipment business, with the lab that developed these products independently incorporated as ProTeleVision Technologies A/S. All products were immediately rebranded ProTeleVision.
The PT5230 was the first to exclusively carry the new brand and included an enhanced analogue pattern generator option – the PT8631, which was able to generate all Philips (NTSC and PAL) and FuBK patterns in both 4:3 and 16:9 in a single configuration. Like its predecessor (the PM5644), customer specific patterns were offered.
In 2001, shortly after the release of the PT5230, said product line was further divested to DK-Audio A/S (also based in Copenhagen, Denmark at the time). Other pattern generator products not included in the sale, but still under warranty or with active support plans such as the PM5644 and PM5534 were abandoned by ProTeleVision to be fulfilled by Arepa Test & Calibration. ProTeleVision became the current ProTelevision Technologies (lowercase 'V') shortly thereafter signifying a shift exclusively to digital transmission products.
DK-Audio (also known as DK-Technologies) subsequently released the PT5300, which superseded the PT5230, accepting all of its pattern generator modules and included many new options and features developed by DK-Audio. It was the last physical pattern generator directly descending from the original PM5544 to generate the Philips circle pattern.
SECAM was not supported by any models from this series, leaving the PM5644L as the last SECAM variant.
In 2005 the last ever custom Philips circle pattern was compiled by DK-Technologies for Danmarks Radio (DR), the first to transmit it in 1970. In 2018 the PT5300 was discontinued. In 2022 the PT5300 (and PT5210/PT5230) were open sourced by DK-Technologies.
SDI (digital) pattern generation
The aforementioned PT5210, PT5230 and PT5300 all optionally include SDI outputs in addition to analogue ones. While these products are mostly focused on patterns for digital transmission, an optional hardware upgrade was available for the PT5210, PT5230 and PT5300 (PT8603/903 for the PT5210 and PT8633 for the PT5230/PT5300) which provides the traditional 4:3 and 16:9 circle patterns of the PM5644 (in standard definition only) and offers an upgrade path for component versions of the PM5644. Although it was not the intended use case, these modules are known to have been used with exclusively digital transmissions.
In the case of the PT8633, available options include a 5- or 10-step greyscale staircase for both formats and the option to omit the corner circles for the 16:9 pattern, combined with digital-specific features such as a moving bar in the bottom box to test whether the stream is live or frozen. Due to hardware limitations, it was not possible to enable both text in the bottom box and a moving bar, with the sole exception of a workaround developed for SVT. Pulsed audio can also be generated for synchronisation testing.
Known transmissions include 2RN in Ireland, TVB J2 in Hong Kong and ABC Television in Australia. Other digital pattern generators which generate patterns resembling that of the PM5644 are known such as those from Promax.
Non-Philips
Many broadcast Philips circle patterns were generated by non-Philips equipment. Such vendors include Rohde & Schwarz, Tektronix, ShibaSoku and PROMAX Electronics.
Pattern variation gallery
Worldwide usage
PAL broadcasts
Many broadcasters that adopted the 625-line PAL system used some form of the Philips circle pattern.
Africa
In South Africa, the South African Broadcasting Corporation (SABC) made use of the PM5544 pattern from the time it started testing its first television system in 1975, but independent broadcasters M-Net, which launched in 1986, and e.tv, which launched in 1998, opted to use Telefunken FuBK instead.
In Zimbabwe, the Philips circle pattern was used by Zimbabwe Broadcasting Corporation (ZBC) from the start of its regular colour broadcasts in the early-1980s, replacing the Indian-head test pattern.
In Algeria, the Philips circle pattern was used by Établissement public de télévision.
In Western Sahara, a modified version of the Philips circle pattern was used by RASD TV.
Asia and Middle East
The Philips circle pattern was first introduced in Singapore by its national broadcaster Radio Television Singapore (RTS; now Mediacorp), in conjunction with a modified version of Test Card F, upon the start of regular colour broadcasts in 1974. While the Philips circle pattern ceased to be seen on the country's two main television channels (Channel 5 and Channel 8) when they introduced 24/7 schedules in 1995, it continued to be seen on its minority and thematic channels until approximately 2005–06.
The Philips circle pattern was later introduced in Malaysia by its public broadcaster Radio Televisyen Malaysia (RTM) from its introduction of regular colour broadcasts in 1978–80 (replacing its previous monochrome Pye Test Card G) until it switched to a 24/7 schedule in 2012; and was also used by said country's first commercial station TV3 from the launch of its television service in 1984 until it adopted a 24/7 schedule in 2014.
The Philips circle pattern was also used by the Indonesian national TV broadcaster TVRI, replacing its previous Telefunken FuBK test card, from the mid-1980s until it switched to a 24/7 schedule in 2021.
In Thailand, the Philips circle pattern was used by Channel 5 from 1974 (when the station transitioned to colour broadcasting, replacing its previous Indian-head test pattern) until 1988, when the station replaced it with the Telefunken FuBK test pattern. The Philips circle pattern was also used by Channel 7 from 1995 (replacing the Telefunken FuBK pattern it had used since 1982) until it switched to a 24/7 schedule on 11 March 2010, and by MCOT HD (then Channel 9) from 1995 until it switched to a 24/7 schedule in 2002. Channel 3 briefly used the Philips circle pattern from 21 to 24 May 2010 during the 2010 Thai political protests, and a modified version was used by ThaiPBS from 2008 until 2010. The national broadcaster NBT used the pattern from 1996 until it switched to a 24/7 schedule in 2008 (until 2019).
In the People's Republic of China, the Philips circle pattern was used by its national broadcaster CCTV as well as some provincial/regional broadcasters such as Shenzhen Media Group and Television Southern in Guangdong Province, Xizang STV in Tibet Autonomous Region, Yuyao TV in Zhejiang Province and Ningxia Television in Ningxia Hui Autonomous Region. CCTV also later used a heavily modified version of the PM5544 called the GB2097 inspection chart. Nowadays, many modern mainland Chinese test card designs, like in Hong Kong, incorporate elements of the PM5544, PM5644 and Snell & Wilcox test card designs. In Hong Kong, the PM5544 was used by RTV/ATV and TVB from the 1970s (replacing the RMA 1946 Resolution Chart and EIA 1956 resolution chart) until approximately 2007–09. TVB then switched to its own test card designs incorporating elements of the PM5544, PM5644 and Snell & Wilcox SW2 designs in HD Jade and its sister channel TVB J2 respectively.
In Israel, the Philips circle pattern was used by Israel Broadcasting Authority (IBA) and Israeli Educational Television (IETV) from their launch of colour broadcasts in the early-1980s, replacing its previous monochrome Philips PM5540 test card after a nearly decade-long delay in introducing colour television to said country for various sociopolitical reasons.
In Qatar, the Philips circle pattern was used by Qatar TV.
In Kuwait, the Philips circle pattern was used by Kuwait Television, replacing the Indian-head test pattern.
In Jordan, the Philips circle pattern, along with the aforementioned squared variant, was used by Jordan Radio and Television Corporation (JRTV) from the start of its colour transmissions in the mid-1970s, replacing the monochrome Marconi Resolution Chart No. 1.
Saudi Broadcasting Authority (SBA) in Saudi Arabia used a heavily modified version of the Philips circle pattern from 1982 until 2009, with the side "brackets" removed and 1/4 of the top half of the PM5544 "circle" replaced with a white and black background and colour bars. Aramco TV Channel 3, broadcasting to Aramco employees and their dependents residing in the Saudi Aramco Residential Camp in Dhahran, used the standard Philips circle pattern (PM5534) with clock cut-out. These replaced a modified version of the Indian-head test pattern.
Oceania
The Philips circle pattern was also in widespread use in Australia for many years, most notably with the Australian Broadcasting Corporation (ABC) from its launch of colour broadcasts in 1974–75 and Special Broadcasting Service (SBS) from its launch of television services in 1980. Some commercial stations also used it.
In New Zealand, it was used by TVNZ from its launch of colour broadcasting in 1973.
Europe
In Denmark, where the Philips circle pattern was invented, it was used by its national broadcaster Danmarks Radio (DR) from its launch of regular colour broadcasts in 1970, immediately replacing Test Card F and Philips PM5552, and later on the monochrome Pye Test Card G and Philips PM5540; as well as its first nationwide commercial channel TV 2 during its pre-launch tests and its downtime hours and subsequently also on most of the latter's regional and themed channels. DR, TV 2 and TV 2 Film also later used the widescreen Philips circle pattern for widescreen broadcasts from the 1990s. In Greenland, the standard and widescreen Philips circle pattern are used by its public broadcaster Kalaallit Nunaata Radioa (KNR) from its launch of television services in 1982. A modified variant of the widescreen Philips circle pattern is used by the Faroese public broadcaster Kringvarp Føroya (KvF) alongside the EBU colour bars during off-air hours.
In the Netherlands, where Philips is headquartered, the Philips circle pattern began to be used by the Staatsbedrijf der Posterijen, Telegrafie en Telefonie (Dutch PTT agency) for the benefit of the television trade from 1 January 1974, alternating with colour test slides. The operator of all national radio and television broadcasting infrastructure in the Netherlands also used the Philips circle pattern. From 1 March 1975, the Dutch public broadcasting system also started to use the Philips circle pattern on its TV channels, replacing the monochrome RMA 1946 Resolution Chart, the electronic monochrome chequerboard test card generated by a Philips GM 2671/50 video signal generator, the Philips PM5552 early colour test card, and after the late-1980s, the EBU electronic monochrome test pattern and the Philips PM5540 monochrome test card. From the 1980s until the end of all public test card transmissions in the Netherlands in December 2004, the Philips circle pattern (in both standard and widescreen formats) also alternated with Telefunken FuBK during downtime on the Dutch public TV channels. However, Ziggo, the largest cable television provider in the Netherlands, still offers the widescreen Philips circle pattern on channel 997.
The BBC in the United Kingdom occasionally used a slightly modified version of the Philips circle pattern called Test Card G from 1971 until the late-1990s, in conjunction with Test Card F. The Independent Broadcasting Authority (IBA) initially used this card in the 1970s, also in conjunction with Test Card F and EBU colour bars, but eventually abandoned Test Card G and developed a unique test card called the ETP-1, which was brought into use on ITV from 1979 onwards. However, London Weekend Television (LWT) and ITV Channel Television, two constituent franchisee companies in the ITV network structure, continued to use Test Card G well into the 1980s. Test Card G was also used on BFBS/SSVC Television's low-powered terrestrial broadcasts serving British Armed Forces personnel in West Germany and West Berlin in the 1980s and 1990s. A modified version of Test Card G was also briefly used on Sky One alongside the Simplified Telefunken FuBK pattern in the early-1990s.
The Philips circle pattern was also used by Raidió Teilifís Éireann (RTÉ) in the Republic of Ireland (in conjunction with a modified version of the EBU colour bars shown after the Irish national anthem was played at closedown) from the start of its regular colour broadcasts in 1972 until they were replaced by RTÉ Aertel overnight in-vision teletext in mid-1996.
In the DACH countries, the Philips circle pattern was used by the German commercial terrestrial channel RTL and the German public-service channel Phoenix. The Austrian public broadcaster ORF used a slightly modified version of the Philips circle pattern. Use of the Philips circle pattern in the DACH was solely confined to these broadcasters, as most TV stations in these areas instead preferred to use the Telefunken FuBK test card when they adopted colour television.
In Italy, its national broadcaster RAI introduced the Philips circle pattern in 1977 at the same time as it launched its first regular colour broadcasts, replacing heavily modified versions of the Indian-head test pattern. Later on, RAI then used the 1990s widescreen variation for PALplus broadcasts. Telefriuli also used a heavily modified version of the PM5544 in the 1980s.
In Spain, the Philips circle pattern was introduced by the various autonomous and private channels in the early-1980s, notably by TV3, El 33, Telemadrid, Antena 3, EITB and Canal+ Spain, as well as on point-to-point terrestrial and satellite links operated by Retevisión and Telefónica Sistemas de Satélites. Spain's national public broadcaster TVE, however, instead primarily used its own TVE colour test card from 1975 until the mid-2000s, although in the 1990s it also briefly used the widescreen PM5644 circle pattern.
In Iceland, the 4:3 Philips circle pattern was used by its national broadcaster RÚV from its launch of colour broadcasts in 1973–76, only fully replacing its heavily modified monochrome Philips PM5540 test card after 1982. RÚV subsequently replaced its aforementioned 4:3 pattern with a widescreen Philips circle pattern in 2009, then discontinued all their on-air test card broadcasts in 2011. However, the privately owned subscription channels Stöð 2, launched in 1986, and Sjónvarp Símans, launched in 1999, opted not to use the Philips circle pattern.
In the former SFR Yugoslavia, the Philips circle pattern was used by its national broadcaster Yugoslav Radio Television (JRT) in conjunction with the Telefunken FuBK test card. Use of the PM5544 continued for some time afterwards in some of its constituent successor countries.
In Bulgaria, the privately owned nationwide broadcaster bTV introduced the Philips circle pattern in November 2000 with the start of its new program schedules, replacing the EBU colour bars, used from its launch on 1 June 2000. The Philips circle pattern was shown in its test card broadcasts between 12:00 AM and 06:00 AM until 17 February 2001 (the next day, bTV started 24-hour transmissions), and twice a year during transmitter maintenance until 2013.
The Philips circle pattern was also used in Hungary, Belgium, Norway, and Sweden.
South America
In Argentina, the Philips circle pattern was used by América TV, El Nueve and El Trece from the start of their colour transmissions in 1980.
SECAM broadcasts
SECAM users of the Philips circle pattern included TDF (TF1, Antenne 2, FR3, Canal+, La Cinquième and M6) in France, Télé Sahel in Niger, Iraqi TV in Iraq, VTV in Vietnam and SNRT in Morocco. ERT in Greece and TVP in Poland started using the PM5544 for SECAM transmissions in the 1970s and continued using it after switching to PAL in the 1990s.
NTSC broadcasts
NTSC users of the Philips circle pattern included CBFT and CBMT in Quebec, Canada, WBOY-TV and WNYW in the United States, DZBB-TV in the Philippines, Myawaddy TV in Myanmar, KBS and MBC in South Korea, TTV, CTV, CTS and FTV in Taiwan and RTNH in Haiti. The Japanese national broadcaster, NHK, also used a 525-line version of the test card, albeit with slight technical differences as compared to those used by the American and Canadian broadcasters so as to conform with the NTSC-J system.
Usage gallery
See also
Hanover bars
TVE test card
Philips PM5540
Telefunken FuBK
ETP-1
References
External links
Philips TV Measuring Equipment, 1980
Technical information on the PM5544 test card
Jerome Glick : Test Cards & Signals
Test cards
Broadcast engineering
Philips | Philips circle pattern | Engineering | 6,935 |
2,065,886 | https://en.wikipedia.org/wiki/BET%20theory | Brunauer–Emmett–Teller (BET) theory aims to explain the physical adsorption of gas molecules on a solid surface and serves as the basis for an important analysis technique for the measurement of the specific surface area of materials. The observations are very often referred to as physical adsorption or physisorption. In 1938, Stephen Brunauer, Paul Hugh Emmett, and Edward Teller presented their theory in the Journal of the American Chemical Society. BET theory applies to systems of multilayer adsorption that usually utilizes a probing gas (called the adsorbate) that does not react chemically with the adsorptive (the material upon which the gas attaches to) to quantify specific surface area. Nitrogen is the most commonly employed gaseous adsorbate for probing surface(s). For this reason, standard BET analysis is most often conducted at the boiling temperature of N2 (77 K). Other probing adsorbates are also utilized, albeit less often, allowing the measurement of surface area at different temperatures and measurement scales. These include argon, carbon dioxide, and water. Specific surface area is a scale-dependent property, with no single true value of specific surface area definable, and thus quantities of specific surface area determined through BET theory may depend on the adsorbate molecule utilized and its adsorption cross section.
Concept
The concept of the theory is an extension of the Langmuir theory, which is a theory for monolayer molecular adsorption, to multilayer adsorption with the following hypotheses:
gas molecules physically adsorb on a solid in layers infinitely;
gas molecules only interact with adjacent layers; and
the Langmuir theory can be applied to each layer.
the enthalpy of adsorption for the first layer is constant and greater than the second (and higher).
the enthalpy of adsorption for the second (and higher) layers is the same as the enthalpy of liquefaction.
The resulting BET equation is

θ = c·x / [(1 − x)(1 − x + c·x)],   with x = p/p0,

where c is referred to as the BET C-constant, p0 is the vapor pressure of the adsorptive bulk liquid phase which would be at the temperature of the adsorbate, and θ is the surface coverage, defined as:

θ = nA / nm.
Here nA is the amount of adsorbate and nm is called the monolayer equivalent. The nm is the entire amount that would be present as a monolayer (which is theoretically impossible for physical adsorption) and that would cover the surface with exactly one layer of adsorbate. The above equation is usually rearranged to yield the following equation for the ease of analysis:

p / [v·(p0 − p)] = (c − 1)/(vm·c) · (p/p0) + 1/(vm·c)        (1)

where p and p0 are the equilibrium and the saturation pressure of adsorbates at the temperature of adsorption, respectively; v is the adsorbed gas quantity (for example, in volume units) while vm is the monolayer adsorbed gas quantity. c is the BET constant,
c = exp[(E1 − EL)/(R·T)],

where E1 is the heat of adsorption for the first layer, and EL is that for the second and higher layers and is equal to the heat of liquefaction or heat of vaporization.
Equation (1) is an adsorption isotherm and can be plotted as a straight line with p/[v·(p0 − p)] on the y-axis and p/p0 on the x-axis according to experimental results. This plot is called a BET plot. The linear relationship of this equation is maintained only in the range of 0.05 < p/p0 < 0.35. The value of the slope A and the y-intercept I of the line are used to calculate the monolayer adsorbed gas quantity vm and the BET constant c. The following equations can be used:

vm = 1/(A + I)   and   c = 1 + A/I.
The BET method is widely used in materials science for the calculation of surface areas of solids by physical adsorption of gas molecules. The total surface area Stotal and the specific surface area SBET are given by

Stotal = (vm · N · s) / V   and   SBET = Stotal / a,

where vm is in units of volume which are also the units of the monolayer volume of the adsorbate gas, N is the Avogadro number, s the adsorption cross section of the adsorbing species, V the molar volume of the adsorbate gas, and a the mass of the solid sample or adsorbent.
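As a worked illustration of how these formulas are applied, the following Python sketch fits the BET plot over the conventional 0.05–0.35 relative pressure range and converts the monolayer quantity to a specific surface area for nitrogen at 77 K. The fit_line helper and any input data are illustrative; the cross section and molar volume are the conventionally assumed values, not results from the source.

N_A = 6.022e23            # Avogadro's number, molecules per mole
SIGMA_N2 = 0.162e-18      # adsorption cross section of N2, m^2 per molecule
V_MOLAR = 22414.0         # molar volume of an ideal gas at STP, cm^3 per mole

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def bet_area(rel_pressures, volumes_stp_cm3, sample_mass_g):
    """Specific surface area (m^2/g) and BET constant c from an isotherm."""
    pts = [(x, v) for x, v in zip(rel_pressures, volumes_stp_cm3)
           if 0.05 <= x <= 0.35]                        # conventional BET range
    xs = [x for x, _ in pts]
    ys = [1.0 / (v * (1.0 / x - 1.0)) for x, v in pts]  # 1 / [v (p0/p - 1)]
    slope, intercept = fit_line(xs, ys)
    v_m = 1.0 / (slope + intercept)                     # monolayer quantity
    c = 1.0 + slope / intercept                         # BET constant
    s_total = v_m * N_A * SIGMA_N2 / V_MOLAR            # total area, m^2
    return s_total / sample_mass_g, c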
Derivation
The BET theory can be derived similarly to the Langmuir theory, but by considering multilayered gas molecule adsorption, where it is not required for a layer to be completed before an upper layer formation starts. Furthermore, the authors made five assumptions:
Adsorptions occur only on well-defined sites of the sample surface (one per molecule)
The only molecular interaction considered is the following one: a molecule can act as a single adsorption site for a molecule of the upper layer.
The uppermost molecule layer is in equilibrium with the gas phase, i.e. similar molecule adsorption and desorption rates.
The desorption is a kinetically limited process, i.e. a heat of adsorption must be provided:
these phenomena are homogeneous, i.e. same heat of adsorption for a given molecule layer.
it is E1 for the first layer, i.e. the heat of adsorption at the solid sample surface
the other layers are assumed similar and can be represented as condensed species, i.e. liquid state. Hence, the heat of adsorption EL is equal to the heat of liquefaction.
At the saturation pressure, the molecule layer number tends to infinity (i.e. equivalent to the sample being surrounded by a liquid phase)
Consider a given amount of solid sample in a controlled atmosphere. Let θi be the fractional coverage of the sample surface covered by a number i of successive molecule layers. Let us assume that the adsorption rate Rads,i-1 for molecules on a layer (i-1) (i.e. formation of a layer i) is proportional to both its fractional surface θi-1 and to the pressure P, and that the desorption rate Rdes,i on a layer i is also proportional to its fractional surface θi:

Rads,i−1 = ki · P · θi−1

Rdes,i = k−i · θi
where ki and k−i are the kinetic constants (depending on the temperature) for the adsorption on the layer (i−1) and desorption on layer i, respectively. For the adsorptions, these constants are assumed similar whatever the surface.
Assuming an Arrhenius law for desorption, the related constants scale as

k−i ∝ exp(−Ei / (R·T)),
where Ei is the heat of adsorption, equal to E1 at the sample surface and to EL otherwise.
Consider some substance A. The adsorption of A onto an available surface site produces a new site on the first layer. In summary,

A(g) + S0 <=> S1,

where S0 denotes an empty surface site and S1 a site carrying one adsorbed molecule. Extending this to higher order layers one obtains

A(g) + S1 <=> S2,

and similarly

A(g) + Si−1 <=> Si.

Denoting the activity of the available sites of the i-th layer with θi and the partial pressure of A with pA, the last equilibrium can be written

Ki = θi / (θi−1 · pA).

It follows that the coverage of the first layer can be written

θ1 = K1 · pA · θ0,

and that the coverage of the second layer can be written

θ2 = K2 · pA · θ1.

Realising that the adsorption of A onto the second layer is equivalent to adsorption of A onto its own liquid phase, the equilibrium constant for every layer above the first should be the same, Ki = KL for i ≥ 2, which results in the recursion

θi = KL · pA · θi−1   for i ≥ 2.

In order to simplify some infinite summations, let x = KL·pA and let c = K1/KL. Then the i-th layer coverage can be written

θi = c · x^i · θ0

if i ≥ 1. The coverage of any layer is defined as the relative number of available sites. An alternative definition, which leads to a set of coverages that are numerically equivalent to those resulting from the original way of defining surface coverage, is that θi denotes the relative number of sites covered by exactly i adsorbents. Doing so, it is easy to see that the total volume of adsorbed molecules can be written as the sum

V = v0 · Σ(i ≥ 1) i·θi,

where v0 is the molecular volume. Employing the fact that this sum is the first derivative of a geometric sum, the volume becomes

V = v0 · c · θ0 · x / (1 − x)².

Since the total coverage of a mono-layer must be unity, the full mono-layer coverage must be

Vmono = v0 · Σ(i ≥ 0) θi.

In order to properly make the substitution for the geometric sum, the restriction i ≥ 1 forces us to take the zeroth contribution outside the summation, resulting in

Vmono = v0 · θ0 · [1 + c·x/(1 − x)].

Lastly, defining the excess coverage θ as the adsorbed volume relative to the volume of an adsorbed mono-layer, the result becomes

θ = V/Vmono = c·x / [(1 − x)(1 − x + c·x)],

where the last equality was obtained by making use of the series expansions presented above. The constant c must be interpreted as the relative binding affinity the substance A has towards the surface, relative to its own liquid. If c ≫ 1 then the initial part of the isotherm will be reminiscent of the Langmuir isotherm, which reaches a plateau at full mono-layer coverage, whereas c ≈ 1 means the mono-layer will have a slow build-up. Another thing to note is that in order for the geometric substitutions to hold, x < 1 is required. The isotherm above exhibits a singularity at x = 1. Since the number of adsorbed layers tends to infinity at the saturation pressure p0 (the fifth assumption above), one can write x = 1 at p = p0, implying that KL = 1/p0. This means that x = p/p0 must hold, ultimately resulting in the BET equation quoted above,

θ = c·(p/p0) / [(1 − p/p0)(1 − p/p0 + c·p/p0)].
Finding the linear BET range
It is still not clear how to find the linear range of the BET plot for microporous materials in a way that reduces any subjectivity in the assessment of the monolayer capacity. A crowd-sourced study involving 61 research groups has shown that reproducibility of BET area determination from identical isotherms is, in some cases, problematic. Rouquerol et al. suggested a procedure that is based on two criteria:
C must be positive implying that any negative intercept on the BET plot indicates that one is outside the valid range of the BET equation.
Application of the BET equation must be limited to the range where the term V(1-P/P0) continuously increases with P/P0.
These corrections are an attempt to salvage the BET theory, which is restricted to type II isotherms. Even when using this type, use of the data itself is restricted to a relative pressure (P/P0) range of 0.05 to 0.35, routinely discarding 70% of the data. This restriction must be modified depending upon conditions.
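One simple reading of the second Rouquerol criterion can be sketched in code: grow the fitting range from the lowest relative pressures and stop at the first point where V(1 − P/P0) no longer increases. The point-selection strategy below is an illustrative assumption, not a standardised algorithm, and the first criterion (a positive intercept, hence positive C) would still have to be checked on the resulting fit.

def rouquerol_range(rel_pressures, volumes):
    """Indices of the low-pressure points over which V*(1 - p/p0) still rises."""
    kept, last = [], float("-inf")
    for i, (x, v) in enumerate(zip(rel_pressures, volumes)):
        q = v * (1.0 - x)
        if q <= last:          # second criterion violated: stop extending range
            break
        last = q
        kept.append(i)
    return kept

The BET fit would then be repeated on the retained points and discarded if the resulting intercept is not positive.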
Limitations of BET
Terrell L. Hill described BET as a theory that is "... extremely useful as a qualitative guide; but it is not quantitatively correct". Nevertheless, the BET adsorption isotherm is still extensively used for different applications and is used for specific surface area determinations of powders whose calculation is not sensitive to the simplifications of the BET theory. Both Hackerman's and Sing's groups have highlighted the limitations of the BET method. Hackerman et al. noted the potential for 10% uncertainty in the method's values, while Sing's group attributed the significant variation in reported values of molecular area to the BET method's possibly inaccurate assessment of monolayer capacity. In subsequent studies using the BET interpretation of nitrogen and water vapor adsorption isotherms, the reported area occupied by an adsorbed water molecule on fully hydroxylated silica ranged from 0.25 to 0.44 nm². Other issues with the BET method include the fact that in certain cases it leads to anomalies, such as the amount adsorbed becoming infinite as the relative pressure approaches unity, and in some cases the constant C (surface binding energy) can be determined to be negative.
Applications
Cement and concrete
The rate of curing of concrete depends on the fineness of the cement and of the components used in its manufacture, which may include fly ash, silica fume and other materials, in addition to the calcinated limestone which causes it to harden. Although the Blaine air permeability method is often preferred, due to its simplicity and low cost, the nitrogen BET method is also used.
When hydrated cement hardens, the calcium silicate hydrate (or C-S-H), which is responsible for the hardening reaction, has a large specific surface area because of its high porosity. This porosity is related to a number of important properties of the material, including the strength and permeability, which in turn affect the properties of the resulting concrete. Measurement of the specific surface area using the BET method is useful for comparing different cements. This may be performed using adsorption isotherms measured in different ways, including the adsorption of water vapour at temperatures near ambient, and adsorption of nitrogen at 77 K (the boiling point of liquid nitrogen). Different methods of measuring cement paste surface areas often give very different values, but for a single method the results are still useful for comparing different cements.
Activated carbon
Activated carbon has strong affinity for many gases and has an adsorption cross section of 0.162 nm2 for nitrogen adsorption at liquid-nitrogen temperature (77 K). BET theory can be applied to estimate the specific surface area of activated carbon from experimental data, demonstrating a large specific surface area, even around 3000 m2/g. However, this surface area is largely overestimated due to enhanced adsorption in micropores, and more realistic methods should be used for its estimation, such as the subtracting pore effect (SPE) method.
Catalysis
In the field of solid catalysis, the surface area of catalysts is an important factor in catalytic activity. Inorganic materials such as mesoporous silica and layered clay minerals have high surface areas of several hundred m2/g calculated by the BET method, indicating the possibility of application for efficient catalytic materials.
Specific surface area calculation
The ISO 9277 standard for calculating the specific surface area of solids is based on the BET method. The method has also been adapted for determination of specific area of ceramics and non-ferrous metal powders.
Thermal desorption
In 2023, researchers in the United States developed a method to determine BET surface areas using a thermogravimetric analyzer (TGA). This method uses a TGA to heat a porous sample loaded with an adsorbate, the produced plot of sample mass vs. temperature is then mapped into a standard isotherm to which BET theory is applied as normal. Common fluids, e.g. water or toluene, can be used as adsorbates for the TGA method allowing the specific interactions of different adsorbates to be determined, as these frequently differ from the commonly used nitrogen.
See also
Adsorption
Capillary condensation
Langmuir adsorption model
Mercury intrusion porosimetry
Physisorption
Surface tension
References
Scientific techniques
Physical chemistry
Gas technologies | BET theory | Physics,Chemistry | 2,921 |
8,656,285 | https://en.wikipedia.org/wiki/Night%20hunting | "Night hunting", known in Bhutan as Bomena, is a traditional courtship custom that is practiced in some parts of Bhutan.
Similar customs have also existed in other cultures, namely in Japan.
Practice
"Night hunting" is a traditional culture of nightly courtship and romance that is practiced mostly in eastern and central rural Bhutan. There is neither the word "night" nor the word "hunting" in the original terms. The original words can be best rendered as "prowling for girls".
Young men go out at night to sneak into girls' windows to engage in sexual activities. The prowling can be solo or in groups depending on whether or not the man has a fixed date. It is the rural equivalent of an urban date. If one has talked with the girl in advance then it can be a solo activity but usually it happens after a gathering when friends decide to go prowling for girls. Most boys would have a girl in mind. Although they set out as a group, they disperse gradually as they find a partner.
Traditional two-story buildings make the prowling difficult, but the sliding window shutters with only wooden latches on the inside make it easier. Strategies vary from sneaking in the door to climbing up the side of a house to enter a window or even dropping in from the roof. The uniform architecture of Bhutanese houses, with the same design of doors and windows, also makes it easier. The age-old tradition has also come up with special tools to undo doors and windows. If the boy successfully infiltrates the dwelling, he may still be rejected by the girl he is pursuing.
The prowling may be foiled due to wrong footing, which may wake up the whole family. The intruder may get chased away with hot water splashed on him, or be thrown out of the window. Strict parents chase the intruder or threaten him with marriage or a stick while liberal ones pretend to be asleep even if they know the prowler is around. This is more likely if they know the prowler is a suitor they would like to have for their daughter. It is not difficult to guess who the prowler might be in small close-knit villages.
Boys generally attempt to complete the task and make a quick exit if the parents of the girl are in and may stay longer if the girl is alone. It is in some places a custom that a boy discovered in the morning by the parents shall become the husband of the girl, but usually the boy and the girl make sure that the boy exits before the parents get up in the morning. If he oversleeps, they may still find a way to sneak out.
The practice is all the more dramatic because it happens in pitch darkness, and traditionally the whole family sleeps in one large room, which serves as both kitchen and living room. The prowler must know fairly well where the girl sleeps in order to find the right bed. There are stories of boys getting into the wrong bed and of grandmothers yelling at the boy to leave, having a good laugh, or even quietly enjoying the visit.
The culture of night prowling is fading away due to socio-economic changes. With new metal latches and locks in many houses, it is now difficult for young boys to get into the house. With modern education, a modern Western form of romance and dating is growing, and young people are no longer keen on this traditional practice, preferring to exchange love letters and fix dates.
Issues
One potential issue is the abuse of this cultural practice leading to sexual assault and rape. Perhaps a more common downside of night prowling has been rampant bastardy. Bastardy and single motherhood were less of a problem in the traditional setting with extended families and grandparents always around to look after the child.
However, the growing culture of nuclear families, the requirement for marriage certificates, requirement of a father to register the child as citizen, the increasing practice of western styled wedding culture are leading to an increased stigma for single motherhood. This subsequently is leading to the fall in sex outside wedlock and practices such as prowling for girls.
Modern education and the literature associated with it are spreading fast and with them a worldview and culture heavily influenced by a Western, Christian moral ethos. This is fast replacing a more liberal Buddhist attitude toward sex which was prevalent in Bhutan.
In literature
The book "Love, Courtship and Marriage in Rural Bhutan" by Kyle Bauer (the Centre for Bhutan Studies) discusses night hunting. According to the author, Bomena, a "custom whereby a boy stealthily enters a girl's house at night for courtship or coitus with or without prior consultation", is commonly misunderstood in Bhutan as "night hunting". The use of the vernacular word Bomena, rather than "night hunting" (a term loaded with ethnocentrism and ignorance of the custom), says a lot about this original village ethnography.
The current discourse and understanding of Bomena, according to the author, are naïve, biased and misrepresented, heavily influenced by changing values especially among the urban societies. One common notion is that any rural culture is ‘inferior’ and all urban cultures are ‘superior’, and replacing the rural culture with urban culture is seen as a way of emancipating the Bhutanese farmers from their ‘primitive’ culture and advancing the country.
See also
Yobai, Japan
References
Gender in Bhutan
Human sexuality
Night in culture
Sexuality in Bhutan
Sleep
Society of Bhutan
Social history of Bhutan
Women in Bhutan | Night hunting | Biology | 1,109 |
72,023,462 | https://en.wikipedia.org/wiki/Observability%20%28software%29 | In software engineering, more specifically in distributed computing, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components. To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage.
One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue.
Etymology, terminology and definition
The term is borrowed from control theory, where the "observability" of a system measures how well its state can be determined from its outputs. Similarly, software observability measures how well a system's state can be understood from the obtained telemetry (metrics, logs, traces, profiling).
The definition of observability varies by vendor.
The term is frequently abbreviated to its numeronym o11y (where 11 stands for the number of letters between the first letter and the last letter of the word). This is similar to other computer science abbreviations such as i18n, l10n and k8s.
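The construction of such numeronyms is mechanical, as the following toy Python sketch illustrates (the function name is arbitrary):

def numeronym(word: str) -> str:
    """First letter, count of the letters in between, last letter."""
    if len(word) <= 3:
        return word
    return f"{word[0]}{len(word) - 2}{word[-1]}"

assert numeronym("observability") == "o11y"
assert numeronym("internationalization") == "i18n"
assert numeronym("localization") == "l10n"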
Observability vs. monitoring
Observability and monitoring are sometimes used interchangeably. As tooling, commercial offerings and practices evolved in complexity, "monitoring" was re-branded as observability in order to differentiate new tools from the old.
The terms are commonly contrasted in that systems are monitored using predefined sets of telemetry, and monitored systems may be observable.
Majors et al. suggest that engineering teams that only have monitoring tools end up relying on expert foreknowledge (seniority), whereas teams that have observability tools rely on exploratory analysis (curiosity).
Telemetry types
Observability relies on three main types of telemetry data: metrics, logs and traces. Those are often referred to as "pillars of observability".
Metrics
A metric is a point in time measurement (scalar) that represents some system state. Examples of common metrics include:
number of HTTP requests per second;
total number of query failures;
database size in bytes;
time in seconds since last garbage collection.
Monitoring tools are typically configured to emit alerts when certain metric values exceed set thresholds. Thresholds are set based on knowledge about normal operating conditions and experience.
Metrics are typically tagged to facilitate grouping and searchability.
Application developers choose what kind of metrics to instrument their software with, before it is released. As a result, when a previously unknown issue is encountered, it is impossible to add new metrics without shipping new code. Furthermore, high metric cardinality can quickly make storing telemetry data prohibitively expensive. Since metrics are cardinality-limited, they are often used to represent aggregate values (for example: average page load time, or 5-second average of the request rate). Without external context, it is impossible to correlate between events (such as user requests) and distinct metric values.
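As a hedged illustration of metric instrumentation, the following Python sketch uses the prometheus_client library, one common choice that the text does not specifically prescribe; the metric names, labels and port are illustrative.

import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total",
                   "Total HTTP requests", ["method", "status"])
LATENCY = Histogram("http_request_duration_seconds",
                    "HTTP request latency in seconds")

def handle_request(method: str) -> str:
    with LATENCY.time():                      # records the duration of the block
        time.sleep(0.01)                      # stand-in for real work
        REQUESTS.labels(method=method, status="200").inc()
        return "ok"

if __name__ == "__main__":
    start_http_server(8000)                   # exposes /metrics for scraping
    for _ in range(100):
        handle_request("GET")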
Logs
Logs, or log lines, are generally free-form, unstructured text blobs that are intended to be human readable. Modern logging is structured to enable machine parsability. As with metrics, an application developer must instrument the application upfront and ship new code if different logging information is required.
Logs typically include a timestamp and severity level. An event (such as a user request) may be fragmented across multiple log lines and interweave with logs from concurrent events.
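A minimal sketch of structured logging with only the Python standard library is shown below; emitting one JSON object per line keeps an event's fields together even when log lines from concurrent requests interleave. The logger and field names are illustrative.

import json, logging, sys, time

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            **getattr(record, "fields", {}),   # structured fields, if supplied
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed", extra={"fields": {"order_id": 1234, "total": 99.5}})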
Traces
Distributed traces
A cloud native application is typically made up of distributed services which together fulfill a single request. A distributed trace is an interrelated series of discrete events (also called spans) that track the progression of a single user request. A trace shows the causal and temporal relationships between the services that interoperate to fulfill a request.
Instrumenting an application with traces means sending span information to a tracing backend. The tracing backend correlates the received spans to generate presentable traces. To be able to follow a request as it traverses multiple services, spans are labeled with unique identifiers that enable constructing a parent-child relationship between spans. Span information is typically shared in the HTTP headers of outbound requests.
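The header propagation described above can be sketched using the W3C Trace Context traceparent format (version-traceid-spanid-flags). Real services would normally rely on a tracing SDK rather than hand-rolling headers, so the following Python snippet is only a conceptual illustration.

import secrets

def new_traceparent() -> str:
    """Start a new trace: fresh 128-bit trace id and 64-bit span id."""
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"

def child_traceparent(parent: str) -> str:
    """Keep the trace id, mint a new span id for the outbound request."""
    version, trace_id, _parent_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

incoming = new_traceparent()            # e.g. received on an inbound HTTP request
outgoing = child_traceparent(incoming)  # forwarded to the next service downstream
print(incoming, outgoing, sep="\n")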
Continuous profiling
Continuous profiling is another telemetry type used to precisely determine how an application consumes resources.
Instrumentation
To be able to observe an application, telemetry about the application's behavior needs to be collected or exported. Instrumentation means generating telemetry alongside the normal operation of the application. Telemetry is then collected by an independent backend for later analysis.
Instrumentation can be automatic, or custom. Automatic instrumentation offers blanket coverage and immediate value; custom instrumentation brings higher value but requires more intimate involvement with the instrumented application.
Instrumentation can be native - done in-code (modifying the code of the instrumented application) - or out-of-code (e.g. sidecar, eBPF).
Verifying new features in production by shipping them together with custom instrumentation is a practice called "observability-driven development".
"Pillars of observability"
Metrics, logs and traces are most commonly listed as the pillars of observability. Majors et al. suggest that the pillars of observability are high cardinality, high-dimensionality, and explorability, arguing that runbooks and dashboards have little value because "modern systems rarely fail in precisely the same way twice."
Self monitoring
Self monitoring is a practice where observability stacks monitor each other, in order to reduce the risk of inconspicuous outages. Self monitoring may be put in place in addition to high availability and redundancy to further avoid correlated failures.
See also
Application performance management (APM)
OpenTelemetry (OTel)
Real user monitoring (RUM)
Synthetic monitoring
DevOps
Site reliability engineering (SRE)
Sociotechnical system
External links
CNCF Observability Technical Advisory Group (TAG)
Bibliography
References
Distributed computing | Observability (software) | Technology,Engineering | 1,276 |
9,528,907 | https://en.wikipedia.org/wiki/Nucleic%20acid%20quantitation | In molecular biology, quantitation of nucleic acids is commonly performed to determine the average concentrations of DNA or RNA present in a mixture, as well as their purity. Reactions that use nucleic acids often require particular amounts and purity for optimum performance. To date, there are two main approaches used by scientists to quantitate, or establish the concentration, of nucleic acids (such as DNA or RNA) in a solution. These are spectrophotometric quantification and UV fluorescence tagging in presence of a DNA dye.
Spectrophotometric analysis
One of the most commonly used practices to quantitate DNA or RNA is the use of spectrophotometric analysis using a spectrophotometer. A spectrophotometer is able to determine the average concentrations of the nucleic acids DNA or RNA present in a mixture, as well as their purity.
Spectrophotometric analysis is based on the principles that nucleic acids absorb ultraviolet light in a specific pattern. In the case of DNA and RNA, a sample is exposed to ultraviolet light at a wavelength of 260 nanometres (nm) and a photo-detector measures the light that passes through the sample. Some of the ultraviolet light will pass through and some will be absorbed by the DNA / RNA. The more light absorbed by the sample, the higher the nucleic acid concentration in the sample. The resulting effect is that less light will strike the photodetector and this will produce a higher optical density (OD)
Using the Beer–Lambert law it is possible to relate the amount of light absorbed to the concentration of the absorbing molecule. At a wavelength of 260 nm, the average extinction coefficient for double-stranded DNA is 0.020 (μg/mL)−1 cm−1, for single-stranded DNA it is 0.027 (μg/mL)−1 cm−1, for single-stranded RNA it is 0.025 (μg/mL)−1 cm−1 and for short single-stranded oligonucleotides it is dependent on the length and base composition. Thus, an Absorbance (A) of 1 corresponds to a concentration of 50 μg/mL for double-stranded DNA. This method of calculation is valid for up to an A of at least 2. A more accurate extinction coefficient may be needed for oligonucleotides; these can be predicted using the nearest-neighbor model.
Calculations
The optical density is calculated from the equation:

Optical density = log (intensity of incident light / intensity of transmitted light)

In practical terms, a sample that contains no DNA or RNA should not absorb any of the ultraviolet light and therefore produces an OD of 0:

Optical density = log (100/100) = 0
When using spectrophotometric analysis to determine the concentration of DNA or RNA, the Beer–Lambert law is used to determine unknown concentrations without the need for standard curves. In essence, the Beer Lambert Law makes it possible to relate the amount of light absorbed to the concentration of the absorbing molecule. The following absorbance units to nucleic acid concentration conversion factors are used to convert OD to concentration of unknown nucleic acid samples:
A260 dsDNA = 50 μg/mL
A260 ssDNA = 33 μg/mL
A260 ssRNA = 40 μg/mL
Conversion factors
When using a 10 mm path length, simply multiply the OD by the conversion factor to determine the concentration. For example, a 2.0 OD dsDNA sample corresponds to a concentration of 100 μg/mL.
When using a path length that is shorter than 10mm, the resultant OD will be reduced by a factor of 10/path length. Using the example above with a 3 mm path length, the OD for the 100 μg/mL sample would be reduced to 0.6. To normalize the concentration to a 10mm equivalent, the following is done:
0.6 OD × (10/3) × 50 μg/mL = 100 μg/mL
Most spectrophotometers allow selection of the nucleic acid type and path length such that resultant concentration is normalized to the 10 mm path length which is based on the principles of Beer's law.
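The calculations above can be captured in a short Python sketch; the conversion factors are the standard A260 values quoted in the text, while the function name and example figures are illustrative.

CONVERSION_UG_PER_ML = {"dsDNA": 50.0, "ssDNA": 33.0, "ssRNA": 40.0}

def concentration_ug_per_ml(a260: float, nucleic_acid: str,
                            path_length_mm: float = 10.0) -> float:
    """Concentration from an A260 reading, normalised to a 10 mm path."""
    normalised_od = a260 * (10.0 / path_length_mm)
    return normalised_od * CONVERSION_UG_PER_ML[nucleic_acid]

# 0.6 OD of double-stranded DNA measured through a 3 mm cell:
print(concentration_ug_per_ml(0.6, "dsDNA", path_length_mm=3.0))  # 100.0 µg/mL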
A260 as quantity measurement
The "A260 unit" is used as a quantity measure for nucleic acids. One A260 unit is the amount of nucleic acid contained in 1 mL and producing an OD of 1. The same conversion factors apply, and therefore, in such contexts:
1 A260 unit dsDNA = 50 μg
1 A260 unit ssDNA = 33 μg
1 A260 unit ssRNA = 40 μg
Sample purity (260:280 / 260:230 ratios)
It is common for nucleic acid samples to be contaminated with other molecules (i.e. proteins, organic compounds, other). The secondary benefit of using spectrophotometric analysis for nucleic acid quantitation is the ability to determine sample purity using the 260 nm:280 nm calculation. The ratio of the absorbance at 260 and 280 nm (A260/280) is used to assess the purity of nucleic acids. For pure DNA, A260/280 is widely considered ~1.8 but has been argued to translate - due to numeric errors in the original Warburg paper - into a mix of 60% protein and 40% DNA. The ratio for pure RNA A260/280 is ~2.0. These ratios are commonly used to assess the amount of protein contamination that is left from the nucleic acid isolation process since proteins absorb at 280 nm.
The ratio of absorbance at 260 nm vs 280 nm is commonly used to assess DNA contamination of protein solutions, since proteins (in particular, the aromatic amino acids) absorb light at 280 nm. The reverse, however, is not true — it takes a relatively large amount of protein contamination to significantly affect the 260:280 ratio in a nucleic acid solution.
The 260:280 ratio has high sensitivity for nucleic acid contamination in protein.
The 260:280 ratio, however, lacks sensitivity for protein contamination in nucleic acids (for 100% DNA the ratio is approximately 1.8).
This difference is due to the much higher mass attenuation coefficient nucleic acids have at 260 nm and 280 nm, compared to that of proteins. Because of this, even for relatively high concentrations of protein, the protein contributes relatively little to the 260 and 280 absorbance. While the protein contamination cannot be reliably assessed with a 260:280 ratio, this also means that it contributes little error to DNA quantity estimation.
Contamination identification
Examination of sample spectra may be useful in identifying that a problem with sample purity exists.
Other common contaminants
Contamination by phenol, which is commonly used in nucleic acid purification, can significantly throw off quantification estimates. Phenol absorbs with a peak at 270 nm and an A260/280 of 1.2. Nucleic acid preparations uncontaminated by phenol should have an A260/280 of around 2. Contamination by phenol can significantly contribute to overestimation of DNA concentration.
Absorption at 230 nm can be caused by contamination by phenolate ion, thiocyanates, and other organic compounds. For a pure RNA sample, the A230:260:280 should be around 1:2:1, and for a pure DNA sample, the A230:260:280 should be around 1:1.8:1.
Absorption at 330 nm and higher indicates particulates contaminating the solution, causing scattering of light in the visible range. The value in a pure nucleic acid sample should be zero.
Negative values could result if an incorrect solution was used as blank. Alternatively, these values could arise due to fluorescence of a dye in the solution.
Analysis with fluorescent dye tagging
An alternative method to assess DNA and RNA concentration is to tag the sample with a fluorescent dye that binds to nucleic acids and fluoresces selectively when bound (e.g. ethidium bromide), and then measure the intensity of the bound dye. This method is useful for cases where the concentration is too low to assess accurately with spectrophotometry, and in cases where contaminants absorbing at 260 nm make accurate quantitation by that method impossible. The benefit of fluorescence quantitation of DNA and RNA is the improved sensitivity over spectrophotometric analysis. However, that increase in sensitivity comes at the cost of a higher price per sample and a lengthier sample-preparation process.
There are two main ways to approach this. "Spotting" involves placing a sample directly onto an agarose gel or plastic wrap. The fluorescent dye is either present in the agarose gel or added in appropriate concentrations to the samples on the plastic film. A set of samples with known concentrations is spotted alongside the sample. The concentration of the unknown sample is then estimated by comparison with the fluorescence of these known concentrations. Alternatively, one may run the sample through an agarose or polyacrylamide gel, alongside some samples of known concentration. As with the spot test, concentration is estimated through comparison of fluorescence intensity with the known samples.
If the sample volumes are large enough to use microplates or cuvettes, the dye-loaded samples can also be quantified with a fluorescence photometer. Minimum sample volume starts at 0.3 μL.
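For the comparison against known concentrations described above, the estimate is usually read off a standard curve. The following Python sketch, with entirely made-up fluorescence values, fits a simple linear standard curve by least squares and inverts it to estimate an unknown sample's concentration; real assays may rely on instrument software or dedicated kits instead.

```python
# Illustrative sketch: estimate an unknown DNA concentration from dye
# fluorescence by fitting a linear standard curve to known standards.
# All values below are invented for the example.

# Known standards: (concentration in ng/μL, measured fluorescence, arbitrary units)
standards = [(0.0, 5.0), (2.5, 130.0), (5.0, 255.0), (10.0, 505.0)]

# Ordinary least-squares fit of: fluorescence = slope * concentration + intercept
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(f for _, f in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * f for c, f in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def estimate_concentration(fluorescence: float) -> float:
    """Invert the fitted standard curve to estimate concentration (ng/μL)."""
    return (fluorescence - intercept) / slope

print(estimate_concentration(380.0))  # ~7.5 ng/μL with these invented standards
```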
To date there is no fluorescence method to determine protein contamination of a DNA sample that is similar to the 260 nm/280 nm spectrophotometric version.
See also
Nucleic acid methods
Phenol–chloroform extraction
Column purification
Protein methods
References
External links
IDT online tool for predicting nucleotide UV absorption spectrum
Ambion guide to RNA quantitation
Hillary Luebbehusen, The significance of 260/230 Ratio in Determining Nucleic Acid Purity (pdf document)
double stranded, single stranded DNA and RNA quantification by 260nm absorption, Sauer lab at OpenWetWare
Absorbance to Concentration Web App @ DNA.UTAH.EDU
Nucleic Acid Quantification Accuracy and Reproducibility
Spectroscopy
Biochemistry methods
Nucleic acids | Nucleic acid quantitation | Physics,Chemistry,Biology | 2,092 |
1,496,209 | https://en.wikipedia.org/wiki/Multistorey%20car%20park | A multistorey car park (Commonwealth English) or parking garage (American English), also called a multistorey, parking building, parking structure, parkade (Canadian), parking ramp, parking deck, or indoor parking, is a building designed for car, motorcycle, and bicycle parking in which parking takes place on more than one floor or level. The first known multistorey facility was built in London in 1901 and the first underground parking was built in Barcelona in 1904 (see history). The term multistorey (or multistory) is almost never used in the United States, because almost all parking structures have multiple parking levels. Parking structures may be heated if they are enclosed.
Design of parking structures can add considerable cost for planning new developments, with costs in the United States around $28,000 per space and $56,000 per space for underground (excluding the cost of land), and can be required by cities in parking mandates for new buildings. Some cities such as London have abolished previously enacted minimum parking requirements. Minimum parking requirements are a hallmark of zoning and planning codes for municipalities in the US. (States do not prescribe parking requirements, while counties and cities can).
History
The earliest known multi-storey car park was opened in May 1901 by City & Suburban Electric Carriage Company at 6 Denman Street, central London. The location had space for 100 vehicles over seven floors, totaling 19,000 square feet. The same company opened a second location in 1902 for 230 vehicles. The company specialized in the sale, storage, valeting, and on-demand delivery of electric vehicles that could travel about 40 miles and had a top speed of 20 miles per hour.
The earliest known parking garage in the United States was built in 1918 for the Hotel La Salle at 215 West Washington Street in the West Loop area of downtown Chicago, Illinois. It was designed by Holabird and Roche. The Hotel La Salle was demolished in 1976, but the parking structure remained, because it had been granted preliminary landmark status and because the structure was several blocks from the hotel. It was demolished in 2005 after failing to receive landmark status from the city of Chicago. A 49-storey apartment tower, 215 West, has taken its place, also featuring a parking garage. When the Capital Garage in Washington, D.C. was built in 1927, it was reportedly the largest parking structure of its kind in the country. It was imploded in 1974.
Design
The movement of vehicles between floors can take place by means of:
interior inclined parking ramps and express ramps without parking – common
interior circular or helical express ramps
exterior ramps – which may take the form of a circular or helical ramp
vehicle lifts (or elevators) – the least common
automated robot systems – combination of ramp and elevator
Where the car park is built on sloping land, it may be split-level or have sloped parking.
Many parking structures are independent buildings dedicated exclusively to that use. The design loads for car parks are often less than those of the office buildings they serve (50 psf versus 80 or 100 psf), permitting long floor spans of 55–65 feet in which cars can park in rows without supporting columns in between (called long-span construction). Podium parking below high-rise and mid-rise buildings is often short-span, with 25–30 feet clear between columns, since the office, residential, or retail floors above require more support (100 psf per the International Building Code). Columns in short-span structures obstruct row-based parking spaces, making them less efficient than long-span designs; parking efficiency is measured in cars per unit of level floor area (car count divided by level area). Common structural systems in the United States for long-span structures are prestressed concrete double-tee floor systems and post-tensioned cast-in-place concrete floor systems; short-span podium parking typically uses post-tensioned slabs with drop panels (drop heads). Steel embeds or thicker slabs can eliminate the need for drop panels, providing higher clearances for higher-profile vehicles.
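As a rough, hypothetical illustration of the efficiency metric mentioned above (car count divided by level area), the short sketch below compares a long-span and a short-span layout of the same floor plate; all numbers are invented for the example.

```python
# Illustrative sketch only: compare parking efficiency for two hypothetical
# layouts of the same floor plate. Reported here as floor area per car (the
# inverse of cars per level area); a lower number means a more efficient layout.

def sqft_per_car(level_area_sqft: float, cars_per_level: int) -> float:
    return level_area_sqft / cars_per_level

# Same 120,000 sq ft floor plate; the short-span (podium) layout loses stalls
# to interior columns, so each car costs more floor area.
long_span = sqft_per_car(120_000, 370)   # ~324 sq ft per car
short_span = sqft_per_car(120_000, 330)  # ~364 sq ft per car
print(round(long_span), round(short_span))
```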
In recent times, parking structures built to serve residential and some business properties have been built as part of a larger building, often underground as part of the basement, such as the parking lot at the Atlantic Station redevelopment in Atlanta. This saves land for other uses (as opposed to surface parking), is cheaper and more practical in most cases than a separate structure, and is hidden from view. It protects customers and their cars from weather such as rain, snow, or hot summer sunshine that raises a vehicle's interior temperature to extremely high levels. Underground parking of only two levels was considered an innovative concept in 1964, when developer Louis Lesser developed a two-level underground parking structure under six 10-storey high-rise residential halls at California State University, Los Angeles, which lacked space for horizontal expansion. The simple two-level parking structure was considered unusual enough in 1964 that a separate newspaper section entitled "Parking Underground" described the parking lot as an innovative "concept" and as "subterranean spaces". In Toronto, a 2,400-space underground parking structure below Nathan Phillips Square is one of the world's largest.
Parking structures which serve shopping centers can be built adjacent to the center for easier access at each floor between shops and parking. One example is the Mall of America in Bloomington, Minnesota, USA, which has two large parking structures attached to the building, at the eastern and western ends. A common position for parking within shopping centers in the UK is on the roof, around the various utility systems, enabling customers to take lifts straight down into the center. Examples are The Oracle in Reading and Festival Place in Basingstoke. Parking garages without mixed use can also put the roof area to good use: The Grove Parking Garage hosts movie screenings on its 8th-level roof, the Grand Prix of Long Beach, CA, can be viewed from the roof level of the Aquarium of the Pacific Parking Garage, and that garage and The Pike Parking Garage (opposite the Queensway Structure) were built with thickened post-tensioned roof slabs to accommodate crowds of people.
These parking structures often have low ceiling clearances (7'-2", and 8'-4" for accessible parking), which restrict access by full-size vans and other large vehicles. On 15 December 2013, a man was killed during a robbery in the parking garage at The Mall at Short Hills in Millburn, New Jersey. The paramedics responding to the shooting were delayed because their ambulance was too large to enter the structure.
In the United States, parking garages are estimated to cost around $25,000 per space, with underground parking costing around $35,000 per space.
Structural integrity
Parking structures are subjected to the heavy and shifting loads of moving vehicles, and must bear the associated physical stresses. Expansion joints are used between sections not only for thermal expansion but to accommodate the flexing of the structure's sections due to vehicle traffic. Parking structures are generally not subject to building inspections after being checked for their initial occupancy permit. Seismic retrofits can be applied where earthquakes are an issue.
Some parking structures have partly collapsed, either during construction or years later. In July 2009 a fourth-floor section failed at the Centergy building in midtown Atlanta, pancaking down and destroying more than 30 vehicles but injuring no one. In December 2007, a car crashed into the wall of the deck at the SouthPark Mall in Charlotte, North Carolina, weakening it and causing a small collapse which destroyed two cars below. On the same day, a deck under construction in Jacksonville, Florida collapsed as concrete was being poured on the sixth floor.
In November 2008, the sudden collapse of the middle level of a deck in Montreal was preceded by warning signs some weeks before, including cracks and water leaks.
In June 2012, the Algo Centre Mall's rooftop parking deck collapsed into the building, crashing through the upper level lottery kiosk adjacent to the food court and escalators to the ground floor below, killing two people.
In October 2012 four people were killed and nine more injured when a parking structure under construction at a campus of Miami-Dade College in Florida collapsed, purportedly due to an unfinished column.
The collapse of the Surfside condominium's main building, which killed ninety-eight people, was likely caused by long-term degradation of the reinforced concrete structural supports in the basement-level parking garage.
Precast parking structures
As multi-storey car parks have become more common since the middle of the twentieth century, many such structures have been built using precast concrete to reduce construction time. The design involves assembling prefabricated parking structure parts. The precast concrete parts include multi-storey structural wall panels, interior and exterior columns, structural floors, girders, wall panels, stairs, and slabs. The precast concrete parts are transported to the site using flatbed semi-trailers. The structural floor modules may need to be laid tilted during transportation so that modules covering as large a floor area as possible can still be transported easily on the roadways. The modules are lifted into place at the site using precast concrete lifting anchor systems. Finishing may include covers to close the holes in the precast concrete that contain the lifting anchors, and facades installed on the exterior of the structure.
In modern construction of the precast modules, there are other features to improve the strength of the structure. One example is the use of prestressed strands in post-tensioned concrete for the construction of shear walls. Another is the use of carbon-fiber-reinforced polymer in place of steel wire mesh, to lighten the load and provide more corrosion resistance, especially in cold-climate areas where salt is used to melt snow.
Architectural value
These structures are not usually known for their architectural value. As Architectural Record has noted, "In the Pantheon of Building Types, the parking garage lurks somewhere in the vicinity of prisons and toll plazas." The New York Times has labeled parking structures as "the grim afterthought of American design".
A handful of structures have received considerable praise for their design, including
1111 Lincoln Road, in the South Beach section of Miami Beach, Florida and designed by the internationally known Swiss architectural firm of Herzog & de Meuron.
The Brutalist Preston bus station in the United Kingdom, which incorporates a multistorey car park
Castle Terrace Car Park in Edinburgh, United Kingdom
In the United States, several have been listed on the National Register of Historic Places, including Boston's North Terminal Garage. Among more recent developments, the Queensway Bay Parking Garage in Long Beach, CA, received awards for its unique facade in 1992; it was designed by International Parking Design and built by Bomel Construction Company Inc.
Nomenclature
The term multistorey car park (often abbreviated to multistorey or multistory) is used in the United Kingdom, Hong Kong, and many Commonwealth of Nations countries, and it is nowadays most commonly spelled without a hyphen. In the United States, the term parking structure is used, especially when it is necessary to distinguish such a structure from the "garage" connected with a house. In some places in North America, "parking garage" refers only to an indoor, often underground, structure. Outdoor, multi-level parking facilities are referred to by a number of regional terms:
Parking garage is used, to varying degrees, throughout the U.S. and Canada, often referring to underground parking; such garages are designed professionally by structural engineers and architects;
Parking structure is used worldwide, and synonymously with "parking garage".
Parking deck is used mostly in the Southern United States.
Parking ramp is used in the upper Midwest, especially Minnesota and Wisconsin, and has been observed as far east as Buffalo, New York.
Parkade is widely used in Western Canada and South Africa.
Parking building is used in New Zealand.
Architects and structural engineers in the United States are likely to call it a parking structure, since that term is the vernacular there and since their work is all about structures. When constructed as the base of a high-rise, it is sometimes called a parking podium. United States building codes use the term open parking garage to refer to a structure designed for car storage that has openings along at least 40% of the perimeter, as opposed to an enclosed parking garage, which requires mechanical ventilation. Natural or mechanical ventilation provides fresh air flow to disperse car exhaust in normal conditions, or hot gas and smoke in case of fire.
Typically, parking consultants in the UK describe the number of car park floors in terms of "G+x", where G stands for ground and x for the number of floors above ground. For example, G+5 is a multi-storey car park structure with a ground floor and 5 floors above it, i.e. a total of 6 floors. This convention does not apply in the United States, where B+x refers to basement levels (numbered upward in x as they descend in elevation), L1 is the ground level (unlike European convention, where the ground floor is below Level 1), and levels above are numbered L2, and so on.
Construction types
Concrete
Steel structure
Automated (mechanical)
Steel structure
Steel-structure car parks are made of structural steel components connected to each other to carry the loads and provide full structural rigidity.
Steel is a high-strength material, requiring less material than other structural types such as concrete and timber. Steel construction features:
Cost savings: inexpensive to manufacture and erect, and requires less maintenance than traditional building methods.
Speed: Allows construction or prefabrication off-site with rapid installation on-site. Some suppliers claim construction in days.
Durability: Suppliers claim 50-plus years lifespan.
Removability: Steel car park structures can be designed to be removed at a later date.
Expandability: Steel car park structures can be expanded easily at a later date.
Creativity: Steel allows for long column-free spans.
The ceiling slab of the steel structure car park is typically made of composite material such as corrugated steel sheets and concrete. The surface of the first-floor parking can be left bare or covered with epoxy or tarmac.
Foundationless and modular
Demand, steel features, and innovation have led to the development of a foundationless, modular, removable steel car park structure.
Parking demand often grows quickly, significantly, and sometimes unexpectedly. Modular steel car parks can be an appropriate solution when the available surface area is insufficient and must be expanded upward, or whenever it is not feasible to build a conventional multi-storey car park.
The modular car park concept developed from the modular assembly of vertical and horizontal elements (such as columns and beams).
Modular car park structures are versatile and can be built in phases or in different sizes and shapes.
The solution makes it possible to develop a parking structure even in case of particular conditions or constraints, such as archaeological sites or city centres, because it allows:
To virtually double the parking surface without leaving any footprint on the ground, as no settlement for excavations or traditional foundations is needed;
To double the parking surface by means of a light steel single-deck car park system.
Prefab modular components of the system make each project versatile and suitable for both large and small sized areas.
These parking structures are generally demountable and can be relocated, avoiding the irrevocable choice of converting a surface to a parking area. They can be used as permanent structures or conceived as temporary facilities for temporary parking demand. A number of parking decks have been demounted after a few years – to make room for the development of a permanent structure – and relocated to respond to local parking demand.
Automated parking
The earliest use of an automated parking system (APS) was in Paris in 1905 at the Garage Rue de Ponthieu. The APS consisted of a groundbreaking multi-storey concrete structure with an internal elevator to transport cars to upper levels where attendants parked the cars.
A 1931 Popular Mechanics article speculated about a design for an underground garage where the car is taken to a parking area by a conveyor and then by an elevator to shuttles mounted on rails.
The total cost of ownership of automated parking needs to be carefully considered. The actual cost of construction of automated car parks is typically higher than that of conventional car park structures; however, this can be offset by the higher space efficiency, including reduced excavation waste from minimized footprints. The cost of the mechanical equipment needed to transport the cars needs to be added to the building cost. In addition, operation and maintenance costs of the mechanical equipment need to be added in order to determine the total cost of ownership. Other costs could be saved; for example, there is no need for an energy-intensive ventilating system, since cars are not driven inside, and human cashiers or security personnel may not be needed. For naturally ventilated car park structures, the ventilation equipment is not needed.
Automated car parks rely on similar technology to that used for mechanical handling and document retrieval. The driver leaves the car in an entrance module, and it is then transported to a parking slot by a robotic trolley. For the driver, the process of parking is reduced to leaving the car inside an entrance module.
At peak periods a wait may occur before entering or leaving, because loading passengers and luggage occurs at the entrance and exit rather than at the parking stall. This loading blocks the entrance or exit from being available to others. It is generally not recommended to use automated car parks for high peak hour volume facilities.
Additional factors that need to be taken into consideration are:
Fear of breakdowns (how does the user get the car back?)
Maintenance contracts needed with suppliers
Automotive factories and car dealerships often use automated car parks to store inventory, which makes best use of space if they operate in urban areas, plus the car park may be decorated to promote the brand. For instance, at the Autostadt there are two 60-metre (200 ft) tall glass silos (AutoTürme) used as storage for new Volkswagens. The two towers are connected to the Volkswagen factory by a 700-metre tunnel. When cars arrive at the towers they are carried up at a speed of 1.5 metres per second. The render for the Autostadt shows 6 towers. When purchasing a car from Volkswagen (the main brand only, not the sub-brands) in select European countries, the customer can choose to have it delivered to the dealership where it was bought or to travel to the Autostadt to pick it up. If the latter is chosen, the Autostadt supplies the customer with free entrance, meal tickets and a variety of events building up to the point where the customer can follow on screen as the automatic elevator picks up the selected car in one of the silos. The car is then transported out to the customer without having driven a single metre, and the odometer is thus on "0".
Automated car parks have been popular for multistorey residential buildings in New York City and Paris. In Toronto, automated car parks have gradually been catching on in downtown-core condominium developments since the 2010s, due to developers having to meet city-mandated minimum parking space requirements while building on increasingly smaller lots.
Other technologies
Modern car parks utilize a variety of technologies to help motorists find unoccupied parking spaces, locate their car when returning to it, and improve their experience. These include adaptive lighting, sensors, parking-space LED indicators mounted above every space (red for occupied, green for available, and blue for spaces reserved for the disabled), indoor positioning systems (IPS), including QR codes, and mobile payment options. The Santa Monica Place shopping mall in California has cameras on each stall that can help count the lot occupancy and find lost cars.
Online booking technology service providers have been created to help drivers find long-term parking in an automated manner, while also providing significant savings for those who book parking spaces ahead of time. They use real-time inventory management checking technology to display car parks with availability, sorted by price and distance from the airport.
Other recent developments in technology include vehicle detection and count systems; point-of-sale and revenue control systems; traffic and capacity monitoring systems; and valet parking point-of-sale, management, and revenue control systems. These systems help with wayfinding for parking clients, with space availability shown at every turn, and with space monitoring, using retrofit wifi transmitters in each space to update the space-availability signs and to alert parking management to bottlenecks so that intervention measures can be taken. Revenue control, capacity management, and valet point of sale are major issues for office and retail parking management and are also a means of parking-management intervention, with websites updating the status of all of these systems for exclusive use by management. Irvine Spectrum Center in Irvine, CA, with 3 parking structures, uses all of these systems. The City of Santa Monica uses traffic and capacity monitoring with its 30 parking structures. Disneyland, in Anaheim, CA, uses most of these hi-tech solutions in its 8 garages.
Multistorey parking ship
In 1991, a 1975 marine vessel was transformed into a floating pontoon multi-storey car parking facility. The ship was given the new name P-Arken (a pun on the words park and ark) and is permanently moored in Gothenburg's harbour at Lilla Bommen, near Skeppsbron.
In November 2019, a fully-clad parking barge for automobiles was patented in the United States. Its angular sides are designed to protect against driving wind, rain, and debris.
Education and research
In October 2009, the National Building Museum opened an exhibition solely devoted to the study of parking garages and their impact on the built environment. This exhibition, titled House of Cars: Innovation and the Parking Garage, was on view until 11 July 2010. Additional information on the design and building of parking structures can be found in Parking Structures: Planning, Design, Construction, Maintenance, and Repair. This resource is in its third edition and was written by prominent staff of Walker Parking Consultants, a preeminent parking structure designer in the US.
See also
Arcade, "parkade" is a portmanteau or parking and arcade due to the architectural similarity.
Auto Stacker
Autostadt
Automatic parking
Automated parking system
Automatic vehicle location
Car condo
Car parking system
Parking guidance and information
Trinity Square, Gateshead
References
External links
"Robotic Parking Garage: No Tip Necessary "
Garages (parking)
Indoor positioning system
Parking
Structural system | Multistorey car park | Technology,Engineering | 4,522 |
47,742 | https://en.wikipedia.org/wiki/Brooklyn%20Bridge | The Brooklyn Bridge is a hybrid cable-stayed/suspension bridge in New York City, spanning the East River between the boroughs of Manhattan and Brooklyn. Opened on May 24, 1883, the Brooklyn Bridge was the first fixed crossing of the East River. It was also the longest suspension bridge in the world at the time of its opening, with a main span of and a deck above Mean High Water. The span was originally called the New York and Brooklyn Bridge or the East River Bridge but was officially renamed the Brooklyn Bridge in 1915.
Proposals for a bridge connecting Manhattan and Brooklyn were first made in the early 19th century, which eventually led to the construction of the current span, designed by John A. Roebling. The project's chief engineer, his son Washington Roebling, contributed further design work, assisted by the latter's wife, Emily Warren Roebling. Construction started in 1870 and was overseen by the New York Bridge Company, which in turn was controlled by the Tammany Hall political machine. Numerous controversies and the novelty of the design prolonged the project over thirteen years. After opening, the Brooklyn Bridge underwent several reconfigurations, having carried horse-drawn vehicles and elevated railway lines until 1950. To alleviate increasing traffic flows, additional bridges and tunnels were built across the East River. Following gradual deterioration, the Brooklyn Bridge was renovated several times, including in the 1950s, 1980s, and 2010s.
The Brooklyn Bridge is the southernmost of four vehicular bridges directly connecting Manhattan Island and Long Island, with the Manhattan Bridge, the Williamsburg Bridge, and the Queensboro Bridge to the north. Only passenger vehicles and pedestrian and bicycle traffic are permitted. A major tourist attraction since its opening, the Brooklyn Bridge has become an icon of New York City. Over the years, the bridge has been used as the location of various stunts and performances, as well as several crimes, attacks and vandalism. The Brooklyn Bridge is designated a National Historic Landmark, a New York City landmark, and a National Historic Civil Engineering Landmark.
Description
The Brooklyn Bridge, an early example of a steel-wire suspension bridge, uses a hybrid cable-stayed/suspension bridge design, with both vertical and diagonal suspender cables. Its stone towers are neo-Gothic, with characteristic pointed arches. The New York City Department of Transportation (NYCDOT), which maintains the bridge, says that its original paint scheme was "Brooklyn Bridge Tan" and "Silver", but other accounts state that it was originally entirely "Rawlins Red".
Deck
To provide sufficient clearance for shipping in the East River, the Brooklyn Bridge incorporates long approach viaducts on either end to raise it from low ground on both shores. Including approaches, the Brooklyn Bridge is a total of long when measured between the curbs at Park Row in Manhattan and Sands Street in Brooklyn. A separate measurement of is sometimes given; this is the distance from the curb at Centre Street in Manhattan.
Suspension span
The main span between the two suspension towers is long and wide. The bridge "elongates and contracts between the extremes of temperature from 14 to 16 inches". Navigational clearance is above Mean High Water (MHW). A 1909 Engineering Magazine article said that, at the center of the span, the height above MHW could fluctuate by more than due to temperature and traffic loads, while more rigid spans had a lower maximum deflection.
The side spans, between each suspension tower and each side's suspension anchorages, are long. At the time of construction, engineers had not yet discovered the aerodynamics of bridge construction, and bridge designs were not tested in wind tunnels. John Roebling designed the Brooklyn Bridge's truss system to be six to eight times as strong as he thought it needed to be. As such, the open truss structure supporting the deck is, by its nature, subject to fewer aerodynamic problems. However, due to a supplier's fraudulent substitution of inferior-quality wire in the initial construction, the bridge was reappraised at the time as being only four times as strong as necessary.
The main span and side spans are supported by a structure containing trusses that run parallel to the roadway, each of which is deep. Originally there were six trusses, but two were removed during a late-1940s renovation. The trusses allow the Brooklyn Bridge to hold a total load of , a design consideration from when it originally carried heavier elevated trains. These trusses are held up by suspender ropes, which hang downward from each of the four main cables. Crossbeams run between the trusses at the top, and diagonal and vertical stiffening beams run on the outside and inside of each roadway.
An elevated pedestrian-only promenade runs in between the two roadways and above them. It typically runs below the level of the crossbeams, except at the areas surrounding each tower. Here, the promenade rises to just above the level of the crossbeams, connecting to a balcony that slightly overhangs the two roadways. The path is generally wide. The iron railings were produced by Janes & Kirtland, a Bronx iron foundry that also made the United States Capitol dome and the Bow Bridge in Central Park.
Approaches
Each of the side spans is reached by an approach ramp. The approach ramp from the Brooklyn side is shorter than the approach ramp from the Manhattan side. The approaches are supported by Renaissance-style arches made of masonry; the arch openings themselves were filled with brick walls, with small windows within. The approach ramp contains nine arch or iron-girder bridges across side streets in Manhattan and Brooklyn.
Underneath the Manhattan approach, a series of brick slopes or "banks" was developed into a skate park, the Brooklyn Banks, in the late 1980s. The park uses the approach's support pillars as obstacles. In the mid-2010s, the Brooklyn Banks were closed to the public because the area was being used as a storage site during the bridge's renovation. The skateboarding community has attempted to save the banks on multiple occasions; after the city destroyed the smaller banks in the 2000s, the city government agreed to keep the larger banks for skateboarding. When the NYCDOT removed the bricks from the banks in 2020, skateboarders started an online petition. In the 2020s, local resident Rosa Chang advocated for the space under the Manhattan approach to be converted into a recreational area known as Gotham Park. Some of the space under the Manhattan approach reopened in May 2023 as a park called the Arches; this was followed in November 2024 by another section of parkland.
Cables
The Brooklyn Bridge contains four main cables, which descend from the tops of the suspension towers and help support the deck. Two are located to the outside of the bridge's roadways, while two are in the median of the roadways. Each main cable measures in diameter and contains 5,282 parallel, galvanized steel wires wrapped closely together in a cylindrical shape. These wires are bundled in 19 individual strands, with 278 wires to a strand. This was the first use of bundling in a suspension bridge and took several months for workers to tie together. Since the 2000s, the main cables have also supported a series of 24-watt LED lighting fixtures, referred to as "necklace lights" due to their shape.
In addition, either 1,088, 1,096, or 1,520 galvanized steel wire suspender cables hang downward from the main cables. Another 400 cable stays extend diagonally from the towers. The vertical suspender cables and diagonal cable stays hold up the truss structure around the bridge deck. The bridge's suspenders originally used wire rope, which was replaced in the 1980s with galvanized steel made by Bethlehem Steel. The vertical suspender cables measure long, and the diagonal stays measure long.
Anchorages
Each side of the bridge contains an anchorage for the main cables. The anchorages are trapezoidal limestone structures located slightly inland of the shore, measuring at the base and at the top. Each anchorage weighs . The Manhattan anchorage rests on a foundation of bedrock while the Brooklyn anchorage rests on clay.
The anchorages both have four anchor plates, one for each of the main cables, which are located near ground level and parallel to the ground. The anchor plates measure , with a thickness of and weigh each. Each anchor plate is connected to the respective main cable by two sets of nine eyebars, each of which is about long and up to thick. The chains of eyebars curve downward from the cables toward the anchor plates, and the eyebars vary in size depending on their position.
The anchorages also contain numerous passageways and compartments. Starting in 1876, in order to fund the bridge's maintenance, the New York City government made the large vaults under the bridge's Manhattan anchorage available for rent, and they were in constant use during the early 20th century. The vaults were used to store wine, as they were kept at a consistent temperature due to a lack of air circulation. The Manhattan vault was called the "Blue Grotto" because of a shrine to the Virgin Mary next to an opening at the entrance. The vaults were closed for public use in the late 1910s and 1920s during World War I and Prohibition but were reopened thereafter. When New York magazine visited one of the cellars in 1978, it discovered a "fading inscription" on a wall reading: "Who loveth not wine, women and song, he remaineth a fool his whole life long." Leaks found within the vault's spaces necessitated repairs during the late 1980s and early 1990s. By the late 1990s, the chambers were being used to store maintenance equipment.
Towers
The bridge's two suspension towers are tall with a footprint of at the high water line. They are built of limestone, granite, and Rosendale cement. The limestone was quarried at the Clark Quarry in Essex County, New York. The granite blocks were quarried and shaped on Vinalhaven Island, Maine, under a contract with the Bodwell Granite Company, and delivered from Maine to New York by schooner. The Manhattan tower contains of masonry, while the Brooklyn tower has of masonry. There are 56 LED lamps mounted onto the towers.
Each tower contains a pair of Gothic Revival pointed arches, through which the roadways run. The arch openings are tall and wide. The tops of the towers are located above the floor of each arch opening, while the floors of the openings are above mean water level, giving the towers a total height of above mean high water.
Caissons
The towers rest on underwater caissons made of southern yellow pine and filled with cement. Inside both caissons were spaces for construction workers. The Manhattan side's caisson is slightly larger, measuring and located below high water, while the Brooklyn side's caisson measures and is located below high water. The caissons were designed to hold at least the weight of the towers which would exert a pressure of when fully built, but the caissons were over-engineered for safety. During an accident on the Brooklyn side, when air pressure was lost and the partially-built towers dropped full-force down, the caisson sustained an estimated pressure of with only minor damage. Most of the timber used in the bridge's construction, including in the caissons, came from mills at Gascoigne Bluff on St. Simons Island, Georgia.
The Brooklyn side's caisson, which was built first, originally had a height of and a ceiling composed of five layers of timber, each layer tall. Ten more layers of timber were later added atop the ceiling, and the entire caisson was wrapped in tin and wood for further protection against flooding. The thickness of the caisson's sides was at both the bottom and the top. The caisson had six chambers: two each for dredging, supply shafts, and airlocks.
The caisson on the Manhattan side was slightly different because it had to be installed at a greater depth. To protect against the increased air pressure at that depth, the Manhattan caisson had 22 layers of timber on its roof, seven more than its Brooklyn counterpart had. The Manhattan caisson also had fifty pipes for sand removal, a fireproof iron-boilerplate interior, and different airlocks and communication systems.
History
Planning
Proposals for a bridge between the then-separate cities of Brooklyn and New York had been suggested as early as 1800. At the time, the only travel between the two cities was by a number of ferry lines. Engineers presented various designs, such as chain or link bridges, though these were never built because of the difficulties of constructing a high enough fixed-span bridge across the extremely busy East River. There were also proposals for tunnels under the East River, but these were considered prohibitively expensive. German immigrant engineer John Augustus Roebling proposed building a suspension bridge over the East River in 1857. He had previously designed and constructed shorter suspension bridges, such as Roebling's Delaware Aqueduct in Lackawaxen, Pennsylvania, and the Niagara Suspension Bridge. In 1867, Roebling erected what became the John A. Roebling Suspension Bridge over the Ohio River between Cincinnati, Ohio, and Covington, Kentucky.
In February 1867, the New York State Senate passed a bill that allowed the construction of a suspension bridge from Brooklyn to Manhattan. Two months later, the New York and Brooklyn Bridge Company was incorporated with a board of directors (later converted to a board of trustees). There were twenty trustees in total: eight each appointed by the mayors of New York and Brooklyn, as well as the mayors of each city and the auditor and comptroller of Brooklyn. The company was tasked with constructing what was then known as the New York and Brooklyn Bridge. Alternatively, the span was just referred to as the "Brooklyn Bridge", a name originating in a January 25, 1867, letter to the editor sent to the Brooklyn Daily Eagle. The act of incorporation, which became law on April 16, 1867, authorized the cities of New York (now Manhattan) and Brooklyn to subscribe to $5 million in capital stock, which would fund the bridge's construction.
Roebling was subsequently named the chief engineer of the work and, by September 1867, had presented a master plan. According to the plan, the bridge would be longer and taller than any suspension bridge previously built. It would incorporate roadways and elevated rail tracks, whose tolls and fares would provide the means to pay for the bridge's construction. It would also include a raised promenade that served as a leisurely pathway. The proposal received much acclaim in both cities, and residents predicted that the New York and Brooklyn Bridge's opening would have as much of an impact as the Suez Canal, the first transatlantic telegraph cable or the first transcontinental railroad. By early 1869, however, some individuals started to criticize the project, saying either that the bridge was too expensive, or that the construction process was too difficult.
To allay concerns about the design of the New York and Brooklyn Bridge, Roebling set up a "Bridge Party" in March 1869, where he invited engineers and members of U.S. Congress to see his other spans. Following the bridge party in April, Roebling and several engineers conducted final surveys. During the process, it was determined that the main span would have to be raised from above MHW, requiring several changes to the overall design. In June 1869, while conducting these surveys, Roebling sustained a crush injury to his foot when a ferry pinned it against a piling. After amputation of his crushed toes, he developed a tetanus infection that left him incapacitated and resulted in his death the following month. Washington Roebling, John Roebling's 32-year-old son, was then hired to fill his father's role. Tammany Hall leader William M. Tweed also became involved in the bridge's construction because, as a major landowner in New York City, he had an interest in the project's completion. The New York and Brooklyn Bridge Company—later known simply as the New York Bridge Company—was actually overseen by Tammany Hall, and it approved Roebling's plans and designated him as chief engineer of the project.
Construction
Caissons
Construction of the Brooklyn Bridge began on January 2, 1870. The first work entailed the construction of two caissons, upon which the suspension towers would be built. The Brooklyn side's caisson was built at the Webb & Bell shipyard in Greenpoint, Brooklyn, and was launched into the river on March 19, 1870. Compressed air was pumped into the caisson, and workers entered the space to dig the sediment until it sank to the bedrock. As one sixteen-year-old from Ireland, Frank Harris, described the fearful experience:
The six of us were working naked to the waist in the small iron chamber with the temperature of about 80 degrees Fahrenheit: In five minutes the sweat was pouring from us, and all the while we were standing in icy water that was only kept from rising by the terrific pressure. No wonder the headaches were blinding.
Once the caisson had reached the desired depth, it was to be filled in with vertical brick piers and concrete. However, due to the unexpectedly high concentration of large boulders atop the riverbed, the Brooklyn caisson took several months to sink to the desired depth. Furthermore, in December 1870, its timber roof caught fire, delaying construction further. The "Great Blowout", as the fire was called, delayed construction for several months, since the holes in the caisson had to be repaired. On March 6, 1871, the repairs were finished, and the caisson had reached its final depth of ; it was filled with concrete five days later. Overall, about 264 individuals were estimated to have worked in the caisson every day, but because of high worker turnover, the final total was thought to be about 2,500 men. In spite of this, only a few workers were paralyzed. At its final depth, the caisson's air pressure was .
The Manhattan side's caisson was the next structure to be built. To ensure that it would not catch fire like its counterpart had, the Manhattan caisson was lined with fireproof plate iron. It was launched from Webb & Bell's shipyard on May 11, 1871, and maneuvered into place that September. Due to the extreme underwater air pressure inside the much deeper Manhattan caisson, many workers became sick with "the bends"—decompression sickness—during this work, despite the incorporation of airlocks (which were believed to help with decompression sickness at the time). This condition was unknown at the time and was first called "caisson disease" by the project physician, Andrew Smith. Between January 25 and May 31, 1872, Smith treated 110 cases of decompression sickness, while three workers died from the disease. When iron probes underneath the Manhattan caisson found the bedrock to be even deeper than expected, Washington Roebling halted construction due to the increased risk of decompression sickness. After the Manhattan caisson reached a depth of with an air pressure of , Washington deemed the sandy subsoil overlying the bedrock beneath to be sufficiently firm, and subsequently infilled the caisson with concrete in July 1872.
Washington Roebling himself suffered a paralyzing injury as a result of caisson disease shortly after ground was broken for the Brooklyn tower foundation. His debilitating condition left him unable to supervise the construction in person, so he designed the caissons and other equipment from his apartment, directing "the completion of the bridge through a telescope from his bedroom." His wife, Emily Warren Roebling, not only provided written communications between her husband and the engineers on site, but also understood mathematics, calculations of catenary curves, strengths of materials, bridge specifications, and the intricacies of cable construction. She spent the next 11 years helping supervise the bridge's construction, taking over much of the chief engineer's duties, including day-to-day supervision and project management.
Towers
After the caissons were completed, piers were constructed on top of each of them upon which masonry towers would be built. The towers' construction was a complex process that took four years. Since the masonry blocks were heavy, the builders transported them to the base of the towers using a pulley system with a continuous -diameter steel wire rope, operated by steam engines at ground level. The blocks were then carried up on a timber track alongside each tower and maneuvered into the proper position using a derrick atop the towers. The blocks sometimes vibrated the ropes because of their weight, but only once did a block fall.
Construction on the suspension towers started in mid-1872, and by the time work was halted for the winter in late 1872, parts of each tower had already been built. By mid-1873, there was substantial progress on the towers' construction. The Brooklyn side's tower had reached a height of above mean high water (MHW), while the tower on the Manhattan side had reached above MHW. The arches of the Brooklyn tower were completed by August 1874. The tower was substantially finished by December 1874 with the erection of saddle plates for the main cables at the top of the tower. However, the ornamentation on the Brooklyn tower could not be completed until the Manhattan tower was finished. The last stone on the Brooklyn tower was raised in June 1875 and the Manhattan tower was completed in July 1876. The saddle plates atop both towers were also raised in July 1876. The work was dangerous: by 1876, three workers had died having fallen from the towers, while nine other workers were killed in other accidents.
In 1875, while the towers were being constructed, the project had depleted its original $5 million budget. Two bridge commissioners, one each from Brooklyn and Manhattan, petitioned New York state lawmakers to allot another $8 million for construction. Ultimately, the legislators passed a law authorizing the allotment with the condition that the cities would buy the stock of Brooklyn Bridge's private stockholders.
Work proceeded concurrently on the anchorages on each side. The Brooklyn anchorage broke ground in January 1873 and was substantially completed in August 1875. The Manhattan anchorage was built in less time: started in May 1875, it was mostly completed in July 1876. The anchorages could not be fully completed until the main cables were spun, at which point another would be added to the height of each anchorage.
Cables
The first temporary wire was stretched between the towers on August 15, 1876, using chrome steel provided by the Chrome Steel Company of Brooklyn. The wire was then stretched back across the river, and the two ends were spliced to form a traveler, a lengthy loop of wire connecting the towers, which was driven by a steam hoisting engine at ground level. The wire was one of two that were used to create a temporary footbridge for workers while cable spinning was ongoing. The next step was to send an engineer across the completed traveler wire in a boatswain's chair slung from the wire, to ensure it was safe enough. The bridge's master mechanic, E.F. Farrington, was selected for this task, and an estimated crowd of 10,000 people on both shores watched him cross. A second traveler wire was then stretched across the span, a task that was completed by August 30. The temporary footbridge, located some above the elevation of the future deck, was completed in February 1877.
By December 1876, a steel contract for the permanent cables still had not been awarded. There was disagreement over whether the bridge's cables should use the as-yet-untested Bessemer steel or the well-proven crucible steel. Until a permanent contract was awarded, the builders ordered of wire in the interim, 10 tons each from three companies, including Washington Roebling's own steel mill in Brooklyn. In the end, it was decided to use number 8 Birmingham gauge (approximately 4 mm or 0.165 inches in diameter) crucible steel, and a request for bids was distributed, to which eight companies responded. In January 1877, a contract for crucible steel was awarded to J. Lloyd Haigh, who was associated with bridge trustee Abram Hewitt, whom Roebling distrusted.
The spinning of the wires required the manufacture of large coils of it which were galvanized but not oiled when they left the factory. The coils were delivered to a yard near the Brooklyn anchorage. There they were dipped in linseed oil, hoisted to the top of the anchorage, dried out and spliced into a single wire, and finally coated with red zinc for further galvanizing. There were thirty-two drums at the anchorage yard, eight for each of the four main cables. Each drum had a capacity of of wire. The first experimental wire for the main cables was stretched between the towers on May 29, 1877, and spinning began two weeks later. All four main cables were being strung by that July. During that time, the temporary footbridge was unofficially opened to members of the public, who could receive a visitor's pass; by August 1877 several thousand visitors from around the world had used the footbridge. The visitor passes ceased that September after a visitor had an epileptic seizure and nearly fell off.
As the wires were being spun, work also commenced on the demolition of buildings on either side of the river for the Brooklyn Bridge's approaches; this work was mostly complete by September 1877. The following month, initial contracts were awarded for the suspender wires, which would hang down from the main cables and support the deck. By May 1878, the main cables were more than two-thirds complete. However, the following month, one of the wires slipped, killing two people and injuring three others. In 1877, Hewitt wrote a letter urging against the use of Bessemer steel in the bridge's construction. Bids had been submitted for both crucible steel and Bessemer steel; John A. Roebling's Sons submitted the lowest bid for Bessemer steel, but at Hewitt's direction, the contract was awarded to Haigh.
A subsequent investigation discovered that Haigh had substituted inferior quality wire in the cables. Of eighty rings of wire that were tested, only five met standards, and it was estimated that Haigh had earned $300,000 from the deception. At this point, it was too late to replace the cables that had already been constructed. Roebling determined that the poorer wire would leave the bridge only four times as strong as necessary, rather than six to eight times as strong. The inferior-quality wire was allowed to remain and 150 extra wires were added to each cable. To avoid public controversy, Haigh was not fired, but instead was required to personally pay for higher-quality wire. The contract for the remaining wire was awarded to the John A. Roebling's Sons, and by October 5, 1878, the last of the main cables' wires went over the river.
Nearing completion
After the suspender wires had been placed, workers began erecting steel crossbeams to support the roadway as part of the bridge's overall superstructure. Construction on the bridge's superstructure started in March 1879, but, as with the cables, the trustees initially disagreed on whether the steel superstructure should be made of Bessemer or crucible steel. That July, the trustees decided to award a contract for of Bessemer steel to the Edgemoor (or Edge Moor) Iron Works, based in Philadelphia, to be delivered by 1880. The trustees later passed another resolution for another of Bessemer steel. However, by February 1880 the steel deliveries had not started. That October, the bridge trustees questioned Edgemoor's president about the delay in steel deliveries. Despite Edgemoor's assurances that the contract would be fulfilled, the deliveries still had not been completed by November 1881. Brooklyn mayor Seth Low, who became part of the board of trustees in 1882, became the chairman of a committee tasked to investigate Edgemoor's failure to fulfill the contract. When questioned, Edgemoor's president stated that the delays were the fault of another contractor, the Cambria Iron Company, who was manufacturing the eyebars for the bridge trusses; at that point, the contract was supposed to be complete by October 1882.
Further complicating the situation, Washington Roebling had failed to appear at the trustees' meeting in June 1882, since he had gone to Newport, Rhode Island. After the news media discovered this, most of the newspapers called for Roebling to be fired as chief engineer, except for the Daily State Gazette of Trenton, New Jersey, and the Brooklyn Daily Eagle. Some of the longstanding trustees, including Henry C. Murphy, James S. T. Stranahan, and William C. Kingsley, were willing to vouch for Roebling, since construction progress on the Brooklyn Bridge was still ongoing. However, Roebling's behavior was considered suspect among the younger trustees who had joined the board more recently.
Construction on the bridge itself was noted in formal reports that Murphy presented each month to the mayors of New York and Brooklyn. For example, Murphy's report in August 1882 noted that the month's progress included 114 intermediate cords erected within a week, as well as 72 diagonal stays, 60 posts, and numerous floor beams, bridging trusses, and stay bars. By early 1883, the Brooklyn Bridge was considered mostly completed and was projected to open that June. Contracts for bridge lighting were awarded by February 1883, and a toll scheme was approved that March.
Opposition
There was substantial opposition to the bridge's construction from shipbuilders and merchants located to the north, who argued that the bridge would not provide sufficient clearance underneath for ships. In May 1876, these groups, led by Abraham Miller, filed a lawsuit in the United States District Court for the Southern District of New York against the cities of New York and Brooklyn.
In 1879, an Assembly Sub-Committee on Commerce and Navigation began an investigation into the Brooklyn Bridge. A seaman who had been hired to determine the height of the span testified to the committee about the difficulties that ship masters would experience in bringing their ships under the bridge when it was completed. Another witness, Edward Wellman Serrell, a civil engineer, said that the calculations of the bridge's assumed strength were incorrect. The Supreme Court decided in 1883 that the Brooklyn Bridge was a lawful structure.
Opening
The New York and Brooklyn Bridge was opened for use on May 24, 1883. Thousands of people attended the opening ceremony, and many ships were present in the East River for the occasion. Officially, Emily Warren Roebling was the first to cross the bridge. The bridge opening was also attended by U.S. president Chester A. Arthur and New York mayor Franklin Edson, who crossed the bridge and shook hands with Brooklyn mayor Seth Low at the Brooklyn end. Abram Hewitt gave the principal address.
Though Washington Roebling was unable to attend the ceremony (and rarely visited the site again), he held a celebratory banquet at his house on the day of the bridge opening. Further festivities included a performance by a band, gunfire from ships, and a fireworks display. On that first day, a total of 1,800 vehicles and 150,300 people crossed the span. Less than a week after the Brooklyn Bridge opened, ferry crews reported a sharp drop in patronage, while the bridge's toll operators were processing over a hundred people a minute. However, cross-river ferries continued to operate until 1942.
Brooklyn paid two-thirds of the bridge's construction cost. The bonds to fund the construction would not be paid off until 1956. An estimated 27 men died during the bridge's construction. Since the New York and Brooklyn Bridge was the only bridge across the East River at that time, it was also called the East River Bridge. Until the construction of the nearby Williamsburg Bridge in 1903, the New York and Brooklyn Bridge was the longest suspension bridge in the world, 20% longer than any built previously.
At the time of opening, the Brooklyn Bridge was not complete; the proposed public transit across the bridge was still being tested, while the Brooklyn approach was being completed. On May 30, 1883, six days after the opening, a woman falling down a stairway at the Brooklyn approach caused a stampede which resulted in at least twelve people being crushed and killed. In subsequent lawsuits, the Brooklyn Bridge Company was acquitted of negligence. However, the company did install emergency phone boxes and additional railings, and the trustees approved a fireproofing plan for the bridge. Public transit service began with the opening of the New York and Brooklyn Bridge Railway, a cable car service, on September 25, 1883. On May 17, 1884, one of the circus master P. T. Barnum's most famous attractions, Jumbo the elephant, led a parade of 21 elephants over the Brooklyn Bridge. This helped to lessen doubts about the bridge's stability while also promoting Barnum's circus.
1880s to 1910s
Patronage across the Brooklyn Bridge increased in the years after it opened; a million people paid to cross in the first six months. The bridge carried 8.5 million people in 1884, its first full year of operation; this number doubled to 17 million in 1885 and again to 34 million in 1889. Many of these people were cable car passengers. Additionally, about 4.5 million pedestrians a year were crossing the bridge for free by 1892.
The first proposal to make changes to the bridge came only two and a half years after it opened, when Linda Gilbert suggested that glass steam-powered elevators and an observatory be added to the bridge and a fee be charged for their use, which would partly fund the bridge's upkeep and partly fund her prison reform charity. This proposal was considered but not acted upon. Numerous other proposals were made during the first fifty years of the bridge's life. Trolley tracks were added in the center lanes of both roadways in 1898, allowing trolleys to use the bridge as well. That year, the formerly separate City of Brooklyn was unified with New York City, and the Brooklyn Bridge fell under city control.
Concerns about the Brooklyn Bridge's safety were raised during the turn of the century. In 1898, traffic backups due to a dead horse caused one of the truss cords to buckle. There were more significant worries after twelve suspender cables snapped in 1901, though a thorough investigation found no other defects. After the 1901 incident, five inspectors were hired to examine the bridge each day, a service that cost $250,000 a year. The Brooklyn Rapid Transit Company, which operated routes across the Brooklyn Bridge, issued a notice in 1905 saying that the bridge had reached its transit capacity.
By 1890, due to the popularity of the Brooklyn Bridge, there were proposals to construct other bridges across the East River between Manhattan and Long Island. Although a second deck for the Brooklyn Bridge was proposed, it was thought to be infeasible because doing so would overload the bridge's structural capacity. The first new bridge across the East River, the Williamsburg Bridge, opened upstream in 1903 and connected Williamsburg, Brooklyn, with the Lower East Side of Manhattan. This was followed by the Queensboro Bridge between Queens and Manhattan in March 1909, and the Manhattan Bridge between Brooklyn and Manhattan in December 1909. Several subway, railroad, and road tunnels were also constructed, which helped to accelerate the development of Manhattan, Brooklyn, and Queens.
1910s to 1940s
Though carriages and cable-car customers had paid tolls ever since the bridge's opening, pedestrians were spared from the tolls originally. By the first decade of the 20th century, pedestrians were also paying tolls. Tolls on all four bridges across the East River—the Brooklyn Bridge, as well as the Manhattan, Williamsburg, and Queensboro bridges to the north—were abolished in July 1911 as part of a populist policy initiative headed by New York City mayor William Jay Gaynor. The city government passed a bill to officially name the structure the "Brooklyn Bridge" in January 1915.
Ostensibly in an attempt to reduce traffic on nearby city streets, Grover Whalen, the commissioner of Plant and Structures, banned motor vehicles from the Brooklyn Bridge on July 6, 1922. The real reason for the ban was an incident the same year where two cables slipped due to high traffic loads. Both Whalen and Roebling called for the renovation of the Brooklyn Bridge and the construction of a parallel bridge, though the parallel bridge was never built. Whalen's successor William Wirt Mills announced in 1924 that a new wood-block pavement would be installed, permitting motor vehicles to use the bridge again; motor traffic was again allowed on the bridge starting on May 12, 1925.
As part of an experiment, starting in November 1946, the Manhattan-bound roadway carried Brooklyn-bound traffic during the evening rush hours. The experiment ended after two months due to complaints about congestion.
Mid- to late 20th century
Upgrades
The first major upgrade to the Brooklyn Bridge commenced in 1948, when a contract to entirely reconstruct the approach ramps was awarded to David B. Steinman. The renovation was expected to double the capacity of the bridge's roadways to nearly 6,000 cars per hour, at a projected cost of $7 million. The renovation included the demolition of both the elevated tracks and the trolley tracks on the roadways; the removal of the trusses separating the inner elevated tracks from the existing vehicle lanes; the widening of each roadway from two to three lanes; and the construction of a new steel-and-concrete floor. In addition, new ramps were added to Adams Street, Cadman Plaza, and the Brooklyn Queens Expressway (BQE) on the Brooklyn side, and to Park Row on the Manhattan side. The bridge was briefly closed to all traffic for the first time ever in January 1950, and the trolley tracks closed that March to allow the widening work to occur. During the construction project, one roadway at a time was closed, allowing reduced traffic flows to cross the bridge in one direction only.
The widened south roadway was completed in May 1951, followed by the north roadway in October 1953. The restoration was finished in May 1954 with the completion of the reconstructed elevated promenade. While the rebuilding of the span was ongoing, a fallout shelter was constructed beneath the Manhattan approach as a Cold War precaution. The abandoned space in one of the masonry arches was stocked with emergency survival supplies for a potential nuclear attack by the Soviet Union; these supplies remained in place half a century later. In addition, defensive barriers were added to the bridge as a safeguard against sabotage.
Simultaneous with the rebuilding of the Brooklyn Bridge, a double-decked viaduct for the BQE was being built through an existing steel overpass of the bridge's Brooklyn approach ramp. The segment of the BQE from the Brooklyn Bridge south to Atlantic Avenue opened in June 1954, but the direct ramp from the northbound BQE to the Manhattan-bound Brooklyn Bridge did not open until 1959. The city also widened the Adams Street approach in Brooklyn, between the bridge and Fulton Street, between 1954 and 1955. Subsequently, Boerum Place from Fulton Street south to Atlantic Avenue was also widened. This required the demolition of the old Kings County courthouse. The towers were cleaned in 1958 and the Brooklyn anchorage was repaired the next year.
On the Manhattan side, the city approved a controversial rebuilding of the Manhattan entrance plaza in 1953. The project, which would add a grade-separated junction over Park Row, was hotly contested because it would require the demolition of 21 structures, including the old New York World Building. The reconstruction also necessitated the relocation of 410 families on Park Row. In December 1956, the city started a two-year renovation of the plaza. This required the closure of one roadway at a time, as was done during the rebuilding of the bridge itself. Work on redeveloping the area around the Manhattan approach started in the mid-1960s. At the same time, plans were announced for direct ramps to the elevated FDR Drive to alleviate congestion at the approach. The ramp from FDR Drive to the Brooklyn Bridge was opened in 1968, followed by the ramp from the bridge to FDR Drive the next year. A single ramp from the Manhattan-bound Brooklyn Bridge to northbound Park Row was constructed in 1970. A repainting of the bridge was announced two years later in advance of its 90th anniversary.
Deterioration and late-20th century repair
The Brooklyn Bridge gradually deteriorated due to age and neglect. While it had 200 full-time dedicated maintenance workers before World War II, that number dropped to five by the late 20th century, and the city as a whole only had 160 bridge maintenance workers. In 1974, heavy vehicles such as vans and buses were banned from the bridge to prevent further erosion of the concrete roadway. A report in The New York Times four years later noted that the cables were visibly fraying and the pedestrian promenade had holes in it. The city began planning to replace all the Brooklyn Bridge's cables at a cost of $115 million, as part of a larger project to renovate all four toll-free East River spans. By 1980, the Brooklyn Bridge was in such dire condition that it faced imminent closure. In some places, half of the strands in the cables were broken.
In June 1981, two of the diagonal stay cables snapped, killing a pedestrian. Subsequently, the anchorages were found to have developed rust, and an emergency cable repair was necessitated less than a month later after another cable developed slack. Following the incident, the city accelerated the timetable of its proposed cable replacement, and it commenced a $153 million rehabilitation of the Brooklyn Bridge in advance of the 100th anniversary. As part of the project, the bridge's original suspender cables installed by J. Lloyd Haigh were replaced by Bethlehem Steel in 1986, marking the cables' first replacement since construction. In addition, the staircase at Washington Street in Brooklyn was renovated, the stairs from Tillary and Adams Streets were replaced with a ramp, and the short flights of steps from the promenade to each tower's balcony were removed. In a smaller project, the bridge was floodlit at night starting in 1982 to highlight its architectural features.
Additional problems persisted, and in 1993, high levels of lead were discovered near the bridge's towers. Further emergency repairs were undertaken in mid-1999 after small concrete shards began falling from the bridge into the East River. The concrete deck had been installed during the 1950s renovations and had a lifespan of about 60 years. The Park Row exit from the bridge's westbound lanes was closed as a safety measure after the September 11, 2001, attacks on the nearby World Trade Center. That section of Park Row was closed off because it ran directly underneath 1 Police Plaza, the headquarters of the New York City Police Department (NYPD). In early 2003, to save money on electricity, the NYCDOT turned off the bridge's "necklace lights" at night. They were turned back on later that year after several private entities made donations to fund the lights.
21st century
After the 2007 collapse of the I-35W bridge in Minneapolis, public attention focused on the condition of bridges across the U.S. The New York Times reported that the Brooklyn Bridge approach ramps had received a "poor" rating during an inspection in 2007. However, a NYCDOT spokesman said that the poor rating did not indicate a dangerous state but rather that the ramps required renovation. In 2010, the NYCDOT began renovating the approaches and deck, as well as repainting the suspension span. Work included widening two approach ramps from one to two lanes by re-striping a new prefabricated ramp; raising clearance over the eastbound BQE at York Street; seismic retrofitting; replacement of rusted railings and safety barriers; and road deck resurfacing. The work necessitated detours for four years. At the time, the project was scheduled for completion in 2014, but it was later delayed to 2015 and then again to 2017. The project's cost also increased from $508 million in 2010 to $811 million in 2016.
In August 2016, the NYCDOT announced that it would conduct a seven-month, $370,000 study to verify if the bridge could support a heavier upper deck that consisted of an expanded bicycle and pedestrian path. By then, about 10,000 pedestrians and 3,500 cyclists used the pathway on an average weekday. Work on the pedestrian entrance on the Brooklyn side was underway by 2017. The NYCDOT also indicated in 2016 that it planned to reinforce the Brooklyn Bridge's foundations to prevent it from sinking, as well as repair the masonry arches on the approach ramps, which had been damaged by Hurricane Sandy four years earlier. In July 2018, the New York City Landmarks Preservation Commission approved a further renovation of the Brooklyn Bridge's suspension towers and approach ramps. That December, the federal government gave the city $25 million in funding, which would pay for a $337 million rehabilitation of the bridge approaches and the suspension towers. Work started in late 2019 and was scheduled to be completed in four years. This restoration included removing bricks from the arches and putting fresh concrete behind them, using mortar from the same upstate quarries as the original mortar. The granite arches were also cleaned, revealing the original gray color of the stone, which had long been hidden by grime. Additionally, 56 LED lamps were installed on the bridge at a cost of $2.4 million.
In early 2020, City Council speaker Corey Johnson and the nonprofit Van Alen Institute hosted an international contest to solicit plans for the redesign of the bridge's walkway. Ultimately, in January 2021, the city decided to install a two-way protected bike path on the Manhattan-bound roadway, replacing the leftmost vehicular lane. The bike lane would allow the existing promenade to be used exclusively by pedestrians. Work on the bike lane started in June 2021, and the new path was completed on September 14, 2021. Despite the addition of the bike path, the bridge's walkway was still frequently overcrowded, prompting the city to propose in mid-2023 that street vendors be banned from the bridge and others citywide. All vendors were banned from the bridge at the beginning of January 2024. The same month, the bridge's new LED lights were illuminated for the first time.
A plan for congestion pricing in New York City was approved in mid-2023, allowing the Metropolitan Transportation Authority to toll drivers who enter Manhattan south of 60th Street. Congestion pricing was implemented in January 2025. Most traffic to and from FDR Drive is exempt from the toll, but all other Manhattan-bound drivers pay a toll, which varies based on the time of day.
Usage
Vehicular traffic
Horse-drawn carriages have been allowed to use the Brooklyn Bridge's roadways since its opening. Originally, each of the two roadways carried two lanes of traffic in a single direction, and the lanes were relatively narrow. In July 1922, motor vehicles were banned from the bridge; the ban lasted until May 1925.
After 1950, the main roadway carried six lanes of automobile traffic, three in each direction. It was then reduced to five lanes with the addition of a two-way bike lane on the Manhattan-bound side in 2021. Because of the roadway's posted height and weight restrictions, commercial vehicles and buses are prohibited from using the Brooklyn Bridge. The weight restrictions prohibit heavy passenger vehicles such as pickup trucks and SUVs from using the bridge, though this is not often enforced in practice.
On the Brooklyn side, vehicles can enter the bridge from Tillary/Adams Streets to the south, Sands/Pearl Streets to the west, and exit 28B of the eastbound Brooklyn-Queens Expressway. In Manhattan, cars can enter from both the northbound and southbound FDR Drive, as well as Park Row to the west, Chambers/Centre Streets to the north, and Pearl Street to the south. However, the exit from the bridge to northbound Park Row was closed after the September 11 attacks because of increased security concerns: that section of Park Row ran under One Police Plaza, the NYPD headquarters.
Exit list
Vehicular access to the bridge is provided by a complex series of ramps on both sides of the bridge. There are two entrances to the bridge's pedestrian promenade on either side. The current configuration was constructed from the mid-1950s up until the early 1970s. After the September 11 attacks, the ramp onto Park Row was closed to public traffic, and there are no plans to reopen it.
Rail traffic
Formerly, rail traffic operated on the Brooklyn Bridge as well. Cable cars and elevated railroads used the bridge until 1944, while trolleys ran until 1950.
Cable cars and elevated railroads
The New York and Brooklyn Bridge Railway, a cable car service, began operating on September 25, 1883; it ran on the inner lanes of the bridge, between terminals at the Manhattan and Brooklyn ends. Since Washington Roebling believed that steam locomotives would put excessive loads upon the structure of the Brooklyn Bridge, the cable car line was designed as a steam/cable-hauled hybrid, powered from a generating station under the Brooklyn approach. The cable cars could not only regulate their speed on the graded upward and downward approaches, but also maintain a constant interval between each other. There were 24 cable cars in total.
Initially, the service ran with single-car trains, but patronage soon grew so much that by October 1883, two-car trains were in use. The line carried three million people in the first six months, nine million in 1884, and nearly 20 million in 1885 following the opening of the Brooklyn Union Elevated Railroad. Accordingly, the track layout was rearranged and more trains were ordered. At the same time, there were highly controversial plans to extend the elevated railroads onto the Brooklyn Bridge, under the pretext of extending the bridge itself. After disputes, the trustees agreed to build two elevated routes to the bridge on the Brooklyn side. Patronage continued to increase, and in 1888, the tracks were lengthened and even more cars were constructed to allow for four-car cable car trains. Electric wires for the trolleys were added by 1895, allowing for the potential future decommissioning of the steam/cable system. The terminals were rebuilt once more in July 1895, and, following the implementation of new electric cars in late 1896, the steam engines were dismantled and sold.
Following the unification of the cities of New York and Brooklyn in 1898, the New York and Brooklyn Bridge Railway ceased to be a separate entity that June and the Brooklyn Rapid Transit Company (BRT) assumed control of the line. The BRT started running through-services of elevated trains, which ran from Park Row Terminal in Manhattan to points in Brooklyn via the Sands Street station on the Brooklyn side. Before reaching Sands Street (at Tillary Street for Fulton Street Line trains, and at Bridge Street for Fifth Avenue Line and Myrtle Avenue Line trains), elevated trains bound for Manhattan were uncoupled from their steam locomotives. The elevated trains were then coupled to the cable cars, which would pull the passenger carriages across the bridge.
The BRT did not run any elevated train through services from 1899 to 1901. Due to increased patronage after the opening of the Interborough Rapid Transit Company (IRT)'s first subway line, the Park Row station was rebuilt in 1906. In the early 20th century, there were plans for Brooklyn Bridge elevated trains to run underground to the BRT's proposed Chambers Street station in Manhattan, though the connection was never opened. The overpass across William Street was closed in 1913 to make way for the proposed connection. In 1929, the overpass was reopened after it became clear that the connection would not be built.
After the IRT's Joralemon Street Tunnel and the Williamsburg Bridge tracks opened in 1908, the Brooklyn Bridge no longer held a monopoly on rail service between Manhattan and Brooklyn, and cable service ceased. New subway lines from the IRT and from the BRT's successor Brooklyn–Manhattan Transit Corporation (BMT), built in the 1910s and 1920s, posed significant competition to the Brooklyn Bridge rail services. With the opening of the Independent Subway System in 1932 and the subsequent unification of all three companies into a single entity in 1940, the elevated services started to decline, and the Park Row and Sands Street stations were greatly reduced in size. The Fifth Avenue and Fulton Street services across the Brooklyn Bridge were discontinued in 1940 and 1941 respectively, and the elevated tracks were abandoned permanently with the withdrawal of Myrtle Avenue services in 1944.
Trolleys
A plan for trolley service across the Brooklyn Bridge was presented in 1895. Two years later, the Brooklyn Bridge trustees agreed to a plan where trolleys could run across the bridge under ten-year contracts. Trolley service, which began in 1898, ran on what are now the two middle lanes of each roadway (shared with other traffic). When cable service was withdrawn in 1908, the trolley tracks on the Brooklyn side were rebuilt to alleviate congestion. Trolley service on the middle lanes continued until 1944, when the elevated lines stopped using the bridge and the trolleys moved to the protected center tracks. On March 5, 1950, the streetcars also stopped running, and the bridge was converted to carry automobile traffic exclusively.
Walkway
The Brooklyn Bridge has an elevated promenade open to pedestrians in the center of the bridge, located above the automobile lanes. The promenade is usually located below the height of the girders, except at the approach ramps leading to each tower's balcony. The path's width is constrained by obstacles such as protruding cables, benches, and stairways, which create "pinch points" at certain locations. The path is at its narrowest where the main cables descend to the level of the promenade. Further exacerbating the situation, these "pinch points" are some of the most popular places to take pictures. As a result, in 2016, the NYCDOT announced that it planned to double the promenade's width.
A center line was painted to separate cyclists from pedestrians in 1971, creating one of the city's first dedicated bike lanes. Initially, the northern side of the promenade was used by pedestrians and the southern side by cyclists. In 2000, these were swapped, with cyclists taking the northern side and pedestrians taking the southern side. On September 14, 2021, the DOT closed off the inner-most car lane on the Manhattan-bound side with protective barriers and fencing to create a new bike path. Cyclists are now prohibited from the upper pedestrian lane.
Pedestrian access to the bridge from the Brooklyn side is from either the median of Adams Street at its intersection with Tillary Street or a staircase near Prospect Street between Cadman Plaza East and West. In Manhattan, the pedestrian walkway is accessible from crosswalks at the intersection of the bridge and Centre Street, or through a staircase leading to Park Row.
Emergency use
While the bridge has always permitted the passage of pedestrians, the promenade facilitates movement when other means of crossing the East River have become unavailable. During transit strikes by the Transport Workers Union in 1980 and 2005, people commuting to work used the bridge; they were joined by Mayors Ed Koch and Michael Bloomberg, who crossed as a gesture to the affected public. Pedestrians also walked across the bridge as an alternative to suspended subway services following the 1965, 1977, and 2003 blackouts, and after the September 11 attacks.
During the 2003 blackouts, many crossing the bridge reported a swaying motion. The higher-than-usual pedestrian load caused this swaying, which was amplified by the tendency of pedestrians to synchronize their footfalls with a sway. Several engineers expressed concern about how this would affect the bridge, although others noted that the bridge did withstand the event and that the redundancies in its design—the inclusion of the three support systems (suspension system, diagonal stay system, and stiffening truss)—make it "probably the best secured bridge against such movements going out of control". In designing the bridge, John Roebling had stated that the bridge would sag but not fall, even if one of these structural systems were to fail altogether.
Notable events
Stunts
There have been several notable jumpers from the Brooklyn Bridge. The first person was Robert Emmet Odlum, brother of women's rights activist Charlotte Odlum Smith, on May 19, 1885. He struck the water at an angle and died shortly afterwards from internal injuries. Steve Brodie supposedly dropped from underneath the bridge in July 1886 and was briefly arrested for it, though there is some doubt about whether he actually jumped. Larry Donovan made a slightly higher jump from the railing a month afterward. The first known person to jump from the bridge with the intention of suicide was Francis McCarey in 1892. A lesser known early jumper was James Duffy of County Cavan, Ireland, who on April 15, 1895, asked several men to watch him jump from the bridge. Duffy jumped and was not seen again. Additionally, the cartoonist Otto Eppers jumped and survived in 1910, and was then tried and acquitted for attempted suicide. The Brooklyn Bridge has since developed a reputation as a suicide bridge due to the number of jumpers who do so intending to kill themselves, though exact statistics are difficult to find.
Other notable feats have taken place on or near the bridge. In 1919, Giorgio Pessi piloted what was then one of the world's largest airplanes, the Caproni Ca.5, under the bridge. In 1993, bridge jumper Thierry Devaux illegally performed eight acrobatic bungee jumps above the East River close to the Brooklyn tower.
Crimes and terrorism
On March 1, 1994, Lebanese-born Rashid Baz opened fire on a van carrying members of the Chabad-Lubavitch Orthodox Jewish Movement, striking 16-year-old student Ari Halberstam and three others traveling on the bridge. Halberstam died five days later from his wounds, and Baz was later convicted of murder. He was apparently acting out of revenge for the Hebron massacre of Palestinian Muslims a few days prior to the incident. After initially classifying the killing as one committed out of road rage, the Justice Department reclassified the case in 2000 as a terrorist attack. The entrance ramp to the bridge on the Manhattan side was dedicated as the Ari Halberstam Memorial Ramp in 1995.
Several potential attacks or disasters have also been averted. In 1979, police disarmed a stick of dynamite placed under the Brooklyn approach, and an artist in Manhattan was arrested that year after another bombing attempt. In 2003, truck driver Iyman Faris was sentenced to about 20 years in prison for providing material support to Al-Qaeda, after an earlier plot to destroy the bridge by cutting through its support wires with blowtorches was thwarted.
Arrests
At 9:00 a.m. on May 19, 1977, artist Jack Bashkow climbed one of the towers for Bridging, a "media sculpture" by the performance group Art Corporation of America Inc. Seven artists climbed the largest bridges connected to Manhattan "to replace violence and fear in mass media for one day". When each of the artists had reached the tops of the bridges, they ignited bright-yellow flares at the same moment, resulting in rush hour traffic disruption, media attention, and the arrest of the climbers, though the charges were later dropped. Called "the first social-sculpture to use mass-media as art" by conceptual artist Joseph Beuys, the event was on the cover of the New York Post, received international attention, and received ABC Eyewitness News' 1977 Best News of the Year award. John Halpern documented the incident in the film Bridging, 1977. Halpern attempted another "bridging" "social sculpture" in 1979, when he planted a radio receiver, gunpowder and fireworks in a bucket atop one of the towers. The piece was later discovered by police, leading to his arrest for possessing a bomb.
On October 1, 2011, more than 700 protesters with the Occupy Wall Street movement were arrested while attempting to march across the bridge on the roadway. Protesters disputed the police account of the events and claimed that the arrests were the result of being trapped on the bridge by the NYPD. The majority of the arrests were subsequently dismissed.
On July 22, 2014, the two American flags on the flagpoles atop each tower were found to have been replaced by bleached-white American flags. Initially, cannabis activism was suspected as a motive, but on August 12, 2014, two Berlin artists claimed responsibility for hoisting the two white flags, having switched out the original flags with their replicas. The artists said that the flags were meant to celebrate "the beauty of public space" and the anniversary of the death of German-born John Roebling, and they denied that it was an "anti-American statement".
Anniversary celebrations
The 50th-anniversary celebrations on May 24, 1933, included a ceremony featuring an airplane show, ships, and fireworks, as well as a banquet. During the centennial celebrations on May 24, 1983, a flotilla of ships visited the harbor, officials held parades, and Grucci Fireworks held a fireworks display that evening. For the centennial, the Brooklyn Museum exhibited a selection of the original drawings made for the bridge's construction, including those by Washington Roebling. Media coverage of the centennial was declared "the public relations triumph of 1983" by Inc.
The 125th anniversary of the bridge's opening was celebrated by a five-day event on May 22–26, 2008, which included a live performance by the Brooklyn Philharmonic, a special lighting of the bridge's towers, and a fireworks display. Other events included a film series, historical walking tours, information tents, a series of lectures and readings, a bicycle tour of Brooklyn, a miniature golf course featuring Brooklyn icons, and other musical and dance performances. Just before the anniversary celebrations, artist Paul St George installed the Telectroscope, a video link on the Brooklyn side of the bridge that connected to a matching device on London's Tower Bridge. A renovated pedestrian connection to Dumbo, Brooklyn, was also reopened before the anniversary celebrations.
Impact
At the time of construction, contemporaries marveled at what technology was capable of, and the bridge became a symbol of the era's optimism. John Perry Barlow wrote in the late 20th century of the "literal and genuinely religious leap of faith" embodied in the bridge's construction, saying that the "Brooklyn Bridge required of its builders faith in their ability to control technology".
Historical designations and plaques
The Brooklyn Bridge has been listed as a National Historic Landmark since January 29, 1964, and was subsequently added to the National Register of Historic Places on October 15, 1966. The bridge has also been a New York City designated landmark since August 24, 1967, and was designated a National Historic Civil Engineering Landmark in 1972. In addition, it was placed on UNESCO's list of tentative World Heritage Sites in 2017.
A bronze plaque is attached to the Manhattan anchorage, which was constructed on the site of the Samuel Osgood House at 1 Cherry Street in Manhattan. Named after Samuel Osgood, a Massachusetts politician and lawyer, it was built in 1770 and served as the first U.S. presidential mansion. The Osgood House was demolished in 1856.
Another plaque on the Manhattan side of the pedestrian promenade, installed by the city in 1975, indicates the bridge's status as a city landmark.
Culture
The Brooklyn Bridge has had an impact on idiomatic American English. For example, references to "selling the Brooklyn Bridge" are frequent in American culture, sometimes presented as a historical reality but more often as an expression meaning an idea that strains credulity. George C. Parker and William McCloundy were two early 20th-century con men who may have perpetrated this scam successfully, particularly on new immigrants, although the author of The Brooklyn Bridge: A Cultural History wrote, "No evidence exists that the bridge has ever been sold to a 'gullible outlander'".
As a tourist attraction, the Brooklyn Bridge is a popular site for clusters of love locks, wherein a couple inscribes a date and their initials onto a lock, attaches it to the bridge, and throws the key into the water as a sign of their love. The practice is illegal in New York City, and the NYPD can give violators a $100 fine. NYCDOT workers periodically remove the love locks from the bridge at a cost of $100,000 per year.
To highlight the Brooklyn Bridge's cultural status, the city proposed building a Brooklyn Bridge museum near the bridge's Brooklyn end in the 1970s. Though the museum was ultimately not constructed, as many as 10,000 drawings and documents relating to the bridge were found in a carpenter's shop in Williamsburg in 1976. These documents were given to the New York City Municipal Archives, where they are normally kept, though a selection of them was displayed at the Whitney Museum of American Art after they were discovered.
Media
The bridge is often featured in wide shots of the New York City skyline in television and film and has been depicted in numerous works of art. Fictional works have used the Brooklyn Bridge as a setting; for instance, the dedication of a portion of the bridge, and the bridge itself, were key components in the 2001 film Kate & Leopold. Furthermore, the Brooklyn Bridge has also served as an icon of America, with mentions in numerous songs, books, and poems. Among the most notable of these works is that of American Modernist poet Hart Crane, who used the Brooklyn Bridge as a central metaphor and organizing structure for his second book of poetry, The Bridge (1930).
The Brooklyn Bridge has also been lauded for its architecture. One of the first positive reviews was "The Bridge As A Monument", a Harper's Weekly piece written by architecture critic Montgomery Schuyler and published a week after the bridge's opening. In the piece, Schuyler wrote: "It so happens that the work which is likely to be our most durable monument, and to convey some knowledge of us to the most remote posterity, is a work of bare utility; not a shrine, not a fortress, not a palace, but a bridge." Architecture critic Lewis Mumford cited the piece as the impetus for serious architectural criticism in the U.S. He wrote that in the 1920s the bridge was a source of "joy and inspiration" in his childhood, and that it was a profound influence in his adolescence. Later critics would regard the Brooklyn Bridge as a work of art, as opposed to an engineering feat or a means of transport. Not all critics appreciated the bridge, however. Henry James, writing in the early 20th century, cited the bridge as an ominous symbol of the city's transformation into a "steel-souled machine room".
The construction of the Brooklyn Bridge is detailed in numerous media sources, including David McCullough's 1972 book The Great Bridge and Ken Burns's 1981 documentary Brooklyn Bridge. It is also described in Seven Wonders of the Industrial World, a BBC docudrama series with an accompanying book, as well as Chief Engineer: Washington Roebling, The Man Who Built the Brooklyn Bridge, a biography published in 2017.
See also
Brooklyn Bridge Park
Brooklyn Bridge trolleys
List of bridges and tunnels in New York City
List of bridges and tunnels on the National Register of Historic Places in New York
List of bridges documented by the Historic American Engineering Record in New York
List of National Historic Landmarks in New York City
List of New York City Designated Landmarks in Manhattan below 14th Street
List of New York City Designated Landmarks in Brooklyn
List of tallest structures built before the 20th century
National Register of Historic Places listings in Manhattan below 14th Street
National Register of Historic Places listings in Brooklyn
References
Notes
Citations
Bibliography
External links
Brooklyn Bridge – New York City Department of Transportation
Brooklyn Bridge at Historical Marker Database
1883 establishments in New York (state)
Bike paths in New York City
Bridges completed in 1883
Bridges in Brooklyn
Bridges in Manhattan
Bridges on the National Register of Historic Places in New York City
Bridges over the East River
Brooklyn Heights
Brooklyn–Manhattan Transit Corporation
Buildings and structures on the National Register of Historic Places in Manhattan
Civic Center, Manhattan
Dumbo, Brooklyn
Former railway bridges in the United States
Historic American Engineering Record in New York City
Historic Civil Engineering Landmarks
National Historic Landmarks in New York City
National Register of Historic Places in Brooklyn
New York City Designated Landmarks in Brooklyn
New York City Designated Landmarks in Manhattan
New York State Register of Historic Places in Kings County
New York State Register of Historic Places in New York County
Pedestrian bridges in New York City
Railroad bridges in New York City
Railroad bridges on the National Register of Historic Places in New York City
Railroad-related National Historic Landmarks
Road bridges in New York City
Road bridges on the National Register of Historic Places in New York City
Road-rail bridges in the United States
Steel bridges in the United States
Suspension bridges in New York City
Symbols of New York City
Tourist attractions in Brooklyn
Tourist attractions in Manhattan | Brooklyn Bridge | Engineering | 14,006 |
25,687,934 | https://en.wikipedia.org/wiki/List%20of%20transport%20megaprojects | This is a list of megaprojects within the transport sector. Take care when comparing the cost of projects from different times, even a few years apart, because of inflation; comparing nominal costs without taking this into account can be highly misleading. Note that inflation-adjusted values are current only as of the date they were calculated.
According to the Oxford Handbook of Megaproject Management in 2017, "Megaprojects are large-scale, complex ventures that typically cost $1 billion or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, and impact millions of people".
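To make the inflation caveat noted above concrete, here is a minimal sketch of re-expressing a nominal project cost in another year's prices using a price index; the index values and cost are hypothetical, and a real comparison would use published CPI or construction-cost index data:

```python
# Minimal sketch of adjusting a nominal cost to another year's prices with a
# price index. The index values and cost below are made-up placeholders.

def adjust_cost(nominal_cost: float, index_original_year: float, index_target_year: float) -> float:
    """Re-express a nominal cost in target-year prices."""
    return nominal_cost * (index_target_year / index_original_year)

# e.g., a project reported at $5.0 billion when the index stood at 200,
# re-expressed in a later year when the index stands at 260:
print(adjust_cost(5.0e9, index_original_year=200.0, index_target_year=260.0))  # 6.5e9
```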
Completed projects
Partially completed and open
Under construction
Suspended or abandoned
Proposed
Airport projects
Notes
References
Infrastructure-related lists
Megaprojects
Lists of most expensive things | List of transport megaprojects | Physics,Engineering | 153 |
2,475,889 | https://en.wikipedia.org/wiki/Dial%20%28measurement%29 | A dial is generally a flat surface, circular or rectangular, with numbers or similar markings on it, used for displaying the setting or output of a timepiece, radio, clock, watch, or measuring instrument. Many scientific and industrial instruments use dials with pointers to indicate physical properties. Examples include pressure and vacuum gauges, fluid-level gauges (for fuel, engine oil, and so on), voltmeters and ammeters, thermometers and hygrometers, speedometers and tachometers, and indicators (distance amplifying instruments).
Traditionally these have been mechanical devices, but with the advent of electronic displays, analog dials are often simulated from digital measurements.
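As a rough illustration of how a simulated analog dial can be driven from a digital measurement, the sketch below maps a reading onto a needle angle; the 0 to 100 scale and the 270-degree sweep are arbitrary assumptions:

```python
# Minimal sketch of mapping a digital measurement to a simulated dial's needle
# angle. The scale range and angular sweep are arbitrary assumptions.

def needle_angle(value: float, v_min: float, v_max: float,
                 angle_min: float = -135.0, angle_max: float = 135.0) -> float:
    """Linearly map a measurement onto the dial's angular sweep, clamping off-scale readings."""
    value = max(v_min, min(v_max, value))          # clamp to the scale's ends
    fraction = (value - v_min) / (v_max - v_min)   # 0.0 at scale start, 1.0 at scale end
    return angle_min + fraction * (angle_max - angle_min)

print(needle_angle(75.0, v_min=0.0, v_max=100.0))  # 67.5 degrees on a 0-100 scale
```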
The term may also refer to a movable control knob used to change the settings of the controlled instrument, for example, to change the frequency of the radio, or the desired temperature on a thermostat.
Styles of dials:
Circular,
Fixed pointer with moving scale,
Fixed scale with moving dial.
Examples of dial usage:
Pressure and vacuum gauges,
Level gauges,
Volt and current meters,
Thermometers and thermostats (mechanical),
Speedometers and tachometers.
Mirror dials are designed to reduce or eliminate the effect of parallax. They usually consist of a small mirrored strip running parallel to the graduations of the scale under the pointer. When the observer moves his position so that the pointer obscures the pointer's reflection in the mirror, an accurate reading may be taken.
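The geometry behind this can be sketched simply: viewing the pointer off-axis shifts its apparent position on the scale by roughly the pointer-to-scale gap times the tangent of the viewing angle, and aligning the pointer with its reflection forces that angle toward zero. The gap and angle values below are illustrative assumptions:

```python
# Rough sketch of parallax error on a pointer-and-scale instrument. The gap
# and viewing angles are illustrative assumptions.
import math

def parallax_error(gap_mm: float, viewing_angle_deg: float) -> float:
    """Apparent shift (mm) of the pointer on the scale for an off-axis observer."""
    return gap_mm * math.tan(math.radians(viewing_angle_deg))

print(parallax_error(gap_mm=2.0, viewing_angle_deg=20.0))  # ~0.73 mm shift when viewed off-axis
print(parallax_error(gap_mm=2.0, viewing_angle_deg=0.0))   # 0.0 mm head-on, which the mirror enforces
```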
See also
Dial (disambiguation)
Sundial
References
Measuring instruments | Dial (measurement) | Technology,Engineering | 334 |
23,355,975 | https://en.wikipedia.org/wiki/Denomination%20effect | The denomination effect is a form of cognitive bias relating to currency, suggesting people may be less likely to spend larger currency denominations than their equivalent value in smaller denominations. It was proposed by Priya Raghubir, professor at the New York University Stern School of Business, and Joydeep Srivastava, professor at University of Maryland, in their 2009 paper "Denomination Effect".
Raghubir and Srivastava conducted three studies in their research on the denomination effect; their findings suggested people may be more likely to spend money represented by smaller denominations and that consumers may prefer to receive money in a large denomination when there is a need to control spending. The denomination effect can occur when large denominations are perceived as less exchangeable than smaller denominations.
The effect's influence on spending decisions has implications throughout various sectors in society, including consumer welfare, monetary policy and the finance industry. For example, during the Great Recession, one businessman observed employees using more coins rather than banknotes in an office vending machine, perceiving the customers used coins to feel thriftier. Raghubir and Srivastava also suggested the effect may involve incentives to alter future behavior and that a large denomination can serve as a mechanism to prevent the urge to spend.
Raghubir and Srivastava experiment
Raghubir and Srivastava conducted three distinct studies as part of their experiment. Their first experiment involved 89 undergraduate students from two United States universities. As a cover story, the students were thanked for their participation and randomly given either a small denomination (four quarters) or a large denomination ($1 bill) and told they could keep or spend the money on confectionery. Small denominations were given to 43 students (48% of study group) and 46 students (52% of study group) were given large denominations. Approximately 44% (39/89) of the participants, in both conditions, chose to purchase confectionery. About 63% of the participants with the four quarters purchased candy, yet only 26% of the participants with the $1 bill spent money, suggesting the students were more inclined to spend when given a smaller denomination.
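As a back-of-envelope illustration (not the authors' own analysis), the reported percentages imply roughly 27 of 43 quarter recipients and 12 of 46 bill recipients made a purchase; a simple two-proportion z-test on those reconstructed counts shows how large the contrast is:

```python
# Back-of-envelope check of study 1's headline contrast, using counts
# reconstructed from the reported percentages (about 63% of 43 ~= 27 spenders;
# about 26% of 46 ~= 12 spenders). Illustrative only, not the authors' analysis.
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

print(round(two_proportion_z(27, 43, 12, 46), 2))  # ~3.49, a difference unlikely to be chance alone
```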
In a second study, 75 gas-station customers were each asked to participate in a short survey on gas usage. Each participant was given $5 as either five $1 bills, five $1 coins, or one $5 bill and told they could spend the money at the gas station store. Customers who were given five $1 bills were more likely to buy something than customers receiving a single $5 bill. Customers who received five $1 coins had the lowest likelihood of spending; however, the $1 coin is in low circulation and some are retained as souvenirs.
A third study sought to understand whether the effect was particular to American culture. In China, 150 housewives were given an envelope of money in exchange for completing a survey, containing either a single Renminbi (CNY) 100 banknote or five banknotes of equivalent value (in 2009, CNY 100 was equivalent to roughly $14.63 USD or €10.40 EUR). The cash represented a significant amount of money based on the monthly income of the participants, as 18.7% (28/150) earned less than CNY 300, 65% (97/150) earned between CNY 301 and 600, and 16.7% (25/150) earned over CNY 600. The average household size was about 3.3 people in both conditions. Participants who purchased household items reported less satisfaction if they had received the single large banknote than those who had spent smaller denominations.
Early studies
One study, conducted by marketing professors Arul Mishra, Himanshu Mishra and Dhananjay Nayakankuppam in 2006, documented a phenomenon whereby consumers were less willing to spend a large denomination than an equivalent amount in smaller denominations. In the study, they concluded that people give higher value to a single large denomination because the transaction is more difficult to process, leading people to overvalue it and making them less likely to spend it compared to an identical amount in smaller denominations. Unlike Mishra et al., who studied purchase intentions, Raghubir and Srivastava examined actual purchase decisions.
Previous research by Raghubir and Srivastava in 2008 found a higher inclination to spend using alternative payment methods, such as a credit or gift cards. Their experiment built on earlier research studies, including one by Harvard business professor John Gourville in 1998, which showed that people are more likely to analyze a transaction positively when the same amount of money is presented as an equally distributed sum each day instead of a single lump sum each year.
Conclusions
Raghubir and Srivastava concluded in study 1 that people are more likely to spend when an equivalent amount of money is represented by a smaller denomination relative to a single large denomination. In study 2, they concluded that consumers prefer to receive money in a large denomination compared to small denominations when there is a need to control spending. Study 3 further proves that the denomination effect depends on an individual's desire to reduce the uneasy feeling associated with spending money. The denomination effect occurs because people perceive a large denomination as less replaceable than smaller denominations, which can be used to control and regulate spending.
In 2009, Sean Gregory with Time magazine explained that consumers view large denominations as more valuable than smaller denominations and that they tend to isolate the cash in their minds. Each smaller denomination $20 bill, he noted, is a less valuable entity than the single large denomination $100 bill. It's easier to spend five $20 bills than it is to spend a single $100 bill. Gregory also added that consumers fear breaking a single large denomination because they won't be able to stop spending the change.
The researchers suggested that the denomination effect may involve imposing self-constraints or incentives to alter future behavior, noting a large denomination can serve as a precommitment mechanism to prevent the urge to spend compared to small denominations.
Applications
Raghubir and Srivastava believe the influence of denomination on spending decisions has implications for consumer welfare and monetary policy. Raghubir suggested offering smaller denominations to encourage spending and proposed increasing the circulation of $1 coins and introducing $2 coins in the United States.
In 2012, Gary Belsky and Tom Gilovich of Time magazine stated that Raghubir and Srivastava's results were consistent with what they called mental accounting, suggesting small denomination banknotes tend to get assigned to a "mental petty cash account" to spend on trivial things. In contrast, larger denomination banknotes are perceived as "real money" and likely to spend on things of greater importance.
A 2009 National Public Radio report noted that as the recession worsened, a Sacramento businessman noticed that people were using more coins, rather than banknotes, in his office vending machine. The businessman believed the consumers were feeling economic hardship and that using coins instead of banknotes made them feel thriftier.
John Manning, a columnist at the International Banker, noted that the effect surfaces in finance, where investors tend to spend or trade less when an asset's value is presented in larger units. Manning cited the example of a stock split, in which the number of shares is increased by a certain ratio and the share price diminished by the same factor, so that the company's total equity value remains the same. Stock splits are done largely because of the denomination effect, as the belief is that a less expensive share price can increase demand for the stock.
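A minimal sketch of that stock-split arithmetic, using made-up share counts and prices:

```python
# Illustrative stock-split arithmetic: shares are multiplied by the split ratio
# and the price divided by it, leaving total equity value unchanged.
# The figures below are made-up placeholders.

def split(shares: int, price: float, ratio: int) -> tuple[int, float]:
    """Return (shares, price) after a ratio-for-1 stock split."""
    return shares * ratio, price / ratio

shares, price = 1_000_000, 300.0
new_shares, new_price = split(shares, price, ratio=3)

print(shares * price)          # 300,000,000.0 market value before the split
print(new_shares * new_price)  # 300,000,000.0 market value after (unchanged)
```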
See also
Availability heuristic
Mean reversion (finance)
Money illusion
Oscar's grind
References
Citations
Sources
Behavioral finance
Cognitive biases
Consumer behaviour | Denomination effect | Biology | 1,552 |
24,589,706 | https://en.wikipedia.org/wiki/Deep%20Green%20Resistance | Deep Green Resistance (DGR) is a radical environmental movement that perceives the existence of industrial civilization itself as the greatest threat to the natural environment, and calls for its dismantlement and a return to a pre-agricultural level of technology. Although DGR operates as an aboveground group, it calls on others to use underground and violent tactics such as attacks on infrastructure or assassination. A repeated claim in DGR literature is that acts of sabotage could cause a cascading effect and lead to the end of civilization. DGR and far-right ecofascists use similar accelerationist and anti-majoritarian tactics, seeking systemic collapse.
DGR is widely denounced by other radical environmentalists, even those who support sabotage, because of "the group’s vanguardism, its disregard for billions of already-precarious human lives dependent on agriculture, its self-defeating attacks on anarchism and veganism, and the virulent transphobia of the group’s leaders, Lierre Keith and Derrick Jensen". Some Native American and other environmental groups have refused to work with DGR because of its controversial stance on transgender issues.
Beliefs
In the 2011 book Deep Green Resistance, the authors Lierre Keith, Derrick Jensen and Aric McBay state that civilization, particularly industrial civilization, is fundamentally unsustainable and must be actively and urgently dismantled in order to secure a future for all species on the planet.
DGR calls for the dismantling of industrial civilization, and the return to a pre-agricultural lifestyle.
Tactics
DGR operates as an aboveground movement and requires members to take a nonviolence pledge as of 2019, calling on others to use underground and violent tactics such as attacks on infrastructure or assassination. DGR is one of very few environmental groups to endorse lethal violence as sometimes justified. A repeated claim in DGR literature is that acts of sabotage could cause a cascading effect and lead to the end of civilization. Because the organization advocates sabotage and violence, which it views as necessary tactics to achieve its goal of dismantling industrialized society and capitalism, it can be classified as an apocalyptic or millenarian movement. DGR and far-right ecofascist groups such as The Green Brigade share similar tactics and an anti-majoritarian and vanguardist approach to activism, and both are accelerationist, seeking systemic collapse.
In 2017, DGR filed a lawsuit against the state of Colorado arguing that the Colorado River should be recognized as a legal person. The lawsuit was dismissed in 2019.
An article in Journal of Strategic Security describes the group as a "worrying bioterrorism threat", citing its strategy and propensity towards violence. Beginning in 2014, the FBI investigated Deep Green Resistance.
Criticism
Anarcho-primitivists John Zerzan, Kevin Tucker and others criticize DGR's promotion of hierarchy in organizing an underground resistance, the code of conduct, the historical understanding of revolution and radical history, and the cult of personality around Jensen and Keith. Michelle Renée Matisons and Alexander Reid Ross of the Institute for Anarchist Studies have accused DGR of "emulating right-wing militia rhetoric, with the accompanying hierarchical vanguardism, personality cultism, and reactionary moralism."
How to Blow Up a Pipeline author Andreas Malm—who argues that some forms of infrastructural sabotage are justified to advance the environmental movement—condemned DGR, arguing its proposals, if implemented, would spell disaster for the vast majority of people in the world.
Anti-trans views
DGR describes itself as a radical feminist organization, and has been described by critics as transphobic and TERF. The organisation has described hormone therapy for transgender youth as eugenics and excludes transgender women from women's spaces, while Keith has compared gender transitioning to mutilation. In 2019, Jensen, Keith, as well as DGR activist Max Wilbert published an article in Feminist Current saying "Hands up everyone who predicted that when Big Brother arrived, he’d be wearing a dress, hauling anyone who refuses to wax his ladyballs before a human rights tribunal, and bellowing ‘It’s Ma’am!’" Keith linked the group's views on transgender issues to the environment, claiming that trans women "want to violate the basic boundaries of women" and comparing that to "violating the boundaries of forests and rivers and prairies". During the fight against the Thacker Pass lithium mine, some members of DGR formed another group called Protect Thacker Pass without disclosing their affiliation with DGR. They worked with local Native American group People of Red Mountain, which broke off the affiliation saying that DGR members had not been transparent about their anti-trans views.
In 2012, founder McBay left the group, saying that it promoted transphobia. Earth First! Journal repudiated DGR in 2013 and said that it would "no longer print or in any way promote DGR material" because of its leaders' anti-transgender stances. In 2022, during the resistance to the Thacker Pass Lithium Mine, Indigenous group People of Red Mountain broke ties with attorney and DGR member Will Falk, citing transphobia as the reason. Other environmental groups involved in opposing the Thacker Pass project have distanced themselves from DGR. The organization has also faced criticism for its association with Jennifer Bilek, an investigative journalist, who has, with antisemitic connotations, argued that transgender rights are a transhumanist conspiracy.
See also
Deep ecology
Ecofeminism
Eco-terrorism
Luddite
Radical environmentalism
References
Further reading
External links
Could climate change fuel eco terrorism?, Deutsche Welle 14.05.2020
Environmental organizations based in the United States
Deep ecology
Radical environmentalism
Simple living
Anti-consumerist groups
Anti-capitalism
Organizations that oppose transgender rights in the United States
Anarcho-primitivism
Dark green environmentalism | Deep Green Resistance | Biology,Environmental_science | 1,197 |
410,419 | https://en.wikipedia.org/wiki/Overpass | An overpass, called an overbridge or flyover (for a road only) in the United Kingdom and some other Commonwealth countries, is a bridge, road, railway or similar structure that is over another road or railway. An overpass and underpass together form a grade separation. Stack interchanges are made up of several overpasses.
History
The world's first railroad flyover was constructed in 1843 by the London and Croydon Railway at Norwood Junction railway station to carry its atmospheric railway vehicles over the Brighton Main Line.
Highway and road
In North American usage, a flyover is a high-level overpass, built above main overpass lanes, or a bridge built over what had been an at-grade intersection. Traffic engineers usually refer to the latter as a grade separation. A flyover may also be an extra ramp added to an existing interchange, either replacing an existing cloverleaf loop (or being built in place of one) with a higher, faster ramp that eventually bears left, but may be built as a right or left exit.
A cloverleaf or partial cloverleaf contains some 270 degree loops, which can slow traffic and can be difficult to construct with multiple lanes. Where all such turns are replaced with flyovers (perhaps with some underpasses) only 90 degree turns are needed, and there may be four or more distinct levels of traffic. Depending upon design, traffic may flow in all directions at or near open road speeds (when not congested). For more examples, see Freeway interchange.
Pedestrian
A pedestrian overpass allows traffic to pass without affecting pedestrian safety.
Railway
Railway overpasses are used to replace level crossings (at-grade crossings) as a safer alternative. Using overpasses allows for unobstructed rail traffic to flow without conflicting with vehicular and pedestrian traffic. Rapid transit systems use complete grade separation of their rights of way to avoid traffic interference with frequent and reliable service.
Railroads also use balloon loops and flying junctions instead of flat junctions, as a way to reverse direction and to avoid trains conflicting with those on other tracks.
Gallery
See also
Footbridge
Skyway
Stack interchange
Viaduct
Wildlife crossing
References
External links
Bridges
Railway buildings and structures
Road infrastructure
pl:Estakada | Overpass | Engineering | 445 |
23,853,172 | https://en.wikipedia.org/wiki/Bike%20Arc | Bike Arc LLC, located in downtown Palo Alto, California, is a Silicon Valley startup that designs secure bicycle parking racks and systems. It was founded by Joseph Bellomo and Jeff Selzer in 2008. Jeff Selzer sits on the board of directors of the Silicon Valley Bicycle Coalition and is the General Manager of Palo Alto Bicycles. Joseph Bellomo, a California-licensed architect, is the founder and owner of Joseph Bellomo Architects, Inc. in Palo Alto, which he founded in 1986. In addition to collaborating on Bike Arc, Mr. Bellomo and Mr. Selzer also worked together on the Palo Alto Bikestation at the Caltrain depot.
In 2009, the American Institute of Architects, California Council, gave Bike Arc the Honor Award for Small Projects.
Philosophy
Mr. Bellomo and Mr. Selzer—both bicycle enthusiasts—set out to design bicycle-parking systems that do not touch, and thus potentially damage, bicycles, and that prevent bicycles from contacting each other, which can cause wear and tear.
Products
Bike Arc is patented in both the United States and in Europe (Office for Harmonization In the Internal Market), and the company offers multiple iterations of the same fundamental concept: a modular structure of steel arcs. The Bike Arc family includes the Rack Arc, the Half Arc, the Umbrella Arc, the Tube Arc, the Car Arc, the Bus Arc, the House Arc, and the Ad Arc.
All Bike Arc products are manufactured in the United States.
Installations
The City of Palo Alto has purchased and installed multiple Bike Arc products, including numerous Rack Arcs, Half Arcs and Umbrella Arcs, and Bike Arcs are also in public spaces in Boston, Las Vegas, Redwood City, California, and Norfolk, Virginia. They have also been installed at the University of Buffalo and the University of Nebraska, as well as—among others—at Juniper Networks, Inc., Varian Medical Systems, Inc., and the Seattle Repertory Theater.
See also
Bicycle locker
Bicycle parking
Bicycle
Cycling
Sustainable design
Sustainable architecture
Urban planning
City planning
References
Architect's Newspaper, "Curve Your Wheels"
San Francisco Chronicle, "Bicycle rack yields design for modular homes"
Palo Alto Online, "Business owners create bike racks that's state of the arc"
SlashGear, "Bike Arc’s Car Arc Solar Powered Car Port Keeps Your Electric Cars and Bikes Charged"
Jetson Green, "Bike Arc Modular Bike Park System"
External links
Bike Arc homepage
The American Institute of Architects California Council Design Awards, "Merit Award for Small Projects"
2008 establishments in California
Bicycle parking
Urban planning
Bicycles
Cycle parts manufacturers
Industrial design firms
Buildings and structures in Palo Alto, California
Companies based in Palo Alto, California | Bike Arc | Engineering | 545 |
5,497,461 | https://en.wikipedia.org/wiki/William%20Bate%20Hardy | Sir William Bate Hardy, FRS (6 April 1864 – 23 January 1934) was a British biologist and food scientist. The William Bate Hardy Prize is named in his honour.
Life
He was born in Erdington, a suburb of Birmingham, the son of William Hardy of Llangollen and his wife Sarah Bate. Educated at Framlingham College, he graduated with a Master of Arts from the University of Cambridge in 1888, where he carried out biochemical research. He first suggested the word hormone to E.H. Starling.
He was elected a Fellow of the Royal Society in June 1902, and delivered their Croonian Lecture in 1905, their Bakerian Lecture (jointly) in 1925 and won their Royal Medal in 1926.
Hardy delivered the Guthrie lecture to the Physical Society in 1916.
In 1920 Hardy, in cooperation with Sir Walter Morley Fletcher, the secretary of the Medical Research Committee, persuaded the trustees of the Sir William Dunn legacy to use the money for research in biochemistry and pathology. To this end they funded Professor Sir Frederick Gowland Hopkins (1861–1947) in Cambridge with a sum of £210,000 in 1920 for the advancement of his work in biochemistry. Two years later they endowed Professor Georges Dreyer (1873–1934) of the Oxford University with a sum of £100,000 for research in pathology. The money enabled each of the recipients to establish a chair and sophisticated teaching and research laboratories, the Sir William Dunn Institute of Biochemistry at Cambridge and the Sir William Dunn School of Pathology at Oxford. Between them, the two establishments have yielded ten Nobel Prize winners, including Hopkins, for the discovery of vitamins, and professors Howard Florey and Ernst Chain (Oxford), for their developmental work on penicillin.
Hardy also made significant contributions to the field of tribology. Alongside Ida Doubleday, he introduced the concept of boundary lubrication. Hardy was named as one of the 23 "Men of Tribology" by Duncan Dowson.
Hardy was knighted in 1925.
Death
Hardy died at his home in Cambridge on 23 January 1934.
His long-time friend, Sir James Hopwood Jeans, elected as president of the British Science Association after Hardy's death, briefly eulogized him in the opening address to the Association's September 1934 meeting in Aberdeen:
The journal Nature commented on his death in a two-page article, lamenting that "science has lost a great captain and Great Britain a great public servant."
Family
William Bate Hardy married Alice Mary Finch in Cambridge in 1898.
References
1864 births
1934 deaths
Scientists from Birmingham, West Midlands
English biologists
Fellows of the Royal Society
Royal Medal winners
Tribologists | William Bate Hardy | Materials_science | 542 |
53,551,430 | https://en.wikipedia.org/wiki/ISCB%20Fellow | ISCB Fellowship is an award granted to scientists that the International Society for Computational Biology (ISCB) judges to have made “outstanding contributions to the fields of computational biology and bioinformatics”. There are 76 Fellows of the ISCB, including Michael Ashburner, Alex Bateman, Bonnie Berger, Steven E. Brenner, Janet Kelso, Daphne Koller, Michael Levitt, Sarah Teichmann and Shoshana Wodak. See List of Fellows of the International Society for Computational Biology for a comprehensive listing.
Fellows of the International Society for Computational Biology
The first seven fellows of the ISCB were laureates of the ISCB Senior Scientist Award from 2003 to 2009:
Webb Miller
David Haussler
Temple F. Smith
Michael Waterman
Janet Thornton
David J. Lipman
David Sankoff
Since 2009, new fellows have been nominated from the community of ISCB members and voted on annually by a selection committee. New fellows are traditionally inaugurated at the annual Intelligent Systems for Molecular Biology (ISMB) conference.
References
Bioinformatics
Computational biology
Academic awards | ISCB Fellow | Engineering,Biology | 217 |
29,501,475 | https://en.wikipedia.org/wiki/Kik%20Messenger | Kik Messenger, commonly called Kik, is a freeware instant messaging mobile app from the Canadian company Kik Interactive, available on iOS and Android operating systems.
The application uses a smartphone's internet connection to transmit and receive messages, photos, videos, sketches, mobile web pages, and other content after users register a username.
Kik is known for its features preserving users' anonymity, such as allowing users to register without the need to provide a telephone number or valid email address. However, the application does not employ end-to-end encryption, and the company also logs user IP addresses, which could be used to determine the user's ISP and approximate location. This information, as well as "reported" conversations, are regularly surrendered upon request by law enforcement organizations, sometimes without the need for a court order.
Kik was originally intended to be a music-sharing app before transitioning to messaging, briefly offering users the ability to send a limited number of SMS text messages directly from the application.
During the first 15 days after Kik's re-release as a messaging app, over 1 million accounts were created. In May 2016, Kik Messenger announced that it had approximately 300 million registered users and was used by approximately 40% of teenagers in the United States.
Kik Messenger was acquired by Medialab Technology in October 2019.
History
Kik Interactive was founded in 2009 by a group of students from the University of Waterloo in Canada who wished to create new technologies for use on mobile smartphones. Kik Messenger is the first app developed by Kik Interactive, and was released on October 19, 2010. Within 15 days of its release, Kik Messenger reached one million user registrations, with Twitter being credited as a catalyst for the new application's popularity.
On November 24, 2010, Research In Motion (RIM) removed Kik Messenger from BlackBerry App World and limited the functionality of the software for its users. RIM also sued Kik Interactive for patent infringement and misuse of trademarks. In October 2013, the companies settled the lawsuit, with the terms undisclosed.
In November 2014, Kik announced a $38.3 million Series C funding round and its first acquisition, buying GIF Messenger "Relay". The funding was from Valiant Capital Partners, Millennium Technology Value Partners, and SV Angel. By this time, Kik had raised a total of $70.5 million.
On August 16, 2015, Kik received a $50 million investment from Chinese Internet giant Tencent, the parent company of the popular Chinese messaging service WeChat. The investment earned the company a billion dollar valuation. Company CEO Ted Livingston stated Kik's aspirations to become "the WeChat of the West" and said that attracting younger users was an important part of the company's strategy.
In March 2016, the arrest of registered sex offender Thomas Paul Keeler II uncovered more than 200 Kik groups dedicated to the distribution and sale of child pornography on the site. In September 2016, CBS News published a story on the murder of 13-year-old Nicole Lovell, highlighting how easy it was to track and abuse children on Kik and calling it a "predator's paradise". Kik released a public statement through CBS on June 3, 2017 stating, "We take online safety very seriously, and we're constantly assessing and improving our trust and safety measures. Nicole Lovell suffered a terrible tragedy and our sincere condolences continue to go out to her family. Since the time of the incident, Kik has taken a variety of proactive measures to help increase safety on our platform." In an online investigation in August 2017, Forbes staffers signed up for the service posing as 14-year-old girls and encountered 20 profiles of charged or sentenced pedophiles. In November 2017, Kik Messenger was removed from the Windows Store. As of 23 January 2018, neither the developers nor Microsoft have provided a statement or an explanation on the removal of the app. In January 2018, Kik updated its Terms of Service and Community Standards to "make Kik a more respectful and fun place". The site also introduced a moderation and trust and safety team to enforce the new community standards.
Also in 2017, Kik decided against more VC funding, instead raising nearly $100 million in a high-profile initial coin offering (ICO) on the Ethereum blockchain. In this crowd sale, they sold "Kin" digital tokens to the contributors.
In July 2018, the Kin Foundation released the Kinit beta app on the Google Play store, restricted to US residents only. It offers different ways of earning and spending the Kin coin natively; for example, a user can do simple surveys to earn Kin and spend it on digital goods like gift cards.
In September 2019, Kik's CEO and founder Ted Livingston announced in a blog post that Kik Messenger would be shut down on 19 October 2019, with over 100 employees laid off. However, this decision was later reversed, and in October 2019 Medialab acquired Kik Messenger.
Features
A main attraction of Kik that differentiates it from other messaging apps is its anonymity. To register for the Kik service, a user must enter a first and last name, e-mail address, and birth date (which must show that the user is at least 13 years old), and select a username. The Kik registration process does not request or require the entry of a phone number (although the user has the option to enter one), unlike some other messaging services that require a user to provide a functioning mobile phone number.
The New York Times has reported that, according to law enforcement, Kik's anonymity features go beyond those of most widely used apps. As of February 2016, Kik's guide for law enforcement said that the company cannot locate user accounts based on first and last name, e-mail address and/or birth date; the exact username is required to locate a particular account. The guide further said that the company does not have access to content or "historical user data" such as photographs, videos, and the text of conversations, and that photographs and videos are automatically deleted shortly after they are sent. A limited amount of data from a particular account (identified by exact username), including first and last name, birthdate, e-mail address, link to a current profile picture, device-related information, and user location information such as the most recently used IP address, can be preserved for a period of 90 days pending receipt of a valid order from law enforcement. Kik's anonymity has also been cited as a protective safety measure for good faith users, in that "users have screennames; the app doesn't share phone numbers or email addresses."
Kik introduced several new user features in 2015, including a full-screen in-chat browser that allows users to find and share content from the web; a feature allowing users to send previously recorded videos in Kik Messenger for Android and iOS; and "Kik Codes", which assigns each user a unique code similar to a QR code, making it easier to connect and chat with other users. Kik joined the Virtual Global Taskforce, a global anti-child-abuse organization, in March 2015. Kik began using Microsoft's PhotoDNA in March 2015 to premoderate images added by users. That same month, Kik released native video capture allowing users to record up to 15 seconds in the chat window. In October 2015, Kik partnered with the Ad Council as part of an anti-bullying campaign. The campaign was featured on the app and Kik released stickers in collaboration with the campaign. Kik released a feature to send GIFs as emojis in November 2015. Kik added SafePhoto to its safety features in October 2016 which "detects, reports, and deletes known child exploitation images" sent through the platform. Kik partnered with ConnectSafely in 2016 to produce a "parents handbook" and joined The Technology Coalition, an anti-sexual exploitation group including Facebook, Google, Twitter and LinkedIn.
Bots
Kik added promoted chats in 2014, which used bots to converse with users about promoted brands through keywords activating responses. The feature allows companies to communicate with more potential clients than would be possible manually. Promoted messages reach target audiences by gender, country and device. In April 2016, Kik added a bot store to its app, which allows users to order food or products through an automated chat. Third-party companies release bots which will access the company's offerings. The bot shop added a web bubble (also known as "wubbles") feature to allow rich media content to be shared in conversation threads, as well as suggested responses and a feature allowing bots to be active in group threads. An update, released in September 2016, added concierge bots which can give users tips, tutorials, or recommendations within a specific brand.
Security
On November 4, 2014, Kik scored 1 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. Kik received a point for encryption during transit but lost points because communications are not encrypted with a key to which the provider does not have access, users cannot verify contacts' identities, past messages are not secure if the encryption keys are stolen, the code is not open to independent review, the security design is not properly documented, and there had not been a recent independent security audit.
Awards and recognition
On October 1, 2014, Sony Music and Kik Interactive were given a Smarties award by the Mobile Marketing Association (MMA) for their global music marketing campaign with One Direction. In October 2016, company CEO Ted Livingston was recognized as Toronto's most brilliant tech innovator by Toronto Life for his work with Kik. Livingston was also recognized for being one of the "Most Creative People in Business" on Fast Company's 2017 list.
Controversies
Use by minors with explicit content
Like many other social media services, Kik has garnered negative attention due to instances of minors exchanging explicit messages and photos with adults causing law enforcement and the media to frequently express concerns about the app. Automated spam bots have also been used to distribute explicit images and text over Kik Messenger. A state law enforcement official interviewed by The New York Times in February 2016 identified Kik as "the problem app of the moment". Police said they found Kik's response frustrating and one detective said obtaining information from Kik was a "bureaucratic nightmare". Constable Jason Cullum of Northamptonshire Police paedophile online investigation team stated delays in obtaining information from the company increased the risk to children. Cullum stated, "It's incredibly frustrating. We're banging our heads against a brick wall. There's a child that's going to be abused for probably another 12 months before we know who that is." Since its acquisition by Medialab, Kik has revamped its policies and launched a variety of tools and resources including a guide for law enforcement and parents.
Prior to 2015, Kik Interactive addressed this issue by informing parents and police about their options to combat child exploitation. In March 2015, the company adopted a more aggressive strategy by utilizing Microsoft's PhotoDNA cloud service to automatically detect, delete, and report the distribution of child exploitation images on its app. Some experts have noted that because PhotoDNA operates by comparing images against an existing database of exploitative images, it does not effectively prevent "realtime" online child abuse and may not detect material not yet added to its comparison database. Kik Interactive also began collaborating internationally with law enforcement by joining the Virtual Global Taskforce, a partnership between businesses, child protection agencies, and international police services that combats online child exploitation and abuse. The company also sponsors an annual conference on crimes against children.
Kik has been criticized for providing inadequate parental control over minors' use of the app. The ability to share messages without alerting parents has been noted as "one of the reasons why teens like Kik". Parents cannot automatically view their child's Kik communications remotely from another device, but instead must have the password to their child's user account and view the communications on the same device used by their child. As of February 2016, Kik's parents' guide stresses that teens between 13 and 18 should have a parent's permission to use Kik, but there is no technical way to enforce the requirement or to guarantee that a minor will not enter a false birthdate. Kik Interactive has said that it uses "typical" industry standards for age verification, that "perfect age verification" is "not plausible", and that the company deletes accounts of users under 13 when it finds them, or when a parent requests the deletion.
npm left-pad incident
In March 2016, Kik Interactive was involved in a high-profile dispute over use of the name kik with independent code developer Azer Koçulu, the author of numerous open-source software modules published on npm, a package manager widely used by JavaScript projects to install dependencies. Koçulu had published an extension to Node.js on npm under the name kik. Kik Interactive contacted him objecting to his use of the name, for which the company claimed intellectual property rights, and asked him to change the name. When Koçulu refused, Kik Interactive contacted npm management, who agreed to transfer ownership of the module to Kik without Koçulu's consent. Koçulu then unpublished all of his modules from npm, including a popular eleven-line code module called left-pad upon which many JavaScript projects depended. Although Koçulu subsequently published left-pad on GitHub, its sudden removal from npm caused many projects (including Kik itself) to stop working, due to their dependency on the Node and Babel packages. In view of widespread software disruption, npm restored Koçulu's left-pad and made Cameron Westland of Autodesk its maintainer. The incident sparked controversies over the assertion of intellectual property rights and the use of dependencies in software development.
Cryptocurrency
Kin is an ERC-20 cryptocurrency token issued on the public Ethereum blockchain. Kin was first announced in early 2017 which marked a pivot in Kik's strategy, a response to difficulties faced from competing with larger social networks such as Facebook. Kin was launched in September 2017 with an initial coin offering (ICO) raising $98 million from 10,000 participants. The purpose of the token is to facilitate value transfers in digital services such as gaming applications and social media, and was initially launched on Kik Messenger to leverage the application's 15 million monthly active users.
As of 2019, the enforcement division of the U.S. Securities and Exchange Commission considers the cryptocurrency offering to have been an unregulated security issue and is expected to begin legal action against the company. Kik has challenged the SEC's ability to regulate cryptocurrencies.
On September 7, 2017, only days before the Kin ICO, Kik announced that Canadian citizens would be barred from participating, citing weak guidance from the Ontario Securities Commission for the decision.
By 2019 the value of Kin had fallen by 99%.
See also
Comparison of cross-platform instant messaging clients
References
External links
Companies based in Waterloo, Ontario
Instant messaging
Instant messaging clients
2010 software
Canadian brands
Ethereum tokens
Internet properties established in 2010 | Kik Messenger | Technology | 3,180 |
3,246,393 | https://en.wikipedia.org/wiki/Complex%20beam%20parameter | In optics, the complex beam parameter is a complex number that specifies the properties of a Gaussian beam at a particular point z along the axis of the beam. It is usually denoted by q. It can be calculated from the beam's vacuum wavelength λ0, the radius of curvature R of the phase front, the index of refraction n (n=1 for air), and the beam radius w (defined at 1/e2 intensity), according to:
\frac{1}{q(z)} = \frac{1}{R(z)} - \frac{i\lambda_0}{\pi n w(z)^2} .
Alternatively, q can be calculated according to
q(z) = z + i z_R ,
where z is the location, relative to the location of the beam waist, at which q is calculated, zR is the Rayleigh range, and i is the imaginary unit.
Beam propagation
The complex beam parameter is usually used in ray transfer matrix analysis, which allows the calculation of the beam properties at any given point as it propagates through an optical system, if the ray matrix and the initial complex beam parameter is known. This same method can also be used to find the fundamental mode size of a stable optical resonator.
Given the initial beam parameter, qi, one can use the ray transfer matrix of an optical system, \begin{pmatrix} A & B \\ C & D \end{pmatrix}, to find the resulting beam parameter, qf, after the beam has traversed the system:
q_f = \frac{A q_i + B}{C q_i + D} .
It is often convenient to express this equation in terms of the reciprocals of q:
\frac{1}{q_f} = \frac{C + D/q_i}{A + B/q_i} .
Free-space propagation
The effect of propagation in free space over an axial distance \Delta z is just that of adding the travelled distance to the complex beam parameter:
q_f = q_i + \Delta z .
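As a minimal numerical sketch of these two rules (the wavelength, waist size, propagation distance and focal length below are assumed values chosen purely for illustration, not taken from any source), the following Python snippet starts a beam at its waist, propagates it through free space and a thin lens with ABCD matrices, and recovers the beam radius from the imaginary part of 1/q:

```python
import numpy as np

lam0 = 633e-9      # vacuum wavelength, m (assumed HeNe-like value)
n = 1.0            # refractive index of air
w0 = 0.5e-3        # beam waist radius, m (assumed)

z_R = np.pi * n * w0**2 / lam0          # Rayleigh range
q = 1j * z_R                            # q at the waist (z = 0, R infinite)

def propagate(q, M):
    """Apply an ABCD ray matrix M to the complex beam parameter q."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_radius(q):
    """Recover w from the imaginary part of 1/q = 1/R - i*lam0/(pi*n*w^2)."""
    return np.sqrt(-lam0 / (np.pi * n * np.imag(1.0 / q)))

# Free space over d = 2 m (assumed): q -> q + d, i.e. M = [[1, d], [0, 1]].
d = 2.0
q = propagate(q, [[1.0, d], [0.0, 1.0]])

# Thin lens of focal length f = 1 m (assumed): M = [[1, 0], [-1/f, 1]].
q = propagate(q, [[1.0, 0.0], [-1.0, 1.0]])

print("beam radius after the lens: %.3f mm" % (beam_radius(q) * 1e3))
```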
Interfaces
For simple astigmatic fundamental Gaussian beams, the q-parameters for the tangential and sagittal planes are independent. This is no longer true if those planes do not coincide with the principal direction of the surface on which the beam impinges; that case is called general astigmatism. Formulas for an incidence angle θi were derived in Massey and Siegman's 1969 paper.
For reflection, the matrices read:
The ones for refraction are:
Fundamental mode of an optical resonator
To find the complex beam parameter of a stable optical resonator, one needs to find the ray matrix of the cavity. This is done by tracing the path of the beam in the cavity. Assuming a starting point, find the matrix that goes through the cavity and returns the beam to the same position and direction as the starting point. With this matrix, and by making qi = qf, a quadratic is formed as:
C q^2 + (D - A) q - B = 0 .
Solving this equation gives the beam parameter for the chosen starting position in the cavity, and by propagating, the beam parameter for any other location in the cavity can be found.
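A brief sketch of this procedure for a simple plano-concave cavity (the cavity length, mirror curvature and wavelength below are assumed values for illustration only): build the round-trip ABCD matrix, solve the quadratic self-consistency condition, and keep the root with positive imaginary part, which corresponds to a physical beam:

```python
import numpy as np

L_cav = 0.30      # cavity length, m (assumed)
R_m = 1.00        # radius of curvature of the concave mirror, m (assumed)
lam0 = 633e-9     # vacuum wavelength, m (assumed)

free = np.array([[1.0, L_cav], [0.0, 1.0]])
curved = np.array([[1.0, 0.0], [-2.0 / R_m, 1.0]])  # curved mirror acts as f = R_m/2
flat = np.eye(2)                                    # flat mirror leaves the ray unchanged

# One round trip starting just after the flat mirror (rightmost matrix applied first).
M = flat @ free @ curved @ free
(A, B), (C, D) = M

# Self-consistency condition q = (A*q + B)/(C*q + D)  ->  C*q^2 + (D - A)*q - B = 0.
q_roots = np.roots([C, D - A, -B])
q = q_roots[np.imag(q_roots) > 0][0]   # physical root with Im(q) > 0

w = np.sqrt(-lam0 / (np.pi * np.imag(1.0 / q)))     # mode radius (n = 1 assumed)
print("fundamental mode radius at the flat mirror: %.3f mm" % (w * 1e3))
```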
References
Optics | Complex beam parameter | Physics,Chemistry | 532 |
38,353,349 | https://en.wikipedia.org/wiki/List%20of%20open-access%20journals | This is a list of open-access journals by field. The list contains notable journals which have a policy of full open access. It does not include delayed open access journals, hybrid open access journals, or related collections or indexing services.
True open-access journals can be split into two categories:
diamond or platinum open-access journals, which charge no additional publication, open access or article processing fees
gold open-access journals, which charge publication fees (also called article processing charges, APCs).
Agriculture
African Journal of Food, Agriculture, Nutrition and Development
Bulletin of Insectology
Open Access Journal of Medicinal and Aromatic Plants
Open Agriculture
Journal of Horticultural Sciences
Pertanika Journal of Tropical Agricultural Science
Astronomy
Journal of the Korean Astronomical Society
Open Astronomy
Bioethics
AMA Journal of Ethics
Canadian Journal of Bioethics
Indian Journal of Medical Ethics
Biology
African Invertebrates
Biology of Sex Differences
Biology Open
BMC Biology
BMC Evolutionary Biology
BMC Genomics
BMC Systems Biology
Cell Reports
Check List
Contributions to Zoology
Ecology and Evolution
eLife
F1000Research
Genome Biology
Genome Research
International Journal of Biological Sciences
Israel Journal of Entomology
Molecular Systems Biology
Myrmecological News
Nature Communications
Oncotarget
Open Biology
Open Life Sciences
PLOS Biology
PLOS Computational Biology
PLOS Genetics
Science Advances
Scientific Reports
Stem Cell Reports
ZooKeys
Botany
Acta Botanica Brasilica
Botanical Studies
Phytologia
Plant Ecology and Evolution
Chemistry
Arkivoc
Beilstein Journal of Organic Chemistry
Chemical Science
Molecules
Open Chemistry
Organic Syntheses
RSC Advances
Computer science
Advances in Distributed Computing and Artificial Intelligence Journal
Computational Linguistics
IEEE Access
Journal of Artificial Intelligence Research
Journal of Computational Geometry
Journal of Computer Graphics Techniques
Journal of Formalized Reasoning
Journal of Machine Learning Research
Journal of Object Technology
Journal of Open Source Software
Journal of Statistical Software
Logical Methods in Computer Science
Semantic Web
Theory of Computing
Transactions on Graph Data and Knowledge
Earth Sciences
Austrian Journal of Earth Sciences
Brazilian Journal of Geology
Geologica Belgica
GSA Today
Ecology
Ecography
Economics and finance
The Journal of Entrepreneurial Finance
Real-World Economics Review
Swiss Journal of Economics and Statistics
Theoretical Economics
Education
Australasian Journal of Educational Technology
Comunicar
Education Policy Analysis Archives
Educational Technology & Society
Journal of Higher Education Outreach and Engagement
Journal of International Students
Energy
Energies
Engineering
Advances in Production Engineering & Management
Frontiers in Heat and Mass Transfer
Open Engineering
Geography and environmental studies
Conservation and Society
Ecology and Society
Environmental Health Perspectives
Environmental Research Letters
Fennia
Journal of Political Ecology
Journal of Spatial Information Science
Open Geosciences
Nature Environment and Pollution Technology
Humanities and other journals
Anamesa
Ancient Iranian Studies
Continent
Culture Machine
Digital Humanities Quarterly
First Monday
GHLL
Medieval Worlds
Programming Historian
Reti Medievali Rivista
Sign Systems Studies
Southern Spaces
Transmotion
Language and linguistics
Glossa
Language Documentation & Conservation
Per Linguam
Law
Duke Law Journal
German Law Journal
Health and Human Rights
Melbourne University Law Review
Library and information science
College & Research Libraries
Evidence Based Library and Information Practice
In the Library with the Lead Pipe
Information Technologies and International Development
Scientific Data
Webology
Materials science
Polymers
Science and Technology of Advanced Materials
Mathematics
Acta Mathematica
Advances in Group Theory and Applications
Algebraic Geometry
Annales Academiae Scientiarum Fennicae. Mathematica
Annales de l'Institut Fourier
Arkiv för Matematik
Ars Mathematica Contemporanea
Australasian Journal of Combinatorics
Discrete Analysis
Discrete Mathematics & Theoretical Computer Science
Documenta Mathematica
Electronic Communications in Probability
Electronic Journal of Combinatorics
Electronic Journal of Probability
Electronic Transactions on Numerical Analysis
Forum of Mathematics
Hardy-Ramanujan Journal
Journal de Théorie des Nombres de Bordeaux
Journal of Formalized Reasoning
Journal of Graph Algorithms and Applications
Journal of Integer Sequences
Mathematics and Mechanics of Complex Systems
Münster Journal of Mathematics
The New York Journal of Mathematics
Open Mathematics
Rendiconti di Matematica e delle sue Applicazioni
Séminaire Lotharingien de Combinatoire
Medicine, pharmaceutical and health sciences
(omitting journals already previously mentioned)
Annals of Saudi Medicine
Bangladesh Journal of Pharmacology
Biomedical Imaging and Intervention Journal
BMC Health Services Research
BMC Medicine
BMJ Open
Bosnian Journal of Basic Medical Sciences
British Columbia Medical Journal
British Medical Journal
Canadian Medical Association Journal
Clinical and Translational Science
Cureus
Dermatology Online Journal
Emerging Infectious Diseases
International Journal of Medical Sciences
Journal of the American Heart Association
Journal of Clinical Investigation
Journal of Diabetes
Journal of Postgraduate Medicine
Medicina Internacia Revuo
The New England Journal of Medicine
Open Heart
Open Medicine
PLOS Medicine
PLOS Neglected Tropical Diseases
PLOS Pathogens
Scientia Pharmaceutica
Swiss Medical Weekly
Music
Music Theory Online
Nutrition
Journal of Nutrition
Philosophy
Philosophers' Imprint
Existenz
Physics
New Journal of Physics
Open Physics
Optica
Optics Express
Physical Review X
Physical Review Research
Pluridisciplinary
GigaScience
Nature Communications
Pertanika Journal of Science & Technology
PLOS ONE
Royal Society Open Science
Science Advances
Scientific Reports
Journal of the American Society of Questioned Document Examiners
Political science
Caucasian Review of International Affairs
Central European Journal of International and Security Studies
Journal of Politics & Society
Robotics
Paladyn
Social science
Cultural Anthropology
Demography
European Journal of Psychology Open
Frontiers in Psychology
Jadaliyya
Journal of Artificial Societies and Social Simulation
Journal of Political Ecology
Journal of World-Systems Research
Pertanika Journal of Social Sciences & Humanities
Swiss Journal of Social Work
Swiss Journal of Sociology
Statistics
Bayesian Analysis
Brazilian Journal of Probability and Statistics
Chilean Journal of Statistics
Electronic Journal of Statistics
Journal of Official Statistics
Journal of Modern Applied Statistical Methods
Journal of Statistical Software
Journal of Statistics Education
Revista Colombiana de Estadistica
REVSTAT
SORT
Statistics Surveys
Survey Methodology/Techniques d'enquête
The R Journal
See also
Directory of Open Access Journals
List of academic databases and search engines
Lists of academic journals
Open access around the world
References
Internet-related lists
Lists of academic journals | List of open-access journals | Technology | 1,155 |
54,287,754 | https://en.wikipedia.org/wiki/Mott%E2%80%93Schottky%20plot | In semiconductor electrochemistry, a Mott–Schottky plot describes the reciprocal of the square of capacitance versus the potential difference between bulk semiconductor and bulk electrolyte. In many theories, and in many experimental measurements, the plot is linear. The use of Mott–Schottky plots to determine system properties (such as flatband potential, doping density or Helmholtz capacitance) is termed Mott–Schottky analysis.
Consider the semiconductor/electrolyte junction shown in Figure 1. Under an applied bias voltage V the size of the depletion layer is
w = \sqrt{\frac{2 \varepsilon (V_{bi} - V)}{q N}} \qquad (1)
Here ε is the permittivity, q is the elementary charge, N is the doping density, and V_bi is the built-in potential.
The depletion region contains positive charge compensated by ionic negative charge at the semiconductor surface (in the liquid electrolyte side). Charge separation forms a dielectric capacitor at the interface of the metal/semiconductor contact. We calculate the capacitance for an electrode area A as
C = \frac{\varepsilon A}{w} , \qquad (2)
an equation describing the capacitance of a capacitor constructed of two parallel plates, both of area A, separated by a distance w. Replacing w as obtained from equation (1), the capacitance per unit area is
\frac{C}{A} = \sqrt{\frac{q \varepsilon N}{2 (V_{bi} - V)}} . \qquad (3)
Squaring and inverting equation (3), we obtain the result
\frac{1}{C^2} = \frac{2 (V_{bi} - V)}{A^2 \varepsilon q N} . \qquad (4)
Therefore, the reciprocal square capacitance 1/C² is a linear function of the voltage V, which constitutes the Mott–Schottky plot, as shown in Fig. 1c. The measurement of the Mott–Schottky plot brings us two important pieces of information.
The slope gives the doping (semiconductor) density (provided that the dielectric constant is known).
The intercept to the x axis provides the built-in potential, or the flatband potential (as here the surface barrier has been flattened) and allows establishing the semiconductor conduction band level with respect to the reference of potential.
In liquid junction the reference of potential is normally a standard reference electrode. In solid junctions, we can take as a reference the metal Fermi level, if the work function is known, which provides a full energy diagram in the physical scale. The Mott–Schottky plot is sensitive to the electrode surface in contact with solution, see Figure 2.
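As a minimal sketch of how this analysis is carried out in practice (the material constants and the synthetic capacitance data below are illustrative assumptions, not measured values), 1/C² can be fitted against V with a straight line, the doping density read off from the slope, and the built-in (flat-band) potential from the x-axis intercept, following equation (4):

```python
import numpy as np

q_e = 1.602e-19          # elementary charge, C
eps = 10.0 * 8.854e-12   # permittivity, F/m (relative permittivity of 10 assumed)
A = 1.0e-4               # electrode area, m^2 (1 cm^2, assumed)

# Synthetic "measurement" generated from 1/C^2 = 2*(V_bi - V)/(A^2*eps*q_e*N)
# with N = 1e22 m^-3 and V_bi = 0.4 V (illustration only).
V = np.linspace(-0.6, 0.2, 9)
C = 1.0 / np.sqrt(2.0 * (0.4 - V) / (A**2 * eps * q_e * 1e22))

# Mott-Schottky analysis: linear fit of 1/C^2 versus V.
slope, intercept = np.polyfit(V, 1.0 / C**2, 1)
N_fit = -2.0 / (A**2 * eps * q_e * slope)   # doping density from the slope
V_bi_fit = -intercept / slope               # x-axis intercept = built-in potential

print("N    = %.2e m^-3" % N_fit)   # recovers ~1e22
print("V_bi = %.2f V" % V_bi_fit)   # recovers ~0.4 V
```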
A more accurate analysis considering the statistics of electrons provides the following result for the size of the depletion region
w = \sqrt{\frac{2 \varepsilon (V_{bi} - V - k_B T / q)}{q N}} \qquad (5)
in this case the Mott–Schottky equation is
\frac{1}{C^2} = \frac{2 (V_{bi} - V - k_B T / q)}{A^2 \varepsilon q N} \qquad (6)
When the interfacial barrier is of the order of the thermal voltage k_B T / q, special care has to be taken to interpret the capacitance measurement. In fact, at these small voltages the capacitance shows a peak that can be used for the determination of the built-in voltage.
The Mott–Schottky analysis can more generally resolve a variable doping profile in the semiconductor as follows
N(w) = -\frac{2}{q \varepsilon A^2} \left[ \frac{d (1/C^2)}{d V} \right]^{-1} \qquad (7)
The derivative gives the doping at the edge of the depletion region, N(w). This method only provides a spatial resolution of the order of a Debye length. In systems where more than one process gives a substantial kinetic response, it is necessary to adopt electrochemical impedance spectroscopy, which resolves the different capacitances in the system. For example, in the presence of a surface state at the semiconductor/electrolyte interface, the spectra show two arcs, one at low frequency and another one at high frequency. The depletion capacitance leading to the Mott–Schottky plot is situated in the high frequency arc, as the depletion capacitance is a dielectric capacitance. On the other hand, the low frequency feature corresponds to the chemical capacitance of the surface states. The surface state charging produces a plateau as indicated in Fig. 1d. Similarly, defect levels in the gap affect the changes of capacitance and conductance.
Another widely used method to scan deep levels in Schottky barriers is termed admittance spectroscopy and consists on measuring the capacitance at a fixed frequency while varying the temperature.
Surface photovoltage technique or potentiostatically induced Burstein-Moss shifts can be used to determine the position of the band edges.
References
Semiconductors | Mott–Schottky plot | Physics,Chemistry,Materials_science,Engineering | 872 |
55,330,205 | https://en.wikipedia.org/wiki/Proper%20generalized%20decomposition | The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation.
The PGD algorithm computes an approximation of the solution of the BVP by successive enrichment. This means that, in each iteration, a new component (or mode) is computed and added to the approximation. In principle, the more modes obtained, the closer the approximation is to its theoretical solution. Unlike POD principal components, PGD modes are not necessarily orthogonal to each other.
By selecting only the most relevant PGD modes, a reduced order model of the solution is obtained. Because of this, PGD is considered a dimensionality reduction algorithm.
Description
The proper generalized decomposition is a method characterized by
a variational formulation of the problem,
a discretization of the domain in the style of the finite element method,
the assumption that the solution can be approximated as a separate representation and
a numerical greedy algorithm to find the solution.
Variational formulation
In the Proper Generalized Decomposition method, the variational formulation involves translating the problem into a format where the solution can be approximated by minimizing (or sometimes maximizing) a functional. A functional is a scalar quantity that depends on a function, which in this case, represents our problem.
The most commonly implemented variational formulation in PGD is the Bubnov-Galerkin method. This method is chosen for its ability to provide an approximate solution to complex problems, such as those described by partial differential equations (PDEs). In the Bubnov-Galerkin approach, the idea is to project the problem onto a space spanned by a finite number of basis functions. These basis functions are chosen to approximate the solution space of the problem.
In the Bubnov-Galerkin method, we seek an approximate solution that satisfies the integral form of the PDEs over the domain of the problem. This is different from directly solving the differential equations. By doing so, the method transforms the problem into finding the coefficients that best fit this integral equation in the chosen function space.
While the Bubnov-Galerkin method is prevalent, other variational formulations are also used in PGD, depending on the specific requirements and characteristics of the problem, such as:
Petrov-Galerkin Method: This method is similar to the Bubnov-Galerkin approach but differs in the choice of test functions. In the Petrov-Galerkin method, the test functions (used to project the residual of the differential equation) are different from the trial functions (used to approximate the solution). This can lead to improved stability and accuracy for certain types of problems.
Collocation Method: In collocation methods, the differential equation is satisfied at a finite number of points in the domain, known as collocation points. This approach can be simpler and more direct than the integral-based methods like Galerkin's, but it may also be less stable for some problems.
Least Squares Method: This approach involves minimizing the square of the residual of the differential equation over the domain. It is particularly useful when dealing with problems where traditional methods struggle with stability or convergence.
Mixed Finite Element Method: In mixed methods, additional variables (such as fluxes or gradients) are introduced and approximated along with the primary variable of interest. This can lead to more accurate and stable solutions for certain problems, especially those involving incompressibility or conservation laws.
Discontinuous Galerkin Method: This is a variant of the Galerkin method where the solution is allowed to be discontinuous across element boundaries. This method is particularly useful for problems with sharp gradients or discontinuities.
Domain discretization
The discretization of the domain is a well defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis function on reference elements (also called shape functions) and (c) the mapping of reference elements onto the elements of the mesh.
Separate representation
PGD assumes that the solution u of a (multidimensional) problem can be approximated as a separate representation of the form
u(x_1, x_2, \ldots, x_d) \approx \sum_{i=1}^{N} X_1^i(x_1) \, X_2^i(x_2) \cdots X_d^i(x_d) ,
where the number of addends N and the functional products X1(x1), X2(x2), ..., Xd(xd), each depending on a variable (or variables), are unknown beforehand.
Greedy algorithm
The solution is sought by applying a greedy algorithm, usually the fixed point algorithm, to the weak formulation of the problem. For each iteration i of the algorithm, a mode of the solution is computed. Each mode consists of a set of numerical values of the functional products X1(x1), ..., Xd(xd), which enrich the approximation of the solution. Due to the greedy nature of the algorithm, the term 'enrich' is used rather than 'improve', since some modes may actually worsen the approximation. The number of computed modes required to obtain an approximation of the solution below a certain error threshold depends on the stopping criterion of the iterative algorithm.
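The following Python sketch illustrates the greedy, fixed-point enrichment idea in its simplest setting: instead of a full boundary value problem, it builds a separated representation of a discretized two-dimensional field by alternately updating the two factors of each new mode and subtracting the mode from the residual. The target field, grid sizes and tolerances are illustrative assumptions, and the snippet is a schematic analogue of PGD rather than a BVP solver:

```python
import numpy as np

def pgd_enrichment(F, n_modes=5, n_fixed_point=25, tol=1e-10):
    """Greedy rank-one enrichment: F is approximated as sum_i outer(X_i, Y_i)."""
    residual = F.copy()
    modes = []
    for _ in range(n_modes):
        # Initial guess for the new mode.
        X = np.ones(F.shape[0])
        Y = np.ones(F.shape[1])
        for _ in range(n_fixed_point):
            # Fix Y and solve for X in the least-squares sense, then fix X.
            X_new = residual @ Y / (Y @ Y)
            Y_new = residual.T @ X_new / (X_new @ X_new)
            converged = np.linalg.norm(Y_new - Y) < tol * np.linalg.norm(Y_new)
            X, Y = X_new, Y_new
            if converged:
                break
        modes.append((X, Y))
        residual = residual - np.outer(X, Y)   # enrich: subtract the new mode
    return modes, residual

# Usage: approximate a smooth field sampled on a 60 x 80 grid (assumed example).
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(0.0, 1.0, 80)
F = np.sin(np.pi * x)[:, None] * np.cos(np.pi * y)[None, :] + 0.3 * np.outer(x, y**2)
modes, residual = pgd_enrichment(F, n_modes=4)
print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(F))
```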
Features
PGD is suitable for solving high-dimensional problems, since it overcomes the limitations of classical approaches. In particular, PGD avoids the curse of dimensionality, as solving decoupled problems is computationally much less expensive than solving multidimensional problems.
Therefore, PGD makes it possible to recast parametric problems into a multidimensional framework by setting the parameters of the problem as extra coordinates:
u(x_1, \ldots, x_d, k_1, \ldots, k_p) \approx \sum_{i=1}^{N} X_1^i(x_1) \cdots X_d^i(x_d) \, K_1^i(k_1) \cdots K_p^i(k_p) ,
where a series of functional products K1(k1), K2(k2), ..., Kp(kp), each depending on a parameter (or parameters), has been incorporated into the equation.
In this case, the obtained approximation of the solution is called computational vademecum: a general meta-model containing all the particular solutions for every possible value of the involved parameters.
Sparse Subspace Learning
The Sparse Subspace Learning (SSL) method leverages the use of hierarchical collocation to approximate the numerical solution of parametric models. With respect to traditional projection-based reduced order modeling, the use of collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This allows recovering the low-dimensional structure of the parametric solution subspace while also learning the functional dependency on the parameters in explicit form. A sparse low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. Non-intrusiveness makes this approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms.
References
Numerical analysis
Mathematical modeling
Dimension reduction
Boundary value problems | Proper generalized decomposition | Mathematics | 1,407 |
487,529 | https://en.wikipedia.org/wiki/Search/Retrieve%20Web%20Service | Search/Retrieve Web service (SRW) is a web service for search and retrieval. SRW provides a SOAP interface to queries, to augment the URL interface provided by its companion protocol Search/Retrieve via URL (SRU). Queries in SRU and SRW are expressed using the Contextual Query Language (CQL).
Standards for SRW, SRU, and CQL are promulgated by the United States Library of Congress.
The SRW service and SRU protocol were both created as part of the ZING (Z39.50 International: Next Generation) initiative as successors to the Z39.50 protocol.
Example usage
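The original example for this section is not preserved here. As an illustration, the sketch below assembles a searchRetrieve request in the companion SRU (URL) form, using the same operation, version, CQL query and maximumRecords parameters that an SRW SOAP request would carry inside its envelope; the endpoint address is a hypothetical placeholder rather than a real service:

```python
from urllib.parse import urlencode

# Hypothetical SRU endpoint; a real deployment publishes its own base URL.
BASE_URL = "https://example.org/sru"

# Standard searchRetrieve parameters, with the query expressed in CQL.
params = {
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": 'dc.title = "dinosaur"',
    "maximumRecords": "10",
}

request_url = BASE_URL + "?" + urlencode(params)
print(request_url)
# The server would answer with an XML searchRetrieveResponse listing matching records.
```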
See also
Z39.50
Implementations
refbase
RefDB
Te Ara: The Encyclopedia of New Zealand
External links
SRU: Search/Retrieve via URL
SRW: Search/Retrieve Web Service
CQL: Contextual Query Language
Web services
Library science terminology | Search/Retrieve Web Service | Technology | 191 |
69,404,632 | https://en.wikipedia.org/wiki/Dipole%20glass | A dipole glass is an analog of a glass where the dipoles are frozen below a given freezing temperature Tf introducing randomness thus resulting in a lack of long-range ferroelectric order. A dipole glass is very similar to the concept of a spin glass where the atomic spins don't all align in the same direction (like in a ferromagnetic material) and thus result in a net-zero magnetization. The randomness of dipoles in a dipole glass creates local fields resulting in short-range order but no long-range order.
The dipole glass like state was first observed in Alkali halide crystal-type dielectrics containing dipole impurities. The dipole impurities in these materials result in off-center ions which results in anomalies in certain properties like specific heat, thermal conductivity as well as some spectroscopic properties. Other materials which show a dipolar glass phase include Rb(1-x)(NH4)xH2PO4 (RADP) and Rb(1-x)(ND4)xD2PO4 (DRADP). In materials like DRADP the dipole moment is introduced due to the deuteron O-D--O bond. Dipole glass like behavior is also observed in materials like ceramics, 3D water framework and perovskites.
Random-bond-random-field Ising model (RBRF)
The model describing the pseudo-spins (dipole moments) is given by the Hamiltonian as:
H = -\frac{1}{2} \sum_{i \neq j} J_{ij} S_i S_j - \sum_i f_i S_i - E \sum_i S_i ,
where S_i are the Ising dipole moments (pseudo-spins). The couplings J_ij are the random bond interactions, which are described by a Gaussian probability distribution with mean J_0 and variance J². The second term describes the interaction of the pseudo-spins with random local fields f_i, which are represented by an independent Gaussian distribution with zero mean and variance Δ. The final term denotes the interaction in the presence of an external electric field E.
The replica method is used to obtain the glass order parameter:
.
where is the gaussian measure and under the assumption that the free energy is given by:
.
where and with .
The random-field term is zero in the case of magnetic spin glasses, and in the absence of an external electric field this model reduces to the Edwards–Anderson model, which is used to describe spin glasses. This model has been used to give a quantitative description of DRADP-type systems.
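As a small numerical sketch of the model above (system size, distribution widths, temperature and the single Metropolis sweep are illustrative assumptions, not values from the dipole-glass literature), the snippet below draws Gaussian random bonds and local fields, evaluates the RBRF energy of an Ising pseudo-spin configuration, and performs one Metropolis update sweep:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200            # number of pseudo-spins (assumed)
J0, J = 0.0, 1.0   # bond distribution parameters (assumed, infinite-range scaling)
Delta = 0.5        # standard deviation of the random local fields (assumed)
E = 0.0            # external electric field (assumed)
T = 1.0            # temperature in units of J (assumed)

# Symmetric Gaussian bond matrix with zero diagonal.
J_ij = rng.normal(J0 / N, J / np.sqrt(N), size=(N, N))
J_ij = (J_ij + J_ij.T) / 2.0
np.fill_diagonal(J_ij, 0.0)

f = rng.normal(0.0, Delta, size=N)      # random local fields
S = rng.choice([-1.0, 1.0], size=N)     # random initial pseudo-spin configuration

def energy(S):
    """H = -1/2 sum_{i!=j} J_ij S_i S_j - sum_i f_i S_i - E sum_i S_i."""
    return -0.5 * S @ J_ij @ S - f @ S - E * np.sum(S)

# One Metropolis sweep: propose single-spin flips in random order.
for i in rng.permutation(N):
    dE = 2.0 * S[i] * (J_ij[i] @ S + f[i] + E)   # energy cost of flipping S_i
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):
        S[i] = -S[i]

print("energy per pseudo-spin after one sweep:", energy(S) / N)
```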
References
Electromagnetism
Electromagnetic compatibility
Types of magnets | Dipole glass | Physics,Engineering | 501 |
75,782,080 | https://en.wikipedia.org/wiki/Antisemitism%20on%20social%20media | Antisemitism on social media can manifest in various forms such as emojis, GIFs, memes, comments, and reactions to content. Studies have categorized antisemitic discourse into different types: hate speech, calls for violence, dehumanization, conspiracy theories and Holocaust denial.
Up to 69% of Jews in the U.S. have encountered antisemitism online, according to a 2022 report. Jews have encountered antisemitism either as targets themselves or by being exposed to antisemitic content on their social media pages.
General
Quint Czymmek, a German social scientist, cited in his paper a 2019 study that found that young European Jews (ages 16–34) are more prone to encountering antisemitic harassment or violence compared to their older counterparts. Additionally, these younger individuals identified the internet and social media as the primary domains where antisemitism poses the most significant challenge in the present day.
Researcher Sophie Schmalenberger revealed that expressions of antisemitism go beyond explicit, offensive language and images on social media. They also manifest in subtle, coded forms that can easily go unnoticed. According to Schmalenberger an example of this is observed on Facebook, where the German far-right party, Alternative für Deutschland (AfD), has deliberately avoided referencing the Holocaust in its posts about the Second World War. Furthermore, the party employed antisemitic language and rhetoric, subtly normalizing antisemitism.
According to research, algorithms have played a significant role in amplifying antisemitism, as they are designed to prioritize content based on user engagement. This means that posts with higher engagement, including likes, dislikes, shares, and comments (including counter-comments), are more prominently displayed to users. The issue arises because user reactions to posts also trigger rewarding dopamine responses. Consequently, the algorithmic emphasis on outrageous content, which tends to generate the most engagement, incentivizes users to contribute more hateful content. Two studies, provided exclusively to USA Today, found that Facebook, Instagram, and X (formerly Twitter) expose users to antisemitic tropes and conspiracies. The result, researchers said in 2023, is that dangerous ideas are being provoked as antisemitic incidents surge to historic levels.
Concerns have arisen among critics regarding the prevalence of antisemitism on social media, posing a significant issue for both the Jewish community and wider public discourse. While traditional methods of recording hate crimes, such as police crime records and the Crime Survey of England and Wales, have shown improvement, critics have said there remains a substantial underreporting of both online and offline antisemitic incidents. This discrepancy gives rise to a notable "dark figure" in the overall assessment of the problem.
Examples of antisemitic statements reported on social media include: "Jews are rats", "All Jews are greedy" and "I'm glad the holocaust happened".
A study conducted by the Ruderman Family Foundation and the Network Contagion Research Institute, released in July 2023, revealed Israel as the most attacked country on social media. X (formerly Twitter) users mentioned Israel in connection with human rights violations 12 times more than China, 38 times more than Russia, 55 times more than Iran, and 111 times more than North Korea. Notably, during the Israeli-Hamas conflict in Gaza in May 2021, the use of anti-Israel tropes surged. This escalation was accompanied by the release of a February 2022 Amnesty International report labeling Israel as an apartheid state.
The researchers also observed an increase in anti-Semitic comments on Twitter, which correlated with real-world Jewish-targeted hate crime incidents. The highest point coincided with conspiracy theories related to COVID-19 and the 2021 assault on the Capitol in Washington by supporters of Donald Trump.
Michael Bossetta, a researcher at Sweden's Lund University, points out that antisemitic content represents a tiny fraction of the traffic on social media. In his book chapter, he writes that most studies find antisemitic content accounts for less than 1% of the total number of posts worldwide (as of 2022). In one major survey, it was 0.00015%.
Germany
In their annual report "Antisemitic incidents in Germany 2023", the Federal Association of Departments for Research and Information on Antisemitism e.V. (RIAS) found that 21 percent of all recorded antisemitic incidents that year took place online. While the share of online antisemitism in the total number of antisemitic incidents decreased compared to the previous year, the absolute numbers still increased: RIAS recorded 853 cases of online antisemitism directed towards individuals or organisations in 2022, compared to 999 cases in 2023. General hate speech and antisemitic rhetoric were not counted in that statistic. Most commonly, online antisemitism occurred via direct messages, and of those 999 cases, 51 percent took place on social media platforms.
Platforms
TikTok
TikTok, according to researchers and ratings, is very popular among young people and is widely used for news, political communication, and following significant public figures. Due to its widespread usage, "TikTok has become a magnet and a hotbed for violent and extremist content," the Israeli researchers Gabriel Weimann and Natalie Masri write in their chapter.
A study conducted between February 2020 to May 2021 by Weimann and Masri found a 41% increase in antisemitic posts, a 912% increase in antisemitic comments and a 1,375% increase in antisemitic usernames. For example, a song about Jewish people being killed in Auschwitz was accessed more than six million times worldwide.
According to the CCDH, TikTok in particular is failing to ban accounts that directly target Jewish users. The study reveals that the platform removes only 5 percent of accounts engaged in activities such as sending direct messages promoting Holocaust denial. In 2023, Jewish American celebrities signed a letter to TikTok stating that TikTok was not safe for Jewish users.
In December 2023, during the Republican Party presidential primary debate in the United States, candidate Nikki Haley referenced research conducted by Anthony Goldbloom, the founder of data science startup Kaggle, to argue for the banning of TikTok, claiming that "For every 30 minutes that someone watches TikTok every day, they become 17% more antisemitic and more pro-Hamas." In response, TikTok asserted via Twitter that Haley's "statement is 100% false."
Instagram
According to a 2021 report, there are "millions" of results for hashtags relating to antisemitic conspiracy theories on Instagram.
A report by the CST released in 2021 investigated antisemitism on Instagram. It followed 27 trending antisemitic hashtags (for example #gasjews, #israhell and #zionistagenda), which indicated significant use of antisemitic hashtags on the platform.
On Instagram, antisemitism is perpetrated not only by users but also by hackers who hijack accounts to spread antisemitic content. The Berlin Film Festival (Berlinale) temporarily displayed antisemitic content on its Instagram feed after its account was hijacked by anonymous hackers. The hacked posts, which contained antisemitic remarks regarding the war in Gaza alongside the Berlinale emblem, quickly disappeared. The festival made it clear that the posts did not represent its opinions and criticized the hacking.
X (formerly Twitter)
The ADL examined the period between 2017 and 2018, determining that roughly 4.2 million antisemitic tweets were posted and reposted on Twitter during that timespan. The percentage of tweets pulled in by a query that tested positive for antisemitism ranged from a low of 8.9% in week 33 (August 13–19) to a high of 34.2% in week 18.
Josephine Ballon, the head of legal at HateAid, said that for X (formerly Twitter) to be a genuine free speech platform, it must be a safe space where users are free from the fear of being attacked or receiving death threats or Holocaust denial.
According to an article published in March 2023, antisemitism on X (formerly Twitter) remains "higher than ever" with some worried about the platform descending into a "hellscape" filled with toxic, inflammatory content and misinformation.
X suspended the account of Kanye West after he tweeted an image of the Star of David with a swastika inside. The rapper's account had been suspended before for antisemitic tweets.
YouTube
According to findings from the Institute for Strategic Dialogue, there was a 4963% increase in antisemitic comments on YouTube videos related to the conflict in the days following the October 7 Hamas-led attack on Israel. A total of 15,720 hateful comments against Jewish people were recorded on YouTube in the week following the attack by Hamas, as revealed by the Institute for Strategic Dialogue.
According to the report, the attacks included comments featuring dehumanizing language and drawing inappropriate comparisons between Israelis and Nazis. They also propagated conspiracy theories, ranging from the unfounded notion that Jewish individuals control the media, political structures, and financial institutions to the claim that the Hamas attack was a 'false flag' orchestrated by Israel. Additionally, explicit threats were made against Jewish figures and officials, accompanied by the sharing and dissemination of graphic images, as well as calls for violence targeting Jewish officials.
Facebook
With 3.05 billion users (December 2023), Facebook is one of the largest social media platforms. As of 2016, 11% of available online antisemitic discourse (41,000 posts) was conducted on Facebook. The majority of these posts involved symbols or photos. Four percent of the discourse (1,500 posts) consisted of calls to violence against Jews.
There are two possible explanations for the relatively low scope of antisemitic discourse in relation to the network's popularity: either users choose not to publicly upload offensive content on Facebook, or the network puts a great deal of effort into removing such content.
Unlike X (formerly Twitter), hashtags such as #killthejews or #Holohoax don't exist on Facebook. Problematic usernames also were not found. Discourse glorifying Hitler, however, was found, including groups such as Hitler memes or pages of far-right organizations. Almost all of the users who uploaded antisemitic content on Facebook did so using fabricated usernames, which is prohibited by Facebook's terms of service.
In a 2021 report, researchers collected 714 antisemitic posts between May and June, which included Holocaust denial and conspiracy theories with false claims about Jews "controlling" governments and banks or orchestrating world events. The report concluded that Facebook acted on only 14 out of 129 posts reported to it (10.9%). The report stated that Facebook groups from which it sourced many of its sample posts, with titles such as "Exposing the new world order" and "Exposing Zionism", were still active. Facebook responded to the allegations by noting that it had increased its actions against hate speech by 15 percent since 2017.
According to sources, Facebook has increased its removal of antisemitic content, and its rate of removals is higher than that of other social media platforms. According to a 2023 report, Facebook removed 35% of the antisemitic content reported to it by the FOA in 2022, compared to 23% in 2021.
Telegram
A report from Hope not Hate highlighted the prevalence of antisemitism within Telegram which has emerged as a primary refuge for individuals expelled from other social networks due to their extremist views. In 2021, critics argued that Telegram's lax moderation policies have allowed numerous channels dedicated to antisemitic conspiracies and overtly violent content to thrive. One such channel, "Dismantling the Cabal," promoting the New World Order conspiracy theory since February 2021, has amassed over 90,000 followers. Another channel, managed by an antisemitic QAnon supporter known as GhostEzra, has a following of 333,000.
In addition to these concerning findings, Hope not Hate discovered that a minimum of 120 Telegram groups and channels have shared the racist and antisemitic manifesto authored by the perpetrator of the Christchurch mosque attacks in New Zealand in March 2019, resulting in the deaths of 51 individuals. Despite this dissemination of harmful content, Telegram has taken no action against such materials according to Hope Not Hate.
According to the Anti-Defamation League, Telegram played a significant role in the dissemination of antisemitic rhetoric and imagery pertaining to the COVID-19 pandemic. For example, on March 15, 2020, shortly after the onset of the pandemic, a Telegram user posted a depiction of a Jewish caricature within a COVID-19–headed Trojan horse. The seemingly cunning Jewish figure, who is being welcomed inside the metaphorical walls of society, reinforces antisemitic tropes of Jews as power-hungry and seeking world domination, deceitful liars, spreaders of disease, and scapegoats for others' problems. Telegram also enabled the circulation of additional COVID-19 antisemitism with user messages suggesting "Israel has unleashed a bio weapon" intended to teach China that "jealous, vindictive Jews" control the country's dynasty. Such content highlights how Telegram's severely limited content moderation policies facilitate the spread of antisemitism, misinformation, and hate speech in the broader context of social media strengthening age-old antisemitic tropes.
TamTam
TamTam is a new social media messenger application that is known for its advanced Transport Layer Security (TLS) encryption technology that keeps conversations very secure and private.
Although its privacy may initially be understood as a beneficial feature, there are many unintended consequences that have caused a surge in antisemitic rhetoric and violence.
In November 2022, a study by The Counter Extremism Project (CEP) revealed that on TamTam there were thirteen antisemitic, extreme right-wing channels promoting neo-Nazi and violent content.
Responses
In extensive interviews conducted by Czymmek, three young German Jewish adults disclosed that experiencing an antisemitic social media post left them with a profound sense of "loss of control," "unawareness of what would happen next," and despair over "the silence of other users." One of the study's participants decided to keep his Jewish identity on social media anonymous. "This anonymity protects me very much, it keeps the hate at bay."
In the online space, CEO of CCDH Imran Ahmed said, there are no limits, and people become radicalized without any boundaries. "The online spaces then have an effect on offline spaces because these people have worsened," Ahmed said. "The failure of these companies is a cost that's paid in lives."
In response to years of increased antisemitic incidents and a significant spike in reports since the start of the Israel-Hamas conflict, several universities have decided to take action. The University of Michigan (U-M) and New York University (NYU) are creating new institutes dedicated to researching and preventing antisemitism. The Raoul Wallenberg Institute, named for the Swedish businessman and humanitarian who saved thousands of Jews during the Holocaust, is being established by the University of Michigan. NYU is establishing the NYU Center for the Study of Antisemitism with the help of a seven-figure donation; the center is anticipated to open in fall 2024.
On July 9, 2024, Meta, the parent company of Facebook and Instagram, announced a new policy to combat antisemitism by banning posts that misuse the term "Zionists" as a cover for hate speech directed towards Jews. With this modification, instances in which the term "Zionist" is used to degrade Jews, promote negative stereotypes, incite violence against Jews, or dispute the existence of Zionists fall within the expanded definition of antisemitism and "tier 1 hate speech".
Previously, on Meta social media platforms, the word "Zionist" was only allowed to be used in specific contexts, including when it was used to refer to Jews or Israelis. The revised policy was developed following discussions with 145 stakeholders, including specialists in history, political science, law, civil rights, and human rights.
As technology and artificial intelligence advance, they have been used in some cases to help remove antisemitic hate on social media. AI systems are given specific keywords and phrases to flag and remove from the internet. However, it is very challenging for AI to distinguish between educational and harmful content, so the removal of antisemitic hate online is often unsuccessful. In some cases, AI works counterproductively, removing educational information rather than harmful rhetoric. For instance, an educational post about the Holocaust intended to counter Holocaust denial on social media was taken down because the AI could not understand the purpose for which the keywords were used.
Antisemitism following the 7 October attacks
According to a report by the Hebrew University of Jerusalem, antisemitism on social media increased following the October 7 Hamas-led attack on Israel. The antisemitic content, according to the report, includes admiration of Adolf Hitler and the Holocaust, and advocating violence against Jewish individuals. This upswing in online antisemitic content not only fuels the dissemination of hatred but also reinforces the worldwide normalization and legitimization of antisemitism.
According to recent findings from the Institute for Strategic Dialogue, there was a 4963% increase in antisemitic comments on YouTube videos related to the conflict in the days following the 2023 Hamas-led attack.
See also
Red triangle (Palestinian symbol)
The Holocaust and social media
Terrorism and social media
Wikipedia and antisemitism
References
Antisemitism
Hate speech
Internet-related controversies | Antisemitism on social media | Technology | 3,732 |
9,968,336 | https://en.wikipedia.org/wiki/XCL2 | Chemokine (C motif) ligand 2 (XCL2) is a small cytokine belonging to the XC chemokine family that is highly related to another chemokine called XCL1. It is predominantly expressed in activated T cells, but can also be found at low levels in unstimulated cells. XCL2 induces chemotaxis of cells expressing the chemokine receptor XCR1. Its gene is located on chromosome 1 in humans.
References
Cytokines | XCL2 | Chemistry | 108 |
52,363,707 | https://en.wikipedia.org/wiki/Chaetocladium%20elegans | Chaetocladium elegans is a species of fungus in the family Mucoraceae.
References
Fungi described in 1890
Mucoraceae
Taxa named by Friedrich Wilhelm Zopf
Fungus species | Chaetocladium elegans | Biology | 40 |
38,992 | https://en.wikipedia.org/wiki/Cosmological%20constant | In cosmology, the cosmological constant (usually denoted by the Greek capital letter lambda: ), alternatively called Einstein's cosmological constant,
is a coefficient that Albert Einstein initially added to his field equations of general relativity. He later removed it; however, much later it was revived to express the energy density of space, or vacuum energy, that arises in quantum mechanics. It is closely associated with the concept of dark energy.
Einstein introduced the constant in 1917 to counterbalance the effect of gravity and achieve a static universe, which was then assumed. Einstein's cosmological constant was abandoned after Edwin Hubble confirmed that the universe was expanding. From the 1930s until the late 1990s, most physicists agreed with Einstein's choice of setting the cosmological constant to zero. That changed with the discovery in 1998 that the expansion of the universe is accelerating, implying that the cosmological constant may have a positive value.
Since the 1990s, studies have shown that, assuming the cosmological principle, around 68% of the mass–energy density of the universe can be attributed to dark energy. The cosmological constant is the simplest possible explanation for dark energy, and is used in the standard model of cosmology known as the ΛCDM model.
According to quantum field theory (QFT), which underlies modern particle physics, empty space is defined by the vacuum state, which is composed of a collection of quantum fields. All these quantum fields exhibit fluctuations in their ground state (lowest energy density) arising from the zero-point energy existing everywhere in space. These zero-point fluctuations should contribute to the cosmological constant , but actual calculations give rise to an enormous vacuum energy. The discrepancy between theorized vacuum energy from quantum field theory and observed vacuum energy from cosmology is a source of major contention, with the values predicted exceeding observation by some 120 orders of magnitude, a discrepancy that has been called "the worst theoretical prediction in the history of physics!". This issue is called the cosmological constant problem and it is one of the greatest mysteries in science with many physicists believing that "the vacuum holds the key to a full understanding of nature".
History
The cosmological constant was originally introduced in Einstein's 1917 paper entitled “Cosmological Considerations in the General Theory of Relativity”. Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow for a static universe: gravity would cause a universe that was initially non-expanding to contract. To counteract this possibility, Einstein added the cosmological constant. However, Einstein was not happy about adding this cosmological term. He later stated that "Since I introduced this term, I had always a bad conscience. ... I am unable to believe that such an ugly thing is actually realized in nature". Einstein's static universe is unstable against matter density perturbations. Furthermore, without the cosmological constant Einstein could have predicted the expansion of the universe before Hubble's observations.
In 1929, not long after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general relativity equations that had been found by the mathematician Alexander Friedmann, working on the Einstein equations of general relativity. Einstein reportedly referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological redshift—as his "biggest blunder" (according to George Gamow).
It transpired that adding the cosmological constant to Einstein's equations does not lead to a static universe at equilibrium because the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe that contracts slightly will continue contracting.
However, the cosmological constant remained a subject of theoretical and empirical interest. Empirically, the cosmological data of recent decades strongly suggest that our universe has a positive cosmological constant. The explanation of this small but positive value is a remaining theoretical challenge, the so-called cosmological constant problem.
Some early generalizations of Einstein's gravitational theory, known as classical unified field theories, either introduced a cosmological constant on theoretical grounds or found that it arose naturally from the mathematics. For example, Arthur Eddington claimed that the cosmological constant version of the vacuum field equation expressed the "epistemological" property that the universe is "self-gauging", and Erwin Schrödinger's pure-affine theory using a simple variational principle produced the field equation with a cosmological term.
In the 1990s, Saul Perlmutter at Lawrence Berkeley National Laboratory, Brian Schmidt of the Australian National University and Adam Riess of the Space Telescope Science Institute were searching for type Ia supernovae. At that time, they expected to observe the deceleration of the supernovae caused by the gravitational attraction of mass, according to Einstein's gravitational theory. The first reports, published in July 1997 from the Supernova Cosmology Project, used the supernova observations to support such a deceleration hypothesis. But soon they found that the supernovae were accelerating away. Both teams announced this surprising result in 1998. It implied that the universe is undergoing accelerating expansion. The cosmological constant is needed to explain such acceleration. Following this discovery, the cosmological constant was reinserted in the general relativity equations.
Sequence of events 1915–1998
In 1915, Einstein publishes his equations of general relativity, without a cosmological constant Λ.
In 1917, Einstein adds the parameter Λ to his equations when he realizes that his theory implies a dynamic universe for which space is a function of time. He then gives this constant a value that makes his Universe model remain static and eternal (Einstein static universe).
In 1922, the Russian physicist Alexander Friedmann mathematically shows that Einstein's equations (whatever the value of Λ) remain valid in a dynamic universe.
In 1927, the Belgian astrophysicist Georges Lemaître shows that the Universe is expanding by combining general relativity with astronomical observations, those of Hubble in particular.
In 1931, Einstein accepts the theory of an expanding universe and proposes, in 1932 with the Dutch physicist and astronomer Willem de Sitter, a model of a continuously expanding universe with zero cosmological constant (Einstein–de Sitter spacetime).
In 1998, two teams of astrophysicists, the Supernova Cosmology Project and the High-Z Supernova Search Team, carried out measurements on distant supernovae which showed that the speed of galaxies' recession in relation to the Milky Way increases over time. The universe is in accelerated expansion, which requires a strictly positive Λ. The universe would contain a mysterious dark energy producing a repulsive force that counterbalances the gravitational braking produced by the matter contained in the universe (see Standard cosmological model). For this work, Perlmutter, Schmidt, and Riess jointly received the Nobel Prize in Physics in 2011.
Equation
The cosmological constant Λ appears in the Einstein field equations in the form

R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}

where the Ricci tensor Rμν, the Ricci scalar R and the metric tensor gμν describe the structure of spacetime, the stress–energy tensor Tμν describes the energy density, momentum density and stress at that point in spacetime, and κ = 8πG/c4. The gravitational constant G and the speed of light c are universal constants. When Λ is zero, this reduces to the field equation of general relativity usually used in the 20th century. When Tμν is zero, the field equation describes empty space (a vacuum).
The cosmological constant has the same effect as an intrinsic energy density of the vacuum, ρvac (and an associated pressure). In this context, it is commonly moved to the right-hand side of the equation using ρvac = Λc2/(8πG). It is common to quote values of energy density directly, though still using the name "cosmological constant". The dimension of Λ is generally understood as inverse length squared (length−2).
Using the values of ΩΛ and the Hubble constant H0 known in 2018, Λ has a value of about 1.1×10−52 m−2, or roughly 10−122 in units of ℓP−2, where ℓP is the Planck length. A positive vacuum energy density resulting from a cosmological constant implies a negative pressure, and vice versa. If the energy density is positive, the associated negative pressure will drive an accelerated expansion of the universe, as observed. (See Dark energy and Cosmic inflation for details.)
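As a rough consistency check (a sketch, not taken from the article: the numbers ΩΛ ≈ 0.69 and H0 ≈ 67.7 (km/s)/Mpc ≈ 2.2×10−18 s−1 are assumed Planck-2018-era values), the value of Λ follows from the Friedmann-model relation between Λ, ΩΛ and H0:

\Lambda = \frac{3\,\Omega_\Lambda H_0^2}{c^2} \approx \frac{3 \times 0.69 \times (2.2\times 10^{-18}\ \mathrm{s^{-1}})^2}{(3.0\times 10^{8}\ \mathrm{m\,s^{-1}})^2} \approx 1.1\times 10^{-52}\ \mathrm{m^{-2}},

which matches the order of magnitude quoted above and, equivalently, about 10−122 ℓP−2.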
ΩΛ (Omega sub lambda)
Instead of the cosmological constant itself, cosmologists often refer to the ratio between the energy density due to the cosmological constant and the critical density of the universe, the tipping point for a sufficient density to stop the universe from expanding forever. This ratio is usually denoted ΩΛ and is estimated to be approximately 0.69, according to results published by the Planck Collaboration in 2018.
In a flat universe, ΩΛ is the fraction of the energy of the universe due to the cosmological constant, i.e., what we would intuitively call the fraction of the universe that is made up of dark energy. Note that this value changes over time: the critical density changes with cosmological time but the energy density due to the cosmological constant remains unchanged throughout the history of the universe, because the amount of dark energy increases as the universe grows but the amount of matter does not.
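A brief worked illustration of why ΩΛ changes over time (a sketch using the standard Friedmann-model definitions; it is not a description of the Planck Collaboration's analysis): the critical density depends on the Hubble parameter, while the energy density associated with Λ does not,

\rho_{\rm crit}(t) = \frac{3 H(t)^2}{8\pi G}, \qquad \rho_\Lambda = \frac{\Lambda c^2}{8\pi G} = \text{constant}, \qquad \Omega_\Lambda(t) = \frac{\rho_\Lambda}{\rho_{\rm crit}(t)}.

As matter dilutes with the expansion, H(t) decreases toward the constant value set by Λ, so ΩΛ(t) grows toward 1 even though ρΛ itself never changes.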
Equation of state
Another ratio that is used by scientists is the equation of state, usually denoted w, which is the ratio of the pressure that dark energy puts on the universe to the energy per unit volume. This ratio is −1 for the cosmological constant used in the Einstein equations; alternative time-varying forms of vacuum energy such as quintessence generally use a different value. The value measured by the Planck Collaboration (2018) is consistent with −1, assuming w does not change over cosmic time.
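A minimal sketch of why w = −1 for a cosmological constant (the symbols are the conventional perfect-fluid ones, not taken from the text above): the vacuum energy associated with Λ has a pressure equal and opposite to its energy density,

w = \frac{p}{\rho c^2}, \qquad p_\Lambda = -\rho_\Lambda c^2 \;\;\Rightarrow\;\; w_\Lambda = -1.

Quintessence and other time-varying models allow w to differ from −1 and to evolve, which is why a measured value consistent with −1 supports the constant-Λ interpretation.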
Positive value
Observations announced in 1998 of the distance–redshift relation for Type Ia supernovae indicated that the expansion of the universe is accelerating, if one assumes the cosmological principle. When combined with measurements of the cosmic microwave background radiation these implied a value of ΩΛ ≈ 0.7, a result which has been supported and refined by more recent measurements (as well as previous works). If one assumes the cosmological principle, as in the case for all models that use the Friedmann–Lemaître–Robertson–Walker metric, while there are other possible causes of an accelerating universe, such as quintessence, the cosmological constant is in most respects the simplest solution. Thus, the Lambda-CDM model, the current standard model of cosmology which uses the FLRW metric, includes the cosmological constant, which is measured to be on the order of 10−52 m−2. It may be expressed as roughly 10−35 s−2 (multiplying by c2) or as 10−122 ℓ−2 (where ℓ is the Planck length). The value is based on recent measurements of the vacuum energy density, about 6×10−27 kg/m3, equivalent to roughly 5.4×10−10 J/m3 (about 3.3 GeV/m3). However, due to the Hubble tension and the CMB dipole, recently it has been proposed that the cosmological principle is no longer true in the late universe and that the FLRW metric breaks down, so it is possible that observations usually attributed to an accelerating universe are simply a result of the cosmological principle not applying in the late universe.
As was shown only recently by the work of 't Hooft, Susskind and others, a positive cosmological constant has surprising consequences, such as a finite maximum entropy of the observable universe (see Holographic principle).
Predictions
Quantum field theory
A major outstanding problem is that most quantum field theories predict a huge value for the quantum vacuum. A common assumption is that the quantum vacuum is equivalent to the cosmological constant. Although no theory exists that supports this assumption, arguments can be made in its favor.
Such arguments are usually based on dimensional analysis and effective field theory. If the universe is described by an effective local quantum field theory down to the Planck scale, then we would expect a cosmological constant of the order of the Planck scale (of order unity in reduced Planck units). As noted above, the measured cosmological constant is smaller than this by a factor of ~10120. This discrepancy has been called "the worst theoretical prediction in the history of physics".
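A back-of-the-envelope version of this discrepancy (a sketch assuming a naive cutoff at the Planck scale; the exact exponent depends on conventions such as factors of 8π and the use of reduced Planck units):

\rho_{\rm Planck} \sim \frac{c^5}{\hbar G^2} \approx 5\times 10^{96}\ \mathrm{kg\,m^{-3}}, \qquad \rho_\Lambda^{\rm obs} \sim 6\times 10^{-27}\ \mathrm{kg\,m^{-3}}, \qquad \frac{\rho_{\rm Planck}}{\rho_\Lambda^{\rm obs}} \sim 10^{123},

which is the origin of the roughly 120-orders-of-magnitude mismatch described in the text.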
Some supersymmetric theories require a cosmological constant that is exactly zero, which further complicates things. This is the cosmological constant problem, the worst problem of fine-tuning in physics: there is no known natural way to derive the tiny cosmological constant used in cosmology from particle physics.
No vacuum in the string theory landscape is known to support a metastable, positive cosmological constant, and in 2018 a group of four physicists advanced a controversial conjecture which would imply that no such universe exists.
Anthropic principle
One possible explanation for the small but non-zero value was noted by Steven Weinberg in 1987 following the anthropic principle. Weinberg explains that if the vacuum energy took different values in different domains of the universe, then observers would necessarily measure values similar to that which is observed: the formation of life-supporting structures would be suppressed in domains where the vacuum energy is much larger. Specifically, if the vacuum energy is negative and its absolute value is substantially larger than it appears to be in the observed universe (say, a factor of 10 larger), holding all other variables (e.g. matter density) constant, that would mean that the universe is closed; furthermore, its lifetime would be shorter than the age of our universe, possibly too short for intelligent life to form. On the other hand, a universe with a large positive cosmological constant would expand too fast, preventing galaxy formation. According to Weinberg, domains where the vacuum energy is compatible with life would be comparatively rare. Using this argument, Weinberg predicted that the cosmological constant would have a value of less than a hundred times the currently accepted value. In 1992, Weinberg refined this prediction of the cosmological constant to 5 to 10 times the matter density.
This argument depends on the vacuum energy density being constant throughout spacetime, as would be expected if dark energy were the cosmological constant. There is no evidence that the vacuum energy does vary, but it may be the case if, for example, the vacuum energy is (even in part) the potential of a scalar field such as the residual inflaton (also see Quintessence). Another theoretical approach that deals with the issue is that of multiverse theories, which predict a large number of "parallel" universes with different laws of physics and/or values of fundamental constants. Again, the anthropic principle states that we can only live in one of the universes that is compatible with some form of intelligent life. Critics claim that these theories, when used as an explanation for fine-tuning, commit the inverse gambler's fallacy.
In 1995, Weinberg's argument was refined by Alexander Vilenkin to predict a value for the cosmological constant that was only ten times the matter density, i.e. about three times the current value since determined.
Failure to detect dark energy
An attempt to directly observe and relate quanta or fields like the chameleon particle or the symmetron theory to dark energy, in a laboratory setting, failed to detect a new force. Inferring the presence of dark energy through its interaction with baryons in the cosmic microwave background has also led to a negative result, although the current analyses have been derived only at the linear perturbation regime. It is also possible that the difficulty in detecting dark energy is due to the fact that the cosmological constant describes an existing, known interaction (e.g. electromagnetic field).
See also
Big Rip
Higgs mechanism
Lambdavacuum solution
Hierarchy problem
Quantum electrodynamics
de Sitter invariant special relativity
Unruh effect
References
Footnotes
Bibliography
Primary literature
Secondary literature: news, popular science articles & books
Secondary literature: review articles, monographs and textbooks
External links
Michael, E., University of Colorado, Department of Astrophysical and Planetary Sciences, "The Cosmological Constant"
Carroll, Sean M., "The Cosmological Constant" (short), "The Cosmological Constant"(extended).
News story: More evidence for dark energy being the cosmological constant
Cosmological constant article from Scholarpedia
Big Bang
General relativity
Theories of gravity
Albert Einstein
Astronomical hypotheses
Dark energy
Physical cosmological concepts | Cosmological constant | Physics,Astronomy | 3,377 |
6,451,387 | https://en.wikipedia.org/wiki/V-block | V-Blocks are precision metalworking jigs typically used to hold round metal rods or pipes for performing drilling or milling operations. They consist of a rectangular steel or cast iron block with a 120 degree channel rotated 45-degrees from the sides, forming a V-shaped channel in the top. A small groove is cut in the bottom of the "V". They often come with screw clamps to hold the work. There are also versions with internal magnets for magnetic work-holding. V-blocks are usually sold in pairs.
External links
How to make a V-block
Boilermaking
Metalworking | V-block | Chemistry | 162 |
3,175,947 | https://en.wikipedia.org/wiki/Slingbox | The Slingbox was a TV streaming media device made by Sling Media that encoded local video for transmission over the Internet to a remote device (sometimes called placeshifting). It allowed users to remotely view and control their cable, satellite, or digital video recorder (DVR) system at home from a remote Internet-connected personal computer, smartphone, or tablet as if they were at home.
On November 9, 2020, Sling Media announced that all Slingboxes had been discontinued, and that the Slingbox servers would close on November 9, 2022, making all devices "inoperable".
History
The Slingbox was first developed in 2002 by two Californian brothers, Blake and Jason Krikorian, who were avid sports fans. They supported the San Francisco Giants, a Major League Baseball team whose games were broadcast regularly by their local TV station. However, when travelling away from their home state, they found they were unable to watch their favorite team because their games were not carried by television stations in other parts of the United States and could not be found for free online. The first edition of the Slingbox came to market in late 2005.
Future
Slingbox hardware is getting a second life thanks to the open-source Slinger project, written in Python.
Technology
Hardware
The traditional Slingbox embeds a video encoding chip to do real-time encoding of a video and audio stream into the SMPTE 421M / VC-1 format that can be transmitted over the Internet via the ASF streaming format. Later Slingboxes also support Apple's HTTP Live Streaming, which requires support for H.264.
The Slingboxes up until the Fourth Generation (or Next Generation Slingbox) used a Texas Instruments chipset. Current generation Slingboxes and OEM products are built around a ViXS chipset.
Control of the hosting video device, usually a set top box, is done through an IR blaster, which, on older Slingboxes, required the use of an IR blaster dongle. Current generation Slingboxes have built in IR blasters on the box itself, though customers can opt to continue to use the IR blaster dongle.
All Slingboxes include an Ethernet port that connects to a local network and out to the Internet. The Slingbox 500 was the first to include built-in Wi-Fi.
Cloud infrastructure
Sling used an Amazon Web Services-based infrastructure to support encoding, relaying streams and analytics. It also sourced data from multiple repositories to help guide recommendations to users, including social networks (Facebook and Twitter) and specialty services like Thuuz for sports.
This infrastructure also allowed Sling to report on aggregate television watching behavior. They have released several infographics and provide a Nielsen-like weekly report of the top shows.
Clients
Slingplayer for Desktop and the Watch client
Viewing content from a Slingbox requires a client application on a PC or mobile device. Sling initially offered a desktop application for Windows and the Macintosh, which was deprecated when the Slingbox Watch website was released. Watch is a NPAPI-based browser plug-in for Microsoft Internet Explorer, Mozilla Firefox, Google Chrome and Apple Safari. This website experience includes the ability to view and control your set top box, an integrated electronic program guide (US/Canada only) and the ability to manage your connected Slingboxes. A registered Sling account is required to access the Watch website. The Dish Anywhere website is based on this technology.
In July 2014, Sling announced the return of the Slingplayer for Desktop application with the launch of the Slingbox M1 and SlingTV.
Slingplayer for Mobile
In addition to the Watch Slingbox website, customers can purchase a SlingPlayer app for their mobile device. Supported platforms include iOS (iPhone and iPad), Android (phones and tablets), Kindle Fire and Microsoft Windows 8.1 tablets. Previously supported platforms include Blackberry, Palm OS and Symbian. The launch price for SlingPlayer apps was $29.99. The price was reduced to $14.99 when the Slingbox 350 and 500 were launched in October 2012.
Slingplayer Mobile for iPhone was demonstrated at Macworld Expo 2009 in January and became available in May of the same year. On May 12, 2009, the Slingplayer App became available at the Apple App Store, but only for US, Canadian and UK accounts, and was originally restricted to Wi-Fi for streaming content. Sling's promotional email confirmed that the Slingplayer for iPhone works with Wi-Fi connections only "at Apple's request" – a decision believed to have been made at the behest of incumbent iPhone network operators such as AT&T and O2. AT&T later relented to allow the app to stream over its cellular network. This change was made externally by AT&T as the SlingPlayer App already features quality scaling of content based on connection type.
In November 2010, Sling Media announced the release of a Slingplayer Mobile app for the iPad. The iPad-specific app offers a higher resolution stream than on other devices with smaller screens. In November 2013, an update added second screen capabilities.
Historically, Microsoft Windows Mobile and Windows Phone 7 platforms were supported. Sling released a native version for the Windows 8 platform in December 2013. This version supports both Windows RT and Windows x86 for tablets, laptops and hybrids.
References
External links
Slingbox Sharing
Set-top box
Television technology
Television placeshifting technology
Companies based in Foster City, California
Dish Network | Slingbox | Technology | 1,123 |
16,851,522 | https://en.wikipedia.org/wiki/NMD3 | 60S ribosomal export protein NMD3 is a protein that in humans is encoded by the NMD3 gene.
Interactions
NMD3 has been shown to interact with XPO1.
References
Further reading | NMD3 | Chemistry | 42 |
5,592,497 | https://en.wikipedia.org/wiki/World%20Nuclear%20Association | World Nuclear Association is the international organization that promotes nuclear power and supports the companies that comprise the global nuclear industry. Its members come from all parts of the nuclear fuel cycle, including uranium mining, uranium conversion, uranium enrichment, nuclear fuel fabrication, plant manufacture, transport, and the disposal of used nuclear fuel, as well as electricity generation itself.
Together, World Nuclear Association members are responsible for 70% of the world's nuclear power as well as the vast majority of world uranium, conversion and enrichment production. The Association says it aims to fulfill a dual role for its members: facilitating their interaction on technical, commercial and policy matters, and promoting wider public understanding of nuclear technology. It has a secretariat of around 30 staff. The Association was founded in 2001 on the basis of the Uranium Institute, itself founded in 1975.
Membership
World Nuclear Association continues to expand its membership, particularly in non-OECD countries where nuclear power is produced or where this option is under active consideration. Members are located in 44 countries representing 80% of the world's population.
The annual subscription fee for an institutional member is based on its size and scale of activity. Upon receiving an inquiry or application, the Association's London-based secretariat determines the fee according to standardized criteria and informs the candidate organisation accordingly. The fee structure provides, in many cases, significant discounts for organisations located in countries outside the OECD.
A low-fee non-commercial membership is available for organisations with a solely academic, research, policy or regulatory function.
A list of current members is published on the World Nuclear Association website.
Charter of Ethics
World Nuclear Association has established a Charter of Ethics to serve as a common credo for its member organizations. This affirmation of values and principles is intended to summarize the responsibilities of the nuclear industry and the surrounding legal and institutional framework that has been constructed through international cooperation to fulfill U.S. President Dwight D. Eisenhower's vision of 'Atoms for Peace'.
Leadership
World Nuclear Association members appoint a Director General and elect a 20-member board of management. The current Director General is Sama Bilbao y León. The Chairman of the board is H.E. Mohamed Al Hammadi, Managing Director and Chief Executive Officer of Emirates Nuclear Energy Corporation. The Vice Chairman is Philippe Knoche, CEO of Orano. The board of management fulfills statutory duties pertaining to the organization's governance and sets World Nuclear Association policies and strategic objectives, subject to approval by the full membership.
Activities and services
Industry interaction
An essential role of World Nuclear Association is to facilitate commercially valuable interaction among its members.
Ongoing World Nuclear Association Working Groups, consisting of members and supported by the secretariat, share information and develop analysis on a range of technical, trade and environmental matters. These subjects include:
Cooperation in reactor design, evaluation and licensing
Radiological protection
Industry economics
Nuclear law
Supply chain
Transport of radioactive materials
Waste management and decommissioning
Capacity optimization
Uranium mining standardization
Construction risk management
Security of the international fuel cycle
Fuel market report working group
When meeting to discuss industry issues, World Nuclear Association members are cautioned to avoid any topic that could potentially create even the impression of an attempt to set prices or engage in other anti-competitive behaviour. Accordingly, topics not discussed in meetings include terms of specific contracts; current or projected prices for products or services; allocation of markets; refusals to deal with particular suppliers or customers; or any similar matters that might impair competition within any segment of the nuclear industry.
Meetings
World Nuclear Association's annual Symposium in London provides a forum for speakers from the nuclear industry. The Association has previously presented an award for 'Distinguished Contribution to the Peaceful Worldwide Use of Nuclear Energy'.
The Association also cooperates with the Nuclear Energy Institute on annual World Nuclear Fuel Cycle meetings for industry representatives concerned with nuclear fuel supply and in particular the uranium market.
Representation
World Nuclear Association represents the interests of the international nuclear industry at key international forums such as:
International Atomic Energy Agency and Nuclear Energy Agency advisory committees on transport and all aspects of nuclear safety
United Nations policy forums focused on sustainable development and climate change. (The Association was in attendance at the 2009 Copenhagen climate change talks and at COP26)
International Commission on Radiological Protection and OSPAR deliberations on radiological protection.
In contrast to earlier less structured forms of industry representation the Association provides a unified voice from a single body; encompassing all manner of industry expertise and perspectives. It is clear and unreserved in its purpose of promoting the maximum feasible use of safe nuclear power.
Public information
The World Nuclear Association public website provides an accessible, non-technical source of information on the global nuclear industry. The site presents reference documents and a wide range of educational and explanatory papers which are constantly updated. Australian nuclear power advocate Ian Hore-Lacy served as the organization's Director of Public Information for 12 years, after working for six years at the now-defunct Melbourne-based Uranium Information Centre. In the late 2000s, the information-disseminating role was assumed by World Nuclear Association and World Nuclear News (WNN).
The Association supports WNN, the authoritative online news service intended to bring accurate and accessible information on developments in nuclear power to the Association's industry readers and the general public. Its output is free of charge and may be widely reproduced in accordance with WNN's copyright policy.
The Association's reactor database contains information on past, present and future nuclear power reactors across the world.
Other services
World Nuclear Association is engaged in a number of other initiatives to promote the peaceful development of nuclear power. These include World Nuclear University (WNU), which is a global partnership between World Nuclear Association, the International Atomic Energy Agency, The OECD Nuclear Energy Agency and the World Association of Nuclear Operators committed to enhancing international education and leadership in the peaceful applications of nuclear science and technology. It runs a series of programmes designed to complement existing institutions of nuclear learning in their curriculum. The premier event on the WNU calendar is the Summer Institute, which runs each year in July and brings together speakers from industry and government to present on all aspects of nuclear power. It also runs five one-week courses per year with partner universities around the world intended to enhance knowledge of today's nuclear industry among students.
See also
Institute of Nuclear Materials Management
International Atomic Energy Agency
World Nuclear Transport Institute
References
External links
World Nuclear Association Symposium
World Nuclear Fuel Cycle
Annual report
Atoms for Peace
International nuclear energy organizations
International organisations based in London
Nuclear industry organizations
Organisations based in the City of Westminster | World Nuclear Association | Engineering | 1,314 |
18,482,886 | https://en.wikipedia.org/wiki/Sussex%20Manifesto | The Sussex Manifesto was a report on science and technology for development written at the request of the United Nations and published in 1970.
History
In the late 1960s the United Nations asked for recommendations on science and technology for development from a team of academics at the Institute of Development Studies (IDS) and the Science Policy Research Unit (SPRU) at the University of Sussex, UK. This team became known as the Sussex Group and their report, Science and Technology to Developing Countries during the Second Development Decade, became known as the Sussex Manifesto.
The Sussex Manifesto was intended as the introductory chapter to the UN World Plan of Action on Science and Technology for Development. But the solutions presented in the Manifesto were deemed too radical to be used for that purpose. It was instead published in 1970 as an annex in Science and Technology for Development: Proposals for the Second United Nations Development Decade, a UN report by the Advisory Committee on the Application of Science and Technology to Development (ACAST).
The Sussex Manifesto helped raise awareness of science and technology for development in UN circles, influenced the design of development institutions, and was used for teaching courses in universities in both the North and the South.
The Sussex Group were Hans Singer (Chairman), Charles Cooper (Secretary), R.C. Desai, Christopher Freeman, Oscar Gish, Stephen Hill and Geoffrey Oldham.
The Sussex Manifesto was originally published as the ‘Draft Introductory Statement for the World Plan of Action for the Application of Science and Technology to Development’, prepared by the ‘Sussex Group’, Annex II in 'Science and Technology for Development: Proposals for the Second Development Decade', United Nations, Dept of Economic and Social Affairs, New York, 1970, Document ST/ECA/133, and reprinted as 'The Sussex Manifesto: Science and Technology to Developing Countries during the Second Development Decade', IDS Reprints 101.
Today
In 2008, one of the authors of the original report, Professor Geoff Oldham, gave a seminar at the STEPS Centre – a research and policy engagement centre based at IDS and SPRU. Following this event, the STEPS Centre decided to create a new manifesto in association with its partners around the world and Professor Oldham. The new publication, Innovation, Sustainability, Development: A New Manifesto, was launched in 2010, forty years after the original.
The New Manifesto has also been translated into Chinese, French, Portuguese and Spanish.
The STEPS Centre is funded by the Economic and Social Research Council (ESRC).
References
External links
The STEPS Centre
Hans Singer obituary
The Sussex Manifesto and its aftermath
Innovation, Sustainability, Development: A New Manifesto
STEPS Centre: About the New Manifesto
University of Sussex
Development studies
Science and technology studies | Sussex Manifesto | Technology | 529 |
46,256,827 | https://en.wikipedia.org/wiki/Chen%20Jining | Chen Jining (born 4 February 1964) is a Chinese environmental scientist and politician who has been serving as Communist Party Secretary of Shanghai and member of the 20th Politburo of the Chinese Communist Party since October 2022.
Chen graduated from the Imperial College London with a PhD in environmental systems analysis in 1992. Staying at the Imperial College after his graduation, he completed his postdoctoral studies in 1994 and served as an assistant researcher from 1994 to 1997. In 1998, he returned to his undergraduate alma mater Tsinghua University in Beijing to serve as vice chair of the Department of Environmental Science and Engineering. He then served as the university's vice president from 2006 to 2007, executive vice president from 2007 to 2012, and president from 2012 to 2015.
Joining the Chinese government in 2015, Chen served as Minister of Environmental Protection of China from 2015 to 2017, Vice Mayor of Beijing from 2017 to 2018, and Mayor of Beijing from 2018 to 2022. In October 2022, he was appointed as the Communist Party Secretary of Shanghai and joined the CCP Politburo.
Early life and education
Chen was born and raised in Gaizhou, Yingkou, Liaoning province in Northeast China. His ancestral home is in Lishu County, Siping, Jilin.
Chen started his undergraduate studies at Tsinghua University in 1981. From Tsinghua University, he received a Bachelor of Engineering with a major in civil and environmental engineering in 1986 and a Master of Science in environmental engineering in 1988. Chen traveled to the United Kingdom for graduate studies at Brunel University London in 1988. Chen transferred to Imperial College London in 1989, where he graduated with a Doctor of Philosophy in civil engineering in 1992.
After receiving his doctorate, Chen worked at Imperial College London as a postdoctoral researcher from 1992 to 1994 and as an assistant researcher from 1994 to 1998.
Academic career
In March 1998, Chen left England to serve as deputy director of the Department of Environmental Engineering at Tsinghua University in Beijing. He was promoted and served as the department director from 1999 to 2006.
In February 2006, he was appointed vice-president of Tsinghua University; a year later, he was promoted to executive vice-president. He concurrently served as Dean of the Graduate School of Tsinghua University from January 2010 to February 2012, and as Dean of the Graduate School at Shenzhen, Tsinghua University, from January 2010 to July 2011.
In February 2012, Chen was appointed president of Tsinghua University, he remained in that position until January 2015, when he was appointed Minister of Environmental Protection of China. At the time of his appointment, he was the youngest member of Li Keqiang's cabinet. In 2015, he was also a member of the judging panel for the Queen Elizabeth Prize for Engineering.
Political career
Ministry of Ecology and Environment
Chen succeeded Zhou Shengxian as party secretary of the Ministry of Environmental Protection on 28 January 2015. He was appointed minister later that year, becoming the youngest national minister at the time, at age 50.
During his term, Under the Dome, a film about air pollution in Northern China, was released. Chen praised the film and thanked its producer. Under the Dome was initially promoted by Chinese state media, but all mentions were removed and the film was censored in March. Chen subsequently stopped mentioning the film in all public events.
In September 2015, Chen pledged to make eight agencies affiliated with the Ministry independent by the end of next year, or revoke their qualifications otherwise. In March 2016, the Ministry of Environmental Protection announced major internal reforms, transitioning from hitting environmental targets to exercising comprehensive governance. Chen held an emergency meeting in October 2016 after Beijing was put on yellow smog alert. In January 2017, he inspected the monitoring of emissions on highways and industrial areas.
Beijing
In May 2017, Chen was appointed acting Mayor of Beijing, becoming the 17th person to hold the office since the founding of the People's Republic of China. Chen became a deputy to the 12th National People's Congress in March 2018.
In February 2020, amidst the growing COVID-19 pandemic, Chen visited several companies in Zhongguancun to check on their operations. He was awarded the Silver Olympic Order after the 2022 Winter Olympics.
Shanghai
Chen became a member of the Politburo after the 20th Party Congress and was appointed Communist Party Secretary of Shanghai, succeeding Li Qiang, who had been elevated to the Politburo Standing Committee. As party secretary, Chen has visited the CCP's historical sites and repeatedly mentioned Xi Jinping's political theories.
In April 2023, Chen met with former Taiwanese President Ma Ying-jeou. In May 2023, after instructions by Xi, Chen convened a meeting to call for an "upgraded version" of the waste disposal scheme first implemented in 2019. In the same month, he met with JPMorgan Chase chief executive Jamie Dimon. In June 2023, Chen met with Tesla, Inc. CEO Elon Musk, where he encouraged Musk to increase investment in China. In July 2023, Chen convened a meeting of the Shanghai Municipal Committee to relay a decision made by the CCP leadership on Beijing to put Dong Yunhu, director of the Shanghai Municipal People's Congress Standing Committee, under investigation.
He met with US Commerce Secretary Gina Raimondo in August 2023. He met with US Senators led by Senate Majority Leader Chuck Schumer in October 2023, where he called for "healthy and stable" China–US relations. In 2024, Chen was in the forefront of efforts to reassure foreign investors amid concerns about the economy. In November 2024, he met with Hong Kong Chief Executive John Lee Ka-chiu, where both sides pledged to increase ties.
Notes
References
1964 births
21st-century mayors of places in China
Living people
Alumni of Brunel University London
Alumni of Imperial College London
Biologists from Liaoning
Chinese civil engineers
Chinese ecologists
Chinese Communist Party politicians from Liaoning
Educators from Liaoning
Engineering academics
Engineers from Liaoning
Environmental engineers
Mayors of Beijing
Members of the 20th Politburo of the Chinese Communist Party
Members of the 19th Central Committee of the Chinese Communist Party
Members of the Standing Committee of the 12th National People's Congress
Ministers of environmental protection of the People's Republic of China
People from Gaizhou
People's Republic of China politicians from Liaoning
Politicians from Yingkou
Political office-holders in Beijing
Presidents of Tsinghua University
Tsinghua University alumni
Academic staff of Tsinghua University
Recipients of the Olympic Order
Standing Members of the CCP Beijing Municipal Committee | Chen Jining | Chemistry,Engineering | 1,318 |
38,793,054 | https://en.wikipedia.org/wiki/Spectrum%20commons%20theory | The Spectrum Commons theory states that the telecommunication radio spectrum should be directly managed by its users rather than regulated by governmental or private institutions. Spectrum management is the process of regulating the use of radio frequencies to promote efficient use and gain a net social benefit. The theory of Spectrum Commons argues that there are new methods and strategies that will allow almost complete open access to this currently regulated commons with unlimited number of persons to share it without causing interference. This would eliminate the need for both a centralized, governmental management of the spectrum and the allocation of specific portions of the spectrum to private actors.
The Spectrum debate
The Spectrum Commons theory was developed to open up the spectrum to everyone. Users can share a spectrum as a commons without prior authorization from a higher governance body or regime. Proponents of spectrum commons theory believe government allocation of the spectrum is inefficient, and that to be a true commons the spectrum must be opened up to users while both government and private control are minimized. The promise of the commons approach, as the technologist George Gilder once put it: "You can use the spectrum as much as you want as long as you don't collide with anyone else or pollute it with high-powered noise or other nuisances."
The most basic characteristic of spectrum commons theory is the unlimited access to spectrum resources, but as most modern theorists point out, there is a need for some constraint of those resources. A commons by definition is a resource that is owned or controlled jointly by a group of individuals. In order for a commons to be viable, someone must control the resource and set orderly sharing rules to govern its use.
The radio spectrum is a shared resource that perhaps most strikingly affects the well being of society. Its use is governed by a set of rules and narrow restrictions, designed to limit interference, whose origins go back nearly a century. While in recent years some of those rules have been replaced by more flexible market like arrangements, the fundamental approach of this institution remains essentially unchanged.
The early days of radio communication had no regulations, and everyone could use the spectrum without limitation. When a particular band was filled up or overused, it created harmful interference. In order to manage the spectrum and prevent harmful interference, national regulatory authorities (NRAs) began to regulate its use. The period without regulation only lasted a few years, but this concept guided Spectrum Commons Theory.
In the 1950s, economist Ronald Coase pointed out that the radio spectrum was no scarcer than wood or wheat, yet government did not routinely ration those items. Coase instead proposed the private ownership of, and a market in, spectrum, which would lead to a better allocation of the resource and avoid rent-seeking behavior by would be users of the spectrum. In the late 1990s, it seemed like the property rights view might carry the day as Congress finally allowed the FCC to auction licenses to use spectrum.
Radio spectrum is doled out to users by what the Federal Communications Commission calls a “command-and-control” process. The [FCC] first carves out a block of spectrum and decides to what use it will be put (e.g., television, mobile telephony). Then, the agency gives away, at no charge, the right to use the spectrum to applicants it deems appropriate. The FCC makes its choices based largely on a public record generated by a regulatory proceeding. The rationale for such a system has been that the radio spectrum is a scarce resource, that there are more people who would like to use it than there is space available, and thus that the government must apportion it lest there be chaos.
Types of Commons
Complete Open Commons
Although Spectrum Commons Theory conceptually aims at a completely free and open environment, in practice this idea is flawed. A completely open commons is a regime under which anyone has access to an unowned resource without limitation; no one controls access to the resource under open access. As previously mentioned, however, in order for a commons to be viable, someone must control the resource and set orderly sharing rules to govern its use. While it is true that access to a commons can be open, this does not mean there is no central rule-setting authority.
Complete open commons is not a feasible regime for spectrum because, as a scarce resource, it will be subject to tragedy. Even given new spectrum-sharing technologies, a controller is still needed because these technologies require standards setting and enforcement in order to function.
Market Based Commons
Economists, who have long been skeptical about the ability of government agencies to allocate resources efficiently by “picking winners,” have preponderantly favored a market approach to the allocation of resources generally, and to the allocation of the spectrum in particular. As early as 1959, Ronald Coase wrote that spectrum was a fixed factor of production, like land or labor, and should be treated in the same way, with its use determined by the pricing system and awarded to the highest bidder. Coase concluded that government allocation of spectrum-use rights was not necessary to prevent interference and that, in fact, by preempting market allocation of spectrum, regulation was the source of extreme inefficiency.
Economists since Coase have generally favored a market-based approach: if there is profit to be made from charging an entrance fee to such a shared resource (as with a privately run park), then private enterprise and the profit motive can be relied upon to lead firms to carry out the necessary arrangements. And if entry into the commons is sufficiently beneficial to the entrants, there will indeed be profits to be made by giving them the opportunity to do so.
Supercommons
Another way to expand on the Spectrum Commons Theory is looking at it as a supercommons. As Werbach points out, a supercommons can operate alongside the property and commons regimes, which are just different configurations of usage rights associated with spectrum. In other words, the commons would be the baseline, with property encompassed within it, rather than the reverse. Bandwidth would not need to be infinite to justify a fundamental reconceptualization of the spectrum debate. Even with real-world scarcity and transaction-cost constraints, a default rule allowing unfettered wireless communication would most effectively balance interests to maximize capacity.
The initial legal rule for this spectrum should be universal access. Anyone would be permitted to transmit anywhere, at any time, in any manner, so long as they did not impose an excessive burden on others.
Modern Examples
Propagate Network's Swarm Logic Software
This software enables different devices to communicate with one another and to choose non-conflicting frequencies, or access points that will adjust their power levels to eliminate overlap. If this technology were able to reach a critical mass of adoption, even in localized areas, it could conceivably minimize the transaction costs necessary to adapt to neighboring uses of commons-access spectrum. For neighboring buildings with scores of Wi-Fi transmitters, such technologies could prove very important, ensuring that different signals did not overlap and interfere with each other, thereby slowing data transmission and possibly triggering the destructive cycle of behavior noted above. Moreover, a logical extension of the swarm logic software is a function that could enable neighbors to identify those who deviated from accepted social norms in using commons-access spectrum and, concomitantly, lower enforcement costs. Indeed, collective efforts, such as the Broadband Access Network Coordination ("BANC"), have already taken root to facilitate joint and controlled efforts to limit interference.
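To make the coordination idea concrete, the following is a minimal, hypothetical sketch (in Python) of the kind of behavior such software is described as performing: each access point picks a channel not used by its immediate neighbors, falling back to the least-used channel when none is free. The channel numbers, network layout and greedy strategy are illustrative assumptions, not Propagate Networks' actual algorithm.

```python
# Hypothetical, simplified model of neighbor-aware channel selection.
# Names and data are illustrative only.

CHANNELS = [1, 6, 11]  # the three non-overlapping 2.4 GHz Wi-Fi channels

def assign_channels(neighbours):
    """Greedy assignment. `neighbours` maps each access point to the set of
    access points close enough to interfere with it."""
    assignment = {}
    for ap in neighbours:
        used_nearby = [assignment[n] for n in neighbours[ap] if n in assignment]
        free = [c for c in CHANNELS if c not in used_nearby]
        # If every channel is already taken nearby, fall back to the least-used
        # one (a real system would also lower transmit power here).
        assignment[ap] = free[0] if free else min(CHANNELS, key=used_nearby.count)
    return assignment

print(assign_channels({"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}))
# {'A': 1, 'B': 6, 'C': 1} -- adjacent access points end up on different channels
```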
References
Radio spectrum
Wireless networking
Radio resource management | Spectrum commons theory | Physics,Technology,Engineering | 1,505 |
2,170,017 | https://en.wikipedia.org/wiki/Energy%20Policy%20Act%20of%202005 | The Energy Policy Act of 2005 () is a federal law signed by President George W. Bush on August 8, 2005, at Sandia National Laboratories in Albuquerque, New Mexico. The act, described by proponents as an attempt to combat growing energy problems, changed US energy policy by providing tax incentives and loan guarantees for energy production of various types. The most consequential aspect of the law was to greatly increase ethanol production to be blended with gasoline. The law also repealed the Public Utility Holding Company Act of 1935, effective February 2006.
Provisions
General provisions
The Act increases the amount of biofuel (usually ethanol) that must be mixed with gasoline sold in the United States to 4 billion US gallons by 2006, 6.1 billion by 2009 and 7.5 billion by 2012; two years later, the Energy Independence and Security Act of 2007 extended the target to 36 billion US gallons by 2022.
Under an amendment in the American Recovery and Reinvestment Act of 2009, Section 406, the Energy Policy Act of 2005 authorizes loan guarantees for innovative technologies that avoid greenhouse gases, which might include advanced nuclear reactor designs, such as pebble bed modular reactors (PBMRs) as well as carbon capture and storage and renewable energy;
It seeks to increase coal as an energy source while also reducing air pollution, through authorizing $200 million annually for clean coal initiatives, repealing the current cap on coal leases, allowing the advanced payment of royalties from coal mines and requiring an assessment of coal resources on federal lands that are not national parks;
It authorizes tax credits for wind and other alternative energy producers;
It adds ocean energy sources, including wave and tidal power for the first time as separately identified, renewable technologies;
It authorizes $50 million annually over the life of the law for biomass grants;
It includes provisions aimed at making geothermal energy more competitive with fossil fuels in generating electricity;
It requires the Department of Energy to:
Study and report on existing natural energy resources including wind, solar, waves and tides;
Study and report on national benefits of demand response and make a recommendation on achieving specific levels of benefits and encourages time-based pricing and other forms of demand response as a policy decision;
Designate National Interest Electric Transmission Corridors where there are significant transmission limitations adversely affecting the public (the Federal Energy Regulatory Commission may authorize federal permits for transmission projects in these regions);
Report in one year on how to dispose of high-level nuclear waste;
It authorizes the Department of the Interior to grant leases for activity that involves the production, transportation or transmission of energy on the Outer Continental Shelf lands from sources other than gas and oil (Section 388);
It requires all public electric utilities to offer net metering on request to their customers;
It prohibits the manufacture and importation of mercury-vapor lamp ballasts after January 1, 2008;
It provides tax breaks for those making energy conservation improvements to their homes;
It provides incentives to companies to drill for oil in the Gulf of Mexico;
It exempts oil and gas producers from certain requirements of the Safe Drinking Water Act;
It extends the daylight saving time by four to five weeks, depending upon the year (see below);
It requires that no drilling for gas or oil may be done in or underneath the Great Lakes;
It requires that the Federal Fleet vehicles capable of operating on alternative fuels be operated on these fuels exclusively (Section 701);
It sets federal reliability standards regulating the electrical grid (done in response to the 2003 North America blackout);
It includes nuclear-specific provisions;
It extends the Price-Anderson Nuclear Industries Indemnity Act through 2025;
It authorizes cost-overrun support of up to $2 billion total for up to six new nuclear power plants;
It authorizes a production tax credit of up to $125 million total a year, estimated at 1.8 US¢/kWh during the first eight years of operation for the first 6,000 MW of capacity, consistent with renewables;
It authorizes loan guarantees of up to 80% of project cost to be repaid within 30 years or 90% of the project's life;
It authorizes $2.95 billion for R&D and the building of an advanced hydrogen cogeneration reactor at Idaho National Laboratory;
It authorizes 'standby support' for new reactor delays, offsetting the financial impact of delays beyond the industry's control for the first six reactors, including 100% coverage of the first two plants with up to $500 million each and 50% of the cost of delays for plants three through six with up to $350 million each;
It allows nuclear plant employees and certain contractors to carry firearms;
It prohibits the sale, export or transfer of nuclear materials and "sensitive nuclear technology" to any state sponsor of terrorist activities;
It updates tax treatment of decommissioning funds;
The law exempted fluids used in the natural gas extraction process of hydraulic fracturing (fracking) from protections under the Clean Air Act, Clean Water Act, Safe Drinking Water Act, and CERCLA ("Superfund").
It directs the Secretary of the Interior to complete a programmatic environmental impact statement for a commercial leasing program for oil shale and tar sands resources on public lands with an emphasis on the most geologically prospective lands within each of the states of Colorado, Utah, and Wyoming.
Tax reductions by subject area
$4.3 billion for nuclear power
$2.8 billion for fossil fuel production
$2.7 billion to extend the renewable electricity production credit
$1.6 billion in tax incentives for investments in "clean coal" facilities
$1.3 billion for energy conservation and efficiency
$1.3 billion for alternative fuel vehicles and fuels (bioethanol, biomethane, liquified natural gas, propane)
$500 million Clean Renewable Energy Bonds (CREBS) for government agencies for renewable energy projects.
Change to daylight saving time
The law amended the Uniform Time Act of 1966 by changing the start and end dates of daylight saving time, beginning in 2007. Clocks were set ahead one hour on the second Sunday of March (March 11, 2007) instead of on the first Sunday of April (April 1, 2007). Clocks were set back one hour on the first Sunday of November (November 4, 2007), rather than on the last Sunday of October (October 28, 2007). This had the net effect of slightly lengthening the duration of daylight saving time.
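As a small illustration of the new rule, the sketch below computes the second Sunday of March and the first Sunday of November for a given year; the 2007 dates match those cited above. The helper name and the example year are arbitrary.

```python
import datetime

def nth_sunday(year, month, n):
    """Return the date of the n-th Sunday of the given month."""
    d = datetime.date(year, month, 1)
    # weekday(): Monday=0 ... Sunday=6; advance to the first Sunday of the month
    d += datetime.timedelta(days=(6 - d.weekday()) % 7)
    return d + datetime.timedelta(weeks=n - 1)

year = 2007
print("DST starts:", nth_sunday(year, 3, 2))   # 2007-03-11 (second Sunday of March)
print("DST ends:  ", nth_sunday(year, 11, 1))  # 2007-11-04 (first Sunday of November)
```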
Lobbyists for this provision included the Sporting Goods Manufacturers Association, the National Association of Convenience Stores, and the National Retinitis Pigmentosa Foundation Fighting Blindness.
Lobbyists against this provision included the U.S. Conference of Catholic Bishops, the United Synagogue of Conservative Judaism, the National Parent-Teacher Association, the Calendaring and Scheduling Consortium, the Edison Electric Institute, and the Air Transport Association. This section of the act is controversial; some have questioned whether daylight saving results in net energy savings.
Commercial building deduction
The Act created the Energy Efficient Commercial Buildings Tax Deduction, a special financial incentive designed to reduce the initial cost of investing in energy-efficient building systems via an accelerated tax deduction under section §179D of the Internal Revenue Code (IRC). Many building owners are unaware that the Energy Policy Act of 2005 includes a tax deduction (§179D) for investments in "energy efficient commercial building property" designed to significantly reduce the heating, cooling, water heating and interior lighting cost of new or existing commercial buildings placed into service between January 1, 2006 and December 31, 2013.
§179D includes full and partial tax deductions for investments in energy-efficient commercial buildings that are designed to increase the efficiency of energy-consuming functions: up to $0.60 per square foot for lighting, $0.60 for HVAC and $0.60 for the building envelope, creating a potential deduction of $1.80 per square foot. Interior lighting may also be improved using the Interim Lighting Rule, which provides a simplified process to earn the deduction, capped at $0.30-$0.60 per square foot. Improvements are compared to a baseline of ASHRAE 2001 standards.
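As a rough illustration of the arithmetic only (not of the certification process), the sketch below applies the per-square-foot rates quoted above to a hypothetical building; actual eligibility depends on certified energy modelling, which this does not attempt.

```python
# Illustrative §179D arithmetic: $0.60/sq ft per qualifying system, $1.80/sq ft cap.
# (The Interim Lighting Rule's $0.30-$0.60 sliding scale is not modelled here.)

RATES = {"lighting": 0.60, "hvac": 0.60, "envelope": 0.60}  # $ per square foot
CAP = 1.80                                                  # $ per square foot

def deduction(square_feet, qualifying_systems):
    per_sqft = min(sum(RATES[s] for s in qualifying_systems), CAP)
    return square_feet * per_sqft

# A hypothetical 50,000 sq ft building qualifying for lighting and HVAC only:
print(deduction(50_000, ["lighting", "hvac"]))  # 60000.0  (i.e. $1.20 per sq ft)
```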
To obtain these benefits, the facilities/energy division of a business, its tax department, and a firm specializing in EPAct 179D deductions needed to cooperate. IRS-mandated software had to be used and an independent third party had to certify the qualification. For municipal buildings, benefits were passed through to the primary designers/architects in an attempt to encourage innovative municipal design.
The Commercial Buildings Tax Deduction expiration date had been extended twice, last by the Energy Improvement and Extension Act of 2008. With this extension, the CBTD could be claimed for qualifying projects completed before January 1, 2014.
Energy management
The commercial building tax deductions could be used to improve the payback period of a prospective energy improvement investment. The deductions could be combined by participating in demand response programs where building owners agree to curtail usage at peak times for a premium. The most common qualifying projects were in the area of lighting.
Energy savings
Summary of Energy Savings Percentages Provided by IRS Guidance
Percentages permitted under Notice 2006-52
(Effective for property placed in service January 1, 2006 – December 31, 2008)
Interior Lighting Systems 16⅔%,
Heating, Cooling, Ventilation, and Hot Water Systems 16⅔%,
Building Envelope 16⅔%.
Percentages permitted under Notice 2008-40
(Effective for property placed in service January 1, 2006 – December 31, 2013)
Interior Lighting Systems 20%,
Heating, Cooling, Ventilation, and Hot Water Systems 20%,
Building Envelope 10%.
Percentages permitted under Notice 2012-22
Interior Lighting Systems 25%,
Heating, Cooling, Ventilation, and Hot Water Systems 15%,
Building Envelope 10%.
Notice 2012-22 is effective through December 31, 2013; if §179D is extended beyond December 31, 2013, it also remains in effect (except as otherwise provided in an amendment of §179D or the guidance thereunder) during the period of the extension.
Cost estimate
The Congressional Budget Office (CBO) review of the conference version of the bill estimated the Act would increase direct spending by $2.2 billion over the 2006–2010 period, and by $1.6 billion over the 2006–2015 period. The CBO did not attempt to estimate additional effects on discretionary spending. The CBO and the Joint Committee on Taxation estimated that the legislation would reduce revenues by $7.9 billion over the 2005–2010 period and by $12.3 billion over the 2005–2015 period.
Support
The collective reduction in national consumption of energy (gas and electricity) is significant for home heating. The Act provided tangible financial incentives (tax credits) for average homeowners to make environmentally positive changes to their homes. It made improvements to home energy use more affordable for walls, doors, windows, roofs, water heaters, etc. Consumer spending, and hence the national economy, was abetted. Industry grew for the manufacture of these environmentally positive improvements. These positive improvements have been near- and long-term in effect.
The collective reduction in national consumption of oil is significant for automotive vehicles. The Act provided tangible financial incentives (tax credits) for operators of hybrid vehicles. It helped fuel competition among auto makers to meet rising demands for fuel-efficient vehicles. Consumer spending, and hence the national economy, was abetted. Dependence on imported oil was reduced. The national trade deficit was improved. Industry grew for manufacture of these environmentally positive improvements. These positive improvements have been near and long-term in effect.
Criticism
The Washington Post contended that the spending bill was a broad collection of subsidies for United States energy companies; in particular, the nuclear and oil industries.
Speaking for the National Republicans for Environmental Protection Association, President Martha Marks said that the organization was disappointed in the law because it did not support conservation enough, and continued to subsidize the well-established oil and gas industries that didn't require subsidizing.
The law did not include provisions for drilling in the Arctic National Wildlife Refuge (ANWR); some Republicans claimed "access to the abundant oil reserves in ANWR would strengthen America's energy independence without harming the environment."
Senator Hillary Clinton criticized Senator Barack Obama's vote for the bill in the 2008 Democratic Primary.
Legislative history
The Act was voted on and passed twice by the United States Senate, once prior to conference committee, and once after. In both cases, there were numerous senators who voted against the bill. John McCain, the Republican Party nominee for President of the United States in the 2008 election voted against the bill. Democrat Barack Obama, President of the United States from January 2009 to January 2017, voted in favor of the bill.
Provisions in the original bill that were not in the act
Limited liability for producers of MTBE.
Drilling for oil in the Arctic National Wildlife Refuge (ANWR).
Increasing vehicle efficiency standards (CAFE).
Requiring increased reliance on non-greenhouse gas-emitting energy sources similar to the Kyoto Protocol.
Removing from 18 CFR Part 366.1 the definitions of "electric utility company" and exempt wholesale generator (EWG), so that an EWG would not be an electric utility company.
Preliminary Senate vote
June 28, 2005, 10:00 a.m. Yeas - 85, Nays - 12
Conference committee
The bill's conference committee included 14 Senators and 51 House members. The senators on the committee were: Republicans Domenici, Craig, Thomas, Alexander, Murkowski, Burr, Grassley and Democrats Bingaman, Akaka, Dorgan, Wyden, Johnson, and Baucus.
Final Senate vote
July 29, 2005, 12:50 p.m. Yeas - 74, Nays - 26
See also
Energy Policy Act of 1992
Public Utility Regulatory Policies Act (PURPA) of 1978
Demand response
Energy crisis
FutureGen, zero-emissions coal-fired power plant
Hydrogen economy
Internal Revenue Service
Loan guarantee
Nuclear Power 2010 Program
Oil depletion
Oil industry
Power plant
Price-Anderson Nuclear Industries Indemnity Act
Public Utility Holding Company Act of 1935
Renewable energy in the United States
Synthetic Liquid Fuels Act
Energy policy of the United States
References
External links
Government
Energy Policy Act of 2005 as amended (PDF/details) in the GPO Statute Compilations collection
Energy Policy Act of 2005 as enacted (details) in the US Statutes at Large
on Congress.gov
Department of Energy spotlight on the bill - listing consumer savings (tax breaks).
Official News release and Allocution Bush / Albuquerque / 2005-08-08
Congressional Budget Office Cost Estimate for the bill conference agreement, July 27, 2005
Research Service summary
Events
GovEnergy Workshop and Trade Show
News
Christian Science Monitor: How Much New Oil? Not a Lot
Boston Herald: Editorial
Reuters: brief summary
MSNBC: news story
TaxPayer.net: How the Bill Passed – a view of the reasons for the bills passage and its costs to taxpayers. See also: TaxPayer.net on Subsidies
Yahoo! News: bill signing
CNN: Bush: Energy bill effects will be long-term
WashingtonWatch.com page on P.L. 109-58: The Energy Policy Act of 2005
InfoWorld.com Sustainable IT blog: New daylight saving time not so bright an idea – a criticism of the change to daylight saving time
Non-profit
Clean Fuels Ohio - This site focuses on alternative fuels as well as alt-fuels incentives created by the Energy Policy Act of 2005.
2005 in the environment
United States federal energy legislation
United States federal taxation legislation
Energy policy
Renewable energy law
Acts of the 109th United States Congress
Daylight saving time in the United States | Energy Policy Act of 2005 | Environmental_science | 3,139 |
40,918,274 | https://en.wikipedia.org/wiki/DIOP | DIOP (2,3-O-isopropylidene-2,3-dihydroxy-1,4-bis(diphenylphosphino)butane) is an organophosphorus compound that is used as a chiral ligand in asymmetric catalysis. It is a white solid that is soluble in organic solvents.
DIOP is prepared from the acetonide of d,l-tartaric acid, which is reduced prior to attachment of the PPh2 substituents.
Use
The DIOP ligand binds to metals via conformationally flexible seven-membered C4P2M chelate ring.
DIOP is historically important in the development of ligands for use in asymmetric catalysis, an atom-economical method for the preparation of chiral compounds. Described in 1971, it was the first example of a C2-symmetric diphosphine. Its complexes have been applied to the reduction of prochiral olefins, ketones, and imines. Knowles et al. independently reported the related C2-symmetric diphosphine DIPAMP.
Since the discovery of DIOP, many analogues of DIOP have been introduced. These DIOP derivatives include MOD-DIOP, Cy-DIOP, DIPAMP, and DBP-DIOP. Among these derivatives, DBP-DIOP exhibits good regio- and enantioselectivity in the hydroformylation of butenes and styrene. DIOP was the first chiral ligand used in platinum-tin-catalyzed hydroformylation. The reactivity, chemo- and enantioselectivity of DIOP-based catalysts are influenced by CO and H2 pressure and by solvent polarity. The best results in asymmetric hydroformylation are achieved in solvents of medium polarity, such as benzene and toluene.
References
Chelating agents
Diphosphines
Phenyl compounds | DIOP | Chemistry | 409 |
28,013,942 | https://en.wikipedia.org/wiki/East-East | East-East is a multiplex architectural event jointly held by the Lithuanian and Japanese architects and students of architecture.
History
The concept of the Lithuania-Japan architectural event was conceived and its implementation coordinated by Dainius Kamaitis, a Lithuanian diplomat and former ambassador to Japan. He paved the way for bilateral exchange in the architecture domain by approaching both sides with a proposal to establish links which had not existed before. The idea was successfully implemented in Kaunas in cooperation with the Kaunas City Municipality and the Kaunas Section of the Architects Association of Lithuania in 2002. As the event enjoyed considerable acclaim on both sides, following new initiatives by D.Kamaitis, subsequent architectural forums were held in Vilnius, Tokyo and Kaunas, in 2009, 2011 and 2013, respectively. All the events received full support from the Architects Association of Lithuania and the Japan Institute of Architects.
In 2019, the Government of Japan awarded D.Kamaitis with The Order of the Rising Sun, Gold and Silver Star, in recognition to his contribution to strengthening and promoting friendly relations between Japan and Lithuania.
Origin of name
The name East-East is derived from the concept that Lithuania is located in the east of Europe, while Japan lies in the east of Asia. It implies mutual understanding, close cooperation and harmony.
Objectives and structure
The initial target of establishing the links between the Lithuanian and Japanese architects has been achieved already. The long-term objectives are to strengthen the ties and make the joint events into a tradition, in order to foster continuous interaction not only between architects, but between students of architecture as well. The students' exchange is recognized by both sides as bearing particular importance to developing the future relations.
East-East spans almost a week and is built on three pillars:
exhibition of architectural works
public forum
students workshop.
East-East 1
The first event was held between July 30 and August 1, 2002, in Kaunas, at Mykolas Žilinskas Art Gallery. The Japanese delegation was led by the 1993 Pritzker Prize (often referred to as the Nobel Prize of architecture) winner Fumihiko Maki and included seven leading Japanese architects Taro Ashihara, Chiaki Arai, Tetsuo Furuichi, George Kunihiro, Koh Kitayama, Hidetoshi Ohno and Kengo Kuma.
The keynote presentation by Fumihiko Maki, who was awarded honorary membership of the Architects Association of Lithuania, aroused considerable public interest. On behalf of the host side, Linas Tuleikis, Chairman of the Kaunas Section of the Architects Association of Lithuania, gave a lecture on contemporary Lithuanian architecture.
Exhibition
Joint exhibition of works by the Lithuanian and Japanese architects kicked off at the very outset of the event. It was honoured by the very presence of Valdas Adamkus, President of Lithuania. Besides the works of the Japanese architects present at the event, those of Nobuaki Furuya, Kazuo Iwamura, Atsushi Kitagawara, Hiroshi Naito, Tadasu Ohe and Ken Yokogawa were exhibited as well. On the Lithuanian side of the exhibition, the works of Vilius Adomavičius, Audrius Ambrasas, Artūras Asauskas, Gintaras Čaikauskas, Darius Čiuta, Gediminas Jurevičius, (late) Algimantas Kančas, Audrys Karalius, Šarūnas Kiaunė, Kęstutis Kisielius, Sigitas Kuncevičius, Saulius Mikštas, Gintautas Natkevičius, Rolandas Palekas, Saulius Pamerneckis, Kęstutis Pempė, Ramūnas Raslavičius were presented.
Public forum
A public forum was held on July 30–31. It was divided into the following five sessions:
New program
New building type
New order
New material
East meets East.
A broad and interesting discussion on the above topics evolved. The Japanese architects presented their ideas featuring their own practical applications, while their Lithuanian counterparts Tomas Grunskis, Jūratė Tutlytė, Vytautas Petrušonis and Jonas Audėjaitis were more concerned with theoretical generalization.
Students workshop
The workshop was held between July 27 and August 1 at Kaunas Art Institute of Vilnius Academy of Arts, dealing with the acute issue which Kaunas is facing in relation to adjacent rivers. During the Soviet period of the Lithuanian history, rapidly developing industry in Kaunas city left a significant trace in its urban fabric. After the restoration of independence, most of the bigger industrial plants built on the bank of Nemunas river were closed. As a result, and also due to the transport reorganization scheme implemented during the Soviet period, despite its unique location on the confluence of two rivers, the city has virtually lost any contact with them.
The workshop groups were given the task to find effective points and propose a unique architectural concept which would facilitate the return of picturesque but desolated riverside areas back to the city.
A group of 17 Japanese and 19 Lithuanian students was split into 6 mixed teams. The design named "Tunnels" (by Io Kato, Shunsuke Tomida, Hiroaki Kishimoto, Tomas Kučinskas, Aurėja Leskauskaitė, Eivaras Rastauskas) was selected as the winning entry. "Parasite" was awarded the second prize, and the third award went to "Magic Box".
The students represented the following higher institutions:
in Japan – Tokyo University of the Arts, Yokohama National University, Nihon University, University of Tokyo, Kokushikan University, Chiba Institute of Technology and Tokyo University of Science
in Lithuania – Kaunas University of Technology, Vilnius Academy of Arts, Kaunas Faculty of Vilnius Academy of Arts, Vilnius Gediminas Technical University.
East-East 2
The second event took place in Vilnius and was held at Vilnius City Municipality and Contemporary Art Centre between June 30 and July 4, 2009. The renowned architect Riken Yamamoto led the Japanese delegation which included Taro Ashihara, George Kunihiro, Ken Yokogawa, Nobuaki Furuya, Takaharu Tezuka, Taira Nishizawa, Manabu Chiba and Hiroshi Sambuichi. At the opening of the event, Riken Yamamoto delivered the keynote lecture at Vilnius City Municipality.
Exhibition
Five Japanese architects Takaharu Tezuka, Taira Nishizawa, Manabu Chiba, Hiroshi Sambuichi and Shuhei Endo (the latter was not present at the event himself) exhibited their works under the title "New Wave of Japanese Architecture 2009" at Vilnius City Municipality.
Public forum
On July 1, a public forum was held at Contemporary Art Centre, featuring lectures by two speakers from each side, followed by a discussion. Taro Ashihara and Tomas Grunskis presented their views on public space as the focus of life in a city, and the philosophy of the traditional house was introduced by Ken Yokogawa and Gintaras Čaikauskas.
Discussion on architecture was continued on July 3 at the international conference "Architecture: a Part of Culture (?)" held at Vilnius Town Hall. George Kunihiro and Nobuaki Furuya were among the lecturers.
Students workshop
Four mixed groups comprising 10 students each from Japan and Lithuania generated ideas aimed at the recovery of a problematic area. The point at issue – Park of Architecture, an area of about 58 hectares in one of the most beautiful locations in close proximity to Vilnius Old Town. Since the 19th century, noxious and polluting industries were developed there, which resulted in the loss of interface between the area and its natural environment, architectural integrity and broken functional and social links with the city. The workshop was therefore tasked with proposing "hot spots" which would help in resurrecting the Park of Architecture and give new life to this socially degraded part of Vilnius city.
On July 4, during the presentation of works at Cultural Platform KultFlux located next to Mindaugas Bridge, the design named "Valley" (by Hiroshi Yamada, Hiroyuki Nogami, Ježi Stankevič, Marius Ščerbinskas and Rasa Chmieliauskaitė, consultant architects George Kunihiro and Gintaras Čaikauskas) was selected as the winner. The idea of a valley was proposed as a public space, part of which would have to be flooded with water. Thus the once industrial plants surrounded with water would be turned into buildings for public use: swimming pool, theater, cinema etc. Elevated horizontal surfaces would be erected between the buildings in order to link the former plants with the land, serving as public spaces for communication and leisure.
The students represented the following higher institutions:
in Japan – Waseda University, Tokai University, Kyoto Institute of Technology, Kokushikan University
in Lithuania – Kaunas University of Technology, Vilnius Academy of Arts, Kaunas Faculty of Vilnius Academy of Arts, Vilnius Gediminas Technical University.
East-East 3
The third event was held in Tokyo at Japan Institute of Architects, Ginza TS Building and Gyoko-dori Underground Gallery between May 31 and June 4, 2011. The Lithuanian delegation included ten leading architects Gražina Janulytė-Bernotienė, Gintaras Balčytis, Linas Tuleikis, (late) Algimantas Kančas, Gintaras Čaikauskas, Linas Naujokaitis, Rolandas Palekas, Marius Šaliamoras, Donaldas Trainauskas, Gintautas Vieversys and 10 students of architecture.
Exhibition
Japanese and Lithuanian architects exhibited their designs at Gyoko-dori Underground Gallery between June 1 and 29. It was a part of an exhibition UIA2011 TOKYO 111 Days Before which kicked off as a pre-event to the 24th World Congress of Architecture in Tokyo (UIA2011).
The Lithuanian part of the exhibition included works by architectural firms, project teams and individual architects. Firms represented:
Kančo studija, Ambraso architektų biuras, G.Natkevičius ir partneriai, E.Miliūno studija, R.Paleko Arch-studija, Architektūros estetikos studija, Vilius ir partneriai, Vilniaus architektūros studija, Laimos ir Ginto projektai, G.Janulytės-Bernotienės studija, 4PLIUS, Architektūros linija, Gečia, Dviejų grupė, Urbanistika.
Project teams and individual architects included:
Tadas Balčiūnas, Vytautas Biekša, Marius Kanevičius; Jūras Balkevičius, (late) Vytautas Čekanauskas, Lina Masliukienė, Marius Šaliamoras, Algirdas Umbrasas; Kęstutis Pempė, (late) Gytis Ramunis; Darius Čiuta, Gintaras Auželis; Alvydas Šeibokas, Gabrielis Malžinskas; Andrius Skiezgelas, Aleksandras Kavaliauskas, Martynas Nagelė; Kęstutis Lupeikis.
Japanese participants displayed their own individual designs:
Fumihiko Maki, Riken Yamamoto, Chiaki Arai, Kazuo Iwamura, Tetsuo Furuichi, Ken Yokogawa, Hidetoshi Ohno, Taro Ashihara, Koh Kitajama, George Kunihiro, Kengo Kuma, Nobuaki Furuya, Tadasu Ohe, Manabu Chiba, Shuhei Endo, Makoto Yokomizo, Takaharu Tezuka+Yui Tezuka, Atelier Bow-Wow, Hiroshi Sambuichi, Takenori Naka, Masakatsu Matsuyama, Kumiko Inui, Yoshiaki Tanaka, Yukihide Mizuno, Shinichiro Akasaka, Osamu Fujita, Makoto Maeda, Ryuichi Ashizawa, Shogo Aratani, Yasutaka Yoshimura, Kazuhide Doi, Koichi Furumori, Tsukasa Kinjo, Yukiko Nadamoto, Hiroshi Nakamura, TNA Makoto Takei+Chie Nabeshima, Keisuke Maeda, Yuuoh Mino, Koji Kimi, Koji Nakawatase.
Public forum
On June 4, a public forum under the title 'Billows Over the Architecture and Cities in the 21st century' was held at Japan Institute of Architects. It provided a platform to exchange ideas for a better future of architecture, emerging from the need to address the issues of climate change, population decrease in industrialized countries, economic conflict between the old developed countries and rising economies and safety of nuclear power. These changes are on par with the industrial revolution in the 19th century, and will be followed by the paradigm shift in the fundamentals of knowledge, economy and society.
The forum featured lectures by three speakers from each side. Gintaras Čaikauskas, Linas Naujokaitis and Linas Tuleikis presented the Lithuanian views, while Hidetoshi Ohno, Nobuaki Furuya and Manabu Chiba discussed the topic from the Japanese perspective.
Students workshop
The workshop was held between May 31 and June 3 at Ginza TS Building. 10 Lithuanian and 11 Japanese students were split into four mixed groups and tasked with creating an attractive Ginza District in central Tokyo by means of new urban interventions.
For that purpose, each group made research and design proposals on one of the four themes: conservation and contemporary interpretation of historical buildings, vertical circulation between small-sized commercial buildings, facade/skin design of a humongous redeveloped building, and open space utilization of public/private properties. The completed designs were presented at the seminar on June 4.
The students represented the following higher institutions:
in Japan – Meiji University, Keio University, Shibaura Institute of Technology, Tama Art University
in Lithuania – Kaunas University of Technology, Vilnius Academy of Arts, Kaunas Faculty of Vilnius Academy of Arts, Vilnius Gediminas Technical University.
The students participants list:
Japanese students
Shogo Nagata, Eri Ohara, Kenta Sasaki, Hinako Hagino, Rei Yamaguchi, Masamitsu Tanikawa, Takeaki Yokoi, Masaru Iijima, Takahiro Idenoshita, Mayumi Suzumoto, Rei Koizumi.
Lithuanian students
Mykolas Svirskis, Antanas Šarkauskas, Ieva Bartkevičiūtė, Matas Šiupšinskas, Ieva Cicėnaitė, Vytenis Raugala, Rasa Marozaitė, Aistė Tarutytė, Mantas Gipas, Andrius Vilčinskas.
East-East 4
For the first time, the fourth East-East forum was incorporated in the framework of another event and was held on September 23–27, 2013 in Kaunas at Žalgiris Arena as part of Kaunas Architecture Festival (KAFe), an international event of contemporary architecture spanning two months.
Notably, East-East 4 received support from the Asia-Europe Foundation (ASEF) and its partners under the second edition of Creative Encounters – Cultural Partnerships between Asia and Europe. Creative Encounters is a programme developed by ASEF in partnership with Arts Network Asia (ANA), an Asia-wide network of artists and arts organisations, and in cooperation with Trans Europe Halles (TEH), the European Network of Creative Arts Spaces.
The Japanese delegation included architects Manabu Chiba, Nobuaki Furuya, Kazuko Akamatsu, Masahiro Harada, Koichi Yasuda, Toshikatsu Ienari, Akiko Miya and Takeshi Hosaka.
At the closing of the event, Ryue Nishizawa, laureate of the Pritzker Prize (2010), delivered a keynote lecture on behalf of Sejima and Nishizawa and Associates (SANAA).
Exhibition
On September 24, Japanese and Lithuanian architects unveiled their works at a joint exhibition which lasted until October 16. Besides the works of the Japanese architects who were present at the event, SANAA, Sou Fujimoto and Kazuhiro Kojima displayed their designs as well.
The Lithuanian architects represented at the exhibition were: Šarūno Kiaunės projektavimo studija, R.Paleko Arch-studija, a.s.a. Sigito Kuncevičiaus projektavimo firma, Andrė Baldi, Aketuri architektai, L&G projektai, G.Natkevičius ir partneriai, E.Miliūno studija, Kančo studija, G.Janulytės-Bernotienės studija, Baltas fonas, Gintaras Kuginys, Darius Čaplinskas, Andrius Ciplijauskas, Gediminas Bulavas, Eventus Pro, Projektavimo ir restauravimo institutas, Darius Čiuta, 4PLIUS.
Public forum
For the first time, the forum was substituted by two sets of lectures by the Japanese architects. On September 24, Toshikatsu Ienari, Akiko Miya and Koichi Yasuda presented their views on architecture, while Nobuaki Furuya and Manabu Chiba lectured on September 27.
Students workshop
The workshop was held between September 24 and 26. 11 Lithuanian and 7 Japanese students formed four mixed groups to challenge the task "The City Centre and Its Relationship With Rivers".
The idea of re-designing the waterfront area of Kaunas city was revisited after 11 years since East-East 1. Kaunas has exceptional features, as it had been built on the confluence of two large rivers. However, this authentic geographic context remains almost unused in the urban life. Therefore, finding ways to bring the waterfront to the center of everyday life was the core architectural task for the workshop.
On September 27, the presentation and exhibition of completed designs was held. The jury made of Japanese and Lithuanian architects selected the winning proposal "Floating Towers" (by Ayako Motai, Medeina Kurtinaitytė, Simon Tsing Shan Mok, Antanas Šarkauskas and Vytautas Lelys).
The students represented the following higher institutions:
in Japan – Waseda University, University of Tokyo, Shibaura Institute of Technology, Tokyo Institute of Technology, Japan Women's University
in Lithuania – Kaunas University of Technology, Vilnius Academy of Arts, Kaunas Faculty of Vilnius Academy of Arts, Vilnius Gediminas Technical University.
East-East 5
The fifth East-East edition was held at Žalgiris Arena in Kaunas on September 23-26, 2022, featuring Recovery as the overarching theme. It was the main focus of the 2022 Kaunas Architecture Festival (KAFe). Recovery theme was chosen to draw attention to the fact that human activity has so profoundly altered the surrounding environment that technological innovation, industrial development and urban sprawl have distorted sustainable living, depleting natural resources. East-East 5 therefore called for reflection on how to live in an increasingly hostile environment, and for recovery through well-thought-out healing of the urban fabric of the city, so as to make a pivot towards nature the priority for human activity.
Exhibition
Exhibition curators: Paulius Vaitiekūnas, Shinichi Kawakatsu.
Human power and activity have transformed natural landscapes and ecosystems, changing not only the land use of vast areas, but also the global climate. Throughout the ages, architects have had the power of vision. What is needed now is a vision of recovery.
The East-East 5 exhibition presented the “Recipe for Recovery” - the architectural techniques and philosophy and the architectural process of the best Japanese and Lithuanian architects: models, plans, cross-sections, details, hand drawings, diagrams, axonometric views etc.
The exhibition took place in the former Kaunas Central Post Office building, one of the most iconic examples of Kaunas modernist architecture. It was closed in 2019 and is planned to be transformed into an Architecture Centre.
The exhibition displayed 40 projects by Lithuanian and Japanese architects and architectural studios.
Participants from Japan: Kei Kaihoh, Osamu Nishida, Tsuyoshi Tane, Yasutaka Yoshimura, Suzuko Yamada, Eri Tsugawa, Miho Tominaga, Shingo Masuda, Kenichi Teramoto, Fuminori Nousaku, Kozo Kadowaki.
Participants from Lithuania: Audrius Ambrasas Architects, Do Architects, Gintaras Balčytis, A2SM Architektai, Aketuri Architektai, G. Natkevičius ir partneriai, Paleko architektų studija + architektų studija Plazma, Office de Architectura, Processoffice, Vilniaus architektūros studija, Arches, Šarūno Kiaunės projektavimo studija, Archinova + PLH Arkitekter, Nebrau, Laurynas Žakevičius, LG projektai & GAL architektai.
Public forum
Curators: Andrius Ropolas, Yasutaka Yoshimura
Japanese architects Osamu Nishida, Suzuko Yamada, Kei Kaihoh, Yasutaka Yoshimura, Kenichi Teramoto, Shingo Masuda took part in the forum, while the Lithuanian side was represented by Gabrielė Šarkauskienė and Antanas Šarkauskas, Vytautas Biekša, Gabrielė Ubarevičiūtė and Giedrius Mamavičius, Edgaras Neniškis.
George Kunihiro and Gintaras Balčytis, principal curators of East-East 5, summarized the discussion.
The highlight of the forum was the lecture "Back to Nature" by Kengo Kuma, one of Japan's leading contemporary architects.
On 26 September, Japanese architects Tsuyoshi Tane, Eri Tsugawa and Shinichi Kawakatsu gave lectures at the Faculty of Construction and Architecture of Kaunas University of Technology.
Students workshop
Curators: Martynas Marozas, Kei Kaihoh.
The theme of recovery was also reflected in the title of the students' workshop "A Playground for Recovery".
The unfinished Hotel Britanika in the very center of Kaunas served as the venue for the workshop.
Five teams made up of 10 Japanese and 8 Lithuanian students tried out five different approaches to Hotel Britanika:
- Cultural Playground in Kaunas (Patricija Markevičiūtė, Mako Kijima, Ignas Arlauskas, Naoki Kitagaki) : playing with culture - an innovative cultural centre,
- Eat.Sleep.Work.Repeat (Akiha Shimizu, Adelė Astrauskaitė, Gabrielė Ibėnaitė, Motoki Susa) : playing with economy - an independent mechanism that generates added value to the city's economy,
- ECOctopus (Martynas Stakvilevičius, Tautvydas Zykevičius, Keika Sato, Naoya Ando) : playing with ecosystems - a place for reinforcing city's ecosystems,
- Play Energy! (Auksė Vilkevičiūtė, Erina Shibagaki, Masato Sako) : playing with energy - a building that generates energy and shares it with the neighbourhood,
- Neighbourhood-ing (Vilius Jagminas, Yumeno Noda, Tetsu Kimura) : playing with people - a bridge between various public spaces.
The preparation for the workshop started with video conference on August 22-26, and the workshop itself was held at the Faculty of Construction and Architecture of Kaunas University of Technology on September 24-25. It was rounded up by a public presentation of the workshop results on September 26.
The students represented the following higher institutions:
in Japan – Waseda University, University of Tokyo, Shibaura Institute of Technology, Tokyo University of Science, Meiji University, Hosei University, Kyoto University, Shinshu University.
in Lithuania – Kaunas University of Technology, Vilnius Academy of Arts, Kaunas Faculty of Vilnius Academy of Arts.
References
External links
Tutlytė, Jūratė. "Lithuania-Japan Architecture Event "EAST-EAST", Japan Institute of Architects website, 2003
(Japanese) East-East Report, Japan Institute of Architects webpage, May 2003
YouTube video East-East 1
(Lithuanian) "Rytai-Rytai II" Lietuvos-Japonijos architektūros dienos", Architects Association of Lithuania webpage
(Japanese) Kunihiro, George. "Kenchiku bunka ni yoru kokkakan kouryuu no datousei", Shinkenchiku 2009 General Index, November 2009, p. 019
(Japanese) "Ritoania-Nihon kokusai shinzen kenchiku wakushoppu East-East 2", Nobuaki Furuya Laboratory webpage, October 2009
YouTube video Lithuania-Japan Architecture Event East-East 2
(Lithuanian) „Rytai-Rytai III: Lietuvos-Japonijos architektūros renginys Tokijuje", Architects Association of Lithuania webpage, June 7, 2011
YouTube video Lithuania-Japan Architecture Event East-East 3. Part 1
YouTube video Lithuania-Japan Architecture Event East-East 3. Part 2
YouTube video UIA2011 TOKYO VISUAL NEWSLETTER Vol.8
East East 4 | Kaunas Architecture Festival 2013, Asia-Europe Foundation (ASEF) Culture360
YouTube video EAST-EAST Mini Review, August 29, 2022
EAST-EAST Mini Review | Pass the baton of 20 years of exchange, The Japan Institute of Architects International Relations Committee, October 6, 2022
YouTube video Kaunas Architecture Festival, September 23, 2022
YouTube video Lithuanian and Japanese Architecture Exhibition East-East 5 – "Recipe for Recovery", January 4, 2023
Architecture festivals
International relations
Japan–Lithuania relations | East-East | Engineering | 5,348 |
2,236,341 | https://en.wikipedia.org/wiki/Systematic%20desensitization | Systematic desensitization, or graduated exposure therapy, is a behavior therapy developed by the psychiatrist Joseph Wolpe. It is used when a phobia or anxiety disorder is maintained by classical conditioning. It shares the same elements of both cognitive-behavioral therapy and applied behavior analysis. When used in applied behavior analysis, it is based on radical behaviorism as it incorporates counterconditioning principles. These include meditation (a private behavior or covert conditioning) and breathing (a public behavior or overt conditioning). From the cognitive psychology perspective, cognitions and feelings precede behavior, so it initially uses cognitive restructuring.
The goal of the therapy is for the individual to learn how to cope with and overcome their fear in each level of an exposure hierarchy. The process of systematic desensitization occurs in three steps. The first step is to identify the hierarchy of fears. The second step is to learn relaxation or coping techniques. Finally, the individual uses these techniques to manage their fear during a situation from the hierarchy. The third step is repeated for each level of the hierarchy, starting from the least fear-inducing situation.
Three steps of desensitization
There are three main steps that Wolpe identified to successfully desensitize an individual.
Establish anxiety stimulus hierarchy. The individual should first identify the items that are causing the anxiety problems. Each item that causes anxiety is given a subjective ranking on the severity of induced anxiety. If the individual is experiencing great anxiety to many different triggers, each item is dealt with separately. For each trigger or stimulus, a list is created to rank the events from least anxiety-provoking to most anxiety-provoking.
Learn the coping mechanism or incompatible response. Relaxation training, such as meditation, is one of the best coping strategies. Wolpe taught his patients relaxation responses because it is not possible to be both relaxed and anxious at the same time. In this method, patients practice tensing and relaxing different parts of the body until the patient reaches a state of serenity. This is necessary because it provides the patient with a means of controlling their fear, rather than letting it increase to intolerable levels. Only a few sessions are needed for a patient to learn appropriate coping mechanisms. Additional coping strategies include anti-anxiety medicine and breathing exercises. Another example of relaxation is cognitive reappraisal of imagined outcomes. The therapist might encourage patients to examine what they imagine happening when exposed to the anxiety-inducing stimulus and then allow the client to replace the imagined catastrophic situation with any of the imagined positive outcomes.
Connect the stimulus to the incompatible response or coping method by counterconditioning. In this step the client completely relaxes and is then presented with the lowest item on their hierarchy of anxiety severity. When the patient has reached a state of serenity again after being presented with the first stimulus, the second stimulus, which should present a higher level of anxiety, is presented. This helps the patient overcome their phobia. This activity is repeated until all the items of the anxiety hierarchy have been completed without inducing any anxiety in the client at all. If at any time during the exercise the coping mechanism fails, or the patient cannot complete it because of severe anxiety, the exercise is stopped. When the individual is calm, the last stimulus that was presented without inducing anxiety is presented again, and the exercise is then continued depending on the patient's response. (A schematic sketch of this stepwise procedure follows this list.)
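The control flow of these steps can be summarized in a short, purely illustrative sketch; it models only the sequence of the procedure, not clinical practice, and the anxiety ratings, threshold and attempt limit are invented for the example.

```python
# Illustrative model of working up a fear hierarchy with a relaxation response.

def desensitize(hierarchy, relax, threshold=30, max_attempts=5):
    """hierarchy: items ordered from least to most anxiety-provoking.
    relax(item) -> anxiety rating (0-100) reported after pairing the item
    with the relaxation response."""
    for item in hierarchy:
        for _ in range(max_attempts):
            if relax(item) <= threshold:
                break          # item tolerated; move up the hierarchy
        else:
            return item        # coping failed repeatedly; stop the session here
    return None                # whole hierarchy completed without undue anxiety
```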
Example
A client may approach a therapist due to their great phobia of snakes. This is how the therapist would help the client using the three steps of systematic desensitization:
Establish anxiety stimulus hierarchy. A therapist may begin by asking the patient to identify a fear hierarchy. This fear hierarchy would list the relative unpleasantness of various levels of exposure to a snake. For example, seeing a picture of a snake might elicit a low fear rating, compared to live snakes crawling on the individual—the latter scenario becoming highest on the fear hierarchy.
Learn coping mechanisms or incompatible responses. The therapist would work with the client to learn appropriate coping and relaxation techniques such as meditation and deep muscle relaxation responses.
Connect the stimulus to the incompatible response or coping method. The client would be presented with increasingly unpleasant levels of the feared stimuli, from lowest to highest—while utilizing the deep relaxation techniques (i.e. progressive muscle relaxation) previously learned. The imagined stimuli to help with a phobia of snakes may include: a picture of a snake; a small snake in a nearby room; a snake in full view; touching of the snake, etc. At each step in the imagined progression, the patient is desensitized to the phobia through exposure to the stimulus while in a state of relaxation. As the fear hierarchy is unlearned, anxiety gradually becomes extinguished.
Uses
Specific phobias
Specific phobias are one class of mental disorder often treated via systematic desensitization. When persons experience such phobias (for example fears of heights, dogs, snakes, closed spaces, etc.), they tend to avoid the feared stimuli; this avoidance, in turn, can temporarily reduce anxiety but is not necessarily an adaptive way of coping with it. In this regard, patients' avoidance behaviors can become reinforced – a concept defined by the tenets of operant conditioning. Thus, the goal of systematic desensitization is to overcome avoidance by gradually exposing patients to the phobic stimulus, until that stimulus can be tolerated. Wolpe found that systematic desensitization was successful 90% of the time when treating phobias.
Test anxiety
Between 25 and 40 percent of students experience test anxiety. Children can suffer from low self-esteem and stress-induced symptoms as a result of test anxiety. The principles of systematic desensitization can be used by children to help reduce their test anxiety. Children can practice the muscle relaxation techniques by tensing and relaxing different muscle groups. With older children and college students, an explanation of desensitization can help to increase the effectiveness of the process. After these students learn the relaxation techniques, they can create an anxiety inducing hierarchy. For test anxiety these items could include not understanding directions, finishing on time, marking the answers properly, spending too little time on tasks, or underperforming. Teachers, school counselors or school psychologists could instruct children on the methods of systematic desensitization.
Recent use
Desensitization is widely known as one of the most effective therapy techniques. In recent decades, however, systematic desensitization has become less commonly used as a treatment of choice for anxiety disorders. Since 1970 academic research on systematic desensitization has declined, and the current focus has been on other therapies. In addition, the number of clinicians using systematic desensitization has also declined since 1980. Those clinicians who continue to regularly use systematic desensitization were trained before 1986. It is believed that the decline in the use of systematic desensitization by practicing psychologists is due to the rise of other techniques such as flooding, implosive therapy, and participant modeling.
History
In 1947, Wolpe discovered that the cats of Wits University could overcome their fears through gradual and systematic exposure. Wolpe studied Ivan Pavlov's work on artificial neuroses and the research done on elimination of children's fears by Watson and Jones. In 1958, Wolpe did a series of experiments on the artificial induction of neurotic disturbance in cats. He found that gradually deconditioning the neurotic animals was the best way to treat them of their neurotic disturbances. Wolpe deconditioned the neurotic cats through different feeding environments. Wolpe knew that this treatment of feeding would not generalize to humans and he instead substituted relaxation as a treatment to relieve the anxiety symptoms.
Wolpe found that if he presented a client with the actual anxiety inducing stimulus, the relaxation techniques did not work. It was difficult to bring all of the objects into his office because not all anxiety inducing stimuli are physical objects, but instead are concepts. Wolpe instead began to have his clients imagine the anxiety inducing stimulus or look at pictures of the anxiety inducing stimulus, much like the process that is done today.
See also
Flooding (psychology)
Immersion therapy
Sensitization
References
External links
Self-administered Systematic Desensitization
Anxiety disorder treatment
Behavior therapy
Behaviorism | Systematic desensitization | Biology | 1,694 |
162,654 | https://en.wikipedia.org/wiki/Ren%C3%A9%20Antoine%20Ferchault%20de%20R%C3%A9aumur | René Antoine Ferchault de Réaumur (; ; 28 February 1683 – 17 October 1757) was a French entomologist and writer who contributed to many different fields, especially the study of insects. He introduced the Réaumur temperature scale.
Life
Réaumur was born in a prominent La Rochelle family and educated in Paris. He learned philosophy in the Jesuits' college at Poitiers, and in 1699 went to Bourges to study civil law and mathematics under the charge of an uncle, canon of La Sainte-Chapelle. In 1703 he went to Paris, where he continued the study of mathematics and physics. In 1708, at the age of 24, he was nominated by Pierre Varignon (who taught him mathematics) and elected a member of the Académie des Sciences. From this time onwards for nearly half a century hardly a year passed in which the Academy's Mémoires did not contain at least one paper by Réaumur.
At first, his attention was occupied by mathematical studies, especially in geometry. In 1710, he was named the chief editor of the Descriptions of the Arts and Trades, a major government project which resulted in the establishment of manufactures new to France and the revival of neglected industries. For discoveries regarding iron and steel he was awarded a pension of 12,000 livres. Content with his ample private income, he requested that the money should go to the Académie des Sciences for the furtherance of experiments on improved industrial processes. In 1731 he became interested in meteorology, and invented the thermometer scale which bears his name: the Réaumur scale. In 1735, for family reasons, he accepted the post of commander and intendant of the royal and military Order of Saint Louis. He discharged his duties with scrupulous attention, but refused the pay. He took great delight in the systematic study of natural history. His friends often called him "the Pliny of the 18th century".
He loved retirement and lived at his country residences, including his chateau La Bermondière, Saint-Julien-du-Terroux, Maine, where he had a serious fall from a horse, which led to his death. He bequeathed his manuscripts, which filled 138 portfolios, and his natural history collections to the Académie des Sciences.
Réaumur's scientific papers deal with many branches of science. His first, in 1708, was on a general problem in geometry. His last, in 1756, was on the forms of birds' nests. He proved experimentally the fact that the strength of a rope is less than the sum of the strengths of its separate strands. He examined and reported on the auriferous (gold-bearing) rivers, the turquoise mines, the forests and the fossil beds of France. He devised the method of tinning iron that is still employed, and investigated the differences between iron and steel, correctly showing that the amount of carbon is greatest in cast iron, less in steel, and least in wrought iron. His book on this subject (1722) was translated into English and German.
He was noted for a thermometer he constructed on the principle of taking the freezing point of water as 0°, and graduating the tube into degrees each of which was one-thousandth of the volume contained by the bulb and tube up to the zero mark. It was an accident dependent on the particular alcohol employed which made the boiling-point of water 80°; mercurial thermometers graduated into 80 equal parts between the freezing- and boiling-points of water are named Réaumur thermometers but diverge from his design and intention.
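For reference (the relation below is the standard definition of the modern 80-degree scale and is not drawn from the text above): water freezes at 0 °Ré and boils at 80 °Ré, so temperatures convert to and from Celsius as

$$T_{\text{°Ré}} = \tfrac{4}{5}\,T_{\text{°C}}, \qquad T_{\text{°C}} = \tfrac{5}{4}\,T_{\text{°Ré}};$$

for example, normal human body temperature, 37 °C, corresponds to 29.6 °Ré.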
Réaumur wrote much on natural history. Early in life he described the locomotor system of the Echinodermata, and showed that the supposed ability of replacing their lost limbs was actually true. He has been considered as a founder of ethology.
In 1710 he wrote a paper on the possibility of spiders being used to produce silk, which was so celebrated at the time that the Kangxi Emperor of China had it translated into Chinese. His observations of wasps making paper from wood fibres have led some to credit him with this change in paper-making techniques. It was over a century before wood pulp was used on any industrial scale in paper making.
He studied the relationship between the growth of insects and temperature. He also computed the rate of growth of insect populations and noted that there must be natural checks since the theoretical population numbers achievable by geometric progression were not matched by observations of actual populations.
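As a purely illustrative example of the geometric progression he reasoned about (the numbers here are hypothetical and are not Réaumur's own), if every individual left $r$ surviving offspring per generation, a population starting from $N_0$ individuals would reach

$$N_k = N_0\,r^{k} \quad \text{after } k \text{ generations; e.g. } N_0 = 1,\ r = 100,\ k = 4 \;\Rightarrow\; N_4 = 10^{8},$$

a figure far beyond any observed insect population, hence the inference that natural checks must operate.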
He also studied botanical and agricultural matters, and devised processes for preserving birds and eggs. He elaborated a system of artificial incubation, and made important observations on the digestion of carnivorous and graminivorous (grass-eating) birds. One of his greatest works is the Mémoires pour servir à l'histoire des insectes, 6 vols., with 267 plates (Amsterdam, 1734–1742). It describes the appearance, habits and locality of all the known insects except the beetles, and is a marvel of patient and accurate observation. Among other important facts stated in this work are the experiments which enabled Réaumur to prove the correctness of Peyssonel's hypothesis, that corals are animals and not plants.
He was elected a Fellow of the Royal Society in November 1738 by virtue of the fact that: "His Name hath been known for many years among the Learned by Several Curious disertations published in the Memoirs of the Royal Academy of Sciences at Paris & in particular by a very Learned and usefull book wrote in French entitled 'The Art of Converting Forged Iron into Steel' and 'the Art of Soft'ning Cast Iron' printed at Paris 1722 4to and lately by his 'Curious Memoires relating to the History of Insects' at Paris in 4to three Volumes of which work have been Laid before the Royal Society." He was elected a foreign member of the Royal Swedish Academy of Sciences in 1748.
He is commemorated in numerous place names including the rue Réaumur and the Réaumur - Sébastopol metro station in Paris and the Place Réaumur, Le Havre.
Selected works
Réaumur, R.-A. F. de. 1722. L'art de convertir le fer forgé en acier, et l'art d'adoucir le fer fondu, ou de faire des ouvrages de fer fondu aussi finis que le fer forgé. Paris, France.
Réaumur, R.-A. F. de. 1734–1742. Mémoires pour servir à l'histoire des insectes. Six volumes. Académie Royale des Sciences, Paris, France.
Réaumur, R.-A. F. de. 1749. Art de faire éclorre et d'élever en tout saison des oiseaux Domestiques de toutes espèces. Two volumes. Imprimerie royale, Paris, France.
Réaumur, R.-A. F. de. 1750. The art of hatching and bringing up domestic fowls. London, UK.
Réaumur, R.-A. F. de. 1800. Short history of bees I. The natural history of bees . . . Printed for Vernor and Hood in the Poultry, by J. Cundee, London, UK.
Réaumur, R.-A. F. de. 1926. The natural history of ants, from an unpublished manuscript. W. M. Wheeler, editor and translator. [Includes French text.] Knopf, New York City, USA. Reprinted 1977. Arno Press, New York City, USA.
Réaumur, R.-A. F. de. 1939. Morceaux choisis. Jean Torlais, editor. Gallimard, Paris, France.
Réaumur, R.-A. F. de. 1955. Histoire des scarabées. M. Caullery, introduction. Volume 11 of Encyclopédie Entomologique. Paul Lechevalier, Paris, France.
Réaumur, R.-A. F. de. 1956. Memoirs on steel and iron. A. G. Sisco, translator. C. S. Smith, introduction and notes. University of Chicago Press, Chicago, Illinois, USA.
Publications
Notes
References
External links
Digitised text of Mémoires pour servir à l'histoire des insectes
Website of the Manoir Des Sciences at Reaumur
Gaedike, R.; Groll, E. K. & Taeger, A. 2012: Bibliography of the entomological literature from the beginning until 1863 : online database – version 1.0 – Senckenberg Deutsches Entomologisches Institut.
1683 births
1757 deaths
18th-century French writers
18th-century French male writers
Fellows of the Royal Society
French entomologists
Creators of temperature scales
French Roman Catholics
Members of the French Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Order of Saint Louis recipients
People from La Rochelle | René Antoine Ferchault de Réaumur | Physics | 1,830 |
75,458 | https://en.wikipedia.org/wiki/Windows%20Me | Windows Me (Millennium Edition) is an operating system developed by Microsoft as part of its Windows 9x family of Microsoft Windows operating systems. It was the successor to Windows 98, and was released to manufacturing on June 19, 2000, and then to retail on September 14, 2000. It was Microsoft's main operating system for home users until the introduction of its successor Windows XP on October 25, 2001.
Windows Me was targeted specifically at home PC users, and included Internet Explorer 5.5 (which could later be upgraded to Internet Explorer 6), Windows Media Player 7 (which could later be upgraded to Windows Media Player 9 Series), DirectX 7 (which could later be upgraded to DirectX 9) and the new Windows Movie Maker software, which provided basic video editing and was designed to be easy to use for consumers. Microsoft also incorporated features first introduced in Windows 2000, which had been released as a business-oriented operating system seven months earlier, into the graphical user interface, shell and Windows Explorer. Although Windows Me was still ultimately based around MS-DOS like its predecessors, access to real-mode DOS was restricted to decrease system boot time.
Windows Me was initially positively received when it was released; however, it soon garnered a more infamous reputation from many users due to numerous stability problems. In October 2001, Windows XP was released to the public, having already been under development at the time of Windows Me's release, and incorporated most, but not all, of the features of Windows Me, while being far more stable due to being based on the Windows NT kernel.
Mainstream support for Windows Me ended on December 31, 2003, followed by extended support on July 11, 2006.
Development history
At the 1998 Windows Hardware Engineering Conference, Microsoft CEO Bill Gates stated that Windows 98 would be the last iteration of Windows to use the Windows 9x kernel, with the intention for the next consumer-focused version to be based on the Windows NT kernel, unifying the two branches of Windows. However, it soon became apparent that the development work involved was too great to meet the aim of releasing before the end of 2000, particularly given the ongoing parallel work on the eventually-canceled Neptune project. The Consumer Windows development team was therefore re-tasked with improving Windows 98 while porting some of the look-and-feel from Windows 2000. Microsoft President Steve Ballmer publicly announced these changes at the next Windows Hardware Engineering Conference in 1999.
On July 23, 1999, the first alpha version of Windows Me was released to testers. Known as Development Preview 1, it was very similar to Windows 98 SE, with the only major change being a very early iteration of the new Help and Support feature that would appear in the final version. Three more Development Previews were released over the subsequent two months.
The first beta version was released to testers and the industry press on September 24, 1999, with the second coming on November 24 that year. Beta 2 showed the first real changes from Windows 98, including importing much of the look-and-feel from Windows 2000, and the removal of real-mode DOS. Industry expert Paul Thurrott reviewed Beta 2 upon release and spoke positively of it. By early 2000, Windows Me was reportedly behind schedule, and an interim build containing the new automatic update feature was released to allay concerns about a delayed release.
In February 2000, Paul Thurrott revealed that Microsoft had planned to exclude Windows Me, as well as new releases of Windows NT 4.0, from CD shipments for MSDN subscribers. The reason given in the case of Me was that the OS was designed for consumers. However, Thurrott alleged that the real motivation behind both cases was to force software developers to move to Windows 2000. Three days later, following a write-in and call-in campaign by hundreds of readers, Microsoft announced that Windows Me (including development versions) would ship to MSDN subscribers after all. Microsoft also apologized personally to Thurrott, claiming he received misinformation, though in a follow-up article he stated that it was "clear that the decision [...] actually changed".
Beta 3 was released on April 11, 2000, and this version marked the first appearance of its final startup and shutdown sounds derived from Windows 2000, as the previous betas used Windows 98's startup and shutdown sounds.
Release
Although Microsoft signed off on the final build of Windows Me on June 28, 2000, after trialing three Release Candidate builds with testers, the final retail release was pushed back to September 14 for reasons that were not clear.
Shortly after Windows Me was released to manufacturing on June 19, 2000, Microsoft launched a marketing campaign to promote it in the U.S., which they dubbed the Meet Me Tour. A national partnered promotional program featured the new OS, OEMs and other partners in an interactive multimedia attraction in 25 cities.
Windows Me was released for retail sale on September 14, 2000. At launch time, Microsoft announced a time-limited promotion from September 2000 to January 2001 which entitled Windows 98 and Windows 98 SE users to upgrade to Windows Me for $59.95 instead of the regular retail upgrade price of $109. Non-upgrade versions cost $209, the same as Windows 98 on its release. In October 2001, Microsoft released Windows XP, which also included the ZIP folders, the Spider Solitaire game and Internet Explorer 6 by default, all while being based on the Windows NT kernel, which on XP was an evolution of the one in Windows 2000.
New and updated features
User interface
Windows Me featured the shell enhancements inherited from Windows 2000 such as personalized menus, customizable Windows Explorer toolbars, auto-complete in Windows Explorer address bar and Run box, Windows 2000 advanced file type association features, displaying comments in shortcuts as tooltips, extensible columns in Details view (IColumnProvider interface), icon overlays, integrated search pane in Windows Explorer, sort by name function for menus, Places bar in common dialogs for Open and Save, cascading Start menu special folders, some Plus! 95 and Plus! 98 themes, and updated graphics. The notification area in Windows Me and later supported 16-bit high color icons. The Multimedia control panel was also updated from Windows 98. Taskbar and Start Menu options allowed disabling of the drag and drop feature and could prevent moving or resizing the taskbar, which was easier for new users.
Hardware support improvements
Faster boot times: Windows Me features numerous improvements for improving cold boot time, pre and post-logon boot times and time required for resuming from hibernation. Processing of real mode configuration files, CONFIG.SYS and AUTOEXEC.BAT, is bypassed at startup and essential real mode drivers like HIMEM.SYS and SMARTDRV.EXE are embedded into IO.SYS. The registry is loaded only once; for efficient loading, the registry is split into three files instead of two (SYSTEM.DAT and USER.DAT), with the new file CLASSES.DAT containing the contents of the hive HKEY_CLASSES_ROOT required for boot loaded initially. Plug and Play device enumeration is more parallelized than in Windows 98. Boot time is not affected due to unavailability of a DHCP server or other network components. There are also optimizations to prevent boot slowdown due to BIOS POST operations.
USB Human Interface Device Class: Generic support for 5-button mice is also included as standard and installing IntelliPoint allows reassigning the programmable buttons.
Windows Image Acquisition: Windows Me introduced the Windows Image Acquisition API for a standardized method of allowing Windows applications to transparently and more easily communicate with image acquisition devices, such as digital cameras and scanners. WIA intended to improve the configuration and the user interface for interacting with scanners and such devices, (which were previously supported by the TWAIN standard) and simplify writing device drivers for developers. WIA also includes support for USB still image capture device classes such as scanners and cameras through the Picture Transfer Protocol.
Improved power management and suspend/resume operations: The OEM version of Windows Me supports OS-controlled ACPI S4 sleep state (hibernation) and other power management features without manufacturer-supplied drivers.
USB and FireWire support improvements: Windows Me is the only operating system in the Windows 9x series that includes generic drivers for USB mass storage devices and USB printers. Support for FireWire SBP-2 scanners and storage devices is also improved.
The , DirectSound, and DirectShow APIs support non-PCM formats such as AC-3 or WMA over S/PDIF.
Media
Windows Movie Maker: This utility is based on DirectShow and Windows Media technologies to provide Microsoft Windows computer systems with basic video capture and edit capabilities. It provides users with the ability to capture, edit, and re-encode media content into the Windows Media format, a tightly compressed format that requires a minimal amount of storage space on the computer's hard disk when compared to many other media formats.
Windows Media Player 7: The new version of the Windows multimedia player software introduces jukebox functionality featuring the Media Library, support for CD burning, an integrated media encoder, and the ability to transfer music directly to portable devices. Another new feature is its radio tuner that can be used to search for and connect to radio stations over the internet. Users can also customize the look and feel of the user interface through interactive skins. Windows Me can be upgraded to Windows Media Player 9 Series, which was later included in Windows XP SP2.
Windows DVD Player: The software DVD player in Windows Me is a redesigned version of the one featured in Windows 98 which, unlike its predecessor, does not require a dedicated decoder card for DVD playback. Instead, it supports software decoding through a third-party decoder.
Networking technologies
Net Crawler: Windows Me introduced a net crawling feature which automatically searches out and creates shortcuts to network shares and printers in My Network Places. This can be controlled using the Automatically search for network folders and printers option. Shortcuts that are added by the net crawler but not detected again on the network in a reasonable time period are aged out and deleted.
New TCP/IP Stack: Windows Me includes the Windows 2000 networking stack and architecture.
The Home Networking Wizard is designed to help users to set up a computer that is running Windows Me for use on a small home network. This includes setting up Internet Connection Sharing (ICS) on a computer running Windows Me so the computer can share a connection to the Internet with other computers on the home network.
Dial-up Networking component was updated in Windows Me and provides several enhancements while maintaining the desired features of prior releases of the operating system. The user interface had been reworked to provide all configurable parameters in one convenient location. The user interface now included three new tabs: Networking, Security and Dialing. To improve dial-up networking, Windows Me includes built-in support for the Connection Manager dial-up client. Using the Connection Manager Administration Kit (an optional networking component in Windows 2000 Server), network administrators can pre-configure and deploy dial-up networking connections, by means of a Connection Manager service profile, to Windows Me–based client machines.
Network Driver Interface Specification (NDIS) version 5.0 for Windows Me was enhanced to provide programming interface parity with NDIS version 5.0 in Windows 2000 (the programming interfaces used by network device drivers are the same for both platforms.)
Universal Plug and Play: Windows Me introduced support for Universal Plug and Play (UPnP). Universal Plug and Play and NAT traversal APIs can also be installed on Windows 98 and Windows 98 SE by installing the Windows XP Network Setup Wizard.
System utilities
System Restore: Windows Me introduced the "System Restore" logging and reversion system, which was meant to simplify troubleshooting and solve problems. It was intended to work as a rollback and recovery feature so that if the installation of an application or a driver adversely affected the system, the user could undo the installation and return the system to a previously working state. It does this by monitoring changes to Windows system files and the registry. System Restore protects only the operating system files, not documents, and therefore is not a substitute for a backup program.
System File Protection: First introduced with Windows 2000 (as Windows File Protection), and expanding on the capabilities introduced with System File Checker in Windows 98, System File Protection aimed to protect system files from modification and corruption silently and automatically. When the file protection is in effect, replacing or deleting a system file causes Windows Me to silently restore the original copy. The original is taken from a hard drive backup folder (%WinDir%\Options\Install) or from the Windows Me installation CD, if the cached copy of files on the hard disk has been deleted. If no installation CD is in the drive, a dialog box alerts the user about the problem and requests that the CD be inserted. System File Protection is a different technology from System Restore and should not be confused with the latter. System Restore maintains a broad set of changed files including added applications and user configuration data stored repeatedly at specific points in time restored by the user, whereas System File Protection protects operating system files with no user input.
System Configuration Utility allows users to manually extract and restore individual system files from the Windows Me setup files. It has also been updated with three new tabs called "Static VxDs", "Environment" and "International". The Static VxDs tab allows users to enable or disable static virtual device drivers to be loaded at startup, the Environment tab allows users to enable or disable environment variables, and the International tab allows users to set international language keyboard layout settings that were formerly set via the real mode MS-DOS configuration files. A Cleanup button on the Startup tab allows cleaning up invalid or deleted startup entries.
System Monitor has been updated with a Dial-Up Adapter section. Users can now monitor items such as Connection Speeds, Bytes Received or Transmitted / Second.
SCANDISK runs from within Windows upon an improper shutdown before the Windows Shell loads.
Automatic Updates: The Automatic Updates utility automatically downloads and installs critical updates from the Windows Update Web site with little user interaction. It is set up to check Windows Update once every 24 hours by default. Users can choose to download which update they want, although high-priority updates must be downloaded and installed.
Compressed Folders: Windows Me includes native support for ZIP files through the 'Compressed Folders' Explorer extension. This extension was originally introduced in the Plus! 98 collection for Windows 98, but is included in the base operating system in Windows Me.
A new Help and Support program has also been added, replacing the HTML Help-based documentation in Windows 2000 and Windows 98. The Help and Support Center is entirely HTML-based and takes advantage of a technology called Support Automation Framework (SAF), that can show support information from the internet, allows collecting data for troubleshooting via WMI and scripting and for third parties to plug into Windows Help and Support. Several other support tools also shipped with Windows Me.
Windows Me also includes Internet Explorer 5.5, which supports a new Print Preview feature. It also shipped with the MSN Messenger Service.
Accessibility features
On-Screen Keyboard: Originally introduced with Windows 2000, On-Screen Keyboard makes it possible to input characters using the mouse instead of the keyboard.
The Mouse Control Panel incorporates IntelliPoint features, namely ClickLock (selecting or dragging without continuously holding down the mouse button), hiding the pointer while typing, and showing it by pressing Ctrl.
The cursor (system caret) can be set to a thicker width.
Increased Active Accessibility support in utilities such as Calculator and Magnifier.
Removed features
Real mode DOS
Windows Me restricted support for real mode MS-DOS. As a result, IO.SYS in Windows Me disregards CONFIG.SYS, COMMAND.COM and WIN.COM and directly executes VMM32.VXD. In its default configuration the system would neither boot into an MS-DOS command prompt nor exit to DOS from Windows; real mode drivers such as ANSI.SYS could not be loaded and older applications that require real mode could not be run. Microsoft argued that the change improved the speed and reliability of the boot process.
In Windows Me, the CONFIG.SYS and AUTOEXEC.BAT files are used only to set global environment variables. The two files (if present) are scanned for settings relating to the environment variables, and any other commands present are moved into a Windows registry key (see below). The two files thus contain only settings and preferences which configure the "global environment" for the computer during the boot phase or when starting a new virtual DOS machine (VDM).
To specify or edit other startup values (which, in Windows 98, would be present in the AUTOEXEC.BAT file) the user must edit the following Windows registry key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SessionManager\Environment
For troubleshooting and crash recovery, both the Windows Me CD-ROM and the Windows Me startup disk (a user-creatable floppy disk, known as the Emergency Boot Disk (EBD)) allowed booting into real mode MS-DOS.
It is possible to restore real mode DOS functionality through various unofficial means. Additionally, a registry setting exists that re-enables the "Restart in MS-DOS mode" option in the shutdown dialog box; however, unless patched unofficially with third-party software, Windows Me cannot be booted to MS-DOS real mode.
Other components
Unlike previous versions of Windows 9x, Windows Me was entirely aimed at home users, and thus had certain enterprise-oriented features removed. Several features of its predecessors did not work or were officially unsupported by Microsoft on Windows Me, including Automated Installation, Active Directory client services, System Policy Editor, Personal Web Server and ASP. These features were supported on the previous versions of Windows 9x. A Resource Kit publication, targeted towards system administrators, was never published for Windows Me.
Other features that were removed or never updated to work with Windows Me included Microsoft Fax, QuickView and DriveSpace, as well as the GUI FAT32 conversion tool. Several Windows Explorer commands were also modified in Windows Me, matching the menu structure in Windows 2000. While some were simply moved to a different location, certain functionality of the Go menu, as well as the Find command on the Tools menu, are no longer available. For the latter change Microsoft recommends using a variety of similar functionality labeled Search.
The Active Channels Channel bar from the original release of Windows 98 was removed like with Windows 98 Second Edition and is not installed upon first boot, but is retained if upgrading from the original release of Windows 98 to Windows 98 Second Edition or Windows Me.
Windows Me, like Windows 98 Second Edition, did not ship with the WinG API or RealPlayer 4.0, unlike the original release of Windows 98, due to both of these having been superseded by DirectX and Windows Media Player, respectively.
Upgradeability
Windows Me could have its components upgraded or have new components installed up to the latest versions:
Internet Explorer 6 SP1 and Outlook Express 6 SP1
Windows Media Format Runtime and Windows Media Player 9 Series (including Windows Media Encoder 7.1 and the Windows Media 8 Decoding Utility)
MSN Messenger 7.0
Windows Installer 2.0
DirectX 9.0c (the latest compatible runtime is from October 2007.)
.NET Framework 2.0
Microsoft Visual C++ 2005 runtime
Text Services Framework
Several other components such as MSXML 3.0 SP7, Microsoft Agent 2.0, NetMeeting 3.01, MSAA 2.0, ActiveSync 3.8, WSH 5.6, Microsoft Data Access Components 2.81 SP1, WMI 1.5 and Speech API 4.0.
Office XP SP3
The Microsoft Layer for Unicode can be installed to allow certain Unicode applications to run on the operating system.
System requirements
The /nm setup switch can be used at the DOS command line to bypass the minimum system requirement checks, allowing for installation on a CPU as low as the 16 MHz 80486SX.
Limitations
Windows Me is only designed to handle up to 512 MB of RAM without changes. Systems with larger RAM pools may lose stability; however, depending on the hardware and software configuration, it is sometimes possible to manually tweak the installation to continue working with somewhat larger amounts of RAM as well. The maximum amount of memory the operating system is designed to use is up to 1 GB of RAM. Systems with more than 1.5 GB of RAM may continuously reboot during startup.
Support lifecycle
Compared with other releases of Windows, Windows Me had a short shelf-life of just over a year. Windows 2000 and Windows Me were eventually succeeded by newer Microsoft operating systems: Windows Me by Windows XP Home Edition, and Windows 2000 Professional by Windows XP Professional. It is noteworthy that the first preview build of Windows XP (then codenamed "Whistler") was released to developers on July 13, 2000, two months before Windows Me's general availability date.
Microsoft originally planned to end support for Windows Me on December 31, 2004. However, in order to give customers more time to migrate to newer Windows versions, particularly in developing or emerging markets, Microsoft decided to extend support until July 11, 2006. Microsoft ended support for Windows Me (and Windows 98) on this date because the company considered the operating system to be obsolete and prone to security risks, and recommended customers to upgrade to a newer version of Windows such as Windows XP for the latest security improvements.
Retail availability for Windows Me ended on December 31, 2003. The operating system is no longer available from Microsoft in any form (through MSDN or otherwise) due to the terms of Java-related settlements Microsoft made with Sun Microsystems.
In 2011, Microsoft retired the Windows Update v4 website. An independent project named Windows Update Restored aims to restore the Windows Update websites for older versions of Windows, including Windows Me.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows Me (and XP) would end on July 31, 2019.
Reception
Windows Me initially received generally positive reviews, with reviewers citing the operating system's integrity protection (branded as "PC Health") and the new System Restore feature as steps forward for home users. Despite this, however, users' real-world experience did not bear this out, with industry publications receiving myriad reports of problems with the "PC Health" systems, PCs refusing to shut down cleanly, and general stability problems.
As time went on, reception of Windows Me became more negative, to the point where it was heavily panned by users, mainly due to stability issues. Retrospectively, Windows Me is viewed as one of the worst operating systems of all time, being unfavorably compared to its immediate predecessor and successor. A PC World article dubbed Windows Me the "Mistake Edition" and placed it 4th in their "Worst Tech Products of All Time" feature in 2006. The article states: "Shortly after Me appeared in late 2000, users reported problems installing it, getting it to run, getting it to work with other hardware or software, and getting it to stop running." Consequently, most home users remained with Windows 98, while some moved to Windows 2000 despite the latter being enterprise-oriented. In the Netherlands, Windows Me was infamously known as "Windows Meer Ellende" (Dutch for "more misery").
System Restore suffered from a bug in the date-stamping functionality that could cause System Restore to incorrectly date-stamp snapshots that were taken after September 8, 2001. This could prevent System Restore from locating these snapshots and cause the system restore process to fail. Microsoft released an update to fix this problem.
Byron Hinson and Julien Jay, writing for ActiveWin, took an appreciative look on the operating system. On the removal of real mode DOS support, they had noted "The removal of DOS has clearly made a difference in Windows Me in terms of stability (far less Blue Screens of Death are seen now) and booting speed has greatly increased." In a recommendation of the operating system upgrade for users of Windows 95 and 98, they had stated "If Windows Me isn't a revolutionary OS it's clear that Microsoft has focused its efforts to make it more user-friendly, stable and packed full of multimedia options. The result is great and the enhancements added are really worth the wait." The new features that Windows Me introduced were also praised and have since remained part of subsequent Windows versions.
Along with Windows 2000 from the Windows NT family, Windows Me was the last version of Windows that lacked product activation.
Notes
References
External links
GUIdebook – Graphical User Interface gallery
Interview with Nicolas Coudière, Chief Product Manager: Microsoft Windows Millennium Edition: at the Wayback Machine
Windows Me home page: The official Windows Me home page from Wayback Machine
Windows 9x Member Projects
ME
DOS variants
2000 software
Products and services discontinued in 2006
Microsoft criticisms and controversies
Turn of the third millennium
IA-32 operating systems
Products introduced in 2000 | Windows Me | Technology | 5,194 |
71,227,625 | https://en.wikipedia.org/wiki/Hamdy%20Doweidar | Hamdy Doweidar Taki El-Din Doweidar (PhD, DSc) was an Egyptian condensed matter physicist whose research topics included inorganic glasses, glass-ceramics, bio-active glasses, and structure-property correlations. He developed the Doweidar Model, which is used to correlate density, thermal expansion coefficient, molar refraction, and refractive index with the concentration of structural units in numerous types of glass. Doweidar also obtained a patent with two researchers for the preparation of a biologically active glass ionomer cement as a dental filling, characterized by vital activity due to the presence of bioactive crystalline phases in the glass (such as apatite and fluoroapatite), which react with a solution simulating body fluid (SBF) to precipitate layers of hydroxyapatite. These represent the basic crystalline phases in the formation of bones and teeth. He was a Professor Emeritus at Mansoura University.
Education and career
Doweidar graduated from Assiut University in 1964 with a bachelor's degree in physics and chemistry, earned a master's degree in physical chemistry from Cairo University in 1969, and received a Ph.D. in applied physics from Bauhaus-Universität Weimar in 1974. Doweidar was a researcher at the National Research Centre from 1965 to 1975 and was then an associate professor at Mansoura University until 1986, when he became a Distinguished Professor. In 1977, Doweidar founded the Glass Research Laboratory at Mansoura University. He was a visiting professor at the École Normale Supérieure in Algeria from 1980 to 1984, and at Sanaa University in Yemen from 1990 to 1994.
Recognition
Doweidar has received the Award of Academy of Scientific Research and Technology (Promotional State-Prize in Physics), Cairo in 1999, the Mansoura University Award (Distinction Prize in Physics) in 2000, and the Scopus Award for contribution to materials Science, presented from Elsevier and the Egyptian Ministry of High Education in 2008. He has over 120 peer-reviewed publications in international journals and was named one of the world's top 2% most cited scientists by Stanford University in 2019, 2020, 2021, 2022, and 2023.
References
Academic staff of Mansoura University
Cairo University alumni
Bauhaus University, Weimar alumni
Assiut University alumni
Condensed matter physicists
20th-century physicists
Egyptian physicists | Hamdy Doweidar | Physics,Materials_science | 492 |
3,595,658 | https://en.wikipedia.org/wiki/Narendra%20Nayak | Narendra Nayak (born 5 February 1951) is a rationalist, sceptic, and godman debunker from Mangalore, Karnataka, India. Nayak is the current president of the Federation of Indian Rationalist Associations (FIRA). He founded the Dakshina Kannada Rationalist Association in 1976 and has been its secretary since then. He also founded an NGO called Aid Without Religion in July 2011. He tours the country conducting workshops to promote scientific temper and showing people how to debunk godmen and frauds. He has conducted over 2000 such demonstrations in India, including some in Australia, Greece, England, Norway, Denmark, Sri Lanka and Nepal. He is also a polyglot who speaks 9 languages fluently, which helps him when he is giving talks in various parts of the country.
Life and work
Nayak was named after Swami Vivekananda (born Narendra Nath Datta). He has stated that he turned to rationalism after seeing his father's business premises repossessed by the bank, and his father, on an astrologer's advice, buying a lottery ticket in the full confidence that it would win first prize and pay off the loan. He married Asha Nayak, a lawyer in Mangaluru, in a non-religious ceremony. Nayak started out working as a lecturer in the Department of Biochemistry at the Kasturba Medical College in Mangalore in 1978. In 1982, he met Basava Premanand, a notable rationalist from Kerala, and was influenced by him.
When the Karnataka State Police withdrew his security, Nayak was quoted as saying that it was an open invitation to the forces that wanted to finish him off.
Activism
Nayak decided to take on full-time anti-superstition activism in 2004 when he heard that a girl had been sacrificed in Gulbarga in Karnataka. He was an assistant professor of biochemistry when he took voluntary retirement on 25 November 2006, after working there for 28 years.
Before the general election in 2009, Nayak laid an open challenge to any soothsayer to answer 25 questions correctly about the forthcoming elections. The prize was set at (about ). About 450 responses were mailed to him, but none were found to be correct. The Federation of Indian Rationalist Associations has been conducting such challenges since 1991. During the May 2013 Karnataka state assembly election, disappointed at the challenge being one-sided, Nayak had decided against challenging astrologers again. But when Shankar Hegde, a Bengaluru-based astrologer, claimed he could predict the election results accurately, Nayak issued the challenge to him. Nayak offered to hand over a cheque of Rs. 10 lakh (after deducting taxes as applicable under the Income Tax Act) if 19 out of the 20 results were proven right. However, Hegde ultimately did not turn up.
Through the organisation named Aid Without Religion, which was registered in July 2011, he has been helping people and institutions without religious rituals, superstitious practices, unscientific systems of medicine or similar supernatural beliefs. The registration was deliberately done at Rahu Kalam, considered the most inauspicious time of day – a triple whammy, in fact, since it was also a Saturday and a new moon day in the month of Ati, which is considered the most unlucky time.
He has been featured on National Geographic's television show Is it real?. He has also appeared on the Discovery Channel. He has been a regular columnist at the newspaper Mangalore Today since its inception. He also serves on the editorial board of the Folks Magazine.
He has admitted that he has been attacked for his activism a few times. He has also stated that his scooter's brake wires were once found severed, after an astrologer predicted his death or injury. He was a close associate of Gauri Lankesh, M. M. Kalburgi, and Narendra Dabholkar; all three were like-minded and were assassinated in a more-or-less similar fashion.
He was also involved in fighting against Midbrain activation, an alleged modern technique that enables students to see objects despite being blindfolded.
In March 2017, there was an attempt on Narendra Nayak's life. During the early morning hours, while on his way to the Mangala swimming pool in his car, he was approached by two unidentified men on a bike, wearing helmets, who hinted that his tyres were punctured. An unfazed Nayak suspected foul play and, with great presence of mind, drove all the way to a nearby gas station, where he saw that everything was in order. He immediately filed a police complaint. Nayak suspected that this attempt on his life could be a repercussion of his fight for justice for the slain RTI activist Vinayak Baliga, who had been murdered exactly a year before this episode. Nayak's personal gunman was on holiday at the time. Nayak continues to have a personal gunman assigned by the Mangalore Police to date.
Narendra presented at the first Global Congress on Scientific Thinking and Action which was held on March 17–20, 2021. During Session III on Alternative Medicine, he talked about the wide use of alternative medicines in India, including homeopathy, and said that various alternative treatments are often claimed to be Indian in origin. In addition, he states that the relatively low death rate from COVID in India has been falsely attributed to the use of homeopathic medicines as preventative. When asked what should be done about the use of alternative medicines in India, he said, flatly, “They should be banned.”
Views
Nayak advocates that more people should be taught to perform the so-called miracles of godmen. He also advocates that people should be trained to recognize pseudoscience and demand scientific evidence. He holds the opinion that well-known scientists should be convinced to join the cause and form pressure groups against pseudoscience. He is also lobbying for a bill for the separation of state and religion to be introduced in the Indian parliament. After the murder of anti-superstition activist Narendra Dabholkar and the enactment of the anti-superstition ordinance in Maharashtra state, Nayak expressed the need for a similar law in Karnataka. Regarding fellow Mangalorean George Fernandes, Nayak said, "You can hate George Fernandes, You can love Fernandes, but you cannot ignore him". Nayak was the guest of honour during the launch event of the book Bandh Samrat - Tales of Eternal Rebel, written about George Fernandes's early trade union activities in Mangalore and Bombay.
Awards
2011 "Distinguished Service to Humanism Award" from the International Humanist and Ethical Union
2015 "Lawrence Pinto Human Rights Award" from the Friends of Lawry
2017 "Academy Honorary Award" Karnataka Balavikas Academy, Directorate of Women and Child Development Department, Government of Karnataka
See also
Superstition in India
Federation of Indian Rationalist Associations
James Randi and his One Million Dollar Paranormal Challenge
Basava Premanand
Prabir Ghosh
Narendra Dabholkar
References
Further reading
External links
Official website
Indian religious sceptics
Indian atheism activists
Mangaloreans
1951 births
Living people
Konkani people
Indian biochemists
20th-century Indian chemists
Biochemistry educators
Scientists from Mangalore
Indian columnists
Writers from Mangalore | Narendra Nayak | Chemistry,Biology | 1,522 |
591,668 | https://en.wikipedia.org/wiki/Saponin | Saponins (Latin "sapon", soap + "-in", one of) are bitter-tasting, usually toxic plant-derived secondary metabolites. They are organic chemicals of high molecular weight that produce a foam when agitated in water. They are present in a wide range of plant species throughout the bark, leaves, stems, roots and flowers but particularly in soapwort (genus Saponaria), a flowering plant, the soapbark tree (Quillaja saponaria), common corn-cockle (Agrostemma githago L.), baby's breath (Gypsophila spp.) and soybeans (Glycine max L.). They are used in soaps, medicines (e.g. drug adjuvants), fire extinguishers, dietary supplements, steroid synthesis, and in carbonated beverages (for example, being responsible for maintaining the head on root beer). Saponins are both water and fat soluble, which gives them their useful soap properties. Some examples of these chemicals are glycyrrhizin (licorice flavoring) and quillaia (alt. quillaja), a bark extract used in beverages.
Classification based on chemical structure
Structurally, they are glycosides with at least one glycosidic linkage between a sugar chain (glycone) and another non-sugar organic molecule (aglycone).
Steroid glycosides
Steroid glycosides are saponins with 27 carbon atoms. They are modified triterpenoids whose aglycone is a steroid; these compounds typically consist of a steroid aglycone attached to one or more sugar molecules and can have various biological activities. Steroid glycosides are known for their significant cytotoxic, neurotrophic and antibacterial properties, and may also be used for the partial synthesis of sex hormones or other steroids.
Triterpene glycosides
Triterpene glycosides are natural glycosides, present in various plants, herbs and sea cucumbers, that possess 30 carbon atoms. These compounds consist of a triterpene aglycone attached to one or more sugar molecules. Triterpene glycosides exhibit a wide range of biological activities and pharmacological properties, making them valuable in traditional medicine and modern drug discovery.
Uses
The saponins are a subclass of terpenoids, the largest class of plant extracts. The amphipathic nature of saponins gives them activity as surfactants with potential ability to interact with cell membrane components, such as cholesterol and phospholipids, possibly making saponins useful for development of cosmetics and drugs. Saponins have also been used as adjuvants in development of vaccines, such as Quil A, an extract from the bark of Quillaja saponaria. This makes them of interest for possible use in subunit vaccines and vaccines directed against intracellular pathogens. In their use as adjuvants for manufacturing vaccines, toxicity associated with sterol complexation remains a concern.
Quillaja is toxic when consumed in large amounts; adverse effects may include liver damage, gastric pain, and diarrhea. The NOAEL of saponins is around 300 mg/kg in rodents, so a dose of 3 mg/kg should be safe with a safety factor (see Therapeutic index) of 100.
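The quoted figure follows the usual safety-factor arithmetic (a general toxicological convention rather than anything specific to this source):

$$\text{acceptable dose} \approx \frac{\text{NOAEL}}{\text{safety factor}} = \frac{300\ \text{mg/kg}}{100} = 3\ \text{mg/kg}.$$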
Saponins are used for their effects on ammonia emissions in animal feeding. In the United States, researchers are exploring the use of saponins derived from plants to control invasive worm species, including the jumping worm.
Decoction
The principal historical use of these plants was boiling down to make soap. Saponaria officinalis is most suited for this procedure, but other related species also work. The greatest concentration of saponin occurs during flowering, with the most saponin found in the woody stems and roots, but the leaves also contain some.
Biological sources
Saponins have historically been plant-derived, but they have also been isolated from marine organisms such as sea cucumber. They derive their name from the soapwort plant (genus Saponaria, family Caryophyllaceae), the root of which was used historically as a soap. In other representatives of this family, e.g. Agrostemma githago, Gypsophila spp., and Dianthus sp., saponins are also present in large quantities. Saponins are also found in the botanical family Sapindaceae, including its defining genus Sapindus (soapberry or soapnut) and the horse chestnut, and in the closely related families Aceraceae (maples) and Hippocastanaceae. They are also found heavily in Gynostemma pentaphyllum (Cucurbitaceae) in a form called gypenosides, and in ginseng or red ginseng (Panax, Araliaceae) in a form called ginsenosides. Saponins are also found in the unripe fruit of Manilkara zapota (also known as sapodillas), resulting in highly astringent properties. Nerium oleander (Apocynaceae), also known as White Oleander, is a source of the potent cardiac toxin oleandrin. Within these families, this class of chemical compounds is found in various parts of the plant: leaves, stems, roots, bulbs, blossom and fruit. Commercial formulations of plant-derived saponins, e.g., from the soap bark tree, Quillaja saponaria, and those from other sources are available via controlled manufacturing processes, which make them of use as chemical and biomedical reagents. Soyasaponins are a group of structurally complex oleanane-type triterpenoid saponins that include soyasapogenol (aglycone) and oligosaccharide moieties biosynthesized in soybean tissues. Soyasaponins have previously been associated with plant-microbe interactions via root exudates and with abiotic stresses such as nutritional deficiency.
Role in plant ecology and impact on animal foraging
In plants, saponins may serve as anti-feedants, and to protect the plant against microbes and fungi. Some plant saponins (e.g., from oat and spinach) may enhance nutrient absorption and aid in animal digestion. However, saponins are often bitter to taste, and so can reduce plant palatability (e.g., in livestock feeds), or even imbue them with life-threatening animal toxicity. Some saponins are toxic to cold-blooded organisms and insects at particular concentrations. Further research is needed to define the roles of these natural products in their host organisms, which have been described as "poorly understood" to date.
Ethnobotany
Most saponins, which readily dissolve in water, are poisonous to fish. Therefore, in ethnobotany, they are known for their use by indigenous people in obtaining aquatic food sources. Since prehistoric times, cultures throughout the world have used fish-killing plants, typically containing saponins, for fishing.
Although prohibited by law, fish-poison plants are still widely used by indigenous tribes in Guyana.
On the Indian subcontinent, the Gondi people use poison-plant extracts in fishing.
In 16th century, saponins-rich plant, Agrostemma githago, was used to treat ulcers, fistulas, and hemorrhages.
Many of California's Native American tribes traditionally used soaproot (genus Chlorogalum), and/or the root of various yucca species, which contain saponin, as a fish poison. They would pulverize the roots, mix with water to generate a foam, then put the suds into a stream. This would kill or incapacitate the fish, which could be gathered easily from the surface of the water. Among the tribes using this technique were the Lassik, the Luiseño, and the Mattole.
Chemical structure
The vast heterogeneity of structures underlying this class of compounds makes generalizations difficult; they are a subclass of terpenoids, oxygenated derivatives of terpene hydrocarbons. Terpenes in turn are formally made up of five-carbon isoprene units (the alternate steroid base is a terpene missing a few carbon atoms). Derivatives are formed by substituting other groups for some of the hydrogen atoms of the base structure. In the case of most saponins, one of these substituents is a sugar, so the compound is a glycoside of the base molecule.
More specifically, the lipophilic base structure of a saponin can be a triterpene, a steroid (such as spirostanol or furostanol) or a steroidal alkaloid (in which nitrogen atoms replace one or more carbon atoms). Alternatively, the base structure may be an acyclic carbon chain rather than the ring structure typical of steroids. One or two (rarely three) hydrophilic monosaccharide (simple sugar) units bind to the base structure via their hydroxyl (OH) groups. In some cases other substituents are present, such as carbon chains bearing hydroxyl or carboxyl groups. Such chain structures may be 1-11 carbon atoms long, but are usually 2–5 carbons long; the carbon chains themselves may be branched or unbranched.
The most commonly encountered sugars are monosaccharides like glucose and galactose, though a wide variety of sugars occurs naturally. Other kinds of molecules such as organic acids may also attach to the base, by forming esters via their carboxyl (COOH) groups. Of particular note among these are sugar acids such as glucuronic acid and galacturonic acid, which are oxidized forms of glucose and galactose.
See also
Cardenolide
Cardiac glycoside
Phytochemical
References
Saponaceous plants
Wood extracts | Saponin | Chemistry | 2,123 |
48,494,934 | https://en.wikipedia.org/wiki/Supra-arcade%20downflows | Supra-arcade downflows (SADs) are sunward-traveling plasma voids that are sometimes observed in the Sun's outer atmosphere, or corona, during solar flares. In solar physics, an arcade refers to a bundle of coronal loops, and the prefix supra- indicates that the downflows appear above flare arcades. They were first described in 1999 using the Soft X-ray Telescope (SXT) on board the Yohkoh satellite. SADs are byproducts of the magnetic reconnection process that drives solar flares, but their precise cause remains unknown.
Observations
Description
SADs are dark, finger-like plasma voids that are sometimes observed descending through the hot, dense plasma above bright coronal loop arcades during solar flares. They were first reported for a flare and associated coronal mass ejection that occurred on January 20, 1999, and was observed by the SXT onboard Yohkoh. SADs are sometimes referred to as “tadpoles” for their shape and have since been identified in many other events (e.g.). They tend to be most easily observed in the decay phases of long-duration flares, when sufficient plasma has accumulated above the flare arcade to make SADs visible, but they do begin earlier during the rise phase. In addition to the SAD voids, there are related structures known as supra-arcade downflowing loops (SADLs). SADLs are retracting (shrinking) coronal loops that form as the overlying magnetic field is reconfigured during the flare. SADs and SADLs are thought to be manifestations of the same process viewed from different angles, such that SADLs are observed if the viewer's perspective is along the axis of the arcade (i.e. through the arch), while SADs are observed if the perspective is perpendicular to the arcade axis.
Basic properties
SADs typically begin 100–200 Mm above the photosphere and descend 20–50 Mm before dissipating near the top of the flare arcade after a few minutes. Sunward speeds generally fall between 50 and 500 km s−1 but may occasionally approach 1000 km s−1. As they fall, the downflows decelerate at rates of 0.1 to 2 km s−2. SADs appear dark because they are considerably less dense than the surrounding plasma, while their temperatures (100,000 to 10,000,000 K) do not differ significantly from their surroundings. Their cross-sectional areas range from a few million to 70 million km2 (for comparison, the cross-sectional area of the Moon is 9.5 million km2).
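As a rough consistency check (the values below are hypothetical mid-range figures chosen from the observed ranges quoted above, not measurements from any particular event), constant-deceleration kinematics ties these numbers together:

```python
# Illustrative kinematics for a supra-arcade downflow; v0 and a are hypothetical
# mid-range values taken from the observed ranges quoted in the text.
v0 = 300.0   # initial sunward speed, km/s
a = 1.0      # deceleration, km/s^2

t_stop = v0 / a               # time needed to decelerate to rest, s
d_stop = v0**2 / (2.0 * a)    # distance travelled while decelerating, km

print(f"stops after ~{t_stop:.0f} s, having descended ~{d_stop / 1e3:.0f} Mm")
# -> stops after ~300 s, having descended ~45 Mm,
#    consistent with descents of 20-50 Mm over a few minutes.
```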
Instrumentation
SADs are typically observed using soft X-ray and Extreme Ultraviolet (EUV) telescopes that cover a wavelength range of roughly 10 to 1500 Angstroms (Å) and are sensitive to the high-temperature (100,000 to 10,000,000 K) coronal plasma through which the downflows move. These emissions are blocked by Earth's atmosphere, so observations are made using space observatories. The first detection was made by the Soft X-ray Telescope (SXT) onboard Yohkoh (1991–2001). Observations soon followed from the Transition Region and Coronal Explorer (TRACE, 1998–2010), an EUV imaging satellite, and the spectroscopic SUMER instrument on board the Solar and Heliospheric Observatory (SOHO, 1995–2016). More recently, studies on SADs have used data from the X-Ray Telescope (XRT) onboard Hinode (2006—present) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO, 2010—present). In addition to EUV and X-ray instruments, SADs may also be seen by white light coronagraphs such as the Large Angle and Spectrometric Coronagraph (LASCO) onboard SOHO, though these observations are less common.
Causes
SADs are widely accepted to be byproducts of magnetic reconnection, the physical process that drives solar flares by releasing energy stored in the Sun's magnetic field. Reconnection reconfigures the local magnetic field surrounding the flare site from a higher-energy (non-potential, stressed) state to a lower-energy (potential) state. This process is facilitated by the development of a current sheet, often preceded by or in tandem with a coronal mass ejection. As the field is being reconfigured, newly formed magnetic field lines are swept away from the reconnection site, producing outflows both toward and away from the solar surface, respectively referred to as downflows and upflows. SADs are believed to be related to reconnection downflows that perturb the hot, dense plasma that collects above flare arcades, but precisely how SADs form is uncertain and is an area of active research.
SADs were first interpreted as cross sections of magnetic flux tubes, which comprise coronal loops, that retract down due to magnetic tension after being formed at the reconnection site. This interpretation was later revised to suggest that SADs are instead wakes behind much smaller retracting loops (SADLs), rather than cross sections of the flux tubes themselves. Another possibility, also related to reconnection outflows, is that SADs arise from an instability, such as the Rayleigh-Taylor instability or a combination of the tearing mode and Kelvin-Helmholtz instabilities.
References
External links
Supra-Arcade Downflows - RHESSI Wiki (berkeley.edu)
NASA: Closeup of Solar 'Tadpoles' (nasa.gov)
Hinode/XRT: Supra-Arcade Downflows Post X-Flare (cfa.harvard.edu)
Hinode/XRT: Supra-Arcade Downflowing Loops (cfa.harvard.edu)
Astrophysics
Magnetohydrodynamics
Solar phenomena
Space physics
Sun | Supra-arcade downflows | Physics,Chemistry,Astronomy | 1,239 |
33,882,236 | https://en.wikipedia.org/wiki/Adaptive%20collaborative%20control | Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate behavior of systems formed through the partnerships of multiple agents for the execution of tasks and the development of work products. The term “collaborative control” originated from work developed in the late 1990s and early 2000 by Fong, Thorpe, and Baur (1999). It is important to note that according to Fong et al. in order for robots to function in collaborative control, they must be self-reliant, aware, and adaptive. In literature, the adjective “adaptive” is not always shown but is noted in the official sense as it is an important element of collaborative control. The adaptation of traditional applications of control theory in teleoperations sought initially to reduce the sovereignty of “humans as controllers/robots as tools” and had humans and robots working as peers, collaborating to perform tasks and to achieve common goals. Early implementations of adaptive collaborative control centered on vehicle teleoperation. Recent uses of adaptive collaborative control cover training, analysis, and engineering applications in teleoperations between humans and multiple robots, multiple robots collaborating among themselves, unmanned vehicle control, and fault tolerant controller design.
Like traditional control methodologies, adaptive collaborative control takes inputs into the system and regulates the output based on a predefined set of rules. The difference is that those rules or constraints only apply to the higher-level strategy (goals and tasks) set by humans. Lower tactical level decisions are more adaptive, flexible, and accommodating to varying levels of autonomy, interaction and agent (human and/or robotic) capabilities. Models under this methodology may query sources in the event there is some uncertainty in a task that affects the overarching strategy. That interaction will produce an alternative course of action if it provides more certainty in support of the overarching strategy. If not or there is no response, the model will continue performing as originally anticipated.
Several important considerations are necessary for the implementation of adaptive collaborative control for simulation. As discussed earlier, data is provided from multiple collaborators to perform necessary tasks. This basic function requires data fusion on behalf of the model and potentially a need to set a prioritization scheme for handling continuous streaming of recommendations. The degree of autonomy of the robot in the case of human–robot interaction and the weighting of decisional authority in robot-robot interaction are important for the control architecture. The design of interfaces is an important human system integration consideration that must be addressed. Because humans interpret information in inherently varied ways, it is an important design factor to ensure that the robot(s) convey their messages correctly when interacting with humans.
History
The history of adaptive collaborative control began in 1999 through the efforts of Terrence Fong and Charles Thorpe of Carnegie Mellon University and Charles Baur of École Polytechnique Fédérale de Lausanne. Fong et al. believed that existing telerobotic practices, which centered on a human point of view, while sufficient for some domains, were sub-optimal for operating multiple vehicles or controlling planetary rovers. The new approach devised by Fong et al. focused on a robot-centric teleoperation model that treated the human as a peer and made requests to them in the manner a person would seek advice from experts. In this seminal work, Fong et al. implemented the collaborative control design using a Pioneer AT mobile robot and a UNIX workstation with wireless communications and distributed message-based computing. Two years later, Fong utilized collaborative control for several more applications, including the collaboration of a single human operator with multiple mobile robots for surveillance and reconnaissance. Around this same time, Goldberg and Chen presented an adaptive collaborative control system that possessed malfunctioning sources. The control design proved to create a model that maintained robust performance when subjected to a sizeable fraction of malfunctioning sources. In this work, Goldberg and Chen expanded the definition of collaborative control to include multiple sensors and multiple control processes, in addition to human operators, as sources. A collaborative, cognitive workspace in the form of a three-dimensional representation, developed by Idaho National Laboratory to support human operators' understanding of tasks and environments, expounds on Fong's seminal work, which used textual dialogue as the human-robot interaction. The success of the 3-D display provided evidence of the use of mental models for increased team success. During that same time, Fong et al. developed a three-dimensional display that was formed via a fusion of sensor data. A recent adaptation of adaptive collaborative control in 2010 was used to design a fault tolerant control system using a Lyapunov function based analysis.
Initialization
The simuland for adaptive collaborative control centers on robotics. As such, adaptive collaborative control follows the tenets of control theory applied to robotics at its most basic level. That means the states of the robot are observed at a given instant and checked against some accepted bound. If they are not within that bound, the estimated states of the robot are calculated using equations of dynamics and kinematics for some future time. The process of entering observation data into the model to generate initial conditions is called initialization. The process of initialization for adaptive collaborative control occurs differently depending on the environment: robotics only, or human-robotic interaction. Under a robotics-only environment, initialization occurs very similarly to the description above. The robots, systems, subsystems, and other non-human entities observe some state they find not in accordance with the higher-level strategy. The entities that are aware of this error use the appropriate equations to present a revised value for a future time step to their peers. For human-robotic interactions, initialization can occur at two different levels. The first level is what was previously described. In this instance, the robot notices some anomaly in its states that is inconsistent with, or problematic for, its higher-level strategy. It queries the human, seeking advice to resolve its dilemma. In the other case, the human feels cause either to query some aspect of the robot's state (e.g. health, trajectory, speed) or to present advice to the robot that is challenged against the robot's existing tactical approach to the higher-level strategy. The main inputs for adaptive collaborative control are a human-initiated, dialogue-based command or a value presented by either a human or robotic element. The inputs used in the system models serve as the starting point for the collaboration.
A number of ways are available to gather observational data for use in functional models. The easiest method to gather observational data is simple human observation of the robotic system. Self-monitoring attributes such as built-in test (BIT) can provide regular reports on important system characteristics. A common approach to gather observations is to employ sensors throughout the robotic system. Vehicles operating in teleoperations have speedometers to indicate how fast they travel. Robotic systems with either stochastic or cyclic motion often employ accelerometers to note the forces exerted. GPS sensors provide a standardized data type that is used nearly universally for depicting location. Multi-sensor systems have been used to gather heterogeneous observational data for applications in path planning.
Computation
Adaptive collaborative control is most accurately modeled as a closed-loop feedback control system. Closed-loop feedback control describes the situation in which the outputs a system produces in response to an input are used to influence the present or future behavior of the system. The feedback control model is governed by a set of equations that are used to predict the future state of the simuland and regulate its behavior. These equations – in conjunction with principles of control theory – are used to evolve the physical operations of the simuland over time, including, but not limited to: dialogue, path planning, motion, monitoring, and lifting objects. Many times, these equations are modeled as nonlinear partial differential equations over a continuous time domain.
Due to their complexity, powerful computers are necessary to implement these models. A consequence of using computers to simulate these models is that continuous systems cannot be fully calculated. Instead, numerical solutions, such as the Runge–Kutta methods, are utilized to approximate these continuous models.
These equations are initialized from the response of one or more sources, and rates of change and outputs are calculated. These rates of change predict the states of the simuland a short time in the future. The time increment for this prediction is called a time step. The new states are applied to the model to determine the new rates of change and observational data. This behavior continues until the desired number of iterations is completed. In the event a future state violates a constraint, or comes within a tolerance of violating one, the simuland will confer with its human counterpart, seeking advice on how to proceed from that point. The outputs, or observational data, are used by the human operators to determine what they believe is the best course of action for the simuland. Their commands are fed with the input into the control system and assessed regarding their effectiveness in resolving the issues. If the human commands are determined to be valuable, the simuland will adjust its control input to what the human suggested. If the human's commands are determined to be unbeneficial, malicious, or non-existent, the model will seek its own correction approach.
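The loop just described can be sketched in a few lines of code. The sketch below is only an illustration under assumed names (the dynamics function f, the tolerance test violates, and the ask_human dialogue hook are invented for the example rather than taken from any published controller): the state is advanced one time step with a fourth-order Runge–Kutta update, and when a predicted state violates the tolerance the controller queries its human counterpart, adopting the advice only if it actually removes the violation.

import numpy as np

def rk4_step(f, state, u, dt):
    # One fourth-order Runge-Kutta step for the state equations ds/dt = f(s, u).
    k1 = f(state, u)
    k2 = f(state + 0.5 * dt * k1, u)
    k3 = f(state + 0.5 * dt * k2, u)
    k4 = f(state + dt * k3, u)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def run(f, state, u, dt, steps, violates, ask_human):
    # Advance the simuland, conferring with the human whenever a predicted
    # state violates the accepted tolerance.
    for _ in range(steps):
        predicted = rk4_step(f, state, u, dt)      # prediction one time step ahead
        if violates(predicted):
            advice = ask_human(predicted)          # dialogue: ask the operator for advice
            if advice is not None:
                candidate = rk4_step(f, state, advice, dt)
                if not violates(candidate):        # adopt the advice only if it helps
                    u, predicted = advice, candidate
            # otherwise the agent falls back on its own correction strategy (not shown)
        state = predicted
    return state

# Example: a first-order lag ds/dt = u - s with tolerance |s| <= 1.5;
# the (simulated) operator always suggests u = 0.
f = lambda s, u: u - s
final_state = run(f, np.array([0.0]), np.array([2.0]), 0.1, 50,
                  violates=lambda s: abs(float(s[0])) > 1.5,
                  ask_human=lambda s: np.array([0.0]))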
Domain and Codomain
The domain for the models used to conduct adaptive collaborative control is commands, queries, and responses from the human operator at the finite-state machine level. Commands from the human operator allow the agent to be provided with additional input in its decision-making process. This information is particularly beneficial when the human is a subject matter expert, or when the human is aware of how to reach an overarching goal while the agent is focused on only one aspect of the entire problem. Queries from the human are used to gather status information on support functions of the agent or to determine progress on missions. Many times the robot's response serves as precursor information for the issuance of a command as human assistance to the agent. Responses from the human operator are initiated by queries from the agent and feed back into the system to provide additional input to potentially regulate an action or set of actions from the agent. At the functional model level, the system has translated all accepted commands from the human into control inputs used to carry out the tasks assigned to the agent. Due to the autonomous nature of the simuland, input from the agent is also fed into the machine to operate sustaining functions and tasking that the human operator has ignored or answered insufficiently.
The codomain for the models that utilize adaptive collaborative control consists of queries, information statements, and responses from the agent. Queries and information statements are elements of the dialogue exchange at the finite-state machine level. Queries from the agent are the system's way of soliciting a response from a human operator. This is particularly important when the agent is physically stuck or at a logical impasse. The types of queries the agent can ask must be pre-defined by the modeler. The frequency and detail associated with a particular query depend on the expertise of the human operator, or more accurately the expertise of the human operator as identified to the agent. When the agent responds it sends an information statement to the human operator. This statement provides a brief description of what the adaptive collaborative control system decided. At the functional model level, the action associated with the information statement is carried out.
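At the finite-state machine level, this exchange of commands, queries, responses, and information statements can be pictured as a small state machine. The sketch below is purely illustrative: the two states, the message types, and the transitions are assumptions made for the example, not a published specification.

from enum import Enum, auto

class State(Enum):
    EXECUTING = auto()          # carrying out tasks toward the higher-level strategy
    AWAITING_RESPONSE = auto()  # a query has been sent to the human operator

class DialogueMachine:
    # Illustrative dialogue-level finite-state machine for a single agent.
    def __init__(self):
        self.state = State.EXECUTING

    def on_impasse(self, question):
        # The agent is physically stuck or at a logical impasse: query the operator.
        self.state = State.AWAITING_RESPONSE
        return {"type": "query", "text": question}

    def on_human_message(self, message):
        # Handle a command, query, or response arriving from the human operator.
        if message["type"] == "command":
            # An accepted command is translated into a control input (not shown here).
            return {"type": "information", "text": "command accepted"}
        if message["type"] == "query":
            return {"type": "information", "text": "status report"}
        if message["type"] == "response" and self.state is State.AWAITING_RESPONSE:
            self.state = State.EXECUTING
            return {"type": "information", "text": "advice applied"}
        return None  # ignored or insufficient input: keep executing autonomously

machine = DialogueMachine()
print(machine.on_impasse("Path blocked! Continue left or right?"))
print(machine.on_human_message({"type": "response", "text": "left"}))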
Applications
Vehicle teleoperation
Vehicle teleoperation has been around for many years. Early adaptations of vehicle teleoperations were robotic vehicles that were controlled continuously by human operators. Many of these systems were operated with line-of-sight RF communications and are now regarded as toys for children. Recent developments in the area of unmanned systems have brought a measure of autonomy to the robots. Adaptive collaborative control offers a shared mode of control where robotic vehicles and humans exchange ideas and advice regarding the best decisions to make on route following and obstacle avoidance. This shared mode of operation mitigates the problems of humans remotely operating in hazardous environments with poor communications, and the limited performance seen when humans have continuous, direct control. For vehicle teleoperations, robots will query humans to receive input on decisions that affect their tasks or when presented with safety-related issues. This dialogue is presented through an interface module that also allows the human operator to view the impact of the dialogue. In addition, this interface module allows the human operator to view what the robot's sensors capture in order to initiate commands or inquiries as necessary.
Fault Tolerant System
In practice, there are cases where multiple subsystems work together to achieve a common goal. This is a fairly common practice for reliability engineering. This technique involves systems working together collaboratively and the reliable operation of the overarching system is an important issue. Fault tolerant strategies are combined with the subsystems to form a fault tolerant collaborative system. A direct application is the case where two robotic manipulators work together to grasp a common object. For these systems, it is important that when one subsystem becomes faulty, the healthy subsystem reconfigures itself to operate alone to ensure the whole system can still perform its operations until the other subsystem is repaired. In this case, the subsystems create a dialogue between themselves to determine one another's status. In the event of one system starting to exhibit numerous or dangerous faults the secondary subsystem takes over the operation until the faulty system can be repaired.
Levels of Autonomy
Four levels of autonomy have been devised to serve as a baseline for human-robot interactions that include adaptive collaborative control. The four levels, ranging from fully manual to fully autonomous, are: tele mode, safe mode, shared mode, and autonomous mode. Adaptive collaborative controllers typically range from shared mode to autonomous mode. The two modes of interest are:
Shared mode – robots can relieve the operator of the burden of direct control, using reactive navigation to find a path based on their perception of the environment. Shared mode provides for a dynamic allocation of roles and responsibilities. The robot accepts varying degrees of operator intervention and supports dialogue through the use of a finite number of scripted suggestions (e.g. “Path blocked! Continue left or right?”) and other text messages that appear within the graphical user interface.
Autonomous mode – robots self-regulate high-level tasks such as patrol, search region or follow path. In this mode, the only user intervention occurs at the tasking level, i.e. the robot manages all decision-making and navigation.
Limitations
Like many other control strategies, adaptive collaborative control has limits to its capabilities. Although the adaptive collaborative control allows for many tasks to be automated and other predefined cases to query the human operator, unstructured decision making remains the domain of humans, especially when common sense is required. Particularly, robots possess poor judgment at high-level perceptual functions, including object recognition and situation assessment.
A high number of tasks or a particular task that is very involved may create many questions, thereby increasing the complexity of the dialogue. This complexity to the dialogue in turn adds complexity to the system design.
To retain its adaptive nature, the flow of control and information through the simuland will vary with time and events. This dynamic makes debugging, verification, and validation difficult because it is harder to precisely identify an error condition or duplicate a failure situation. This becomes particularly problematic if the system must operate in a regulated facility, such as a nuclear power plant or waste water facility.
Issues that affect human-based teams also encumber adaptive collaborative controlled systems. In both cases, teams are required to coordinate activities, exchange information, communicate effectively, and minimize the potential for interference. Other factors that affect teams include resource distribution, timing, sequencing, progress monitoring, and procedure maintenance.
Collaboration requires that all partners exhibit trust in the other collaborators and understand one another. To do so, each collaborator needs to have an accurate idea of what the others are capable of doing and how they will carry out an assignment. In some cases, the agent may have to weigh the responses from a human, and the human must believe in the decisions a robot makes.
References
Robot control
Collaboration | Adaptive collaborative control | Engineering | 3,268 |
73,716,999 | https://en.wikipedia.org/wiki/Torulaspora%20globosa | Torulaspora globosa is a yeast fungus in the genus Torulaspora. This species can be found in the rhizosphere and is beneficial for agricultural activities. Considered a plant growth promoting rhizobacteria, this species helps with plant health maintenance. It is important for biofuel production and is a promising biocontrol agent.
Description
Torulaspora globosa can use glucose, sucrose, ethanol and other carbon sources for growth. Its cells are round to oval, arrange in pairs, and have a creamy, shiny appearance on agar. They range in size from about 1–7 micrometers in breadth and 2–8 micrometers in length and divide by multipolar budding. The species can utilize ammonia as a nitrogen source. No spores, asexual or sexual, are present, and there is no filamentous growth.
Lipid and ethanol generation
Biodiesel is a mixture of mono-alkyl esters of long-chain fatty acids that can be used instead of regular diesel, with superior performance. Recently there has been a search for new ways of producing biodiesel that do not rely on food crops. Yeast strains, including Torulaspora, produce lipids that are similar in composition to the vegetable oils currently used to synthesize biodiesel. Torulaspora globosa was found to produce around 3.12 g/L of usable lipids over a period of days; after the nitrogen and sugar sources deplete, the yield decreases. Zinc, along with other nutritional elements, seems to play a role in the lipids it produces. Alongside lipids, T. globosa can undergo fermentation to produce ethanol, another compound that can be used as a biofuel. It was found to ferment effectively at temperatures up to 40 degrees Celsius and to tolerate the increased ethanol levels.
Biocontrol
Studies have shown that Torulaspora globosa is a good mycelial growth inhibitor, specifically against Colletotrichum. In vitro tests showed that T. globosa has an antagonistic effect on mycelial growth against the phytopathogenic mold Colletotrichum sublineolum. Results showed hyphal damage caused by the yeast on the agar dishes. T. globosa is considered mycocinogenic, despite not producing any volatile compounds, siderophores, or hydrolytic enzymes.
Plant growth promotion
Torulaspora globosa decreases root length while increasing the biomass of lettuce. One study found that although root length was reduced, the roots exhibited greater branching than before; shoot biomass was also increased, along with wider and longer leaves.
The yeast also produces indole acetic acid, which promotes the growth of most plants, and can solubilize minerals used by the plants.
References
Saccharomycetaceae
Fungi described in 1975
Fungus species | Torulaspora globosa | Biology | 572 |
11,548,117 | https://en.wikipedia.org/wiki/Pycnostysanus%20azaleae | Pycnostysanus azaleae or Seifertia azaleae is an ascomycete fungus that is a plant pathogen infecting azaleas and rhododendrons.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Enigmatic Ascomycota taxa
Fungus species | Pycnostysanus azaleae | Biology | 79 |
311,916 | https://en.wikipedia.org/wiki/Binge%20eating | Binge eating is a pattern of disordered eating which consists of episodes of uncontrollable eating. It is a common symptom of eating disorders such as binge eating disorder and bulimia nervosa. During such binges, a person rapidly consumes an excessive quantity of food. A diagnosis of binge eating is associated with feelings of loss of control. Binge eating disorder is also linked with being overweight and obesity.
Diagnosis
The DSM-5 includes diagnostic criteria for Binge Eating Disorder (BED). They are as follows:
Recurrent and persistent episodes of binge eating
Binge eating episodes are associated with three (or more) of the following:
Eating much more rapidly than normal
Eating until feeling uncomfortably full
Eating large amounts of food when not physically hungry
Eating alone because of being embarrassed by how much one is eating
Feeling disgusted with oneself, depressed, or very guilty after overeating
Marked distress regarding binge eating
Absence of regular compensatory behaviors (such as purging)
Warning signs
Typical warning signs of binge eating disorder include the disappearance of a large quantity of food in a relatively short period of time. A person who may be experiencing binge eating disorder may appear to be uncomfortable when eating around others or in public. A person may develop new and extreme eating patterns that they have never done before. These might include diets that cut out certain food groups completely such as a no dairy or no carb diet. Binge eating can begin after a first attempt at dieting. They might also steal or hoard food in unusual places. A person may be experiencing fluctuations in their weight. In addition, they may have feelings of disgust, depression, or guilt about overeating. Another possible warning sign of binge eating is that a person may be obsessed with their body image or weight.
Furthermore, patients who binge eat may also engage in other self-destructive behaviours like suicide attempts, drug use, shop-lifting, and drinking too much alcohol. The onset of binge eating without dieting is linked to a higher risk of mental health issues and a younger age of onset. BED patients can experience comorbid psychiatric instability.
Causes
There are no direct causes of binge eating; however, long-term dieting, psychological issues and an obsession with body image have been linked to binge eating. There are multiple factors that increase a person's risk of developing binge eating disorder. Family history could play a role if that person had a family member who was affected by binge eating. Said person may not have a supportive or friendly home environment, and they have a hard time expressing their problems with BED. Having a history of going on extreme diets may cause an urge to binge eat. Psychological issues such as feeling negatively about oneself or the way they look may trigger a binge.
Weight stigma has also been found to predict binge eating, highlighting the importance of weight-inclusive approaches to binge eating disorder that do not exacerbate this potential cause.
Health risks
There are several physical, emotional, and social health risks associated with binge eating disorder. These risks include depression, anxiety, and heart disease.
One study found that people with obesity who experience binge eating have a higher body mass index and higher levels of depression and stress than those without binge eating disorder. Exposure to two major categories of risk factors, those that raise the risk for obesity and those that raise the risk for psychiatric disorders in general, can be associated with binge eating disorder.
Effects
Typically, the eating is done rapidly, and a person will feel emotionally numb and unable to stop eating. Most people who have eating binges try to hide this behavior from others, and often feel ashamed about being overweight or depressed about their overeating. Although people who do not have any eating disorder may occasionally experience episodes of overeating, frequent binge eating is often a symptom of an eating disorder.
BED is characterized by uncontrollable, excessive eating, followed by feelings of shame and guilt. Unlike those with bulimia, those with BED symptoms typically do not purge their food, fast, or excessively exercise to compensate for binges. Additionally, these individuals tend to diet more often, enroll in weight-control programs and have a history of family obesity. However, many who have bulimia also have binge-eating disorder.
Along with the social and physical health effects of BED, there are psychiatric disorders that are often linked to it. These include, but are not limited to, depression, bipolar disorder, anxiety disorders, and substance abuse/use disorder.
Treatments
Current treatments for binge eating disorder mainly consist of psychological therapies, such as Cognitive Behavioural Therapy (CBT), Interpersonal Psychotherapy (IPT), and Dialectical Behavioural Therapy (DBT). A study conducted on the long term efficacy of psychological treatments for binge eating showed that both cognitive behavioral therapy (CBT) and group interpersonal psychotherapy (IPT) effectively treat binge eating disorder, with 64.4% of patients completely recovering from binge eating.
Lisdexamfetamine dimesylate, also known as Vyvanse, is the only medication approved by the Food and Drug Administration (FDA) for the treatment of moderate-to-severe binge eating disorder in adults as of 2024. However, some studies have called into question its effectiveness for this indication.
History
APA DSM
The American Psychiatric Association listed binge eating among the criteria and features of bulimia in the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) in 1987. By including binge eating in the DSM, even if not on its own as a separate eating disorder, the association brought awareness to the disorder and gave it legitimacy as a mental disorder. This allowed people to receive appropriate treatment for binge eating and for their disorder to be legitimized.
Drug therapy
In January 2015, the Food and Drug Administration (FDA) approved lisdexamfetamine dimesylate (Vyvanse), the first medication indicated for the treatment of moderate-to-severe binge eating disorder.
Men with binge eating
Men with binge eating often face unique barriers to seeking treatment due to socio-cultural expectations surrounding masculinity. After men compare their bodies to the culturally constructed masculine ideals, they often develop heightened concerns about their own body image and internalize the belief that their bodies should be muscular, lean, and strong, developing unhealthy behaviors like binge eating or using fad diets. Many men hesitate to reach out for help out of fear of appearing weak, 'less like a man' or even homosexual. The pervasive stereotype that eating disorders primarily affect women has contributed to feelings of shame and isolation among men who are affected by these disorders. This gender-based stigma surrounding eating disorders and the strongly feminine branding of eating disorder treatment centers create a significant barrier to men's willingness to reach out for support. Men are more likely to partake in compulsive or excessive exercising as compensation for highly calorific diets, leading to body dysmorphia.
See also
Binge drinking
Binge eating disorder
Cognitive behavioral treatment of eating disorders
Counterregulatory eating
Overeating
Polyphagia
Prader-Willi Syndrome
References
External links
Eating behaviors of humans
Hyperalimentation
| Binge eating | Biology | 1,530 |
32,018,124 | https://en.wikipedia.org/wiki/Amino%20acid%20kinase | In molecular biology, the amino acid kinase domain is a protein domain. It is found in protein kinases with various specificities, including the aspartate, glutamate and uridylate kinase families. In prokaryotes and plants the synthesis of the essential amino acids lysine and threonine is predominantly regulated by feed-back inhibition of aspartate kinase (AK) and dihydrodipicolinate synthase (DHPS). In Escherichia coli, thrA, metLM, and lysC encode aspartokinase isozymes that show feedback inhibition by threonine, methionine, and lysine, respectively. The lysine-sensitive isoenzyme of aspartate kinase from spinach leaves has a subunit composition of 4 large and 4 small subunits.
In plants although the control of carbon fixation and nitrogen assimilation has been studied in detail, relatively little is known about the regulation of carbon and nitrogen flow into amino acids. The metabolic regulation of expression of an Arabidopsis thaliana aspartate kinase/homoserine dehydrogenase (AK/HSD) gene, which encodes two linked key enzymes in the biosynthetic pathway of aspartate family amino acids has been studied. The conversion of aspartate into either the storage amino acid asparagine or aspartate family amino acids may be subject to a coordinated, reciprocal metabolic control, and this biochemical branch point is a part of a larger, coordinated regulatory mechanism of nitrogen and carbon storage and utilization.
References
Protein families | Amino acid kinase | Biology | 333 |
3,557,327 | https://en.wikipedia.org/wiki/Canalisation%20%28genetics%29 | Canalisation is a measure of the ability of a population to produce the same phenotype regardless of variability of its environment or genotype. It is a form of evolutionary robustness. The term was coined in 1942 by C. H. Waddington to capture the fact that "developmental reactions, as they occur in organisms submitted to natural selection...are adjusted so as to bring about one definite end-result regardless of minor variations in conditions during the course of the reaction". He used this word rather than robustness to consider that biological systems are not robust in quite the same way as, for example, engineered systems.
Biological robustness or canalisation comes about when developmental pathways are shaped by evolution. Waddington introduced the concept of the epigenetic landscape, in which the state of an organism rolls "downhill" during development. In this metaphor, a canalised trait is illustrated as a valley (which he called a creode) enclosed by high ridges, safely guiding the phenotype to its "fate". Waddington claimed that canals form in the epigenetic landscape during evolution, and that this heuristic is useful for understanding the unique qualities of biological robustness.
Genetic assimilation
Waddington used the concept of canalisation to explain his experiments on genetic assimilation. In these experiments, he exposed Drosophila pupae to heat shock. This environmental disturbance caused some flies to develop a crossveinless phenotype. He then selected for crossveinless. Eventually, the crossveinless phenotype appeared even without heat shock. Through this process of genetic assimilation, an environmentally induced phenotype had become inherited. Waddington explained this as the formation of a new canal in the epigenetic landscape.
It is, however, possible to explain genetic assimilation using only quantitative genetics and a threshold model, with no reference to the concept of canalisation. However, theoretical models that incorporate a complex genotype–phenotype map have found evidence for the evolution of phenotypic robustness contributing to genetic assimilation, even when selection is only for developmental stability and not for a particular phenotype, and so the quantitative genetics models do not apply. These studies suggest that the canalisation heuristic may still be useful, beyond the more simple concept of robustness.
Congruence hypothesis
Neither canalisation nor robustness are simple quantities to quantify: it is always necessary to specify which trait is canalised (robust) to which perturbations. For example, perturbations can come either from the environment or from mutations. It has been suggested that different perturbations have congruent effects on development taking place on an epigenetic landscape. This could, however, depend on the molecular mechanism responsible for robustness, and be different in different cases.
Evolutionary capacitance
The canalisation metaphor suggests that some phenotypic traits are very robust to small perturbations, for which development does not exit the canal, and rapidly returns down, with little effect on the final outcome of development. But perturbations whose magnitude exceeds a certain threshold will break out of the canal, moving the developmental process into uncharted territory. For instance, the study of an allelic series for Fgf8, an important gene for craniofacial development, with decreasing levels of gene expression demonstrated that the phenotype remains canalised as long as the expression level is above 40% of the wild-type expression.
Strong robustness up to a limit, with little robustness beyond, is a pattern that could increase evolvability in a fluctuating environment. Canalisation of a large set of genotypes into a limited phenotypic space has been suggested as a mechanism for the accumulation, in a neutral manner, of mutations that could otherwise be deleterious. Genetic canalisation could allow for evolutionary capacitance, where genetic diversity accumulates in a population over time, sheltered from natural selection because it does not normally affect phenotypes. This hidden diversity could then be unleashed by extreme changes in the environment or by molecular switches, releasing previously cryptic genetic variation that can then contribute to a rapid burst of evolution, a phenomenon termed decanalisation. Cycles of canalization-decanalization could explain the alternating periods of stasis, where genotypic diversity accumulates without morphological changes, followed by rapid morphological changes, where decanalization releases the phenotypic diversity and becomes subject to natural selection, in the fossil record, thus providing a potential developmental explanation for the punctuated equilibrium.
HSP90 and decanalisation
In 1998, Susan Lindquist discovered that Drosophila hsp83 heterozygous mutants exhibit a large diversity of phenotypes (from sexual combs on the head, to scutoid-like and notched-wing phenotypes). She showed that these phenotypes could be passed on to the next generation, suggesting a genetic basis for those phenotypes. The authors hypothesized that Hsp90, the chaperone protein encoded by hsp83, plays a pivotal role in the folding and activation of many proteins involved in developmental signaling pathways, thus buffering against genetic variation in those pathways. hsp83 mutants would therefore release this cryptic genetic variation, resulting in a diversity of phenotypes.
In 2002, Lindquist showed that pharmacological inhibition of HSP90 in Arabidopsis thaliana also lead to a wide range of phenotypes, some of which could be considered adaptive, further supporting the canalising role of HSP90.
Finally, the same type of experiment in the cavefish Astyanax mexicanus yielded similar results. This species encompasses two populations: an eyed population living under the water surface and an eye-less blind population living in caves. Not only is the cave population eye-less but it also displays a largely reduced orbit size. HSP90 inhibition leads to an increased variation in orbit size that could explain how this trait could evolve in just a few generations. Further analysis showed that low conductivity in the cave water induces a stress response mimicking the inhibition of HSP90, providing a mechanism for decanalisation.
Interpretation of the original Drosophila paper is now subject to controversy. Molecular analysis of the hsp83 mutant showed that HSP90 is required for the biogenesis of piRNAs, a set of small RNAs repressing transposons in the germline; loss of this function causes massive transposon insertional mutagenesis that could explain the phenotypic diversification.
Significance of Variability in Components
Understanding variability is an important aspect of understanding natural selection and mutation. Mechanisms affecting variability can be classified into two categories: those that modulate the amount of phenotypic variation and those that modulate which phenotypes are produced. This bias in genetic variability offers insight into why certain phenotypes are more successful in terms of their morphology, biochemical makeup, or behavior. Organisms need to develop systematically integrated systems in order to thrive in their specific ecosystems. This extends to morphology, where variations must occur in a systematic order; otherwise, phenotypic mutations will not persist under natural selection. Such variation affects the speed and rate of evolutionary change through the selection and modulation of phenotypic variants. Ultimately, this results in less diversity being observed throughout evolution, as the majority of phenotypes do not persist beyond a few generations due to their inferior morphology, biochemical makeup, or physical movement or appearance.
See also
Developmental noise
Phenotypic integration
Phenotypic plasticity
Developmental systems theory
Gene regulatory network
Systems biology
References
Developmental biology
Extended evolutionary synthesis
Population genetics | Canalisation (genetics) | Biology | 1,592 |
22,097,810 | https://en.wikipedia.org/wiki/Hyundai%20MB%20910 | The MB-910 is a tri-band/GPRS watch phone made by Hyundai Mobile Europe in Leitz Austria Vertriebs GmbH.
Its main feature is a 1.5-inch, 132 x 176 pixel, 65k colour touchscreen, which is contained within a plastic watch casing. The screen alternates between a clock mode and phone mode. The MB-910’s features include a multimedia player for music and video, a WAP 2.0 web browser, an email client and other productivity tools. The internal memory is 128MB and there is no expansion slot. Contacts, text and picture messages are limited to 300.
As a phone, it is used in conjunction with a Bluetooth headset and can operate in a hands-free manner. It offers a talk time of up to three hours and up to 70 hours on standby.
The MB-910 was released in mainland Europe during early 2009, priced at around € 160. Following the MB-910 showing at the Mobile World Congress 2009 show, a UK release is planned for Q2, priced SIM-free at around £200.
References
External links
Hyundai Mobile’s official webpage
Photo gallery from Mobile World Congress 2009
Watch phones
Products introduced in 2009 | Hyundai MB 910 | Technology | 249 |
44,297,134 | https://en.wikipedia.org/wiki/National%20Authority%20for%20Chemical%20Weapons%20Convention | National Authority for Chemical Weapons Convention or NACWC is an office in Cabinet Secretariat, Government of India, established on 29 April 1997 by a resolution of the Cabinet and was later accorded a statutory status through Chemical Weapons Convention Act, 2000.
References
Cabinet Secretariat of India
Chemical weapons demilitarization | National Authority for Chemical Weapons Convention | Chemistry | 60 |
51,076,996 | https://en.wikipedia.org/wiki/Magic%20hexagram | A magic hexagram of order 2 is an arrangement of numbers in a hexagram with triangular cells with 2 cells on each edge, in such a way that the numbers in each row, in all three directions, sum to the same magic constant M.
Magic star hexagram
Magic star hexagram or 6-pointed magic star is a star polygon with Schläfli symbol {6/2} in which numbers are placed at each of the six vertices and six intersections, such that the four numbers on each line sum to the same magic constant.
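As a worked check of the definition, each of the twelve cells lies on exactly two of the six lines, so for a normal star using the numbers 1 to 12 the magic constant is 2 × (1 + 2 + ... + 12) / 6 = 26. The backtracking sketch below searches for such arrangements; the cell indexing, the fill order, and the line list are assumptions chosen for the illustration, not a standard notation.

# Cells 0-11 of a {6/2} hexagram (6 outer points and 6 inner intersections);
# LINES lists the four cells crossed by each of the six lines.
LINES = [
    (1, 2, 3, 4),     # upper horizontal line
    (7, 8, 9, 10),    # lower horizontal line
    (0, 2, 5, 7),     # top point down to the lower-left point
    (0, 3, 6, 10),    # top point down to the lower-right point
    (1, 5, 8, 11),    # upper-left point down to the bottom point
    (4, 6, 9, 11),    # upper-right point down to the bottom point
]
MAGIC = 26  # each cell sits on two lines, so M = 2 * (1 + ... + 12) / 6

ORDER = [1, 2, 3, 4, 0, 5, 7, 6, 10, 8, 9, 11]    # fill order chosen so lines close early
FILLED = [set(ORDER[:k + 1]) for k in range(12)]  # cells already placed after step k
CLOSES = [[line for line in LINES if set(line) <= FILLED[k] and ORDER[k] in line]
          for k in range(12)]                     # lines completed exactly at step k

def solve(cells, used, k=0):
    # Yield every placement of 1-12 whose six line sums all equal MAGIC.
    if k == 12:
        yield tuple(cells)
        return
    pos = ORDER[k]
    for value in range(1, 13):
        if value in used:
            continue
        cells[pos] = value
        if all(sum(cells[i] for i in line) == MAGIC for line in CLOSES[k]):
            used.add(value)
            yield from solve(cells, used, k + 1)
            used.remove(value)
    cells[pos] = 0

print(next(solve([0] * 12, set())))  # one arrangement whose six lines each sum to 26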
Magic star hexagram with triangular cell
There are two solutions of magic star hexagram with 12 triangular cells.
Magic star hexagram with more than 12 vertices
Harold Reiter and David Ritchie calculated the solution of magic hexagrams with 19 vertices.
See also
Magic square
Magic hexagon
References
External links
The Magic Hexagram
A Complete Solution to the Magic Hexagram Problem
Magic figures
Star symbols | Magic hexagram | Mathematics | 198 |
45,630,702 | https://en.wikipedia.org/wiki/Lentinula%20reticeps | Lentinula reticeps is a species of agaric fungus in the family Omphalotaceae. It was originally described as Agaricus reticeps by French mycologist Camille Montagne in 1856. William Alphonso Murrill transferred it to the genus Lentinula in 1915.
References
External links
Fungi described in 1856
Fungi of North America
Marasmiaceae
Fungus species | Lentinula reticeps | Biology | 82 |
19,392,692 | https://en.wikipedia.org/wiki/Signal%20transfer%20function | The signal transfer function (SiTF) is a measure of the signal output versus the signal input of a system such as an infrared system or sensor. There are many general applications of the SiTF. Specifically, in the field of image analysis, it gives a measure of the noise of an imaging system, and thus yields one assessment of its performance.
SiTF evaluation
In evaluating the SiTF curve, the signal input and signal output are measured differentially; that is, the differential of the input signal and the differential of the output signal are calculated and plotted against each other. An operator, using computer software, defines an arbitrary area, with a given set of data points, within the signal and background regions of the output image of the infrared sensor, i.e. of the unit under test (UUT). The average signal and background are calculated by averaging the data of each arbitrarily defined region. A second-order polynomial curve is fitted to the data of each line. Then, the polynomial is subtracted from the average signal and background data to yield the new signal and background. The difference of the new signal and background data is taken to yield the net signal. Finally, the net signal is plotted versus the signal input. The signal input of the UUT is within its own spectral response (e.g. color-correlated temperature, pixel intensity, etc.). The slope of the linear portion of this curve is then found using the method of least squares.
SiTF curve
The net signal is calculated from the average signal and background, as in the calculation of the signal-to-noise ratio for imaging systems.
The SiTF curve is then given by the signal output data (the net signal data) plotted against the signal input data. All the data points in the linear region of the SiTF curve can be used in the method of least squares to find a linear approximation. Given data points $(x_i, y_i)$, $i = 1, \ldots, n$, a best-fit line parameterized as $y = mx + b$ is given by
$$m = \frac{n\sum_{i} x_i y_i - \sum_{i} x_i \sum_{i} y_i}{n\sum_{i} x_i^2 - \left(\sum_{i} x_i\right)^2}, \qquad b = \frac{\sum_{i} y_i - m\sum_{i} x_i}{n}.$$
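As a minimal numerical sketch of this last step (the function and variable names are illustrative assumptions, and NumPy is used only for convenience), the slope and intercept of the linear portion of the SiTF curve follow directly from the least-squares expressions above.

import numpy as np

def sitf_fit(signal_in, net_signal):
    # Least-squares straight line y = m*x + b through the linear portion of
    # the SiTF curve; m is the SiTF slope (the system's gain).
    x = np.asarray(signal_in, dtype=float)
    y = np.asarray(net_signal, dtype=float)
    n = x.size
    m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
    b = (np.sum(y) - m * np.sum(x)) / n
    return m, b

# Example: noiseless points on y = 2.5*x + 1 recover m = 2.5 and b = 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
m, b = sitf_fit(x, 2.5 * x + 1.0)
print(m, b)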
See also
Optical transfer function
Distortion
Minimum resolvable temperature difference
Noise equivalent temperature difference
Power spectral density
Minimum resolvable contrast
Signal to noise ratio (imaging)
References
External links
http://www.electro-optical.com
Signal processing
Image processing
Infrared imaging | Signal transfer function | Technology,Engineering | 465 |
21,677,236 | https://en.wikipedia.org/wiki/Rhizomucor | Rhizomucor is a genus of fungi in the family Lichtheimiaceae. The widespread genus contains six species. Rhizomucor parasiticus, the species originally selected as the type, is now considered synonymous with Rhizomucor pusillus.
References
Fungi
Fungus genera
Taxa described in 1900 | Rhizomucor | Biology | 65 |
1,808,630 | https://en.wikipedia.org/wiki/Collective%20responsibility | Collective responsibility or collective guilt, is the responsibility of organizations, groups and societies. Collective responsibility in the form of collective punishment is often used as a disciplinary measure in closed institutions, e.g., boarding schools (punishing a whole class for the actions of one known or unknown pupil), military units, prisons (juvenile and adult), psychiatric facilities, etc. The effectiveness and severity of this measure may vary greatly, but it often breeds distrust and isolation among their members. Historically, collective punishment is a sign of authoritarian tendencies in the institution or its home society.
In ethics, both methodological individualists and normative individualists question the validity of collective responsibility. Normally, only the individual actor can accrue culpability for actions that they freely cause. The notion of collective culpability seems to deny individual moral responsibility. Contemporary systems of criminal law accept the principle that guilt shall only be personal. According to genocide scholar A. Dirk Moses, "The collective guilt accusation is unacceptable in scholarship, let alone in normal discourse and is, I think, one of the key ingredients in genocidal thinking."
In business
As the business practices known as corporate social responsibility (CSR) and sustainability mature and converge with the responsibilities of governments and citizens, the term "collective responsibility" is beginning to be more widely used.
Collective responsibility is widely applied in corporations, where the entire workforce is held responsible for failure to achieve corporate targets (for example, profit targets), irrespective of the performance of individuals or teams which may have achieved or overachieved within their area. Collective punishment, even including measures that actually further harm the prospect of achieving targets, is applied as a measure to 'teach' the workforce.
In culture
The concept of collective responsibility is present in literature, most notably in Samuel Taylor Coleridge's "The Rime of the Ancient Mariner", a poem telling the tale of a ship's crew who died of thirst after they approved of one crew member's killing of an albatross.
1959's Ben-Hur and 1983's prison crime drama Bad Boys depict collective responsibility and punishment.
The play 'An Inspector Calls' by J. B. Priestley also features the theme of collective responsibility throughout the investigation process.
In politics
In some countries with parliamentary systems, there is a convention that all members of a cabinet must publicly support all government decisions, even if they do not agree with them. Members of the cabinet that wish to dissent or object publicly must resign from their positions or be sacked.
As a result of collective responsibility, the entire government cabinet must resign if a vote of no confidence is passed in parliament.
In law
Where two or more persons are liable in respect of the same obligation, the extent of their joint liability varies among jurisdictions.
In religion
The Jewish faith recognizes two kinds of sin, offenses against other people, and offenses against God. An offense against God may be understood as a violation of a contract (the Covenant between God and the Children of Israel). Ezra, a priest and a scribe, was the leader of a large group of exiles. On his return to Jerusalem, where he was required to teach the Jews to obey the laws of God, he discovered that the Jews had been marrying non-Jews. He tore his garments in despair and confessed the sins of Israel before God, before he went on to purify the community. The Book of Jeremiah (Yirmiyahu [ירמיהו]) can be organized into five sub-sections. One part, Jeremiah 2-24, displays scorn for the sins of Israel. The poem in 2:1–3:5 shows the evidence of a broken covenant against Israel.
This concept is found in the Old Testament (or the Tanakh); some examples of it are the account of the Flood, the Tower of Babel, Sodom and Gomorrah and, in some interpretations, the Book of Joshua's Achan. In those records, entire communities were punished for the actions of the vast majority of their members, even though it is impossible to say whether there were other righteous people among them, or children too young to be responsible for their deeds.
Through this framework of inductive reasoning, both the account of the Flood and that of Sodom and Gomorrah do identify righteous people, who happen to be the immediate or prospective family members of a prophet or of a prophet's nephew. In the former case the outcome is reconciled afterwards as the etiological basis for the reader's presumed good fortune in the Noahic covenant with all living creatures, in which God promises never again to destroy all life on Earth (a category implicitly broader than the unrighteous) by flood and creates the rainbow as the sign of this "everlasting covenant between God and every living creature of all flesh that is on the earth". In the latter case the destruction is pre-empted by an explicitly stated numerical threshold: the city would be spared only if ten righteous people could be found in it, implying that as many as nine righteous community members could still be put in peril.
The practice of blaming the Jews for Jesus' death is the longest-lasting example of collective responsibility. In this case, the blame was not only cast upon the Jews of Jesus's time, it was also cast upon successive generations of Jews. This practice is rooted in Matthew 27:25 (New International Version): "All the people answered, 'His blood is on us and on our children!'"
Collective punishment
Collective responsibility in the form of collective punishment is often used as a disciplinary measure in closed institutions, e.g. boarding schools (punishing a whole class for the actions of one known or unknown pupil), military units, prisons (juvenile and adult), psychiatric facilities, etc. The effectiveness and severity of this measure may vary greatly, but it often breeds distrust and isolation among their members. Historically, collective punishment is a sign of authoritarian and/or totalitarian tendencies in the institution and/or its home society. For example, in the Soviet Gulags, all members of a brigada (work unit) were punished for bad performance of any of its members.
Collective punishment is also practiced in the situation of war, economic sanctions, etc., presupposing the existence of collective guilt. Collective guilt, or guilt by association, is the controversial collectivist idea that individuals who are identified as a member of a certain group carry the responsibility for an act or behavior that members of that group have demonstrated, even if they themselves were not involved. Contemporary systems of criminal law accept the principle that guilt shall only be personal.
During the occupation of Poland by Nazi Germany, the Germans applied collective responsibility: any kind of help which was given to a person of Jewish faith or origin was punished with death, and not only the rescuer, but his/her family was also executed. This was widely publicized by the Germans. During the occupation, for every German killed by a Pole, 100-400 Poles were shot in retribution. Communities were held collectively responsible for the purported Polish counter-attacks against the invading German troops. Mass executions of łapanka hostages were conducted every single day during the Wehrmacht advance across Poland in September 1939 and thereafter.
Another example of collective punishment was applied after the war, when ethnic Germans in Central and Eastern Europe were collectively blamed for Nazi crimes, resulting in the committing of numerous atrocities against the German population, including killings (see Expulsion of Germans after World War II and Beneš decrees).
Perception
Entitativity is the perception of groups as being entities in themselves (an entitative group), independent of any of the group's members.
Ethics
In ethics, individualists question the idea of collective responsibility.
Methodological individualists challenge the very possibility of associating moral agency with groups, as distinct from their individual members, and normative individualists argue that collective responsibility violates principles of both individual responsibility and fairness. (Stanford Encyclopedia of Philosophy)
Normally, only the individual actor can accrue culpability for actions that they freely cause. The notion of collective culpability seems to deny individual moral responsibility. Does collective responsibility make sense? History is filled with examples of a wronged man who tried to avenge himself, not only on the person who has wronged him, but on other members of the wrongdoer's family, tribe, ethnic group, religion, or nation.
According to A. Dirk Moses, "The collective guilt accusation is unacceptable in scholarship, let alone in normal discourse and is, I think, one of the key ingredients in genocidal thinking."
See also
References
Works cited
Further reading
External links
Collective punishment
Social privilege
Social influence
Political theories
Religious belief and doctrine
Applied ethics | Collective responsibility | Biology | 1,801 |
74,142,000 | https://en.wikipedia.org/wiki/Curium%28IV%29%20fluoride | Curium(IV) fluoride is an inorganic chemical compound, a salt of curium and fluorine with the chemical formula CmF4.
Synthesis
It is reported that the compound can be prepared by fluorination of curium(III) fluoride (CmF3) with elemental fluorine at 400 °C.
Physical properties
The compound forms a brownish-tan solid composed of Cm4+ and F− ions. It has a monoclinic crystal structure of space group C2/c (No. 15), with lattice parameters a = 1250 pm, b = 1049 pm, and c = 818 pm. It has the same monoclinic crystal structure as other actinide tetrafluorides such as uranium tetrafluoride (UF4).
References
Curium compounds
Fluorides
Actinide halides | Curium(IV) fluoride | Chemistry | 139 |
58,006,182 | https://en.wikipedia.org/wiki/Fine-Resolution%20Epithermal%20Neutron%20Detector | The Fine-Resolution Epithermal Neutron Detector (FREND) is a neutron detector that is part of the instrument payload on board the Trace Gas Orbiter (TGO), launched to Mars in March 2016. The instrument is currently mapping hydrogen levels down to a depth of about one metre beneath the Martian surface, thus revealing the distribution of shallow water ice. Its resolution is 7.5 times better than that of the instrument Russia contributed to NASA's 2001 Mars Odyssey orbiter.
Overview
While orbiting Mars, FREND can provide information on the presence of hydrogen, in the form of water or hydrated minerals, in the uppermost layer of the Martian surface. Locations where hydrogen is found may indicate deposits of water ice, one of the key ingredients for life. Mapping ground ice could also be useful for future in-situ resource utilization (ISRU) and crewed missions.
FREND also features a dosimeter to monitor the radiation environment along its orbit around Mars.
Objectives
The main science objective of the instrument is to carry out high-spatial-resolution mapping of epithermal and fast neutron fluxes from the Martian surface. FREND will work in synergy with, and complement, orbital and ground data measured by the Dynamic Albedo of Neutrons (DAN) instrument on the Curiosity rover, the ADRON-RM instrument on the Rosalind Franklin rover, and the ADRON-EM instrument on the Kazachok lander.
The second goal of FREND is to use its dosimeter to measure the radiation dose at the TGO orbit from energetic particles of galactic cosmic rays and solar flares. The data will be used to estimate exposure levels of spacecraft and maintain radiation safety of crewed interplanetary flights.
Principle and development
Cosmic rays are sufficiently energetic to break apart atoms in the top one or two metres of Mars' surface, releasing high-energy neutrons, which can be measured by the FREND instrument. The measured distribution of neutron velocities reveals the hydrogen content, a good indicator of the abundance of water or hydrated minerals in the shallow subsurface of Mars.
FREND uses inherited technology developed by the Russian Space Research Institute and flown on the High Energy Neutron Detector (HEND) on Mars Odyssey; the Mercury Gamma and Neutron Spectrometer (MGNS) on BepiColombo; the Lunar Exploration Neutron Detector (LEND) on the Lunar Reconnaissance Orbiter, and Dynamic Albedo of Neutrons (DAN) on Curiosity rover.
This instrument's key components are four detectors containing Helium-3 for neutrons with energies from 0.4 keV to 500 keV, and a stilbene-based scintillator for high-energy neutrons up to 10 MeV. Each of the four 3He detectors counts neutrons independently for increased reliability. All five detectors are encased within a collimator that improves the resolution 7.5 times over the one Russia contributed to NASA's Mars Odyssey orbiter.
The Principal Investigator is Igor G. Mitrofanov, from the Russian Space Research Institute (IKI). Mitrofanov is also the PI for ExoMars' ADRON-RM and ADRON-EM neutron detector instruments.
See also
Astrobiology
Life on Mars
Water on Mars
References
ExoMars
Mars imagers
Astrobiology
Space science experiments | Fine-Resolution Epithermal Neutron Detector | Astronomy,Biology | 656 |
54,149,652 | https://en.wikipedia.org/wiki/Ursa%20Major%20Filament | Ursa Major Filament is a galaxy filament. The filament is connected to the CfA Homunculus; a portion of the filament forms part of the "leg" of the Homunculus.
See also
Abell catalogue
Large-scale structure of the universe
Supercluster
References
Galaxy filaments
Large-scale structure of the cosmos | Ursa Major Filament | Astronomy | 77 |
4,243,381 | https://en.wikipedia.org/wiki/Russula%20nobilis | Russula nobilis (Velen.), formerly Russula mairei (Singer) and commonly known as the beechwood sickener, is a basidiomycete mushroom of the genus Russula, a group noted for its brittle gills and bright colours.
Taxonomy
It was previously named in honour of French mycologist René Maire by Rolf Singer in 1929, but found to be the same taxon as the earlier 1920 Russula nobilis, which has naming priority.
Description
The cap is a red or rosy colour, 3–6 cm wide, convex to flat, or slightly depressed, and weakly sticky. It peels only to a third of its radius, which reveals pink flesh. The flesh is firm and white or sometimes yellowish, smells of coconut, and tastes peppery. It is often damaged by slugs. The stem is 2–5 cm long, 1–1.5 cm wide, cylindrical (firmer than its conifer-dwelling namesake, Russula emetica), and white. The gills are narrowly spaced, adnexed, rounded, and white, often with a faint blue-green sheen. The spore print is white.
Distribution and habitat
The species is mycorrhizal with beech (Fagus) in woodland areas. It is widespread and common in Europe, Asia, and North America, where these trees grow.
Edibility
Russula nobilis is inedible, and probably poisonous in quantity, but not deadly. Many bitter-tasting red-capped species can cause problems if eaten raw; the symptoms are mainly gastrointestinal in nature: diarrhoea, vomiting and colicky abdominal cramps. The active agent has not been identified, but the symptoms are thought to be caused by chemical compounds known as sesquiterpenes, which have been isolated from the related genus Lactarius and from Russula sardonia.
See also
List of Russula species
References
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
External links
Inedible fungi
Fungi of North America
Fungi of Europe
Fungi of Asia
Fungi described in 1920
Taxa named by Josef Velenovský
Fungus species | Russula nobilis | Biology | 482 |
300,602 | https://en.wikipedia.org/wiki/Internet%20access | Internet access is a facility or service that provides connectivity for a computer, a computer network, or other network device to the Internet, and for individuals or organizations to access or use applications such as email and the World Wide Web. Internet access is offered for sale by an international hierarchy of Internet service providers (ISPs) using various networking technologies. At the retail level, many organizations, including municipal entities, also provide cost-free access to the general public. Types of connection range from fixed-line (such as DSL, cable, and fiber optic) to mobile (via cellular networks) and satellite.
The availability of Internet access to the general public began with the commercialization of the early Internet in the early 1990s, and has grown with the availability of useful applications, such as the World Wide Web. In 1995, only a small fraction of the world's population had access, with well over half of those living in the United States; consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology. By 2014, 41 percent of the world's population had access, broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second.
History
The Internet developed from the ARPANET, which was funded by the US government to support projects within the government, at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience only came in 1995 when restrictions on the use of the Internet to carry commercial traffic were lifted.
In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s while modem data-rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal-emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the point-to-point protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users; although slower, due to the lower data rates available using dial-up.
An important factor in the rapid rise of Internet access speed has been advances in MOSFET (MOS transistor) technology. The MOSFET, invented at Bell Labs between 1955 and 1960 following the discoveries of Frosch and Derick, is the building block of the Internet's telecommunications networks. The laser, originally demonstrated by Charles H. Townes and Arthur Leonard Schawlow in 1960, was adopted for MOS light-wave systems around 1980, which led to exponential growth of Internet bandwidth. Continuous MOSFET scaling has since led to online bandwidth doubling every 18 months (Edholm's law, which is related to Moore's law), with the bandwidths of telecommunications networks rising from bits per second to terabits per second.
Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access" and so covers a wide range of technologies. At the core of these broadband Internet technologies are complementary MOS (CMOS) digital circuits, whose speed capabilities were extended with innovative design techniques. Broadband connections are typically made using a computer's built-in Ethernet networking capabilities, or by using a NIC expansion card.
Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and it does not interfere with voice use of phone lines. Broadband provides improved access to Internet services such as:
Faster World Wide Web browsing
Faster downloading of documents, photographs, videos, and other large files
Telephony, radio, television, and videoconferencing
Virtual private networks and remote system administration
Online gaming, especially massively multiplayer online role-playing games which are interaction-intensive
In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the number of subscriptions was roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.
The broadband technologies in widest use are digital subscriber line (DSL), chiefly ADSL, and cable Internet access. Newer technologies include VDSL and optical fiber extended closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber-to-the-premises and fiber-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology.
In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless, satellite, and microwave Internet are often used in rural, undeveloped, or other hard to serve areas where wired Internet is not readily available.
Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless.
Starting in roughly 2006, mobile broadband access is increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.
Availability
In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafés, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to LANs.
Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee based.
Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a confined location since multiple ones combined can cover a whole campus or park, or even an entire city can be enabled.
Additionally, mobile broadband access allows smartphones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network.
Speed
The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s, to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection from 220 (V.42bis) to 320 (V.44) kbit/s. However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.
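As a back-of-the-envelope illustration of what these rates mean in practice, the following Python sketch estimates how long a 1 MB file would take to download at the raw 56 kbit/s modem rate, at the nominal compressed V.44 rate, and at the roughly 150 kbit/s effective rate quoted above; the file size is an assumed example, and real throughput varies with line quality and data compressibility.

# Rough download-time estimate for a dial-up link, using the nominal rates
# from the text. Real-world rates are lower and depend on line quality and
# on how compressible the transferred data is.
FILE_BYTES = 1_000_000          # an assumed 1 MB file

def download_seconds(rate_kbit_s):
    return FILE_BYTES * 8 / (rate_kbit_s * 1000)

for label, rate in [("56 kbit/s raw", 56),
                    ("320 kbit/s nominal V.44", 320),
                    ("150 kbit/s typical effective", 150)]:
    print(label, round(download_seconds(rate), 1), "s")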
Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s. A 2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available.
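Because these definitions are simple numeric thresholds, checking a measured connection against them is a one-line comparison; the following Python sketch applies two of the figures quoted above (the 2006 OECD 256 kbit/s downstream figure and the 2015 FCC 25/3 Mbit/s figure), purely for illustration, since regulators revise these thresholds over time.

# Check a measured connection against two broadband thresholds quoted above.
def meets_oecd_2006(down_kbit_s):
    return down_kbit_s >= 256

def meets_fcc_2015(down_mbit_s, up_mbit_s):
    return down_mbit_s >= 25 and up_mbit_s >= 3

print(meets_oecd_2006(512))      # True
print(meets_fcc_2015(30, 5))     # True
print(meets_fcc_2015(30, 1))     # False: upstream below 3 Mbit/s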
The higher data rate dial-up modems and many broadband services are "asymmetric"—supporting much higher data rates for download (toward the user) than for upload (toward the Internet).
Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer. Actual end-to-end data rates can be lower due to a number of factors. In late June 2016, internet connection speeds averaged about 6 Mbit/s globally. Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used and not just on the first or last link providing Internet access to the end-user.
Network congestion
Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data-rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back on the bandwidth being used during periods of network congestion. This is fair in the sense that all users who experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases, the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video–effectively making the service unavailable.
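A minimal numeric sketch of why this aggregation works until usage patterns change is given below; the shared-link capacity, subscriber count, and per-user demand are assumed figures for illustration only, not values from any particular ISP, and real operators size links from measured traffic rather than a fixed ratio.

# Minimal model of a contended (oversubscribed) access link. All figures
# below are assumptions for illustration, not data from any real ISP.
LINK_MBIT_S = 1000          # shared upstream capacity
SUBSCRIBERS = 500           # subscribers sharing that capacity

def per_active_user(active_fraction, demand_mbit_s):
    # Capacity each active user gets if total demand exceeds the shared link.
    active = max(1, int(SUBSCRIBERS * active_fraction))
    wanted = active * demand_mbit_s
    return demand_mbit_s if wanted <= LINK_MBIT_S else LINK_MBIT_S / active

print(per_active_user(0.05, 20))   # light, bursty use: each user gets 20 Mbit/s
print(per_active_user(0.60, 20))   # sustained streaming: about 3.3 Mbit/s each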
When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping and careful use can ensure a better quality of service for time critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality or even charges of censorship, when some types of traffic are severely or completely blocked.
Outages
An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours.
Technologies
When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks. A computer or other device accessing the Internet would either be connected directly to a modem that communicates with an Internet service provider (ISP) or the modem's Internet connection would be shared via a LAN which provides access in a limited area such as a home, school, computer laboratory, or office building.
Although a connection to a LAN may provide very high data-rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past.
Ethernet is the name of the IEEE 802.3 standard for physical LAN communication and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards. Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless access points.
Many "modems" (cable modems, DSL gateways or Optical Network Terminals (ONTs)) provide the additional functionality to host a LAN so most Internet access today is through a LAN such as that created by a WiFi router connected to a modem or a combo modem router, often a very small LAN with just one or two devices attached. And while LANs are an important form of Internet access, this raises the question of how and at what data rate the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections, or in other words, how customers' modems (Customer-premises equipment) are most often connected to internet service providers (ISPs).
Dial-up technologies
Dial-up access
Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO) where it is switched to another phone line that connects to another modem at the remote end of the connection.
Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network to connect to the Internet. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet).
Multilink dial-up
Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel. It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking – and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking.
Hardwired broadband access
The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables in contrast to wireless broadband described later.
Integrated Services Digital Network
Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.
Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s. ISDN required special telephone switches at the service provider and has largely been replaced by DSL technology.
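The aggregate rates above follow directly from bonding 64 kbit/s bearer channels, as the short Python sketch below shows (arithmetic only, using the channel counts quoted in this section).

# ISDN aggregate rates from bonding 64 kbit/s bearer (B) channels.
B_CHANNEL_KBIT_S = 64

def bonded_rate_kbit_s(channels):
    return channels * B_CHANNEL_KBIT_S

print(bonded_rate_kbit_s(2))    # ISDN-BRI, both B channels: 128 kbit/s
print(bonded_rate_kbit_s(23))   # ISDN-PRI (US): 1472 kbit/s, roughly 1.5 Mbit/s
print(bonded_rate_kbit_s(30))   # E1-based PRI (Europe): 1920 kbit/s, roughly 1.9 Mbit/s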
Leased lines
Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers. Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.
T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.544 Mbit/s (DS1 or T1), to 44.736 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic or use all 24 channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 kbit/s and 1.5 Mbit/s. T-carrier lines require special termination equipment such as data service units that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP. In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.048 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.368 Mbit/s).
Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical), which carries 155.52 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads, each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four, providing OC-12c (622.08 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams. Optical transport network (OTN) may be used instead of SONET for higher data transmission speeds of up to 100 Gbit/s per OTN channel.
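The leased-line hierarchies above are easiest to see as a small lookup table of nominal line rates, as in the following Python sketch; the figures are the standard nominal rates, and framing overhead means the payload actually available to users is somewhat lower.

# Nominal line rates (Mbit/s) of the T-carrier and SONET hierarchies above.
RATES_MBIT_S = {
    "DS0": 0.064, "DS1/T1": 1.544, "DS3/T3": 44.736,
    "OC-3c": 155.52, "OC-12c": 622.08, "OC-48c": 2488.32, "OC-192c": 9953.28,
}

print(24 * RATES_MBIT_S["DS0"])      # 1.536 Mbit/s of DS0 payload in a T1
print(28 * 24)                       # 672 DS0 channels in a DS3/T3
print(RATES_MBIT_S["OC-3c"] / 3)     # 51.84 Mbit/s, the rate of one OC-1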
The 1, 10, 40, and 100 Gigabit Ethernet IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances up to 100 m and over optical fiber at distances up to 40 km.
Cable Internet access
Cable Internet provides access using a cable modem on hybrid fiber coaxial (HFC) wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. Using a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end." The cable company then connects to the Internet using a variety of means – usually fiber optic cable or digital satellite and microwave transmissions. Like DSL, broadband cable provides a continuous connection with an ISP.
Downstream, the direction toward the user, bit rates can be as much as 1000 Mbit/s in some countries, with the use of DOCSIS 3.1. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s. DOCSIS 4.0 promises up to 10 Gbit/s downstream and 6 Gbit/s upstream; however, this technology has yet to be deployed in real-world usage. Broadband cable access tends to service fewer business customers because existing television cable networks tend to service residential buildings; commercial buildings do not always include wiring for coaxial cable networks. In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.
Digital subscriber line (DSL, ADSL, SDSL, and VDSL)
Digital subscriber line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication. These frequency bands are subsequently separated by filters installed at the customer's premises.
DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction, (i.e., in the direction to the service provider) is lower than that in the downstream direction (i.e. to the customer), hence the designation of asymmetric. With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.
Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1) is a digital subscriber line (DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires and up to 85 Mbit/s down- and upstream on coaxial cable. VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection.
VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters and performance degrades as distance and loop attenuation increases.
DSL Rings
DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.
Fiber to the home
Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN). These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar in function and architecture to hybrid fiber-coaxial (HFC) systems used to provide cable Internet access. Fiber internet connections to customers are either AON (Active optical network) or more commonly PON (Passive optical network). Examples of fiber optic internet access standards are G.984 (GPON, G-PON) and 10G-PON (XG-PON). ISPs may instead use Metro Ethernet as a replacement for T1 and Frame Relay lines for corporate and institutional customers, or offer carrier-grade Ethernet.
The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, LTE) for final delivery to customers. Fiber optic is immune to electromagnetic interference.
In 2010, Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses. The project was abandoned by the subsequent LNP government, in favor of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).
Power-line Internet
Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s.
Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it.
Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used and so a repeater must be installed on each transformer. In the U.S. a transformer serves a small cluster of from one to a few houses. In Europe, it is more common for a somewhat larger transformer to service larger clusters of from 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city.
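The "order of magnitude" claim above can be checked with simple arithmetic; the Python sketch below uses assumed mid-range cluster sizes (three homes per U.S. transformer, fifty per European transformer) and a hypothetical city of 100,000 homes, purely for illustration.

# Rough repeater-count comparison for broadband over power lines, since a
# repeater is needed at every step-down transformer. Cluster sizes and the
# city size are assumed values for illustration only.
HOMES = 100_000
HOMES_PER_TRANSFORMER_US = 3     # "one to a few houses" per transformer
HOMES_PER_TRANSFORMER_EU = 50    # "10 to 100 houses" per transformer

repeaters_us = HOMES / HOMES_PER_TRANSFORMER_US
repeaters_eu = HOMES / HOMES_PER_TRANSFORMER_EU
print(round(repeaters_us), round(repeaters_eu), round(repeaters_us / repeaters_eu))
# roughly 33,333 vs 2,000 repeaters, about 17 times as many in the U.S.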
ATM and Frame Relay
Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates.
While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.
Wireless broadband access
Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies.
Satellite broadband
Satellite Internet access provides fixed, portable, and mobile Internet access. Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north. Service can be adversely affected by moisture, rain, and snow (known as rain fade). The system requires a carefully aimed directional antenna.
Satellites in geostationary Earth orbit (GEO) operate in a fixed position above the Earth's equator. At the speed of light (about 300,000 km/s), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access with typical latencies that range from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions. HughesNet, Exede, AT&T and Dish Network have GEO systems.
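The quarter-second figure can be reproduced directly from the geostationary altitude and the speed of light, as in the Python sketch below (propagation delay only; the switching, routing, and return-path delays mentioned above are what push the end-to-end figure toward 0.75 to 1.25 seconds).

# Propagation delay for a geostationary satellite link.
GEO_ALTITUDE_KM = 35_786         # altitude of geostationary orbit above the equator
SPEED_OF_LIGHT_KM_S = 299_792

up_and_back = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
print(round(up_and_back, 3))         # ~0.239 s: the "quarter of a second"

round_trip = 2 * up_and_back         # doubled for a full round-trip transmission
print(round(round_trip, 3))          # ~0.477 s before other delays are added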
Satellite internet constellations in low Earth orbit (LEO, below about 2,000 km) and medium Earth orbit (MEO, between about 2,000 km and 35,786 km) operate at lower altitudes, and their satellites are not fixed in their position above the Earth. Because they operate at a lower altitude, more satellites and launch vehicles are needed for worldwide coverage. This makes the initial required investment very large, which initially caused OneWeb and Iridium to declare bankruptcy. However, their lower altitudes allow lower latencies and higher speeds, which make real-time interactive Internet applications more feasible. LEO systems include Globalstar, Starlink, OneWeb and Iridium. The O3b constellation is a medium Earth-orbit system with a latency of 125 ms. COMMStellation™ is a LEO system, scheduled for launch in 2015, that is expected to have a latency of just 7 ms.
Mobile broadband
Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers (cellular networks) to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.
New mobile phone technology and infrastructure are introduced periodically and generally involve a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidth in hertz. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G).
Advertised download (to the user) and upload (to the Internet) data rates are peak or maximum rates, and end users will typically experience lower data rates.
WiMAX was originally developed to deliver fixed wireless service with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed.
In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage.
5G was designed to be faster and have lower latency than its predecessor, 4G. It can be used for mobile broadband in smartphones or separate modems that emit WiFi or can be connected through USB to a computer, or for fixed wireless.
Fixed wireless
Fixed wireless Internet connections do not use satellites and are not designed to support moving equipment such as smartphones; they rely on customer premises equipment, such as fixed antennas, that cannot be moved over a significant geographical area without losing the signal from the ISP. Microwave wireless broadband or 5G may be used for fixed wireless.
WiMAX
Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. It enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL". The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates. Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMax offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi LAN. WiMAX signals also penetrate building walls much more effectively than Wi-Fi. WiMAX is most often used as a fixed wireless standard.
Wireless ISP
Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well, such as microwave and WiMAX.
Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed by regulations), 802.11 can operate reliably over a distance of many kilometres (miles), although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather and line-of-sight problems.
With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages:
usually regulatory bodies allow for more power and using (better-) directional antennae,
there exists much more bandwidth to share, allowing both better throughput and improved coexistence,
there are fewer consumer devices that operate over 5 GHz than over 2.4 GHz, hence fewer interferers are present,
the shorter wavelengths don't propagate as well through walls and other structures, so much less interference leaks outside of the homes of consumers.
Proprietary technologies like Motorola Canopy & Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a number of companies that provide this service.
Local Multipoint Distribution Service
Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz. Originally designed for digital television transmission (DTV), it is conceived as a fixed wireless, point-to-multipoint technology for utilization in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. Distance is typically limited to about 2.4 km (1.5 miles), but links of up to 8 km (5 miles) from the base station are possible in some circumstances.
LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.
Hybrid Access Networks
In some regions, notably in rural areas, the length of the copper lines makes it difficult for network operators to provide high-bandwidth services. One alternative is to combine a fixed-access network, typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardized an architecture for such Hybrid Access Networks.
Non-commercial alternatives for using Internet services
Grassroots wireless networking movements
Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks. It is usually ordered by the local municipality from commercial WISPs.
Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed ones. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available.
Where radio spectrum regulation is not community-friendly, the channels are crowded or when equipment can not be afforded by local residents, free-space optical communication can also be deployed in a similar manner for point to point transmission in air (rather than in fiber optic cable).
Packet radio
Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet. Note that, under the regulatory rules of the amateur radio (ham) licence, Internet access and email should be strictly related to the activities of radio amateurs.
Sneakernet
The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for the data.
For those who do not have access to or can not afford broadband at home, downloading large files and disseminating information is done by transmission through workplace or library networks, taken home and shared with neighbors by sneakernet. The Cuban El Paquete Semanal is an organized example of this.
There are various decentralized, delay tolerant peer to peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.).
Sneakernets may also be used in tandem with computer network data transfer to increase data security or overall throughput for big data use cases. Innovation continues in the area to this day; for example, AWS has recently announced Snowball, and bulk data processing is also done in a similar fashion by many research institutes and government agencies.
Pricing and spending
Internet access is limited by the relation between pricing and available resources to spend. Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). In Mexico, the poorest 30% of the society spend an estimated US$35 per year (US$3 per month) and in Brazil, the poorest 22% of the population merely has US$9 per year to spend on ICT (US$0.75 per month). From Latin America, it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year. This is the amount of ICT spending people deem to be a basic necessity. Current Internet access prices exceed the available resources by a large margin in many countries.
Dial-up users pay the costs for making local or long-distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per minute or traffic based charges, and connect time limits by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some Wireless community networks continue the tradition of providing free Internet access.
Fixed broadband Internet access is often sold under an "unlimited" or flat rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per minute or traffic based charge. Per minute and traffic based charges and traffic caps are common for mobile broadband Internet access.
Internet services like Facebook, Wikipedia and Google have built special programs to partner with mobile network operators (MNO) to introduce zero-rating the cost for their data volumes as a means to provide their service more broadly into developing markets.
With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly and for some ISPs the flat rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80–90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03.
Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth. In response some ISPs are considering, are experimenting with, or have implemented combinations of traffic based pricing, time of day or "peak" and "off peak" pricing, and bandwidth or traffic caps. Others claim that, because the marginal cost of extra bandwidth is very small with 80 to 90 percent of the costs fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user.
In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps. In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas. In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned.
On August 1, 2012, in Nashville, Tennessee and on October 1, 2012, in Tucson, Arizona, Comcast began tests that impose data caps on area residents. In Nashville, exceeding the 300 Gbyte cap requires the purchase of additional blocks of 50 Gbytes of data.
Digital divide
Despite its tremendous growth, Internet access is not distributed equally within or between countries. The digital divide refers to "the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access". The gap between people with Internet access and those without is one of many aspects of the digital divide. Whether someone has access to the Internet can depend greatly on financial status, geographical location as well as government policies. "Low-income, rural, and minority populations have received special scrutiny as the technological 'have-nots'."
Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011. In North Korea there is relatively little access to the Internet due to the government's fear of political instability that might accompany the benefits of access to the global Internet. The U.S. trade embargo is a barrier limiting Internet access in Cuba.
Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries 74% of households had a computer and 71% had Internet access. The majority of people in developing countries do not have Internet access. About 4 billion people do not have Internet access. When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).
Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the "political, social, economic, educational, and career opportunities" available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003, directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world. The Global Gateway, the EU's initiative to assist infrastructure development throughout the world, plans to raise €300 billion for connectivity projects, including those in the digital sector, between 2021 and 2027.
Growth in number of users
Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013. With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia, Africa, Latin America, the Caribbean, and the Middle East. Across Africa, an estimated 900 million people are still not connected to the internet; for those who are, connectivity fees remain generally expensive, and bandwidth is severely constrained in many locations. The number of mobile customers in Africa, however, is expanding faster than everywhere else. Mobile financial services also allow for immediate payment of products and services.
There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.
Bandwidth divide
Traditionally the divide has been measured in terms of the existing numbers of subscriptions and digital devices ("have and have-not of subscriptions"). Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). As shown in the accompanying figure, the digital divide in kbit/s is not monotonically decreasing, but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, and "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole at once, but diffuses slowly through social networks. As shown by the figure, during the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 3G and fiber optics FTTH). As shown in the figure, Internet access in terms of bandwidth was more unequally distributed in 2014 than it was in the mid-1990s.
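One way to quantify this kind of bandwidth divide is to compute a standard inequality measure, such as the Gini coefficient, over per-capita bandwidth rather than over subscription counts. The Python sketch below does this with invented sample values purely for illustration; the studies referred to above may use different metrics and data.

# Illustrative only: Gini coefficient over per-capita bandwidth (kbit/s).
# The sample values are invented to show how a new technology that reaches
# only part of the population re-opens the measured divide.
def gini(values):
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

narrowband_era = [10, 12, 14, 15, 56, 56, 64]            # dial-up era, kbit/s per capita
broadband_era = [10, 12, 14, 15, 2000, 20000, 100000]    # same people after broadband arrives

print(round(gini(narrowband_era), 2))   # about 0.37
print(round(gini(broadband_era), 2))    # about 0.80: inequality re-opens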
For example, only 0.4% of the African population has a fixed-broadband subscription. The majority of internet users use it through mobile broadband.
Rural access
One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project.
Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.
Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.
The Canadian Broadband for Rural Nova Scotia initiative public private partnership is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.
In New Zealand, a fund has been formed by the government to improve rural broadband, and mobile phone coverage. Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.
Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks.
Access as a civil or human right
The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right.
Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available or preventing the state from unreasonably restricting an individual's access to information and the Internet:
Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with the government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."
Estonia: In 2000, the parliament launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the twenty-first century.
Finland: By July 2010, every person in Finland was to have access to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications, and by 2015 to a 100 Mbit/s connection.
France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.
Greece: Article 5A of the Constitution of Greece states that all persons have the right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion of, and access to electronically transmitted information.
Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabyte per second throughout Spain.
In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspice of the United Nations. After lengthy negotiations between governments, businesses and civil society representatives the WSIS Declaration of Principles was adopted reaffirming the importance of the Information Society to maintaining and strengthening human rights:
1. We, the representatives of the peoples of the world, assembled in Geneva from 10–12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centered, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.
3. We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.
The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:
4. We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organization. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers."
A poll of 27,973 adults in 26 countries, including 14,306 Internet users, conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right. 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.
The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access:
67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an "enabler" of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates.
78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.
79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.
85. Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of population.
Network neutrality
Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment in broadband infrastructure and attempt to fix something that is not broken. In April 2017, the newly appointed FCC chairman, Ajit Varadaraj Pai, proposed rolling back net neutrality rules in the United States; on December 14, 2017, the FCC voted 3–2 to repeal them.
Natural disasters and access
Natural disasters disrupt internet access in profound ways. This is important not only for the telecommunication companies that own the networks and the businesses that use them, but also for emergency crews and displaced citizens. The situation is worsened when hospitals or other buildings necessary for disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters can be put to use in planning and recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are being conducted to prevent large-scale outages.
One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable. At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted. Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at "network edges where important emergency organizations such as hospitals and government agencies are mostly located". Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service. The company Cisco has developed a Network Emergency Response Vehicle (NERV), a truck that provides portable communications for emergency responders when traditional networks are disrupted.
A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables laid on the ocean floor that carry international internet traffic. A sequence of undersea earthquakes cut six of the seven international cables connected to Taiwan and caused a tsunami that wiped out one of its cable landing stations. The impact slowed or disabled internet connection for five days within the Asia-Pacific region, as well as between the region and the United States and Europe.
With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012. AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone, so that, in theory, a natural disaster would not affect more than one availability zone. This theory holds as long as human error is not added to the mix. The major storm of June 2012 disabled only the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram.
See also
Back-channel, a low bandwidth, or less-than-optimal, transmission channel in the opposite direction to the main channel
Broadband mapping in the United States
Comparison of wireless data standards
Connectivity in a social and cultural sense
Fiber-optic communication
History of the Internet
IP over DVB, Internet access using MPEG data streams over a digital television network
List of countries by number of broadband Internet subscriptions
National broadband plan
Public switched telephone network (PSTN)
Residential gateway
White spaces (radio), a group of technology companies working to deliver broadband Internet access via unused analog television frequencies
References
External links
European broadband
Corporate vs. Community Internet, AlterNet, June 14, 2005 – on the clash between US cities' attempts to expand municipal broadband and corporate attempts to defend their markets
Broadband data, from Google public data
FCC Broadband Map
Types of Broadband Connections, Broadband.gov
Broadband
Human rights by issue
Rights | Internet access | Technology | 12,587 |
40,763,445 | https://en.wikipedia.org/wiki/Germacrone | Germacrone is a sesquiterpene which has been isolated from Geranium macrorrhizum. It has shown antiviral properties in an animal model of influenza infection.
References
Sesquiterpenes
Ten-membered rings
Cyclic ketones | Germacrone | Chemistry | 58 |
12,601,256 | https://en.wikipedia.org/wiki/Cabilly%20patents | The Cabilly patents are two US patents issued to Genentech and City of Hope which relate to the "fundamental technology required for the artificial synthesis of antibody molecules." The name refers to lead inventor Shmuel Cabilly, who was awarded the patent while working at City of Hope in the 1980s. There has been ongoing legal controversy surrounding these patents since their original filing in 1983. In 2008, the litigation pending before the U.S. District Court for the Central District of California was fully resolved and dismissed.
An interference between U.S. Patent 4,816,567 (the original "Cabilly patent"), co-issued to Genentech and City of Hope, and U.S. Patent 4,816,397 (the "Boss" patent), issued to Celltech, resulted in the issuance of a second "Cabilly patent" to Genentech in 2001. This new patent would extend into 2018, an effective term of 29 years.
A lawsuit filed by MedImmune, a licensee of the Cabilly patent, resulted in the United States Supreme Court case MedImmune, Inc. v. Genentech, Inc., which was decided in favor of MedImmune. Following the Supreme Court decision, the United States Patent and Trademark Office (USPTO) declared Genentech's patent invalid, but Genentech appealed that decision within the USPTO, and the patent remained valid and enforceable while the process was completed. Genentech's "Cabilly II" patent 6,331,415 was then found, after USPTO reexamination, to be enforceable. MedImmune and Centocor have settled with Genentech. Two other challenges remain in the courts: GlaxoSmithKline and Human Genome Sciences are both challenging the patent under antitrust law, based on the settlement between Genentech and Celltech of their dispute over the original Cabilly patent 4,816,567 and Celltech's patent 4,816,397.
An important implication of this case is the affirmation that a licensee retains the right to challenge a licensed patent. The controversy has also called attention to the amount of time the USPTO takes to resolve interference proceedings, and has been cited in arguments for changing to a first-to-file patent system.
References
Genentech Claims Rejected on Patent Which Was Subject of Recent Supreme Court Decision. California Biotech Law Blog. February 21, 2007.
USPTO issues double patenting rejection on Genentech's 29-year-old patent Patent Baristas Blog. February 22, 2006.
Biotech patent dispute involves millions. Gazette.net. November 16, 2005.
Genentech Hit with Adverse Patent Ruling. California Biotech Law Blog. September 30, 2005.
It Lives for 29 Years?. Legal Times. November 2003. vol.26 no.44.
United States biotechnology law | Cabilly patents | Biology | 586 |
876,272 | https://en.wikipedia.org/wiki/Ammonal | Ammonal is an explosive made up of ammonium nitrate and aluminium powder. TNT is added to create T-ammonal, which improves properties such as brisance. A similar commercial mixture of ammonium nitrate and aluminium powder is sold under the brand name Tannerite.
The ammonium nitrate functions as an oxidizer and the aluminium as fuel. The use of the relatively cheap ammonium nitrate and aluminium makes it a replacement for pure TNT.
The mixture is affected by humidity because ammonium nitrate is highly hygroscopic. Ammonal's ease of detonation depends on the fuel-to-oxidizer ratio: a 95:5 mixture of ammonium nitrate and aluminium is fairly sensitive, though not very oxygen-balanced. Even traces of copper metal are known to sensitize bulk amounts of ammonium nitrate and further increase the danger of spontaneous detonation during a fire, most likely due to the formation of tetraammine copper complexes. More oxygen-balanced mixtures are not easily detonated, requiring a fairly substantial shock, though ammonal remains more sensitive than trinitrotoluene and C-4.
The detonation velocity of ammonal is approximately 4,400 metres per second.
History
From early 1916, the British Army employed ammonal for its mines during World War I, starting with the Hawthorn Ridge mine during the Battle of the Somme, and reaching a zenith in the mines of the Battle of Messines, which were exploded on 7 June 1917 as a prelude to the Third Battle of Ypres (also known as the Battle of Passchendaele). Several of the mines in the Battle of Messines contained 30,000 lbs (over 13.6 tonnes) of ammonal, and others contained 20,000 lbs (over 9 tonnes). The joint explosion of the ammonal mines beneath the German lines at Messines created 19 large craters, killing 10,000 German soldiers in one of the largest non-nuclear explosions in history. Not all of the mines laid by the British Army at Messines were detonated, however. Two mines were not ignited in 1917 because they had been abandoned before the battle, and four were outside the area of the offensive. On 17 July 1955, a lightning strike set off one of the latter four mines. There were no human casualties, but one cow was killed. Another of the unused mines is believed to be located beneath a farmhouse, but no attempt has been made to remove it.
Ammonal used for military mining purposes was generally contained within metal cans or rubberised bags to prevent moisture ingress problems. The composition of ammonal used at Messines was 65% ammonium nitrate, 17% aluminium, 15% trinitrotoluene (TNT), and 3% charcoal. Ammonal remains in use as an industrial explosive. Typically, it is used for quarrying or mining purposes.
ETA, a Basque separatist organisation, used 250 kg of ammonal in a car bomb in its attack on the Zaragoza barracks on 11 December 1987 in Zaragoza, Spain.
Proportions
A T-ammonal mixture previously used in hand grenades and shells has the proportions (by mass):
ammonium nitrate 58.6%
aluminium powder 21%
charcoal 2.4%
TNT 18%
See also
Amatol
ANFO
BLU-82 "Daisy Cutter"
Tritonal
References
Explosives | Ammonal | Chemistry | 670 |
5,235,884 | https://en.wikipedia.org/wiki/EF%20Johnson%20Technologies | EF Johnson Technologies, Inc. is a two-way radio manufacturer founded by its namesake, Edgar Frederick Johnson, in Waseca, Minnesota, United States in 1923. Today it is a wholly owned subsidiary of JVCKenwood of Yokohama, Japan.
EF Johnson Technologies offers a wide range of equipment for use by law enforcement, firefighters, EMS, and military. Products include Project 25 systems, portable/mobile two-way radios, and radio encryption products.
(Recent) Product introductions
2013: Introduced Viking VP900 multi-band portable radio.
2012: Introduced Viking VP600 portable radio.
2011: Introduced ATLAS P25 System Solutions. Named the Hot Product by APCO's Public Safety Communications magazine.
2010: Introduced the 51FIRE ES, the first portable radio engineered specifically for firefighters.
2009: Introduced Hybrid IP25, a Project 25 compliant wide area conventional system and a hybrid network intended to allow first responders to operate and interoperate between the conventional and trunked systems and eliminate the need for dispatchers to manually patch calls between the two systems.
2009: Introduced StarGate Dispatch Console, an IP-based dispatch console for first responders. StarGate was named the Hot Product by Public Safety Communications, the official magazine of the Association of Public-Safety Communications Officials (APCO).
2008: Introduced the Lightning Control Head, a mobile radio control head that incorporates electroluminescent technology.
2007: Introduced IP25 MultiSite, a switchless Project 25 trunked infrastructure system that is specifically designed for first responders. This Voice over Internet Protocol (VoIP) based system meets the NTIA mandates for narrowband operation in VHF and UHF frequencies as well as DOD mandates for Project 25 compliance.
2006: EF Johnson introduces the Enhanced (AMBE+2) Project 25 Vocoder in its entire radio product line.
History
The company was founded in 1923 by Edgar F. Johnson and his wife Ethel Johnson. The company began as a mail order business, selling radio transmitting parts to amateurs and early radio broadcasters from space shared with a woodworking shop located in downtown Waseca. In 1936, E.F. Johnson Co. built its first factory and office building in Waseca, and had 17 employees. The company designed and produced electronic components in volume, and was active in World War II defense production. By 1945, the company had grown to 500 employees with expanded facilities in a garage, a nearby grocery store and the Odd Fellows Hall. In 1949, the first of the company's amateur radio transmitters were manufactured, the Viking I model. The Viking line of amateur transmitters included the Valiant, Ranger and Pacemaker. In 1958, the company manufactured equipment for the Class D Citizens Band Radio. One such transceiver, the Johnson Messenger, is exhibited in the Smithsonian Institution as an example of early American-made technology.
The company transitioned to development and manufacturing of land-mobile radio products such as the Logic Trunked Radio trunking format. EFJohnson's discontinued Viking line of amateur radio transmitter products are collected, restored, and operated by a number of vintage amateur radio enthusiasts.
In 1982, the company merged with Western Union, and was later purchased in 1997 by software manufacturer Transcrypt International. Headquarters relocated to Irving, Texas in 2005 and the company became EF Johnson Technologies in 2008. In 2010, EF Johnson Technologies, Inc. was acquired by Francisco Partners, and later absorbed by Japanese electronics company JVCKenwood in 2014.
See also
Zetron
JVCKenwood
Notes
External links
E.F. Johnson Rigs
Waseca County, Minnesota
Radio technology
Amateur radio companies
Companies based in Minnesota
Electronics companies established in 1923
Electronics companies of the United States
1923 establishments in Minnesota
2014 mergers and acquisitions
JVCKenwood
American subsidiaries of foreign companies
Radio manufacturers | EF Johnson Technologies | Technology,Engineering | 766 |
41,460,482 | https://en.wikipedia.org/wiki/Russula%20mukteshwarica | Russula mukteshwarica is a mushroom closely related to R. violeipes. It has a purple plano-convex cap 65–130 mm in diameter, and gills that are yellow to yellow-green. The type specimen was collected from a forested region in Uttaranchal State in northern India.
See also
List of Russula species
References
External links
mukteshwarica
Fungi described in 2006
Fungi of Asia
Fungus species | Russula mukteshwarica | Biology | 91 |
764,781 | https://en.wikipedia.org/wiki/Colander | A colander (or cullender) is a kitchen utensil perforated with holes used to strain foods such as pasta or to rinse vegetables. The perforations of the colander allow liquid to drain through while retaining the solids inside. It is sometimes called a pasta strainer. A sieve, with much finer mesh, is also used for straining.
Description and history
Traditionally, colanders are made of a light metal, such as aluminium or thinly rolled stainless steel. Colanders are also made of plastic, silicone, ceramic, and enamelware.
The word colander comes from the Latin colum, meaning sieve.
Types
Bowl- or cone-shaped – the usual colander
Mated colander pot – a colander inside a cooking pot, allowing the food to drain as it is lifted out
Other uses
The colander in the form of a pasta strainer was adopted as the religious headgear of the satirical religion Pastafarianism, which worships the Flying Spaghetti Monster.
Colanders may be used during solar eclipses to project multiple small low-resolution images of a partial eclipse onto a flat surface for safe viewing.
See also
Chinois
Zaru
References
External links
Colander vs Strainer
Food preparation utensils
Filters
Religious headgear | Colander | Chemistry,Engineering | 260 |
11,460,080 | https://en.wikipedia.org/wiki/Bipolaris%20cookei | Bipolaris cookei is a plant pathogen that affects sorghum, causing target leaf spot with lesions along the leaf veins. It is found in the United States, Sudan, Israel, Cyprus, South America, and India.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungi of North America
Fungi of South America
Fungi of Asia
Fungi of Africa
Fungal plant pathogens and diseases
Sorghum diseases
Pleosporaceae
Fungus species | Bipolaris cookei | Biology | 92 |
52,777,879 | https://en.wikipedia.org/wiki/Base%20change%20theorems | In mathematics, the base change theorems relate the direct image and the inverse image of sheaves. More precisely, they are about the base change map, given by the following natural transformation of sheaves:
where
is a Cartesian square of topological spaces and is a sheaf on X.
Such theorems exist in different branches of geometry: for (essentially arbitrary) topological spaces and proper maps f, in algebraic geometry for (quasi-)coherent sheaves and f proper or g flat, similarly in analytic geometry, but also for étale sheaves for f proper or g smooth.
Introduction
A simple base change phenomenon arises in commutative algebra when A is a commutative ring and B and A' are two A-algebras. Let . In this situation, given a B-module M, there is an isomorphism (of A' -modules):
Here the subscript indicates the forgetful functor, i.e., is M, but regarded as an A-module.
Indeed, such an isomorphism is obtained by observing
Thus, the two operations, namely forgetful functors and tensor products commute in the sense of the above isomorphism.
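In the standard notation, with B' = B ⊗_A A' denoting the pushout algebra (a labelling assumed in this sketch), the isomorphism and the one-line computation behind it can be written as:

```latex
(M \otimes_B B')_{A'} \;\cong\; (M_A) \otimes_A A',
\qquad\text{since}\qquad
M \otimes_B B' = M \otimes_B (B \otimes_A A') \cong M \otimes_A A' .
```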
The base change theorems discussed below are statements of a similar kind.
Definition of the base change map
The base change theorems presented below all assert (for different types of sheaves, and under various assumptions on the maps involved) that the following base change map
is an isomorphism, where
are continuous maps between topological spaces that form a Cartesian square and is a sheaf on X. Here denotes the higher direct image of under f, i.e., the derived functor of the direct image (also known as pushforward) functor .
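In the notation most commonly used for such squares, with f' and g' denoting the two projections of the fibre product (labels assumed in this sketch), the Cartesian square and the base change map take the form:

```latex
\begin{array}{ccc}
X' = X \times_S S' & \xrightarrow{\;g'\;} & X \\
{\scriptstyle f'}\downarrow & & \downarrow{\scriptstyle f} \\
S' & \xrightarrow{\;g\;} & S
\end{array}
\qquad\qquad
g^{*}\bigl(R^{r} f_{*}\mathcal{F}\bigr) \;\longrightarrow\; R^{r} f'_{*}\bigl(g'^{*}\mathcal{F}\bigr).
```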
This map exists without any assumptions on the maps f and g. It is constructed as follows: since is left adjoint to , there is a natural map (called unit map)
and so
The Grothendieck spectral sequence then gives the first map and the last map (they are edge maps) in:
Combining this with the above yields
Using the adjointness of and finally yields the desired map.
The above-mentioned introductory example is a special case of this, namely for the affine schemes and, consequently, , and the quasi-coherent sheaf associated to the B-module M.
It is conceptually convenient to organize the above base change maps, each of which involves only a single higher direct image functor, into one map which encodes them all at once. In fact, similar arguments as above yield a map in the derived category of sheaves on S':
where denotes the (total) derived functor of .
General topology
Proper base change
If X is a Hausdorff topological space, S is a locally compact Hausdorff space and f is universally closed (i.e., is a closed map for any continuous map ), then
the base change map
is an isomorphism. Indeed, we have: for ,
and so for
To encode all individual higher derived functors of into one entity, the above statement may equivalently be rephrased by saying that the base change map
is a quasi-isomorphism.
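In the same notation, the stalkwise identity on which this argument rests (a standard formulation, assumed here) is:

```latex
\bigl(R^{r} f_{*}\mathcal{F}\bigr)_{s} \;\cong\; H^{r}\!\bigl(f^{-1}(s), \mathcal{F}\bigr) \qquad (s \in S),
```

so that, over a point s' ∈ S' with image s = g(s'), both sides of the base change map compute the cohomology of the same fibre, since f'^{-1}(s') ≅ f^{-1}(s) for a Cartesian square.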
The assumptions that the involved spaces be Hausdorff have been weakened by .
has extended the above theorem to non-abelian sheaf cohomology, i.e., sheaves taking values in simplicial sets (as opposed to abelian groups).
Direct image with compact support
If the map f is not closed, the base change map need not be an isomorphism, as the following example shows (the maps are the standard inclusions):
On the one hand is always zero, but if is a local system on corresponding to a representation of the fundamental group (which is isomorphic to Z), then can be computed as the invariants of the monodromy action of on the stalk (for any ), which need not vanish.
To obtain a base-change result, the functor (or its derived functor) has to be replaced by the direct image with compact support . For example, if is the inclusion of an open subset, such as in the above example, is the extension by zero, i.e., its stalks are given by
In general, there is a map , which is a quasi-isomorphism if f is proper, but not in general. The proper base change theorem mentioned above has the following generalization: there is a quasi-isomorphism
Base change for quasi-coherent sheaves
Proper base change
Proper base change theorems for quasi-coherent sheaves apply in the following situation: is a proper morphism between noetherian schemes, and is a coherent sheaf which is flat over S (i.e., is flat over ). In this situation, the following statements hold:
"Semicontinuity theorem":
For each , the function is upper semicontinuous.
The function is locally constant, where denotes the Euler characteristic.
"Grauert's theorem": if S is reduced and connected, then for each the following are equivalent
is constant.
is locally free and the natural map
is an isomorphism for all .
Furthermore, if these conditions hold, then the natural map
is an isomorphism for all .
If, for some p, for all , then the natural map
is an isomorphism for all .
As the stalk of the sheaf is closely related to the cohomology of the fiber of the point under f, this statement is paraphrased by saying that "cohomology commutes with base extension".
These statements are proved using the following fact, where in addition to the above assumptions: there is a finite complex of finitely generated projective A-modules and a natural isomorphism of functors
on the category of -algebras.
Flat base change
The base change map
is an isomorphism for a quasi-coherent sheaf (on ), provided that the map is flat (together with a number of technical conditions: f needs to be a separated morphism of finite type, the schemes involved need to be Noetherian).
Flat base change in the derived category
A far-reaching extension of flat base change is possible when considering the base change map
in the derived category of sheaves on S', similarly as mentioned above. Here is the (total) derived functor of the pullback of -modules (because involves a tensor product, is not exact when is not flat and therefore is not equal to its derived functor ).
This map is a quasi-isomorphism provided that the following conditions are satisfied:
is quasi-compact and is quasi-compact and quasi-separated,
is an object in , the bounded derived category of -modules, and its cohomology sheaves are quasi-coherent (for example, could be a bounded complex of quasi-coherent sheaves)
and are Tor-independent over , meaning that if and satisfy , then for all integers ,
.
One of the following conditions is satisfied:
has finite flat amplitude relative to , meaning that it is quasi-isomorphic in to a complex such that is -flat for all outside some bounded interval ; equivalently, there exists an interval such that for any complex in , one has for all outside ; or
has finite Tor-dimension, meaning that has finite flat amplitude relative to .
One advantage of this formulation is that the flatness hypothesis has been weakened. However, making concrete computations of the cohomology of the left- and right-hand sides now requires the Grothendieck spectral sequence.
Base change in derived algebraic geometry
Derived algebraic geometry provides a means to drop the flatness assumption, provided that the pullback is replaced by the homotopy pullback. In the easiest case when X, S, and are affine (with the notation as above), the homotopy pullback is given by the derived tensor product
Then, assuming that the schemes (or, more generally, derived schemes) involved are quasi-compact and quasi-separated, the natural transformation
is a quasi-isomorphism for any quasi-coherent sheaf, or more generally a complex of quasi-coherent sheaves.
The aforementioned flat base change result is in fact a special case, since for g flat the homotopy pullback (which is locally given by a derived tensor product) agrees with the ordinary pullback (locally given by the underived tensor product), and since the pullbacks along the flat maps g and g' are automatically derived (i.e., ). The auxiliary assumptions related to the Tor-independence or Tor-amplitude in the preceding base change theorem also become unnecessary.
In the above form, base change has been extended by to the situation where X, S, and S' are (possibly derived) stacks, provided that the map f is a perfect map (which includes the case that f is a quasi-compact, quasi-separated map of schemes, but also includes more general stacks, such as the classifying stack BG of an algebraic group in characteristic zero).
Variants and applications
Proper base change also holds in the context of complex manifolds and complex analytic spaces.
The theorem on formal functions is a variant of the proper base change, where the pullback is replaced by a completion operation.
The see-saw principle and the theorem of the cube, which are foundational facts in the theory of abelian varieties, are a consequence of proper base change.
A base-change also holds for D-modules: if X, S, X', and S' are smooth varieties (but f and g need not be flat or proper etc.), there is a quasi-isomorphism
where and denote the inverse and direct image functors for D-modules.
Base change for étale sheaves
For étale torsion sheaves , there are two base change results referred to as proper and smooth base change, respectively: base change holds if f is proper. It also holds if g is smooth, provided that f is quasi-compact and provided that the torsion of is prime to the characteristic of the residue fields of X.
Closely related to proper base change is the following fact (the two theorems are usually proved simultaneously): let X be a variety over a separably closed field and a constructible sheaf on . Then are finite in each of the following cases:
X is complete, or
has no p-torsion, where p is the characteristic of k.
Under additional assumptions, extended the proper base change theorem to non-torsion étale sheaves.
Applications
In close analogy to the topological situation mentioned above, the base change map for an open immersion f,
is not usually an isomorphism. Instead the extension by zero functor satisfies an isomorphism
This fact and the proper base change suggest to define the direct image functor with compact support for a map f by
where is a compactification of f, i.e., a factorization into an open immersion followed by a proper map.
The proper base change theorem is needed to show that this is well-defined, i.e., independent (up to isomorphism) of the choice of the compactification.
Moreover, again in analogy to the case of sheaves on a topological space, a base change formula for vs. does hold for non-proper maps f.
For the structural map of a scheme over a field k, the individual cohomologies of , denoted by , are referred to as cohomology with compact support. It is an important variant of usual étale cohomology.
Similar ideas are also used to construct an analogue of the functor in A1-homotopy theory.
See also
Grothendieck's relative point of view in algebraic geometry
Change of base (disambiguation)
Base change lifting of automorphic forms
Further reading
Notes
References
Gabber, "Finiteness theorems for étale cohomology of excellent schemes"
External links
Brian Conrad's handout
Trouble with semicontinuity
Topology
Theorems in algebraic geometry
Sheaf theory
Geometry | Base change theorems | Physics,Mathematics | 2,447 |
4,911,272 | https://en.wikipedia.org/wiki/Disco%20Corporation | is a Japanese precision tools maker, especially for the semiconductor production industry.
The company makes dicing saws and laser saws to cut semiconductor silicon wafers and other materials; grinders to process silicon and compound semiconductor wafers to ultra-thin levels; polishing machines to remove the grinding damage layer from the wafer back-side and to increase chip strength.
History
The company was founded as Daiichi-Seitosho in May 1937, as an industrial abrasive wheel manufacturer.
After World War II, Japan faced a construction boom, which also helped DISCO boost its sales. The company's grinder discs were in high demand from utility companies, which needed them to manufacture watt-meters.
In December 1968 the company developed and released an ultra-thin resinoid cutting wheel, Microncut. The wheel contained diamond powder and as a result was capable of making the sharp, precise cuts demanded in the semiconductor manufacturing process. Since there were no cutting machines available on the market on which ultra-thin precision wheels could be mounted and run, DISCO decided to develop its own machine in 1975. The cutting machine, DAD-2h, received instant recognition from semiconductor companies, including Texas Instruments.
The company adopted the name of DISCO Corporation in May 1977, was listed with the Japan Securities Dealers' Association in October 1989, and entered the First Section of the Tokyo Stock Exchange in December 1999.
References
External links
Disco Corporation global website
European Website
Manufacturing companies of Japan
Equipment semiconductor companies
Companies based in Tokyo
Manufacturing companies established in 1937
Companies listed on the Tokyo Stock Exchange
Japanese brands
Japanese companies established in 1937 | Disco Corporation | Engineering | 319 |
373,998 | https://en.wikipedia.org/wiki/Stuffing | Stuffing, filling, or dressing is an edible mixture, often composed of herbs and a starch such as bread, used to fill a cavity in the preparation of another food item. Many foods may be stuffed, including poultry, seafood, and vegetables. As a cooking technique stuffing helps retain moisture, while the mixture itself serves to augment and absorb flavors during its preparation.
Poultry stuffing often consists of breadcrumbs, onion, celery, spices, and herbs such as sage, combined with the giblets. Additions in the United Kingdom include dried fruits and nuts (such as apricots and flaked almonds), and chestnuts.
History
It is not known when stuffings were first used. The earliest documentary evidence is the Roman cookbook, Apicius De Re Coquinaria, which contains recipes for stuffed chicken, dormouse, hare, and pig. Most of the stuffings described consist of vegetables, herbs and spices, nuts, and spelt (a cereal), and frequently contain chopped liver, brains, and other organ meat.
Names for stuffing include "farce" (~1390), "stuffing" (1538), "forcemeat" (1688), and relatively more recently in the United States, "dressing" (1850).
Cavities
In addition to stuffing the body cavities of animals, including birds, fish, and mammals, various cuts of meat may be stuffed after they have been deboned or a pouch has been cut into them. Recipes include stuffed chicken legs, stuffed pork chops, stuffed breast of veal, as well as the traditional holiday stuffed turkey or goose.
Many types of vegetables are also suitable for stuffing, after their seeds or flesh has been removed. Tomatoes, capsicums (sweet or hot peppers), and vegetable marrows such as zucchini may be prepared in this way. Cabbages and similar vegetables can also be stuffed or wrapped around a filling. They are usually blanched first, in order to make their leaves more pliable. Then the interior may be replaced by stuffing, or small amounts of stuffing may be inserted between the individual leaves.
Cooks in ancient Rome, or perhaps the Middle Ages, purportedly developed engastration recipes, stuffing animals with other animals. An anonymous Andalusian cookbook from the 13th century includes a recipe for a ram stuffed with small birds. A similar recipe for a camel stuffed with sheep stuffed with bustards stuffed with carp stuffed with eggs is mentioned in T. C. Boyle's book Water Music. Multi-bird-stuffed dishes such as the turducken or gooducken are contemporary variations.
Fillers
Almost anything can serve as a stuffing. Many American stuffings contain a starchy ingredient like bread or cereals, usually together with vegetables, ground meats, herbs and spices, and eggs. Middle Eastern vegetable stuffings may be based on seasoned rice, on minced meat, or a combination thereof. Other stuffings may contain only vegetables and herbs. Some types of stuffing contain sausage meat, or forcemeat, while vegetarian stuffings sometimes contain tofu. Roast pork is often accompanied by sage and onion stuffing in England; roast poultry in a Christmas dinner may be stuffed with sweet chestnuts. Oysters are used in one traditional stuffing for Thanksgiving. These may also be combined with mashed potatoes, for a heavy stuffing. Fruits and dried fruits can be added to stuffing including apples, apricots, dried prunes, and raisins. In England, a stuffing is sometimes made of minced pork shoulder seasoned with various ingredients, such as sage, onion, bread, chestnuts, dried apricots, and dried cranberries. The stuffing mixture may be cooked separately and served as a side dish. This may still be called stuffing or it may be called dressing.
Food safety
The United States Department of Agriculture (USDA) states that cooking animals with a body cavity filled with stuffing can present potential food safety hazards. Even when the meat reaches a safe temperature, the stuffing can still harbor bacteria, and if the meat is cooked until the stuffing reaches a safe temperature, the meat may be overcooked. For turkeys, for instance, the USDA recommends cooking stuffing separately from the bird and not buying pre-stuffed birds.
See also
Breadcrumb
Breading
Forcemeat
Kousa mahshi, squash or zucchini stuffed with rice and meat
List of stuffed dishes
List of bread dishes
Panada
Paxo
Sarma and dolma
Stove Top stuffing
Stuffed pepper
References
External links
Cooking techniques
Food ingredients
Poultry dishes
Christmas food
Culinary terminology
Bread dishes
Rice dishes
Thanksgiving food | Stuffing | Technology | 941 |
7,512,586 | https://en.wikipedia.org/wiki/Wardrobe%20supervisor | The wardrobe supervisor is responsible for overseeing all wardrobe-related activities during the course of a theatrical run or film shoot. The modern title "wardrobe supervisor" has evolved from the more traditional titles of "wardrobe mistress/master" or "mistress/master of the wardrobe". Although the wardrobe supervisor may be present at some production meetings and fittings, their primary responsibilities generally begin at the load-in stage of a production or during prep of a film. At load-in, physical custody of and responsibility for the costumes shift from the costume designer and shop staff to the wardrobe supervisor.
The wardrobe supervisor supervises all dressers and costumers working on a production. In consultation with the production manager, stage manager, costume designer, and sometimes the director, the wardrobe supervisor helps to coordinate and assign dressers to specific performers and tasks. They help determine where and how costume changes are made. Generally, the wardrobe supervisor decides whether a point in a production requires a quick change backstage, or if there is time for a normal change in the dressing room. All dressers report directly to the wardrobe supervisor, who acts as primary liaison between dressers, the costumer, and stage management.
Duties
The wardrobe supervisor's primary responsibilities include:
The care and proper maintenance of all costumes, shoes, undergarments, hats and costume related personal props such as gloves, jewelry, parasols, fans and pocket books.
To ensure the proper labeling, hanging, storage and preset of all costume pieces.
To create and execute a proper cleaning schedule for all garments, ensuring that laundry and dry cleaning are done on a regular basis between performances. They also coordinate the regular changing of dress shields, the application of garment-freshening sprays, and the provision of clean undergarments to the performers.
To ensure that all costumes are properly pressed or steamed prior to each performance.
The wardrobe supervisor also regularly inventories and inspects all costumes and coordinates all costume repairs. The majority of minor costume repairs are done on site at the theatre, either by the wardrobe supervisor or, in the case of many regional theatres, by the onsite wardrobe maintenance crew connected to the in-house costume shop. Most repairs are considered "emergencies", however, and whenever possible they are done onsite at the theatre before, and sometimes during, the actual performance. The wardrobe supervisor's space in the theatre, with few exceptions, contains a sewing machine, a glue gun, and all sewing supplies necessary for any type of emergency repair that could be required. Most wardrobe supervisors are very capable seamstresses in their own right. The rule of thumb is that only in the case of very significant damage is a costume sent back to the shop for repair. The one exception to this rule is shoes. Although most supervisors maintain a regular schedule for polishing and re-spraying shoes, for safety reasons actual shoe repair work is always sent out.
At the end of a production run, the wardrobe supervisor oversees all aspects of the costume strike. In the case of rented costumes, they coordinate restoring costumes to original condition. This includes ripping out hems or alterations that may have been done for fitting, and removing any added trim or ornamentation. Sometimes this work is extensive enough that the costumes are returned to either the designer's or theatre's shop for laundering and restoration. Regardless, the wardrobe supervisor is responsible for providing a complete and accurate inventory that ensures all pieces are returned.
A good wardrobe supervisor ensures that costumes look as fresh and new for the last performance as they did on opening night.
References
J. Michael Gillette Theatrical Design & Production Mayfield Publishing Company, Mountainview CA, 1992
Stage crew
Theatrical occupations
Costume design
Television terminology
Mass media occupations | Wardrobe supervisor | Engineering | 738 |
35,471,350 | https://en.wikipedia.org/wiki/Method%20of%20fundamental%20solutions | In scientific computation and simulation, the method of fundamental solutions (MFS) is a technique for solving partial differential equations based on using the fundamental solution as a basis function. The MFS was developed to overcome the major drawbacks of the boundary element method (BEM), which also uses the fundamental solution to satisfy the governing equation. Consequently, both the MFS and the BEM are boundary-discretization techniques: they reduce the computational complexity by one dimension and have a particular edge over domain-type numerical techniques, such as the finite element and finite volume methods, for problems on infinite domains, thin-walled structures, and inverse problems.
In contrast to the BEM, the MFS avoids the numerical integration of the singular fundamental solution and is an inherently meshfree method. The method is, however, compromised by requiring a controversial fictitious boundary outside the physical domain to circumvent the singularity of the fundamental solution, which has seriously restricted its applicability to real-world problems. Nevertheless, the MFS has been found to be very competitive in some application areas, such as infinite-domain problems.
The MFS is also known by different names in the literature, including the charge simulation method, the superposition method, the desingularized method, the indirect boundary element method and the virtual boundary element method.
MFS formulation
Consider a partial differential equation governing certain type of problems
where is the partial differential operator, represents the computational domain, and denote the Dirichlet and Neumann boundaries, respectively,
and .
The MFS employs the fundamental solution of the operator as its basis function to represent the approximation of the unknown function u as follows
where denotes the Euclidean distance between collocation points and source points , is the fundamental solution which satisfies
where denotes the Dirac delta function, and are the unknown coefficients.
With the source points located outside the physical domain, the MFS avoids the singularity of the fundamental solution. Substituting the approximation into the boundary conditions yields the following matrix equation
where and denote the collocation points on the Dirichlet and Neumann boundaries, respectively. The unknown coefficients can be uniquely determined from the above algebraic equation, and the numerical solution can then be evaluated at any location in the physical domain.
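As an illustration of how such a collocation system can be assembled and solved, the following Python sketch applies the MFS to the two-dimensional Laplace equation on the unit disk with Dirichlet data; the test problem, the radius of the fictitious circle and the numbers of points are choices made for the example, not prescriptions of the method.

```python
# Minimal MFS sketch for the 2D Laplace equation on the unit disk with
# Dirichlet data g(x, y) = x, whose exact harmonic extension is u = x.
# The source radius R and the point counts are illustrative choices only.
import numpy as np

N = 40    # boundary collocation points
M = 40    # source points on the fictitious boundary
R = 2.0   # radius of the fictitious circle, placed outside the unit disk

theta = 2 * np.pi * np.arange(N) / N
phi = 2 * np.pi * np.arange(M) / M

xb = np.column_stack((np.cos(theta), np.sin(theta)))   # physical boundary
src = R * np.column_stack((np.cos(phi), np.sin(phi)))  # fictitious boundary

def G(p, q):
    """Fundamental solution of the 2D Laplacian, -ln|p - q| / (2*pi)."""
    r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return -np.log(r) / (2 * np.pi)

# Collocation: enforce the Dirichlet condition at the boundary points.
# A least-squares solve is used because MFS matrices are ill-conditioned.
A = G(xb, src)             # N x M matrix of fundamental-solution values
b = xb[:, 0]               # boundary data g = x
alpha = np.linalg.lstsq(A, b, rcond=None)[0]

# Evaluate the MFS approximation at an interior point and compare with u = x.
p = np.array([[0.3, 0.4]])
print((G(p, src) @ alpha)[0], "vs exact value", 0.3)
```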
History and recent developments
The ideas behind the MFS were developed primarily by V. D. Kupradze and M. A. Alexidze in the late 1950s and early 1960s. However, the method was first proposed as a computational technique much later by R. Mathon and R. L. Johnston in the late 1970s, followed by a number of papers by Mathon, Johnston and Graeme Fairweather with applications. The MFS then gradually became a useful tool for the solution of a large variety of physical and engineering problems.
In the 1990s, M. A. Golberg and C. S. Chen extended the MFS to deal with inhomogeneous equations and time-dependent problems, greatly expanding its applicability. Later developments indicated that the MFS can be used to solve partial differential equations with variable coefficients. The MFS has proved particularly effective for certain classes of problems such as inverse, unbounded domain, and free-boundary problems.
Some techniques have been developed to cure the fictitious boundary problem in the MFS, such as the boundary knot method, singular boundary method, and regularized meshless method.
See also
Radial basis function
Boundary element method
Boundary knot method
Boundary particle method
Singular boundary method
Regularized meshless method
References
External links
International Center for Numerical Simulation Software in Engineering & Sciences
Numerical analysis
Numerical differential equations | Method of fundamental solutions | Mathematics | 708 |
23,850,329 | https://en.wikipedia.org/wiki/International%20conference%20on%20Physics%20of%20Light%E2%80%93Matter%20Coupling%20in%20Nanostructures | The International Conference on Physics of Light–Matter Coupling in Nanostructures (PLMCN) is a yearly academic conference on various topics of semiconductor science and nanophotonics.
Topic
The conferences are devoted to the fundamental and technological issues relevant to the realization of a new generation of optoelectronic devices based on advanced low-dimensional and photonic structures, such as low threshold polariton lasers, new optical switches, single photon emitters, photonic band-gap structures, etc. They review the most recent achievements in the fundamental understanding of strong light–matter coupling, and follow the progress in the development of epitaxial and processing technologies of wide-bandgap semiconductors and organic nanostructures and microcavities providing the basis for advanced optical studies. The conferences are open to new emerging fields such as carbon nanotubes and quantum information.
The scope of these conferences covers both physics and application of a variety of phenomena related to light–matter coupling in solids such as:
Light–matter coupling in microcavities and photonic crystals
Basic exciton–polariton physics
Bose–Einstein condensates and polariton superfluid
Spin-related phenomena
Physics and application of quantum dots
Plasmons and near-field optics in light matter coupling
Growth and characterization of advanced wide-bandgap semiconductors (GaN, ZnSe, ZnO, organic materials)
Novel optical devices (polariton lasers, single-photon emitters, entangled-photon pair generators, optical switches...)
Quantum information science
Editions
The International Conference on Physics of Light–Matter Coupling in Nanostructures started in 2000 in Saint-Nectaire, France. The 14th edition was held as PLMCN14 instead of PLMCN13. The next issue, in 2014, was held as PLMCN2014 instead of PLMCN15. The issue after that was, confusingly, labeled as both PLMCN2015 and PLMCN16. The next conference, PLMCN17, reverted to the traditional labeling, now in sync with the edition number. The pattern broke again with PLMCN2020 for the 21st edition, which was held online due to the COVID-19 pandemic instead of in Clermont-Ferrand as initially planned. No conference was scheduled for 2021, to avoid holding another PLMCN online, and thus PLMCN22 in Cuba got back in sync, this time both with the year (2022) and the edition number (22nd).
List of previous editions:
PLMCN0: Saint-Nectaire, France (2000)
PLMCN1: Rome, Italy (2001)
PLMCN2: Rethymno, Greece (2002)
PLMCN3: Acireale, Italy (2003)
PLMCN4: Saint Petersburg, Russia (2004)
PLMCN5 : Glasgow, Scotland (2005)
PLMCN6: Magdeburg, Germany (2006)
PLMCN7: Havana, Cuba (2007)
PLMCN8: Tokyo, Japan (2008)
PLMCN9: Lecce, Italy (2009)
PLMCN10: Cuernavaca, Mexico (2010)
PLMCN11: Berlin, Germany (2011)
PLMCN12 : Hangzhou, China (2012)
PLMCN14: Hersonissos, Crete (2013)
PLMCN2014: Montpellier, France (2014)
PLMCN16: Medellin, Colombia (2015)
PLMCN17: Nara, Japan (2016)
PLMCN18: Würzburg, Germany (2017)
PLMCN19: Chengdu, China (2018)
PLMCN20: Moscow and Suzdal, Russia (2019)
PLMCN2020: online (Clermont-Ferrand host), France (2020)
PLMCN22: Varadero, Cuba (2022)
PLMCN23: Medellin, Colombia (2023)
PLMCN24: Tbilisi, Georgia (9-13 April 2024)
Next scheduled edition:
PLMCN25: Xiamen, China (8-13 April 2025)
Logo
The logo is a cat that travels around the world featuring each particular venue's folklore. It is designed every year by Alexey Kavokin (University of Southampton), one of the creators and chairmen of the conference.
See also
International Conference on the Physics of Semiconductors
References
External links
Twitter account
Physics conferences
Technology conferences
Nanotechnology institutions | International conference on Physics of Light–Matter Coupling in Nanostructures | Materials_science | 934 |
9,664,491 | https://en.wikipedia.org/wiki/TOMNET | The TOMNET optimization environment is a platform for solving applied optimization problems in Microsoft .NET. It makes it possible to use solvers such as SNOPT, MINOS and CPLEX with a single model formulation. The solvers handle everything from linear and integer programming to global optimization.
External links
(home page)
Numerical software
Mathematical optimization software | TOMNET | Mathematics | 71 |
1,067,057 | https://en.wikipedia.org/wiki/ReplayGain | ReplayGain is a proposed technical standard published by David Robinson in 2001 to measure and normalize the perceived loudness of audio in computer audio formats such as MP3 and Ogg Vorbis. It allows media players to normalize loudness for individual tracks or albums. This avoids the common problem of having to manually adjust volume levels between tracks when playing audio files from albums that have been mastered at different loudness levels.
Although this de facto standard is now formally known as ReplayGain, it was originally known as Replay Gain and is sometimes abbreviated RG.
ReplayGain is supported in a large number of media software and portable devices.
Operation
ReplayGain works by first performing a psychoacoustic analysis of an entire audio track or album to measure peak level and perceived loudness. Equal-loudness contours are used to compensate for frequency effects and statistical analysis is used to accommodate for effects related to time. The difference between the measured perceived loudness and the desired target loudness is calculated; this is considered the ideal replay gain value. Typically, the replay gain and peak level values are then stored as metadata in the audio file. ReplayGain-capable audio players use the replay gain metadata to automatically attenuate or amplify the signal on a per-track or per-album basis such that tracks or albums play at a similar loudness level. The peak level metadata can be used to prevent gain adjustments from inducing clipping in the playback device.
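As a rough illustration of this bookkeeping, the Python sketch below derives a gain value from a crude RMS loudness estimate and caps the playback scale factor using the stored peak; the real algorithm replaces the RMS step with equal-loudness filtering and a statistical summary, so the functions here are simplified stand-ins rather than the ReplayGain measurement itself.

```python
# Simplified illustration of ReplayGain-style bookkeeping. The loudness
# measure is a plain RMS level in dBFS; the actual algorithm applies
# equal-loudness filtering and a statistical analysis before this step.
import numpy as np

TARGET_DBFS = -14.0  # reference level (pink noise at 89 dB SPL)

def loudness_dbfs(samples):
    """Crude loudness estimate: RMS of the signal in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms)

def replay_gain_db(samples):
    """Gain in dB that would bring the track to the target loudness."""
    return TARGET_DBFS - loudness_dbfs(samples)

def playback_scale(gain_db, peak):
    """Linear playback scale factor, reduced if applying it would clip the peak."""
    scale = 10 ** (gain_db / 20)
    return min(scale, 1.0 / peak) if peak > 0 else scale

# Example: a sine-like track with peak amplitude 0.5.
track = 0.5 * np.sin(np.linspace(0, 2000 * np.pi, 44100))
gain = replay_gain_db(track)
print(round(gain, 2), "dB; playback scale", round(playback_scale(gain, 0.5), 3))
```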
Metadata
The original ReplayGain proposal specified an 8-byte field in the header of any file. Most implementations now use tags for ReplayGain information. FLAC and Ogg Vorbis use the REPLAYGAIN_* Vorbis comment fields. MP3 files usually use ID3v2. Other formats such as AAC and WMA use their native tag formats with a specially formatted tag entry listing the track's replay gain and peak loudness.
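A Vorbis comment block carrying ReplayGain data typically contains four fields of the following form (the numeric values shown are illustrative only):

```
REPLAYGAIN_TRACK_GAIN=-4.97 dB
REPLAYGAIN_TRACK_PEAK=0.503654
REPLAYGAIN_ALBUM_GAIN=-6.10 dB
REPLAYGAIN_ALBUM_PEAK=0.998712
```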
ReplayGain utilities usually add metadata to the audio files without altering the original audio data. Alternatively, a tool can amplify or attenuate the data itself and save the result to another, gain-adjusted audio file; this is not perfectly reversible in most cases. Some lossy audio formats, such as MP3, are structured in a way that they encode the volume of each compressed frame in a stream, and tools such as MP3Gain take advantage of this for directly applying the gain adjustment to MP3 files, adding undo information so that the process is reversible.
Target loudness
The target loudness is specified as the loudness of a stereo pink noise signal played back at 89 dB sound pressure level or −14 dB relative to full scale. This is based on SMPTE recommendation RP 200:2002, which specifies a similar method for calibrating playback levels in movie theaters using a reference level 6 dB lower (83 dB SPL, −20 dBFS).
Track-gain and album-gain
ReplayGain analysis can be performed on individual tracks so that all tracks will be of equal volume on playback. Analysis can also be performed on a per-album basis. In album-gain analysis an additional peak-value and gain-value, which will be shared by the whole album, is calculated. Using the album-gain values during playback will preserve the volume differences among tracks on an album.
On playback, listeners may decide if they want all tracks to sound equally loud or if they want all albums to sound equally loud with different tracks having different loudness. In album-gain mode, when album-gain data is missing, players should use track-gain data instead.
Alternatives
Peak amplitude is not a reliable indicator of loudness, and consequently peak normalization does not offer reliable normalization of perceived loudness. RMS normalization is more accurate but does not take into account psychoacoustic aspects of loudness perception.
With dynamic range compression, volume may be altered on the fly on playback producing a variable-gain normalization, as opposed to the constant gain as rendered by ReplayGain. While dynamic range compression is beneficial in keeping volume constant, it changes the artistic intent of the recording.
Sound Check is a proprietary Apple Inc. technology similar in function to ReplayGain. It is available in iTunes and on the iPod.
Standard measurement algorithms for broadcast loudness monitoring applications have recently been developed by the International Telecommunication Union (ITU-R BS.1770) and the European Broadcasting Union (EBU R128). This new method has been used to measure loudness in newer ReplayGain utilities such as foobar2000 (since 1.1.6) and loudgain.
Implementations
Streaming
Spotify
See also
Alignment level
Dialnorm
EBU R 128
Loudness war
Notes
References
Media player features pages
External links
ReplayGain specification
ReplayGain at Hydrogenaudio wiki
Replay Gain – A Proposed Standard, the original proposal, now out of date with respect to current practice
Replay Gain in Linux — guide to using graphical and command line ReplayGain tools in Linux.
Computer standards
Digital audio | ReplayGain | Technology | 1,026 |
39,254,765 | https://en.wikipedia.org/wiki/MOWAG-AEG |
History and development
In cooperation with AEG, MOWAG built 37 aircraft tugs, named Flz Sch 4x2, for the Swiss Air Force. The vehicles were used primarily to move the Dassault Mirage IIIS and Mirage III RS in and out of the aircraft caverns. A special feature compared to other aircraft tugs (e.g. the Bucher aircraft tractor) was that the aircraft could remain attached to the towing hook during the journey, which kept the time between leaving the cavern and the lift-off of the Mirage short.
The Mirage began to start its engine immediately upon leaving the "Vorstollen" (forward gallery). Once the engine was running, the tractor driver released the latch, drove away from the plane and turned to the right, so that the Mirage could roll under its own power along the taxiway to the runway.
The aircraft tractors were in use with the Swiss Air Force from 1967 to 2003. One is now part of the collection of the military museum in Full.
References
Aircraft ground handling
Tractors
Military vehicles of Switzerland | MOWAG-AEG | Engineering | 208 |
58,838,548 | https://en.wikipedia.org/wiki/European%20Union%20Observatory%20for%20Nanomaterials | The European Union Observatory for Nanomaterials (EUON) is an initiative that aims to increase the transparency and availability of information on nanomaterials to the general public. It was launched in June 2017.
The EUON collects existing information from databases, registries and studies and generates new data through additional studies and surveys on nanomaterials on the EU market.
The EUON website has content in 23 EU languages covering uses, safety, regulation, international activities as well as research and innovation.
It has been set up, managed and maintained by the European Chemicals Agency (ECHA).
Establishment
The EUON is the result of a perceived lack of necessary information about existing nanomaterials on the EU market. Against this background, the European Commission conducted an impact assessment of different options to address this perceived gap and increase the transparency of information. The European Commission also committed to addressing the issue of transparency as part of its Second Regulatory Review on Nanomaterials.
The European Commission delegated ECHA with the creation, management and maintenance of the EUON because of the synergies between its scope and ECHA's tasks of managing and evaluating data from the implementation of European chemicals legislation, REACH, CLP and BPR.
Lists and databases
The EUON compiles information about nanomaterials from different databases and lists.
NanoData – A database with information on the development of nanomaterials and nanotechnology, focusing mainly on the EU market. It gives information on different sectors using nanomaterials, including health, energy, photonics, manufacturing, and information and communication technology. It also lists products that use nanomaterials and nanotechnology, patents, and research projects funded by the EU.
eNanoMapper – Database with data on the toxicological properties of nanomaterials. The data is collected from several sources, including the EU-funded NanoREG project, a part of the EU NanoSafety Cluster (NSC), a cluster of European Commission-funded projects related to nanomaterials. The eNanoMapper is funded by the EU under its research and innovation programme.
ECHA's database of registered nanoform substances – European Chemicals Agency (ECHA) publishes information on chemical substances registered under REACH. This information covers the intrinsic properties of each substance and their impact on health and the environment. The data comes directly from companies who make or import the substances. The database also includes substances in nano-form.
Catalogue of nanomaterials in cosmetics – The European Commission maintains a catalogue of nanomaterials used in cosmetics placed on the EU market, based on information notified by industry. The EUON has matched these substances with those registered under the REACH Regulation to display publicly available information collected from REACH registration dossiers including information on the toxicological and ecotoxicological properties of the substances.
List of nano-pigments on the EU market – The list consists of 81 substances from the European Chemicals Agency's chemicals database, the Belgian and French national inventories, the Danish Product Register and the EU catalogue of nanomaterials in cosmetic products.
References
External links
European Commission – definition of a nanomaterial
European Commission – nanomaterials
Nanomaterials | European Union Observatory for Nanomaterials | Materials_science | 657 |
48,334,189 | https://en.wikipedia.org/wiki/Hubbell%20Incorporated | Hubbell Incorporated, headquartered in Shelton, Connecticut, is an American company that designs, manufactures, and sells electrical and electronic products for non-residential and residential construction, industrial, and utility applications. Hubbell was founded by Harvey Hubbell as a proprietorship in 1888, and was incorporated in Connecticut in 1905.
The company is ranked 651st on the Fortune 500 list of the largest United States corporations by total revenue.
The company operates two segments: the utility solutions segment, which produces items such as arresters, insulators, connectors, anchors, bushings, enclosures, cutoffs and switches; and the electrical solutions segment, which produces application wiring device products, rough-in electrical products, connector and grounding products, and lighting fixtures, as well as other electrical equipment.
Hubbell has manufacturing facilities in the United States, Canada, Switzerland, Puerto Rico, Mexico, China, Italy, the United Kingdom, Brazil and Australia and maintains sales offices in Singapore, China, India, Mexico, South Korea, and countries in the Middle East.
History
Hubbell Incorporated was founded as a proprietorship in 1888 by Harvey Hubbell II. Born in Connecticut in 1857, he was a U.S. inventor, entrepreneur, and industrialist. Hubbell's best-known inventions are the U.S. electrical plug and the pull-chain light socket. Hubbell graduated from high school and began working for companies manufacturing marine engines and printing machinery. During this time, he accumulated several ideas for new inventions, and in 1888, he set out on his own, opening a small manufacturing facility in Bridgeport, Connecticut. Hubbell's first product was taken from his own patent for a paper roll holder with a toothed blade for use in stores that sold wrapping paper. This cutter stand became a tremendous success; it was a common feature of retail stores that used wrapping paper in the early 1900s and remained widely used into the late 20th century.
Soon, Hubbell discovered that he had to design machinery to make parts for his products. One of the first such machines was a tapping machine, for which he also received a patent. Around 1896, with his machinery business progressing, Hubbell's next patent was a major breakthrough in the fastener industry: the process and machinery for cold-rolled screw threads, which reduced the rate of material lost in production by more than 50%. He designed and built progressive blanking and forming dies, patented machinery to slot screw heads and a machine to assemble screws and small parts, devised tools to indicate speed, and patented a changeable-speed screwdriver. Hubbell's idea was to provide convenience, safety, and control to an electric light with his new "pull socket", which was patented in August 1896. The same familiar device, with its on/off pull chain, is still in use today. Hubbell built three prototypes by hand using metal and insulated wood parts to design a product with individual wires permanently attached in the proper sequence and correct polarity, and one which could be connected to or disconnected from a power supply in the wall easily and safely. Cartridge fuses and fuse blocks, lamp holders, and key sockets soon followed the same path. Later, Hubbell's "separable plug" design took shape on the drawing board back in Bridgeport and was then submitted to the patent office in Washington, D.C. Additional designs based on that basic concept followed: separable plugs in different configurations, a single flush-mounted receptacle, and new products for electrical circuits. One of the most successful, and familiar today, was the duplex receptacle, which is still found everywhere that electrical power is used.
In 1901, Hubbell published a 12-page catalogue that listed 63 electrical products of his company's manufacture, and four years later he incorporated his enterprise as Harvey Hubbell, Incorporated. In the same year, the company registered its trademark of "...a sphere with meridian lines and the name 'Hubbell' centered within". Hubbell's pace of new ideas and product design did not falter. In 1909, the company began constructing a four-floor factory and office building that would become the first building in New England made of reinforced concrete.
Between 1896 and 1909, he was granted 45 patents on a wide variety of electrical products, and the company's product lines were continuously expanded. Catalogue No. 17 was published in March 1917. The catalogue had more than 100 pages and listed more than a thousand products. In bulb sockets alone, the company manufactured 277 different types and sizes. Hubbell's toggle-action light switch, which incorporated a "quick make or break" feature to meet the rigid requirements of Underwriters' Laboratories (UL), was replacing the former two-button push switch. Hubbell designed a "Loxin" mechanism which fit into any standard socket and locked the bulb in place. Falling lightbulbs no longer endangered streetcar passengers, and overly thrifty commuters had to find a new source of replacement bulbs for home use. For the home, the company developed a system for lighting fixture connections called "Elexit" which allowed the homeowner to install most fixtures without hiring an electrician.
The company's first era ended when its founder, Harvey Hubbell II, died on December 17, 1927. He was succeeded as president of the company by his son, Harvey Hubbell III. Twenty-six years old when he succeeded his father, Harvey Hubbell III had already spent years working in the business. That early experience with electrical equipment engineering and the discipline of production was to stand the company in good stead in the decades to come.
Harvey Hubbell III soon showed that he had inherited his father's twin acumen for product innovation and business development. In product innovation, he devised the company's lines of Twist-Lock industrial connectors with new 2-, 3-, and 4-wire devices of various ratings, designed a whole new series of locking connectors for industrial use which he named "Hubbellock", and introduced heavy-duty, circuit-breaking devices. The company played a large part in the war effort by meeting the demand for electrical components and systems to power the nation's industries and by developing products for the special applications needed by the military. These included components for military vehicle electrical circuits, battery-charging systems for M-4 tanks, power jacks for test meters, vacuum tube sockets for radio communications, and a line of electrical and electronic connectors for aircraft. The company's years of experience in building devices reliable enough for industrial use was a valuable asset in the production of products that could perform under rugged battlefield conditions. A second plant was opened in Lexington, Kentucky, in order to meet the demand and as a safety measure since the main plant in Bridgeport was considered vulnerable to air or sea attack.
Hubbell Inc. assisted Allied efforts during World War II by manufacturing military vehicle electrical circuits, battery-charging systems for M4 Sherman tanks, power jacks for test meters, vacuum tube sockets for radio communications, and a line of electrical and electronic connectors for aircraft.
Hubbell had been one of the first to manufacture flush toggle switches for alternating current only. The first Safety Receptacle was designed and produced as were the original "grounding only" devices which helped to set the standards for the industry. And while Hubbell was busy on land, the company found new opportunities at sea. In 1952, the ocean liner "SS United States" was launched in Newport News, Virginia. Queen of the seas for many years, the ship was completely fitted with Hubbell wiring devices designed expressly for narrow stateroom partitions and to withstand the effects of salt air. An ardent yachtsman himself, Harvey Hubbell III designed a complete family of corrosion resistant devices including both on-board and dockside equipment for the expanding pleasure boat industry. Familiar sights at marinas today, these first products were so successful that alternative designs were produced for many industrial applications where corrosive atmospheres and materials posed challenges for standard wiring devices.
The company's sales in new products and continuing lines increased proportionately to these successes, but more was to come as Harvey Hubbell Incorporated added diversification. Beginning in 1960, the company entered a new period of rapidly expanding growth in both sales and income. Much of the growth resulted from the company's internal product development, a longstanding Hubbell tradition, and a source that expanded under the industry leadership of Harvey Hubbell III and other Hubbell engineers. A second source of growth was acquisition: from 1960 onwards, Hubbell Incorporated has acquired many companies in the power, electrical, and lighting sectors. Hubbell Incorporated has grown to be an international manufacturer of electrical and electronic products for a broad range of non-residential and residential construction, industrial, and utility applications.
In 2010, the company moved its headquarters from Orange, Connecticut to Shelton, Connecticut.
In March 2024, Hubbell was named one of Ethisphere Institute's World's Most Ethical Companies of the Year for the fourth time. In 2024, 136 honourees from 20 countries and 44 industries were recognised.
In the first quarter of 2024, Hubbell completed the sale of Progress Lighting, which manufactures decorative, recessed and energy-efficient lighting solutions. In 2023, Progress Lighting had revenue of $187.1 million (Hubbell Lighting had revenue of $515 million in 2022) and the sale amounted to $131 million. With this deal, Hubbell completed its exit from the lighting industry, retaining only ownership of the Scottish niche brand Chalmit Lighting, which specializes in LED hazardous area solutions.
Acquisitions
Operations
Hubbell designs, manufactures, and sells various products under two major segments: Electrical Solutions and Utility Solutions.
The utility segment markets products under the following brands:
Aclara
Chance
Anderson
PenCell
Fargo
Hubbell
Polycast
Opti-loop Design
Quazite
Quadri*sil
Trinetics
Reuel
Electro Composites
USCO
CDR
RFL Design
Hot Box
PCORE
Delmar
Turner Electric
EMC
Longbow
Ohio Brass
Meramec
Reliaguard
Greenjacket
Armorcast
Beckwith Electric
Continental
R.W. Lyall
Gas Breaker
AEC
Ripley
The electrical solutions division markets products under the following brands:
Hubbell
BellRaco
Gleason Reel
ACME Electric
Kellems
TayMac
Hipotronics
Powerohm
EC&M Design
Bryant
Wiegmann
AccelTex Solutions
iDevices
Progress Lighting Design
Burndy
Killark
GAI-Tronics
Connector Products
Austdac
CMC
Hawke
Chalmit
PCX
References
1888 establishments in Connecticut
Companies based in Fairfield County, Connecticut
Companies listed on the New York Stock Exchange
Electrical equipment manufacturers
Electronics companies established in 1888
Electronics companies of the United States
Manufacturing companies based in Connecticut
Shelton, Connecticut | Hubbell Incorporated | Engineering | 2,177 |
40,807,085 | https://en.wikipedia.org/wiki/Academy%20of%20Military%20Engineering%20of%20Guadalajara | The Academy of Military Engineering of Guadalajara () was a military academy of the Spanish Army. It was located in Guadalajara, Spain and operated from 1833 to 1932.
The academy specialized in the training of military engineers and was recognized for its focus on technological and scientific education. In 1932, it was merged with the Artillery Academy (Academia de Artilleros), resulting in its relocation to Segovia.
This institution played a significant role in the professional development of military engineers within the Spanish Army during its operation.
The Academy was located in the Montesclaros Palace, in the west of the city, until a 1924 fire destroyed part of the premises and an important collection of models, documents, books, and artworks. Between 1924 and its final move, its activities were continued in the palace's annex buildings that today serve as the General Military Archive of Guadalajara (Archivo General Militar de Guadalajara) and the Palace of Antonio de Mendoza.
A total of 115 graduating classes were trained at the academy, producing 2,213 engineering officers. Some of these officers contributed to the early development of Spanish military aeronautics. Notable individuals who served as instructors or studied at the institution include Mariano Barberán, Eduardo Barrón, Alejandro Goicoechea, Emilio Herrera Linares, Alfredo Kindelán, José Ortiz Echagüe, Carlos Faraudo and Pedro Vives Vich.
History
Origins
The academy began operations on September 1, 1803, in the former Basilios College of Alcalá de Henares. Its establishment was among the key objectives of a series of reforms initiated by General Engineer José Urrutia de las Casas and approved by Charles IV.
The academy’s curriculum included four annual courses: one preparatory year followed by three specialized sessions. The preparatory course covered Algebra, Differential and Integral Calculus, Hydrodynamics, and Fortification. The second year addressed Artillery, Mines, Siege and Defense of Fortifications, Encampment Organization, and Strategy. The third year focused on Optics, Perspective, Spherical Trigonometry, Geography, Astronomy, Topography, and Civil Architecture. Instruction was further supplemented by drawing classes and weapons training.
During the War of Independence, professors and students of the Academy temporarily relocated from Alcalá to Cádiz, where a provisional academy operated from 1811 to 1814. Afterward, the academy was reestablished in Alcalá de Henares. In 1820, its members aligned with the liberal cause; in 1823, facing the advancing forces of the Duke of Angoulême, they moved to Granada, later continuing to Málaga to escape the threat posed by the troops of the Hundred Thousand Sons of St. Louis. On September 27, 1823, the Regency issued an order dissolving the academy. The following year, on April 23, 1824, King Ferdinand VII issued another order, establishing the General Military College in Segovia to replace the defunct academy.
In 1826, General Engineer Ambrosio de la Cuadra secured the issuance of a Royal Order, dated August 20, which led to the reopening of the Academy of Engineers in Madrid. From that time until 1833, the academy’s headquarters moved periodically among the towns of Ávila, Talavera de la Reina, and Arévalo.
The Academy in Guadalajara
By a Royal Order dated September 13, 1833, the Academy of Engineers was permanently established in Guadalajara. The institution occupied the former Royal Cloth Factory, located in the Montesclaros Palace. The facility’s open, adaptable interior spaces accommodated both the academy’s instructional and administrative functions. A nearby area in the Coquín ravine provided space for additional activities. The Montesclaros Palace was an older structure, shaped by various expansions and renovations; the most recent occurred in 1778 under the direction of the architect Diego García.
Between 1837 and 1839, following the instability caused by the First Carlist War, the academy relocated its classrooms to Madrid. Upon returning to Guadalajara in 1839, new regulations were introduced.
Between 1843 and 1860, when Antonio Remón y Zarco del Valle served as General Engineer, the academy experienced its most productive period. During these years, faculty delegations frequently visited equivalent institutions in allied countries. These visits facilitated exchanges of ideas, incorporation of new theories, and the acquisition of updated texts and precision instruments for the academy’s curriculum.
During this period, concerns emerged regarding the academy’s continued presence in Guadalajara. In 1864, due to visible deterioration in sections of the renovated Montesclaros Palace, a commission was formed to assess the extent of structural problems. The findings prompted proposals for a new academy building, not only in Guadalajara, but also in Madrid and Zaragoza. In response, the Guadalajara City Council initiated efforts to maintain the institution locally, ultimately securing a Royal Order on May 28, 1867, ensuring its continuation in the city.
Construction work, overseen by engineering officer Juan Puyol, lasted from November 14, 1867, to December 24, 1869. During this interval, students continued their training at the adjacent San Carlos barracks. To finance the project, both the Provincial Council of Guadalajara and the City Council contributed significant funds, amounting to 110,000 escudos.
In 1879, another renovation project began, focusing on the 18th-century structures atop the Coquín ravine, which were partially supported by the remnants of the medieval city wall. Engineer Commander Federico Vázquez Landa oversaw the work, designing a pavilion modeled after the architectural style of the papal palaces in Avignon and reconstructing a section of the city wall with certain Mudéjar-influenced features.
In 1888, the academy expanded its facilities with the construction of a riding school, based on a design prepared in 1881 by then-Captain José Marvá y Mayer. This structure remains in existence.
In 1905, a new renovation was initiated, focusing on the main façade. The proposal involved removing the mezzanine beneath the roof to create taller rooms and adjusting the enclosure wall to feature vertically proportioned windows. Captain Ramón Valcárcel López-Espila adopted a historicist approach with classical tendencies, incorporating various lintel designs, simulated ashlar masonry, giant pilasters, balustrades, and a newly proportioned tower.
By 1909, the renovation work provided the academy’s headquarters in the Montesclaros Palace with a new appearance. Its updated design was comparable in style to prominent civil institutions in Guadalajara, as well as to the architectural approaches employed by Ricardo Velázquez Bosco in projects commissioned by the Duchess of Sevillano.
Fire and Relocation
On the night of February 9, 1924, a fire severely damaged the Montesclaros Palace, sparing only the riding arena and the buildings situated along the Coquín ravine.
During the fire, the Construction, Chemistry, Mineralogy, Photography, and Physics rooms and their respective collections were lost. These included measuring and precision instruments, specialized models, mineral and fossil collections, and the institution’s historical archive. In addition, the Throne Room—with its extensive series of Military Engineers’ portraits—was destroyed, as was the library housing more than 28,000 volumes. Among these volumes were numerous incunabula originating from the historic Academy of Mathematics of Barcelona.
On the day following the fire, the President of the Council of Ministers, Miguel Primo de Rivera, and other members of the Military Directory visited the site. On Monday, at one o’clock in the afternoon, King Alfonso XIII arrived and expressed regret over the extensive damage, assuring the mayor that the burned building would be reconstructed. Ernesto Villar Peralta submitted the reconstruction plan on April 10 of that year. However, the work was ultimately limited to certain expansions in the orchard area. Some classrooms were accommodated in the rooms of the Palace of Antonio de Mendoza, which at that time housed the Provincial Council and the Secondary Education Institute.
In 1928, a new order reorganized military education, requiring individuals seeking training as engineers to split their study period between Zaragoza and Guadalajara. Following the proclamation of the Second Republic, the order of July 4, 1931, abolished the General Academy and merged the Academy of Engineers into the Artillery Academy, based in Segovia. This decision led to the transfer of the Academy of Engineers to Segovia.
After the Spanish Civil War, the Guadalajara City Council attempted, without success, to secure the return of the Academy of Engineers. The institution was subsequently established in Burgos and, in 1968, was permanently relocated to Hoyo de Manzanares. Meanwhile, the Infantry Academy occupied facilities belonging to the Foundation of San Diego de Alcalá in Guadalajara while its original location in Toledo underwent reconstruction.
References
Province of Guadalajara
Military academies of Spain
Educational institutions established in 1833
1833 establishments in Spain
1932 disestablishments in Spain
Defunct military academies
Military engineering | Academy of Military Engineering of Guadalajara | Engineering | 1,755 |
2,941,630 | https://en.wikipedia.org/wiki/Primary%20deviance | Primary deviance is the initial stage in defining deviant behavior. Prominent sociologist Edwin Lemert conceptualized primary deviance as engaging in the initial act of deviance. This is very common throughout society, as everyone takes part in basic form violations. Primary deviance does not result in a person internalizing a deviant identity, so one does not alter their self-concept to include this deviant identity. It is not until the act becomes labeled or tagged, that secondary deviation may materialize. According to Lemert, primary deviance is the acts that are carried out by the individual that allows them to carry the deviant label.
Influences on primary deviant behavior
Family and home life
Parental support and the influence that parents have on their children are among the highest contributors to adolescent behavior. This is the primary stage in which behaviors, morals and values are learned and adopted. The guidance from parents is intended to mold and shape behaviors that will qualify children to function properly in society. Praise, love, affection, encouragement and many other aspects of positive reinforcement are some of the largest components of parental support. However, this is not all it takes to prevent deviant behaviors from forming and occurring. Parents must also enforce "effective discipline, monitoring, and problem-solving techniques." Children who come from homes where parents do not reinforce positive behaviors and do not punish deviant behaviors appropriately are more likely to engage in deviant behaviors. This type of bond is considered weak and causes the child to act out and become deviant.
Peers
Strong parental bonds are essential to the social group that the child will choose to associate with. When there is little to no control in the home, no positive reinforcement from parents, and the child does not have positive feelings towards schooling and education, they are more likely to associate with deviant peers. When associating with deviant peers, they are more accepting of deviant behaviors than if they had chosen another social group. This is why it is vital that the parent-child bond be strong: it has an ultimate influence on the peers children choose and on whether they choose to engage in primary deviant behaviors as juveniles.
Sociological contributors
Frank Tannenbaum
Frank Tannenbaum theorized that primary deviant behaviors may be innocent or fun for those committing the acts, but can become a nuisance and be viewed as some form of delinquency by their parents, educators and even law enforcement. Tannenbaum distinguished two different types of deviancy. The first is the initial act, which the child considers innocent but which adults label as deviant; this label is called "primary deviancy". The second comes after the child has been initially labeled, when they graduate to secondary deviance, in which both the adult and the child agree that the child is a deviant. Tannenbaum stated that the "over-dramatization" of these deviant acts can cause a person to be labeled and to accept the label of deviant. Because they accept this label, they eventually graduate from being a primary deviant to a secondary deviant, thus committing greater crimes.
Theoretical approaches
Labeling Theory
The most prevalent theory as it relates to primary deviance was developed in the early 1960s by a group of sociologists and was titled "labeling theory". Labeling theory is a variant of symbolic interactionism. Symbolic interactionism is "a theoretical approach in sociology developed by George Herbert Mead. It emphasizes the roles of symbols and language as core elements of human interaction." According to labeling theorists, labels are applied by those put in place to keep law and order, such as police officers and judges; they are the people who typically label those who have violated some law or another. The label "deviant" does not come from the person who has committed the act, but from someone who is more powerful than the person being labeled. This theory has been heavily criticized for not being able to explain what causes deviance early on. However, labeling theory's main focus is to explain how labeling relates to, and can cause, secondary deviance.
Anomie theory
Robert Merton developed the anomie theory, which was dedicated specifically to the causes of deviance. The word anomie was derived from the "godfather of sociology", Émile Durkheim. Anomie is "the breakdown of social norms that results from society's urging people to be ambitious but failing to provide them with legitimate opportunities to succeed". Merton theorized that society places substantial emphasis on the importance of achieving success. However, this goal is not attainable for people of all social classes. Due to the absence of resources for people of lower social classes to achieve a great level of success, Merton theorized that people are forced to commit deviant acts. Merton labeled this deviant behavior "innovation".
Social learning theory
Social learning theory holds that deviant behavior is learned through social interactions with other people. Edwin Sutherland developed an explanation for this theory that describes how one learns deviant behavior. This explanation is called differential association.
Differential association
Differential association theorizes that "If an individual associates with people who hold deviant ideas more than with people who embrace conventional ideas, the individual is likely to become deviant." The person presenting the deviant act is not always necessarily the deviant. The emphasis of differential association is that if someone is presented with the opportunity, they will likely commit the act. Although someone may associate with both deviants and those who hold conventional ideas, if the deviant contacts outweigh the conventional contacts, then deviancy is likely to occur. Differential association's key point refers particularly to the association aspect; the association itself is theorized to be "the cause of deviance".
Example of primary deviance
Charles Manson
One person who was labeled as deviant was the infamous murderer Charles Manson. Manson was born to 16-year-old Kathleen Maddox on November 12, 1934, in Cincinnati, Ohio. Manson's father, Colonel Scott, left Manson's mother to raise him alone. When Charles was seven years old, he was sent to live with his aunt and uncle in McMechen, West Virginia, after his mother was sentenced to five years in prison for armed robbery. Living with his aunt and uncle, Manson was given a more stable life that could have allowed him to become a positive contributor to society. However, the absence of his mother and his yearning for motherly love and affection caused Manson to indulge in primary deviant behavior at a young age, which ultimately manifested into secondary deviance as he became older.
Following the counsel of another uncle, a "mountain man" who lived in the mountains of Kentucky, Manson labeled himself a rebel. Manson's first act of deviancy came at the age of nine, when he set his school on fire and was sent to reform school. Throughout his adolescence, Manson was sent to several reform schools in hopes of rehabilitating him. Between 1942 and 1947, after her release from prison, Manson's mother was unable to properly care for him and was unsuccessful in finding him a foster home. She turned him over to the courts, which placed him in an all-boys school called the Gibault School for Boys. Ten months later, Manson ran away from the Gibault School for Boys in hopes of rekindling the relationship he had longed for with his mother. After she rejected him, Manson turned to a life of deviancy. Manson thrived on high-consensus deviant acts such as burglary and theft. Manson was then sent to Father Flanagan's Boys' Home in 1949. After four days at Father Flanagan's Boys' Home, Manson ran away and pursued other deviant acts, such as auto theft, burglary, and armed robbery. Manson ran away 18 times from the National Training School for Boys, where he alleged he was molested and beaten. This behavior in Manson's early years caused the label of deviant to shadow him through his adult life, where he eventually graduated to secondary deviance and went on to lead the dangerous cult known as the Manson Family.
References
Criminology
Deviance (sociology)
Sociological theories | Primary deviance | Biology | 1,663 |
31,636,726 | https://en.wikipedia.org/wiki/Pansteatitis | Pansteatitis, or yellow fat disease, is a physiological condition in which the body fat becomes inflamed.
Presentations
The condition has been found in cats, fish, herons, terrapins and Nile crocodiles, and in piscivores such as otters, cormorants, Pel's fishing-owls and fish eagles. The disorder is also regularly found in captive-bred animals fed high-fish diets, such as mink, pigs and poultry. It shows as a rubber-like hardening of fat reserves, which then become unavailable for normal metabolism, resulting in extreme pain, loss of mobility and death.
Causes
It is thought to be brought about by any or a combination of a number of factors which include:
Vitamin E deficiency
Microcystin and nodularin poisoning, via inhibition of protein phosphatases
Heavy metals and other fat-accumulated pollutants such as DDT, PCBs, PCDDs and brominated flame retardants
Ingestion of affected animals
Pathogens as yet unidentified
Incidents
In 2008 about 170 crocodile deaths in the Olifants Gorge on the Olifants, and in the Letaba Rivers in the eastern part of the Kruger National Park in South Africa, alerted rangers and researchers to a problem that may eventually reach epidemic proportions. Dead and dying bottom-feeding catfish (Clarias gariepinus) that had ingested toxic pollutants such as heavy metals had become an easy source of food for the crocodiles, with a resulting transfer of the toxins. With the onset of winter and lower temperatures, metabolism switches to fat reserves, at which time mortality peaks. These particular toxins are from the upper reaches of the river, originating from the industrial and mining complex in the Witbank and Middelburg area. Earlier, in 2007, at the Loskop Dam, higher in the Olifants River, crocodile deaths were linked to a mass die-off of fish after sewage pollution caused cyanobacterial blooms. The affected crocodiles had consumed masses of dead and dying fish. Cyanobacterial blooms (Anabaena spp.) are common in the stagnant water of dams, but do not occur in the flowing water of rivers. High autumnal mortality among migratory herons, caused by cyanobacterial blooms, is seen regularly in brackish impoundments at Chesapeake Bay in the United States.
The Massingir Dam in Mozambique, and just downstream of the Olifants Gorge, was constructed in the 1970s, but the country's civil war delayed installation of the sluice gates. The dam wall was raised and sluice gates installed in 2006, causing sediment to back up into the 8 km-long Olifants Gorge.
See also
Vitamin E deficiency
References
External links
Crocodile deaths in the Kruger National Park
Metabolic disorders
Nutritional diseases
Carnivoran diseases
Bird diseases
Reptile diseases | Pansteatitis | Chemistry | 591 |
58,319 | https://en.wikipedia.org/wiki/Serial%20number | A serial number is a unique identifier used to uniquely identify an item, and is usually assigned incrementally or sequentially.
Despite being called serial "numbers", they do not need to be strictly numerical and may contain letters and other typographical symbols, or may consist entirely of a character string.
Applications of serial numbering
Serial numbers identify otherwise identical individual units, thereby serving various practical uses. Serial numbers are a deterrent against theft and counterfeit products, as they can be recorded, and stolen or otherwise irregular goods can be identified. Banknotes and other transferable documents of value bear serial numbers to assist in preventing counterfeiting and tracing stolen ones.
They are valuable in quality control, as once a defect is found in the production of a particular batch of product, the serial number will identify which units are affected. Some items with serial numbers are automobiles, firearms, electronics, and appliances.
Smartphones and other smart devices
In smartphones, serial numbers are extended to the integrated components in addition to the electronic device as a whole, a practice also known as serialization. This gives unique individual parts such as the screen, battery, chip and camera a separate serial number, which is queried by the software before the part is released for use. This practice by manufacturers limits the use of replacement parts in electronic devices.
Serial numbers for intangible goods
Serial numbers may be used to identify individual physical or intangible objects; for example, computer software or the right to play an online multiplayer game. The purpose and application are different. A software serial number, otherwise called a product key, is usually not embedded in the software but is assigned to a specific user with a right to use the software. The software will function only if a potential user enters a valid product code. The vast majority of possible codes are rejected by the software. If an unauthorised user is found to be using the software, the legitimate user can be identified from the code. It is usually not impossible, however, for an unauthorised user to create a valid but unallocated code either by trying many possible codes, or reverse engineering the software; use of unallocated codes can be monitored if the software makes an Internet connection.
Other uses of the term
The term serial number is sometimes used for codes which do not identify a single instance of something. For example, the International Standard Serial Number or ISSN used on magazines, journals and other periodicals, an equivalent to the International Standard Book Number (ISBN) applied to books, is assigned to each periodical. It takes its name from the library science use of the word serial to mean a periodical.
Certificates and certificate authorities (CA) are necessary for widespread use of cryptography. These depend on applying mathematically rigorous serial numbers and serial number arithmetic, again not identifying a single instance of the content being protected.
Military and government use
The term serial number is also used in military formations as an alternative to the expression service number. In air forces, the serial number is used to uniquely identify individual aircraft and is usually painted on both sides of the aircraft fuselage, most often in the tail area, although in some cases the serial is painted on the side of the aircraft's fin/rudder(s). Because of this, the serial number is sometimes called a tail number.
In the UK Royal Air Force (RAF) the individual serial takes the form of two letters followed by three digits, e.g., BT308—the prototype Avro Lancaster, or XS903—an English Electric Lightning F.6 at one time based at RAF Binbrook. During the Second World War RAF aircraft that were secret or carrying secret equipment had "/G" (for "Guard") appended to the serial, denoting that the aircraft was to have an armed guard at all times while on the ground, e.g., LZ548/G—the prototype de Havilland Vampire jet fighter, or ML926/G—a de Havilland Mosquito XVI experimentally fitted with H2S radar. Prior to this scheme the RAF, and predecessor Royal Flying Corps (RFC), utilised a serial consisting of a letter followed by four figures, e.g., D8096—a Bristol F.2 Fighter currently owned by the Shuttleworth Collection, or K5054—the prototype Supermarine Spitfire. The serial number follows the aircraft throughout its period of service.
In 2009, the U.S. FDA published draft guidance for the pharmaceutical industry to use serial numbers on prescription drug packages. This measure is intended to enhance the traceability of drugs and to help prevent counterfeiting.
Serial number arithmetic
Serial numbers are often used in network protocols. However, most sequence numbers in computer protocols are limited to a fixed number of bits, and will wrap around after sufficiently many numbers have been allocated. Thus, recently allocated serial numbers may duplicate very old serial numbers, but not other recently allocated serial numbers. To avoid ambiguity with these non-unique numbers, "Serial Number Arithmetic", defines special rules for calculations involving these kinds of serial numbers.
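The cited "Serial Number Arithmetic" rules can be sketched as follows. This is an illustrative Python implementation of the comparison and wrap-around addition rules for a fixed bit width, not code taken from the referenced specification.

```python
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)
MOD = 2 ** SERIAL_BITS

def serial_add(s, n):
    """Add n (0 <= n < 2**(SERIAL_BITS-1)) to serial s, wrapping modulo 2**SERIAL_BITS."""
    if not 0 <= n < HALF:
        raise ValueError("increment out of range")
    return (s + n) % MOD

def serial_lt(a, b):
    """True if serial a is 'less than' serial b under the wrap-aware rules."""
    return (a < b and b - a < HALF) or (a > b and a - b > HALF)

# The largest 32-bit serial wraps to 0, which is then treated as the newer value
# even though it is numerically smaller.
latest = serial_add(4294967295, 1)    # -> 0
print(serial_lt(4294967295, latest))  # True
```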
Lollipop sequence number spaces are a more recent and sophisticated scheme for dealing with finite-sized sequence numbers in protocols.
See also
(serial identifiers for databases)
– one of the first machines to sport a unique serial number
Sources
Elz, R., and R. Bush, "Serial Number Arithmetic", Network Working Group, August 1996.
Plummer, William W. "Sequence Number Arithmetic" . Cambridge, Massachusetts: Bolt Beranek and Newman, Inc., 21 September 1978.
References
External links
ISSN International Centre
| Serial number | Mathematics | 1,139 |
55,397,876 | https://en.wikipedia.org/wiki/NGC%204606 | NGC 4606 is a spiral galaxy located about 55 million light-years away in the constellation of Virgo. NGC 4606 was discovered by astronomer William Herschel on March 15, 1784. It has a disturbed stellar disk suggesting the actions of gravitational interactions. NGC 4607 may be a possible companion of NGC 4606. However, their redshifts differ by about 600 km/s, making it unlikely that they are a gravitationally bound pair. NGC 4606 is a member of the Virgo Cluster.
See also
List of NGC objects (4001–5000)
References
External links
Spiral galaxies
Virgo (constellation)
4606
42516
7839
Astronomical objects discovered in 1784
Virgo Cluster | NGC 4606 | Astronomy | 141 |
11,421,382 | https://en.wikipedia.org/wiki/Anti-Q%20RNA | Anti-Q RNA (formerly Qa RNA) is a small ncRNA from the conjugal plasmid pCF10 of Enterococcus faecalis. It is coded in cis to its regulatory target, prgQ, but can also act in trans. Anti-Q is known to interact with nascent prgQ transcripts to allow formation of an intrinsic terminator, or attenuator, thus preventing transcription of downstream genes. This mode of regulation is essentially the same as that of the countertranscript-driven attenuators that control copy number in pT181, pAMbeta1 and pIP501 and related Staphylococcal plasmids.
Anti-Q is transcribed from the same segment of DNA as prgQ, except from the opposite strand, making it perfectly complementary to a portion of prgQ. Further experiments have experimentally confirmed the original consensus secondary structure and demonstrated that only certain regions of Anti-Q interact with prgQ.
Anti-Q is derived from the 5' end of a longer transcript. The 3' end of this transcript encodes PrgX, a repressor of prgQ transcription.
References
External links
Non-coding RNA | Anti-Q RNA | Chemistry | 251 |
333,088 | https://en.wikipedia.org/wiki/Esalen%20Institute | The Esalen Institute, commonly called Esalen, is a non-profit American retreat center and intentional community in Big Sur, California, which focuses on humanistic alternative education. The institute played a key role in the Human Potential Movement beginning in the 1960s. Its innovative use of encounter groups, a focus on the mind-body connection, and their ongoing experimentation in personal awareness introduced many ideas that later became mainstream.
Esalen was founded by Michael Murphy and Dick Price in 1962. Their intention was to support alternative methods for exploring human consciousness, what Aldous Huxley described as "human potentialities". Over the next few years, Esalen became the center of practices and beliefs that make up the New Age movement, from Eastern religions/philosophy, to alternative medicine and mind-body interventions, from transpersonal to Gestalt practice.
Price ran the institute until he died in a hiking accident in 1985. In 2012, the board hired professional executives to help raise money and keep the institute profitable. Until 2016, Esalen offered over 500 workshops yearly in areas including Gestalt practice, personal growth, meditation, massage, yoga, psychology, ecology, spirituality, and organic food. In 2016, about 15,000 people attended its workshops.
In February 2017, the institute was cut off when Highway 1 was closed by a mud slide on either side of the hot springs. It closed its doors, evacuated guests via helicopter, and was forced to lay off 90% of its staff through at least July, when they reopened with limited workshop offerings. It also decided to revamp its offerings to include topics more relevant to a younger generation. As of July 2017, due to the limited access resulting from the road closures, the hot springs are only open to Esalen guests.
Early history
The grounds of the Esalen Institute were first home to a Native American tribe known as the Esselen, from whom the institute adopted its name. Carbon dating tests of artifacts found on Esalen's property have indicated a human presence as early as 2600 BCE.
The location was homesteaded by Thomas Slate on September 9, 1882, when he filed a land patent under the Homestead Act of 1862. The settlement became known as Slates Hot Springs. It was the first tourist-oriented business in Big Sur, frequented by people seeking relief from physical ailments. In 1910, the land was purchased by Henry Murphy, a Salinas, California, physician. The official business name was "Big Sur Hot Springs" although it was more generally referred to as "Slate's Hot Springs".
Founding
Stanford grads meet
Michael Murphy and Dick Price both attended Stanford University in the late 1940s and early 1950s. Both had developed an interest in human psychology and earned degrees in the subject in 1952. Price was influenced by a lecture he heard Aldous Huxley give in 1960 titled "Human Potentialities". After graduating from Stanford, Price attended Harvard University to continue studying psychology. Murphy, meanwhile, traveled to Sri Aurobindo's ashram in India where he resided for several months before returning to San Francisco.
Price's parents involuntarily committed him to a mental hospital for a year, ending on November 26, 1957. He hated the experience and thought he would like to create an environment where people could explore new ideas and thoughts without judgment and influence from the outside world. In May 1960, Price returned to San Francisco and lived at the East-West House with Taoist teacher Gia-Fu Feng. That year he met fellow Stanford University graduate Michael Murphy at Haridas Chaudhuri’s Cultural Integration Fellowship where Murphy was in residence. They met at the suggestion of Frederic Spiegelberg, a Stanford professor of comparative religion and Indic studies, with whom both had studied.
By then they had both dropped out of their graduate programs (Price at Harvard and Murphy at Stanford), and had served time in the military. Their similar experiences and interests were the basis for the partnership that created Esalen. Inspired by Buddhist practices, and based on his own understanding of Taoism, Price developed his teachings. He took what Fritz Perls had taught him and created a "Gestalt Awareness" process that is still taught and followed by many today.
Lease property
Price and Murphy wanted to create a venue where non-traditional workshops and lecturers could present their ideas free of the dogma associated with traditional education. The two began drawing up plans for a forum that would be open to ways of thinking beyond the constraints of mainstream academia while avoiding the dogma so often seen in groups organized around a single idea promoted by a charismatic leader. They envisioned offering a wide range of philosophies, religious disciplines and psychological techniques.
In 1961, they went to look at property owned by the Murphy family at Slates Hot Springs in Big Sur. It included a run-down hotel occupied in part by members of a Pentecostal church. The property was patrolled by gun-toting Hunter S. Thompson. Gay men from San Francisco filled the baths on the weekends.
Henry Murphy's widow and Michael's grandmother Vinnie "Bunnie" MacDonald Murphy, who owned the property, lived away in Salinas. She had previously refused to lease the property to anyone, even turning down an earlier request from Michael. She was afraid her grandson was going to "give the hotel to the Hindus," Murphy later said. Not long after, Thompson attempted to visit the baths with friends and got into a fistfight after antagonizing some of the gay men present. The men almost tossed him over the cliff. Murphy's father, a lawyer, finally persuaded his mother to allow her grandson to take over and she agreed to lease the property to them in 1962. The two men used capital that Price obtained from his father, who was a vice-president at Sears. They incorporated their business as a non-profit named Esalen Institute in 1963.
Develop counterculture workshops
Murphy and Price were assisted by Spiegelberg, Watts, Huxley and his wife Laura, as well as by Gerald Heard and Gregory Bateson. They modeled the concept of Esalen partially upon Trabuco College, founded by Heard as a quasi-monastic experiment in the mountains east of Irvine, California, and later donated to the Vedanta Society. Their intent was to provide "a forum to bring together a wide variety of approaches to enhancement of the human potential... including experiential sessions involving encounter groups, sensory awakening, gestalt awareness training, related disciplines." They stated that they did not want to be viewed as a "cult" or a new church but that it was to be a center where people could explore the concepts that Price and Murphy were passionate about. The philosophy of Esalen lies in the idea that "the cosmos, the universe itself, the whole evolutionary unfoldment is what a lot of philosophers call slumbering spirit. The divine is incarnate in the world and is present in us and is trying to manifest," according to Murphy.
Alan Watts gave the first lecture at Esalen in January 1962. Gia-fu Feng joined Price and Murphy, along with Bob Breckenridge, Bob Nash, Alice and Jim Sellers, as the first Esalen staff members. In the middle of that same year Abraham Maslow, a prominent humanistic psychologist, just happened to drive into the grounds and soon became an important figure at the institute. In the fall of 1962, they published a catalog advertising workshops with such titles as "Individual and Cultural Definitions of Rationality," "The Expanding Vision" and "Drug-Induced Mysticism". Their first seminar series in the fall of 1962 was "The Human Potentiality," based on a lecture by Huxley.
Fritz Perls residency
In 1964, Fritz Perls began what became a five-year long residency at Esalen, leaving a lasting influence. Perls offered many Gestalt therapy seminars at the institute until he left in July 1969. Jim Simkin and Perls led Gestalt training courses at Esalen. Simkin started a Gestalt training center on property next door that was later incorporated into Esalen's main campus.
When Perls left Esalen he considered it to be "in crisis again". He saw young people without any training leading encounter groups and he feared that charlatans would take the lead. Later, Grogan would write that Perls’ practice at Esalen had been ethically "questionable", and according to Kripal, Perls insulted Abraham Maslow.
Gestalt practice developed
Dick Price became one of Perls' closest students. Price managed the institute and developed his own form he called Gestalt practice, which he taught at Esalen until his death in a hiking accident in 1985. Michael Murphy lived in the San Francisco Bay Area and wrote non-fiction books about Esalen-related topics, as well as several novels.
Leads counterculture movement
Esalen gained popularity quickly and started to regularly publish catalogs full of programs. The facility was large enough to run multiple programs simultaneously, so Esalen created numerous resident teacher positions. Murphy recruited Will Schutz, the well-known encounter group leader, to take up permanent residence at Esalen. All this combined to firmly position Esalen in the nexus of the counterculture of the 1960s.
The institute gained increased attention in 1966 when several magazines wrote about it. George Leonard published an article in Look magazine about the California scene which mentioned Esalen and included a picture of Murphy. Time magazine published an article about Esalen in September 1967. The New York Times Magazine published an article by Leo E. Litwak in late December. Life also published an article about the resort. These articles increased the media and the public's awareness of the institute in the U.S. and abroad. Esalen responded by holding large-scale conferences in Midwestern and East Coast cities, as well as in Europe. Esalen opened a satellite center in San Francisco that offered extensive programming until it closed in the mid-1970s for financial reasons.
Programs and management
The institute continues to offer workshops about humanistic psychology, physical wellness, and spiritual awareness. The institute has also added workshops on permaculture and ecological sustainability. Other workshops cover a wide range of subjects including arts, health, Gestalt practice, integral thought, martial arts, massage, dance, mythology, philosophical inquiry, somatics, spiritual and religious studies, ecopsychology, wilderness experience, yoga, tai chi, mindfulness practice, and meditation. The institute was closed for the first half of 2017 and forced to drastically reduce staff. They also decided to revamp their offerings upon reopening to include topics more relevant to a younger generation.
Center for Theory and Research
In 1998, Esalen launched the Center for Theory and Research to initiate new areas of practice and action which foster social change and realization of the human potential. It is the research and development arm of Esalen Institute. Michael Cornwall, who previously worked in the institute's Schizophrenia Research Project at Agnews State Hospital, was conducting workshops titled the Alternative Views and Approaches to Psychosis Initiative at Esalen, inviting leaders in the field of psychosis treatment to attend.
Management changes
Esalen has been making changes to respond to internal and external factors. Dick Price was the key leader of the institute until his sudden death in a hiking accident in late 1985 brought about many changes in personnel and programming. Steven Donovan became president of the institute, and Brian Lyke served as general manager. Nancy Lunney became the director of programming, and Dick Price's son David Price served as general manager of Esalen beginning in the mid-1990s.
The baths were destroyed in 1998 by severe weather and were rebuilt at great expense, but this caused severe institutional stress. Afterward, Andy Nusbaum developed an economic plan to stabilize Esalen's finances.
In 2011, the institute commissioned the company Beyond the Leading Edge to conduct a Leadership Culture Survey to assess the quality of its leadership culture. The results were negative. The survey measured how well the leadership "builds quality relationships, fosters teamwork, collaborates, develops people, involves people in decision making and planning, and demonstrates a high level of interpersonal skill." In the "relating dimension" the survey returned a score of 18%, compared to a desired 88%. It also produced strongly dissonant scores in measures of community welfare, relating with interpersonal intelligence, clearly communicating vision, and building a sense of personal worth within the community. It ranked management as overly compliant and lacking authenticity. However, the survey found that Esalen closely matched its overall goal for customer focus.
Gordon Wheeler dramatically restructured Esalen management. These changes prompted Christine Stewart Price, the widow of Dick Price, to withdraw from the institute, and found an organization named the Tribal Ground Circle with the intention to preserve Dick Price's legacy.
Early leaders and programs
In the few years after its founding, many of the seminars like "The Value of Psychotic Experience" attempted to challenge the status quo. There were even Esalen programs that questioned the movement of which Esalen itself was a part—for instance, "Spiritual and Therapeutic Tyranny: The Willingness To Submit". There were also a series of encounter groups focused on racial prejudice.
Early leaders included many well-known individuals, including Ansel Adams, Gia-fu Feng, Buckminster Fuller, Timothy Leary, Robert Nadeau, Linus Pauling, Carl Rogers, Virginia Satir, B.F. Skinner, and Arnold Toynbee. Rather than merely lecturing, many leaders experimented with what Huxley called the non-verbal humanities: the education of the body, the senses, and the emotions. Their intention was to help individuals develop awareness of their present flow of experience, to express this fully and accurately, and to listen to feedback. These "experiential" workshops were particularly well attended and were influential in shaping Esalen's future course.
Staff residency
Because of Esalen's isolated location, its operational staff members have lived on site from the beginning and for many years collectively contributed to the character of the institute. The community has been steeped in a form of Gestalt practice that pervades all aspects of daily life, including meeting structures, workplace practices, and individual language styles. There is a preschool on site called the Gazebo, serving the children of staff, some program participants, and affiliated local residents.
Scholars in residence
Esalen has sponsored long-term resident scholars, including Gregory Bateson, Joseph Campbell, Stanislav Grof, Sam Keen, George Leonard, Fritz Perls, Ida Rolf, Virginia Satir, William Schutz, and Alan Watts.
Esalen Massage and Bodywork Association
Bodywork has always been a significant part of the Esalen experience. In the late 1990s, the Esalen Massage and Bodywork Association (EMBA) was organized as a semi-autonomous Esalen association to regulate Esalen massage practitioners.
Past initiatives and projects
Esalen Institute has sponsored many research initiatives, educational projects, and invitational conferences. The Big Sur facility has been used for these events, as well as other locations, including international sites.
Arts events
In 1964, Joan Baez led a workshop entitled "The New Folk Music" which included a free performance. This was the first of seven "Big Sur Folk Festivals" featuring many of the era's music legends. The 1969 concert included musicians who had just come from the Woodstock Festival. This event was featured in a documentary movie, Celebration at Big Sur, which was released in 1971.
John Cage and Robert Rauschenberg performed together at Esalen. Robert Bly, Lawrence Ferlinghetti, Allen Ginsberg, Michael McClure, Kenneth Rexroth (who led one of the first workshops), Gary Snyder and others held poetry readings and workshops.
In 1994, president and CEO Sharon Thom created an artist-in-residence program to provide artists with a two-week retreat in which to focus on works in progress. These artists interacted with the staff, offered informal gatherings, and staged performances on the newly created dance platform. Located next to the Art Barn, the dance platform was used by Esalen teachers for dance and martial arts. The platform was later covered by a dome and renamed the Leonard Pavilion after the late George Leonard, a past Esalen president and board member.
In 1995 and 1996, Esalen hosted two arts festivals which gathered together artists, poets, musicians, photographers and performers, including artist Margot McLean, psychotherapist James Hillman, guitarist Michael Hedges and Joan Baez. All staff members were allowed to attend every class and performance that did not interfere with their schedules. Arts festivals have since become a popular yearly event at Esalen.
Schizophrenia Research Project
Encouraged by Dick Price, the Schizophrenia Research Project was conducted over a three-year period at Agnews State Hospital in San Jose, California, involving 80 young men diagnosed with schizophrenia. Funded in part by Esalen Institute, the program was co-sponsored by the California Department of Mental Hygiene (later reorganized as the CMHSA) and the National Institute of Mental Health. It explored the thesis that the health of certain patients would permanently improve if their psychotic process was not interrupted by the administration of antipsychotic drugs. Julian Silverman was chief of research for the project; he also served as Esalen's general manager in the 1970s. The Agnews double-blind study was the largest first-episode psychosis research project ever conducted in the United States. It demonstrated that the young men given a placebo had a 75 percent lower re-hospitalization rate and much better outcomes than the men who received antipsychotic medication. These results were used as justification for medication-free programs in the San Francisco Bay Area. Esalen has recently begun to revive some of this interest in schizophrenia and psychosis, hosting the R.D. Laing Symposium and workshops on compassionately responding to psychosis.
Publishing
Starting in 1969, in association with Viking Press, the institute published a series of 17 books about Esalen-related topics, including the first edition of Michael Murphy's novel, Golf in the Kingdom (1971). Some of these books remain in print. In the mid-1980s, Esalen entered into a joint publishing arrangement with Lindisfarne Press to publish a small library of Russian philosophical and theological books.
Soviet–American Exchange Program
In 1979, Esalen began the Soviet–American Exchange Program (later renamed Track Two, an institute for citizen diplomacy). The initiative came at a time when Cold War tensions were at their peak, and the program was credited with substantial success in fostering peaceful private exchanges between citizens of the two superpowers. In the 1980s, Michael Murphy and his wife Dulce were instrumental in organizing the program with Soviet citizen Joseph Goldin, in order to provide a vehicle for citizen-to-citizen relations between Russians and Americans. In 1982, Esalen and Goldin pioneered the first U.S.–Soviet Space Bridge, allowing Soviet and American citizens to speak directly with one another via satellite. In 1988, Esalen brought Abel Aganbegyan, one of Mikhail Gorbachev's chief economic advisors, to the United States. In 1989, Esalen brought Boris Yeltsin on his first trip to the United States; although Yeltsin did not visit the Esalen facility in Big Sur, Esalen arranged meetings for him with then-President George H. W. Bush as well as many other leaders in business and government. Two former presidents of the exchange program were Jim Garrison and Jim Hickman. After Gorbachev stepped down and the Soviet Union was effectively dissolved, Garrison helped establish The State of the World Forum, with Gorbachev as its convening chairman. These successes led to other Esalen citizen diplomacy programs, including exchanges with China, an initiative to further understanding among Jews, Christians, and Muslims, and further work on Russian–American relations.
Prices and finances
2017 closure
On February 12, 2017, a series of mudslides and landslides closed Highway 1 in several locations south and north of the hot springs and caused Esalen to partially shut down. On February 18, 2017, shifting earth damaged a pier supporting the Pfeiffer Canyon Bridge north of Esalen and forced Caltrans to close Highway 1. Caltrans determined that the bridge was damaged beyond repair and announced an accelerated project to replace it by September. Following the bridge closure, Esalen was cut off and resorted to evacuating dozens of guests by helicopter. A landslide at Mud Creek south of the hot springs severely restricted vehicle access to the resort, and Esalen temporarily closed its doors. Then, on May 20, 2017, a new slide at Mud Creek closed Highway 1 for at least a year.
On June 20, Esalen announced that it would lay off 45 staff members through at least July, leaving only about 10 percent of its staff.
Esalen partially reopened on July 28, 2017, offering limited workshops, and planned to add more seminars after the Pfeiffer Canyon Bridge reopened in September 2017.
Attendance and costs
In 2012, 600 Esalen workshops were attended by more than 12,000 people. Topics ranged from sustainable business practices to hypnosis to "The Holy Fool: Crazy Wisdom From Van Gogh to Tina Fey and The Big Lebowski."
A weekend workshop, including the program, meals, and a place for a sleeping bag in a communal area, cost a minimum of $405 per person, while a couple could rent a private room for $730 per person. Week-long workshops began at $900, and couples were charged $1,700 per person to stay in a private room. In 2013, the institute charged participants in its month-long, residential licensed massage practitioner training programs $4,910, including room and board. In 1987, a weekend workshop along with a single room and meals cost $270, and a five-day workshop cost $530.
Revenue and expenses
In 2013, the institute reported revenue of $18,513,254 ($13,066,407 from programs) and, after expenses of $13,515,552, a net income of $4,997,702; that year it paid CEO Patricia McEntee $152,077. In 2014, it reported total revenue of $15,934,586, expenses totaling $14,472,201, and net income of $1,462,385; McEntee was paid $157,839.
The company spent nearly $10 million on renovations from 2014 to 2016, including $7.4 million to renovate the main lodge and add a cafe and bar, and $1.8 million on a six-room guesthouse. Only limited internet and cellular service is available, but Esalen is planning to make some of its workshops available to online participants.
Lease terms
The annual cost of its 87-year lease for the 27-acre site from the Vinnie A. Murphy Trust, which extends through 2049, was $344,704 in 2014. McEntee told the Monterey County Weekly that the cost of the lease is highly discounted and that its terms allow the trust to reassess the lease in 2017, which could potentially increase the institute's rent to market value.
Past teachers
Past guest teachers include:
In popular culture
Cultural influence
Esalen has been cited as having played a key role in the cultural transformations of the 1960s. In its beginnings as a "laboratory for new thought", it was seen by some as the headquarters of the human potential movement. Its use of encounter groups, its focus on the mind-body connection, and its ongoing experimentation in personal awareness introduced many ideas to American society that later became mainstream. In its early years, guest lecturers and workshop leaders included many leading thinkers, psychologists, and philosophers, including Erik Erikson, Ken Kesey, Alan Watts, John C. Lilly, Buckminster Fuller, Aldous Huxley, Linus Pauling, Fritz Perls, Joseph Campbell, Robert Bly, and Carl Rogers.
Esalen has also been the subject of some criticism and controversy. The Economist wrote, "For many others in America and around the world, Esalen stands more vaguely for that metaphorical point where ‘East meets West’ and is transformed into something uniquely and mystically American or New Agey. And for a great many others yet, Esalen is simply that notorious bagno-bordello where people had sex and got high throughout the 1960s and 1970s before coming home talking psychobabble and dangling crystals."
The Human Potential Movement was criticized for espousing an ethic that the inner self should be freely expressed in order to reach a person's true potential, and some people saw this ethic as an aspect of Esalen's culture. The historian Christopher Lasch wrote that humanistic techniques encourage narcissistic, spiritually materialistic, or self-obsessive thoughts and behaviors. In 1990, a graffiti artist spray-painted "Jive shit for rich white folk" on the entrance to Esalen, highlighting class and race issues. Some saw this as a regression away from true spiritual growth. Michel Houellebecq's Atomised traces the New Age movement's influence on the novel's protagonists to older generations' chance meetings at Esalen.
Popular media
Films
In the comedy-drama Bob & Carol & Ted & Alice (1969), sophisticated Los Angeles residents Bob (played by Robert Culp) and Carol Sanders (Natalie Wood) spend a weekend of emotional honesty at an Esalen-style retreat, after which they return to their life determined to embrace free love and complete openness.
Literature
In Thomas Pynchon's novel Inherent Vice (2009) and Paul Thomas Anderson's eponymous 2014 film adaptation, the Chryskylodon Institute is modeled after Esalen.
In Norman Rush's novel Mating (1992), Esalen is referred to as a "twit factory."
Television
The BBC television series The Century of the Self (2002) is critical of the Human Potential Movement and includes video segments recorded at Esalen.
The Mad Men series finale, "Person to Person" (aired May 17, 2015), features Don and Stephanie staying at an Esalen-like coastal retreat in 1970.
In True Detective season 2, the Panticapaeum Institute is largely based on the Esalen Institute.
Music
On July 10, 1968, The Beatles guitarist George Harrison was given sitar lessons at Esalen by Ravi Shankar for the movie Raga.
References
Notes
Works cited
Further reading
External links
1962 establishments in California
Gestalt therapy
Hot springs of California
Human Potential Movement
Buildings and structures in Monterey County, California
Personal development
Tourist attractions in Monterey County, California
Big Sur
New Age communities
New Age organizations | Esalen Institute | Biology | 5,471 |
77,676,491 | https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%20450%E2%80%93500%20light-years | This is a list of star systems within 450–500 light years of Earth.
See also
List of star systems within 400–450 light-years
References
Lists by distance
Star systems
Lists of stars | List of star systems within 450–500 light-years | Physics,Astronomy | 40 |