In mathematics, a Ringschluss (German: Beweis durch Ringschluss, lit. 'proof by ring inference') is a mathematical proof technique where the equivalence of several statements can be proven without having to prove all pairwise equivalences directly. In English it is also sometimes called a cycle of implications,[1] closed chain inference, or circular implication; however, it should be distinguished from circular reasoning, a logical fallacy.
In order to prove that the statements $\varphi_1, \ldots, \varphi_n$ are pairwise equivalent, proofs are given for the implications $\varphi_1 \Rightarrow \varphi_2$, $\varphi_2 \Rightarrow \varphi_3$, $\ldots$, $\varphi_{n-1} \Rightarrow \varphi_n$ and $\varphi_n \Rightarrow \varphi_1$.[2][3]
The pairwise equivalence of the statements then results from the transitivity of the material conditional.
For $n = 4$, the proofs are given for $\varphi_1 \Rightarrow \varphi_2$, $\varphi_2 \Rightarrow \varphi_3$, $\varphi_3 \Rightarrow \varphi_4$ and $\varphi_4 \Rightarrow \varphi_1$. The equivalence of $\varphi_2$ and $\varphi_4$ results from the chains of implications that are no longer explicitly given: $\varphi_2 \Rightarrow \varphi_3 \Rightarrow \varphi_4$ in one direction, and $\varphi_4 \Rightarrow \varphi_1 \Rightarrow \varphi_2$ in the other.
That is, $\varphi_2 \Leftrightarrow \varphi_4$.
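The bookkeeping behind such a proof is just reachability in a directed cycle. The following sketch (illustrative code, not part of the source article) checks that the $n$ cyclic implications generate every ordered implication, and hence every pairwise equivalence, via transitive closure:

```python
# Illustrative sketch: a cycle of implications phi_1 => phi_2 => ... => phi_n
# => phi_1 yields every pairwise implication by transitivity.
from itertools import product

def transitive_closure(edges, nodes):
    """Floyd-Warshall-style closure of an implication relation."""
    reach = {(a, b): (a, b) in edges or a == b
             for a, b in product(nodes, repeat=2)}
    for k, i, j in product(nodes, repeat=3):  # k varies slowest, as required
        if reach[(i, k)] and reach[(k, j)]:
            reach[(i, j)] = True
    return reach

n = 4
nodes = list(range(1, n + 1))
cycle = {(i, i % n + 1) for i in nodes}  # the n directly proven implications
reach = transitive_closure(cycle, nodes)

# Every ordered pair (i, j) is reachable, so phi_i <=> phi_j for all i, j.
assert all(reach[(i, j)] for i, j in product(nodes, repeat=2))
```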
The technique saves writing effort above all. In proving the equivalence of $n$ statements, it requires the direct proof of only $n$ of the $n(n-1)$ implications between these statements. In contrast, for instance, choosing one of the statements as central and proving that the remaining $n-1$ statements are each equivalent to the central one would require $2(n-1)$ implications, a larger number.[1] The difficulty for the mathematician is to find a sequence of statements that allows for the most elegant direct proofs possible. | https://en.wikipedia.org/wiki/Ringschluss
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle between 525 and 660 km (326 and 410 mi) depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium silicate).
Ringwoodite is notable for being able to contain hydroxide ions within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.[5]
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.[6][7]
This mineral was first identified in the Tenham meteorite in 1969,[8] and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the Earth. At depths greater than about 660 kilometres (410 mi), other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km (250 mi); the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical properties of this mineral partly determine the properties of the mantle at those depths. Ringwoodite is stable at pressures of approximately 18 to 23 GPa.
Natural ringwoodite has been found in many shocked chondritic meteorites, in which the ringwoodite occurs as fine-grained polycrystalline aggregates.[9]
Natural ringwoodite generally contains much more magnesium than iron and can form a complete solid solution series from the pure magnesium endmember to the pure iron endmember[citation needed]. The latter, the iron-rich endmember of the γ-olivine solid solution series, γ-Fe2SiO4, was named ahrensite in honor of US mineral physicist Thomas J. Ahrens (1936–2010).[10]
In meteorites, ringwoodite occurs in veinlets of quenched shock melt cutting the matrix and replacing olivine, and was probably produced during shock metamorphism.[9]
In Earth's interior, olivine occurs in the upper mantle at depths less than about 410 km, and ringwoodite is inferred to be present within the transition zone from about 520 to 660 km depth. Seismic discontinuities at about 410 km, 520 km, and 660 km depth have been attributed to phase changes involving olivine and its polymorphs.
The 520 km discontinuity is generally believed to be caused by the transition of the olivine polymorph wadsleyite (beta phase) to ringwoodite (gamma phase), while the 660 km discontinuity is attributed to the phase transformation of ringwoodite (gamma phase) to a silicate perovskite plus magnesiowüstite.[11][12]
Ringwoodite in the lower half of the transition zone is inferred to play a pivotal role in mantle dynamics, and the plastic properties of ringwoodite are thought to be critical in determining flow of material in this part of the mantle. The ability of ringwoodite to incorporate hydroxide is important because of its effect on rheology.
Ringwoodite has been synthesized at conditions appropriate to the transition zone, containing up to 2.6 weight percent water.[13][14]
Because the transition zone between the Earth's upper and lower mantle helps govern the scale of mass and heat transport throughout the Earth, the presence of water within this region, whether global or localized, may have a significant effect on mantle rheology and therefore mantle circulation.[15] In subduction zones, the ringwoodite stability field hosts high levels of seismicity.[16]
An "ultradeep" diamond (one that has risen from a great depth) found in Juína in western Brazil contained an inclusion of ringwoodite, at the time the only known sample of natural terrestrial origin, thus providing evidence of significant amounts of water as hydroxide in the Earth's mantle.[6][17][18][19] The gemstone, about 5 mm long,[19] was brought up by a diatreme eruption.[20] The ringwoodite inclusion is too small to see with the naked eye.[19] A second such diamond was later found.[21]
The mantle reservoir could contain about three times more water, in the form of hydroxide contained within the wadsleyite and ringwoodite crystal structure, than the Earth's oceans combined.[7]
For experiments, hydrous ringwoodite has been synthesized by mixing powders of forsterite (Mg2SiO4), brucite (Mg(OH)2), and silica (SiO2) so as to give the desired final elemental composition. Putting this under 20 gigapascals of pressure at 1,523 K (1,250 °C; 2,282 °F) for three or four hours turns it into ringwoodite, which can then be cooled and depressurized.[5]
Ringwoodite has the spinel structure, in the isometric crystal system with space group Fd3̄m (or F4̄3m[22]). On an atomic scale, magnesium and silicon are in octahedral and tetrahedral coordination with oxygen, respectively. The Si–O and Mg–O bonds have mixed ionic and covalent character.[23] The cubic unit cell parameter is 8.063 Å for pure Mg2SiO4 and 8.234 Å for pure Fe2SiO4.[24]
Ringwoodite compositions range from pure Mg2SiO4 to pure Fe2SiO4 in synthesis experiments. Ringwoodite can incorporate up to 2.6 percent H2O by weight.[5]
The physical properties of ringwoodite are affected by pressure and temperature. At the pressure and temperature conditions of the mantle transition zone, the calculated density of ringwoodite is 3.90 g/cm³ for pure Mg2SiO4;[25] 4.13 g/cm³ for (Mg0.91,Fe0.09)2SiO4[26] of pyrolitic mantle; and 4.85 g/cm³ for Fe2SiO4.[27] It is an isotropic mineral with an index of refraction n = 1.768.
The colour of ringwoodite varies between meteorites, between different ringwoodite-bearing aggregates, and even within a single aggregate. The aggregates can show every shade of blue, purple, grey and green, or have no colour at all.
A closer look at coloured aggregates shows that the colour is not homogeneous, but seems to originate from something with a size similar to that of the ringwoodite crystallites.[28] In synthetic samples, pure-Mg ringwoodite is colourless, whereas samples containing more than one mole percent Fe2SiO4 are deep blue in colour. The colour is thought to be due to Fe2+–Fe3+ charge transfer.[29] | https://en.wikipedia.org/wiki/Ringwoodite
Ringworld is a 1970 science fiction novel by Larry Niven, set in his Known Space universe and considered a classic of science fiction literature. Ringworld tells the story of Louis Wu and his companions on a mission to the Ringworld, an enormous rotating ring, an alien construct in space 186 million miles (299 million kilometres) in diameter. Niven later wrote three sequel novels and then cowrote, with Edward M. Lerner, four prequels and a final sequel; the five latter novels constitute the Fleet of Worlds series. All the novels in the Ringworld series tie into numerous other books set in Known Space. Ringworld won the Nebula Award in 1970,[1] as well as both the Hugo Award and Locus Award in 1971.[2]
On Earth in 2850 AD, a bored Louis Wu is celebrating his 200th birthday. Despite his age, Louis is in perfect physical condition due to the longevity drug boosterspice. Nessus, a Pierson's puppeteer, offers him a mysterious job. Intrigued, Louis accepts. Nessus also recruits the Kzin Speaker-to-Animals and Teela Brown, a young human woman who becomes Louis's lover, for the rest of the ship's crew.
On the puppeteer home world (which is fleeing deadly radiation that will arrive in 20,000 years), they are told that their goal is to determine if the Ringworld, a gigantic artificial ring near the puppeteers' path, poses any threat to their migration. The Ringworld is about one million miles (1.6 million km) wide and approximately the diameter of Earth's orbit, encircling a sunlike star. It rotates to provide artificial gravity 99% as strong as Earth's from centrifugal force. It has a habitable inner surface (equivalent in area to approximately three million Earths), a breathable atmosphere, and a temperature optimal for humans. Night is provided by an inner ring of shadow squares which are connected to each other by thin, ultra-strong wire. When the crew completes its mission, as payment they will be given the starship they used to travel to the puppeteer world; it is about 1000 times faster than any human or Kzinti ship.
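The scale of that spin can be sanity-checked from the formula for centripetal acceleration, $a = v^2/r$. The snippet below is an illustrative back-of-the-envelope calculation, assuming a ring radius of roughly 1 AU (the resulting figure is not quoted from the novel itself):

```python
# Back-of-the-envelope check: rim speed for spin "gravity" on a ring of
# roughly Earth-orbit radius (an assumed value, not a figure from the novel).
import math

G_EARTH = 9.81       # m/s^2, standard Earth gravity
RADIUS = 1.496e11    # m, roughly 1 AU

a = 0.99 * G_EARTH           # required centripetal acceleration
v = math.sqrt(a * RADIUS)    # from a = v^2 / r
print(f"rim speed ~ {v / 1000:,.0f} km/s")   # about 1,200 km/s
```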
When they reach the vicinity of the Ringworld, they are unable to contact anyone. Their ship, the Lying Bastard, is disabled by an automated meteoroid-defense system. The vessel collides with a strand of shadow-square wire and crash-lands near a huge mountain, which is called "Fist-of-God" by the first natives they speak with. The fusion drive is destroyed, so they set out to find a way to get the Lying Bastard off the Ringworld and use the undamaged hyperdrive to return home.
Using their flycycles, they set out for the rim of the ring, searching for technology to help them get home. They encounter primitive human natives who live in the ruins of a once-advanced city. The natives think that Louis is one of the engineers who created the ring, whom they revere as gods. The crew is attacked when Louis accidentally commits what the natives consider a blasphemy, but they extricate themselves.
During their journey, Nessus reveals several puppeteer secrets. The puppeteers initiated research into rendering the Kzinti extinct, considering them dangerous and useless, but found that the numerous Man-Kzin wars—which the Kzinti always lost—had greatly reduced their aggression: a very high percentage of Kzinti males were killed in each conflict, leaving more prudent and cautious survivors to breed. The puppeteers had also used Birthright Lotteries to try to breed humans for luck: all of Teela's ancestors for six generations are lottery winners. Speaker's outrage at learning the former forces Nessus to flee from the group and then follow from a safe distance.
While flying through a giant storm, Teela becomes separated from the others. When Louis and Speaker search for her, their flycycles are caught by an automated trap designed to catch speeders. They are brought to a floating police station. There, they meet Halrloprillalar Hotrufan ("Prill"), a former crew member of a ship that had brought back goods from worlds abandoned by the Ringworld builders. Nessus, using a tasp (a remote pleasure-giving device), conditions Prill into helping and joining them. When her ship returned to the Ringworld the last time, they discovered that civilization had collapsed. Louis surmises that a mold inadvertently brought back by a ship like Prill's mutated and broke down the superconductors vital to the Ringworld civilization, causing its fall.
Teela rejoins them, accompanied by her new lover, a traveling warrior named Seeker who protected her. Based on an insight gained from studying a Ringworld map, Louis comes up with a plan to get home. Teela chooses to remain on the Ringworld with Seeker. Louis, formerly skeptical about breeding for luck, now wonders if the entire mission was caused by Teela's luck, to unite her with her true love and help her mature.
The party collects one end of the shadow-square wire that snapped after the collision with their ship and fell near their path, and drags it behind them. Louis threads it through the Lying Bastard to tether it to the floating police station. "Fist-of-God", the enormous mountain near their crash site, was not on the Ringworld map, leading Louis to guess that it is the result of a meteoroid striking the underside of the ring, pushing the ring's floor up and finally breaking through. The top of the mountain, above the atmosphere, is therefore just a hole. Louis uses the police station to drag the Lying Bastard up and into the hole. Once the ship falls through and clears the ring, they can use its hyperdrive to get home. The book concludes with Louis and Speaker discussing returning to the Ringworld.
Algis Budrys found Ringworld to be "excellent and entertaining ... woven together very skillfully and proceed[ing] at a pretty smooth pace." While praising the novel generally, he faulted Niven for relying on inconsistencies regarding evolution in his extrapolations to support his fictional premises.[3]
Sam Jordison described Ringworld as "arguably one of the most influential science fiction novels of the past 50 years."[4]
In addition to the two aliens, Niven includes a number of concepts from his other Known Space stories:
The opening chapter of the original paperback edition of Ringworld featured Louis Wu teleporting eastward around the Earth in order to extend his birthday. Moving in this direction would, in fact, make local time later rather than earlier, so that Wu would soon arrive in the early morning of the next calendar day. Niven was "endlessly teased" about this error, which he corrected in subsequent printings to show Wu teleporting westward.[5] In his dedication to The Ringworld Engineers, Niven wrote, "If you own a first paperback edition of Ringworld, it's the one with the mistakes in it. It's worth money."[6]
After the publication of Ringworld, many fans identified numerous engineering problems with the Ringworld as described in the novel. One major problem was that the Ringworld, being a rigid structure, was not actually in orbit around the star it encircled and would eventually drift, ultimately colliding with its sun and disintegrating. This led MIT students attending the 1971 Worldcon to chant, "The Ringworld is unstable!" Niven wrote the 1980 sequel The Ringworld Engineers in part to address these engineering issues.
The second chapter refers to standard Earth gravity as 9.98 m/s² (or even gives the unit as m/s [sic]), while standard Earth gravity is 9.81 m/s².
The fifth chapter refers to Nereid as Neptune's largest moon; the planet's largest moon is Triton.
"Ringworld" has become a generic term for such a structure, which is an example of what science fiction fans call a " Big Dumb Object ", or more formally a megastructure . Other science fiction authors have devised their own variants of Niven's Ringworld, notably Iain M. Banks ' Culture Orbitals , best described as miniature Ringworlds, and the titular ring-shaped Halo structures of the video game series Halo . Such a mini-Ringworld appears in Star Wars: The Book of Boba Fett , season 1, episode 5. [ citation needed ] . In the Paramount+ series Star Trek: Lower Decks season 4, episode 3, "In the Cradle of Vexilon", a Ringworld-like world is prominently featured.
In 1984, a role-playing game based on this setting, named The Ringworld Roleplaying Game, was produced by Chaosium. Information from the RPG, along with notes composed by RPG author John Hewitt with Niven, was later used to form the "Bible" given to authors writing in the Man-Kzin Wars series. Niven himself recommended that Hewitt write one of the stories for the original two Man-Kzin Wars books, although this never came to pass.[7]
Tsunami Games released two adventure games based on Ringworld. Ringworld: Revenge of the Patriarch was released in 1992 and Return to Ringworld in 1994. A third game, Ringworld: Within ARM's Reach, was also planned, but never completed.
The video game franchise Halo, created by Bungie, took inspiration from the book in the creation and development of its story around the eponymous rings, called Halos. These are physically similar to the Ringworld; however, they are much smaller and do not encircle a star, instead orbiting stars or planets.
The open source video game Endless Sky features an alien species that creates ringworlds.
In 2017, Paradox Interactive added a DLC called "Utopia" to their game Stellaris,[8] allowing the player to restore or build ringworlds.
In 2021, Mobius Digital added a DLC called "Echoes of the Eye" to their game Outer Wilds,[9][non-primary source needed] which allows the player to explore a hidden, abandoned ringworld and determine what happened to its inhabitants.
There have been many aborted attempts to adapt the novel to the screen.
In 2001, Larry Niven reported that a movie deal had been signed and was in the early planning stages.[10][11]
In 2004, the Sci-Fi Channel reported that it was developing a Ringworld miniseries.[12] The series never came to fruition.
In 2013, the channel, now rebranded as Syfy, again announced that a miniseries based on the novel was in development. This proposed four-hour miniseries was being written by Michael R. Perry and would have been a co-production between MGM Television and Universal Cable Productions.[13]
In 2017, Amazon announced that Ringworld was one of three science fiction series it was developing for its streaming service. MGM was again listed as a co-producer.[14]
Tor/Seven Seas (the same joint venture of Macmillan's Tor Books and Seven Seas Entertainment that also published the English-language translation of Afro Samurai) published a two-part original English-language manga adaptation of Ringworld, with the script written by Robert Mandell and the artwork by Sean Lam.[15] Ringworld: The Graphic Novel, Part One, covering the events of the novel up to the sunflower attack on Speaker, was released on July 8, 2014. Part Two was released on November 10, 2015. | https://en.wikipedia.org/wiki/Ringworld
The Rio Grande Project is a United States Bureau of Reclamation irrigation, hydroelectricity, flood control, and interbasin water transfer project serving the upper Rio Grande basin in the southwestern United States. The project irrigates 193,000 acres (780 km²) along the river in the states of New Mexico and Texas.[1] Approximately 60 percent of this land is in New Mexico. Some water is also allotted to Mexico to irrigate some 25,000 acres (100 km²) on the south side of the river. The project was authorized in 1905,[2] but its final features were not implemented until the early 1950s.
The project consists of two large storage dams, six small diversion dams, two flood-control dams, 596 miles (959 km) of canals and their branches, and 465 miles (748 km) of drainage channels and pipes. A small hydroelectric plant at one of the project's dams also supplies electricity to the region.[3]
Long before Texas was a state, the Pueblo Indians used the waters of the Rio Grande with simple irrigation systems, which the Spanish noted in the 16th century while conducting expeditions from Mexico to North America. In the mid-19th century, American settlers began intensive irrigation development of the Rio Grande watershed. Small dikes, dams, canals, and other irrigation works were constructed along the Rio Grande and its tributaries. The river would destroy some of these primitive structures in its annual floods, and a large, coordinated project would be needed to construct permanent replacements. However, investigations for such a project did not begin until the early twentieth century.
Like many rivers of the American Southwest, runoff in the Rio Grande basin is limited and varies widely from year to year.[4] By the 1890s, water use in the upper basin was so great that the river's flow near El Paso, Texas, was reduced to a trickle in dry summers. To resolve these problems, plans were drawn up for a large storage dam at Elephant Butte, about 120 miles (190 km) downstream of Albuquerque, New Mexico. The Newlands Reclamation Act was passed in 1902, authorizing the Rio Grande Project as a Bureau of Reclamation undertaking. For the next two years, surveyors and engineers undertook a comprehensive feasibility study for the project's dams and reservoirs.
The first elements of the project to be built were the Leasburg Diversion Dam and about 6 miles (9.7 km) of supporting canal, begun in 1906 and finished in 1908. Elephant Butte Dam, the largest dam on the Rio Grande, was authorized by the United States Congress on February 15, 1905. Construction began in 1908, when groundworks were laid. Conflicts over the lands to be submerged under the future reservoir bogged down the project for a while, but work resumed in 1912 and the reservoir began to fill by 1915. The Franklin Canal was an existing 1890 canal purchased by the Bureau of Reclamation in 1912 and rebuilt from 1914 to 1915. The Mesilla and Percha Diversion Dams, East Side Canal, West Side Canal, Rincon Valley Canal, and an extension of the Leasburg Canal were built in the period between 1914 and 1919.[3]
In the late 1910s, a problem developed with rising local groundwater levels caused by irrigation. In response, Reclamation began planning for the extensive 465-mile (748 km) drainage system of the Rio Grande Project in 1916. Contracts for the construction of these drainage systems, as well as distribution canals (laterals), were not awarded until 1917 and 1918. Before 1929, the entire irrigation system would be overhauled. This involved repairing, rebuilding and extending old canals, and constructing new laterals. Work is still in progress, as agricultural development in the region continues to grow.[3]
The last major components of the project were constructed from the 1930s to the early 1950s. Caballo Dam, the second major storage facility of the project, located 21 miles (34 km) south of Truth or Consequences, New Mexico, was built from 1936 to 1938. Caballo was built to provide flood protection for the project lands downstream, stabilize outflows from Elephant Butte, and replace storage lost in Elephant Butte Reservoir due to sedimentation. With the benefit of flow regulation, a small hydroelectric plant was completed in 1940 at the base of Elephant Butte Dam. The construction of power transmission lines was begun in 1940, and was finally completed by 1952.[3][5]
The Elephant Butte Irrigation District is a 6,870-acre (27.8 km²) historic district providing recognition and limited protection for the history of much of the system; it was listed on the National Register of Historic Places in 1997. The listing included three contributing buildings and 214 contributing structures. Noted as historic are the diversion dams and the unlined irrigation canals; most of the mechanical fixtures in the system have been routinely replaced and are non-historic.[6]
The Elephant Butte Dam (also referred to as Elephant Butte Dike) is the main storage facility for the Rio Grande Project. It is a 1,674 ft (510 m) long concrete gravity dam standing 193 ft (59 m) above the river and 301 ft (92 m) high from its foundations. The dam is 228 feet (69 m) thick at the base and tapers to about 18 feet (5.5 m) thick at the crest.[7] The dam took 629,500 cubic yards (481,300 m³) of material to construct.
The full volume of Elephant Butte Reservoir is some 2,109,423 acre-feet (2.601935 × 10⁹ m³), accounting for about 85% of the project's storage capacity. The outlet works of the dam can release 10,800 cu ft/s (310 m³/s), while the service spillway can release 34,750 cu ft/s (984 m³/s).[7]
The reservoir and dam receive water from a catchment of 28,900 square miles (75,000 km²), about 16% of the Rio Grande's total drainage area.[7] The Elephant Butte hydroelectric station is a base-load power plant that draws water from the reservoir and has a capacity of 27.95 megawatts.[8]
Caballo Dam is the second major storage dam of the Rio Grande Project, located about 25 miles (40 km) below Elephant Butte. The dam is 78 feet (24 m) high above the river, 96 feet (29 m) high from its foundations, and 4,558 feet (1,389 m) long. It forms the Caballo Reservoir, which can store up to 343,990 acre-feet (0.42431 km³) of water.
The outlet works can release 5,000 cubic feet per second (140 m³/s), while the spillway has a capacity of 33,200 cubic feet per second (940 m³/s).[9] The dam has no power generation facilities, although it has been proposed that a small hydroelectric plant be installed at its base for local irrigation districts.[10]
Percha Diversion Dam lies downstream from, and 1 mile (1.6 km) west of, the Caballo Dam. It consists of a concrete overflow section flanked by earthen wing dikes totaling 2,489 ft (759 m) in length, standing 19 feet (5.8 m) high above the riverbed and 29 feet (8.8 m) above its foundations.[11] The dam diverts water into the Rincon Valley Main Canal, which is 28.1 miles (45.2 km) long and has a capacity of 350 cu ft/s (9.9 m³/s). Water from the canal irrigates 16,260 acres (6,580 ha) of land in the Rincon Valley.[3]
Leasburg Diversion Dam is downstream of, and nearly identical in design to, the Percha Diversion Dam. It is 7 feet (2.1 m) high above the river and 10 feet (3.0 m) high above its foundations. The dam and adjacent dikes total 3,922.3 feet (1,195.5 m) in length. The dam's spillway is a broad-crested weir about 600 feet (180 m) long with a capacity of 17,000 cu ft/s (480 m³/s).[12] The dam diverts water into the 13.7-mile (22.0 km) Leasburg Canal, which irrigates 31,600 acres (12,800 ha) of land in the upper Mesilla Valley. The canal has a capacity of 625 cubic feet per second (17.7 m³/s).
Picacho North and Picacho South dams impound North Picacho Arroyo and South Picacho Arroyo, respectively, to provide flood protection for the Leasburg Canal. Both arroyos are ephemeral, and so the dams operate only during storm events. The dams were both built in the 1950s.
The Mesilla Diversion Dam is located about 40 miles (64 km) upstream of El Paso and consists of a gated overflow structure. The dam is 10 feet (3.0 m) high above the Rio Grande, 22 feet (6.7 m) high above its foundations, and measures 303 feet (92 m) long. The spillway has a capacity of 15,000 cu ft/s (420 m³/s).[15] The dam diverts water into the East Side Canal and West Side Canal, which provide irrigation water to 53,650 acres (21,710 ha) of land in the lower Mesilla Valley. The East Side Canal is 13.5 miles (21.7 km) long and has a capacity of 300 cu ft/s (8.5 m³/s). The West Side Canal is larger, at 23.4 miles (37.7 km) long, and has a capacity of 650 cu ft/s (18 m³/s). Near its end, the West Side Canal crosses underneath the Rio Grande via the Montoya Siphon.[3]
The American Diversion Dam is a gated dam flanked by earthen dikes about 2 miles (3.2 km) northwest of El Paso and just above the Mexico–United States border. It is 5 feet (1.5 m) high above the riverbed, and 18 feet (5.5 m) from crest to foundation. The spillway is 286 feet (87 m) long and has a capacity of 12,000 cu ft/s (340 m³/s).[16] The dam diverts water into the American Canal, which carries up to 1,200 cubic feet per second (34 m³/s) of water for 2.1 miles (3.4 km) to the beginning of the Franklin Canal. The Franklin Canal is 28.4 miles (45.7 km) long and takes water into the El Paso Valley, where it irrigates 17,000 acres (69 km²).[3]
Riverside Diversion Dam is the lowermost dam of the Rio Grande Project. The dam is 8 feet (2.4 m) above the streambed, 17.5 feet (5.3 m) above its foundations, and 267 feet (81 m) long. Its service spillway consists of six 16 ft × 8.17 ft (4.88 m × 2.49 m) radial gates, and an uncontrolled overflow weir serves as an emergency spillway.[17] The Riverside Canal carries water 17.2 miles (27.7 km) to the El Paso Valley and has a capacity of about 900 cu ft/s (25 m³/s). The Tornillo Canal, with a capacity of 325 cu ft/s (9.2 m³/s), branches off the Riverside Canal after 12 miles (19 km). Excess waters from the canals are diverted to irrigate about 18,000 acres (7,300 ha) in Hudspeth County, Texas.[3]
The Rio Grande Project furnishes irrigation water year-round to a long, narrow area of 178,000 acres (72,000 ha)[2] in the Rio Grande Valley in south-central New Mexico and western Texas. Crops grown in the region include grain, pecans, alfalfa, cotton, and many types of vegetables. Power generated at the Elephant Butte power plant is distributed through an electrical grid totaling 490 miles (790 km) of 115-kilovolt transmission lines and 11 substations. Originally built by Reclamation, the power grid remained under its ownership until 1977, when it was sold to a local company.[8]
Caballo and Elephant Butte reservoirs are both popular recreational areas. Elephant Butte Reservoir, with 36,897 acres (149.32 km²) of water at full pool, is popular for swimming, boating, and fishing. Cabins, fishing tackle, and boat rental services are available at the reservoir. Downstream Caballo Reservoir, with an area of 11,500 acres (47 km²), is also a popular site for picnicking, fishing and boating. Elephant Butte Lake State Park and Caballo Lake State Park serve the two reservoirs, respectively.[3]
Even before the Rio Grande Project, the waters of the Rio Grande were already overtaxed by human development in the region. At the end of the 19th century, there were some 925 diversions of the river in the state of Colorado alone. In 1896, the United States Geological Survey (USGS) affirmed that the river's flow was decreasing by 200,000 acre-feet (250,000,000 m³) annually.[dubious – discuss] The river has run dry many times since the 1950s at Big Bend National Park. At El Paso, Texas, the riverbed is dry for much of the year. Tributaries of the river, on both the Mexican and American sides, have been diverted heavily for irrigation. The Rio Grande is said to be "one of the most stressed river basins in the world".[18] In 2001, the river failed to reach the Gulf of Mexico but instead ended 500 feet (150 m) from the shore behind a sandbar, "not with a roar but with a whimper in the sand".[19]
The river's decreasing flow has posed problems for international security. In the past, the river was wide, deep and fast-flowing in its section through Texas, where it forms a large section of the Mexico–United States border. Illegal immigrants once had to swim across the river at the border, but with the river so low, immigrants need only wade across for most of the year. Besides extensive diversions, introduced fast-growing, water-consuming exotic plants, such as water hyacinth and hydrilla, are also leading to reduced flows. The United States government has recently attempted to slow or stop the progress of these weeds by introducing insects and fish that feed on the invasive plants.[19] | https://en.wikipedia.org/wiki/Rio_Grande_Project
A Riordan array is an infinite lower triangular matrix $D$ constructed from two formal power series, $d(t)$ of order 0 and $h(t)$ of order 1, such that $d_{n,k} = [t^n]\,d(t)h(t)^k$.
A Riordan array is an element of the Riordan group.[1] It was defined by mathematician Louis W. Shapiro and named after John Riordan.[1] The study of Riordan arrays is a field influenced by and contributing to other areas such as combinatorics, group theory, matrix theory, number theory, probability, sequences and series, Lie groups and Lie algebras, orthogonal polynomials, graph theory, networks, unimodal sequences, combinatorial identities, elliptic curves, numerical approximation, asymptotic analysis, and data analysis. Riordan arrays also unify tools such as generating functions, computer algebra systems, formal languages, and path models.[2] Books on the subject, such as The Riordan Array[1] (Shapiro et al., 1991), have been published.
A formal power series $a(x) = a_0 + a_1 x + a_2 x^2 + \cdots = \sum_{j\ge 0} a_j x^j \in \mathbb{C}[[x]]$ (where $\mathbb{C}[[x]]$ is the ring of formal power series with complex coefficients) is said to have order $r$ if $a_0 = \cdots = a_{r-1} = 0 \ne a_r$. Write $\mathcal{F}_r$ for the set of formal power series of order $r$. A power series $a(x)$ has a multiplicative inverse (i.e. $1/a(x)$ is a power series) if and only if it has order 0, i.e. if and only if it lies in $\mathcal{F}_0$; it has a composition inverse, that is, there exists a power series $\bar{a}$ such that $\bar{a}(a(x)) = x$, if and only if it has order 1, i.e. if and only if it lies in $\mathcal{F}_1$.
As mentioned previously, a Riordan array is usually defined via a pair of power series $(d(t), h(t)) \in \mathcal{F}_0 \times \mathcal{F}_1$. The "array" part in its name stems from the fact that one associates to $(d(t), h(t))$ the array of complex numbers defined by $d_{n,k} := [t^n]\,d(t)h(t)^k$ for $n, k \in \mathbb{N}$ (here "$[t^n]\cdots$" means "coefficient of $t^n$ in $\cdots$"). Thus column $k$ of the array consists of the sequence of coefficients of the power series $d(t)h(t)^k$; in particular, column 0 determines and is determined by the power series $d(t)$. Because $d(t)$ is of order 0, it has a multiplicative inverse, and it follows that from the array's column 1 we can recover $h(t)$ as $h(t) = d(t)^{-1}\,d(t)h(t)$. Since $h(t)$ has order 1, $h(t)^k$ is of order $k$ and so is $d(t)h(t)^k$. It follows that the array $(d_{n,k})$ is lower triangular and exhibits a geometric progression $(d_{k,k})_{k\ge 0} = (d_0 h_1^k)_{k\ge 0}$ on its main diagonal. It also follows that the map sending a pair of power series $(d(t), h(t)) \in \mathcal{F}_0 \times \mathcal{F}_1$ to its triangular array is injective.
An example of a Riordan array is given by the pair of power series
$$\left(\frac{1}{1-x},\; \frac{x}{1-x}\right) = \left(\sum_{j\ge 0} x^j,\; \sum_{j\ge 0} x^{j+1}\right) \in \mathcal{F}_0 \times \mathcal{F}_1.$$
It is not difficult to show that this pair generates the infinite triangular array of binomial coefficients $d_{n,k} = \binom{n}{k}$, also called the Pascal matrix:
$$P = \begin{pmatrix} 1 & & & & & \\ 1 & 1 & & & & \\ 1 & 2 & 1 & & & \\ 1 & 3 & 3 & 1 & & \\ 1 & 4 & 6 & 4 & 1 & \\ & & \vdots & & & \ddots \end{pmatrix}.$$
Proof: If $q(x) = \sum_{j\ge 0} q_j x^j$ is a power series with associated coefficient sequence $(q_0, q_1, q_2, \ldots)$, then, by Cauchy multiplication of power series, $$q(x)\,\frac{x}{1-x} = \sum_{j\ge 0} (0 + q_0 + q_1 + \cdots + q_{j-1})\,x^j.$$ Thus, the latter series has the coefficient sequence $(0, q_0, q_0+q_1, q_0+q_1+q_2, \ldots)$, and hence $[x^n]\,q(x)\frac{x}{1-x} = q_0 + \cdots + q_{n-1}$. Fix any $k \in \mathbb{Z}_{\ge 0}$. If $q_n = \binom{n}{k}$, so that $(q_n)_{n\ge 0}$ represents column $k$ of the Pascal array, then $\sum_{j=0}^{n-1} q_j = \sum_{j=0}^{n-1} \binom{j}{k} = \binom{n}{k+1}$. This argument shows by induction on $k$ that $\frac{1}{1-x}\left(\frac{x}{1-x}\right)^k$ has column $k$ of the Pascal array as its coefficient sequence.
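The definition $d_{n,k} = [t^n]\,d(t)h(t)^k$ is easy to experiment with numerically. The following sketch (illustrative code, not part of the source; it works with power series truncated at an assumed order N) builds a Riordan array column by column and confirms the Pascal-matrix example above:

```python
# Illustrative sketch: compute the top-left corner of a Riordan array from a
# truncated pair (d, h), then check the Pascal example (1/(1-t), t/(1-t)).
from math import comb

N = 8  # truncation: keep coefficients of t^0 .. t^(N-1)

def mul(a, b):
    """Cauchy product of two truncated power series (coefficient lists)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def riordan(d, h):
    """The N x N top-left corner of the Riordan array of the pair (d, h)."""
    cols, col = [], d[:]          # column 0 is d(t) itself
    for _ in range(N):
        cols.append(col)
        col = mul(col, h)         # column k+1: multiply column k by h(t)
    return [[cols[k][n] for k in range(N)] for n in range(N)]

geom = [1] * N                    # 1/(1-t) = 1 + t + t^2 + ...
shifted = [0] + [1] * (N - 1)     # t/(1-t) = t + t^2 + ...

P = riordan(geom, shifted)
assert all(P[n][k] == comb(n, k) for n in range(N) for k in range(N))
```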
Below are some often-used facts about Riordan arrays. Note that the matrix multiplication rules applied to infinite lower triangular matrices lead to finite sums only, and that the product of two infinite lower triangular matrices is infinite lower triangular. The next two theorems were first stated and proved by Shapiro et al.,[1] who describe them as derived from results in papers by Gian-Carlo Rota and the book of Roman.[3]
Theorem: a. Let $(a(x), b(x))$ and $(c(x), d(x))$ be Riordan arrays, viewed as infinite lower triangular matrices. Then the product of these matrices is the array associated to the pair $(a(x)\,c(b(x)),\; d(b(x)))$ of formal power series, which is itself a Riordan array.
b. This fact justifies the definition of the multiplication '$*$' of Riordan arrays, viewed as pairs of power series, by $$(a(x), b(x)) * (c(x), d(x)) := (a(x)\,c(b(x)),\; d(b(x))).$$
Proof: Since $a(x)$ and $c(x)$ have order 0, it is clear that $a(x)\,c(b(x))$ has order 0. Similarly, $b(x), d(x) \in \mathcal{F}_1$ implies $d(b(x)) \in \mathcal{F}_1$.
Therefore, $(a(x)\,c(b(x)),\; d(b(x)))$ is a Riordan array.
Define a matrix $M$ as the Riordan array $(a(x), b(x))$. By definition, its $j$-th column $M_{*,j}$ is the sequence of coefficients of the power series $a(x)b(x)^j$. If we multiply this matrix from the right with the sequence $(r_0, r_1, r_2, \ldots)^T$, we get as a result a linear combination of columns of $M$, which we can read as a linear combination of power series, namely $$\sum_{\nu \ge 0} r_\nu M_{*,\nu} = \sum_{\nu \ge 0} r_\nu\, a(x) b(x)^\nu = a(x) \sum_{\nu \ge 0} r_\nu\, b(x)^\nu.$$ Thus, viewing the sequence $(r_0, r_1, r_2, \ldots)^T$ as codified by the power series $r(x)$, we have shown that $(a(x), b(x)) * r(x) = a(x)\,r(b(x))$. Here '$*$' indicates the correspondence, on the power series level, with matrix multiplication. We have multiplied a Riordan array $(a(x), b(x))$ with a single power series. Now let $(c(x), d(x))$ be another Riordan array viewed as a matrix. One can form the product $(a(x), b(x))(c(x), d(x))$. The $j$-th column of this product is just $(a(x), b(x))$ multiplied with the $j$-th column of $(c(x), d(x))$. Since the latter corresponds to the power series $c(x)d(x)^j$, it follows by the above that the $j$-th column of $(a(x), b(x))(c(x), d(x))$ corresponds to $a(x)\,c(b(x))\,d(b(x))^j$. As this holds for all column indices $j$ occurring in $(c(x), d(x))$, we have shown part a. Part b is now clear. $\Box$
Theorem: The family of Riordan arrays endowed with the product '$*$' defined above forms a group: the Riordan group.[1]
Proof: The associativity of the multiplication '$*$' follows from the associativity of matrix multiplication. Next note that $(1, x) * (c(x), d(x)) = (1 \cdot c(x), d(x)) = (c(x), d(x))$. So $(1, x)$ is a left neutral element. Finally, we claim that $(c(\bar{d}(x))^{-1}, \bar{d}(x))$ is the left inverse of $(c(x), d(x))$, where $\bar{d}$ denotes the compositional inverse of $d$. For this, check the computation $(c(\bar{d}(x))^{-1}, \bar{d}(x)) * (c(x), d(x)) = (c(\bar{d}(x))^{-1}\, c(\bar{d}(x)),\; d(\bar{d}(x))) = (1, x)$. As is well known, an associative structure which has a left neutral element and in which each element has a left inverse is a group. $\Box$
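The group law can also be verified numerically on truncated series. In the sketch below (illustrative code, not part of the source; the second pair $(1+x,\, x+x^2)$ is an arbitrary choice of test data), multiplying the two arrays as matrices agrees with the array of the pair $(a(x)c(b(x)),\, d(b(x)))$:

```python
# Illustrative sketch: check (a, b) * (c, d) = (a(x) c(b(x)), d(b(x)))
# by comparing truncated matrix multiplication with the composed pair.
N = 7

def mul(a, b):                        # Cauchy product, truncated to N terms
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def compose(f, g):                    # f(g(x)) for g of order >= 1, truncated
    result, power = [0] * N, [1] + [0] * (N - 1)
    for coeff in f:
        result = [r + coeff * p for r, p in zip(result, power)]
        power = mul(power, g)         # next power g^(j+1)
    return result

def riordan(d, h):                    # matrix entries d_{n,k} = [x^n] d(x) h(x)^k
    cols, col = [], d[:]
    for _ in range(N):
        cols.append(col)
        col = mul(col, h)
    return [[cols[k][n] for k in range(N)] for n in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

a, b = [1] * N, [0] + [1] * (N - 1)                        # (1/(1-x), x/(1-x))
c, d = [1, 1] + [0] * (N - 2), [0, 1, 1] + [0] * (N - 3)   # (1+x, x+x^2)

lhs = matmul(riordan(a, b), riordan(c, d))
rhs = riordan(mul(a, compose(c, b)), compose(d, b))
assert lhs == rhs
```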
Of course, not all invertible infinite lower triangular arrays are Riordan arrays. Here is a useful characterization for the arrays that are Riordan. The following result is apparently due to Rogers.[4]
Theorem: An infinite lower triangular array $D = (d_{n,k})_{n,k\ge 0}$ with $d_{0,0} \ne 0$ is a Riordan array if and only if there exists a sequence, traditionally called the $A$-sequence, $A = (a_0 \ne 0, a_1, \ldots)$, such that $$d_{n+1,k+1} = \sum_{j\ge 0} a_j\, d_{n,k+j} \qquad (*_1)$$ for all $n, k \ge 0$.
Proof.[5] $\Rightarrow$: Let $D$ be the Riordan array stemming from $(d(t), h(t))$. Since $d(t) \in \mathcal{F}_0$, $d_{0,0} \ne 0$. Since $h(t)$ has order 1, $(d(t)h(t)/t,\; h(t))$ is also a Riordan array, and by the group property there exists a Riordan array $(A(t), B(t))$ such that $(d(t), h(t)) * (A(t), B(t)) = (d(t)h(t)/t,\; h(t))$. Computing the left-hand side yields $(d(t)A(h(t)),\; B(h(t)))$, and hence comparison yields $B(h(t)) = h(t)$. Thus $B(t) = t$ is a solution to this equation; it is the unique one because $h$ has a compositional inverse. Thus, we can rewrite the equation as $$(d(t), h(t)) * (A(t), t) = (d(t)h(t)/t,\; h(t)).$$
From the matrix multiplication law, the $(n,k)$-entry of the left-hand side of this latter equation is $$\sum_{j\ge 0} a_j\, d_{n,k+j},$$ since the $(m,k)$-entry of the Riordan array $(A(t), t)$ is $[t^m]A(t)t^k = a_{m-k}$.
On the other hand, the $(n,k)$-entry of the right-hand side of the equation above is $$[t^n]\,\frac{d(t)h(t)}{t}\,h(t)^k = [t^{n+1}]\,d(t)h(t)^{k+1} = d_{n+1,k+1},$$
so that $(*_1)$ results. From $(*_1)$ we also get $d_{n+1,n+1} = a_0\, d_{n,n}$ for all $n \ge 0$, and since we know that the diagonal elements are nonzero, we have $a_0 \ne 0$. Note that using equation $(*_1)$ one can compute all entries knowing only the entries $(d_{n,0})_{n\ge 0}$.
$\Leftarrow$: Now assume that, for a triangular array, the equations $(*_1)$ hold for some sequence $(a_j)_{j\ge 0}$. Let $A(t)$ be the generating function of that sequence and define $h(t)$ by the equation $tA(h(t)) = h(t)$. Check that it is possible to solve the resulting equations for the coefficients of $h$; and since $a_0 \ne 0$, one gets that $h(t)$ has order 1. Let $d(t)$ be the generating function of the sequence $(d_{0,0}, d_{1,0}, d_{2,0}, \ldots)$. Then for the pair $(d(t), h(t))$ we find $$(d(t), h(t)) * (A(t), t) = (d(t)A(h(t)),\; h(t)) = (d(t)h(t)/t,\; h(t)).$$ These are the same equations we found in the first part of the proof, and going through its reasoning, we find equations as in $(*_1)$. Since $d(t)$ (or the sequence of its coefficients) determines the other entries, we see that the initial array is the array we deduced. Thus, the array in $(*_1)$ is a Riordan array. $\Box$
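Equation $(*_1)$ also gives a practical way to read the $A$-sequence off a concrete array, by solving the triangular system one coefficient at a time. The sketch below (illustrative code, not part of the source) recovers $A = (1, 1, 0, 0, \ldots)$ from Pascal's triangle, i.e. Pascal's rule $d_{n+1,k+1} = d_{n,k} + d_{n,k+1}$:

```python
# Illustrative sketch: recover the A-sequence of a Riordan array from its
# entries via d_{n+1,k+1} = sum_j a_j d_{n,k+j}, using Pascal's triangle.
from math import comb

N = 8
d = [[comb(n, k) for k in range(N)] for n in range(N)]  # Pascal array

a = []
for m in range(N - 1):
    # Use the equation with n = N-2, k = N-2-m: only a_0 .. a_m appear
    # (entries above the diagonal vanish), so each step fixes one coefficient.
    n, k = N - 2, N - 2 - m
    known = sum(a[j] * d[n][k + j] for j in range(m))
    a.append((d[n + 1][k + 1] - known) / d[n][k + m])

print(a)  # [1.0, 1.0, 0.0, ...] -- i.e. d_{n+1,k+1} = d_{n,k} + d_{n,k+1}
```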
Clearly, the $A$-sequence alone does not contain all the information about a Riordan array. Indeed, it only determines $h(t)$ and places no restriction on $d(t)$. To determine $d(t)$ "horizontally", a similarly defined $Z$-sequence is used.
Theorem. Let $(d_{n,k})_{n,k\ge 0}$ be an infinite lower triangular array whose diagonal sequence $(d_{n,n})_{n\ge 0}$ does not contain zeroes. Then there exists a unique sequence $Z = (z_0, z_1, z_2, \ldots)$ such that $$d_{n+1,0} = \sum_{j\ge 0} z_j\, d_{n,j} = z_0 d_{n,0} + z_1 d_{n,1} + z_2 d_{n,2} + \cdots \qquad \text{for all } n \ge 0.$$
Proof: By the triangularity of the array, the equation claimed is equivalent to $d_{n+1,0} = \sum_{j=0}^{n} z_j d_{n,j}$. For $n = 0$, this equation reads $d_{1,0} = z_0 d_{0,0}$ and, as $d_{0,0} \ne 0$, it allows computing $z_0$ uniquely. In general, if $z_0, z_1, \ldots, z_{n-1}$ are known, then $d_{n+1,0} - \sum_{j=0}^{n-1} z_j d_{n,j} = z_n d_{n,n}$ allows computing $z_n$ uniquely. $\Box$ | https://en.wikipedia.org/wiki/Riordan_array
A riparian buffer or stream buffer is a vegetated area (a "buffer strip") near a stream, usually forested, which helps shade and partially protect the stream from the impact of adjacent land uses. It plays a key role in increasing water quality in associated streams, rivers, and lakes, thus providing environmental benefits. With the decline of many aquatic ecosystems due to agriculture, riparian buffers have become a very common conservation practice aimed at increasing water quality and reducing pollution.
Riparian buffers act to intercept sediment, nutrients, pesticides, and other materials in surface runoff and reduce nutrients and other pollutants in shallow subsurface water flow.[1] They also serve to provide habitat and wildlife corridors in primarily agricultural areas. They can also be key in reducing erosion by providing stream bank stabilization. Large-scale results have demonstrated that expanding riparian buffers through the deployment of plantation systems can effectively reduce nitrogen emissions to water and soil loss by wind erosion, while simultaneously providing substantial environmental co-benefits and having limited negative effects on current agricultural production.[2]
Riparian buffers intercept sediment and nutrients. They counteract eutrophication in downstream lakes and ponds, which can be detrimental to aquatic habitats because of the large fish kills that occur upon large-scale eutrophication. Riparian buffers keep chemicals, like pesticides, that can be harmful to aquatic life out of the water. Some pesticides can be especially harmful if they bioaccumulate in organisms, with the chemicals reaching harmful levels by the time the organisms are ready for human consumption. Riparian buffers also stabilise the bank surrounding the water body, which is important since erosion can be a major problem in agricultural regions, where cut (eroded) banks can take land out of production. Erosion can also lead to sedimentation and siltation of downstream lakes, ponds, and reservoirs. Siltation can greatly reduce the life span of reservoirs and the dams that create them.
Riparian buffers can act as crucial habitat for a large number of species, especially those that have lost habitat due to agricultural land being put into production. The habitat provided by the buffers also doubles as a corridor network for species that have had their habitat fragmented by various land uses. By adding this vegetated area of land near a water source, a buffer increases biodiversity by giving species displaced by non-conservation land use an area in which to re-establish. With this re-establishment, the number of native species and biodiversity in general can be increased.[3] The large trees in the first zone of the riparian buffer provide shade and therefore cooling for the water, increasing productivity and habitat quality for aquatic species. When branches and stumps (large woody debris) fall into the stream from the riparian zone, more stream habitat features are created. Carbon is added as an energy source for biota in the stream.
Buffers increase land value and allow for the production of profitable alternative crops. Vegetation such as black walnut and hazelnut, which can be profitably harvested, can be incorporated into the riparian buffer. Lease fees for hunting can also be increased, as the larger habitat means that the land will be more sought-after for hunting purposes. Designing buffer zones based on their hydrological function, instead of the traditionally used fixed-width method, can be economically beneficial in forestry practices.[4]
A riparian buffer is usually split into three different zones, each having its own specific purpose for filtering runoff and interacting with the adjacent aquatic system. Buffer design is a key element in the effectiveness of the buffer. It is generally recommended that native species be chosen to plant in these three zones, with the general width of the buffer being 50 feet (15 m) on each side of the stream.[5]
The US National Agroforestry Center has developed a filter strip design tool called AgBufferBuilder, a GIS-based computer program for designing vegetative filter strips around agricultural fields that utilizes terrain analysis to account for spatially non-uniform runoff.
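To make the idea of spatially non-uniform buffer design concrete, here is a toy sketch. It is not AgBufferBuilder's actual algorithm; the function name, thresholds, and scaling rule are all hypothetical. It merely illustrates widening the buffer where more upslope runoff converges on a segment of field edge:

```python
# Toy sketch of variable-width buffer design (NOT AgBufferBuilder's actual
# algorithm; all names and thresholds below are hypothetical).
# Field-edge segments receiving more upslope runoff get wider buffers,
# rather than one fixed width everywhere.

def buffer_widths(contributing_areas_m2, base_width_m=15.0,
                  reference_area_m2=2000.0, max_width_m=60.0):
    """Scale each segment's buffer width by its contributing runoff area."""
    widths = []
    for area in contributing_areas_m2:
        scaled = base_width_m * (area / reference_area_m2)
        widths.append(min(max(scaled, base_width_m), max_width_m))
    return widths

# Example: four segments of stream-side field edge with uneven runoff.
print(buffer_widths([500.0, 2000.0, 6000.0, 12000.0]))
# -> [15.0, 15.0, 45.0, 60.0]
```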
Logging is sometimes recommended as a management practice in riparian buffers, usually to provide economic incentive. However, some studies have shown that logging can harm wildlife populations, especially birds. A study by the University of Minnesota found a correlation between the harvesting of timber in riparian buffers and a decline in bird populations.[7] Therefore, logging is generally discouraged as an environmental practice and is left to designated logging areas.
The Conservation Reserve Program (CRP), a farming assistance program in the United States, provides many incentives to landowners to encourage them to install riparian buffers around water systems that have a high chance of non-point water pollution and are highly erodible. For example, the Nebraska system of Riparian Buffer Payments offers payments for the cost of setup, a sign-up bonus, and annual rental payments.
These incentives are offered to agriculturists to compensate them for the economic loss of taking this land out of production. If the land is highly erodible and produces little economic gain, it can sometimes be more economical to take advantage of these CRP programs.[8]
Riparian buffers have undergone much scrutiny regarding their effectiveness, resulting in thorough testing and monitoring. A study done by the University of Georgia, conducted over a nine-year period, monitored the amounts of fertilizers that reached the watershed from the source of the application. It found that these buffers removed at least 60% of the nitrogen in the runoff and at least 65% of the phosphorus from the fertilizer application. The same study showed that Zone 3 was much more effective than Zones 1 and 2 at removing contaminants.[9] However, a 2017 study found little or no capacity for reducing glyphosate and AMPA leaching to streams: buffers of spontaneous herbaceous vegetation were only as efficient as Salix plantations, and measurements of glyphosate in runoff after a year suggested an unexpected persistence, and even a capacity of riparian buffers to favor glyphosate infiltration down to 70 cm depth in the soil.[10]
After the initial installation of the riparian buffer, relatively little maintenance is needed to keep the buffer in good condition. Once the trees and grasses mature, they regenerate naturally and make a more effective buffer. The sustainability of the riparian buffer makes it extremely attractive to landowners since they do relatively little work and still receive payments. Riparian buffers have the potential to be the most effective way to protect aquatic biodiversity and water quality and manage water resources in developing countries that lack the funds to install water treatment and supply systems in midsize and small towns.
Species selection based on an area in Nebraska, as an example: | https://en.wikipedia.org/wiki/Riparian_buffer |
A riparian zone or riparian area is the interface between land and a river or stream.[2] In some regions, the terms riparian woodland, riparian forest, riparian buffer zone, riparian corridor, and riparian strip are used to characterize a riparian zone. The word riparian is derived from Latin ripa, meaning "river bank".[3]
Riparian is also the proper nomenclature for one of the terrestrial biomes of the Earth.[4] Plant habitats and communities along the river margins and banks are called riparian vegetation, characterized by hydrophilic plants.[5] Riparian zones are important in ecology, environmental resource management, and civil engineering[6] because of their role in soil conservation, their habitat biodiversity, and the influence they have on terrestrial and semiaquatic fauna as well as aquatic ecosystems, including grasslands, woodlands, wetlands, and even non-vegetative areas.[7]
Riparian zones may be natural or engineered for soil stabilization or restoration . [ 8 ] These zones are important natural biofilters , protecting aquatic environments from excessive sedimentation , polluted surface runoff , and erosion . [ 9 ] They supply shelter and food for many aquatic animals and shade that limits stream temperature change. [ 10 ] When riparian zones are damaged by construction , agriculture or silviculture , biological restoration can take place, usually by human intervention in erosion control and revegetation. [ 11 ] If the area adjacent to a watercourse has standing water or saturated soil for as long as a season, it is normally termed a wetland because of its hydric soil characteristics. Because of their prominent role in supporting a diversity of species , [ 12 ] riparian zones are often the subject of national protection in a biodiversity action plan . These are also known as a "plant or vegetation waste buffer". [ 13 ]
Research shows that riparian zones are instrumental in water quality improvement for both surface runoff and water flowing into streams through subsurface or groundwater flow. [ 14 ] [ 15 ] Riparian zones can play a role in lowering nitrate contamination in surface runoff, such as manure and other fertilizers from agricultural fields , that would otherwise damage ecosystems and human health. [ 16 ] Particularly, the attenuation of nitrate or denitrification of the nitrates from fertilizer in this buffer zone is important. [ 17 ] The use of wetland riparian zones shows a particularly high rate of removal of nitrate entering a stream and thus has a place in agricultural management. [ 18 ] Also in terms of carbon transport from terrestrial ecosystems to aquatic ecosystems, riparian groundwater can play an important role. [ 19 ] As such, a distinction can be made between parts of the riparian zone that connect large parts of the landscape to streams, and riparian areas with more local groundwater contributions. [ 20 ]
- Riparian forests are primarily situated alongside rivers or streams, with varying degrees of proximity to the water's edge.
- These ecosystems are intimately connected with dynamic water flow and soil processes, influencing their characteristics.
- Riparian forests feature a diverse combination of elements, including:
- Mesic terrestrial vegetation (vegetation adapted to moist conditions).
- Dependent animal life, relying on the riparian environment for habitat and resources.
- Local microclimate influenced by the presence of water bodies.
- The vegetation in riparian forests exhibits a multi-layered structure.
- Moisture-dependent trees are the dominant feature, giving these forests a unique appearance, especially in savanna regions.
- These moisture-dependent trees define the landscape, accompanied by a variety of mesic understorey , shrub, and ground cover species.
- Riparian forests often host plant species that have high moisture requirements.
- The flora typically includes species native to the region, adapted to the moist conditions provided by proximity to water bodies.
In summary, riparian forests are characterized by their location along waterways, their intricate interplay with water and soil dynamics, a diverse array of vegetation layers, and a plant composition favoring moisture-dependent species.
Riparian zones dissipate stream energy. [ 21 ] The meandering curves of a river, combined with vegetation and root systems, slow the flow of water, which reduces soil erosion and flood damage. [ 22 ] Sediment is trapped, reducing suspended solids to create less turbid water, replenish soils, and build stream banks. [ 23 ] Pollutants are filtered from surface runoff, enhancing water quality via biofiltration. [ 3 ] [ 24 ] [ 25 ]
The riparian zones also provide wildlife habitat , increased biodiversity, and wildlife corridors , [ 26 ] enabling aquatic and riparian organisms to move along river systems avoiding isolated communities. [ 27 ] Riparian vegetation can also provide forage for wildlife and livestock. [ 23 ]
Riparian zones are also important for the fish that live within rivers, such as brook trout and other charr . [ 28 ] Impacts on riparian zones can affect fish, and restoration is not always sufficient to recover fish populations. [ 29 ] [ 30 ]
They provide native landscape irrigation by extending seasonal or perennial flows of water. [ 31 ] Nutrients from terrestrial vegetation (e.g. plant litter and insect drop) are transferred to aquatic food webs , and are a vital source of energy in aquatic food webs. [ 32 ] The vegetation surrounding the stream helps to shade the water, mitigating water temperature changes . Thinning of riparian zones has been observed to cause increased maximum temperatures, higher fluctuations in temperature, and elevated temperatures being observed more frequently and for longer periods of time. [ 33 ] Extreme changes in water temperature can have lethal effects on fish and other organisms in the area. [ 32 ] The vegetation also contributes wood debris to streams, which is important to maintaining geomorphology . [ 34 ]
Riparian zones also act as important buffers against nutrient loss in the wake of natural disasters, such as hurricanes . [ 35 ] [ 36 ] Many of the characteristics of riparian zones that reduce the inputs of nitrogen from agricultural runoff also retain the necessary nitrogen in the ecosystem after hurricanes threaten to dilute and wash away critical nutrients. [ 37 ] [ 38 ] [ 39 ]
From a social aspect, riparian zones contribute to nearby property values through amenity and views, and they improve enjoyment for footpaths and bikeways through supporting foreshoreway networks. Space is created for riparian sports such as fishing, swimming, and launching for vessels and paddle craft. [ 40 ]
The riparian zone acts as a sacrificial erosion buffer to absorb impacts of factors including climate change , increased runoff from urbanization , and increased boat wake without damaging structures located behind a setback zone. [ 41 ] [ 42 ]
"Riparian zones play a crucial role in preserving the vitality of streams and rivers, especially when faced with challenges stemming from catchment land use, including agricultural and urban development. These changes in land utilization can exert adverse impacts on the health of streams and rivers and, consequently, contribute to a decline in their reproductive rates."
The protection of riparian zones is often a consideration in logging operations. [ 43 ] The undisturbed soil, soil cover, and vegetation provide shade, plant litter, and woody material and reduce the delivery of soil eroded from the harvested area. [ 44 ] Factors such as soil types and root structures, climatic conditions, and vegetative cover determine the effectiveness of riparian buffering. Activities associated with logging, such as sediment input, introduction or removal of species, and the input of polluted water all degrade riparian zones. [ 45 ]
The assortment of riparian zone trees varies from those of wetlands and typically consists of plants that are either emergent aquatic plants, or herbs , trees and shrubs that thrive in proximity to water. [ 46 ] In South Africa's fynbos biome, riparian ecosystems are heavily invaded by alien woody plants . [ 47 ] Riparian plant communities along lowland streams exhibit remarkable species diversity, driven by the unique environmental gradients inherent to these ecosystems. [ 48 ]
Riparian forest can be found in Benin, West Africa. In Benin, where the savanna ecosystem prevails, "riparian forests" include various types of woodlands, such as semi-deciduous forests, dry forests, open forests, and woodland savannas . These woodlands can be found alongside rivers and streams. [ 49 ] Riparian zones can also be found in Nigeria, within the Ibadan region of Oyo State. Ibadan, one of the oldest towns in Africa, covers a total area of 3,080 square kilometers and is characterized by a network of perennial water streams that create these valuable riparian zones. [ 49 ] In the research conducted by Adeoye et al. (2012) on land use changes in Southwestern Nigeria, it was observed that 46.18 square kilometers of the area are occupied by water bodies. Additionally, most streams and rivers in this region are accompanied by riparian forests. Nevertheless, the study also identified a consistent reduction in the extent of these riparian forests over time, primarily attributed to a significant deforestation rate. [ 50 ] In Nigeria, according to Momodu et al. (2011), there was a notable decline of about 50% in riparian forest coverage within the period of 1978 to 2000. This reduction is primarily attributed to alterations in land use and land cover. Additionally, their research indicates that if current trends continue, the riparian forests may face further depletion, potentially leading to their complete disappearance by the year 2040. [ 50 ] Riparian zones can also be found in the Cape Agulhas region of South Africa. [ 51 ] Riparian areas along South African rivers have experienced significant deterioration as a result of human activities. Similar to many other developed and developing areas worldwide, the extensive building of dams in upstream river areas and the extraction of water for irrigation purposes have led to diminished water flows and changes in the riparian environment. [ 8 ]
Herbaceous Perennial: [ 52 ] [ unreliable source? ]
In western North America and the Pacific coast, the riparian vegetation includes:
Riparian trees [ 53 ]
Riparian shrubs [ 53 ]
Other plants
In Asia there are different types of riparian vegetation, [ 54 ] but the interactions between hydrology and ecology are similar as occurs in other geographic areas. [ 55 ]
Typical riparian vegetation in temperate New South Wales, Australia include:
Typical riparian zone trees in Central Europe include:
Land clearing followed by floods can quickly erode a riverbank, taking valuable grasses and soils downstream, and later allowing the sun to bake the land dry. [ 56 ] [ 57 ] Riparian zones can be restored through relocation (of human-made products), rehabilitation, and time. [ 45 ] Natural Sequence Farming techniques have been used in the Upper Hunter Valley of New South Wales , Australia, in an attempt to restore eroded farms to optimum productivity rapidly. [ 58 ]
The Natural Sequence Farming technique involves placing obstacles in the water's pathway to lessen the energy of a flood and help the water to deposit soil and seep into the flood zone. [ 59 ] Another technique is to quickly establish ecological succession by encouraging fast-growing plants such as "weeds" ( pioneer species ) to grow. [ 60 ] These may spread along the watercourse and cause environmental degradation , but may stabilize the soil, place carbon into the ground, and protect the land from drying. The weeds will improve the streambeds so trees and grasses can return and, ideally, replace the weeds. [ 61 ] [ 62 ] There are several other techniques used by government and non-government agencies to address riparian and streambed degradation, ranging from the installation of bed control structures such as log sills to the use of pin groynes or rock emplacement. [ 63 ] Other possible approaches include control of invasive species, monitoring of herbivore activity, and cessation of human activity in a particular zone followed by natural re-vegetation. [ 64 ] Conservation efforts have also encouraged incorporating the value of ecosystem services provided by riparian zones into management plans, as these benefits have traditionally been absent in the consideration and designing of these plans. [ 64 ] [ 65 ] | https://en.wikipedia.org/wiki/Riparian_zone |
The Ripper Method , developed in 1898, [ 1 ] is an analytical chemistry technique used to determine the total amount of sulfur dioxide (SO 2 ) in a solution. The technique uses a standard iodine solution and a starch indicator to titrate the solution and determine the concentration of free SO 2 . The titration is then repeated with a new sample of the solution that has been pretreated with sodium hydroxide (NaOH) to release bound SO 2 . The results of these two titrations can then be used to determine the bound, free, and total amounts of SO 2 in the solution. Instead of a starch indicator, an electrode can be used to detect the presence of free iodine. [ 2 ] This technique is widely used in wine making. [ 3 ]
The first reaction of iodine with SO 2 and water is as follows:
SO 2 + I 2 + 2 H 2 O → H 2 SO 4 + 2 HI
As the reaction proceeds, all available SO 2 will be consumed and the starch indicator added to the solution will bind with the unconsumed iodine, turning the solution black.
The second step of the reaction requires pretreating the solution with NaOH to release bound SO 2 , after which the titration with iodine can be repeated.
HSO 3 − ⇌ H 2 SO 3 ⇌ SO 2
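As a rough illustration of the arithmetic behind the method, the sketch below converts titration volumes into SO 2 concentrations, assuming a 0.02 N iodine titrant, a 25 mL sample, and the 1:1 SO 2 :I 2 stoichiometry of the reaction above; all numbers are illustrative, not a prescribed procedure.

```python
# Minimal sketch of the Ripper arithmetic (illustrative values only).
# One millilitre of 1 N iodine oxidizes one milliequivalent of SO2;
# the equivalent weight of SO2 is 64/2 = 32 g/eq, i.e. 32 mg/meq.

def so2_mg_per_l(titrant_ml, normality, sample_ml):
    """SO2 concentration (mg/L) from an iodine titration."""
    return titrant_ml * normality * 32.0 * 1000.0 / sample_ml

free_so2 = so2_mg_per_l(titrant_ml=2.1, normality=0.02, sample_ml=25.0)
total_so2 = so2_mg_per_l(titrant_ml=7.4, normality=0.02, sample_ml=25.0)  # NaOH-pretreated sample
bound_so2 = total_so2 - free_so2

print(f"free {free_so2:.0f} mg/L, bound {bound_so2:.0f} mg/L, total {total_so2:.0f} mg/L")
```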
The Ripper Method is commonly used in wine making applications as SO 2 is often added to wine to maintain its freshness and the concentration needs to be determined. The technique is not precise and is prone to systematic error as well. This limits its use, despite being a fast and inexpensive test. [ 4 ] | https://en.wikipedia.org/wiki/Ripper_Method |
Ripple20 is a set of vulnerabilities discovered in 2020 in a software library that implemented a TCP/IP stack . The security concerns were discovered by JSOF, which named the collective vulnerabilities for how one company's code became embedded into numerous products. The software library was created around 1997 and had been implemented by many manufacturers of online devices.
Ripple20 is a set of 19 vulnerabilities discovered in 2020 in a software library developed by the Cincinnati -based [ 1 ] company Treck Inc., which implemented a TCP/IP stack . [ 2 ] [ 3 ] [ 4 ]
The first release of Treck's library was around 1997. [ 1 ] Treck had also worked with Elmic Systems , which created a fork of the library when the companies ended their collaboration. [ 5 ] In September 2019, JSOF researchers analyzed a device containing code from the library and discovered it had vulnerabilities. Further analysis determined that the code originated from Treck's library, which had been widely implemented by numerous manufacturers. [ 5 ] The disclosure of the vulnerabilities was made in June 2020. [ 6 ] [ 7 ] [ 8 ] [ 9 ] Ripple20 was chosen as the name for the set of vulnerabilities based on the disclosure year and the idea that the problems "rippled" through the supply chain from one company. [ 2 ] [ 10 ] It is difficult to identify all affected devices, because manufacturers may not realize that the library was used in one of their components. [ 11 ] | https://en.wikipedia.org/wiki/Ripple20 |
In physics , a ripple tank is a shallow glass tank of water used to demonstrate the basic properties of waves . It is a specialized form of a wave tank . The ripple tank is usually illuminated from above, so that the light shines through the water. Some small ripple tanks fit onto the top of an overhead projector , i.e. they are illuminated from below. The ripples on the water show up as shadows on the screen underneath the tank. All the basic properties of waves, including reflection , refraction , interference and diffraction , can be demonstrated.
Ripples may be generated by a piece of wood that is suspended above the tank on elastic bands so that it is just touching the surface. Screwed to the wood is a motor with an off-centre weight attached to its axle. As the axle rotates, the motor wobbles, shaking the wood and generating ripples.
A number of wave properties can be demonstrated with a ripple tank. These include plane waves , reflection, refraction, interference and diffraction.
When the rippler is lowered so that it just touches the surface of the water, plane waves will be produced.
When a small spherical dipper is attached to the rippler, acting as a point source, and lowered so that it just touches the surface of the water, circular waves will be produced.
By placing a metal bar in the tank and tapping the wooden bar a pulse of three or four ripples can be sent towards the metal bar. The ripples reflect from the bar. If the bar is placed at an angle to the wavefront the reflected waves can be seen to obey the law of reflection. The angle of incidence and angle of reflection will be the same.
If a concave parabolic obstacle is used, a plane wave pulse will converge on a point after reflection. This point is the focal point of the mirror. Circular waves can be produced by dropping a single drop of water into the ripple tank. If this is done at the focal point of the "mirror" plane waves will be reflected back.
If a sheet of glass is placed in the tank, the depth of water in the tank will be shallower over the glass than elsewhere. The speed of a wave in water depends on the depth, so the ripples slow down as they pass over the glass. This causes the wavelength to decrease. If the junction between the deep and shallow water is at an angle to the wavefront , the waves will refract. In the diagram above, the waves can be seen to bend towards the normal. The normal is shown as a dotted line. The dashed line is the direction that the waves would travel if they had not met the angled piece of glass. [ 1 ]
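The effect can be estimated numerically. The sketch below assumes the long-wavelength shallow-water speed relation v = √(gh) and Snell's law for water waves; the depths and the angle of incidence are illustrative values, not measurements from a particular tank.

```python
# Refraction of ripples passing from deeper to shallower water,
# assuming v = sqrt(g*h) and Snell's law sin(t1)/v1 = sin(t2)/v2.
import math

g = 9.81                            # m/s^2
h_deep, h_shallow = 0.010, 0.004    # water depths in metres (illustrative)

v_deep = math.sqrt(g * h_deep)
v_shallow = math.sqrt(g * h_shallow)

theta_deep = math.radians(30.0)     # angle of incidence to the normal
theta_shallow = math.asin(math.sin(theta_deep) * v_shallow / v_deep)

print(f"speed: {v_deep:.3f} -> {v_shallow:.3f} m/s")
print(f"angle: 30.0 -> {math.degrees(theta_shallow):.1f} deg (bends toward the normal)")
```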
In practice, showing refraction with a ripple tank is quite tricky to do.
If a small obstacle is placed in the path of the ripples and a low frequency is used, there is no shadow area, as the ripples diffract around it. A higher frequency may result in a shadow region. If a large obstacle is placed in the tank, a shadow area will probably be observed.
If an obstacle with a small gap is placed in the tank the ripples emerge in an almost semicircular pattern. If the gap is large however, the diffraction is much more limited. Small , in this context, means that the size of the obstacle is comparable to the wavelength of the ripples.
A phenomenon analogous to the diffraction of X-rays from an atomic crystal lattice can also be seen, thus demonstrating the principles of crystallography . If one lowers a grid of obstacles into the water, with the spacing between the obstacles roughly corresponding to the wavelength of the water waves, one will see diffraction from the grid. At certain angles between the grid and the oncoming waves, the waves will appear to reflect off the grid; at other angles, the waves will pass through. Similarly, if the frequency (wavelength) of the waves is altered, the waves will also alternately pass through or be reflected, depending on the precise relationship between spacing, orientation and wavelength.
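As a rough numerical illustration of this wavelength dependence, the sketch below applies the Bragg-type condition n·λ = 2·d·sin(θ) to a ripple-tank grid; the spacing and wavelength are made-up values chosen only to show how the reflection angles fall out.

```python
# Angles of strong reflection from a grid of obstacles, assuming the
# Bragg condition n*lambda = 2*d*sin(theta); values are illustrative.
import math

d = 0.030            # obstacle spacing, metres
wavelength = 0.020   # ripple wavelength, metres

for n in range(1, 4):
    s = n * wavelength / (2 * d)
    if s <= 1:
        print(f"order {n}: strong reflection near {math.degrees(math.asin(s)):.1f} deg")
    else:
        print(f"order {n}: no solution (n*lambda > 2d)")
```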
Interference can be produced by the use of two dippers that are attached to the main ripple bar. [ 1 ] In the diagrams below on the left the light areas represent crests of waves, the black areas represent troughs. Notice the grey areas: they are areas of destructive interference where the waves from the two sources cancel one another out. To the right is a photograph of two-point interference generated in a circular ripple tank. | https://en.wikipedia.org/wiki/Ripple_tank |
In computer science , more particularly in automated theorem proving , rippling [ 1 ] is a group of meta-level heuristics , developed primarily in the Mathematical Reasoning Group in the School of Informatics at the University of Edinburgh , and most commonly used to guide inductive proofs in automated theorem proving systems . Rippling may be viewed as a restricted form of rewrite system , where special object level annotations are used to ensure fertilization upon the completion of rewriting, with a measure decreasing requirement ensuring termination for any set of rewrite rules and expression.
Raymond Aubin was the first person to use the term "rippling out" whilst working on his 1976 PhD thesis [ 2 ] at the University of Edinburgh. He recognised a common pattern of movement during the rewriting stage of inductive proofs. Alan Bundy later turned this concept on its head by defining rippling to be this pattern of movement, rather than a side effect. [ citation needed ]
Since then, "rippling sideways", "rippling in" and "rippling past" were coined, so the term was generalised to rippling. [ citation needed ] Rippling continues to be developed at Edinburgh, and elsewhere, as of 2007.
Rippling has been applied to many problems traditionally viewed as being hard in the inductive theorem proving community, including Bledsoe 's limit theorems [ citation needed ] and a proof of the Gordon microprocessor, [ citation needed ] a miniature computer developed by Michael J. C. Gordon and his team at Cambridge.
Very often, when attempting to prove a proposition, we are given a source expression and a target expression, which differ only by the inclusion of a few extra syntactic elements.
This is especially true in inductive proofs , where the given expression is taken to be the inductive hypothesis , and the target expression the inductive conclusion. Usually, the differences between the hypothesis and conclusion are only minor, perhaps the inclusion of a successor function (e.g., +1) around the induction variable.
At the start of rippling the differences between the two expressions, known as wave-fronts in rippling parlance, are identified. Typically these differences prevent the completion of the proof and need to be "moved away". The target expression is annotated to distinguish the wavefronts (differences) and skeleton (common structure) between the two expressions. Special rules, called wave rules, can then be used in a terminating fashion to manipulate the target expression until the source expression can be used to complete the proof.
We aim to show that the addition of natural numbers is commutative . This is an elementary property, and the proof is by routine induction. Nevertheless, the search space for finding such a proof may become quite large.
Typically, the base case of any inductive proof is solved by methods other than rippling. For this reason, we will concentrate on the step case.
Our step case takes the following form, where we have chosen to use x as the induction variable:
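s(x) + y = y + s(x)

with the induction hypothesis x + y = y + x available, where s denotes the successor function.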
We may also possess several rewrite rules, drawn from lemmas, inductive definitions or elsewhere, that can be used to form wave-rules.
Suppose we have the following three rewrite rules:
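1. s(X) = s(Y) ⇒ X = Y
2. s(X) + Y ⇒ s(X + Y)
3. X + s(Y) ⇒ s(X + Y)

(the usual successor-arithmetic lemmas, with s the successor function).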
then these can be annotated, to form:
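1. [s]( X ) = [s]( Y ) ⇒ X = Y
2. [s]( X ) + Y ⇒ [s]( X + Y )
3. X + [s]( Y ) ⇒ [s]( X + Y )

where, in lieu of the usual box notation, each wave-front is marked here by square brackets around the successor symbol, and the unmarked part is the skeleton.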
Note that all these annotated rules preserve the skeleton (x + y = y + x, in the first case and x + y in the second/third). Now, annotating the inductive step case, gives us:
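[s]( x ) + y = y + [s]( x )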
And we are all set to perform rippling:
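[s]( x ) + y = y + [s]( x )
[s]( x + y ) = y + [s]( x )      (by wave-rule 2)
[s]( x + y ) = [s]( y + x )      (by wave-rule 3)
x + y = y + x                    (by wave-rule 1)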
Note that the final rewrite causes all wave-fronts to disappear, and we may now apply fertilization, the application of the inductive hypotheses, to complete the proof. | https://en.wikipedia.org/wiki/Rippling |
In symbolic computation , the Risch algorithm is a method of indefinite integration used in some computer algebra systems to find antiderivatives . It is named after the American mathematician Robert Henry Risch , a specialist in computer algebra who developed it in 1968.
The algorithm transforms the problem of integration into a problem in algebra . It is based on the form of the function being integrated and on methods for integrating rational functions , radicals , logarithms , and exponential functions . Risch called it a decision procedure , because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral. However, the algorithm does not always succeed in identifying whether or not the antiderivative of a given function in fact can be expressed in terms of elementary functions. [ example needed ]
The complete description of the Risch algorithm takes over 100 pages. [ 1 ] The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman .
Some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Brian L. Miller. [ 2 ]
The Risch algorithm is used to integrate elementary functions . These are functions obtained by composing exponentials, logarithms, radicals, trigonometric functions, and the four arithmetic operations ( + − × ÷ ). Laplace solved this problem for the case of rational functions , as he showed that the indefinite integral of a rational function is a rational function and a finite number of constant multiples of logarithms of rational functions [ citation needed ] . The algorithm suggested by Laplace is usually described in calculus textbooks; as a computer program, it was finally implemented in the 1960s. [ citation needed ]
Liouville formulated the problem that is solved by the Risch algorithm. Liouville proved by analytical means that if there is an elementary solution g to the equation g ′ = f then there exist constants α i and functions u i and v in the field generated by f such that the solution is of the form
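f = v′ + ∑ i α i ( u i ′ / u i ) , equivalently g = ∫ f = v + ∑ i α i ln u i .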
Risch developed a method that allows one to consider only a finite set of functions of Liouville's form.
The intuition for the Risch algorithm comes from the behavior of the exponential and logarithm functions under differentiation. For the function f e g , where f and g are differentiable functions , we have
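d/dx ( f e^g ) = ( f ′ + f g ′ ) e^g ,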
so if e g were in the result of an indefinite integration, it should be expected to be inside the integral. Also, as
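d/dx ( (ln g)^n ) = n (ln g)^(n−1) g′/g ,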
then if (ln g ) n were in the result of an integration, then only a few powers of the logarithm should be expected.
Finding an elementary antiderivative is very sensitive to details. For instance, the following algebraic function (posted to sci.math.symbolic by Henri Cohen in 1993 [ 3 ] ) has an elementary antiderivative, as Wolfram Mathematica since version 13 shows (however, Mathematica does not use the Risch algorithm to compute this integral): [ 4 ] [ 5 ]
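f(x) = x / √( x^4 + 10x^2 − 96x − 71 )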
namely:
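− 1/8 ln( (x^6 + 15x^4 − 80x^3 + 27x^2 − 528x + 781) √( x^4 + 10x^2 − 96x − 71 ) − (x^8 + 20x^6 − 128x^5 + 54x^4 − 1408x^3 + 3124x^2 + 10001) ) + C ,

in the form usually reproduced in the literature.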
But if the constant term 71 is changed to 72, it is not possible to represent the antiderivative in terms of elementary functions, [ 6 ] as FriCAS also shows. Some computer algebra systems may here return an antiderivative in terms of non-elementary functions (i.e. elliptic integrals ), which are outside the scope of the Risch algorithm. For example, Mathematica returns a result with the functions EllipticPi and EllipticF. Integrals of the form ∫ x + A x 4 + a x 3 + b x 2 + c x + d d x {\displaystyle \int {\frac {x+A}{\sqrt {x^{4}+ax^{3}+bx^{2}+cx+d}}}\,dx} were studied by Chebyshev , who determined in which cases they are elementary, [ 7 ] but the strict proof was ultimately given by Zolotarev . [ 6 ]
The following is a more complex example that involves both algebraic and transcendental functions : [ 8 ]
In fact, the antiderivative of this function has a fairly short form that can be found using substitution u = x + x + ln x {\displaystyle u=x+{\sqrt {x+\ln x}}} ( SymPy can solve it while FriCAS fails with "implementation incomplete (constant residues)" error in Risch algorithm):
Some Davenport "theorems" [ definition needed ] are still being clarified. For example in 2020 a counterexample to such a "theorem" was found, where it turns out that an elementary antiderivative exists after all. [ 9 ]
Transforming Risch's theoretical algorithm into an algorithm that can be effectively executed by a computer was a complex task which took a long time.
The case of the purely transcendental functions (which do not involve roots of polynomials) is relatively easy and was implemented early in most computer algebra systems . The first implementation was done by Joel Moses in Macsyma soon after the publication of Risch's paper. [ 10 ]
The case of purely algebraic functions was partially solved and implemented in Reduce by James H. Davenport – for simplicity it could only deal with square roots and repeated square roots and not general radicals or other non-quadratic algebraic relations between variables. [ 11 ]
The general case was solved and almost fully implemented in Scratchpad, a precursor of Axiom , by Manuel Bronstein; Axiom's fork FriCAS continues active development of the Risch and other algorithms on GitHub. [ 12 ] [ 13 ] However, the implementation did not include some of the branches for special cases completely. [ 14 ] [ 15 ] Currently, as of 2025, there is no known full implementation of the Risch algorithm. [ 16 ]
The Risch algorithm applied to general elementary functions is not an algorithm but a semi-algorithm because it needs to check, as a part of its operation, if certain expressions are equivalent to zero ( constant problem ), in particular in the constant field. For expressions that involve only functions commonly taken to be elementary it is not known whether an algorithm performing such a check exists (current computer algebra systems use heuristics); moreover, if one adds the absolute value function to the list of elementary functions, then it is known that no such algorithm exists; see Richardson's theorem .
This issue also arises in the polynomial division algorithm ; this algorithm will fail if it cannot correctly determine whether coefficients vanish identically. [ 17 ] Virtually every non-trivial algorithm relating to polynomials uses the polynomial division algorithm, the Risch algorithm included. If the constant field is computable , i.e., for elements not dependent on x , then the problem of zero-equivalence is decidable, so the Risch algorithm is a complete algorithm. Examples of computable constant fields are ℚ and ℚ( y ) , i.e., rational numbers and rational functions in y with rational-number coefficients, respectively, where y is an indeterminate that does not depend on x .
This is also an issue in the Gaussian elimination matrix algorithm (or any algorithm that can compute the nullspace of a matrix), which is also necessary for many parts of the Risch algorithm. Gaussian elimination will produce incorrect results if it cannot correctly determine whether a pivot is identically zero [ citation needed ] . | https://en.wikipedia.org/wiki/Risch_algorithm |
The rise in core ( RIC ) method is an alternative reservoir wettability characterization method described by S. Ghedan and C. H. Canbaz in 2014. The method enables estimation of all wetting regions, such as the strongly water-wet, intermediate-wet, oil-wet and strongly oil-wet regions, in relatively quick and accurate measurements expressed in terms of contact angle rather than a wettability index.
During the RIC experiments, core samples saturated with a selected reservoir fluid are subjected to imbibition by a second reservoir fluid. RIC wettability measurements were compared with modified Amott [ 1 ] and USBM measurements using core plug pairs from different heights of a thick carbonate reservoir, and the results show good agreement. The RIC method is thus an alternative to the Amott and USBM methods that efficiently characterizes reservoir wettability. [ 2 ] [ 3 ]
One study used the water advancing contact angle to estimate the wettability of fifty-five oil reservoirs. De-oxygenated synthetic formation brine and dead anaerobic crude were tested on quartz and calcite crystals at reservoir temperature. Contact angles from 0 to 75 degrees were deemed water wet, 75 to 105 degrees intermediate, and 105 to 180 degrees oil wet. [ 4 ] Although the range of wettabilities was divided into three regions, these were arbitrary divisions. The wettability of different reservoirs can vary within the broad spectrum from strongly water-wet to strongly oil-wet.
Another study described two initial conditions, reference and non-reference, for calculating cut-off values by using advancing and receding contact angles and spontaneous imbibition data. [ 5 ] The limiting value between the water-wet and intermediate zones was put at 62 degrees. Accordingly, the cut-off values for the advancing contact angle were described as 0 to 62 degrees for the water-wet region, 62 to 133 degrees for the intermediate-wet zone, and 133 to 180 degrees for the oil-wet zone. Chilingar and Yen [ 6 ] examined extensive research work on 161 limestone , dolomitic limestone, calcitic dolomite, and dolomite cores. Cut-off values were classified as 160 to 180 degrees for strongly oil wet, 100 to 160 degrees for oil wet, 80 to 100 degrees for intermediate wet, 20 to 80 degrees for water wet and 0 to 20 degrees for strongly water wet.
Rise in core uses a combination of the Chilingar et al. and Morrow wettability cut-off criteria. The contact angle range 80–100 degrees indicates neutral wetness, the range 100–133 degrees slight oil-wetness, the range 133–160 degrees oil-wetness, and the range 160–180 degrees strong oil-wetness. The range 62–80 degrees indicates slight water-wetness, the range 20–62 degrees water-wetness, and the range 0–20 degrees strong water-wetness.
The RIC wettability characterization technique is based on a modified form of the Washburn equation (1921). The technique enables relatively quick and accurate measurements of wettability in terms of contact angle while requiring no complex equipment. The method is applicable to any set of reservoir fluids, any type of reservoir rock, and any heterogeneity level. It characterizes wettability across the board, from strongly water-wet to strongly oil-wet conditions. [ 7 ]
The step of deriving the modified form of Washburn equation for a rock/liquid/liquid system involves acquiring a Washburn equation for a rock/air/liquid system. The Washburn equation for a rock/air/liquid system is represented by:
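m^2 / t = C ρ^2 γ cos θ / μ (1)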
Herein, "t" is the penetration rate of liquid into a porous sample, "μ" is the liquid's viscosity , "ρ" is the liquid's density , "γ" is the liquid's surface tension , "θ" is the liquid's contact angle, "m" is the mass of the liquid that penetrates the porous sample and "C" is the constant of characterization of the porous sample. evaluating a value of "γ os " using a young’s equation for a rock surface/water/air system (Figure 2) and a value of "γ ws " using young’s equation for a liquid/liquid/rock system is represented as:
"γ ow " is the surface tension between the oil and water system, "γ os " is the surface tension between oil and solid system and "γ ws " is the surface tension between water and the solid system. Using Young's equation for a rock surface/ water/air system and substituting in equation (2) to obtain equation 3:
Rearranging equation (1) to factor out γ LV , the liquid–vapor surface tension, gives equation (4):
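γ LV cos θ = ( m^2 / t ) · μ / ( C ρ^2 ) (4)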
Realizing that γ LV (liquid–vapor surface tension) is equivalent to γ o (oil–air surface tension), or γ w (water–air-surface tension), substituting equation (4) in equation (3) and cancelling out similar terms obtains equation (5):
Therein, γ LV is liquid-vapor surface tension, γ o is oil-air surface tension, γ w is water-air surface tension, μ o is viscosity of oil and μ w is viscosity of water. cosθ wo is contact angle between water and oil; representing a relationship between a mass of water imbibed into the core sample and a mass of oil imbibed in the core sample with an equation (6):
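ρ w V w g = ρ o V o g (6)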
Therein, ρ w is the density of water and V w the volume of water imbibed, and ρ o is the density of oil and V o the volume of oil imbibed. The amounts of water and oil imbibed under gravity are the same, and air behaves as a strong non-wetting phase in both the oil–air–solid and water–air–solid systems, so both oil and water behave as strong wetting phases. This results in equal air/oil and air/water capillary forces for the same porous medium and a given pore size distribution. Thus the mass change of a core sample due to water imbibition equals the mass change due to oil imbibition, because penetration of the porous medium by water or oil at any time is governed by the balance between gravity and capillary forces. For core samples of the same rock type and dimensions, and for equal capillary forces, the mass of water imbibed into a core sample is therefore approximately equal to the mass of oil imbibed.
Cancelling out g in equation(6) gives equation (7):
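ρ w V w = ρ o V o (7)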
which means
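m w = m o (8)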
Therein, m w is the mass of water and m o is the mass of oil. Factoring out C m 2 t {\displaystyle C{m^{2} \over t}} from equation (5) to obtain equation (9) gives the modified Washburn equation:
Therein, θ 12 is the contact angle of the liquid/liquid/rock system, μ 1 is the viscosity of the oil phase, μ 2 is the viscosity of the water phase, ρ 1 is the density of the oil phase in g/cm 3 , ρ 2 is the density of the water phase in g/cm 3 , m is the mass of fluid penetrated into the porous rock, t is time in minutes, γ L1L2 is the surface tension between the oil and the water in dyne/cm, and ∁ is a characteristic constant of the porous rock.
A schematic view and the experimental setup of the RIC wettability testing method are shown in Figure 1. Core plugs are divided into 3–4 core samples, each of 3.8 cm average diameter and 1.5 cm length. The lateral area of each core sample is sealed with epoxy resin to ensure one-dimensional liquid penetration into the core by imbibition. A hook is mounted on the top side of the core sample.
The RIC setup includes a beaker to host the imbibing fluid. A thin rope connects the core sample to a high-precision balance (0.001 gm accurate). A hanging core sample is positioned with the bottom part of the sample barely touching the imbibing fluid in the beaker. Relative saturation as well as mass of core samples starts to change during imbibition. A computer connected to a balance continuously monitors the core sample mass change over time. Plots of squared mass change versus time are generated. [ 2 ] [ 8 ]
The RIC experiment is first performed with an n- dodecane –air–rock system to determine the constant ∁ of the Washburn equation. N-dodecane imbibes into one of the core samples and the imbibition curve is recorded (Figure 2). Dodecane is an alkane with low surface energy that very strongly wets the rock sample in the presence of air, with contact angle θ equal to zero. The constant ∁ is determined from this known contact angle of the dodecane/air/rock system, the physical properties of n-dodecane (ρ, μ, γ) and a rearrangement of equation (1):
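∁ = ( m^2 / t ) · μ / ( ρ^2 γ )

since cos θ = 1 for the strongly wetting dodecane. As a rough numerical sketch of this first step (the fluid properties below are textbook values for n-dodecane, and the imbibition slope is an invented number, not measured data):

```python
# Sketch of the C-determination step of the RIC method.
# Assumes cos(theta) = 1 for the n-dodecane/air/rock system, so the
# Washburn relation m^2/t = C*rho^2*gamma*cos(theta)/mu rearranges to
# C = (m^2/t)*mu/(rho^2*gamma). All numbers are illustrative.

mu = 1.34e-3        # n-dodecane viscosity, Pa*s
rho = 750.0         # n-dodecane density, kg/m^3
gamma = 25.35e-3    # n-dodecane surface tension, N/m

slope = 4.0e-9      # measured slope m^2/t of the imbibition curve, kg^2/s

C = slope * mu / (rho ** 2 * gamma)   # characteristic constant of the rock
print(f"C = {C:.3e} m^5")
```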
The second step of the RIC experimental process is to saturate the neighboring core sample with crude oil and subjected the sample to water imbibition. Applying the slope of the RIC curve ( m 2 t ) {\displaystyle ({m^{2} \over t})} , fluid properties of oil/brine system (ρ,μ,γ) and the ∁ value are determined from the neighboring core sample into Eq. 9 to calculate the contact angle, θ. | https://en.wikipedia.org/wiki/Rise_in_core |
In electronics , when describing a voltage or current step function , rise time is the time taken by a signal to change from a specified low value to a specified high value. [ 1 ] These values may be expressed as ratios [ 2 ] or, equivalently, as percentages [ 3 ] with respect to a given reference value. In analog electronics and digital electronics , [ citation needed ] these percentages are commonly the 10% and 90% (or equivalently 0.1 and 0.9 ) of the output step height: [ 4 ] however, other values are commonly used. [ 5 ] For applications in control theory, according to Levine (1996 , p. 158), rise time is defined as " the time required for the response to rise from x% to y% of its final value ", with 0% to 100% rise time common for underdamped second order systems, 5% to 95% for critically damped and 10% to 90% for overdamped ones. [ 6 ] According to Orwiler (1969 , p. 22), the term "rise time" applies to either positive or negative step response , even if a displayed negative excursion is popularly termed fall time . [ 7 ]
Rise time is an analog parameter of fundamental importance in high speed electronics , since it is a measure of the ability of a circuit to respond to fast input signals. [ 8 ] There have been many efforts to reduce the rise times of circuits, generators, and data measuring and transmission equipment. These reductions tend to stem from research on faster electron devices and from techniques of reduction in stray circuit parameters (mainly capacitances and inductances). For applications outside the realm of high speed electronics , long (compared to the attainable state of the art) rise times are sometimes desirable: examples are the dimming of a light, where a longer rise-time results, amongst other things, in a longer life for the bulb, or in the control of analog signals by digital ones by means of an analog switch , where a longer rise time means lower capacitive feedthrough, and thus lower coupling noise to the controlled analog signal lines.
For a given system output, its rise time depends both on the rise time of the input signal and on the characteristics of the system . [ 9 ]
For example, rise time values in a resistive circuit are primarily due to stray capacitance and inductance . Since every circuit has not only resistance , but also capacitance and inductance , a delay in voltage and/or current at the load is apparent until the steady state is reached. In a pure RC circuit , the output risetime (10% to 90%) is approximately equal to 2.2 RC . [ 10 ]
Other definitions of rise time, apart from the one given by the Federal Standard 1037C (1997 , p. R-22) and its slight generalization given by Levine (1996 , p. 158), are occasionally used: [ 11 ] these alternative definitions differ from the standard not only in the reference levels considered. For example, the time interval graphically corresponding to the intercept points of the tangent drawn through the 50% point of the step function response is occasionally used. [ 12 ] Another definition, introduced by Elmore (1948 , p. 57), [ 13 ] uses concepts from statistics and probability theory . Considering a step response V ( t ) , he redefines the delay time t D as the first moment of its first derivative V′ ( t ) , i.e.
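t D = ∫ 0 ∞ t V′ ( t ) dt , assuming the step response is normalized to a final value of one.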
Finally, he defines the rise time t r by using the second moment
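t r = √( 2π ∫ 0 ∞ ( t − t D )^2 V′ ( t ) dt ) , the factor 2π being chosen so that, for a Gaussian step response, this value is close to the conventional 10% to 90% rise time.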
All notations and assumptions required for the analysis are listed here.
The aim of this section is the calculation of rise time of step response for some simple systems:
A system is said to have a Gaussian response if it is characterized by the following frequency response
where σ > 0 is a constant, [ 14 ] related to the high cutoff frequency by the following relation:
Even if this kind of frequency response is not realizable by a causal filter , [ 15 ] its usefulness lies in the fact that the behaviour of a cascade connection of first order low pass filters approaches the behaviour of this system more closely as the number of cascaded stages asymptotically rises to infinity . [ 16 ] The corresponding impulse response can be calculated using the inverse Fourier transform of the shown frequency response
Applying directly the definition of step response ,
To determine the 10% to 90% rise time of the system it is necessary to solve for time the two following equations:
By using known properties of the error function , the value t = − t 1 = t 2 is found: since t r = t 2 - t 1 = 2 t ,
and finally
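t r ≈ 0.34 / f H , i.e. the rise time–bandwidth product of a Gaussian system is approximately 0.34.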
For a simple one-stage low-pass RC network , [ 18 ] the 10% to 90% rise time is proportional to the network time constant τ = RC :
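t r ≈ 2.2 τ = 2.2 R C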
The proportionality constant can be derived from the knowledge of the step response of the network to a unit step function input signal of V 0 amplitude:
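V ( t ) = V 0 ( 1 − e^(−t/τ) )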
Solving for time
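e^(−t/τ) = 1 − V ( t ) / V 0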
and finally,
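t = τ ln( V 0 / ( V 0 − V ( t ) ) )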
Since t 1 and t 2 are such that
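V ( t 1 ) = 0.1 V 0 and V ( t 2 ) = 0.9 V 0 ,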
solving these equations we find the analytical expression for t 1 and t 2 :
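t 1 = τ ln( 10 / 9 ) ≈ 0.105 τ , t 2 = τ ln( 10 ) ≈ 2.303 τ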
The rise time is therefore proportional to the time constant: [ 19 ]
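t r = t 2 − t 1 = τ ln 9 ≈ 2.197 τ ≈ 2.2 τ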
Now, noting that
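τ = R C = 1 / ( 2π f H ) ,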
then
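t r = ln 9 / ( 2π f H ) ≈ 0.349 / f H ,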
and since the high frequency cutoff is equal to the bandwidth,
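t r ≈ 0.35 / BW .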
Finally note that, if the 20% to 80% rise time is considered instead, t r becomes:
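t r = τ ln 4 ≈ 1.39 τ ≈ 0.22 / BW .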
Even for a simple one-stage low-pass RL network, the 10% to 90% rise time is proportional to the network time constant τ = L ⁄ R . The formal proof of this assertion proceed exactly as shown in the previous section: the only difference between the final expressions for the rise time is due to the difference in the expressions for the time constant τ of the two different circuits, leading in the present case to the following result
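These single-pole results lend themselves to a quick numerical check. The sketch below, assuming the relations t r = τ ln 9 and f H = 1/(2πτ) derived above, computes the rise time and bandwidth of RC and RL networks; the component values are illustrative.

```python
# Rise time and bandwidth of one-stage low-pass networks, assuming
# the single-pole relations t_r = tau*ln(9) and f_H = 1/(2*pi*tau).
import math

def rc_rise_time(R, C):
    """10%-90% rise time of a one-stage low-pass RC network."""
    return math.log(9) * R * C          # ~2.197 * tau

def rl_rise_time(R, L):
    """10%-90% rise time of a one-stage low-pass RL network."""
    return math.log(9) * L / R          # tau = L/R

def bandwidth_from_rise_time(t_r):
    """3 dB bandwidth from the 10%-90% rise time (t_r * BW ~ 0.35)."""
    return math.log(9) / (2 * math.pi * t_r)

t_r = rc_rise_time(R=1e3, C=100e-12)    # 1 kOhm, 100 pF (illustrative)
print(f"t_r = {t_r * 1e9:.1f} ns, BW = {bandwidth_from_rise_time(t_r) / 1e6:.2f} MHz")
```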
According to Levine (1996 , p. 158), for underdamped systems used in control theory rise time is commonly defined as the time for a waveform to go from 0% to 100% of its final value: [ 6 ] accordingly, the rise time from 0 to 100% of an underdamped 2nd-order system has the following form: [ 21 ]
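t r = ( π − β ) / ω d , where β = arccos( ζ ) and ω d = ω 0 √( 1 − ζ^2 ) is the damped natural frequency (one common form of this result).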
The quadratic approximation for normalized rise time for a 2nd-order system, step response , no zeros is:
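ω 0 t r ≈ 2.230 ζ^2 − 0.078 ζ + 1.12 , for 0 ≤ ζ ≤ 1 (as the approximation is usually quoted),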
where ζ is the damping ratio and ω 0 is the natural frequency of the network.
Consider a system composed by n cascaded non interacting blocks, each having a rise time t r i , i = 1,…, n , and no overshoot in their step response : suppose also that the input signal of the first block has a rise time whose value is t r S . [ 22 ] Afterwards, its output signal has a rise time t r 0 equal to
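t r 0 = √( t r S ^2 + t r 1 ^2 + t r 2 ^2 + ⋯ + t r n ^2 )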
According to Valley & Wallman (1948 , pp. 77–78), this result is a consequence of the central limit theorem and was proved by Wallman (1950) : [ 23 ] [ 24 ] however, a detailed analysis of the problem is presented by Petitt & McWhorter (1961 , §4–9, pp. 107–115), [ 25 ] who also credit Elmore (1948) as the first one to prove the previous formula on a somewhat rigorous basis. [ 26 ] | https://en.wikipedia.org/wiki/Rise_time |
A riser clamp is a type of hardware used by mechanical building trades for pipe support in vertical runs of piping (risers) at each floor level. The devices are placed around the pipe, and integral fasteners are then tightened to clamp them onto the pipe. [ 1 ] [ 2 ] The friction between the pipe and riser clamp transfers the weight of the pipe through the riser clamp to the building structure. Risers are generally located at floor penetrations, particularly for continuous floor slabs such as concrete. [ 3 ] They may also be located at some other interval as dictated by local building codes or at intermediate intervals to support plumbing which has been altered or repaired. Heavier piping types, such as cast iron, require more frequent support. Ordinarily, riser clamps are made of carbon steel and individually sized to fit certain pipe sizes.
There are at least two types of riser clamp: the two-bolt pipe clamp and the yoke clamp. [ 4 ] | https://en.wikipedia.org/wiki/Riser_clamp |
The Harari–Shupe preon model (also known as rishon model , RM ) is the earliest effort to develop a preon model to explain the phenomena appearing in the Standard Model (SM) of particle physics . [ 1 ] It was first developed independently by Haim Harari and by Michael A. Shupe [ 2 ] and later expanded by Harari and his then-student Nathan Seiberg . [ 3 ]
The model has two kinds of fundamental particles called rishons ( ראשון, rishon means "first" in Hebrew ). They are T ("Third" since it has an electric charge of + 1 / 3 e ), or Tohu and V ("Vanishes", since it is electrically neutral), or Vohu. The terms tohu and vohu are picked from the Biblical phrase Tohu va-Vohu , for which the King James Version translation is "without form, and void". All leptons and all flavours of quarks are three-rishon ordered triplets. These groups of three rishons have spin- 1 / 2 . They are as follows:
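TTT = positron (charge +1); TTV = up quark (charge + 2 / 3 , the three orderings TTV, TVT and VTT corresponding to the three colours); TVV = down antiquark (charge + 1 / 3 ); VVV = electron neutrino (charge 0).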
Each rishon has a corresponding antiparticle. Hence:
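T̄T̄T̄ = electron (charge −1); T̄T̄V̄ = up antiquark (charge − 2 / 3 ); T̄V̄V̄ = down quark (charge − 1 / 3 ); V̄V̄V̄ = electron antineutrino (charge 0).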
The W + boson = TTTVVV;
The W − boson = TTTVVV .
Note that:
Baryon number ( B ) and lepton number ( L ) are not conserved, but the quantity B − L is conserved. A baryon number violating process (such as proton decay ) in the model would be
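TTV + TTV → TTT + TVV , i.e. u + u → e + + d̄ , a pure rearrangement of rishons; together with the spectator d quark of the proton, this yields p → e + + π 0 .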
In the expanded Harari–Seiberg version the rishons possess color and hypercolor, explaining why the only composites are the observed quarks and leptons. [ 3 ] Under certain assumptions, it is possible to show that the model allows exactly for three generations of quarks and leptons.
Currently, there is no scientific evidence for the existence of substructure within quarks and leptons, but there is no profound reason why such a substructure may not be revealed at shorter distances. In 2008, Piotr Żenczykowski derived the RM by starting from a non-relativistic O(6) phase space . [ 4 ] This model is based on fundamental principles and the structure of Clifford algebras , and fully recovers the RM, naturally explaining several obscure and otherwise artificial features of the original model.
A rising film or vertical long tube evaporator is a type of evaporator that is essentially a vertical shell and tube heat exchanger. The liquid being evaporated is fed from the bottom into long tubes and heated by steam condensing on the outside of the tubes on the shell side. The condensing steam brings the liquid inside the tubes to a boil, producing vapour within them. The vapour produced then presses the liquid against the walls of the tubes and drives it upwards. As more vapour is formed, the centre of the tube develops a higher velocity, which forces the remaining liquid against the tube wall, forming a thin film that moves upwards. This phenomenon of the rising film gives the evaporator its name.
Applications:
There is a wide range of applications for rising tube evaporators, including effluent treatment, production of polymers, food production, thermal desalination, pharmaceuticals, and solvent recovery. Within these industries, rising tube evaporators are mainly used as reboilers for distillation columns, or as pre-concentrators, flash evaporators or pre-heaters designed to remove volatile components prior to stripping.
Thermal desalination:
A specific application of rising tube evaporators is the thermal desalination of sea water. Sea water is pumped into the long tubes of the evaporator while the heating media (usually steam) heats it up. As vapour forms inside the tubes it flows upwards. This evaporation occurs under vacuum conditions that allow for the use of lower temperatures.
Juice concentration and food processing:
The food industry requires the handling of delicate products that are sensitive to high temperatures over long periods of time. Rising film evaporators can operate quickly and efficiently enough to avoid exposing the product to high temperatures that may damage or undermine its quality. Hence, they are suitable for use as concentrators for juices, milk and other dairy products that require delicate handling.
The main advantage of the rising film evaporator is the low residence time of the liquid feed in the evaporator compared to other evaporator designs, such as plate-type evaporators. This is crucial because it allows the evaporator to be used at higher operating temperatures while assuring high product quality even for heat-sensitive products. A further advantage is the option of operating the evaporator as a continuous process, which is overall more energy- and time-efficient than batch operation. [ 1 ]
High heat transfer coefficients:
Another significant advantage is the relatively high heat transfer coefficient of this evaporator type. This is essential as it reduces the overall heat transfer area requirement, which in turn lowers the initial capital cost of the evaporator. This is accentuated by the fact that the components, which consist of a shell and tubes, are easily obtainable in customized designs, making them cost effective to construct and ideal for simple evaporation requirements. Moreover, this type of evaporator can easily accommodate the widely available vapour separators needed for foaming products.
While rising film evaporators are relatively efficient and have several advantages, some literature suggests that they are not as efficient as vertical or horizontal tube falling film evaporators. As such, falling film evaporators are nowadays usually chosen in place of rising film evaporators, because they offer similar advantages with the additional benefit of better efficiency. Moreover, a rising film evaporator requires a driving force to move the film against gravity, which imposes a limitation: a sufficient temperature difference across the heating surfaces is needed to provide that driving force. [ 1 ]
Limited product versatility:
Another major limitation of rising film evaporators is the requirement for the products to be of low viscosity and have minimal fouling tendencies. Competitive process designs like plate-type evaporators can handle liquids that are more viscous with higher fouling tendencies because the inner parts are more easily accessible for cleaning and maintenance.
Evaporators have the aim of concentrating a solution by vaporising the solvent. To assess the performance of a rising film evaporator, the capacity and economy of the evaporator are measured. Capacity is the amount of water vaporized per unit time, while steam economy is the amount of solvent vaporised per unit mass of steam consumed. Hence, the main process attributes are the characteristics of the process that significantly affect these two quantities. [ 2 ]
Rising film evaporators use the same heat transfer principle as a general shell and tube heat exchanger, so the overall heat transfer rate is crucial in determining the performance of the evaporator. This factor determines the capacity of the rising film evaporator. The fundamental general formula for the overall heat transfer rate is [ 3 ]
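Q = U A ΔT lm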
where
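Q is the overall rate of heat transfer, U is the overall heat transfer coefficient, A is the heat transfer area, and ΔT lm is the log mean temperature difference between the heating steam and the boiling liquid.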
For a general shell and tube heat exchanger, U is given by the equation [ 3 ]
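1 / U o = 1 / h o + D o ln( D o / D i ) / ( 2 k w ) + D o / ( D i h i )

(a clean-wall form referenced to the outer tube area; fouling resistances add further terms)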
where
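h i and h o are the film heat transfer coefficients on the inside (tube-side liquid) and outside (shell-side steam) of the tubes, D i and D o are the inner and outer tube diameters, and k w is the thermal conductivity of the tube wall.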
A is given by the equation
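A = N π D o L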
where
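N is the number of tubes, D o is the outer tube diameter and L is the length of the tubes.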
This is the contact area of heat transfer which involves the outer surface area of the long vertical tubes that are parallel and in direct contact with the heating media housed within the shell of the evaporator.
The log mean temperature difference (LMTD), T lm , is given by the equation [ 3 ]
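ΔT lm = ( ΔT 1 − ΔT 2 ) / ln( ΔT 1 / ΔT 2 )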
where
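ΔT 1 and ΔT 2 are the temperature differences between the hot and cold fluids at the two ends of the exchanger.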
The hot fluid in the case of the rising film evaporator would be the steam in the shell side and the cold fluid would be the liquid inside the long tubes. In relation to the overall heat transfer rate, there are several key parameters that affect this characteristic specifically in terms of a rising film evaporator.
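As a brief numerical sketch of the relations above (all values are illustrative; in an evaporator where the condensing steam and the boiling liquid are both nearly isothermal, the two end temperature differences collapse towards a single ΔT):

```python
# Overall heat transfer rate Q = U*A*dT_lm for a tube bundle,
# with illustrative (not design) values throughout.
import math

def lmtd(dt1, dt2):
    """Log mean temperature difference from the two end differences."""
    if dt1 == dt2:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

U = 1500.0                              # overall coefficient, W/(m^2 K), assumed
n_tubes, d_o, length = 100, 0.025, 6.0  # tube count, outer diameter (m), length (m)
A = n_tubes * math.pi * d_o * length    # outer tube surface area, m^2

dT = lmtd(25.0, 10.0)                   # end temperature differences, K
Q = U * A * dT                          # overall heat transfer rate, W
print(f"A = {A:.1f} m^2, LMTD = {dT:.1f} K, Q = {Q / 1e6:.2f} MW")
```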
For a rising film evaporator, the main paths of heat transfer are conduction and convection. These occur as the steam on the shell side heats up the tube and as the tube heats up the liquid and vapour within. The design of the long vertical tubes in rising tube evaporators promotes the formation of the long, thin and continuous film of liquid formed by the pressure exerted by the vapour, which occupies the centre part of the tube and rises up. This ascending motion of film and vapour in the centre promotes great turbulence, which allows for higher heat transfer coefficients and hence more efficient heat transfer. Another significant factor that affects the value of the heat transfer coefficients is the design of the evaporator.
According to the laws of heat transfer, the heat transfer rate initially increases as the temperature difference increases towards the boiling point of the materials, in the case of a constant feed flow rate. Hence, it is generally good to have a large temperature difference for this process. However, as the vapour bubbles gradually fill the entire centre of the tube, the steam pressure reaches a peak value. Beyond this point, the temperature difference, which acts as the main driving force for the heat transfer and for the rising film, will start to fall with any further increase in steam pressure. In addition, there are other constraints in terms of product quality and consistency when considering increasing the temperature difference. The temperature difference available as driving force is also heavily determined by the properties of the steam and of the boiling liquid. [ 3 ]
This is the total contact area between the heating media and the liquid requiring the heating, or any intermediate surfaces in between them. In the case of the rising film evaporator this is the region between the film and the surface of the long tubes, together with the surrounding heating media within the shell of the evaporator. As shown in the equation, the heat transfer area is one of the chief factors that affect the rate of conductive heat transfer. Therefore, it would be favourable to maximise the amount of area available for heat transfer. The limitation here is the cost per unit area, as increasing the area means increasing the length of the tubes and the shell of the evaporator, which can significantly increase the cost of construction and maintenance. [ 4 ] [ 5 ]
The rising film formed by the pressure of the vapour within the vertical tubes greatly influences the efficiency of heat transfer, because the thickness of the film affects both the heat transfer coefficients and the contact area for heat transfer. A thin, long film is therefore favoured, as it reduces the distance between the two heat transfer surfaces, giving higher heat transfer coefficients and a higher overall heat transfer rate.
Residence time indicates the time taken for the product to undergo the entire procedure; more precisely, the mean residence time of a product specifies the average time the product stays within the evaporator. In many industries, especially food and beverage manufacturing, clients' requirements greatly limit the residence time, keeping it as short as possible to minimise the destruction of nutrients through exposure to intense heat.
This can be addressed in various ways, such as installing a pre-heating system ahead of the main evaporator to heat the liquid feed until its temperature approaches the boiling point. This lowers the load on the evaporator and reduces the residence time.
A number of evaporator designs have been developed from the fundamental principle of the rising film evaporator, the thermosiphon. The available designs are mostly customised by private companies and industries according to the application and the desired products. This is essential to ensure that the optimal product is obtained while maximising the efficiency and cost-effectiveness of the design.
Artisan Industries is a company that specialises in customised thermal separation equipment. The Artisan Rising Film Evaporator has the same basic working principle as the general long-tube vertical evaporator, but it is modified to handle more viscous and volatile materials that the orthodox design may not be able to process due to excessive fouling.
In relation to that, the Artisan Rising Film Evaporator is designed to eliminate the majority of the volatile components before stripping, and is usually used as a flash evaporator or pre-heater. [ citation needed ] The design allows the operator to control the feed rate or steam rate in order to remove residues, to adjust to product behaviour, and to maximise steam economy. This evaporator is appropriate for high-temperature applications and for materials of high viscosity that have a propensity to foul the transfer surfaces. [ 6 ]
The Rising Thin Film Vacuum Evaporator is a modification of the original rising film evaporator whose main difference is that it allows the liquid to evaporate at a lower temperature. This is possible because it operates under vacuum, which also avoids undesired changes in the liquid. The design is intended to re-concentrate a diluted solution to its desired concentration by evaporation, while simultaneously condensing the evaporated water so that it can be recovered for recirculation or other purposes. Many different Rising Thin Film Vacuum Evaporator models are available, with different capacities, concentration controls and condenser designs to obtain the product at the optimal condition. [ 7 ] In addition, this design is compact, allows easy maintenance of solution concentration, and is applicable to highly corrosive and effervescent liquids.
The Semi-Kestner, also known as the semi-rising film evaporator, is widely used in the sugar industry. The equipment is fitted with a poly-baffle catcher to avoid juice entrainment, and juice distribution is made more effective by means of a juice coil and juice flushing. The design yields syrup of high brix and supplies high vapour pressure while requiring less steam. [ 8 ] With a short retention time of the liquor and good heat transfer design features, the juice passes over the heating surface only once and flows back without any discharge. [ 9 ]
The temperature difference, that is, the log mean temperature difference between the heating medium and the boiling liquid, must be high enough to generate sufficient ascending force of the vapour on the tube side to force the liquid film to flow upwards. In general, the greater the temperature difference, the better the driving force of the steam. A high temperature difference also increases the flow rate of the liquid and vapour within the tube; this increased flow rate causes higher turbulence, which in turn increases the heat transfer coefficient. However, the overall temperature difference has to remain within the range set by the boiling points of the two components, as it could otherwise affect the quality and purity of the products. [ 10 ]
Sizing a rising film evaporator is generally a sensitive task, because it requires a good understanding of the process requirements and the behaviour of the materials involved. [ 10 ] In terms of cost efficiency, long, thin tubes are generally cheaper, because larger shell sizes for shell and tube heat exchangers are usually more expensive. Nonetheless, weighing the cost of construction against the requirements, the size can always be adjusted and customised depending on the application and the desired products. Tube lengths generally range between 4 and 8 m, with diameters of 25 mm to 50 mm. [ 11 ]
Thermal economy is a major consideration in the design of a rising film evaporator. To optimise it, the design parameters with the greatest influence on thermal economy must be considered. One major factor is the overall heat transfer area: to maximise thermal economy it is generally acceptable to maximise the heat transfer area, as a larger area gives a higher heat transfer rate. However, increasing the heat transfer area entails enlarging the dimensions of the evaporator, which in turn increases the cost of construction, in addition to being subject to other limitations such as space and design constraints. [ 12 ] | https://en.wikipedia.org/wiki/Rising_film_evaporator |
Rising Step Load Testing (or RSL testing ) is a testing system that can apply loads in tension or bending to evaluate hydrogen-induced cracking (also called hydrogen embrittlement ). It was specifically designed to conduct the accelerated ASTM F1624 [ 1 ] step-modified, slow strain rate tests on a variety of test coupons or structural components. It can also function to conduct conventional ASTM E8 [ 2 ] tensile tests; ASTM F519 [ 3 ] 200-hr Sustained Load Tests with subsequent programmable step loads to rupture for increased reliability; and ASTM G129 Slow Strain Rate Tensile tests .
The RSL Testing System can be applied to all of the specimen geometries in ASTM F519, including notched round tensile bars, notched C-rings, and notched square bars. Product testing of actual hardware, such as fasteners, can also be conducted. Taking mechanical advantage by testing in bending allows large-diameter bolts to be tested with only a 1-kip load cell.
The RSL test method has been demonstrated to be a valuable tool in the testing of high-performance materials for determining susceptibility to hydrogen embrittlement . The test depends on the test machine's capability to provide a profile of incremental increases in the applied stress as a function of time. It is imperative that the load increases do not overshoot the next elevation in applied stress, which is achieved through careful design and operation of the loading mechanisms. Once this is achieved, repeatability is good, with variance in the low single digits that is probably related more to surface roughness, internal defects, and other intrinsic material property differences than to the testing equipment.
Precision in controlling the load allows for greater sensitivity in measuring crack extension, via load drop and compliance correlation, than is obtainable with high-voltage electrical resistivity measurements, and eliminates the need for clip gages. This capability allows for precise electronic detection of the maximum load required for crack tip opening displacement calculations of fracture toughness , and precise detection of the onset of crack growth required for measurement of the threshold stress for hydrogen embrittlement, environmental or stress corrosion cracking . | https://en.wikipedia.org/wiki/Rising_step_load_testing |
Risk-sensitive foraging models help to explain the variance in foraging behaviour in animals, allowing powerful predictions to be made about expected foraging behaviour for individual groups of animals. Risk-sensitive foraging is based on experimental evidence that an animal's net energy budget predicts the type of foraging activity it will employ. [ 1 ] Experiments indicate that individuals change the type of foraging strategy they use depending on environmental conditions and their ability to meet net energy requirements. When individuals can meet net energy requirements by accessing food in risk-averse ways, they do so. [ 2 ] [ 1 ] However, when net energy requirements cannot be met by risk-averse methods, individuals are more likely to take risk-prone actions in order to meet them. [ 2 ] [ 1 ]
Thomas Caraco and his colleagues were, in 1980, amongst the first to study risk-sensitive foraging behaviour, in yellow-eyed juncos . The original study used seven yellow-eyed juncos in a two-part experiment. Part one examined foraging behaviour in five juncos given a choice between a perch where enough seeds were placed every time to meet their 24-hour energy requirements and a perch where they would sometimes find an abundance of seeds and sometimes none. [ 1 ] All individuals preferred to feed at the perch where they could get their daily seed requirement, the risk-averse choice. Part two examined foraging preference in four juncos: on one perch seeds were present every time but not enough to meet their 24-hour energy requirement, while on the other perch they could sometimes find an abundance of seeds or none. [ 1 ] In this case the juncos preferred to feed at the variable-reward perch, choosing the risk-prone feeding option. [ 1 ] To test whether individuals would change their strategy in response to a changed environment, two of the juncos from part one were also used in part two of the experiment. [ 1 ] As expected, the juncos that preferred the risk-averse foraging strategy in part one switched to risk-prone foraging behaviour in part two.
Thomas Caraco conducted a follow-up experiment in 1981 with dark-eyed juncos , using a larger sample size. The results were similar: dark-eyed juncos prefer risk-averse foraging behaviour when their 24-hour energy budgets can be met, [ 3 ] but employ risk-prone foraging behaviour when they cannot. [ 3 ]
Risk-sensitive foraging has also been found in other animal species. Laboratory rats prefer to forage at a constant food supply source if they are able to meet their energy requirements, [ 4 ] but employ risk-prone foraging behaviour when the constant food source does not fulfil their daily energy requirement.
The common shrew also uses risk-sensitive foraging, choosing to be risk-averse when it can consistently meet its energy requirements, [ 2 ] but switching to risk-prone foraging and variable reward when its energy requirements are not met regularly.
Follow-up studies in hummingbirds have found conflicting evidence about risk-sensitive foraging. When hummingbirds were given three different choices of food supply, the risk-sensitive foraging model was not entirely accurate at predicting foraging strategy. [ 5 ] Choosing between experimentally manipulated flowers containing low-variance, high-variance or constant nectar, hummingbirds preferred nectar from the low-variance flower over every other choice. The researchers suggest that these results may be attributable to the hummingbirds being unable to visually assess the amount of nectar present in each flower. | https://en.wikipedia.org/wiki/Risk-sensitive_foraging_models |
Risk Evaluation and Mitigation Strategies ( REMS ) is a program of the US Food and Drug Administration for the monitoring of medications with a high potential for serious adverse effects . REMS applies only to specific prescription drugs, but can apply to brand-name or generic drugs. [ 1 ] The REMS program was formalized in 2007.
The FDA determines as part of the drug approval process that a REMS is necessary, and the drug company develops and maintains the individual program. [ 2 ] REMS for generic drugs may be created in collaboration with the manufacturer of the brand-name drug. [ 1 ] The FDA may remove the REMS requirement if it is found to not improve patient safety. [ 3 ]
The REMS program developed out of previous systems dating back to the 1980s for monitoring the use of a small number of high-risk drugs such as isotretinoin , which causes serious birth defects; clozapine , which can cause agranulocytosis ; and thalidomide , which is used to treat leprosy and certain cancers but causes serious birth defects. [ 4 ] The 2007 Food and Drug Administration Amendments Act created section 505-1 of the Food, Drug, and Cosmetic Act , which allowed for the creation of the REMS program for applying individual monitoring restrictions to medications. [ 5 ]
Some of the provisions required by the REMS program are training and certification of physicians allowed to prescribe the drug, requiring that the drug be administered in a hospital setting, requiring pharmacies to verify the status of patients receiving REMS drugs, requiring lab testing of patients to ensure that health status is satisfactory, or requiring that patients be entered into a registry. [ 6 ]
As of 2018, there are 74 medications subject to REMS monitoring.
62% of these include "elements to assure safe use". These typically require clinicians or healthcare institutions to become certified prior to prescribing. 12% include only a "communication plan" REMS element, which is informational in nature. These communication plans are typically composed of letters, websites, and fact sheets describing the specific safety risks identified in the REMS. 26% include only the "medication guide" REMS element. [ 7 ]
In 2020, clinical settings enrolled in the REMS program asked the FDA to make their reviews of REMS compliance public so that they can more easily view the records and adjust to feedback. [ 8 ] Between 2014 and 2017, the FDA stated they did not have enough data to determine whether the REMS program was sufficiently preventing opioid abuse. [ 9 ] The Health and Human Services Office of the Inspector General recommended that parties in the REMS program provide the FDA more data. [ 10 ] The FDA was habitually late in evaluating that data, reportedly leaving those parties with inadequate time to react to the review before their next assessment. [ 10 ] In November 2020, the FDA planned to create a "Summary of the REMS Assessment" document that would publicize their assessments of clinical settings and manufacturers in the REMS program. [ 8 ] The FDA made a public request for comment on the idea of publishing the Summary of the REMS Assessment. [ 11 ] Without the publication of the summary, parties in the REMS program must request it using the Freedom of Information Act. [ 8 ] | https://en.wikipedia.org/wiki/Risk_Evaluation_and_Mitigation_Strategies |
Risk aversion is a preference for a sure outcome over a gamble with higher or equal expected value. Conversely, rejection of a sure thing in favor of a gamble of lower or equal expected value is known as risk-seeking behavior.
The psychophysics of chance induce overweighting of sure things and of improbable events, relative to events of moderate probability. Underweighting of moderate and high probabilities relative to sure things contributes to risk aversion in the realm of gains by reducing the attractiveness of positive gambles. The same effect also contributes to risk seeking in losses by attenuating the aversiveness of negative gambles. Low probabilities, however, are overweighted, which reverses the pattern described above: low probabilities enhance the value of long-shots and amplify aversion to a small chance of a severe loss. Consequently, people are often risk seeking in dealing with improbable gains and risk averse in dealing with unlikely losses. [ 1 ]
Most theoretical analyses of risky choices depict each option as a gamble that can yield various outcomes with different probabilities. [ 2 ] Widely accepted risk-aversion theories, including Expected Utility Theory (EUT) and Prospect Theory (PT), arrive at risk aversion only indirectly, as a side effect of how outcomes are valued or how probabilities are judged. [ 3 ] In these analyses, a value function indexes the attractiveness of varying outcomes, a weighting function quantifies the impact of probabilities, and value and weight are combined to establish a utility for each course of action. [ 2 ] This last step, combining the weight and value in a meaningful way to make a decision, remains sub-optimal in EUT and PT, as people's psychological assessments of risk do not match objective assessments.
Expected Utility Theory (EUT) poses a utility calculation linearly combining weights and values of the probabilities associated with various outcomes. By presuming that decision-makers themselves incorporate an accurate weighting of probabilities into calculating expected values for their decision-making, EUT assumes that people's subjective probability-weighting matches objective probability differences, when they are, in reality, exceedingly disparate. [ 2 ]
Consider the choice between a prospect that offers an 85% chance to win $1,000 (with a 15% chance to win nothing) and the alternative of receiving $800 for sure. A large majority of people prefer the sure thing over the gamble, although the gamble has the higher (mathematical) expected value (also known as expectation). The expected value of a monetary gamble is a weighted average, in which each possible outcome is weighted by its probability of occurrence. The expected value of the gamble in this example is 0.85 × $1,000 + 0.15 × $0 = $850, which exceeds the expected value of $800 associated with the sure thing. [ 1 ]
Research suggests that people do not evaluate prospects by the expected value of their monetary outcomes, but rather by the expected value of the subjective value of these outcomes (see also Expected utility ). [ 4 ] In most real-life situations, the probabilities associated with each outcome are not specified by the situation, but have to be subjectively estimated by the decision-maker. [ 5 ] The subjective value of a gamble is again a weighted average, but now it is the subjective value of each outcome that is weighted by its probability. [ 1 ] To explain risk aversion within this framework, Bernoulli proposed that subjective value, or utility, is a concave function of money. In such a function, the difference between the utilities of $200 and $100, for example, is greater than the utility difference between $1,200 and $1,100. It follows from concavity that the subjective value attached to a gain of $800 is more than 80% of the value of a gain of $1,000. [ 1 ] Consequently, the concavity of the utility function entails a risk averse preference for a sure gain of $800 over an 80% chance to win $1,000, although the two prospects have the same monetary expected value. [ 1 ]
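A brief Python sketch can make Bernoulli's argument concrete. It assumes a square-root utility function, one simple concave choice rather than the only possible one: the sure $800 yields higher expected utility than the gamble even though both prospects have the same expected monetary value.

```python
from math import sqrt

# Bernoulli-style concave utility: u(x) = sqrt(x)
def u(x):
    return sqrt(x)

# Gamble: 80% chance of $1,000, 20% chance of $0; sure thing: $800
expected_value_gamble = 0.80 * 1000 + 0.20 * 0        # $800, same as sure thing
expected_utility_gamble = 0.80 * u(1000) + 0.20 * u(0)
utility_sure_thing = u(800)

print(expected_value_gamble)     # 800.0 (equal expected values)
print(expected_utility_gamble)   # ~25.3
print(utility_sure_thing)        # ~28.3 -> the sure thing is preferred
```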
While EUT has dominated the analysis of decision-making under risk and has generally been accepted as a normative model of rational choice (telling us how we should make decisions), descriptive models of how people actually behave deviate significantly from this normative model. [ 5 ]
Modern Portfolio Theory (MPT) was created by economist Harry Markowitz in 1952 to mathematically measure an individual's risk tolerance and reward expectations. [ 6 ] The theory holds that, for a given level of variance, the expected return should be maximised, and for a given expected return, the variance should be minimised. Each asset must be considered in regard to how it will move within the market, and by taking these movements into account an investment portfolio can be constructed that decreases risk for a given expected return. [ 6 ]
Risk, and thus the level of additional expected return demanded, is calculated from the standard deviation of the return on investment (the square root of the variance). [ 6 ] The standard deviation illustrates the fluctuation of an asset's returns over a period of time, creating an accepted trading range within which to estimate possible returns on the asset. [ 6 ] This tool enables individuals to determine their level of risk aversion and to create a diversified portfolio.
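As a rough illustration of how MPT quantifies risk, the sketch below computes a two-asset portfolio's expected return and standard deviation from hypothetical return data; the asset returns and weights are invented for the example.

```python
import numpy as np

# Hypothetical annual returns of two assets over five years (assumed data)
asset_a = np.array([0.10, 0.04, 0.12, -0.02, 0.08])
asset_b = np.array([0.03, 0.07, 0.01, 0.09, 0.05])

weights = np.array([0.6, 0.4])            # portfolio allocation
returns = np.vstack([asset_a, asset_b])

expected = weights @ returns.mean(axis=1)  # portfolio expected return
cov = np.cov(returns)                      # covariance matrix of asset returns
variance = weights @ cov @ weights         # portfolio variance
std_dev = np.sqrt(variance)                # risk as standard deviation

print(f"expected return: {expected:.2%}, risk (std dev): {std_dev:.2%}")
```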
MPT has been critiqued for using standard deviation as its measure of risk. [ 7 ] Standard deviation is a relative form of measurement, and investors using it for risk assessment must analyse the context in which the market sits to gain a quantified understanding of what the standard deviation means. [ 7 ] MPT assumes that investors are risk averse; however, it can be used by all types of investors to suit their individual needs. Furthermore, under MPT two portfolios with the same level of variance are considered equally desirable, even though the first may experience small losses frequently while the second experiences a single severe decline. This contrast between portfolios needs to be examined by investors prior to purchasing assets. By measuring downside risk instead of volatility, post-modern portfolio theory aims to build on MPT. [ 7 ]
Prospect Theory (PT) claims that fair gambles (gambles in which the expected value of the current option and all other alternatives are held equal) are unattractive on the gain side but attractive on the loss side. In contrast to EUT, PT is posited as an alternative theory of choice, in which value is assigned to gains and losses rather than to final assets (total wealth), and in which probabilities are replaced by decision weights. [ 8 ] In an effort to capture inconsistencies in our preferences, PT offers a non-linear, S-shaped probability-weighted value function, implying that the decision-maker transforms probabilities along a diminishing sensitivity curve, in which the impact of a given change in probability diminishes with its distance from impossibility and certainty. [ 1 ]
The value function shown is:
A. Defined on gains and losses rather than on total wealth. [ 1 ] Prospects are coded as gains and losses from a zero point (e.g. using current wealth, rather than total wealth as a reference point), leading people to be risk averse for gains and risk seeking for losses. [ 5 ]
B. Concave in the domain of gains (risk aversion) and convex in the domain of losses (risk seeking). [ 1 ] The negatively accelerated nature of the function implies that people are risk averse for gains and risk seeking for losses. [ 5 ]
C. Considerably steeper for losses than for gains ( see also loss aversion ). [ 1 ] Steepness of the utility function in the negative direction (for losses over gains) explains why people are risk-averse even for gambles with positive expected values. [ 5 ]
While risk aversion is not part of PT per se, a pertinent part of PT is gain-loss asymmetry with regard to risk. PT's S-shaped, probability-weighted, non-linear value function deems risk aversion context-dependent, as the gain-loss asymmetry illustrated above results from psychological assessments of risk that hardly match objective assessments of risk. One conceivable component of risk aversion in the framework of PT is that the degree of risk aversion apparent will vary depending on where along the curve our decision lies.
Example: Participants are indifferent between receiving a lottery ticket offering a 1% chance at $200 and receiving $10 for sure. Additionally, people are indifferent between receiving a lottery ticket offering a 99% chance at $200 and receiving $188 for sure. [ 9 ]
In line with diminishing sensitivity, the first hundredth of probability is worth $10, and the last hundredth is worth $12, but the 98 intermediate hundredths are worth only $178, or about $1.80 per hundredth. PT captures this pattern of differentially weighting (objective) probabilities subjectively with an S-shaped weighting function. [ 9 ]
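The S-shaped weighting function can be sketched directly. The snippet below uses the one-parameter form proposed by Tversky and Kahneman (1992) with a typical fitted parameter of γ ≈ 0.61; this functional form and parameter value are one common choice in the literature, not the only formulation.

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in [0.01, 0.50, 0.99]:
    print(f"p = {p:.2f} -> w(p) = {w(p):.3f}")
# p = 0.01 -> w(p) ~ 0.055  (low probability overweighted)
# p = 0.50 -> w(p) ~ 0.421  (moderate probability underweighted)
# p = 0.99 -> w(p) ~ 0.912  (high probability underweighted)
```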
A framing effect occurs when transparently and objectively identical situations generate dramatically different decisions depending on whether the situations are presented or perceived as either potential losses or gains. [ 10 ] Framing effects play an integral role in risk-aversion, as an extension of PT's S-shaped value function, which illustrates the differences in how gains and losses are valued relative to a reference point.
Risky prospects are characterized by their possible outcomes and by the probabilities of these outcomes. [ 10 ] The same, possible outcomes of a gamble can be framed either as gains or as losses relative to the status quo. The following pair of problems attests to the power of framing effects in manipulating either risk-averse or risk-seeking behavior.
The total number of respondents in each problem is denoted by N, and the percentage who chose each option is indicated in parentheses.
Problem 1 (N = 152): Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
If Program A is adopted, 200 people will be saved. (72%)
If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. (28%)
Which of the two programs would you favor? [ 1 ]
The formulation of Problem 1 implicitly adopts as a reference point a state of affairs in which the disease is allowed to take its toll of 600 lives. The outcomes of the programs include the reference state and two possible gains, measured by the number of lives saved. As expected, preferences are risk averse: a clear majority of respondents prefer saving 200 lives for sure over a gamble that offers a one-third chance of saving 600 lives.
Now consider another problem in which the same cover story is followed by a different description of the prospects associated with the two programs:
Problem 2 (N = 155): Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
If Program C is adopted, 400 people will die. (22%)
If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. (78%)
It is easy to verify that options C and D in Problem 2 are indistinguishable in real terms from options A and B in Problem 1, respectively. The second version, however, assumes a reference state in which no one dies of the disease. The best outcome is the maintenance of this state, and the alternatives are losses measured by the number of people that will die of the disease. People who evaluate options in these terms are expected to show a risk-seeking preference for the gamble (option D) over the sure loss of 400 lives. Of course, the "sure loss" of 400 lives that participants found so unattractive is exactly the same outcome as the "sure gain" of 200 lives that subjects found so attractive in Problem 1. [ 5 ] The public health problem illustrates a formulation effect in which a change of wording from "lives saved" to "lives lost" induced a marked shift of preference from risk aversion to risk seeking. [ 1 ]
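A few lines of arithmetic, shown here in Python for transparency, confirm that the two framings describe identical prospects:

```python
# Expected number of survivors (out of 600) under each program
program_a = 200                          # "200 saved" for sure
program_b = (1/3) * 600 + (2/3) * 0      # gamble framed as lives saved
program_c = 600 - 400                    # "400 die" = 200 saved
program_d = (1/3) * 600 + (2/3) * 0      # "2/3 chance 600 die" = same gamble

print(program_a == program_c)  # True: A and C are identical in real terms
print(program_b == program_d)  # True: B and D are identical in real terms
```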
If preferences reverse based on inconsequential aspects of how the problem is framed, people cannot possibly be maximizing expected utility. [ 5 ] Latent here is the unsettling idea that people's preferences come from the outside (from whoever has the power to shape the environment and determine how questions are phrased), rather than from their own psychological makeup. [ 5 ] Decision-making in matters as important as lives saved or lives lost can reverse risk preference. This may be based on a rephrasing of the outcomes that conveys no differential information about the treatments and that changes nothing about the outcomes themselves. [ 5 ]
While risk aversion is commonly explained through EUT and PT, observed risk-averse behavior is not solely an artifact of these two theories; it extends beyond the bounds of what each theory can explain.
Both EUT and PT make the following falsifiable prediction: an individual cannot be so risk averse as to value a risky prospect less than the prospect's worst possible outcome. [ 3 ] On the contrary, several between-participant studies have found that people are willing to pay less, on average, for a binary lottery than for its worse outcome, a finding coined the uncertainty effect (UE). [ 11 ]
For example, people are willing to pay an average of $26 for a $50 gift certificate, but only $16 for a lottery that pays either a $50 or $100 gift certificate, with equal probability. [ 11 ]
UE, valuing a risky prospect below the value of its worse possible outcome, occurs as the result of a phenomenon known as direct risk aversion, a literal distaste for uncertainty, as uncertainty itself enters directly into people's utility function. [ 3 ]
EUT and PT predict that people should not purchase insurance for small-stakes risks, yet such forms of insurance (e.g., electronic warranties, insurance policies with low deductibles, mail insurance, etc.) are very popular. [ 3 ] Direct risk aversion may explain why, as people demonstrate their literal distaste for any and all levels of uncertainty. By paying a premium (often higher than the cost of replacement) for the possibility that insurance may come in handy, people display direct risk aversion by valuing a risky prospect below the value of its worst possible outcome (replacement at face-value).
Suppose you are undecided whether or not to purchase earthquake insurance because the premium is quite high. As you hesitate, your friendly insurance agent comes forth with an alternative offer: "For half the regular premium you can be fully covered if the quake occurs on an odd day of the month. This is a good deal because for half the price you are covered for more than half the days."
Why do most people find such probabilistic insurance distinctly unattractive? Starting anywhere in the region of low probabilities, the impact on the decision weight of a reduction of probability from p to p/2 is considerably smaller than the effect of a reduction from p/2 to 0. Reducing the risk by half, then, is not worth half the premium.
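Under a PT-style weighting function (the same assumed form and parameter as the earlier sketch), this asymmetry is easy to check numerically: for a small probability p, halving the risk removes less decision weight than eliminating the remaining half.

```python
def w(p, gamma=0.61):
    # Tversky-Kahneman probability weighting (same form as the sketch above)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p = 0.02
drop_half = w(p) - w(p / 2)   # impact of halving the risk
drop_zero = w(p / 2) - 0.0    # impact of eliminating the remaining risk

print(f"w({p}) - w({p/2}) = {drop_half:.4f}")
print(f"w({p/2}) - w(0)  = {drop_zero:.4f}  # larger: elimination weighs more")
```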
The aversion to probabilistic insurance is significant for three reasons. First, it undermines the classical explanation of insurance in terms of a concave utility function. [ 1 ] According to EUT, probabilistic insurance should be definitely preferred to normal insurance when the latter is just acceptable. [ 8 ] Second, probabilistic insurance represents many forms of protective action, such as having a medical checkup, buying new tires, or installing a burglar alarm system. [ 1 ] Such actions typically reduce the probability of some hazard without eliminating it altogether. [ 1 ] Third, the acceptability of insurance can be manipulated by the framing of the contingencies. [ 1 ] An insurance policy that covers fire but not flood, for example, could be evaluated either as full protection against a specific risk, (e.g., fire) or as a reduction in the overall probability of property loss. [ 1 ] People greatly undervalue a reduction in the probability of a hazard in comparison to the complete elimination of that hazard. [ 1 ] Hence, insurance should appear more attractive when it is framed as the elimination of risk than when it is described as a reduction of risk. [ 1 ]
Further, Slovic, Fischhoff, and Lichtenstein (1982) [ 12 ] showed that a hypothetical vaccine that reduces the probability of contracting a disease from 20% to 10% is less attractive if it is described as effective in half of the cases than if it is presented as fully effective against one of two exclusive and equally probable virus strains that produce identical symptoms. [ 13 ]
The earliest studies of risk perception also found that, whereas risk and benefit tend to be positively correlated in the world, they are negatively correlated in people's minds, and, therefore, judgments. [ 14 ] The significance of this finding was not realized until a study by Alhakami and Slovic (1994) found that the inverse relation between perceived risk and perceived benefit of an activity (e.g., using pesticides) was linked to the strength of positive or negative affect associated with that activity as measured by rating the activity on bipolar scales such as good/bad, nice/awful, dread/not dread, and so forth. [ 15 ] This result implies that people base their judgments of an activity or a technology not only on what they think about it but also on how they feel about it. [ 13 ] If their feelings toward an activity are favorable, they are moved toward judging the risks as low and the benefits as high; if their feelings toward it are unfavorable, they tend to judge the opposite— high risk and low benefit ( see also affect heuristic ). [ 13 ]
Both EUT and PT are probability-outcome independent theories, as they posit separate functions for the evaluation of outcomes and probabilities. [ 2 ] Both assume that the impact of a given probability is a function of that probability but not of the outcome to which it's attached. [ 2 ] Further, neither theory distinguishes one source of value from another. [ 2 ] While probability-outcome independence may hold across outcomes of different monetary values, it is unlikely to hold across outcomes of varying affects . [ 2 ]
In 2001, two researchers from the University of Chicago, Rottenstreich and Hsee, conducted a series of three experiments to illustrate probability-outcome dependence, using an affective approach. [ 2 ]
Experiment 1: In an experiment observing probability-outcome interactions, participants were offered either a chance to meet and kiss their favorite movie star as a prize (affect-rich) or $50 in cash (affect-poor), under a condition of certainty and under a condition with a 1% probability of winning.
Results & Implications: 70% of participants preferred the cash to the kiss under certainty, whereas 65% (nearly the reverse) preferred the kiss to the cash under low probability. This indicates that we weight what should be an objectively equal 1% probability in each scenario differently: a 1% probability is greater for the affect-rich kiss than for the affect-poor cash.
Experiment 2: In a subsequent, and more realistic study, two similar and financially equivalent prizes - a $500 coupon redeemable toward payments associated with a European vacation (affect-rich) and a $500 coupon redeemable toward payment of tuition (affect-poor) were presented. For each prize, some participants were told they had a 1% chance of winning, and others a 99% chance of winning. Participants then had to indicate how much money they would have to be offered for them to be indifferent between receiving that dollar amount for sure and having the specified chance of winning the prize.
Results & Implications: Although the two coupons had equivalent redemption values, the median price of the 1% chance of winning the European vacation was $20, but $5 for the tuition coupon, indicating that the weight of 1% we place on affect-rich prizes is greater than for affect-poor prizes. Based on results from the 1% condition, PT would predict that at a 99% chance of winning, the European coupon would still be priced higher than the tuition coupon. On the contrary, the affective approach found that in the 99% chance of winning condition, the median price of the European coupon was $450, whereas that of the tuition coupon was $478. Our weighting of the 99% probability as smaller for the affect-rich European coupon than the affect-poor tuition coupon indicates probability-outcome dependence for affect-rich outcomes. Affect-rich outcomes yield more pronounced overweighting of small probabilities, but more pronounced underweighting of large probabilities.
Both examples indicate probability-outcome dependence, as based on affect-rich outcomes, which changes the shape of PT's S-shaped curve.
In Experiment 2, the size of the affect-rich jump in the weighting function is much greater ($500 – $450 = $50) than the size of the affect-poor jump ($500 – $478 = $22). [ 2 ] Thus, weighting functions will be more S-shaped for lotteries involving affect-rich than affect-poor outcomes. [ 2 ] That is, people will be more sensitive to departures from impossibility and certainty (from hope and fear), but less sensitive to intermediate probability variations for affect-rich outcomes, resulting in larger jumps at the endpoints of the weighting function. [ 2 ] Results from this study suggest that the assumption of probability-outcome independence adopted by both EUT and PT may hold across outcomes of different monetary values, but not different affective values. [ 2 ]
The outcomes studied in Experiments 1 and 2 were gains above the status quo. When a positive outcome is available, any departure from impossibility may engender hope (affect-rich and positive), and any deviation from certainty may produce fear (affect-rich but negative). The following study demonstrates that the opposite pattern is also true: when the available outcome is negative, departures from impossibility engender fear, and deviations from certainty produce hope.
Experiment 3: Participants were told to imagine themselves in a hypothetical experiment entailing either a certain, 1% or 99% chance of a short, painful but not dangerous electric shock (affect-rich), and others were told that the experiment entailed either a 1% or 99% chance of a cash penalty (affect-poor, relatively). They were then asked to indicate how much money they would have to pay for them to be indifferent between paying that amount for sure and participating in the hypothetical experiment.
Certainty condition: The median price paid to avoid an electric shock was $19.86. Most participants (24/30) preferred receiving the shock over paying more than $20.
Low-probability condition: The median price paid to avoid a 1% chance of a shock was $7, substantially greater than the median price paid to avoid a 1% chance of a $20 penalty. As before, the weight of a 1% probability is greater for the affect-rich shock than for the affect-poor cash payment.
High-probability condition: The median price paid to avoid a 99% chance of shock, $10, was substantially lower than the median price paid to avoid a 99% chance of cash penalty, $18.
Results: Taken together, for the affect-rich electric shock, the size of the right-hand jump in the weighting function is about $10 ($19.86 - $10), but for the affect-poor cash penalty, the size of this jump is much smaller at $2 ($20 - $18). Again, we see that the weight of the 99% is smaller for the affect-rich shock than for the affect-poor cash.
Both Experiments 1 and 2 investigated outcomes that were gains over the status quo. Experiment 3 studied negative outcomes and also found evidence of a weighting function that is more S-shaped for affect-rich than affect-poor prizes. Therefore, probability-outcome dependence based on the affect-rich psychology of risk applies in the domains of both gains and losses. [ 2 ]
Do you remember the worst thing that has happened to you? What about the best? At what frequency are you able to recall memories that are negative in comparison to those that are positive? Does it seem like negative information is remembered with more ease and clarity than positive information? Why is it easier to know the percentage of fatal car accidents each year, as opposed to the percentage of accidents without fatalities?
The human brain demonstrates a partiality for the processing of negative information. In comparison with their positive counterparts, negative stimuli receive a larger allocation of attention and a swifter response once recognized by the brain. [ 16 ] [ 17 ] This bias for negative information occurs very early on in the stages of processing, seen in the appearance of a P1, a component of the event-related potentials (ERP) gathered from an EEG (electroencephalography) output. Researchers localized this particular ERP to the ventrolateral occipital cortex. Given that a greater amount of attention is allotted to the processing of negative stimuli, the negativity bias may also be indicative of an attentional bias.
The negativity bias is noticeable in a plethora of situations related to the formation of risk-averse behaviour. Notably, any stimulus that evokes the expression of fear encourages risk aversion. The human brain has adapted to easily parse out these stimuli from a sea of benign stimuli. In the laboratory, participants report and respond more quickly to negative stimuli; negative and threatening pictures jump out of an array of photos, capturing participants' attention more than positive or neutral pictures. [ 18 ] Non-tangibles, such as personality traits, demonstrate a similar capacity for eliciting risk-averse behaviour: Carlston and Skowronski (1989) found that negative traits form a stronger impression on an individual than positive traits, thus affecting the overall impression of the individual being evaluated. [ 19 ]
An individual's affect often determines the extent to which one's behaviour is effective in obtaining their goal. Decision making and emotion, intertwined, cannot be separated from each other, as emotion can either benefit or hinder the attainment of maximized utility.
Three different emotional states influence decision making:
Your current emotional state (i.e. How do you feel while you are making a decision?)
Your past emotional state (i.e. How did you feel anticipating your decision?)
Your future emotional state (i.e. How will your decision affect how you feel in the future; What effect will the decision have on your emotional well-being?)
Researching decision-making and affect, Antoine Bechara, Antonio Damasio and colleagues (2000; 2005) discovered that damage to a brain area associated with emotional processing impairs effective decision-making. [ 20 ] [ 21 ] After discovering that damage to the orbitofrontal cortex impaired participants from making goal-oriented decisions in social and professional contexts, Damasio and his colleagues designed the Iowa Gambling Task. In creating this task, Damasio wondered whether decision-making was afflicted because emotion was a necessary component to making effective decisions.
In the task, participants continuously draw from one out of four possible decks – participants may switch decks at any point during the study.
Each card possesses monetary value, resulting in either gains or losses.
Participants are unaware that 2 of the decks correspond to net winnings – low payoffs and even lower losses. The other 2 decks correspond to net losses – high payoffs and even higher losses.
Researchers instruct participants to maximize their utility – gain the most money by the end of the task. In order to complete this task successfully, participants must discern that the decks associated with net winning, yet low payoffs, maximize their utility.
Results . Damasio noticed that participants with damage to their orbitofrontal cortex were unable to realize that the deck associated with low payoffs yielded higher reward. From his discovery using the Iowa Gambling Task, Damasio formulated a Somatic marker hypothesis .
Alternate Conclusions . Other researchers suggest that the difficulty encountered by patients with orbitofrontal cortex damage on Iowa Gambling Task is because the task requires participants to change their initial perception of potential gains and losses. [ 22 ] Participants are lured in by appealing rewards, then confronted with devastating losses. Thus, orbitofrontal cortex damage inhibits the adaptation to changing patterns of rewards and punishment. This conclusion has been replicated in primates, where orbitofrontal damage prevented the extinction of a learned association. [ 23 ]
Damasio posited that emotional information, in the form of physiological arousal, is needed to inform decision making. When confronted with a decision, we may react emotionally to the situation, a reaction that manifests as changes in physiological arousal in the body, or somatic markers. Given data collected from the Iowa Gambling Task, Damasio postulated that the orbitofrontal cortex assists individuals in forming an association between somatic markers and the situations that trigger them. Once an association is made, the orbitofrontal cortex and other brain areas evaluate an individual's previous experiences eliciting similar somatic markers. Once recognized, the orbitofrontal cortex can determine an adequate and swift behavioural response and its likelihood of reward.
Several brain areas are observed in the expression of risk-averse behaviour. The previously mentioned orbitofrontal cortex is amongst these brain areas, supporting the feeling of regret. Regret, an emotion which heavily influences decision making, leads individuals to make decisions which circumvent encountering this emotion in the future.
Studying brain activity associated with regret, researcher Giorgio Coricelli and his colleagues (2005) triggered feelings of regret in healthy participants by having them complete a gambling task in which they were informed that the best choice was the unchosen option. [ 24 ] Using functional magnetic resonance imaging (fMRI), Coricelli found that increasing regret correlated with increased activity in the medial orbitofrontal cortex, the anterior cingulate cortex, and the anterior hippocampus. [ 24 ] The higher the activation in the medial orbitofrontal cortex, the greater the reported regret. After repeated trials, researchers began to observe risk-averse behaviour by their participants, a behaviour echoed in intensified activity within the medial orbitofrontal cortex and the amygdala.
Risk-averse behaviors are the culmination of several neural correlates. While avoiding negative stimuli, perceived or real, is a simple enough action, it requires anticipation, motivation and reasoning. How do you know a stimulus is malevolent? What information leads you to ultimately behave in a manner consistent with ensuring or endangering your well-being? Each of these questions recruits a different brain area, each playing a pivotal role in whether a decision is beneficial to an individual.
Fear-Conditioning . Over time, individuals learn that a stimulus is not benign through personal experience. Implicitly, a fear of a particular stimulus can develop, resulting in risk-averse behaviour. Traditionally, fear-conditioning is not associated with decision-making, but rather the pairing of a neutral stimulus with an aversive situation. Once an association is formed between the neutral stimulus and aversive event, a startle response is observed each time the neutral stimulus is presented. An aversion to the presentation of the neutral stimulus is observed after repeated trials.
Essential to understanding risk aversion is the implicit learning that occurs during fear-conditioning. Risk aversion is the culmination of implicitly or explicitly acquired knowledge that informs an individual that a particular situation is aversive to their psychological well-being. Similarly, fear-conditioning is the acquisition of knowledge that informs an individual that a particular neutral stimulus now predicts an event that endangers their psychological or physical well-being.
Researchers such as Mike Davis (1992) and Joseph LeDoux (1996), have deciphered the neural correlates responsible for the acquisition of fear-conditioning. [ 25 ] [ 26 ]
The amygdala, previously mentioned as a region showing high activity for the emotion of regret, is the central recipient for brain activity concerning fear-conditioning. Several streams of information from multiple brain areas converge on the lateral amygdala, allowing for the creation of associations that regulate fear-conditioning; Cells in the superior dorsal lateral amygdala are able to rapidly pair the neutral stimulus with the aversive stimulus. Cells that project from the lateral amygdala to the central amygdala allow for the initiation of an emotional response if a stimulus is deemed threatening.
Cognitive Control . Evaluating a gamble and calculating its expected value requires a certain amount of cognitive control. Several brain areas are dedicated to monitoring the congruence between expected and actual outcomes. Evidence by Ridderinkhof et al. (2004) suggests that the posterior medial frontal cortex (pMFC) and the lateral prefrontal cortex (LPFC) are involved in goal-directed performance monitoring and behaviour modulation. [ 27 ] The pMFC monitors response conflicts (any situation that activates more than one response tendency), decision uncertainty, and any deviation from the anticipated outcome. Activation in the pMFC increases significantly after an error, response conflict, or unfavorable outcome is detected. As a result, the pMFC can signal a need for performance adjustment; there is a lack of evidence, however, indicating that the pMFC controls modulatory behaviour. Behaviour control processes in the LPFC have been implicated in the modulatory behaviour observed by researchers.
The field of neuroeconomics is emerging as a unified branch of knowledge, intending to merge information from psychology, economics and neuroscience, with the hope of better understanding human behaviour. Risk aversion poses a mystifying question that intrigues experts in all three disciplines: why do humans not act in accord with the anticipated outcome? Because negative outcomes carry more weight than positive outcomes, human beings do not make purely logical decisions. Parsing out emotion and fear of loss from decision making would lead to greater reliance on mathematical calculation, thus maximizing expected utility. While activation in specific brain areas can highlight the mechanisms of decision making, evidence continues to support the prevalence of risk-averse behaviour. | https://en.wikipedia.org/wiki/Risk_aversion_(psychology) |
A risk management plan is a document to foresee risks, estimate impacts, and define responses to risks. It also contains a risk assessment matrix . According to the Project Management Institute , a risk management plan is a "component of the project, program, or portfolio management plan that describes how risk management activities will be structured and performed". [ 1 ]
Moreover, according to the Project Management Institute, a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives". [ 1 ] Risk is inherent with any project, and project managers should assess risks continually and develop plans to address them. [ 2 ] The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team to avoid having the analysis become stale and not reflective of actual potential project risks.
Broadly, there are four potential responses to risk with numerous variations on the specific terms used to name these response options: [ 3 ] [ 4 ]
(Mnemonic: SARA, for S hare A void R educe A ccept, or A-CAT, for "Avoid, Control, Accept, or Transfer")
Risk management plans often include matrices.
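As an illustration, a simple likelihood-by-impact matrix can be encoded in a few lines of Python; the five-point scales and rating thresholds below are assumed for the example rather than taken from any particular standard.

```python
# A minimal illustrative risk assessment matrix (assumed scales, not a standard)
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT = ["negligible", "minor", "moderate", "major", "severe"]

def risk_rating(likelihood: str, impact: str) -> str:
    """Score = likelihood index x impact index, bucketed into ratings."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (IMPACT.index(impact) + 1)
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_rating("likely", "major"))       # high   (4 x 4 = 16)
print(risk_rating("unlikely", "moderate"))  # medium (2 x 3 = 6)
print(risk_rating("rare", "minor"))         # low    (1 x 2 = 2)
```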
The United States Department of Defense, as part of acquisition, uses risk management planning that may produce a Risk Management Plan (RMP) document for a specific project. The general intent of the RMP in this context is to define the scope of risks to be tracked and the means of documenting reports. An integrated relationship to other processes is also desired; for example, the RMP may explain which developmental tests verify that risks of the design type were minimized, as stated as part of the test and evaluation master plan . A further example is the instruction from 5000.2D [ 5 ] that, for programs that are part of a system of systems, the risk management strategy shall specifically address integration and interoperability as a risk area. The specific RMP process and templates shift over time (e.g. the disappearance of the 2002 documents Defense Finance and Accounting Service / System Risk Management Plan, and the SPAWAR Risk Management Process). | https://en.wikipedia.org/wiki/Risk_management_plan |
Risks of astronomical suffering , also called suffering risks or s-risks , are risks involving much more suffering than all that has occurred on Earth so far. [ 2 ] [ 3 ] They are sometimes categorized as a subclass of existential risks . [ 4 ]
According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered to be a morally urgent matter, despite the possibility of technological benefits. [ 5 ]
Sources of possible s-risks include embodied artificial intelligence [ 6 ] and superintelligence , [ 7 ] as well as space colonization , which could potentially lead to "constant and catastrophic wars" [ 8 ] and an immense increase in wild animal suffering by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally or inadvertently. [ 9 ]
Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space. [ 10 ] Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds. [ 3 ] Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. [ 6 ] Brian Tomasik has argued that astronomical suffering could emerge from solving the AI alignment problem incompletely. He argues for the possibility of a "near miss" scenario, where a superintelligent AI that is slightly misaligned has the maximum likelihood of causing astronomical suffering, compared to a completely unaligned AI. [ 11 ]
Space colonization could increase suffering by introducing wild animals to new environments, leading to ecological imbalances. In unfamiliar habitats, animals may struggle to survive, facing hunger, disease, and predation. These challenges, combined with unstable ecosystems, could cause population crashes or explosions, resulting in widespread suffering. Additionally, the lack of natural predators or proper biodiversity on colonized planets could worsen the situation, mirroring Earth’s ecological problems on a larger scale. This raises ethical concerns about the unintended consequences of space colonization, as it could propagate immense animal suffering in new, unstable ecosystems. Phil Torres argues that space colonization poses significant "suffering risks", where expansion into space will lead to the creation of diverse species and civilizations with conflicting interests. These differences, combined with advanced weaponry and the vast distances between civilizations, would result in catastrophic and unresolvable conflicts. Strategies like a "cosmic Leviathan" to impose order or deterrence policies are unlikely to succeed due to physical limitations in space and the destructive power of future technologies. Thus, Torres concludes that space colonization could create immense suffering and should be delayed or avoided altogether. [ 12 ]
Magnus Vinding's "astronomical atrocity problem" questions whether vast amounts of happiness can justify extreme suffering from space colonization. He highlights moral concerns such as diminishing returns on positive goods, the potentially incomparable weight of severe suffering, and the priority of preventing misery. He argues that if colonization is inevitable, it should be led by agents deeply committed to minimizing harm. [ 13 ]
David Pearce has argued that genetic engineering is a potential s-risk. Pearce argues that while technological mastery over the pleasure-pain axis and solving the hard problem of consciousness could lead to the potential eradication of suffering , it could also potentially increase the level of contrast in the hedonic range that sentient beings could experience. He argues that these technologies might make it feasible to create "hyperpain" or "dolorium" that experience levels of suffering beyond the human range. [ 14 ]
S-risk scenarios may arise from excessive criminal punishment, with precedents in both historical and in modern penal systems. These risks escalate in situations such as warfare or terrorism, especially when advanced technology is involved, as conflicts can amplify destructive tendencies like sadism, tribalism , and retributivism . War often intensifies these dynamics, with the possibility of catastrophic threats being used to force concessions. Agential s-risks are further aggravated by malevolent traits in powerful individuals, such as narcissism or psychopathy. This is exemplified by totalitarian dictators like Hitler and Stalin , whose actions in the 20th century inflicted widespread suffering. [ 15 ]
According to David Pearce, there are other potential s-risks that are more exotic, such as those posed by the many-worlds interpretation of quantum mechanics. [ 14 ]
To mitigate s-risks, efforts focus on researching and understanding the factors that exacerbate them, particularly in emerging technologies and social structures. Targeted strategies include promoting safe AI design, ensuring cooperation among AI developers, and modeling future civilizations to anticipate risks. Broad strategies may advocate for moral norms against large-scale suffering and stable political institutions. According to Anthony DiGiovanni, prioritizing s-risk reduction is essential, as it may be more manageable than other long-term challenges, while avoiding catastrophic outcomes could be easier than achieving an entirely utopian future. [ 16 ]
Induced amnesia has been proposed as a way to mitigate s-risks in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids . [ 17 ]
David Pearce's concept of "cosmic rescue missions" proposes the idea of sending probes to alleviate potential suffering in extraterrestrial environments. These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms, ensuring that if life exists elsewhere, it is treated ethically. [ 18 ] However, challenges include the lack of confirmed extraterrestrial life, uncertainty about their consciousness, and public support concerns, with environmentalists advocating for non-interference and others focusing on resource extraction. [ 19 ] | https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering |
Ristocetin is a glycopeptide antibiotic , obtained from Amycolatopsis lurida , previously used to treat staphylococcal infections . It is no longer used clinically because it caused thrombocytopenia and platelet agglutination . It is now used solely to assay those functions in vitro in the diagnosis of conditions such as von Willebrand disease (vWD) and Bernard–Soulier syndrome . Platelet agglutination caused by ristocetin can occur only in the presence of von Willebrand factor multimers, so if ristocetin is added to blood lacking the factor (or its receptor—see below), the platelets will not clump.
Through an unknown mechanism, the antibiotic ristocetin causes von Willebrand factor to bind the platelet receptor glycoprotein Ib (GpIb), so when ristocetin is added to normal blood, it causes agglutination.
In some types of vWD (types 2B and platelet-type), even very small amounts of ristocetin cause platelet aggregation when the patient's platelet-rich plasma is used. [ 1 ] This paradox is explained by these types having gain-of-function mutations which cause the vWD high molecular-weight multimers to bind more tightly to their receptors on platelets (the alpha chains of glycoprotein Ib (GPIb) receptors). In the case of type 2B vWD, the gain-of-function mutation involves von Willebrand's factor (VWF gene), and in platelet-type vWD, the receptor is the object of the mutation (GPIb). This increased binding causes vWD because the high-molecular weight multimers are removed from circulation in plasma since they remain attached to the patient's platelets. Thus, if the patient's platelet-poor plasma is used, the ristocetin cofactor assay will not agglutinate standardized platelets (i.e., pooled platelets from normal donors that are fixed in formalin), similar to the other types of vWD.
In all forms of the ristocetin assay, the platelets are fixed in formalin prior to the assay to prevent von Willebrand's factor stored in platelet granules from being released and participating in platelet aggregation. Thus, the ristocetin cofactor activity depends only upon high-molecular multimers of the factor present in circulating plasma. | https://en.wikipedia.org/wiki/Ristocetin |
The Rittenhouse Medal is awarded by the Rittenhouse Astronomical Society for outstanding achievement in the science of astronomy. [ 1 ] The medal was one of those originally minted in 1932 to commemorate the bicentenary of the birth of David Rittenhouse on April 8, 1732. In 1952 the Society decided to establish a silver medal to be awarded to astronomers for noteworthy achievement in astronomical science. The silver medal is cast from the die (obverse) used for the Bi-Centennial Rittenhouse Medal. | https://en.wikipedia.org/wiki/Rittenhouse_Medal |
The Ritter reaction (sometimes called the Ritter amidation ) is a chemical reaction that transforms a nitrile into an N -alkyl amide using various electrophilic alkylating reagents. The original reaction formed the alkylating agent using an alkene in the presence of a strong acid . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The Ritter reaction proceeds by the electrophilic addition of either a carbenium ion or covalent species [ 5 ] [ 6 ] to the nitrile . The resulting nitrilium ion is hydrolyzed to the desired amide.
Primary, [ 7 ] secondary, [ 4 ] tertiary, [ 8 ] and benzylic [ 9 ] alcohols , [ 1 ] as well as tert -butyl acetate, [ 10 ] also successfully react with nitriles in the presence of strong acids to form amides via the Ritter reaction. A wide range of nitriles can be used. In particular, cyanide can be used to prepare formamides, which are useful precursors to isocyanides , or may also be hydrolysed to give amines .
A large scale application of the Ritter reaction is in the synthesis of tert-octylamine , by way of the intermediate formamide. This process was originally described by Ritter in 1948, [ 11 ] and an estimated 10,000 tons/y (year: 2000) of this and related lipophilic amines are prepared in this way. [ 12 ] Otherwise, the Ritter reaction is most useful in the formation of amines and amides of pharmaceutical interest. Real world applications include Merck 's industrial-scale synthesis of anti- HIV drug Crixivan (indinavir); [ 13 ] the production of the falcipain-2 inhibitor PK-11195 ; the synthesis of the alkaloid aristotelone; [ 14 ] and synthesis of Amantadine , an antiviral and antiparkinsonian drug. [ 15 ] Other applications of the Ritter reaction include synthesis of dopamine receptor ligands [ 14 ] and production of racemic amphetamine from allylbenzene and methyl cyanide . [ 1 ] [ 16 ]
The Ritter reaction is inferior to most amination methods because it cogenerates substantial amounts of salts. Illustrative is the conversion of isobutylene to tert-butylamine using HCN and sulfuric acid followed by base neutralization. The weight of the salt byproduct is greater than the weight of the amine. [ 12 ]
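The salt burden can be illustrated with a rough mass balance. Assuming, purely for illustration, that one equivalent of sulfuric acid is consumed per mole of amine and is neutralized with sodium hydroxide (the industrial workup may differ), the mass ratio of byproduct to product is {\displaystyle m_{\mathrm {Na_{2}SO_{4}} }/m_{\mathrm {amine} }\approx 142/73\approx 1.9} , since sodium sulfate has a molar mass of about 142 g/mol against about 73 g/mol for tert-butylamine; nearly two tonnes of salt would accompany each tonne of amine under these assumptions.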
In the laboratory, the Ritter reaction suffers from the necessity of an extremely strong acid catalyst . Other methods have been proposed in order to promote carbocation formation, including photocatalytic electron transfer [ 17 ] or direct photolysis . [ 18 ]
The reaction is named after John J. Ritter, who supervised the Ph.D. thesis work of P. Paul Minieri . | https://en.wikipedia.org/wiki/Ritter_reaction |
Rituximab , sold under the brand name Rituxan among others, is a monoclonal antibody medication used to treat certain autoimmune diseases and types of cancer . [ 17 ] It is used for non-Hodgkin lymphoma , chronic lymphocytic leukemia (in children and adults, but not recommended in elderly patients), rheumatoid arthritis , granulomatosis with polyangiitis , idiopathic thrombocytopenic purpura , pemphigus vulgaris , myasthenia gravis and Epstein–Barr virus -positive mucocutaneous ulcers . [ 17 ] [ 18 ] [ 19 ] [ 20 ] It is given by slow intravenous infusion (injected slowly through an IV line). [ 17 ]
The most common side effects with intravenous infusions are reactions related to the infusion (such as fever, chills and shivering) while most common serious side effects are infusion reactions, infections and heart-related problems. [ 15 ] Similar side effects are seen when it is injected under the skin, with the exception of reactions around the injections site (pain, swelling and rash), which occur more frequently with the skin injections. [ 15 ]
Severe side effects include reactivation of hepatitis B in those previously infected, progressive multifocal leukoencephalopathy , toxic epidermal necrolysis , and death. [ 17 ] [ 21 ] It is unclear if use during pregnancy is safe for the developing fetus or newborn baby. [ 9 ] [ 17 ]
Rituximab is a chimeric monoclonal antibody against the protein CD20 , which is primarily found on the surface of immune system B cells . [ 22 ] When it binds to this protein it triggers cell death. [ 17 ]
Rituximab was approved for medical use in 1997. [ 22 ] It is on the World Health Organization's List of Essential Medicines . [ 23 ] Rituxan is co-marketed by Biogen and Genentech in the US, by Roche elsewhere except Japan, and co-marketed by Chugai Pharmaceuticals and Zenyaku Kogyo in Japan. [ 24 ] [ 25 ]
Rituximab is a chimeric monoclonal antibody targeted against CD20, a surface antigen present on B cells . It acts by depleting normal as well as pathogenic B cells while sparing plasma cells and hematopoietic stem cells , which do not express the CD20 surface antigen. [ 26 ]
In the United States, rituximab is indicated to treat non-Hodgkin's lymphoma, chronic lymphocytic leukemia, rheumatoid arthritis (in combination with methotrexate), granulomatosis with polyangiitis and microscopic polyangiitis, pemphigus vulgaris, and certain mature B-cell non-Hodgkin's lymphomas and mature B-cell acute leukemia in children.
In the European Union, rituximab is indicated for the treatment of follicular lymphoma and diffuse large B cell non-Hodgkin's lymphoma (two types of non-Hodgkin's lymphoma, a blood cancer); [ 15 ] chronic lymphocytic leukemia (CLL, another blood cancer affecting white blood cells); [ 15 ] severe rheumatoid arthritis (an inflammatory condition of the joints); [ 15 ] two inflammatory conditions of blood vessels known as granulomatosis with polyangiitis (GPA) and microscopic polyangiitis (MPA); [ 15 ] moderate to severe pemphigus vulgaris, an autoimmune disease characterised by widespread blistering and erosion of the skin and mucous membranes (the linings of internal organs). 'Autoimmune' means that the disease is caused by the immune system (the body's natural defences) attacking the body's own cells. [ 15 ]
Rituximab is used to treat cancers of the white blood system such as leukemias and lymphomas , including non-Hodgkin's lymphoma, chronic lymphocytic leukemia , and nodular lymphocyte predominant Hodgkin's lymphoma. [ 28 ] [ 29 ] This also includes Waldenström's macroglobulinemia , a type of non-Hodgkin lymphoma. [ 17 ] Rituximab in combination with hyaluronidase human, sold under the brand names Mabthera SC [ 13 ] and Rituxan Hycela, [ 30 ] is used to treat follicular lymphoma , diffuse large B-cell lymphoma , and chronic lymphocytic leukemia. [ 30 ] It is used in combination with fludarabine and cyclophosphamide to treat previously untreated and previously treated CD20-positive chronic lymphocytic leukemia. [ 14 ]
Rituximab has been shown to be an effective rheumatoid arthritis treatment in three randomised controlled trials and is now licensed for use in refractory rheumatoid disease. [ 31 ] In the United States, it has been FDA approved for use in combination with methotrexate for reducing signs and symptoms in adult patients with moderately to severely active rheumatoid arthritis (RA) who have had an inadequate response to one or more anti-TNF-alpha therapies. In the European Union, the license is slightly more restrictive: it is licensed for use in combination with methotrexate in patients with severe active RA who have had an inadequate response to one or more anti-TNF therapies. [ 32 ]
There is some evidence for efficacy, but not necessarily safety , in a range of other autoimmune diseases, and rituximab is widely used off-label to treat difficult cases of multiple sclerosis , [ 33 ] [ 34 ] systemic lupus erythematosus , chronic inflammatory demyelinating polyneuropathy and autoimmune anemias . [ 35 ] [ 36 ] The most dangerous, although among the most rare, side effect is progressive multifocal leukoencephalopathy infection, which is usually fatal; however, only a very small number of cases have been recorded occurring in autoimmune diseases. [ 35 ] [ 37 ]
Other autoimmune diseases that have been treated with rituximab include autoimmune hemolytic anemia , pure red cell aplasia , thrombotic thrombocytopenic purpura (TTP), [ 38 ] idiopathic thrombocytopenic purpura (ITP), [ 39 ] [ 40 ] Evans syndrome , [ 41 ] vasculitis (e.g., granulomatosis with polyangiitis ), bullous skin disorders (for example, pemphigus , pemphigoid —with very encouraging results of approximately 85% rapid recovery in pemphigus, according to a 2006 study), [ 42 ] type 1 diabetes mellitus , Sjögren syndrome , anti-NMDA receptor encephalitis and Devic's disease ( Anti-AQP4 disease , MOG antibody disease ), [ 43 ] Graves' ophthalmopathy , [ 44 ] autoimmune pancreatitis , [ 45 ] Opsoclonus myoclonus syndrome (OMS), [ 46 ] and IgG4-related disease . [ 47 ] There is some evidence that it is ineffective in treating IgA-mediated autoimmune diseases. [ 48 ]
Serious adverse events, which can cause death and disability, include severe infusion reactions, cardiac arrhythmias and cardiac arrest, tumor lysis syndrome, serious infections, hepatitis B reactivation, progressive multifocal leukoencephalopathy, and severe mucocutaneous reactions such as toxic epidermal necrolysis. [ 14 ] [ 17 ]
A concern with continuous rituximab treatment is the difficulty to induce a proper vaccine response. [ 54 ] [ unreliable medical source? ] This was brought into focus during the COVID-19 pandemic , where persons with multiple sclerosis and rituximab treatment had higher risk of severe COVID-19. [ 55 ] [ unreliable medical source? ] [ 56 ] In persons previously treated with rituximab for multiple sclerosis, nine of ten patients who delayed re-dosing until B cell counts passed 40/μL developed protective levels of antibodies after vaccination with the Pfizer–BioNTech COVID-19 vaccine . [ 57 ] [ unreliable medical source? ]
The antibody binds to the cell surface protein CD20 . CD20 is widely expressed on B cells, from early pre-B cells to later in differentiation , but it is absent on terminally differentiated plasma cells . Although the function of CD20 is unknown, it may play a role in Ca²⁺ influx across plasma membranes, maintaining intracellular Ca²⁺ concentration and allowing activation of B cells.
Rituximab is relatively ineffective in elimination of cells with low CD20 cell-surface levels. [ 59 ] It tends to stick to one side of B cells, where CD20 is, forming a cap and drawing proteins over to that side. The presence of the cap changes the effectiveness of natural killer (NK) cells in destroying these B cells. When an NK cell latched onto the cap, it had an 80% success rate at killing the cell. In contrast, when the B cell lacked this asymmetric protein cluster, it was killed only 40% of the time. [ 60 ]
Several mechanisms of cell killing have been found, including antibody-dependent cell-mediated cytotoxicity, complement-dependent cytotoxicity, and the induction of apoptosis in CD20-bearing cells. [ 61 ]
The combined effect results in the elimination of B cells (including the cancerous ones) from the body, allowing a new population of healthy B cells to develop from lymphoid stem cells .
Rituximab binds to amino acids 170–173 and 182–185 on CD20, which are physically close to each other as a result of a disulfide bond between amino acids 167 and 183. [ 63 ]
Rituximab was developed by IDEC Pharmaceuticals under the name IDEC-C2B8. The US patent for the drug was issued in 1998 and expired in 2015. [ 64 ]
Based on its safety and effectiveness in clinical trials , [ 65 ] rituximab was approved by the US Food and Drug Administration (FDA) in 1997 to treat B-cell non-Hodgkin lymphomas resistant to other chemotherapy regimens. [ 66 ] [ 67 ] Rituximab, in combination with CHOP chemotherapy , is superior to CHOP alone in the treatment of diffuse large B-cell lymphoma and many other B-cell lymphomas. [ 68 ] In 2010, it was authorized by the European Commission for maintenance treatment after initial treatment of follicular lymphoma . [ 69 ]
Originally available only as an intravenous infusion (administered over about 2.5 hours), rituximab gained EU approval in 2016 in a formulation for subcutaneous injection for B-cell chronic lymphocytic leukemia and lymphoma. [ 70 ]
In June 2017, the US FDA granted regular approval to the combination of rituximab and hyaluronidase human (brand name Rituxan Hycela) for adults with follicular lymphoma, diffuse large B-cell lymphoma, and chronic lymphocytic leukemia. [ 71 ] The combination is not indicated for the treatment of non-malignant conditions. [ 30 ] [ 71 ] The combination was approved based on clinical studies SABRINA/NCT01200758 and MabEase/NCT01649856. [ 30 ]
In September 2019, the US FDA approved rituximab injection to treat granulomatosis with polyangiitis and microscopic polyangiitis in children two years of age and older in combination with glucocorticoids (steroid hormones). [ 72 ] It is the first approved treatment for children with these rare vasculitis diseases, in which a person's small blood vessels become inflamed, reducing the amount of blood that can flow through them. [ 72 ] This can cause serious problems and damage to organs, most notably the lungs and the kidneys. [ 72 ] It also can impact the sinuses and skin. [ 72 ] Rituximab was approved by the FDA to treat adults with granulomatosis with polyangiitis and microscopic polyangiitis in 2011. [ 72 ]
In December 2021, the US FDA approved rituximab in combination with chemotherapy for children aged 6 months to 18 years with previously untreated, advanced stage, CD20-positive diffuse large B-cell lymphoma, Burkitt lymphoma, Burkitt-like lymphoma, or mature B-cell acute leukemia. [ 27 ] [ 73 ] Efficacy was evaluated in Inter-B-NHL Ritux 2010, a global multicenter, open-label, randomized 1:1 trial of participants six months of age or older with previously untreated, advanced stage, CD20-positive diffuse large B-cell lymphoma, Burkitt lymphoma, Burkitt-like lymphoma, or B-cell acute leukemia. [ 73 ] Advanced stage was defined as stage III with elevated lactate dehydrogenase level (lactate dehydrogenase greater than twice the institutional upper limit of normal values) or stage IV B-cell non-Hodgkin's lymphoma or B-cell acute leukemia. [ 73 ] Participants were randomized to Lymphome Malin B chemotherapy that consisted of corticosteroids, vincristine, cyclophosphamide, high-dose methotrexate, cytarabine, doxorubicin, etoposide, and triple drug (methotrexate/cytarabine/corticosteroid) intrathecal therapy alone or in combination with rituximab or non-US licensed rituximab, administered as six infusions of rituximab IV at a dose of 375 mg/m² as per the Lymphome Malin B scheme. [ 73 ]
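The 375 mg/m² figure above is a body-surface-area (BSA) based dose. As a minimal illustration of how such a dose is computed, the sketch below uses the Mosteller formula for BSA; this is one common estimator, not necessarily the one specified in the trial protocol, and the example patient values are hypothetical.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula:
    sqrt(height[cm] * weight[kg] / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bsa_based_dose_mg(height_cm: float, weight_kg: float,
                      dose_per_m2: float = 375.0) -> float:
    """Total dose (mg) for a BSA-based regimen such as 375 mg/m^2."""
    return dose_per_m2 * bsa_mosteller(height_cm, weight_kg)

# Hypothetical 150 cm, 40 kg adolescent: BSA ~1.29 m^2, dose ~484 mg.
print(round(bsa_based_dose_mg(150.0, 40.0)))
```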
Rituximab was approved for medical use in the United States in November 1997. [ 14 ] [ 66 ]
Biosimilars are approved in the United States, India, the European Union, Switzerland, Japan, and Australia. [ citation needed ] The US FDA approved rituximab-abbs (Truxima) in 2018, [ 1 ] [ 74 ] [ 75 ] rituximab-pvvr (Ruxience) in 2019, [ 2 ] and rituximab-arrx (Riabni) in 2020. [ 3 ]
In July 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency recommended marketing authorization for the rituximab biosimilar Ituxredi produced by Dr. Reddy's Laboratories / Holding GmbH . Ituxredi is intended for the treatment of non-Hodgkin's lymphoma, chronic lymphocytic leukemia, rheumatoid arthritis, granulomatosis with polyangiitis and microscopic polyangiitis and pemphigus vulgaris. [ 5 ] Ituxredi was authorized for medical use in the European Union in September 2024. [ 5 ] [ 6 ]
A Rituximab biosimilar was approved in India in 2007. [ 76 ]
In 2014, Genentech reclassified Rituxan as a specialty drug , a class of drugs that are only available through specialty distributors in the US. [ 77 ] Because wholesaler discounts and rebates no longer applied, hospitals would pay more. [ 77 ]
Patents on rituximab have expired in the European Union [ 78 ] [ 79 ] [ 80 ] and in the United States, [ 81 ] [ 82 ] [ 83 ] opening the market to the biosimilars listed above. Truxima and Riabni are approximately $3,600 per 500 mg wholesale, about 10% less than Rituxan, while Ruxience is 24% less than Rituxan. [ 86 ] [ 87 ] The Indian biosimilar Ituxredi retails for about one-sixth the price. [ 88 ]
Tailored-dose rituximab is more cost-effective than fixed-dose rituximab: it is both more effective and less expensive. [ 89 ] [ 90 ]
Rituximab has been reported as a possible cofactor in a chronic hepatitis E infection in a person with lymphoma. Hepatitis E infection is normally an acute infection, suggesting the drug in combination with lymphoma may have weakened the body's immune response to the virus. [ 91 ]
In 2009, a patient receiving methotrexate-induced B-cell depletion as part of cancer treatment experienced a transient remission of their myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) symptoms. While initial trials of rituximab were promising, a phase 3 trial published in 2019 found no association between rituximab treatment and improvement in ME/CFS. [ 92 ] [ 93 ]
For CNS diseases, rituximab could be administered intrathecally and this possibility is under study. [ 94 ]
The efficacy and success of rituximab have led to the development of other anti-CD20 monoclonal antibodies, such as ocrelizumab, obinutuzumab, and ofatumumab. | https://en.wikipedia.org/wiki/Rituximab |
River bank erosion along the Ganges in Malda and Murshidabad districts concerns erosion along the main channel of the Ganges in the Malda and Murshidabad districts of West Bengal , India .
The Ganges is a long river carrying a huge discharge of 70,000 m³/s. However, the river bank erosion problems are restricted to a few places. Floods and erosion pose a serious problem in the lower Ganges region, particularly in West Bengal. The Ganges enters West Bengal after wandering around the Rajmahal hills in Jharkhand . After flowing through Malda district, it enters Murshidabad district, where it splits into two river channels – the Bhagirathi flows south through West Bengal and the Padma flows east into Bangladesh . River bank erosion is a common problem in river channels in the deltaic tracts and is widespread throughout the course of the Ganges in West Bengal. Official reports show that on average 8 km² of land is engulfed annually by the river in West Bengal. [ 1 ]
The Ganges forms one of the major river systems in India. From the Gangotri Glacier , it traverses a distance of 2,525 km to the Bay of Bengal . The river carries millions of tonnes of sediment load and deposits it in the plains. This sediment deposition creates severe problems, such as a decrease in river depth. [ 2 ]
The Ganges is a meandering river, and the Farakka Barrage has disrupted its dynamic equilibrium, hindering the natural oscillation of the river within its meander belt, which is about 10 km wide in Malda and Murshidabad districts. [ 3 ] The river has a general tendency to shift towards the left bank upstream of the Farakka Barrage and towards the right bank downstream of it. [ 1 ] River bank failure results from factors such as the soil stratification of the river bank, the presence of a hard rocky area (Rajmahal), a high sediment load, the difficulty of dredging, and the construction of the Farakka Barrage as an obstruction to the natural river flow. [ 2 ] The rivers in Murshidabad district have been continuously changing their meander geometry since the second half of the twentieth century, but the scale of river bank erosion has increased since the construction of the Farakka Barrage. [ citation needed ] More than 200 km² of fertile land in Malda district had been completely wiped out by 2004. In Murshidabad district, the Ganges eroded 356 km² of fertile land and displaced around 80,000 people between 1988 and 1994. [ 1 ]
In the early decades of the twentieth century, the Ganges flowed in a south-easterly course between Rajmahal and Farakka, but later in the century it formed a large meander to accommodate the additional water because of the barrage construction. Furthermore, nearly 64 crore (640 million) tonnes of silt is accumulated annually on the river bed. All these lead to massive erosion of the left bank. [ 3 ]
During the period 1969-1999, 4.5 lakh (450,000) people were affected by left bank erosion of the Ganges in Malda district, upstream of the Farakka Barrage. 22 mouzas in Manickchak , Kaliachak I and Kaliachak II CD Blocks have gone into the river. Other affected areas are in Kaliachak III , Ratua I and Ratua II CD Blocks. The worst-hit areas lie on the left bank of the river stretch between Bhutnidiara and Panchanandapore in the Kaliachak II block. Even in the 1960s, Panchanandapur was a flourishing river port and trading centre. It had the block headquarters, a high school, a sugar mill and a regular weekly market where traders used to come by large boats from Rajmahal , Sahebganj , Dhuliyan and other towns. After being hit by river bank erosion, much of what was at Panchanandapur has shifted to Chethrumahajantola. The Ganga Bhangan Pratirodh Action Nagarik Committee's survey revealed a loss of 750 km² in Kaliachak and Manikchak; 60 primary schools, 14 high schools and coveted mango orchards have gone, leaving 40,000 affected families. [ 3 ]
During the period 1990-2001, Hiranandapur, Manikchak and Gopalpur of Manikchak CD Block and Kakribondha Jhaubona of Kaliachak II CD Block were badly affected by river bank erosion. In 2004-05 large-scale erosion took place in the Kakribondha Jhaubona and Panchanandapur-I gram panchayats of Kaliachak II CD Block and the Dakshin Chandipur, Manikchak and Dharampur gram panchayats of Manikchak CD Block. Kakribondha Jhaubona gram panchayat was lost entirely to river bank erosion; the affected persons and their administrative responsibilities were merged into the Bangitola gram panchayat administration. [ 2 ]
River bank failures occur in two phases. Pre-flood bank failure occurs because of the high pressure of rising water on the bank walls. During floods the area is submerged and water seeps into the weak soil; after the floods, the banks collapse in chunks. Every monsoon a large number of people are affected by river bank erosion. They become landless and lose their livelihood, creating neo-refugees with many social problems; sometimes, poverty leads to an increase in crime. [ 2 ] The consequences of floods are short term, as economic recovery is possible, but the effects of the slow and steady disaster of river bank erosion are permanent: the entire socio-economic structure is damaged and the affected population has to move and settle elsewhere. People seriously affected by river bank erosion in Malda have migrated in search of work as far as Gujarat and Maharashtra. At Byculla , Mumbai, there is a whole colony of erosion-affected people from Malda, where they are often branded as Bangladeshi infiltrators, as they have lost not only their belongings but also their documents to the erosion. Such is the tragedy of these neo-refugees in their own country. [ 3 ]
In the remote past, the Ganges used to flow past Gauda , 40 km downstream from Rajmahal. Over a long period, the river shifted westward, and now it tends to return to its earlier position. Therefore, the whole belt up to Gauda is a risk zone for river bank erosion. [ 2 ]
A group of experts has suggested that the pressure on the left bank be reduced by diverting flow from the eroding channel. [ 2 ] Alternatively, it is possible that in one devastating flood the Ganges will merge with the Kalindri on the eastern side; the combined flow would merge with the Mahananda at Nimasarai Ghat of Malda, and afterwards the collective flow would merge with the Ganges/Padma at Godagari Ghat in Bangladesh. The Ganges has numerous abandoned channels in the area. [ 3 ]
As of 2013, an estimated 2.4 million people reside along the banks of the Ganges alone in Murshidabad district. [ 4 ] The main channel of the Ganges has a bankline of 94 km along its right bank from downstream of the Farakka Barrage to Jalangi, and severe erosion occurs all along this bank. From a little above Nimtita , about 20 km downstream from Farakka, the Ganges flows along the international boundary with Bangladesh on its left bank. The following blocks bear the brunt of erosion year after year: Farakka , Samserganj , Suti I , Suti II , Raghunathganj II , Lalgola , Bhagawangola I , Bhagawangola II , Raninagar I , Raninagar II and Jalangi . [ 3 ]
According to government reports, between 1931 and 1977, 26,769 hectares of land were eroded and many villages were fully submerged, with thousands of people losing their dwellings. Between 1988 and 1994, 206.60 km² of land was eroded, displacing 14,236 families. [ 4 ]
During 1952-53 the old Dhuliyan town was completely washed away by the river. Dhuliyan and its adjoining areas were greatly affected in the mid-1970s, when about 50,000 people became homeless; the encroaching river wiped out 50 mouzas and engulfed about 10,000 hectares of fertile land. [ 3 ] [ 5 ] In August 2020, after nearly 50 years, this region again faced erosion, which washed away dwellings, temples, schools, litchi and mango orchards and agricultural land along the right bank, affecting the Dhanghora, Dhusaripara and Natun Shibpur villages of Samserganj block. [ 6 ] [ 7 ] In September-October 2022, the Pratapganj and Maheshtola areas of Samserganj were the latest victims of river bank erosion: five houses, one temple and several bighas of land were washed away by the eroding river. [ 8 ]
According to the Report on Impact of the Farakka Barrage on the Human Fabric: "People in Murshidabad had been experiencing erosion for the last two centuries but the ravages caused by the mighty Padma at Akheriganj in 1989 and 1990 surpassed all previous records. Akheriganj disappeared from the map destroying 2,766 houses, leaving 23,394 persons homeless many of whom migrated to the newly emerged Nirmal char along the opposite bank…. This area has lost its school, college, places of worship, panchayat office to the raging Padma…. Original Akheriganj of nearly 20,000 inhabitants has gone into the river around 1994." [ 3 ]
" Jalangi situated 50 km east of Baharampur district headquarter has suffered tremendously in 1994-95. At Jalangi Bazaar severe erosion started in September 1995 engulfing nearly 400 metre width of land within a week and then high built up homestead land thereby destroying Jalangi High School, Gram Panchayat Office, Thana and innumerable buildings rendering nearly 12000 people homeless." [ 3 ]
"As per official estimate, till 1992-94 more than 10,000 hectares of chars (flood plain sediment island) have developed in main places, which have become inaccessible from the Indian side but can be reached easily from Bangladesh. The erosion wiped away boundary posts at many places creating border dispute. In Parliament when this issue was raised the House was assured that the boundary was fixed on the map even though the river has shifted". [ 3 ]
"One typical example is that of Nirmal char built by eroding Akheriganj. Here a population of 20,000 lives in an area of 50 sq.km. From here Rajshahi city of Bangladesh can be reached within 45 minutes on road whereas to come to the mainland of India one has to cross the mighty Padma which will take more than three hours. Moreover, the basic infrastructure provided here is too poor and the people’s plight is further heightened by negligence of the mainland administration. Since there is no primary health centre, people go to Rajshahi for treatment. The concept of international border is very much flexible here due to basic problems of living. Instances of fighting for harvesting with Bangladeshi cultivators have been reported again and again apart from the usual problem of allotting created land to the rightful owners. Once again, the question of Bangladeshi infiltrators, the recent fiasco over ISI agents have increased in this district due to these char areas." [ 3 ]
"Downstream of Jangipur Barrage the river Ganga/Padma is swinging away close to river Bhagirathi at Fazilpur leaving only 1.34 km. in width. In 1996, this distance was 2.86 km. If Ganga/Padma actually merges with Bhagirathi due to the natural tendency, it will lead to flood and catastrophe in the entire Bhagirathi basin. Bhagirathi water remains at a higher elevation than the river Ganga/Padma during lean season and if they merge the water of the feeder canal will flow through Padma to Bangladesh defeating the very purpose of the Farakka Project." [ 3 ] | https://en.wikipedia.org/wiki/River_bank_erosion_along_the_Ganges_in_Malda_and_Murshidabad_districts |
River bank failure can be caused when the gravitational forces acting on a bank exceed the forces which hold the sediment together. Failure depends on sediment type, layering, and moisture content . [ 1 ]
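This balance between driving (gravitational) and resisting forces is often summarized as a factor of safety, with failure expected when the ratio falls below one. The following is a minimal sketch for an idealized planar failure surface parallel to the bank face (an infinite-slope model with Mohr–Coulomb strength); the model choice and all parameter values are illustrative, not drawn from the sources cited here.

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma, depth, beta_deg, m=0.0,
                     gamma_w=9.81):
    """Infinite-slope factor of safety with Mohr-Coulomb strength.

    c_eff    -- effective cohesion (kPa)
    phi_deg  -- effective friction angle (degrees)
    gamma    -- unit weight of bank soil (kN/m^3)
    depth    -- depth of the potential failure plane (m)
    beta_deg -- bank slope angle (degrees)
    m        -- fraction of the depth that is saturated (0..1)

    FS < 1 means gravitational driving forces exceed the forces
    holding the sediment together.
    """
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2      # total normal stress (kPa)
    pore = m * gamma_w * depth * math.cos(beta) ** 2  # pore water pressure (kPa)
    resisting = c_eff + (normal - pore) * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Saturation alone can push the same bank from stable to failing:
print(round(factor_of_safety(5.0, 30.0, 18.0, 2.0, 35.0, m=0.0), 2))  # ~1.12
print(round(factor_of_safety(5.0, 30.0, 18.0, 2.0, 35.0, m=1.0), 2))  # ~0.67
```

The same calculation shows why moisture content matters: raising the saturated fraction m increases pore pressure and erodes the frictional resisting term without changing the driving term.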
All river banks experience erosion , but failure depends on the location and the rate at which erosion is occurring. [ 2 ] River bank failure may be caused by house placement, water saturation, weight on the river bank, vegetation, and/or tectonic activity. When structures are built too close to the bank of the river, their weight may exceed the weight which the bank can hold and cause slumping , or accelerate slumping that may already be active. [ 1 ] [ 3 ] Adding to these stresses can be increased saturation caused by irrigation and septic systems, which reduce the soil's strength. [ 4 ] While deep-rooted vegetation can increase the strength of river banks, its replacement with grass and shallower-rooted vegetation can actually weaken the soil. The presence of lawns and concrete driveways concentrates runoff onto the riverbank, weakening it further, and foundations and structures further increase stress. [ 3 ] Although each mode of failure is clearly defined, the soil types, bank composition, and environment must be investigated in order to establish the mode of failure, since multiple types may be present in the same area at different times. Once failure has been classified, steps may be taken to prevent further erosion. If tectonic failure is at fault, research into its effects may aid in the understanding of alluvial systems and their responses to different stresses.
A river bank can be divided into three zones: the toe zone, the bank zone, and the overbank area. The toe zone is the area most susceptible to erosion. [ 2 ] Because it lies between the ordinary water level and the low water level, it is strongly affected by currents and erosional events. [ 2 ] The bank zone is above the ordinary high water level, but can still be affected periodically by currents, and receives the most human and animal traffic. The overbank area is inland of both the toe and bank zones, and can be classified as either a floodplain or a bluff, depending on its slope. [ 2 ] A river bank will respond to erosional activity according to the characteristics of the bank material. The most common type of bank is a stratified or interstratified bank, which consists of cohesionless layers interbedded with cohesive layers. [ 5 ] If cohesive soil is at the toe of the bank, it controls the retreat rate of the overlying layer; if cohesionless soil is at the toe, those layers are not protected by the layers of cohesive soil. A bedrock bank is usually very stable and erodes gradually. A cohesive bank is highly susceptible to erosion at times of falling water levels because of its low permeability . [ 2 ] Failures in cohesive soils occur along rotational or planar failure surfaces , while failures in non-cohesive soils occur in an avalanche fashion. [ 5 ]
Hydraulic processes at or below the surface of the water may entrain sediment and directly cause erosion. Non-cohesive banks are particularly vulnerable to this type of failure, due to bank undercutting, bed degradation, and basal clean-out. [ 6 ]
Hydraulic toe erosion occurs where flow is directed against a bank at a river bend, with the highest velocity at the outer edge and at the central depth of the water. [ 5 ] Centrifugal forces raise the water elevation so that it is highest on the outside of the bend, and as gravity pulls the water downward, a rolling, helical spiral flow develops, with downward velocities against the bank acting as an erosive force. [ 2 ] This effect is strongest in tight bends, and the worst erosion occurs immediately downstream from the point of maximum curvature. Where non-cohesive layers are present, currents remove the material and create a cantilevered overhang of cohesive material. Shear exceeds the critical shear at the toe of the bank, particles are eroded, and the resulting overhang eventually leads to bank retreat and failure. [ 2 ]
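The excess-shear behaviour described here is often written as an empirical erosion-rate law (a widely used form in the bank-erosion literature, given for illustration rather than taken from this article's sources): {\displaystyle \varepsilon =k_{d}(\tau _{b}-\tau _{c})^{a}} for {\displaystyle \tau _{b}>\tau _{c}} , where {\displaystyle \varepsilon } is the erosion rate, {\displaystyle \tau _{b}} the boundary shear stress exerted by the flow, {\displaystyle \tau _{c}} the critical shear stress of the bank material, {\displaystyle k_{d}} an erodibility coefficient, and {\displaystyle a} an exponent often taken as 1. No erosion is predicted while the boundary shear stays at or below the critical value.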
Geotechnical failure usually occurs when stresses on the bank exceed the forces the bank can accommodate. One example is oversaturation of the bank following a lowering of the water level from the floodplain to normal bank levels. Pore water pressure in the saturated bank reduces the frictional shear strength of the soil and increases sliding forces. [ 5 ] This type of failure is most common in fine-grained soils because they cannot drain as rapidly as coarse-grained soils. [ 2 ] It can be accentuated if the banks have already been destabilized by erosion of cohesionless sands, which undermines the bank material and leads to bank collapse. [ 5 ] If the bank has been exposed to freeze-thaw cycles, tension cracks may lead to bank failure, and subsurface moisture weakens internal shear strength. [ 2 ] Capillary action can also decrease the angle of repose of the bank to less than the existing bank slope; this oversteepens the slope and can lead to collapse when the soil dries. [ 2 ]
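The strength reduction by pore water pressure follows from the Mohr–Coulomb criterion in effective-stress form (standard soil mechanics, included as a clarifying aside): {\displaystyle \tau _{f}=c'+(\sigma -u)\tan \varphi '} , where {\displaystyle \tau _{f}} is the shear strength, {\displaystyle c'} the effective cohesion, {\displaystyle \sigma } the total normal stress on the failure plane, {\displaystyle u} the pore water pressure, and {\displaystyle \varphi '} the effective friction angle. A rise in {\displaystyle u} directly reduces the frictional term, which is why slowly draining, fine-grained banks are most vulnerable just after a rapid fall in river level.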
Piping failure may occur when high groundwater seepage pressure, and the rate of flow, increase; this causes collapse of part of the bank. Failure is usually due to selective groundwater flow along interbedded saturated layers within stratified river banks, where lenses of sand and coarser material lie between layers of finer cohesive material. [ 6 ]
Changes in the valley floor slope, which can occur due to tectonics , can influence alluvial rivers. This may cause river bank failure, resulting in hazards to people living near the river and to structures such as bridges, pipelines, and powerline crossings. While large and fast-flowing rivers should maintain their original flow paths, low gradients make the effects of slope changes larger. [ 7 ] Bank failure as the result of tectonics may also lead to avulsion, in which a river abandons its own channel in favor of forming a new one. [ 7 ] Avulsion due to tectonics is most common in rivers experiencing a high stand, in which bank failure has led to a loss of natural levees due to liquefaction and fractures from an earthquake. [ 8 ]
Gravitational failure includes shallow and rotational slides, slab and cantilever failures, and earthflows and dry granular flows. It is the process of detaching sediment, primarily from a cohesive bank, and transporting it fluvially. Shallow failure occurs where a layer of material moves along planes parallel to the bank surface; it is typical of soils with low cohesion and occurs when the angle of the bank exceeds the angle of internal friction. [ 5 ] Popout failure occurs when small to medium-sized blocks are forced out at or near the base of the river bank due to excessive pore water pressure and overburden . The slab of material in the lower half of the bank falls out, leaving an alcove-shaped cavity. This failure is usually associated with steep banks and saturated, finer grained, cohesive bank materials that allow the buildup of positive pore water pressure and strong seepage within the bank. [ 6 ]
Slab failure is the sliding and forward toppling of deep-seated mass into the river channel. Failures are associated with steep, low height, fine grained cohesive banks and occur during low flow conditions. They are the result of a combination of scour at the bank toe, high pore water pressure in the bank material, and tension crack at the top of the bank. [ 6 ]
Cantilever failures occur when an overhanging block collapses into the channel. [ 5 ] Failure often occurs after the bank has been undercut. It usually involves a composite of fine- and coarse-grained material and is active during low-flow conditions. [ 6 ]
Failure caused by dry granular flow typically occurs on undercut, non-cohesive banks at or near the angle of repose. Undercutting increases the local bank angle above the friction angle, and individual grains roll, slide, and bounce down the bank in a layer, usually accumulating at the toe. [ 6 ]
A wet earthflow occurs where loss of strength due to saturation increases the weight of a section of bank and decreases the bank material's strength so that the soil flows as a viscous liquid. [ 2 ] This type of failure usually occurs on low-angle banks, and the affected material flows down the bank to form lobes at the toe. [ 6 ]
Beam failure happens as the result of tension cracks in the overhang, and occurs only when the lower part of an overhang block fails along an almost horizontal failure surface. [ 6 ]
The 1811–12 New Madrid earthquakes caused bank failures along the Mississippi River and represent bank failure caused by tectonic activity in the New Madrid Seismic Zone (NMSZ). [ 9 ] The NMSZ is the result of a failed rift system which remains weak today, and thus is prone to faulting and earthquake activity. [ 9 ] The earthquakes caused immediate bank failure, in which banks fell both above and below the water surface, causing swells large enough to sink a boat. [ 7 ] Some swells were caused by sediment falling into the river, but at other times the swells themselves hitting the banks caused large areas of the Mississippi banks to fall at one time. [ 7 ] The waters of the Mississippi were seen to flow backwards due to the shocks caused by the earthquake, [ 8 ] and large amounts of sediment were introduced into the river. Bank caving was seen as far downriver as Memphis, Tennessee . Vertical offsets may have been the primary source of turbulence, though short-lived. [ 7 ]
Bank failure in this case was located on the Red River and its tributaries; it was caused by erosion and represents slumping. Failure occurs in this area because the river banks are composed of clay from glacial and lake deposition, rather than more resistant sediments such as sand or gravel. [ 1 ] Most commonly, slumping occurs in the Sherack Formation, which sits on the less competent Huot and Brenna Formations. [ 1 ] The Sherack Formation is composed of silt and clay laminations, while the Brenna is a clay deposit. [ 10 ] These less competent formations become exposed where the overlying Sherack Formation is eroded by the river. Cracks can also form in the Sherack Formation, causing weakness in the underlying clay and slumping. The exposed contact between the formations, common in the Red River area, and the inherent weakness at this contact, cause mass wasting of the river bank. [ 11 ] Human activity near the banks of the river then increases failure risks. [ 1 ] Because of this human interference, the best defenses are to avoid unnecessary loading near the river and to raise awareness of the issues leading to failure. [ 1 ] When failure does occur, an understanding of the geotechnical parameters of the slope is necessary, and these are the most heavily relied upon in order to understand the underlying causes. [ 11 ] This can be accomplished by obtaining values for the plastic limit and liquid limit of the soils. [ 1 ]
Also of interest are the interactions between streamflow and sediment contribution. The Red River, on the border of North Dakota and Minnesota, receives contributions from the Pembina River of northeastern North Dakota. [ 10 ] Erosion rates are very high for the Pembina, leading to extensive and steep erosion of its banks. The increased runoff then produces increased streamflow, and thus larger erosion events downstream, such as in the Red River. [ 10 ]
Mitigation of river bank failure depends on many solutions, the most common of which are lime stabilization and retaining walls , riprap and sheet piling , maintaining deep-rooted vegetation, windrows and trenches, sacks and blocks, gabions and mattresses, soil-cement, and avoiding the construction of structures near the banks of the river. [ 12 ]
Riprap consists of rocks and other materials arranged so as to inhibit erosional processes on a river bank. This method is expensive and can experience failure, but it can be used for large areas. [ 3 ] Failure is seen when the bank undergoes particle erosion because the stones are too small to resist shear stress, when removal of individual stones weakens the overall riprap, when the side slope of the bank is too steep for the riprap to resist the displacing forces, or when the gradation of the riprap is too uniform (leaving nothing to fill small spaces). Failure can also occur by slump, translational slide, or modified slump. [ 12 ]
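The sizing problem mentioned above (stones too small to resist shear stress) is classically addressed with velocity-based relations such as the Isbash formula. The sketch below is illustrative only, with Isbash constant values as commonly tabulated; it is a sketch of the idea, not a design procedure.

```python
def isbash_stone_size(velocity_ms: float, turbulent: bool = True,
                      specific_gravity: float = 2.65, g: float = 9.81) -> float:
    """Stable stone diameter (m) from the Isbash relation
    V = E * sqrt(2*g*(Ss - 1)*D), solved for D.

    E (Isbash constant): ~0.86 for highly turbulent flow near the bank toe,
    ~1.20 for low-turbulence, embedded-stone conditions.
    """
    e = 0.86 if turbulent else 1.20
    return velocity_ms ** 2 / (2.0 * g * e ** 2 * (specific_gravity - 1.0))

# A 3 m/s current against the bank calls for stones roughly 0.38 m across;
# uniform gradation around that single size would still leave voids unfilled.
print(round(isbash_stone_size(3.0), 2))
```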
Windrows are piles of erosion-resistant material placed on a river's bank; if buried, they are known as trenches. Where erosion persists at an already identified location, windrows and trenches are made to slide down with the failing bank in order to protect it from further erosion. [ 13 ] The method requires minimal design work, in that installation is simple on high banks, although, like other methods, it can fail. Disadvantages include the windrows and trenches continuing to erode until they intersect the erosion-resistant material, and results have been inconsistent, as the steep slope of the bank leads to increased velocity of the river. [ 12 ]
Sacks and blocks may be used during flooding; sacks are filled with material, while blocks allow for drainage and vegetation growth. This method requires more manual labor and larger amounts of filler material, as all sacks and blocks should be of the same size. [ 12 ]
Gabions are stacked, rectangular wire boxes filled with stones. They are useful on steep slopes where the water is too fast for a riprap technique. They are expensive and labor-intensive, and require periodic inspection for damage and subsequent maintenance, though they have demonstrated positive performance. [ 13 ]
Mattress gabions are broad shallow baskets, useful on smooth riverbanks for the growth of vegetation. Tied side by side and layered next to each other on shallow surfaces, they create a blanket of protection against erosion. [ 12 ]
Articulated concrete mattresses are used in large rivers such as the Mississippi and consist of concrete blocks held together by steel rods. [ 12 ] Quick to install and with a good reputation, they allow for complete coverage of the riverbank when properly placed, which gives them a good service record. [ 12 ] However, open spaces (8%) allow fine material to pass through, and the spaces between the blocks may allow removal of bank material. [ 13 ] The mattresses themselves do not fit well in sharp curves, and it may be costly to remove the vegetation on the bank, which is required for placement. [ 12 ]
The exact placement of soil-cement may differ depending on the slope of the bank. [ 14 ] In rivers with high wave action, a stairstep pattern may be needed to dissipate the energy coming from the waves. [ 12 ] In conditions with lower wave energy, cement may be 'plated' in sheets parallel to the slope; this technique cannot, however, be used on a steep slope. [ 14 ] Soil-cement may have negative effects in freeze/thaw conditions, but positive effects in banks with sand and vegetation, where little strength and impermeability can cause failure. [ 12 ]
Three main types of vegetation are used to prevent bank failure: trees, shrubs, and grasses. Trees provide deep and dense root systems, increasing the stresses a river bank can accommodate. Shrubs are staked into the river bank to provide a protective covering against erosion, creating good plant coverage and soil stability. [ 3 ] Cuttings may be tied together into fascines and placed into shallow trenches parallel to the bank of the river. [ 12 ] Typically, willow and cottonwood poles are the most useful materials, though fiber products may also be used. [ 13 ] [ 15 ] The fascines are then partially buried and staked in place; these bundles of cuttings create log-like structures which root, grow, and create good plant coverage. The structures hold the soil in place and protect the stream bank from erosion. [ 13 ] The use of vegetation to counteract erosional processes is the most labor-intensive method to employ, while also the least expensive. It also improves the habitat and is aesthetically pleasing. On steep banks, however, trees may not be able to stabilize the toe of the bank, and the weight of the tree itself may lead to failure. It is also difficult to grow vegetation in conditions such as freeze-thaw . If not properly protected, wildlife and livestock may damage the vegetation. [ 12 ] | https://en.wikipedia.org/wiki/River_bank_failure |
The river barrier hypothesis is a hypothesis seeking to partially explain the high species diversity in the Amazon Basin , first presented by Alfred Russel Wallace in his 1852 paper On Monkeys of the Amazon . [ 1 ] It argues that the formation and movement of the Amazon and some of its tributaries presented a significant enough barrier to movement for wildlife populations to precipitate allopatric speciation . Facing different selection pressures and genetic drift , the divided populations diverged into separate species.
There are several observable qualities that should be present if speciation has resulted from a river barrier. Divergence of species on either side of the river should increase with the size of the river, expressing weakly or not at all in the headwaters and more strongly in the wider, deeper channels further downriver. Organisms endemic to terra firme forest should be more affected than those that live in alluvial forests alongside the river, as they have a longer distance to cross before reaching appropriate habitat and lowland populations can rejoin relatively frequently when a river shifts or narrows in the early stages of oxbow lake formation. Finally, if a river barrier is the cause of speciation, sister species should exist on opposing shores more frequently than expected by chance.
River barrier speciation occurs when a river is of sufficient size to provide a vicariance for allopatric speciation, or when the river is large enough to prevent or interfere with a genetic exchange between populations. Population division is initiated either when a river shifts into or forms within the range of a species that cannot cross it, effectively splitting the population in half, or when a small founder group is transported across an existing river through random chance. Usually a river's strength as a barrier is viewed as proportional to its width; wider rivers present a longer crossing distance and thus a greater obstacle to movement. Barrier strength varies within a given river; narrow headwaters are easier to cross than wide downstream channels. Rivers that present a barrier for some species in a region may not necessarily do so for all, leading to species-by-species and clade , or genetically distinct group, differences in degree of isolation and differentiation on opposing shores. Large mammals and birds have little trouble crossing most streams, whereas small birds unaccustomed to long-distance flight can have particular difficulties and thus may be more subject to population division. Additionally, rivers more effectively divide species that prefer terra firme forest as meanders and the process of oxbow formation in alluvial regions can narrow otherwise impassable streams.
Many research projects in the Amazon basin have aimed to test the validity of the hypothesis. The southern chestnut-tailed antbird ( Myrmeciza hemimelaena ) is a species that exemplifies the hypothesis in nature. When the antbirds' diversification and distribution were examined throughout the Amazon, three monophyletic , genetically distinct, populations of the bird were found; two of them are currently valid subspecies. Two of the clades existed on either side of the Madeira River , and the third had a range between the Madeira River and two small tributaries, the Jiparaná and Aripuanã . [ 2 ] This is evidence of how these birds may have diversified because riverine barriers limited gene flow. Another study found that saddle-back tamarins follow the premise that gene flow varies along different parts of a river: gene flow was found to be restricted to the narrower headwaters, with a decrease observed toward the mouth. [ 3 ] This is consistent with the hypothesis. Yet some speculate that using a single mechanism to explain diversification in the tropics would be an oversimplification. For example, there is evidence that genetic variation in the blue-crowned manakin may have been influenced by river barriers, Andean uplift, and range expansions. [ 4 ]
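Studies of this kind usually infer gene flow from genetic differentiation between banks rather than observing crossings directly. A minimal sketch of one textbook conversion, Wright's island-model approximation, is given below; the cited papers may use different estimators, and the FST values here are invented purely for illustration.

```python
def migrants_per_generation(fst: float) -> float:
    """Wright's island-model approximation: FST ~ 1 / (4*Nm + 1),
    rearranged to Nm ~ (1/FST - 1) / 4, the effective number of
    migrants exchanged between populations per generation."""
    return (1.0 / fst - 1.0) / 4.0

# Illustrative values: strong differentiation across a wide downstream
# channel vs. weak differentiation near the narrow headwaters.
print(migrants_per_generation(0.25))  # 0.75 migrants per generation
print(migrants_per_generation(0.02))  # 12.25 migrants per generation
```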
Not all studies have found support for the hypothesis. One study tested the riverine hypothesis by observing populations of four species of Amazonian frogs along the Juruá River. The team expected to see gene flows of different volumes when comparing sites that were on the same bank to sites that were across the river. They found that this was not the case. Gene flow seemed to be in almost equal quantities between either set of sites. [ 5 ]
Another study took the hypothesis a step further, postulating that if rivers are barriers to gene flow for certain taxa, then they should be barriers at the community level. Variation in species of frogs and small mammals along and across the banks of the Juruá River was evaluated. No obvious gradient of decreasing similarity in frog and mammal species from the headwaters to the mouth of the river was found, and there was no greater similarity between species living on the same bank than between those on opposite banks of the river. [ 6 ]
These results indirectly dispute the aspects of the hypothesis that attribute speciation to riverine barriers. The validity of the hypothesis was tested further by examining poison dart frogs . The results of this study (Lougheed et al.) were incongruent with the prediction that species on either side of a river would be monophyletic relatives; the study aimed to show that the ridge hypothesis has more credibility than the river hypothesis. [ 7 ] In another experiment, eighty-one species of non-flying mammals were trapped at cross-river sites along the Juruá River. The river seemed to be a barrier for only a few taxa, with the majority either homogeneous throughout the research area or divided into monophyletic upriver and downriver clades. Patton argues that the geographic location of these clades suggests that landform evolution is an under-appreciated factor in diversification in Amazonia. This project further suggests that riverine barriers are not the only mechanism for speciation. [ 8 ]
All these critics argue that other factors influence speciation in Amazonia. Another shortcoming of the hypothesis is that it has been researched mostly in Amazonia rather than in other river basins. Also, shifting river courses may prevent the establishment of stable patterns across rivers, further complicating efforts to test the strength of the hypothesis. [ citation needed ] | https://en.wikipedia.org/wiki/River_barrier_hypothesis
River bifurcation (from Latin : furca , fork) occurs when a river (a bifurcating river ) flowing in a single channel separates into two or more separate streams (called distributaries ) which then continue downstream . Some rivers form complex networks of distributaries, typically in their deltas . If the streams eventually merge again or empty into the same body of water, then the bifurcation forms a river island .
River bifurcation may be temporary or semi-permanent, depending on the strength of the material that is dividing the two distributaries. For example, a mid-stream island of soil or silt in a delta is most likely temporary, due to low material strength. A location where a river divides around a rock fin, e.g. a volcanically formed dike , or a mountain, may be more lasting as a result of higher material strength and resistance to weathering and erosion. A bifurcation may also be man-made, for example when two streams are separated by a long bridge pier.
River bifurcation occurs in many types of rivers. It is common in meandering and braided rivers. In meandering rivers, bifurcations are often unstable in their configuration, and usually result in channel avulsion . [ 1 ] The stability of bifurcation is dependent on the rate of flow of the river upstream as well as the sediment transport of the upper reaches of the branches just after bifurcation occurs. [ 2 ] The evolution of bifurcation is highly dependent on the discharge of the river upstream of the bifurcation. [ 3 ] Unstable bifurcations are bifurcations in which only one channel receives water. Within deltas, these typically create channels with relatively large widths, and are also known as channel avulsions. Stable bifurcations are bifurcations in which both channels receive water. [ 3 ]
In deltas, the directions of distributaries resulting from bifurcation are easily changeable by processes like aggradation , or differential subsidence and compaction . [ 4 ] The number of distributaries that are present is in part determined by the rate of sediment discharge, [ 4 ] and increased sediment discharge leads to more river bifurcation. This then leads to increased numbers of distributaries in deltas.
Delta bifurcation is observed at a characteristic critical angle of approximately 72º. [ 5 ] However, observations and experiments show that many distributary channel bifurcations do not actually exhibit a bifurcation angle of 72º, but rather grow toward this angle over time after the bifurcation is initiated. [ 5 ] This implies that bifurcations in deltas are semi-permanent: many observed channels have not yet reached this angle because of their relatively recent initiation, while some channels that did reach it did not last long enough to be observed.
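As a purely numerical aside (not from the cited sources): 72º is 2π/5 radians, and the angle of an observed bifurcation can be estimated from the direction vectors of its two daughter channels. A minimal sketch in Python, with hypothetical channel directions:

```python
import math

def bifurcation_angle(v1, v2):
    """Angle in degrees between two 2D channel direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical daughter channels, each 36 degrees off the parent axis
left = (math.cos(math.radians(36)), math.sin(math.radians(36)))
right = (math.cos(math.radians(-36)), math.sin(math.radians(-36)))
print(round(bifurcation_angle(left, right)))  # prints 72
```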
As is the case with river confluence , bifurcation is important in dividing land and morphological areas. Rivers are widely used as political boundaries, marking borders between countries, states, and peoples, among other things. Sudden river bifurcation, even if temporary, can split terrain that would otherwise be considered a single region. Bifurcations differ from confluences in that many confluences are considered important sites for cities and trade, whereas, because most bifurcated rivers are only semi-permanent and bifurcations are uncommon, construction is rarely concentrated at sites of river bifurcation.
Distributaries are common components of deltas and are the opposite of tributaries. These distributaries, which result from river bifurcation, are important for the movement and deposition of water, sediment, and nutrients from farther inland to the larger body of water into which the river empties. [ 4 ] Deltas are very important to humans, as delta distributary regions provide homes to roughly half a billion people and are exceptionally biologically rich. [ 3 ]
Bifurcated rivers are largely semi-permanent and are subject to constant changes in configuration driven by evolving terrain and flow rates. As a result, the process by which rivers bifurcate and then gradually deteriorate has been poorly documented. The evolution of river bifurcations from single-channeled to multi-channeled and back again is largely dependent on the discharge rate from the backwater regions of the channel. [ 2 ] Bifurcation of a channel system begins when a single channel is forced to split by a bar of sediment, initiating a two-channel system; however, this does not always result in a system in which both channels receive flow. In braided systems, the evolution of bifurcated channels is largely determined by the water level of adjacent branches of the system. [ 6 ] The water-level differences in braided systems are themselves caused by the closure of branch entrances as a result of bar growth. [ 6 ] In addition to bar growth, differences in the direction of bifurcated river flows caused by compound bar shapes and backwater effects also influence the evolution of the braided system.
Bifurcations move largely as a result of migration of the upstream channel. [ 7 ] The configuration of the bifurcated system is also modified by the migration of bars within the system. [ 7 ] This can cause sudden variations in channel widths, as well as width asymmetry in the system. [ 7 ] Over time, a stable channel system will eventually deteriorate until only one channel receives flow from upstream, converting the stable bifurcation into an unstable one in which the abandoned channel carries no flow.
River bifurcations affect the surrounding area in many ways, most notably by redistributing the flow of water, sediment, and nutrients throughout a watershed and delta. In addition, migrating bifurcations and landforms can alter the terrain in the affected region. Sudden initiation of a bifurcation can cause small-scale flooding of the surrounding area. The opposite process, deterioration of a stable bifurcation into an unstable one, can have similar effects: flow that was formerly split between two channels is directed through one, which can push the remaining channel past bank-full stage, the point at which the water level rises above the river bank. This can also cause flooding, and is a prominent issue in regions where levees are in use. Bifurcations are a major distributor of nutrients and mineral particulates to biologically rich areas in deltas. Sudden deterioration or initiation of bifurcated systems can disrupt the deposition of material required by various organisms, and thus has an indirect impact on surrounding ecosystems via flow patterns. | https://en.wikipedia.org/wiki/River_bifurcation
A river delta is a landform , archetypically triangular, created by the deposition of the sediments that are carried by the waters of a river , where the river merges with a body of slow-moving water or with a body of stagnant water. [ 1 ] [ 2 ] The creation of a river delta occurs at the river mouth , where the river merges into an ocean , a sea , or an estuary , into a lake , a reservoir , or (more rarely) into another river that cannot carry away the sediment supplied by the feeding river. Etymologically, the term river delta derives from the triangular shape (Δ) of the uppercase Greek letter delta . In hydrology , the dimensions of a river delta are determined by the balance between the watershed processes that supply sediment and the watershed processes that redistribute, sequester, and export the supplied sediment into the receiving basin. [ 3 ] [ 4 ]
River deltas are important in human civilization , as they are major agricultural production centers and population centers. [ 5 ] They can provide coastline defence and can impact drinking water supply. [ 6 ] They are also ecologically important, with different species' assemblages depending on their landscape position. On geologic timescales , they are also important carbon sinks . [ 7 ]
A river delta is so named because the shape of the Nile Delta approximates the triangular uppercase Greek letter delta . The triangular shape of the Nile Delta was known to audiences of classical Athenian drama ; the tragedy Prometheus Bound by Aeschylus refers to it as the "triangular Nilotic land", though not as a "delta". [ 8 ] Herodotus 's description of Egypt in his Histories mentions the Delta fourteen times, as "the Delta, as it is called by the Ionians ", including describing the outflow of silt into the sea and the convexly curved seaward side of the triangle. [ 8 ] Despite making comparisons to other river systems deltas, Herodotus did not describe them as "deltas". [ 8 ] The Greek historian Polybius likened the land between the Rhône and Isère rivers to the Nile Delta, referring to both as islands, but did not apply the word delta. [ 8 ] According to the Greek geographer Strabo , the Cynic philosopher Onesicritus of Astypalaea , who accompanied Alexander the Great 's conquests in India , reported that Patalene (the delta of the Indus River ) was "a delta" ( Koinē Greek : καλεῖ δὲ τὴν νῆσον δέλτα , romanized: kalei de tēn nēson délta , lit. 'he calls the island a delta'). [ 8 ] The Roman author Arrian 's Indica states that "the delta of the land of the Indians is made by the Indus river no less than is the case with that of Egypt". [ 8 ]
As a generic term for the landform at the mouth of the river, the word delta is first attested in the English-speaking world in the late 18th century, in the work of Edward Gibbon . [ 9 ]
River deltas form when a river carrying sediment reaches a body of water, such as a lake, ocean, or a reservoir . When the flow enters the standing water, it is no longer confined to its channel and expands in width. This flow expansion results in a decrease in the flow velocity , which diminishes the ability of the flow to transport sediment . As a result, sediment drops out of the flow and is deposited as alluvium , which builds up to form the river delta. [ 11 ] [ 12 ] Over time, this single channel builds a deltaic lobe (such as the bird's-foot of the Mississippi or Ural river deltas), pushing its mouth into the standing water. As the deltaic lobe advances, the gradient of the river channel becomes lower because the river channel is longer but has the same change in elevation (see slope ).
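The velocity drop at the mouth follows from a standard conservation-of-mass (continuity) argument, stated here for clarity rather than taken from the cited sources. For a steady discharge $Q$ through a cross-section of width $w$ and mean depth $h$,

$$Q = u\,w\,h \quad\Longrightarrow\quad u = \frac{Q}{wh},$$

so if the unconfined flow spreads to, say, twice its channel width at roughly the same depth, the mean velocity $u$ falls by about half, and with it the flow's capacity to keep sediment moving.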
As the gradient of the river channel decreases, the amount of shear stress on the bed decreases, which results in the deposition of sediment within the channel and a rise in the channel bed relative to the floodplain . This destabilizes the river channel. If the river breaches its natural levees (such as during a flood), it spills out into a new course with a shorter route to the ocean, thereby obtaining a steeper, more stable gradient. [ 13 ] Typically, when the river switches channels in this manner, some of its flow remains in the abandoned channel. Repeated channel-switching events build up a mature delta with a distributary network.
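The link between gradient and bed shear stress can be made explicit with the depth–slope product, a standard approximation for wide open channels (again a textbook relation, not specific to the cited sources):

$$\tau_0 = \rho g h S,$$

where $\tau_0$ is the bed shear stress, $\rho$ the water density, $g$ gravitational acceleration, $h$ the flow depth, and $S$ the channel slope. As lobe growth lengthens the channel and lowers $S$, $\tau_0$ drops toward the threshold below which sediment can no longer be kept in motion, so the bed aggrades until an avulsion resets the slope.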
Another way these distributary networks form is from the deposition of mouth bars (mid-channel sand and/or gravel bars at the mouth of a river). When this mid-channel bar is deposited at the mouth of a river, the flow is routed around it. This results in additional deposition on the upstream end of the mouth bar, which splits the river into two distributary channels. [ 14 ] [ 15 ] A good example of the result of this process is the Wax Lake Delta .
In both of these cases, depositional processes force the redistribution of deposition from areas of high deposition to areas of low deposition. This results in the smoothing of the planform (or map-view) shape of the delta as the channels move across its surface and deposit sediment. Because the sediment is laid down in this fashion, the shape of these deltas approximates a fan, and the more often the flow changes course, the closer the shape comes to an ideal fan, because more rapid changes in channel position result in more uniform deposition of sediment on the delta front. The Mississippi and Ural River deltas, with their bird's feet, are examples of rivers that do not avulse often enough to form a symmetrical fan shape. Alluvial fan deltas, as their name suggests, avulse frequently and more closely approximate an ideal fan shape.
Most large river deltas discharge to intra-cratonic basins on the trailing edges of passive margins, because the majority of large rivers, such as the Mississippi , Nile , Amazon , Ganges , Indus , Yangtze , and Yellow River , discharge along passive continental margins. [ 16 ] This phenomenon is due mainly to three factors: topography , basin area, and basin elevation. [ 16 ] Topography along passive margins tends to be more gradual and widespread over a greater area, enabling sediment to pile up and accumulate over time to form large river deltas. Topography along active margins tends to be steeper and less widespread, so sediment cannot pile up and accumulate; instead it travels into a steep subduction trench rather than onto a shallow continental shelf .
There are many other lesser factors that could explain why the majority of river deltas form along passive rather than active margins. Along active margins, orogenic sequences cause tectonic activity to form over-steepened slopes, brecciated rocks, and volcanic activity, resulting in deltas forming closer to the sediment source. [ 16 ] [ 17 ] When sediment does not travel far from the source, the sediments that build up are coarser grained and more loosely consolidated, making delta formation more difficult. Tectonic activity on active margins also causes river deltas to form closer to the sediment source, which may affect channel avulsion , delta lobe switching, and autocyclicity. [ 17 ] Active margin river deltas tend to be much smaller and less abundant but may transport similar amounts of sediment. [ 16 ] However, the sediment is never piled up in thick sequences, because it travels into and is deposited in deep subduction trenches. [ 16 ]
At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms, such as deltas, sand bars, spits, and tie channels. Landforms at the river mouth drastically alter the geomorphology and ecosystem. [ 18 ]
Deltas are typically classified according to the main control on deposition, which is a combination of river, wave , and tidal processes, [ 19 ] [ 20 ] depending on the strength of each. [ 21 ] The other two factors that play a major role are landscape position and the grain size distribution of the source sediment entering the delta from the river. [ 22 ]
Fluvial-dominated deltas are found in areas of low tidal range and low wave energy. [ 23 ] Where the river water is nearly equal in density to the basin water, the delta is characterized by homopycnal flow , in which the river water rapidly mixes with basin water and abruptly dumps most of its sediment load. Where the river water has a higher density than the basin water, typically because of a heavy load of sediment, the delta is characterized by hyperpycnal flow , in which the river water hugs the basin bottom as a density current that deposits its sediments as turbidites . Where the river water is less dense than the basin water, as is typical of river deltas on an ocean coastline, the delta is characterized by hypopycnal flow , in which the river water is slow to mix with the denser basin water and spreads out over it as a surface fan. This allows fine sediments to be carried a considerable distance before settling out of suspension. Beds in a hypopycnal delta dip at a very shallow angle, around 1 degree. [ 23 ]
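The three flow types reduce to a simple density comparison between river water and basin water. An illustrative sketch in Python; the tolerance standing in for "nearly equal in density" is a hypothetical placeholder, since in nature the transition is gradational rather than a sharp threshold:

```python
def classify_inflow(rho_river, rho_basin, tol=0.1):
    """Classify a river inflow by density contrast (densities in kg/m^3).

    `tol` is a hypothetical cutoff for 'nearly equal density'; real
    mixing behaviour grades continuously between these end members.
    """
    if abs(rho_river - rho_basin) <= tol:
        return "homopycnal"   # rapid mixing, abrupt deposition
    if rho_river > rho_basin:
        return "hyperpycnal"  # sediment-laden underflow along the basin floor
    return "hypopycnal"       # buoyant surface plume

print(classify_inflow(1000.0, 1025.0))  # fresh water into seawater -> hypopycnal
```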
Fluvial-dominated deltas are further distinguished by the relative importance of the inertia of rapidly flowing water, the importance of turbulent bed friction beyond the river mouth, and buoyancy . Outflow dominated by inertia tends to form Gilbert-type deltas. Outflow dominated by turbulent friction is prone to channel bifurcation, while buoyancy-dominated outflow produces long distributaries with narrow subaqueous natural levees and few channel bifurcations. [ 24 ]
The modern Mississippi River delta is a good example of a fluvial-dominated delta whose outflow is buoyancy-dominated. Channel abandonment has been frequent, with seven distinct channels active over the last 5000 years. Other fluvial-dominated deltas include the Mackenzie delta and the Alta delta. [ 14 ]
A Gilbert delta (named after Grove Karl Gilbert ) is a type of fluvial-dominated [ 25 ] delta formed from coarse sediments, as opposed to gently sloping muddy deltas such as that of the Mississippi. For example, a mountain river depositing sediment into a freshwater lake would form this kind of delta. [ 26 ] [ 27 ] It is commonly a result of homopycnal flow. [ 23 ] Such deltas are characterized by a tripartite structure of topset, foreset, and bottomset beds. River water entering the lake rapidly deposits its coarser sediments on the submerged face of the delta, forming steeply dipping foreset beds. The finer sediments are deposited on the lake bottom beyond this steep slope as more gently dipping bottomset beds. Behind the delta front, braided channels deposit the gently dipping beds of the topset on the delta plain. [ 28 ] [ 29 ]
While some authors describe both lacustrine and marine Gilbert deltas, [ 26 ] others note that their formation is more characteristic of freshwater lakes, where the river water mixes with the lake water more readily (as opposed to a river entering the sea or a salt lake, where the less dense fresh water brought by the river stays on top longer). [ 30 ] Gilbert himself first described this type of delta on Lake Bonneville in 1885. [ 30 ] Elsewhere, similar structures occur, for example, at the mouths of several creeks that flow into Okanagan Lake in British Columbia, forming prominent peninsulas at Naramata , Summerland , and Peachland .
In wave-dominated deltas, wave-driven sediment transport controls the shape of the delta, and much of the sediment emanating from the river mouth is deflected along the coastline. [ 19 ] The relationship between waves and river deltas is quite variable and largely influenced by the deepwater wave regimes of the receiving basin. With a high wave energy near shore and a steeper slope offshore, waves will make river deltas smoother. Waves can also be responsible for carrying sediments away from the river delta, causing the delta to retreat. [ 6 ] For deltas that form further upriver in an estuary, there are complex yet quantifiable linkages between winds, tides, river discharge, and delta water levels. [ 31 ] [ 32 ]
Erosion is also an important control in tide-dominated deltas, such as the Ganges Delta , which may be mainly submarine, with prominent sandbars and ridges. This tends to produce a "dendritic" structure. [ 33 ] Tidal deltas behave differently from river-dominated and wave-dominated deltas, which tend to have a few main distributaries. Once a wave-dominated or river-dominated distributary silts up, it is abandoned, and a new channel forms elsewhere. In a tidal delta, new distributaries are formed during times when there is a lot of water around – such as floods or storm surges . These distributaries slowly silt up at a more or less constant rate until they fizzle out. [ 33 ]
A tidal freshwater delta [ 34 ] is a sedimentary deposit formed at the boundary between an upland stream and an estuary, in the region known as the "subestuary". [ 35 ] Drowned coastal river valleys that were inundated by rising sea levels during the late Pleistocene and subsequent Holocene tend to have dendritic estuaries with many feeder tributaries. Each tributary has its own salinity gradient, from its brackish junction with the mainstem estuary up to the fresh stream feeding the head of tidal propagation. As a result, the tributaries are considered to be "subestuaries". The origin and evolution of a tidal freshwater delta involve processes that are typical of all deltas [ 4 ] as well as processes that are unique to the tidal freshwater setting. [ 36 ] [ 37 ] The combination of processes that create a tidal freshwater delta results in a distinct morphology and unique environmental characteristics. Many tidal freshwater deltas that exist today are a direct consequence of the onset of or changes in historical land use, especially deforestation , intensive agriculture , and urbanization . [ 38 ] These ideas are well illustrated by the many tidal freshwater deltas prograding into Chesapeake Bay along the east coastline of the United States. Research has demonstrated that the accumulating sediments in this estuary derive from post-European settlement deforestation, agriculture, and urban development. [ 39 ] [ 40 ] [ 41 ]
Other rivers, particularly those on coasts with significant tidal range , do not form a delta but enter into the sea in the form of an estuary . Notable examples include the Gulf of Saint Lawrence and the Tagus estuary. [ 42 ] [ 43 ]
In rare cases, the river delta is located inside a large valley and is called an inverted river delta . Sometimes a river divides into multiple branches in an inland area, only to rejoin and continue to the sea. Such an area is called an inland delta , and often occurs on former lake beds. The term was first coined by Alexander von Humboldt for the middle reaches of the Orinoco River , which he visited in 1800. [ 44 ] Other prominent examples include the Inner Niger Delta , [ 45 ] Peace–Athabasca Delta , [ 46 ] the Sacramento–San Joaquin River Delta , [ 47 ] and the Sistan delta of Iran. [ 48 ] The Danube has one in the valley on the Slovak–Hungarian border between Bratislava and Iža . [ 49 ]
In some cases, a river flowing into a flat arid area splits into channels that evaporate as it progresses into the desert. The Okavango Delta in Botswana is one example. [ 50 ] See endorheic basin .
The generic term mega delta can be used to describe very large Asian river deltas, such as the Yangtze , Pearl , Red , Mekong , Irrawaddy , Ganges-Brahmaputra , and Indus . [ 51 ] [ 52 ]
The formation of a delta is complicated, with multiple episodes of deposition cross-cutting one another over time, but in a simple delta three main types of bedding may be distinguished: the bottomset beds, foreset/frontset beds, and topset beds. This three-part structure may be seen on a small scale in crossbedding . [ 26 ] [ 53 ]
Human activities in both deltas and the river basins upstream of deltas can radically alter delta environments. [ 56 ] Upstream land use change such as anti-erosion agricultural practices and hydrological engineering such as dam construction in the basins feeding deltas have reduced river sediment delivery to many deltas in recent decades. [ 57 ] This change means that there is less sediment available to maintain delta landforms, and compensate for erosion and sea level rise , causing some deltas to start losing land. [ 57 ] Declines in river sediment delivery are projected to continue in the coming decades. [ 58 ]
The extensive anthropogenic activities in deltas also interfere with geomorphological and ecological delta processes. [ 59 ] People living on deltas often construct flood defences that prevent flood-borne sediment from settling on the delta, which means that sediment deposition cannot compensate for subsidence and erosion . In addition to interfering with delta aggradation , the pumping of groundwater , [ 60 ] oil , and gas , [ 61 ] and the construction of infrastructure all accelerate subsidence , increasing relative sea level rise. Anthropogenic activities can also destabilise river channels through sand mining , [ 62 ] and cause saltwater intrusion . [ 63 ] There are small-scale efforts to correct these issues, improve delta environments and increase environmental sustainability through sedimentation enhancing strategies .
While nearly all deltas have been impacted to some degree by humans, the Nile Delta and Colorado River Delta are some of the most extreme examples of the devastation caused to deltas by damming and diversion of water. [ 64 ] [ 65 ]
Historical records show that during the Roman Empire and the Little Ice Age, times of considerable anthropogenic pressure, there was significant sediment accumulation in deltas. The Industrial Revolution has only amplified the impact of humans on delta growth and retreat. [ 66 ]
Ancient deltas benefit the economy due to their well-sorted sand and gravel . Sand and gravel are often quarried from these old deltas and used in concrete for highways , buildings, sidewalks, and landscaping. More than 1 billion tons of sand and gravel are produced in the United States alone. [ 67 ] Not all sand and gravel quarries are former deltas, but for ones that are, much of the sorting is already done by the power of water. [ citation needed ]
Urban areas and human habitation tend to be located in lowlands near water access for transportation and sanitation . [ 68 ] This makes deltas a common location for civilizations to flourish due to access to flat land for farming, freshwater for sanitation and irrigation , and sea access for trade. Deltas often host extensive industrial and commercial activities, which frequently come into conflict with agricultural land use. Some of the world's largest regional economies are located on deltas, such as the Pearl River Delta , Yangtze River Delta , the European Low Countries and the Greater Tokyo Area . [ citation needed ]
The Ganges–Brahmaputra Delta , which spans most of Bangladesh and West Bengal and empties into the Bay of Bengal , is the world's largest delta. [ 69 ]
The Selenga River delta in the Russian republic of Buryatia is the largest delta emptying into a body of fresh water, in its case Lake Baikal . [ citation needed ]
Researchers have found a number of examples of deltas that formed in Martian lakes , across a wide geographical range. Finding deltas is a major sign that Mars once had large amounts of water. [ 70 ] | https://en.wikipedia.org/wiki/River_delta
River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as abiotic (nonliving) physical and chemical interactions of its many parts. [ 1 ] [ 2 ] River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-size streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster moving turbulent water typically contains greater concentrations of dissolved oxygen , which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers.
The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae . Anadromous fish are also an important source of nutrients . Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species . [ 3 ] A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding , which damages wetlands , and the retention of sediment , which leads to the loss of deltaic wetlands. [ 4 ]
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin lotus , meaning washed. Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. [ 5 ] Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs . Lotic ecosystems can be contrasted with lentic ecosystems , which involve relatively still terrestrial waters such as lakes, ponds, and wetlands . Together, these two ecosystems form the more general study area of freshwater or aquatic ecology .
The following unifying characteristics make the ecology of running waters unique among aquatic habitats: the flow is unidirectional; there is a state of continuous physical change; there is a high degree of spatial and temporal heterogeneity at all scales ( microhabitats ); the variability between lotic systems is quite high; and the biota is specialized to live with flow conditions. [ 6 ]
The non-living components of an ecosystem, such as stone, air, and soil, are called abiotic components.
Unidirectional water flow is the key factor in lotic (riverine) systems, influencing their ecology . Streamflow can be continuous or intermittent. Streamflow is the result of the summative inputs from groundwater , precipitation , and overland flow . Water flow can vary between systems, ranging from torrential rapids to slow backwaters that almost seem like lentic systems . The velocity of flow can also vary within a system and is subject to chaotic turbulence, though water velocity tends to be highest in the middle part of the stream channel (known as the thalweg ). This turbulence results in divergences of flow from the mean downslope flow vector, as typified by eddy currents. The mean flow rate vector is based on the variability of friction with the bottom or sides of the channel, sinuosity , obstructions, and the incline gradient. [ 5 ] In addition, the amount of water input into the system from direct precipitation, snowmelt , and/or groundwater can affect the flow rate. The amount of water in a stream is measured as discharge (volume per unit time). As water flows downstream, streams and rivers most often gain water volume, so at base flow (i.e., no storm input), smaller headwater streams have very low discharge, while larger rivers have much higher discharge. The "flow regime" of a river or stream includes the general patterns of discharge over annual or decadal time scales, and may capture seasonal changes in flow. [ 7 ] [ 8 ]
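Discharge is commonly estimated in the field with the velocity–area method: mean velocity multiplied by the channel cross-section. A minimal sketch, with hypothetical channel dimensions chosen only to contrast a headwater stream with a larger river:

```python
def discharge(mean_velocity_m_s, width_m, mean_depth_m):
    """Velocity-area estimate of streamflow: Q = v * A, in m^3/s."""
    cross_section_m2 = width_m * mean_depth_m
    return mean_velocity_m_s * cross_section_m2

# Hypothetical headwater stream vs. larger downstream river
print(discharge(0.3, 2.0, 0.2))   # ~0.12 m^3/s
print(discharge(1.0, 50.0, 3.0))  # 150.0 m^3/s
```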
While water flow is strongly determined by slope, flowing waters can alter the general shape or direction of the stream bed, a characteristic also known as geomorphology . The profile of the river water column is made up of three primary actions: erosion, transport, and deposition. Rivers have been described as "the gutters down which run the ruins of continents". [ 9 ] Rivers are continuously eroding , transporting, and depositing substrate, sediment, and organic material. The continuous movement of water and entrained material creates a variety of habitats, including riffles , glides , and pools . [ 10 ]
Light is important to lotic systems, because it provides the energy necessary to drive primary production via photosynthesis , and can also provide refuge for prey species in the shadows it casts. The amount of light that a system receives can be related to a combination of internal and external stream variables. The area surrounding a small stream, for example, might be shaded by surrounding forests or by valley walls. Larger river systems tend to be wide, so the influence of external variables is minimized and the sun reaches the surface. These rivers also tend to be more turbulent, however, and particles in the water increasingly attenuate light as depth increases. [ 10 ] Seasonal and diurnal factors might also play a role in light availability, because the angle of incidence (the angle at which light strikes the water) determines how much light is lost to reflection: the shallower the angle, the more light is reflected. Within the water column, the amount of solar radiation received declines exponentially with depth, a relationship known as Beer's Law . [ 6 ] Additional influences on light availability include cloud cover, altitude, and geographic position. [ 11 ]
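In its usual form (a standard relation, stated here for clarity rather than taken from the cited sources), the Beer–Lambert law gives the downwelling irradiance at depth $z$ as

$$I(z) = I_0 \, e^{-kz},$$

where $I_0$ is the irradiance just below the surface and $k$ is an attenuation coefficient that grows with turbidity. Plotted on a logarithmic axis the decline is a straight line, which is the sense in which it is sometimes described as logarithmic.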
Most lotic species are poikilotherms whose internal temperature varies with their environment, so temperature is a key abiotic factor for them. Water can be heated or cooled through radiation at the surface and through conduction to or from the air and the surrounding substrate. Shallow streams are typically well mixed and maintain a relatively uniform temperature within an area. In deeper, slower-moving water systems, however, a strong difference between the bottom and surface temperatures may develop. Spring-fed systems have little variation, as springs typically issue from groundwater sources, which are often very close to ambient temperature. [ 6 ] Many systems show strong diurnal fluctuations, and seasonal variations are most extreme in arctic, desert and temperate systems. [ 6 ] The amount of shading, climate, and elevation can also influence the temperature of lotic systems. [ 5 ]
Water chemistry in river ecosystems varies depending on which dissolved solutes and gases are present in the water column of the stream. Specifically, river water can include, apart from the water itself, dissolved inorganic and organic solutes as well as dissolved gases. [ citation needed ]
Dissolved stream solutes can be considered either reactive or conservative . Reactive solutes are readily assimilated biologically by the autotrophic and heterotrophic biota of the stream; examples include inorganic nitrogen species such as nitrate or ammonium , some forms of phosphorus (e.g., soluble reactive phosphorus), and silica . Other solutes can be considered conservative, which indicates that the solute is not taken up and used biologically; chloride is often considered a conservative solute. Conservative solutes are often used as hydrologic tracers for water movement and transport. Both reactive and conservative stream water chemistry is determined foremost by inputs from the geology of the watershed , or catchment area. Stream water chemistry can also be influenced by precipitation and by the addition of pollutants from human sources. [ 5 ] [ 10 ] Large differences in chemistry do not usually exist within small lotic systems due to a high rate of mixing. In larger river systems, however, the concentrations of most nutrients, dissolved salts, and pH decrease as distance increases from the river's source. [ 6 ]
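One common field use of a conservative tracer such as chloride is dilution gauging: tracer solution is injected at a constant rate, and the downstream plateau concentration yields the discharge by mass balance. A minimal sketch of the standard constant-rate-injection formula; the concentrations below are hypothetical:

```python
def dilution_discharge(q_inj_l_s, c_inj, c_plateau, c_background):
    """Stream discharge (L/s) from a constant-rate conservative-tracer injection.

    Mass balance: Q * c_background + q_inj * c_inj = (Q + q_inj) * c_plateau,
    solved for Q. Concentrations share any common unit (e.g., mg/L).
    """
    return q_inj_l_s * (c_inj - c_plateau) / (c_plateau - c_background)

# Hypothetical chloride injection: 0.05 L/s of an 80 g/L solution, giving a
# downstream plateau of 12 mg/L over a 4 mg/L natural background
print(dilution_discharge(0.05, 80000.0, 12.0, 4.0))  # ~500 L/s
```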
In terms of dissolved gases, oxygen is likely the most important chemical constituent of lotic systems, as all aerobic organisms require it for survival. It enters the water mostly via diffusion at the water-air interface. Oxygen's solubility in water decreases as water pH and temperature increase. Fast, turbulent streams expose more of the water's surface area to the air and tend to have low temperatures, and thus more oxygen, than slow backwaters. [ 6 ] Oxygen is a byproduct of photosynthesis, so systems with a high abundance of aquatic algae and plants may also have high concentrations of oxygen during the day. These levels can decrease significantly during the night, when primary producers switch to respiration. Oxygen can be limiting if circulation between the surface and deeper layers is poor, if the activity of lotic animals is very high, or if a large amount of organic decay is occurring. [ 10 ]
Rivers can also transport suspended inorganic and organic matter. These materials can include sediment [ 12 ] or terrestrially derived organic matter that falls into the stream channel. [ 13 ] Often, organic matter is processed within the stream via mechanical fragmentation, consumption and grazing by invertebrates, and microbial decomposition. [ 14 ] Leaves and woody debris break down from recognizable coarse particulate organic matter (CPOM) into particulate organic matter (POM) and, ultimately, fine particulate organic matter (FPOM). Woody and non-woody plants have different instream breakdown rates, with leafy plants or plant parts (e.g., flower petals) breaking down faster than woody logs or branches. [ 15 ]
The inorganic substrate of lotic systems is composed of the geologic material present in the catchment that is eroded, transported, sorted, and deposited by the current. Inorganic substrates are classified by size on the Wentworth scale , which ranges from boulders, to pebbles, to gravel, to sand, and to silt. [ 6 ] Typically, substrate particle size decreases downstream with larger boulders and stones in more mountainous areas and sandy bottoms in lowland rivers. This is because the higher gradients of mountain streams facilitate a faster flow, moving smaller substrate materials further downstream for deposition. [ 10 ] Substrate can also be organic and may include fine particles, autumn shed leaves, large woody debris such as submerged tree logs, moss, and semi-aquatic plants. [ 5 ] Substrate deposition is not necessarily a permanent event, as it can be subject to large modifications during flooding events. [ 10 ]
The living components of an ecosystem are called the biotic components. Streams have numerous types of biotic organisms that live in them, including bacteria, primary producers, insects and other invertebrates, as well as fish and other vertebrates.
Bacteria are present in large numbers in lotic waters. Free-living forms are associated with decomposing organic material, biofilm on the surfaces of rocks and vegetation, in between particles that compose the substrate, and suspended in the water column . Other forms are also associated with the guts of lotic organisms as parasites or in commensal relationships. [ 6 ] Bacteria play a large role in energy recycling (see below ). [ 5 ]
Diatoms are one of the dominant groups of periphytic algae in lotic systems and have been widely used as efficient indicators of water quality, because they respond quickly to environmental changes, especially organic pollution and eutrophication, and have a broad spectrum of tolerances to conditions ranging from oligotrophic to eutrophic. [ 17 ] [ 18 ] [ 19 ]
Fungi are also very frequently present in lotic environments. They are mostly microscopic, found for the most part as asexual ( anamorph ) aquatic hyphomycete spores , or less frequently as sexual ( teleomorph ) spores floating freely in the water. However, the main body of the fungus, the mycelium , lives freely in sediments , on decaying organic material, [ 20 ] as a parasite on or in other organisms (such as animals or algae ), [ 21 ] [ 22 ] as an endophyte in plants, [ 23 ] or as a mutualist in the guts of insects . [ 24 ]
A biofilm is a combination of algae (diatoms etc.), fungi, bacteria, and other small microorganisms that exist in a film along the streambed or the benthos . [ 26 ] Biofilm assemblages themselves are complex, [ 27 ] and add to the complexity of a streambed.
The different biofilm components (algae and bacteria being the principal ones) are embedded in an exopolysaccharide (EPS) matrix; they are net receptors of inorganic and organic elements and remain subject to the influence of different environmental factors. [ 25 ]
Biofilms are one of the main biological interphases in river ecosystems, and probably the most important in intermittent rivers , where the importance of the water column is reduced during extended low-activity periods of the hydrological cycle . [ 25 ] Biofilms can be understood as microbial consortia of autotrophs and heterotrophs coexisting in a matrix of hydrated extracellular polymeric substances (EPS). These two main biological components are, respectively, mainly algae and cyanobacteria on one side, and bacteria and fungi on the other. [ 25 ] Micro - and meiofauna also inhabit the biofilm, preying on its organisms and organic particles and contributing to its evolution and dispersal. [ 28 ] Biofilms therefore form a highly active biological consortium, ready to use organic and inorganic materials from the water phase, and also ready to use light or chemical energy sources. The EPS immobilizes the cells and keeps them in close proximity, allowing for intense interactions, including cell-cell communication and the formation of synergistic consortia. [ 29 ] The EPS is able to retain extracellular enzymes and therefore allows the utilization of materials from the environment and the transformation of these materials into dissolved nutrients for use by algae and bacteria. At the same time, the EPS protects the cells from desiccation as well as from other hazards (e.g., biocides , UV radiation , etc.) of the outer world. [ 25 ] On the other hand, the packing and the EPS protection layer limit the diffusion of gases and nutrients, especially for cells far from the biofilm surface; this limits their survival and creates strong gradients within the biofilm. Both the physical structure of the biofilm and the plasticity of the organisms that live within it ensure and support their survival in harsh environments or under changing environmental conditions. [ 25 ]
Algae, consisting of phytoplankton and periphyton , are the most significant sources of primary production in most streams and rivers. [ 6 ] Phytoplankton float freely in the water column and thus are unable to maintain populations in fast flowing streams. They can, however, develop sizeable populations in slow moving rivers and backwaters. [ 5 ] Periphyton are typically filamentous and tufted algae that can attach themselves to objects to avoid being washed away by fast currents. In places where flow rates are negligible or absent, periphyton may form a gelatinous, unanchored floating mat. [ 10 ]
Plants exhibit limited adaptations to fast flow and are most successful in reduced currents. More primitive plants, such as mosses and liverworts, attach themselves to solid objects. This typically occurs in colder headwaters, where the mostly rocky substrate offers attachment sites. Some plants are free-floating at the water's surface in dense mats, like duckweed or water hyacinth . Others are rooted and may be classified as submerged or emergent. Rooted plants usually occur in areas of slackened current where fine-grained soils are found. [ 11 ] [ 10 ] These rooted plants are flexible, with elongated leaves that offer minimal resistance to current. [ 1 ]
Living in flowing water can be beneficial to plants and algae because the current is usually well aerated and it provides a continuous supply of nutrients. [ 10 ] These organisms are limited by flow, light, water chemistry, substrate, and grazing pressure. [ 6 ] Algae and plants are important to lotic systems as sources of energy, for forming microhabitats that shelter other fauna from predators and the current, and as a food resource. [ 11 ]
Up to 90% of invertebrates in some lotic systems are insects . These species exhibit tremendous diversity and can be found occupying almost every available habitat, including the surfaces of stones, deep below the substratum in the hyporheic zone , adrift in the current, and in the surface film. [ citation needed ]
Insects have developed several strategies for living in the diverse flows of lotic systems. Some avoid high current areas, inhabiting the substratum or the sheltered side of rocks. Others have flat bodies to reduce the drag forces they experience from living in running water. [ 30 ] Some insects, like the giant water bug ( Belostomatidae ), avoid flood events by leaving the stream when they sense rainfall. [ 31 ] In addition to these behaviors and body shapes, insects have different life history adaptations to cope with the naturally-occurring physical harshness of stream environments. [ 32 ] Some insects time their life events based on when floods and droughts occur. For example, some mayflies synchronize when they emerge as flying adults with when snowmelt flooding usually occurs in Colorado streams. Other insects do not have a flying stage and spend their entire life cycle in the river.
Like most of the primary consumers, lotic invertebrates often rely heavily on the current to bring them food and oxygen. [ 33 ] Invertebrates are important as both consumers and prey items in lotic systems. [ citation needed ]
The common orders of insects that are found in river ecosystems include Ephemeroptera (also known as mayflies ), Trichoptera (also known as caddisflies ), Plecoptera (also known as stoneflies ), Diptera (also known as true flies ), some types of Coleoptera (also known as beetles ), Odonata (the group that includes the dragonfly and the damselfly ), and some types of Hemiptera (also known as true bugs). [ citation needed ]
Additional invertebrate taxa common to flowing waters include mollusks such as snails , limpets , clams , mussels , as well as crustaceans like crayfish , amphipoda and crabs . [ 10 ]
Fish are probably the best-known inhabitants of lotic systems. The ability of a fish species to live in flowing waters depends upon the speed at which it can swim and the duration for which that speed can be maintained. This ability can vary greatly between species and is tied to the habitat in which the fish can survive. Continuous swimming expends a tremendous amount of energy and, therefore, fishes spend only short periods in full current. Instead, individuals remain close to the bottom or the banks, behind obstacles, and sheltered from the current, swimming in the current only to feed or change locations. [ 1 ] Some species have adapted to living only on the system bottom, never venturing into the open water flow. These fishes are dorso-ventrally flattened to reduce flow resistance and often have eyes on top of their heads to observe what is happening above them. Some also have sensory barbels positioned under the head to assist in probing the substratum. [ 11 ]
Lotic systems typically connect to each other, forming a path to the ocean (spring → stream → river → ocean), and many fishes have life cycles that require stages in both fresh and salt water. Salmon , for example, are anadromous species that are born in freshwater but spend most of their adult life in the ocean, returning to fresh water only to spawn. Eels are catadromous species that do the opposite , living in freshwater as adults but migrating to the ocean to spawn. [ 6 ]
Other vertebrate taxa that inhabit lotic systems include amphibians such as salamanders ; reptiles (e.g. snakes, turtles, crocodiles and alligators); various bird species; and mammals (e.g., otters , beavers , hippos , and river dolphins ). With the exception of a few species, these vertebrates are not tied to water as fishes are, and spend part of their time in terrestrial habitats. [ 6 ] Many fish species are important as consumers and as prey species for the larger vertebrates mentioned above. [ citation needed ]
The concept of trophic levels is used in food webs to visualise the manner in which energy is transferred from one part of an ecosystem to another. [ 34 ] Trophic levels can be assigned numbers indicating how far along the food chain an organism is.
All energy transactions within an ecosystem derive from a single external source of energy, the sun. [ 34 ] Some of this solar radiation is used by producers (plants) to turn inorganic substances into organic substances that can be used as food by consumers (animals). [ 34 ] Plants release portions of this energy back into the ecosystem through catabolic processes. Animals then consume the potential energy stored by the producers. The cycle closes with the death of the consumer organism, which returns nutrients to the ecosystem and allows further growth of the plants. Breaking cycles down into levels makes it easier for ecologists to understand ecological succession when observing the transfer of energy within a system. [ 34 ]
A common issue in trophic level dynamics is how resources and production are regulated. [ 35 ] The use of and interaction between resources have a large impact on the structure of food webs as a whole. Temperature plays a role in food web interactions, including the top-down and bottom-up forces within ecological communities. Bottom-up regulation within a food web occurs when a resource available at the base or bottom of the food web increases productivity, which then propagates up the chain and influences the biomass available to organisms at higher trophic levels. [ 35 ] Top-down regulation occurs when a predator population increases. This limits the available prey population, which limits the availability of energy for lower trophic levels within the food chain. Many biotic and abiotic factors can influence top-down and bottom-up interactions. [ 36 ]
Another example of food web interaction is the trophic cascade . Understanding trophic cascades has allowed ecologists to better understand the structure and dynamics of food webs within an ecosystem. [ 36 ] The phenomenon of trophic cascades allows keystone predators to structure the entire food web through the way they interact with their prey. Trophic cascades can cause drastic changes in the energy flow within a food web. [ 36 ] For example, when a top or keystone predator consumes organisms below it in the food web, the density and behavior of the prey change. This, in turn, affects the abundance of organisms consumed further down the chain, resulting in a cascade down the trophic levels. However, empirical evidence shows that trophic cascades are much more prevalent in terrestrial than in aquatic food webs. [ 36 ]
A food chain is a linear system of links that is part of a food web, and represents the order in which organisms are consumed from one trophic level to the next. Each link in a food chain is associated with a trophic level in the ecosystem. The number of steps it takes for energy to travel from the initial source at the bottom to the top of the food web is called the food chain length. [ 38 ] While food chain lengths can fluctuate, aquatic ecosystems start with primary producers, which are consumed by primary consumers, which are consumed by secondary consumers, which in turn can be consumed by tertiary consumers, and so on until the top of the food chain has been reached. [ 39 ] At the top, or even before it is reached, decomposers utilize dead organic material, releasing nutrients into the environment, or are themselves eaten by other organisms. [ 40 ]
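Food chain length has a natural computational reading: in a food web represented as a directed graph, it is the longest feeding path from a basal producer up to the consumer in question. A toy sketch in Python; the species and feeding links are hypothetical, purely for illustration:

```python
from functools import lru_cache

# Toy stream food web: each species maps to what it eats (hypothetical links)
eats = {
    "algae": [],
    "mayfly_larva": ["algae"],
    "minnow": ["mayfly_larva"],
    "trout": ["minnow", "mayfly_larva"],
}

@lru_cache(maxsize=None)
def chain_length(species):
    """Longest feeding path below `species`; 0 for a primary producer."""
    prey = eats[species]
    return 0 if not prey else 1 + max(chain_length(p) for p in prey)

print(chain_length("trout"))  # 3: algae -> mayfly larva -> minnow -> trout
```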
Primary producers start every food chain. Their energy and nutrients come from the sun through photosynthesis . Algae contribute much of the energy and nutrients at the base of the food chain, along with terrestrial litter-fall that enters the stream or river . [ 41 ] The organic compounds produced, such as carbon compounds, are what get transferred up the food chain. Primary producers are consumed by herbivorous invertebrates that act as the primary consumers . The productivity of these producers and the function of the ecosystem as a whole are influenced by the organisms above them in the food chain. [ 42 ]
Primary consumers are the invertebrates and macroinvertebrates that feed upon the primary producers. They play an important role in initiating the transfer of energy from the base trophic level to the next. They are regulatory organisms that facilitate and control rates of nutrient cycling and the mixing of aquatic and terrestrial plant materials. [ 43 ] They also transport and retain some of those nutrients and materials. [ 43 ] These invertebrates fall into many functional groups, including grazers, which feed on the algal biofilm that collects on submerged objects; shredders, which feed on large leaves and detritus and help break down large material; filter feeders , macroinvertebrates that rely on stream flow to deliver fine particulate organic matter (FPOM) suspended in the water column ; and gatherers, which feed on FPOM found on the substrate of the river or stream. [ 43 ]
The secondary consumers in a river ecosystem are the predators of the primary consumers, mainly insectivorous fish. [ 44 ] Consumption of invertebrate insects and macroinvertebrates is another step of energy flow up the food chain. Depending on their abundance, these predatory consumers can shape an ecosystem through the way they affect the trophic levels below them. When fish are abundant and eat many invertebrates, algal biomass and primary production in the stream are greater; when secondary consumers are absent, algal biomass may decrease due to the high abundance of primary consumers. [ 44 ] Energy and nutrients that start with primary producers continue to make their way up the food chain and, depending on the ecosystem, may end with these predatory fish.
The decomposers in a river system are composed of bacteria , fungi , protists , and many invertebrates like insects and snails . They are crucial to benthic food webs and in the river ecosystem as a whole. [ 40 ] The bacteria in river ecosystems live mostly on decaying organic matter from animals and plants . The decomposing fungi in river ecosystems include saprotrophs living mainly on leaf litter and other submerged plant material. [ 20 ] The protists include pseudofungi like oomycetes that also are found on leaves and submerged plant material. The invertebrates include arthropods and molluscs . Most benthic insects and certain snails are functionally classified as " shredders " that break down plant material. [ 45 ]
Diversity , productivity , species richness , composition and stability are all interconnected by a series of feedback loops. Communities can have a series of complex, direct and/or indirect responses to major changes in biodiversity . [ 42 ] Food webs can include a wide array of variables; the three main variables ecologists examine are species richness, biomass or productivity, and stability or resistance to change. [ 42 ] When a species is added to or removed from an ecosystem, it will have an effect on the remaining food web; the intensity of this effect is related to species connectedness and food web robustness. [ 46 ] When a new species is added to a river ecosystem, the intensity of the effect is related to the robustness, or resistance to change, of the current food web. [ 46 ] When a species is removed from a river ecosystem, the intensity of the effect is related to the connectedness of the species to the food web. [ 46 ] An invasive species could be removed with little to no effect, but the removal of important native primary producers, prey, or predatory fish could produce a negative trophic cascade . [ 46 ] One highly variable component of river ecosystems is food supply (the biomass of primary producers ). [ 47 ] The food supply, or type of producers, changes continually with the seasons and with differing habitats within the river ecosystem. [ 47 ] Another highly variable component of river ecosystems is nutrient input from wetland and terrestrial detritus . [ 47 ] Variability in food and nutrient supply is important for the succession , robustness and connectedness of river ecosystem organisms. [ 47 ]
Energy sources can be autochthonous (produced within the stream, for example by algae and aquatic plants) or allochthonous (originating outside the stream, for example as terrestrial leaf litter and other organic matter that falls into the channel).
Invertebrates can be organized into many feeding guilds in lotic systems. Some species are shredders, which use large and powerful mouthparts to feed on non-woody CPOM and its associated microorganisms. Others are suspension feeders , which use their setae , filtering apparatuses, nets, or even secretions to collect FPOM and microbes from the water. These species may be passive collectors, utilizing the natural flow of the system, or they may generate their own current to draw in water and FPOM. [ 5 ] Members of the gatherer-collector guild actively search for FPOM under rocks and in other places where the stream flow has slackened enough to allow deposition. [ 10 ] Grazing invertebrates utilize scraping, rasping, and browsing adaptations to feed on periphyton and detritus . Finally, several families are predatory, capturing and consuming animal prey. Both the number of species and the abundance of individuals within each guild are largely dependent upon food availability. Thus, these values may vary across both seasons and systems. [ 5 ]
Fish can also be placed into feeding guilds . Planktivores pick plankton out of the water column . Herbivore - detritivores are bottom-feeding species that ingest both periphyton and detritus indiscriminately. Surface and water column feeders capture surface prey (mainly terrestrial and emerging insects) and drift ( benthic invertebrates floating downstream). Benthic invertebrate feeders prey primarily on immature insects, but will also consume other benthic invertebrates. Top predators consume fishes and/or large invertebrates. Omnivores ingest a wide range of prey. These can be floral , faunal , and/or detrital in nature. Finally, parasites live off of host species, typically other fishes. [ 5 ] Fish are flexible in their feeding roles, capturing different prey with regard to seasonal availability and their own developmental stage. Thus, they may occupy multiple feeding guilds in their lifetime. The number of species in each guild can vary greatly between systems, with temperate warm water streams having the most benthic invertebrate feeders, and tropical systems having large numbers of detritus feeders due to high rates of allochthonous input. [ 10 ]
Large rivers have comparatively more species than small streams. Many relate this pattern to the greater area and volume of larger systems, as well as an increase in habitat diversity. Some systems, however, show a poor fit between system size and species richness . In these cases, a combination of factors such as historical rates of speciation and extinction , type of substrate , microhabitat availability, water chemistry, temperature, and disturbance such as flooding seem to be important. [ 6 ]
Although many alternate theories have been postulated for the ability of guild-mates to coexist (see Morin 1999), resource partitioning has been well documented in lotic systems as a means of reducing competition. The three main types of resource partitioning include habitat, dietary, and temporal segregation. [ 6 ]
Habitat segregation was found to be the most common type of resource partitioning in natural systems (Schoener, 1974). In lotic systems, microhabitats provide a level of physical complexity that can support a diverse array of organisms (Vinson and Hawkins, 1998). The separation of species by substrate preferences has been well documented for invertebrates. Ward (1992) was able to divide substrate dwellers into six broad assemblages: those that live in coarse substrate, gravel, sand, mud, or woody debris, and those associated with plants, showing one layer of segregation. On a smaller scale, further habitat partitioning can occur on or around a single substrate, such as a piece of gravel. Some invertebrates prefer the high flow areas on the exposed top of the gravel, while others reside in the crevices between one piece of gravel and the next, while still others live on the bottom of this gravel piece. [ 6 ]
Dietary segregation is the second-most common type of resource partitioning. [ 6 ] High degrees of morphological specialization or behavioral differences allow organisms to use specific resources. The size of the nets built by some species of invertebrate suspension feeders , for example, can filter FPOM of varying particle sizes from the water (Edington et al. 1984). Similarly, members of the grazing guild can specialize in the harvesting of algae or detritus depending upon the morphology of their scraping apparatus. In addition, certain species seem to show a preference for specific algal species. [ 6 ]
Temporal segregation is a less common form of resource partitioning, but it is nonetheless an observed phenomenon. [ 6 ] Typically, it accounts for coexistence by relating it to differences in life history patterns and the timing of maximum growth among guild mates. Tropical fishes in Borneo , for example, have shifted to shorter life spans in response to the ecological niche reduction felt with increasing levels of species richness in their ecosystem (Watson and Balon 1984).
Over long time scales, there is a tendency for species composition in pristine systems to remain in a stable state. [ 50 ] This has been found for both invertebrate and fish species. [ 6 ] On shorter time scales, however, flow variability and unusual precipitation patterns decrease habitat stability and can lead to declines in persistence levels. The ability to maintain this persistence over long time scales is related to the ability of lotic systems to return to the original community configuration relatively quickly after a disturbance (Townsend et al. 1987). This is one example of temporal succession, a site-specific change in a community involving changes in species composition over time. Another form of temporal succession might occur when a new habitat is opened up for colonization . In these cases, an entirely new community that is well adapted to the conditions found in this new area can establish itself. [ 6 ]
The River continuum concept (RCC) was an attempt to construct a single framework to describe the function of temperate lotic ecosystems from the headwaters to larger rivers and relate key characteristics to changes in the biotic community (Vannote et al. 1980). [ 51 ] The physical basis for RCC is size and location along the gradient from a small stream eventually linked to a large river. Stream order (see characteristics of streams ) is used as the physical measure of the position along the RCC.
According to the RCC, low ordered sites are small shaded streams where allochthonous inputs of CPOM are a necessary resource for consumers. As the river widens at mid-ordered sites, energy inputs should change. Ample sunlight should reach the bottom in these systems to support significant periphyton production. Additionally, the biological processing of CPOM (coarse particulate organic matter – larger than 1 mm) inputs at upstream sites is expected to result in the transport of large amounts of FPOM (fine particulate organic matter – smaller than 1 mm) to these downstream ecosystems. Plants should become more abundant at edges of the river with increasing river size, especially in lowland rivers where finer sediments have been deposited and facilitate rooting. The main channels likely have too much current and turbidity and a lack of substrate to support plants or periphyton. Phytoplankton should produce the only autochthonous inputs here, but photosynthetic rates will be limited due to turbidity and mixing. Thus, allochthonous inputs are expected to be the primary energy source for large rivers. This FPOM will come from both upstream sites via the decomposition process and through lateral inputs from floodplains.
Biota should change with this change in energy from the headwaters to the mouth of these systems. Namely, shredders should prosper in low-ordered systems and grazers in mid-ordered sites. Microbial decomposition should play the largest role in energy production for low-ordered sites and large rivers, while photosynthesis, in addition to degraded allochthonous inputs from upstream will be essential in mid-ordered systems. As mid-ordered sites will theoretically receive the largest variety of energy inputs, they might be expected to host the most biological diversity (Vannote et al. 1980). [ 5 ] [ 6 ]
Just how well the RCC actually reflects patterns in natural systems is uncertain and its generality can be a handicap when applied to diverse and specific situations. [ 5 ] The most noted criticisms of the RCC are: 1. It focuses mostly on macroinvertebrates , disregarding that plankton and fish diversity is highest in high orders; 2. It relies heavily on the fact that low ordered sites have high CPOM inputs, even though many streams lack riparian habitats; 3. It is based on pristine systems, which rarely exist today; and 4. It is centered around the functioning of temperate streams. Despite its shortcomings, the RCC remains a useful idea for describing how the patterns of ecological functions in a lotic system can vary from the source to the mouth. [ 5 ]
Disturbances such as congestion by dams or natural events such as shore flooding are not included in the RCC model. [ 52 ] Various researchers have since expanded the model to account for such irregularities. For example, J.V. Ward and J.A. Stanford came up with the Serial Discontinuity Concept in 1983, which addresses the impact of geomorphologic disorders such as congestion and integrated inflows. The same authors presented the Hyporheic Corridor concept in 1993, in which the vertical (in depth) and lateral (from shore to shore) structural complexity of the river were connected. [ 53 ] The flood pulse concept , developed by W. J. Junk in 1989, further modified by P. B. Bayley in 1990 and K. Tockner in 2000, takes into account the large amount of nutrients and organic material that makes its way into a river from the sediment of surrounding flooded land. [ 52 ]
Humans exert a geomorphic force that now rivals that of the natural Earth. [ 55 ] [ 56 ] The period of human dominance has been termed the Anthropocene , and several dates have been proposed for its onset. Many researchers have emphasised the dramatic changes associated with the Industrial Revolution in Europe after about 1750 CE (Common Era) and the Great Acceleration in technology at about 1950 CE. [ 57 ] [ 58 ] [ 59 ] [ 60 ] [ 61 ]
However, a detectable human imprint on the environment extends back for thousands of years, [ 62 ] [ 63 ] [ 64 ] [ 65 ] and an emphasis on recent changes minimises the enormous landscape transformation caused by humans in antiquity. [ 66 ] Important earlier human effects with significant environmental consequences include megafaunal extinctions between 14,000 and 10,500 cal yr BP; [ 67 ] domestication of plants and animals close to the start of the Holocene at 11,700 cal yr BP; agricultural practices and deforestation at 10,000 to 5000 cal yr BP; and widespread generation of anthropogenic soils at about 2000 cal yr BP. [ 60 ] [ 68 ] [ 69 ] [ 70 ] [ 71 ] Key evidence of early anthropogenic activity is encoded in early fluvial successions , [ 72 ] [ 73 ] long predating anthropogenic effects that have intensified over the past centuries and led to the modern worldwide river crisis. [ 74 ] [ 75 ] [ 61 ]
River pollution can include but is not limited to: increased sediment export, excess nutrients from fertilizer or urban runoff, [ 76 ] sewage and septic inputs, [ 77 ] plastic pollution , [ 78 ] nano-particles, pharmaceuticals and personal care products, [ 79 ] synthetic chemicals, [ 80 ] road salt, [ 81 ] inorganic contaminants (e.g., heavy metals), and even heat via thermal pollution. [ 82 ] The effects of pollution often depend on the context and material, but can reduce ecosystem functioning , limit ecosystem services , reduce stream biodiversity, and impact human health. [ 83 ]
Pollutant sources of lotic systems are hard to control because they can originate, often in small amounts, over a very wide area and enter the system at many locations along its length. While direct pollution of lotic systems has been greatly reduced in the United States under the government's Clean Water Act , contaminants from diffuse non-point sources remain a large problem. [ 10 ] Agricultural fields often deliver large quantities of sediments, nutrients, and chemicals to nearby streams and rivers. Urban and residential areas can also add to this pollution when contaminants accumulate on impervious surfaces such as roads and parking lots that then drain into the system. Elevated nutrient concentrations, especially of nitrogen and phosphorus, which are key components of fertilizers, can increase periphyton growth, which can be particularly dangerous in slow-moving streams. [ 10 ] Another pollutant, acid rain , forms from sulfur dioxide and nitrous oxide emitted from factories and power stations. These substances readily dissolve in atmospheric moisture and enter lotic systems through precipitation. This can lower the pH of these sites, affecting all trophic levels from algae to vertebrates. [ 11 ] Mean species richness and total species numbers within a system decrease with decreasing pH. [ 6 ]
Flow modification can occur as a result of dams , water regulation and extraction, channel modification, and the destruction of the river floodplain and adjacent riparian zones. [ 84 ]
Dams alter the flow, temperature, and sediment regime of lotic systems. [ 6 ] Additionally, many rivers are dammed at multiple locations, amplifying the impact. Dams can cause enhanced clarity and reduced variability in stream flow, which in turn cause an increase in periphyton abundance. Invertebrates immediately below a dam can show reductions in species richness due to an overall reduction in habitat heterogeneity. [ 10 ] Also, thermal changes can affect insect development, with abnormally warm winter temperatures obscuring cues to break egg diapause and overly cool summer temperatures leaving too few acceptable days to complete growth. [ 5 ] Finally, dams fragment river systems, isolating previously continuous populations, and preventing the migrations of anadromous and catadromous species. [ 10 ]
Invasive species have been introduced to lotic systems through both purposeful events (e.g. stocking game and food species) as well as unintentional events (e.g. hitchhikers on boats or fishing waders). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. [ 6 ] Once established, these species can be difficult to control or eradicate, particularly because of the connectivity of lotic systems. Invasive species can be especially harmful in areas that have endangered biota, such as mussels in the Southeast United States, or those that have localized endemic species, like lotic systems west of the Rocky Mountains, where many species evolved in isolation. | https://en.wikipedia.org/wiki/River_ecosystem |
River engineering is a discipline of civil engineering which studies human intervention in the course, characteristics, or flow of a river with the intention of producing some defined benefit. People have intervened in the natural course and behaviour of rivers since before recorded history—to manage the water resources , to protect against flooding , or to make passage along or across rivers easier. Since the Yuan Dynasty and Ancient Roman times, rivers have been used as a source of hydropower .
From the late 20th century onward, the practice of river engineering has responded to environmental concerns broader than immediate human benefit. Some river engineering projects have focused exclusively on the restoration or protection of natural characteristics and habitats .
Hydromodification encompasses the systematic response to alterations to riverine and non-riverine water bodies such as coastal waters ( estuaries and bays ) and lakes. The U.S. Environmental Protection Agency (EPA) has defined hydromodification as the "alteration of the hydrologic characteristics of coastal and non-coastal waters, which in turn could cause degradation of water resources." [ 1 ] River engineering has often resulted in unintended systematic responses, such as reduced habitat for fish and wildlife, and alterations of water temperature and sediment transport patterns. [ 2 ]
Beginning in the late 20th century, the river engineering discipline has focused more on repairing hydromodified degradations and accounting for potential systematic responses to planned alterations by considering fluvial geomorphology . Fluvial geomorphology is the study of how rivers change their form over time. It draws on a number of sciences, including open channel hydraulics , sediment transport , hydrology , physical geology, and riparian ecology. River engineering practitioners attempt to understand fluvial geomorphology, implement a physical alteration, and maintain public safety. [ 3 ] : 3–13ff
The size of rivers above any tidal limit and their average freshwater discharge are proportionate to the extent of their basins and the amount of rain which, after falling over these basins, reaches the river channels in the bottom of the valleys, by which it is conveyed to the sea. [ 4 ]
The drainage basin of a river is the expanse of country bounded by a watershed (called a "divide" in North America) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. River basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. The size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. [ 4 ]
The rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. When two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. The fall available in a section of a river approximately corresponds to the slope of the country it traverses; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. Accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. [ 4 ]
The irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. In tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. In fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from May to October and from November to April in the Northern hemisphere respectively; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. The only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the Rhône above the Lake of Geneva , and the Arve which joins it below. But even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the Rhone below Lyon has a more uniform discharge than most rivers, as the summer floods of the Arve are counteracted to a great extent by the low stage of the Saône flowing into the Rhone at Lyon, which has its floods in the winter when the Arve, on the contrary, is low. [ 4 ]
Another serious obstacle encountered in river engineering consists in the large quantity of detritus rivers bring down in flood-time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. The power of a current to transport materials varies with its velocity , so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones , which are by degrees ground by attrition in their onward course into shingle , gravel , sand and silt , simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. Accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the Po River in Italy, for instance, pebbles and gravel are found for about 140 miles (225 km) below Turin , sand along the next 100 miles (160 km), and silt and mud in the last 110 miles (176 km). [ 4 ]
The removal of obstructions, natural or artificial (e.g., trunks of trees, boulders and accumulations of gravel) from a river bed furnishes a simple and efficient means of increasing the discharging capacity of its channel. Such removals will consequently lower the height of floods upstream. Every impediment to the flow, in proportion to its extent, raises the level of the river above it so as to produce the additional artificial fall necessary to convey the flow through the restricted channel, thereby reducing the total available fall. [ 4 ]
Reducing the length of the channel by substituting straight cuts for a winding course is the only way in which the effective fall can be increased. This involves some loss of capacity in the channel as a whole, and in the case of a large river with a considerable flow it is difficult to maintain a straight cut owing to the tendency of the current to erode the banks and form again a sinuous channel. Even if the cut is preserved by protecting the banks, it is liable to produce shoals and raise the flood-level in the channel just below its termination. Nevertheless, where the available fall is exceptionally small, as in land originally reclaimed from the sea, such as the English Fenlands , and where, in consequence, the drainage is in a great measure artificial, straight channels have been formed for the rivers. Because of the perceived value in protecting these fertile, low-lying lands from inundation, additional straight channels have also been provided for the discharge of rainfall, known as drains in the fens. Even extensive modification of the course of a river combined with an enlargement of its channel often produces only a limited reduction in flood damage. Consequently, such floodworks are only commensurate with the expenditure involved [ 4 ] where significant assets (such as a town) are under threat. Additionally, even when successful, such floodworks may simply move the problem further downstream and threaten some other town. Recent floodworks in Europe have included restoration of natural floodplains and winding courses, so that floodwater is held back and released more slowly.
Human intervention sometimes inadvertently modifies the course or characteristics of a river, for example by introducing obstructions such as mining refuse, sluice gates for mills, fish-traps, unduly wide piers for bridges and solid weirs . By impeding flow these measures can raise the flood-level upstream. Regulations for the management of rivers may include stringent prohibitions with regard to pollution , requirements for enlarging sluice-ways and the compulsory raising of their gates for the passage of floods, the removal of fish traps , which are frequently blocked up by leaves and floating rubbish, reduction in the number and width of bridge piers when rebuilt, and the substitution of movable weirs for solid weirs. [ 4 ]
By installing gauges in a fairly large river and its tributaries at suitable points, and keeping continuous records for some time of the heights of the water at the various stations, the rise of the floods in the different tributaries, the periods they take in passing down to definite stations on the main river, and the influence they severally exercise on the height of the floods at these places, can be ascertained. With the help of these records, and by observing the times and heights of the maximum rise of a particular flood at the stations on the various tributaries, the time of arrival and height of the top of the flood at any station on the main river can be predicted with remarkable accuracy two or more days beforehand. By communicating these particulars about a high flood to places on the lower river, weir-keepers are enabled to fully open the movable weirs beforehand to permit the passage of the flood, and riparian inhabitants receive timely warning of the impending inundation. [ 4 ]
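The forecasting procedure described above is, in essence, a lookup of historically observed travel times and stage relationships. The following is a minimal sketch of that idea, assuming a pre-computed table of travel times from each tributary station to a main-river station and a crude additive stage relation; all station names, numbers, and the function itself are illustrative assumptions, not an actual forecasting system.

```python
from datetime import datetime, timedelta

# Historically observed flood travel times to the main-river station (illustrative values).
TRAVEL_TIME_HOURS = {"tributary_A": 36.0, "tributary_B": 52.0}

# Crude empirical relation: downstream peak stage contributed per metre of upstream peak stage.
STAGE_FACTOR = {"tributary_A": 0.6, "tributary_B": 0.4}

def predict_peak(observations):
    """Predict arrival time and height of a flood peak at the main station.

    `observations` maps station name -> (peak_time, peak_stage_m).
    Arrival is taken as the latest propagated tributary peak; height as
    the sum of each tributary's scaled contribution.
    """
    arrival = max(
        t + timedelta(hours=TRAVEL_TIME_HOURS[s])
        for s, (t, _) in observations.items()
    )
    height_m = sum(STAGE_FACTOR[s] * h for s, (_, h) in observations.items())
    return arrival, height_m

obs = {
    "tributary_A": (datetime(2024, 2, 1, 6, 0), 4.2),
    "tributary_B": (datetime(2024, 2, 1, 9, 0), 3.1),
}
print(predict_peak(obs))  # (datetime(2024, 2, 3, 13, 0), 3.76)
```

In practice such tables would be rebuilt from the continuous gauge records the passage describes; the point of the sketch is only that, once the travel times are known, the downstream prediction reduces to simple propagation and summation.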
Where portions of a riverside town are situated below the maximum flood-level, or when it is important to protect land adjoining a river from inundations, the overflow of the river must be diverted into a flood-dam or confined within continuous embankments on both sides. By placing these embankments somewhat back from the margin of the river-bed, a wide flood-channel is provided for the discharge of the river as soon as it overflows its banks, while leaving the natural channel unaltered for the ordinary flow. Low embankments may be sufficient where only exceptional summer floods have to be excluded from meadows. Occasionally the embankments are raised high enough to retain the floods during most years, while provision is made for the escape of the rare, exceptionally high floods at special places in the embankments, where the scour of the issuing current is guarded against, and the inundation of the neighboring land is least injurious. In this manner, the increased cost of embankments raised above the highest flood-level of rare occurrence is avoided, as is the danger of breaches in the banks from an unusually high flood-rise and rapid flow, with their disastrous effects. [ 4 ]
A most serious objection to the formation of continuous, high embankments along rivers bringing down considerable quantities of detritus, especially near a place where their fall has been abruptly reduced by descending from mountain slopes onto alluvial plains, is the danger of their bed being raised by deposit , producing a rise in the flood-level, and necessitating a raising of the embankments if inundations are to be prevented. Longitudinal sections of the Po River , taken in 1874 and 1901, show that its bed was materially raised during this period from the confluence of the Ticino to below Caranella , despite the clearance of sediment effected by the rush through breaches. [ citation needed ] Therefore, the completion of the embankments, together with their raising, would only eventually aggravate the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. [ 4 ]
Inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. Channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. One of the most heavily channelized areas in the United States is West Tennessee , where every major stream with one exception (the Hatchie River ) has been partially or completely channelized. [ citation needed ]
Channelization of a stream may be undertaken for several reasons. One is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. Another is to restrict water to a certain area of a stream's natural bottom lands so that the bulk of such lands can be made available for agriculture. A third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. One major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil , and precious topsoil from the outside corners where it flows rapidly due to a change in direction. Unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. It simply washes away.
Channelization has several predictable and negative effects. One of them is loss of wetlands . Wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a "filter" for much of the world's surface fresh water. Another is the fact that channelized streams are almost invariably straightened. For example, the channelization of Florida's Kissimmee River has been cited as a cause contributing to the loss of wetlands. [ 5 ] This straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. It can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. In addition, studies have shown that stream channelization results in declines of river fish populations. [ 3 ] : 3-1ff
A 1971 study of the Chariton River in northern Missouri , United States, found that the channelized section of the river contained only 13 species of fish, whereas the natural segment of the stream was home to 21 species of fish. [ 6 ] The biomass of fish able to be caught in the dredged segments of the river was 80 percent less than in the natural parts of the same stream. This loss of fish diversity and abundance is thought to occur because of reduction in habitat, elimination of riffles and pools, greater fluctuation of stream levels and water temperature, and shifting substrates. The rate of recovery for a stream once it has been dredged is extremely slow, with multiple streams showing no significant recovery 30 to 40 years after the date of channelization. [ 7 ]
For the reasons cited above, in recent years stream channelization has been curtailed in the U.S., and in some instances even partially reversed. In 1990 the United States Government published a " no net loss of wetlands" policy, whereby a stream channelization project in one place must be offset by the creation of new wetlands in another, a process known as "mitigation." [ 8 ] [ needs update ]
The major agency involved in the enforcement of this policy is the same Army Corps of Engineers, which for a number of years was the primary promoter of wide-scale channelization. Often, in the instances where channelization is permitted, boulders may be installed in the bed of the new channel so that water velocity is slowed, and channels may be deliberately curved as well. In 1990 the U.S. Congress gave the Army Corps a specific mandate to include environmental protection in its mission, and in 1996 it authorized the Corps to undertake restoration projects. [ 9 ] The U.S. Clean Water Act regulates certain aspects of channelization by requiring non-Federal entities (i.e. state and local governments, private parties) to obtain permits for dredging and filling operations. Permits are issued by the Army Corps with EPA participation. [ 10 ]
Rivers whose discharge is liable to become quite small at their low stage, or which have a somewhat large fall, as is usual in the upper part of rivers, cannot be given an adequate depth for navigation purely by works which regulate the flow; their ordinary summer level has to be raised by impounding the flow with weirs at intervals across the channel, while a lock has to be provided alongside the weir, or in a side channel, to provide for the passage of vessels. A river is thereby converted into a succession of fairly level reaches rising in steps up-stream, providing still-water navigation comparable to a canal; but it differs from a canal in the introduction of weirs for keeping up the water-level, in the provision for the regular discharge of the river at the weirs, and in the two sills of the locks being laid at the same level instead of the upper sill being raised above the lower one to the extent of the rise at the lock, as usual on canals. [ 4 ]
Canalization secures a definite available depth for navigation; and the discharge of the river generally is amply sufficient for maintaining the impounded water level, as well as providing the necessary water for locking. Navigation, however, is liable to be stopped during the descent of high floods, which in a number of cases rise above the locks; and it is necessarily arrested in cold climates on all rivers by long, severe frosts, and especially by ice. Multiple small rivers, like the Thames above its tidal limit, have been rendered navigable by canalization, and several fairly large rivers have thereby provided a good depth for vessels for considerable distances inland. Thus the canalized Seine has secured a navigable depth of 10½ feet (3.2 metres) from its tidal limit up to Paris, a distance of 135 miles, and a depth of 6¾ feet (2.06 metres) up to Montereau, 62 miles higher up. [ 4 ]
As rivers flow onward towards the sea, they experience a considerable diminution in their fall, and a progressive increase in the basin which they drain, owing to the successive influx of their various tributaries. Thus, their current gradually becomes more gentle and their discharge larger in volume and less subject to abrupt variations; and, consequently, they become more suitable for navigation. Eventually, large rivers, under favorable conditions, often furnish important natural highways for inland navigation in the lower portion of their course, as, for instance, the Rhine , the Danube and the Mississippi . River engineering works are only required to prevent changes in the course of the stream, to regulate its depth, and especially to fix the low-water channel and concentrate the flow in it, so as to increase as far as practicable the navigable depth at the lowest stage of the water level.
Engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage; for with a large fall the current presents a great impediment to up-stream navigation, there are generally large variations in water level, and when the discharge becomes small in the dry season it is impossible to maintain a sufficient depth of water in the low-water channel. [ 4 ]
The possibility of securing uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. A soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. The lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. The removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. Where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. [ 4 ]
The capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. The problem in the dry season is the small discharge and deficiency in scour during this period. A typical solution is to restrict the width of the low-water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. This can be effected by closing subsidiary low-water channels with dikes across them, and narrowing the channel at the low stage by low-dipping cross dikes extending from the river banks down the slope and pointing slightly up-stream so as to direct the water flowing over them into a central channel. [ 4 ]
The needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary . The interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh-water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. The models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. [ 4 ] | https://en.wikipedia.org/wiki/River_engineering |
River linking is a project of linking two or more rivers by creating a network of manually created reservoirs and canals , providing river water to land areas that otherwise lack access to it, and reducing the flow of water to the sea by this means. It is based on the assumption that surplus water in some rivers can be diverted to deficit rivers by creating a network of canals to interconnect them. [ 1 ]
For instance, in India the rainfall over the country is primarily orographic, associated with tropical depressions originating in the Arabian Sea and the Bay of Bengal. The summer monsoon accounts for more than 85 per cent of the precipitation. The uncertainty of rainfall, marked by prolonged dry spells and fluctuations in seasonal and annual totals, is a serious problem for the country. Large parts of Haryana, Maharashtra, Andhra Pradesh, Rajasthan, Gujarat, Madhya Pradesh, Karnataka and Tamil Nadu are not only deficient in rainfall but also subject to large variations, resulting in frequent droughts and causing immense hardship to the population and enormous loss to the nation. Water availability even for drinking purposes becomes critical, particularly in the summer months, as the rivers dry up and the ground water recedes. Regional variations in rainfall lead to situations where some parts of the country do not have enough water even to raise a single crop. On the other hand, excess rainfall occurring in other parts of the country creates havoc due to floods.
Irrigation using river water and ground water has been the prime factor for raising the food grain production in India from a mere 50 million tonnes in the 1950s to more than 200 million tonnes at present, leading India to attain self-sufficiency in food. Irrigated area has increased from 22 million hectares to 95 million hectares during this period. The population of India, which is around 1100 million at present, is expected to increase to 1500 to 1800 million in the year 2050 and that would require about 450 million tonnes of food grains. For meeting this requirement, it would be necessary to increase irrigation potential to 160 million hectares for all crops by 2050. India's maximum irrigation potential that could be created through conventional sources has been assessed to be about 140 million hectares. For attaining a potential of 160 million hectares, other strategies shall have to be evolved.
Floods are a recurring feature, particularly of the Brahmaputra and Ganga rivers, in which almost 60 per cent of the river flows of India occur. Flood damages, which were Rs. 52 crores in 1953, rose to Rs. 5,846 crores in 1998, with the annual average being Rs. 1,343 crores, affecting the states of Assam, Bihar, West Bengal and Uttar Pradesh and causing untold human suffering. On the other hand, large areas in the states of Rajasthan, Gujarat, Andhra Pradesh, Karnataka and Tamil Nadu face recurring droughts; as much as 85 per cent of the drought-prone area falls in these states. One of the most effective ways to increase irrigation potential for raising food grain production, mitigating floods and droughts and reducing regional imbalance in the availability of water is Inter Basin Water Transfer (IBWT) from surplus rivers to deficit areas. The Brahmaputra and Ganga, particularly their northern tributaries, along with the Mahanadi, the Godavari and the west-flowing rivers originating in the Western Ghats, are found to be surplus in water resources. If storage reservoirs can be built on these rivers and connected to other parts of the country, regional imbalances could be reduced significantly and many benefits gained by way of additional irrigation, domestic and industrial water supply, hydropower generation, navigational facilities, etc.
By linking rivers, vast areas of land that would otherwise remain unirrigated and unusable for agriculture can become fertile. [ 2 ]
During heavy rainy seasons, some areas can experience severe floods while others experience drought-like conditions. With a network of linked rivers, this problem can be greatly reduced by channeling excess water to areas that are dry or not experiencing a flood.
With new canals built, new dams to generate hydroelectric power become feasible.
A newly created network of canals opens up new routes and modes of water navigation, which is generally more efficient and cheaper than road transport .
The National River Linking Project (NRLP) is designed to ease water shortages in western and southern India while mitigating the impacts of recurrent floods in the eastern parts of the Ganga basin . The NRLP, if and when implemented, will be one of the biggest interbasin water transfer projects in the world. [ 2 ]
A number of leading environmentalists are of the opinion that the project could be an ecological disaster . There would be a decrease in downstream flows resulting in reduction of fresh water inflows into the seas seriously jeopardizing aquatic life . [ 2 ]
Creation of canals would need large areas of land resulting in large scale deforestation in certain areas. [ 2 ]
The possibility of new dams comes with the threat of large areas of otherwise habitable or protected land being submerged.
As large strips of land might have to be converted to canals, a considerable population living in these areas would need to be resettled elsewhere.
As the rivers interlink, rivers carrying polluted water will be connected to rivers with clean water, contaminating the cleaner rivers. | https://en.wikipedia.org/wiki/River_linking |
A river mile is a measure of distance in miles along a river from its mouth . River mile numbers begin at zero and increase further upstream. The corresponding metric unit using kilometers is the river kilometer . They are analogous to vehicle roadway mile markers , except that river miles are rarely marked on the physical river; instead they are marked on navigation charts, and topographic maps. Riverfront properties are sometimes partially legally described by their river mile.
The river mile is not the same as the length of the river, rather it is a means of locating any feature along the river relative to its distance from the mouth, when measured along the course (or navigable channel) of the river. [ 1 ]
River mile zero may not be exactly at the mouth. For example, the Willamette River (which discharges into the Columbia River ) has its river mile zero at the edge of the navigable channel in the Columbia, some 900 feet (270 m) beyond the mouth. [ 2 ] Also, the river mile zero for the Lower Mississippi River is located at Head of Passes , where the main stem of the Mississippi splits into three major branches before flowing into the Gulf of Mexico . Mileages are indicated as AHP (Above Head of Passes) or BHP (Below Head of Passes). [ 3 ]
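As a small, hedged illustration of the Head of Passes convention, the helper below formats a Lower Mississippi location as AHP or BHP from a signed mileage; the function name and the choice of sign convention are illustrative assumptions, not part of any official scheme.

```python
def format_mississippi_mile(miles_from_head_of_passes: float) -> str:
    """Format a Lower Mississippi location relative to Head of Passes.

    Positive values are taken as upstream (AHP); negative values as
    downstream toward the Gulf (BHP). The sign convention is an
    illustrative choice for this sketch.
    """
    if miles_from_head_of_passes >= 0:
        return f"Mile {miles_from_head_of_passes:.1f} AHP"
    return f"Mile {abs(miles_from_head_of_passes):.1f} BHP"

print(format_mississippi_mile(95.5))   # Mile 95.5 AHP
print(format_mississippi_mile(-10.2))  # Mile 10.2 BHP
```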
River miles are used in a variety of ways. The Commonwealth of Pennsylvania , in its 2001 Pennsylvania Gazetteer of Streams , lists every named stream and every unnamed stream in a named geographic feature in the state, and gives the drainage basin area, mouth coordinates, and river mile, specifically the distance from the mouth of the tributary to the mouth of its parent stream. [ 1 ] Some islands are named for their river mile distance, for example the Allegheny River in Pennsylvania has Six Mile Island, Nine Mile Island, Twelve Mile Island, and Fourteen Mile Island. [ 4 ] [ 5 ] (The last two islands form Allegheny Islands State Park , although Fourteen Mile Island was split into two parts by a dam). [ 6 ] [ 7 ]
The state of Ohio uses the "River Mile System of Ohio", which is "a method to reference locations on streams and rivers of Ohio". [ 8 ] This work began by hand measurements on paper maps between 1972 and 1975 and has since been converted to a computer-based electronic version, which now covers the state in 787 river mile maps. Locations of facilities such as wastewater treatment plants and water quality measurement sites are referenced via river miles. Ohio uses one of two systems. The simplest is just the name of the river and the location in river miles. In cases where there is ambiguity, for example when more than one stream has the same name, it uses a series of river mile strings referring to the distance to the ocean along either the Ohio River (and Mississippi River ) or through Lake Erie (and the Saint Lawrence Seaway ). [ 8 ]
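One plausible way to picture Ohio's disambiguating "river mile strings" is as a chain of (stream, river mile) hops leading back toward the Ohio River or Lake Erie route. The sketch below is an illustrative data structure only; the class name, field layout, and example values are assumptions, not the state's actual encoding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiverMileRef:
    """A location expressed as a chain of (stream name, river mile) hops.

    The last hop names the stream the feature is on; earlier hops walk
    down the receiving streams toward the Ohio River or Lake Erie route,
    disambiguating streams that share a name.
    """
    hops: tuple  # e.g. (("Ohio River", 490.5), ("Scioto River", 132.3), ("Big Run", 2.1))

    def __str__(self) -> str:
        return " / ".join(f"{name} RM {mile}" for name, mile in self.hops)

site = RiverMileRef((("Ohio River", 490.5), ("Scioto River", 132.3), ("Big Run", 2.1)))
print(site)  # Ohio River RM 490.5 / Scioto River RM 132.3 / Big Run RM 2.1
```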
Another example of a river mile system is used by the U.S. Bureau of Reclamation on the Rio Grande in New Mexico. The river miles in central New Mexico are measured upstream from Caballo Dam to near Embudo , New Mexico. For example, a river mile sign in the Albuquerque Bosque (part of Albuquerque's Open Space Park) reads River Mile 184, approximately 184 miles above Caballo Dam . As mentioned earlier, in this system the river mile number increases with distance upstream. Distances in this system are measured in tenths of a mile.
The U.S. Army Corps of Engineers uses river miles for its navigation maps. [ 5 ] | https://en.wikipedia.org/wiki/River_mile |
A river mouth is where a river flows into a larger body of water , such as another river, a lake / reservoir , a bay / gulf , a sea , or an ocean . [ 1 ] At the river mouth, sediments are often deposited due to the slowing of the current, reducing the carrying capacity of the water. [ 1 ] The water from a river can enter the receiving body in a variety of different ways. [ 1 ] The motion of a river is influenced by the relative density of the river compared to the receiving water, the rotation of the Earth, and any ambient motion in the receiving water, such as tides or seiches . [ 2 ]
If the river water has a higher density than the surface of the receiving water, the river water will plunge below the surface. The river water will then either form an underflow or an interflow within the lake. However, if the river water is lighter than the receiving water, as is typically the case when fresh river water flows into the sea, the river water will float along the surface of the receiving water as an overflow.
Alongside these advective transports, inflowing water will also diffuse . [ 1 ]
At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms , such as deltas , sand bars , spits , and tie channels. [ 3 ] Landforms at the river mouth drastically alter the geomorphology and ecosystem. Along coasts, sand bars and similar landforms act as barriers, sheltering sensitive ecosystems that are enriched by nutrients deposited from the river. [ 4 ] However, the damming of rivers can starve the river of sand and nutrients, creating a deficit at the river's mouth. [ 4 ]
As river mouths are the site of large-scale sediment deposition and allow for easy travel and ports, many towns and cities are founded there. Many places in the United Kingdom take their names from their positions at the mouths of rivers, such as Plymouth (i.e. mouth of the Plym River ), Sidmouth (i.e. mouth of the Sid River ), and Great Yarmouth (i.e. mouth of the Yare River ). In Celtic, the corresponding terms are Aber or Inver , from which come numerous place names such as Aberdeen , Abercrombie , and Abernethy , as well as Inverness , Inverkip , and Inveraray .
Due to rising sea levels as a result of climate change, coastal cities are at heightened risk of flooding. Sediment starvation in the river compounds this concern. [ 4 ]
A Rivlin–Ericksen tensor describes the temporal evolution of the strain-rate tensor such that the derivative translates and rotates with the flow field. The first-order Rivlin–Ericksen tensor is given by
A ( 1 ) = ∇ v + ( ∇ v ) T {\displaystyle A^{(1)}=\nabla \mathbf {v} +(\nabla \mathbf {v} )^{\mathsf {T}}}
where v {\displaystyle \mathbf {v} } is the flow velocity, so that A ( 1 ) {\displaystyle A^{(1)}} is twice the strain-rate tensor.
Higher-order tensors may be found iteratively by the expression
A ( n + 1 ) = D D t A ( n ) {\displaystyle A^{(n+1)}={\frac {\mathcal {D}}{{\mathcal {D}}t}}A^{(n)}}
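As a concrete numerical illustration, here is a minimal sketch that evaluates the first two tensors for a steady, spatially homogeneous simple shear, assuming the lower-convected convention (one of the conventions noted below); in that special case the material-derivative term vanishes and each recursion step reduces to A·L + L^T·A, with L the velocity gradient. The shear-rate value is illustrative.

```python
import numpy as np

# Velocity gradient L_ij = dv_i/dx_j for steady simple shear, v = (gamma_dot * y, 0, 0).
gamma_dot = 1.0  # shear rate (illustrative value)
L = np.zeros((3, 3))
L[0, 1] = gamma_dot

# First Rivlin-Ericksen tensor: A1 = L + L^T, i.e. twice the strain-rate tensor.
A1 = L + L.T

def next_rivlin_ericksen(A, L):
    """One recursion step under the lower-convected convention.

    For a steady, homogeneous flow the material derivative of A
    vanishes, leaving only the convective terms A.L + L^T.A.
    """
    return A @ L + L.T @ A

A2 = next_rivlin_ericksen(A1, L)
print(A1)
print(A2)  # single nonzero entry 2*gamma_dot**2 at index (1, 1) for simple shear
```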
The derivative chosen for this expression depends on convention. The upper-convected time derivative , lower-convected time derivative , and Jaumann derivative are often used. | https://en.wikipedia.org/wiki/Rivlin–Ericksen_tensor |
Radon difluoride ( RnF 2 ) is a compound of radon , a radioactive noble gas . Radon reacts readily with fluorine to form a solid compound, but this decomposes on attempted vaporization and its exact composition is uncertain. [ 1 ] [ 2 ] Calculations suggest that it may be ionic , [ 3 ] unlike all other known binary noble gas compounds . The usefulness of radon compounds is limited because of the radioactivity of radon. The longest-lived isotope , radon-222 , has a half-life of only 3.82 days, which decays by α-emission to yield polonium-218. [ 4 ]
When radon is heated to 400 °C with fluorine, radon difluoride is formed. [ 1 ]
Radon difluoride can be reduced to radon and hydrogen fluoride when heated with hydrogen gas at 500 °C. [ 1 ]
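Written as balanced equations, the two reactions described above (with the temperatures as given) take the straightforward forms:
Rn + F 2 → RnF 2 {\displaystyle {\ce {Rn + F2 -> RnF2}}} (at 400 °C)
RnF 2 + H 2 → Rn + 2 HF {\displaystyle {\ce {RnF2 + H2 -> Rn + 2HF}}} (at 500 °C)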
| https://en.wikipedia.org/wiki/RnF2 |
Ro05-4082 ( N-methylclonazepam , ID-690 ) is a benzodiazepine derivative developed in the 1970s. It has sedative and hypnotic properties, and has around the same potency as clonazepam itself. [ 1 ] It was never introduced into clinical use. It is a structural isomer of meclonazepam (3-methylclonazepam), and similarly has been sold as a designer drug , first being identified in Sweden in 2017. [ 2 ]
| https://en.wikipedia.org/wiki/Ro05-4082 |
The reduction-oxidation sensitive green fluorescent protein ( roGFP ) is a green fluorescent protein engineered to be sensitive to changes in the local redox environment. roGFPs are used as redox -sensitive biosensors .
In 2004, researchers in S. James Remington 's lab at the University of Oregon constructed the first roGFPs by introducing two cysteines into the beta barrel structure of GFP. The resulting engineered protein could exist in two different oxidation states (reduced dithiol or oxidized disulfide ), each with different fluorescent properties. [ 2 ]
Originally, members of the Remington lab published six versions of roGFP, termed roGFP1-6 (see more structural details below). Different groups of researchers introduced cysteines at different locations in the GFP molecule, generally finding that cysteines introduced at the amino acid positions 147 and 204 produced the most robust results. [ 3 ]
roGFPs are often genetically encoded into cells for in-vivo imaging of redox potential. In cells, roGFPs can generally be modified by redox enzymes such as glutaredoxin or thioredoxin . roGFP2 preferentially interacts with glutaredoxins and therefore reports the cellular glutathione redox potential . [ 4 ]
Various attempts have been made to make roGFPs that are more amenable to live-cell imaging. Most notably, substituting three positively-charged amino acids adjacent to the disulfide in roGFP1 drastically improves the response rate of roGFPs to physiologically relevant changes in redox potential. The resulting roGFP variants, named roGFP1-R1 through roGFP1-R14, are much more suitable for live-cell imaging. [ 1 ] The roGFP1-R12 variant has been used to monitor redox potential in bacteria and yeast, [ 5 ] [ 6 ] but also for studies of spatially-organized redox potential in live, multicellular organisms such as the model nematode C. elegans . [ 7 ] In addition, roGFPs are used to investigate the topology of ER proteins, or to analyze the ROS production capacity of chemicals . [ 8 ] [ 9 ]
One notable improvement to roGFPs occurred in 2008, when the specificity of roGFP2 for glutathione was further increased by linking it to the human glutaredoxin 1 (Grx1). [ 10 ] By expressing the Grx1-roGFP fusion sensors in the organism of interest and/or targeting the protein to a cellular compartment, it is possible to measure the glutathione redox potential in a specific cellular compartment in real-time and therefore provides major advantages compared to other invasive static methods e.g. HPLC .
Given the variety of roGFPs, some effort has been made to benchmark their performance. For example, members of Javier Apfeld 's group published a method in 2020 describing the 'suitable ranges' of different roGFPs, determined by how sensitive each sensor is to experimental noise in different redox conditions. [ 11 ]
See Kostyuk 2020 [ 12 ] for a more comprehensive review of different redox sensors. | https://en.wikipedia.org/wiki/RoGFP |
The Restriction of Hazardous Substances Directive 2002/95/EC ( RoHS 1 ), short for Directive on the restriction of the use of certain hazardous substances in electrical and electronic equipment , was adopted in February 2003 by the European Union . [ 2 ]
The initiative was to limit the amount of hazardous chemicals in electronics.
The RoHS 1 directive took effect on 1 July 2006 and was required to be enforced and become law in each member state. [ 3 ] This directive restricts (with exceptions ) the use of ten hazardous materials in the manufacture of various types of electronic and electrical equipment. In addition to the exceptions, there are exclusions for products such as solar panels. It is closely linked with the Waste Electrical and Electronic Equipment Directive (WEEE) 2002/96/EC (now superseded [ 4 ] ), which sets collection, recycling and recovery targets for electrical goods and is part of a legislative initiative to solve the problem of huge amounts of toxic electronic waste . In speech, RoHS is often spelled out, or pronounced [ citation needed ] / r ɒ s / , / r ɒ ʃ / , / r oʊ z / , or / ˈ r oʊ h ɒ z / , and refers to the EU standard, unless otherwise qualified.
Each European Union member state will adopt its own enforcement and implementation policies using the directive as a guide.
RoHS is often referred to as the "lead-free directive", but it restricts the use of the following ten substances: lead (Pb), mercury (Hg), cadmium (Cd), hexavalent chromium (Cr VI), polybrominated biphenyls (PBB), polybrominated diphenyl ethers (PBDE), bis(2-ethylhexyl) phthalate (DEHP), butyl benzyl phthalate (BBP), dibutyl phthalate (DBP) and diisobutyl phthalate (DIBP).
Maximum permitted concentration: 0.1% [ 5 ] (0.01% for cadmium [ 5 ] ).
DEHP, BBP, DBP and DIBP were added as part of Directive (EU) 2015/863, which was published on 31 March 2015. [ 5 ]
PBB and PBDE are flame retardants used in several plastics. Hexavalent chromium is used in chrome plating , chromate coatings and primers , and in chromic acid .
The maximum permitted concentrations in non- exempt products are 0.1% or 1000 parts per million (ppm) (except for cadmium , which is limited to 0.01% or 100 ppm) by weight. The restrictions are on each homogeneous material in the product, which means that the limits do not apply to the weight of the finished product, or even to a component, but to any single material that could (theoretically) be separated mechanically – for example, the sheath on a cable or the tinning on a component lead.
As an example, a radio is composed of a case, screws , washers , a circuit board, speakers, etc. The screws, washers, and case may each be made of homogenous materials, but the other components comprise multiple sub-components of many different types of material. For instance, a circuit board is composed of a bare printed circuit board (PCB), integrated circuits (IC), resistors , capacitors , switches, etc. A switch is composed of a case, a lever, a spring, contacts, pins, etc., each of which may be made of different materials. A contact might be composed of a copper strip with a surface coating. A loudspeaker is composed of a permanent magnet, copper wire, paper, etc.
Everything that can be identified as a homogeneous material must meet the limit. So if it turns out that the case was made of plastic with 2,300 ppm (0.23%) PBB used as a flame retardant, then the entire radio would fail the requirements of the directive.
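To make the homogeneous-material rule concrete, here is a minimal sketch of a compliance check, assuming a simple bill-of-materials structure in which each homogeneous material reports its substance concentrations in ppm by weight; the data layout and function name are illustrative, not part of any standard tooling.

```python
# RoHS limits per homogeneous material, in ppm by weight (0.1% = 1000 ppm).
ROHS_LIMITS_PPM = {
    "lead": 1000,
    "mercury": 1000,
    "cadmium": 100,  # cadmium has the tighter 0.01% limit
    "hexavalent_chromium": 1000,
    "pbb": 1000,
    "pbde": 1000,
    "dehp": 1000,
    "bbp": 1000,
    "dbp": 1000,
    "dibp": 1000,
}

def check_rohs(materials):
    """Return a list of (material, substance, ppm) violations.

    `materials` maps each homogeneous material name to a dict of
    substance concentrations in ppm. The limit applies per material,
    not to the finished product as a whole.
    """
    violations = []
    for material, substances in materials.items():
        for substance, ppm in substances.items():
            limit = ROHS_LIMITS_PPM.get(substance)
            if limit is not None and ppm > limit:
                violations.append((material, substance, ppm))
    return violations

# The radio example: a case plastic with 2300 ppm PBB fails the whole product.
radio = {
    "case_plastic": {"pbb": 2300},
    "solder_joints": {"lead": 800},  # under the 1000 ppm limit
    "switch_contact_coating": {"cadmium": 40},
}
print(check_rohs(radio))  # [('case_plastic', 'pbb', 2300)]
```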
In an effort to close RoHS 1 loopholes, in May 2006 the European Commission was asked to review two currently excluded product categories (monitoring and control equipment, and medical devices) for future inclusion in the products that must fall into RoHS compliance. [ 6 ] In addition the commission entertains requests for deadline extensions or for exclusions by substance categories, substance location or weight. [ 7 ] New legislation was published in the official journal in July 2011 which supersedes this exemption.
Note that batteries are not included within the scope of RoHS. However, in Europe, batteries are under the European Commission's 1991 Battery Directive (91/157/EEC [ 8 ] ), which was increased in scope and approved in the new battery directive , version 2003/0282 COD, [ 9 ] which will be official when submitted to and published in the EU's Official Journal. While the first Battery Directive addressed possible trade barrier issues brought about by disparate European member states' implementation, the new directive more explicitly highlights improving and protecting the environment from the negative effects of the waste contained in batteries. It also contains a programme for more ambitious recycling of industrial, automotive, and consumer batteries, gradually increasing the rate of manufacturer-provided collection sites to 45% by 2016. It also sets limits of 5 ppm mercury and 20 ppm cadmium to batteries except those used in medical, emergency, or portable power-tool devices. [ 10 ] Though not setting quantitative limits on quantities of lead, lead–acid, nickel, and nickel–cadmium in batteries, it cites a need to restrict these substances and provide for recycling up to 75% of batteries with these substances. There are also provisions for marking the batteries with symbols in regard to metal content and recycling collection information.
The directive applies to equipment as defined by a section of the WEEE directive. The following numeric categories apply:
It does not apply to fixed industrial plant and tools. Compliance is the responsibility of the company that puts the product on the market, as defined in the Directive; makers of components and sub-assemblies are not responsible for product compliance. Of course, given that the regulation is applied at the homogeneous material level, data on substance concentrations need to be transferred through the supply chain to the final producer. An IPC standard, IPC-1752, has been developed and published to facilitate this data exchange. [ 11 ] It is enabled through two PDF forms that are free to use.
RoHS applies to these products in the EU whether made within the EU or imported. Certain exemptions apply, and these are updated on occasion by the EU.
RoHS restricted substances have been used in a broad array of consumer electronics products. Examples of components that have contained lead include:
Cadmium is found in many of the components above; examples include plastic pigmentation, nickel–cadmium (NiCd) batteries and CdS photocells (used in night lights). Mercury is used in lighting applications and automotive switches; examples include fluorescent lamps and mercury tilt switches (these are rarely used nowadays). Hexavalent chromium is used for metal finishes to prevent corrosion. Polybrominated biphenyls and diphenyl ethers/oxides are used primarily as flame retardants. [ 12 ]
RoHS and other efforts to reduce hazardous materials in electronics are motivated in part to address the global issue of consumer electronics waste. As newer technology arrives at an ever-increasing rate, consumers are discarding their obsolete products sooner than ever. This waste ends up in landfills and in countries like China to be "recycled". [ 13 ]
In the fashion-conscious mobile market, 98 million U.S. cell phones took their last call in 2005. All told, the EPA estimates that in the U.S. that year, between 1.5 and 1.9 million tons of computers, TVs, VCRs, monitors, cell phones, and other equipment were discarded. If all sources of electronic waste are tallied, it could total 50 million tons a year worldwide, according to the UN Environment Programme. [ 14 ]
American electronics sent offshore to countries like Ghana in West Africa under the guise of recycling may be doing more harm than good. Not only are adult and child workers in these jobs being poisoned by heavy metals, but these metals are returning to the U.S. "The U.S. right now is shipping large quantities of leaded materials to China, and China is the world's major manufacturing center," says Dr. Jeffrey Weidenhamer, a chemistry professor at Ashland University in Ohio. "It's not all that surprising things are coming full circle and now we're getting contaminated products back." [ 13 ]
In addition to the high-tech waste problem, RoHS reflects research over the past 50 years in biological toxicology that acknowledges the long-term effects of low-level chemical exposure on populations. New testing is capable of detecting much smaller concentrations of environmental toxicants. Researchers are associating these exposures with neurological, developmental, and reproductive changes.
RoHS and other environmental laws stand in contrast to historical and contemporary laws that seek to address only acute toxicology, that is, direct exposure to large amounts of toxic substances causing severe injury or death. [ 15 ]
The United States Environmental Protection Agency (EPA) has published a life-cycle assessment (LCA) of the environmental impacts of lead-free and tin–lead solder , as used in electronic products. [ 16 ] For bar solders, when only lead-free solders were considered, the tin/copper alternative had the lowest (best) scores. For paste solders, bismuth / tin /silver had the lowest impact scores among the lead-free alternatives in every category except non-renewable resource consumption. For both paste and bar solders, all of the lead-free solder alternatives had a lower (better) LCA score in toxicity categories than tin/lead solder. This is primarily due to the toxicity of lead, and the amount of lead that leaches from printed wiring board assemblies, as determined by the leachability study conducted by the partnership. The study results are providing the industry with an objective analysis of the life-cycle environmental effects of leading candidate alternative lead-free solders, allowing industry to consider environmental concerns along with the traditionally evaluated parameters of cost and performance. This assessment is also allowing industry to redirect efforts toward products and processes that reduce solders' environmental footprint, including energy consumption, releases of toxic chemicals, and potential risks to human health and the environment. Another life-cycle assessment by IKP, University of Stuttgart, shows similar results to those of the EPA study. [ 17 ]
The ban on concentrations of brominated flame retardants (BFR) above 0.1% in plastics has affected plastics recycling. As more and more products include recycled plastics, it has become critical to know the BFR concentration in these plastics, either by tracing the origins of the recycled plastics to establish the BFR concentrations, or by measuring the BFR concentrations from samples. Plastics with high BFR concentrations are costly to handle or to discard, whereas plastics with levels below 0.1% have value as recyclable materials.
There are a number of analytical techniques for the rapid measurement of BFR concentrations. X-ray fluorescence spectroscopy can confirm the presence of bromine (Br), but it does not indicate the BFR concentration or specific molecule. Ion attachment mass spectrometry (IAMS) can be used to measure BFR concentrations in plastics. The BFR ban has significantly affected both upstream (plastic material selection) and downstream (plastic material recycling). [ citation needed ]
The RoHS 2 directive (2011/65/EU) is an evolution of the original directive; it became law on 21 July 2011 and took effect on 2 January 2013. It addresses the same substances as the original directive while improving regulatory conditions and legal clarity. It requires periodic re-evaluations that facilitate gradual broadening of its requirements to cover additional electronic and electrical equipment, cables and spare parts. [ 18 ] The CE logo now indicates compliance, and the RoHS 2 declaration of conformity is now detailed (see below). [ citation needed ]
In 2012, a final report from the European Commission revealed that some EU Member States considered all toys to be under the scope of the primary RoHS 1 Directive 2002/95/EC, irrespective of whether their primary or secondary functions involved electric currents or electromagnetic fields. From the implementation of the RoHS 2 (recast) Directive 2011/65/EU onward, all concerned Member States have to comply with the new regulation.
The key difference in the recast is that it is now necessary to demonstrate conformity in a similar way to the LVD and EMC directives. Failing to show compliance in sufficiently detailed files, or failing to ensure compliance is implemented in production, is now a criminal offence. Like the other CE marking directives, it mandates production control and traceability to the technical files. It describes two methods of achieving presumption of conformity (Directive 2011/65/EU Article 16.2): either the technical files include test data for all materials, or a standard accepted in the Official Journal for the directive is used. Currently the only such standard is EN IEC 63000:2018 (based on IEC 63000:2016, which superseded EN 50581:2012), a risk-based method to reduce the amount of test data required (Harmonised Standards list for RoHS 2, OJEU C363/6).
One consequence of the requirement to demonstrate conformity is the need to know the exemption status of each component; otherwise it is not possible to know whether the product is compliant when it is placed on the market, the only point in time at which the product must be 'compliant'. Many do not understand that 'compliance' varies depending on what exemptions are in force, and it is quite possible to make a non-compliant product from 'compliant' components. Compliance must be determined on the day of placing on the market. In practice this means knowing the exemption status of all components and using up stock of old-status parts before the expiry date of the exemptions (Directive 2011/65/EU Article 7.b, referring to Decision 768/2008/EC Module A Internal production control). Not having a system to manage this could be seen as a lack of diligence, and a criminal prosecution could follow (UK Instrument 2012 No. 3032 section 39 Penalties).
RoHS 2 also has a more dynamic approach to exemptions, creating an automatic expiration if exemptions are not renewed by requests from industry. Additionally, new substances can be added to the controlled list, with four new substances expected to be controlled by 2019. All of this means that greater information control and update systems are required. [ citation needed ]
Other differences include new responsibilities for importers and distributors and markings to improve traceability to the technical files. These are part of the New Legislative Framework (NLF) for directives and make the supply chain a more active part of the policing (Directive 2011/65/EU Articles 7, 9, 10).
A further amendment, Directive (EU) 2017/2102, has since been made to 2011/65/EU.
The RoHS 2 directive (2011/65/EU) contains an allowance to add new materials, and four materials were highlighted for this attention in the original version. The amendment 2015/863 adds four additional substances to Annex II of 2011/65/EU (three of the four new restrictions were recommended for investigation in the original directive; ref. para. 10 of the preamble). This is another reason that simple component RoHS compliance statements are not acceptable, as compliance requirements vary depending on the date the product is placed on the market (ref. IEC 63000:2016). The additional four substance restrictions and evidence requirements apply to products placed on the market on or after 22 July 2019, except where exemptions permit as stated in Annex III, [ 5 ] although at the time of writing no exemptions exist or have been applied for, for these materials.
The four additional substances are the phthalates DEHP, BBP, DBP and DIBP, noted above.
The maximum permitted concentrations in non-exempt products are 0.1%.
The new substances are also listed under the REACH Candidate list, and DEHP is not authorised for manufacturing (use as a substance) in the EU under Annex XIV of REACH. [ 19 ]
With the recast of the original RoHS (I) Directive (2002/95/EC), the scope of the directive was decoupled from the scope of the WEEE Directive and an open scope was introduced. The RoHS (II) Directive (2011/65/EU) was applicable to all electrical and electronic equipment. Scope limitations and exclusions were specifically introduced in Article 2(4) a) – j) of the recast Directive. All other EEE was in scope of the Directive, unless specific exemptions have been granted through Commission delegated acts (see next paragraph).
The scope exclusions are listed below. [ 20 ]
This Directive does not apply to:
There are over 80 exemptions, some of which are quite broad. Exemptions will automatically expire after 5 or 7 years unless renewed. [ 18 ] [ 21 ]
According to Hewlett-Packard : "The European Union is gradually narrowing the scope of and expiring many of the current RoHS exemptions. In addition, it is likely that new substance restrictions will be introduced in the next several years." [ 18 ]
Some exemptions: [ 22 ]
Medical devices were exempt in the original directive. [ 24 ] RoHS 2 narrowed the exemption's scope to active implantable medical devices only (Category 4h). In vitro diagnostic devices (IVDD) and other medical devices are now included. [ 25 ]
Automotive vehicles are exempt (Category 4f). Vehicles instead are addressed in the End of Life Vehicles Directive (Directive 2000/53/EC). [ 26 ]
X-Ray Fluorescence (XRF) helps detect hazardous substances restricted under the RoHS directive. [ 27 ] It identifies heavy metals like lead, mercury, and cadmium quickly and efficiently. XRF works best for inorganic elements, making it a popular tool for compliance testing.
XRF uses X-rays to excite atoms in a sample. When atoms stabilize, they release energy as photons. The device measures this energy to identify elements in the sample. Each element emits a unique energy, allowing precise detection. XRF creates an energy spectrum showing what elements are present and how much of each exists. This data supports both qualitative and quantitative analysis.
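As a toy illustration of that identification step, the sketch below (hypothetical, not instrument firmware) matches measured peak energies against approximate tabulated characteristic line energies; real spectrometers also deconvolve overlapping peaks and convert intensities to concentrations, which this omits.

```python
# Toy XRF peak identification: match each measured peak energy (keV) to the
# nearest tabulated characteristic line. Line energies are approximate
# textbook values; the measured peaks below are invented for illustration.
LINES_KEV = {
    "Cr Ka": 5.41,   # chromium (oxidation state not resolvable by XRF)
    "Hg La": 9.99,   # mercury
    "Pb La": 10.55,  # lead
    "Br Ka": 11.92,  # bromine (total Br only; not the specific BFR molecule)
    "Cd Ka": 23.17,  # cadmium
}

def identify_peaks(peaks_kev, tolerance=0.15):
    """For each measured peak, return the closest known line within tolerance."""
    matches = {}
    for peak in peaks_kev:
        line, energy = min(LINES_KEV.items(), key=lambda kv: abs(kv[1] - peak))
        matches[peak] = line if abs(energy - peak) <= tolerance else "unknown"
    return matches

print(identify_peaks([10.52, 11.90, 6.40]))
# -> {10.52: 'Pb La', 11.9: 'Br Ka', 6.4: 'unknown'}
```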
XRF effectively detects key RoHS-restricted elements [ 28 ] like lead, mercury, and cadmium. It can also find chromium but cannot determine whether it is hexavalent chromium; additional tests are necessary for that. XRF cannot analyze organic compounds such as phthalates or PBBs. It can detect total bromine, which works as a pre-screening method, but these substances require other methods like gas chromatography–mass spectrometry (GC-MS) for precise detection.
XRF offers many advantages. It is fast and non-destructive, meaning samples remain intact after analysis. It works well for solid materials with minimal preparation, saving time and reducing costs. Handheld XRF devices allow on-site testing, making them useful in industrial and environmental settings. These devices analyze samples in seconds, increasing efficiency.
XRF struggles to detect light elements like carbon or oxygen. It also cannot analyze molecules or organic substances. Despite these limits, XRF remains a top choice for detecting heavy metals and ensuring compliance with environmental regulations [ 29 ] like RoHS.
Products within scope of the RoHS 2 directive must display the CE mark , the manufacturer's name and address and a serial or batch number. Parties needing more detailed compliance information can find it on the EU Declaration of Conformity for the product, as created by the manufacturer (brand owner) responsible for the design or the EU representative. The regulation also requires most actors in the supply chain for the product (importers and distributors) to keep and check this document, as well as ensuring a conformance process has been followed and the correct language translation for instructions is provided. The manufacturer must keep certain documentation to demonstrate conformity, known as a technical file or technical records. The directive requires the manufacturer to demonstrate conformity by the use of test data for all materials or by following a harmonised standard (IEC 63000:2016 is the only standard at the time of writing). Regulators may request this file or, more likely, specific data from it, as the file will likely be very large. [ 30 ] [ citation needed ]
RoHS did not require any specific product labelling, but many manufacturers have adopted their own compliance marks to reduce confusion. Visual indicators have included explicit "RoHS compliant" labels, green leaves, check marks, and "PB-Free" markings. Chinese RoHS labels, a lower case "e" within a circle with arrows, can also imply compliance.
RoHS 2 attempts to address this issue by requiring the aforementioned CE mark whose use is policed by the Trading Standards enforcement agency. [ 31 ] It states that the only permitted indication of RoHS compliance is the CE mark. [ 32 ] The closely related WEEE ( Waste Electrical and Electronic Equipment Directive ), which became law simultaneously with RoHS, depicts a waste-can logo with an "X" through it and often accompanies the CE mark.
New substance restrictions being considered for introduction in the next few years include phthalates, brominated flame retardants (BFRs), chlorinated flame retardants (CFRs), and PVC. [ 18 ]
The Consumer Product Safety Act was enacted in 1972 followed by the Consumer Product Safety Improvement Act in 2008.
California has passed the Electronic Waste Recycling Act of 2003 (EWRA). This law prohibits the sale of electronic devices after 1 January 2007 that are prohibited from being sold under the EU RoHS directive, but with a much narrower scope that includes LCDs, CRTs, and the like, and only covers the four heavy metals restricted by RoHS. EWRA also has a restricted material disclosure requirement.
Effective 1 January 2010, the California Lighting Efficiency and Toxics Reduction Act applies RoHS to general purpose lights, i.e. "lamps, bulbs, tubes, or other electric devices that provide functional illumination for indoor residential, indoor commercial, and outdoor use." [ 37 ]
Other US states and cities are debating whether to adopt similar laws, and there are several states that have mercury and PBDE bans already. [ citation needed ]
Canada does not have a dedicated national regulation equivalent to the European Union’s Restriction of Hazardous Substances (RoHS) Directive. However, certain environmental and product safety regulations align with RoHS principles by controlling or restricting the use of hazardous substances in electrical and electronic equipment. At the federal level, the Canadian Environmental Protection Act, 1999 (CEPA) governs the assessment and management of chemical substances, including those restricted under RoHS, such as lead, cadmium, and mercury. [ 38 ] Specific regulations under CEPA, such as the Prohibition of Certain Toxic Substances Regulations, limit or prohibit the manufacture, use, and import of several RoHS-relevant substances. [ 39 ]
On January 31, 2020, the United Kingdom completed its withdrawal from the European Union and subsequently entered a transition phase spanning from February 1 to December 31, 2020. This event is commonly referred to as Brexit. During this transitional period, the United Kingdom conducted a comprehensive assessment of various regulations, including RoHS. UK RoHS stays well aligned with EU RoHS, with similar scopes, restricted substances, thresholds, and exemptions. [ 40 ]
Worldwide standards and certification are available under the QC 080000 standard, governed by the National Standards Authority of Ireland , to ensure the control of hazardous substances in industrial applications.
In 2012 Sweden's Chemicals Agency (Kemi) and Electrical Safety Authority tested 63 consumer electronics products and found that 12 were out of compliance. Kemi claims that this is similar to testing results from prior years. "Eleven products contained prohibited levels of lead, and one of polybrominated diphenyl ether flame retardants. Details of seven companies have been passed to Swedish prosecutors. Kemi says that levels of non-compliance with RoHS are similar to previous years, and remain too high." [ 41 ]
RoHS is not the only environmental standard of which electronic product developers should be aware. Manufacturers will find that it is cheaper to have a single bill of materials for a product that is distributed worldwide, instead of customising the product to fit each country's specific environmental laws. Therefore, they develop their own standards, which permit only the substances allowed under the strictest of the applicable laws.
For example, IBM requires each of its suppliers to complete a Product Content Declaration [ 42 ] form to document compliance with its environmental standard 'Baseline Environmental Requirements for Materials, Parts and Products for IBM Logo Hardware Products'. [ 43 ] Thus, IBM banned DecaBDE , even though there was formerly a RoHS exemption for this material [ 44 ] (overturned by the European Court in 2008). [ 45 ]
Hewlett-Packard maintains a similar environmental standard. [ 46 ]
Adverse effects on product quality and reliability, plus the high cost of compliance (especially for small businesses), are cited as criticisms of the directive, as is early research indicating that the life cycle benefits of lead-free solder versus traditional solder materials are mixed. [ 16 ]
Early criticism came from an industry resistant to change and from a misunderstanding of solders and soldering processes. Deliberate misinformation was spread to resist what was perceived as a "non-tariff barrier created by European bureaucrats." Many believe the industry is stronger now through this experience and has a better understanding of the science and technologies involved. [ 47 ]
One criticism of RoHS is that the restriction of lead and cadmium does not address some of their most prolific applications, while being costly for the electronics industry to comply with. [ citation needed ] Specifically, the total lead used in electronics makes up only 2% of world lead consumption, while 90% of lead is used for batteries (covered by the battery directive, as mentioned above, which requires recycling and limits the use of mercury and cadmium, but does not restrict lead). Another criticism is that less than 4% of lead in landfills is due to electronic components or circuit boards, while approximately 36% is due to leaded glass in cathode-ray tube monitors and televisions, which can contain up to 2 kg per screen. This study was done right after the tech boom . [ 48 ]
The more common lead-free solder systems have a higher melting point, e.g. a 30 °C typical difference for tin-silver-copper alloys, but wave soldering temperatures are approximately the same at ~255 °C; [ 47 ] however, at this temperature most typical lead-free solders have longer wetting times than eutectic Pb/Sn (lead/tin) 37:63 solder. [ 49 ] Additionally, wetting force is typically lower, [ 49 ] which can be disadvantageous (for hole filling), but advantageous in other situations (closely spaced components).
Care must be taken in selection of RoHS solders as some formulations are harder with less ductility, increasing the likelihood of cracks instead of plastic deformation , which is typical for lead-containing solders. [ citation needed ] Cracks can occur due to thermal or mechanical forces acting on components or the circuit board, the former being more common during manufacturing and the latter in the field. RoHS solders exhibit advantages and disadvantages in these respects, dependent on packaging and formulation. [ 50 ]
The editor of Conformity Magazine wondered in 2005 if the transition to lead-free solder would affect long-term reliability of electronic devices and systems, especially in applications more mission-critical than in consumer products, citing possible breaches due to other environmental factors like oxidation. [ 51 ] The 2005 Farnell/Newark InOne " RoHS Legislation and Technical Manual ", [ 52 ] cites these and other "lead-free" solder issues, such as:
Potential reliability concerns were addressed in Annex item #7 of the RoHS directive, granting some specific exemptions from regulation until 2010. These issues were raised when the directive was first implemented in 2003 and reliability effects were less known. [ 53 ]
Another potential problem that some lead-free, high tin-based solders may face is the growth of tin whiskers . These thin strands of tin can grow and make contact with an adjacent trace, creating a short circuit . Lead in the solder suppresses the growth of tin whiskers. Historically, tin whiskers have been associated with a handful of failures, including a nuclear power plant shutdown , a satellite failure and a pacemaker incident where pure tin plating was used. However, these failures pre-date RoHS, and they did not involve consumer electronics; such non-consumer applications may still employ RoHS-restricted substances if desired. Manufacturers of electronic equipment for mission-critical aerospace applications have followed a policy of caution and have therefore resisted the adoption of lead-free solders.
To help mitigate potential problems, lead-free manufacturers are using a variety of approaches such as tin-zinc formulations that produce non-conducting whiskers or formulations that reduce growth, although they do not halt growth completely in all circumstances. [ 54 ] Fortunately, experience thus far suggests deployed instances of RoHS compliant products are not failing due to whisker growth. Dr. Ronald Lasky of Dartmouth College reports: "RoHS has been in force for more than 15 months now, and ~$400B RoHS-compliant products have been produced. With all of these products in the field, no significant numbers of tin whisker-related failures have been reported." [ 55 ] Whisker growth occurs slowly over time, is unpredictable, and not fully understood, so time may be the only true test of these efforts. Whisker growth is even observable for lead-based solders, albeit on a much smaller scale.
Some countries have exempted medical and telecommunication infrastructure products from the legislation. [ 56 ] However, this may be a moot point, since as electronic component manufacturers convert their production lines to producing only lead-free parts, conventional parts with eutectic tin-lead solder will simply not be available, even for military, aerospace and industrial users. To the extent that only solder is involved, this is at least partially mitigated by many lead-free components' compatibility with lead-containing solder processes. Leadframe -based components, such as Quad Flat Packages (QFP), Small Outline Integrated Circuits (SOIC), and Small outline packages (SOP) with gull wing leads , are generally compatible since the finish on the part leads contributes a small amount of material to the finished joint. However, components such as Ball grid arrays (BGA) which come with lead-free solder balls and leadless parts are often not compatible with lead-containing processes. [ 57 ]
There are no de minimis exemptions, e.g., for micro-businesses. This economic effect was anticipated and at least some attempts at mitigating the effect were made. [ 58 ]
Another form of economic effect is the cost of product failures during the switch to RoHS compliance. For example, tin whiskers were responsible for a 5% failure rate in certain components of Swiss Swatch watches in 2006, prior to the July implementation of RoHS, reportedly triggering a US$1 billion recall. [ 59 ] [ 60 ] Swatch responded to this by applying for an exemption to RoHS compliance, but this was denied. [ 61 ] [ 62 ]
RoHS helps reduce damage to people and the environment in third-world countries where much of today's "high-tech waste" ends up. [ 14 ] [ 63 ] [ 64 ] The use of lead-free solders and components reduces risks to electronics industry workers in prototype and manufacturing operations. Contact with solder paste no longer represents the same health hazard as it used to. [ 65 ]
Contrary to the predictions of widespread component failure and reduced reliability, RoHS's first anniversary (July 2007) passed with little fanfare. [ 66 ] Most contemporary consumer electronics are RoHS compliant. As of 2013, millions of compliant products are in use worldwide.
Many electronics companies keep "RoHS status" pages on their corporate websites. For example, the AMD website states:
Although lead containing solder cannot be completely eliminated from all applications today, AMD engineers have developed effective technical solutions to reduce lead content in microprocessors and chipsets to ensure RoHS compliance while minimizing costs and maintaining product features. There is no change to fit, functional, electrical or performance specifications. Quality and reliability standards for RoHS compliant products are expected to be identical compared to current packages. [ 67 ]
RoHS printed circuit board finishing technologies are surpassing traditional formulations in fabrication thermal shock, solder paste printability, contact resistance, and aluminium wire bonding performance and nearing their performance in other attributes. [ 68 ]
The properties of lead-free solder, such as its high temperature resilience, have been used to prevent failures under harsh field conditions. These conditions include operating temperatures with test cycles in the range of −40 °C to +150 °C with severe vibration and shock requirements. Automobile manufacturers are turning to RoHS solutions now as electronics move into the engine bay. [ 69 ]
One of the major differences between lead-containing and lead-free solder pastes is the "flow" of the solder in its liquid state. Lead-containing solder has a lower surface tension and tends to move slightly to attach itself to exposed metal surfaces that touch any part of the liquid solder. Lead-free solder, conversely, tends to stay in place in its liquid state, attaching to exposed metal surfaces only where the liquid solder touches them.
This lack of "flow" – while typically seen as a disadvantage because it can lead to lower quality electrical connections – can be exploited to place components more tightly than the properties of lead-containing solders previously allowed.
For example, Motorola reports that their new RoHS wireless device assembly techniques are "...enabling a smaller, thinner, lighter unit." Their Motorola Q phone would not have been possible without the new solder. The lead-free solder allows for tighter pad spacing. [ 70 ]
Research into new alloys and technologies is allowing companies to release RoHS-compliant versions of products that are currently exempt from compliance, e.g. computer servers. [ 71 ] IBM has announced a RoHS solution for high-lead solder joints once thought to remain a permanent exemption. The lead-free packaging technology "...offers economical advantages in relation to traditional bumping processes, such as solder waste reduction, use of bulk alloys, quicker time-to-market for products and a much lower chemical usage rate." [ 72 ] [ 73 ]
Test and measurement vendors, such as National Instruments , have also started to produce RoHS-compliant products, despite devices in this category being exempt from the RoHS directive. [ 74 ]
RoHS compliance can be misleading because RoHS3 (EU) allows exemptions, e.g. up to 85% lead content for high-temperature soldering alloys. [ 5 ]
Therefore, good companies should clearly define their level of compliance in their products' main datasheets (DS); ideally, they should provide a product content sheet (PCS) with a full substance declaration by mass. Similarly, good developers (and users) should carefully validate the product information to make sure they get the material safety they expect.
Industry Examples:
Ideal: RoHS3 compliant without exemptions
Good Minimum Standard: RoHS3 compliant with exemption for lead content in internal-only materials (to help prevent lead exposure through touch and lead leakage into water) | https://en.wikipedia.org/wiki/RoHS
In graph theory the road coloring theorem , known previously as the road coloring conjecture , deals with synchronized instructions . The issue involves whether by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze ). [ 1 ] In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics .
The theorem was first conjectured by Roy Adler and Benjamin Weiss . [ 2 ] It was proved by Avraham Trahtman . [ 3 ]
The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring.
For example, consider the vertex marked in yellow. No matter where in the graph you start, if you traverse all nine edges in the walk "blue-red-red—blue-red-red—blue-red-red", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk "blue-blue-red—blue-blue-red—blue-blue-red", you will always end up at the vertex marked in green, no matter where you started.
The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring.
Let G be a finite, strongly connected , directed graph where all the vertices have the same out-degree k . Let A be the alphabet containing the letters 1, ..., k . A synchronizing coloring (also known as a collapsible coloring ) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such that all paths in G corresponding to w terminate at v .
The terminology synchronizing coloring is due to the relation between this notion and that of a synchronizing word in finite automata theory.
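The definition can be checked directly for a small graph by searching for a word that collapses the full vertex set to a single vertex. Below is a minimal Python sketch of that search, doing breadth-first search over subsets of vertices (exponential in the worst case, though polynomial pairwise-merging tests also exist). The 4-vertex instance at the end is a standard illustrative example (the Černý automaton), not the 8-vertex graph from the figure.

```python
from collections import deque

def synchronizing_word(delta):
    """delta[v][a] = head of the edge with label a leaving vertex v.
    Returns a word (list of labels) sending every vertex to one vertex,
    or None if the coloring is not synchronizing."""
    n, k = len(delta), len(delta[0])
    start = frozenset(range(n))
    words = {start: []}          # shortest word reaching each subset
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:          # all vertices merged: synchronizing word found
            return words[s]
        for a in range(k):
            t = frozenset(delta[v][a] for v in s)  # image of the subset under a
            if t not in words:
                words[t] = words[s] + [a]
                queue.append(t)
    return None

# Hypothetical 4-vertex instance (the Cerny automaton): label 0 steps around
# the cycle 0->1->2->3->0; label 1 moves only vertex 0 (to 1), fixing the rest.
delta = [[1, 1], [2, 1], [3, 2], [0, 3]]
print(synchronizing_word(delta))
# -> a shortest reset word of length 9, e.g. [1, 0, 0, 0, 1, 0, 0, 0, 1]
```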
For such a coloring to exist at all, it is necessary that G be aperiodic . [ 4 ] The road coloring theorem states that aperiodicity is also sufficient for such a coloring to exist. Therefore, the road coloring problem can be stated briefly as:
Previous partial or special-case results include the following: | https://en.wikipedia.org/wiki/Road_Coloring_Conjecture |
The Road and Waterway Construction Service Corps [ 1 ] ( Swedish : Väg- och vattenbyggnadskåren , VVK) was, during the years 1851–2010, a military administrative corps of reserve personnel in the Swedish Army , responsible in the event of war for providing the Swedish Armed Forces with specially trained personnel to fill positions in the field of civil engineering .
The Road and Waterway Construction Service Corps was established in 1851 as a military corps that primarily catered to the Swedish government 's need for engineers for the planning and management of the so-called public works. [ 2 ] The corps fell under the Ministry of Communications and, under the regulations issued on 22 December 1851, had the purpose of assisting the National Swedish Road Board ( Väg- och vattenbyggnadsstyrelsen ) in its dealings with public works; [ 3 ] the officers of the corps could in the event of war be assigned to engineering service in the Swedish Army . Concerning discipline, subordination and liability rules, the corps was under the jurisdiction of martial law . The corps was initially staffed only by certain officers of the Swedish Navy Mechanical Corps , the Army and the Navy who had been employed in public works and had thereby acquired practical skills. [ 3 ]
The training of corps officers took place in 1846–78 at the Higher Artillery and Engineering Grammar School ( Högre artilleri- och ingenjörläroverket ) in Marieberg in Stockholm , but under a royal letter of 12 June 1885 a special military course for aspirants to the corps was organized. [ 3 ] Entry to this course required, among other things, a completed final examination from the Royal Institute of Technology 's Department of Civil Engineering. By royal letters of 19 October 1894 and 6 April 1900, new regulations were provided for the military training. In accordance with the Royal Proclamation of 9 February 1906, the corps officers were listed on the Army's surplus staff. [ 3 ]
The 1922 regulations for entry into the corps required: completion of the four-year syllabus of the training school ( fackskola ) for civil engineering at the Royal Institute of Technology, with a full leaving certificate; after completing military service, a 7½-month-long practical and theoretical course in artillery and fortification et cetera at the Svea Engineer Corps , or status as a reserve officer in the Fortifications ( Fortifikationen ); and, after completing the course at the Royal Institute of Technology, at least three years of service in a public work or investigative function, as well as evincing the qualities required for the management of larger enterprises. [ 3 ] During the early 1920s, 10 new corps officers were appointed annually. In 1921 the corps consisted of 221 officers. Of these, one was a colonel (who was also the Director General of the National Swedish Road Board), seven were lieutenant colonels , 34 majors , 102 captains and 77 lieutenants . [ 3 ]
The development of this corps formed the basis for civil engineering education in Sweden and subsequently the Royal Institute of Technology in Stockholm . The Swedish Transport Administration also has its roots in the corps. [ 2 ] The corps later fell under the Chief of the Army, and its head was a senior colonel . The deputy head was a colonel. The rest of the corps staff held the ranks of lieutenant colonel, major, lieutenant or captain. [ 4 ] Since 1937 the Road and Waterway Construction Service Corps had been a reserve officer corps. [ 2 ] The corps was decommissioned on 30 September 2010, and the corps officers' civil-military expertise in the infrastructure field was then transferred to Göta Engineer Regiment (Ing 2) in Eksjö . A ceremonial handover took place in mid-August 2010. At the time of decommissioning, the corps had 84 active officers. [ 2 ]
The Road and Waterway Construction Service Corps had the task, in the event of war, of providing the Swedish Armed Forces with specially trained personnel for positions requiring insight into civil engineering and, where this could conveniently take place, of providing counsel in peacetime to Swedish Armed Forces agencies in matters requiring access to particular expertise in civil engineering. It also kept records of those who had completed a university civil engineering programme, or who had equivalent competence and other techniques in civil engineering useful to the Swedish Armed Forces, and, in cooperation with the Enrollment Administration of the Swedish Armed Forces ( Värnpliktsverket ) and other relevant agencies, proposed both the selection of the necessary number of civil engineering technicians for the needs of the Swedish Armed Forces and measures for this personnel's appropriate activity during heightened preparedness. [ 4 ]
The coat of arms of the Road and Waterway Construction Service Corps. Blazon : "Sable, two swords in saltire surmounted by a circle azure charged with a mullet on a cluster of rays as a pentagon, all or". [ 5 ]
In 1993, the Väg- och vattenbyggnadskårens förtjänstmedalj ("Road and Waterway Construction Service Corps Medal of Merit") was instituted in gold and silver (VVKGM/SM). The medal is pentagonal and the medal ribbon is of black moiré with a narrow yellow line on the first side and a narrow blue line on the second side. [ 6 ]
Until 1934, the head of the Road and Waterway Construction Service Corps was also the Director General of the National Swedish Road Board ( Väg- och vattenbyggnadsstyrelsen ). | https://en.wikipedia.org/wiki/Road_and_Waterway_Construction_Service_Corps |
Road curves are bends in roads that provide a gradual change of direction. Similar curves are used on railways and canals . [ 1 ]
Curves provided in the horizontal plane are known as horizontal curves and are generally circular or parabolic. Curves provided in the vertical plane are known as vertical curves.
Five types of horizontal curves on roads and railways:
Two types of vertical curves on roads:
A simple curve has the same radius throughout and is a single arc of a circle , with two tangents meeting at the intersection (B in this diagram).
A compound curve has two or more simple curves with different radii that bend the same way and are on the same side of a common tangent. In this diagram, MN is the common tangent.
Also called a serpentine curve, it is the reverse of a compound curve, and two simple curves bent in opposite directions are on opposite sides of the common tangent.
A deviation curve is simply a combination of two reverse curves. It is used when it is necessary to deviate from a given straight path to avoid intervening obstructions such as a building, a body of water, or other significant site.
This type of curve has a gradual change in elevation on the outside of the curve (banking) to help drivers comfortably take turns at faster speeds.
Also called a sag curve, this curve dips down and then rises back up. These are placed at the base of hills; the opposite of the summit curve.
Also called the crest curve, this curve rises and then dips down. These are placed at the peak of hills; the opposite of the valley curve. [ 3 ]
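For the simple circular curve described above, the standard setting-out elements follow from the radius and the deflection angle between the two tangents at the intersection point. Below is a minimal Python sketch using the usual surveying formulas; the numeric example is hypothetical.

```python
import math

def simple_curve_elements(radius_m, deflection_deg):
    """Standard elements of a simple circular curve, given its radius and the
    deflection angle (angle between the two tangents at the intersection)."""
    d = math.radians(deflection_deg)
    return {
        "tangent length":  radius_m * math.tan(d / 2),           # intersection to tangent point
        "curve length":    radius_m * d,                         # arc length along the curve
        "external":        radius_m * (1 / math.cos(d / 2) - 1), # intersection to mid-arc
        "middle ordinate": radius_m * (1 - math.cos(d / 2)),     # long chord to mid-arc
    }

# Hypothetical example: a 300 m radius curve turning through 40 degrees.
for name, value in simple_curve_elements(300, 40).items():
    print(f"{name}: {value:.2f} m")
# tangent length ~109.19 m, curve length ~209.44 m
```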
| https://en.wikipedia.org/wiki/Road_curve
Road debris , a form of road hazard, is debris that accumulates on or off a road . Road debris includes substances, materials, and objects that are foreign to the normal roadway environment. Debris may be produced by vehicular or non-vehicular sources, although in all cases it is considered litter , a form of solid waste. [ 1 ] Debris may tend to collect in areas where vehicles do not drive, such as on the edges (shoulder) , around traffic islands , and junctions.
Road spray [ 2 ] or tire kickup is road debris (usually liquid water) that has been kicked up, pushed out, or sprayed out from a tire .
Road debris can be caused by various factors, including objects falling off vehicles or natural disasters and weather, specifically wind, storms, tornadoes , hurricanes , etc. [ 3 ]
Examples of road debris include:
Road debris is a hazard [ 5 ] that can cause loss of vehicle control, with consequences ranging from a flat tire to vehicular rollover, penetration of the passenger compartment by the debris, [ 1 ] [ 7 ] or a serious collision , with accompanying injuries or deaths. In the year 2011, the National Highway Traffic Safety Administration 's Traffic Safety Facts found that more than 800 persons were killed across America by "non-fixed objects" (a term that includes roadway debris). California had the highest number of total deaths for any state, while New Mexico had the greatest probability of death from a vehicle-debris crash in that year. [ 8 ]
In 2004, a AAA Foundation for Traffic Safety study revealed that vehicle-related road debris caused 25,000 accidents—and nearly 100 deaths—each year. [ 1 ] [ 7 ] At highway speeds, even small debris can be deadly. [ 1 ] On June 16, 1925, in the United States, a passenger train carrying German and American tourists from Chicago, Illinois to Hoboken, New Jersey struck debris washed into a road crossing and derailed during a heavy thunderstorm. [ 9 ] Collision with road debris resulted in a solar vehicle accident at the World Solar Challenge 2007 in Australia.
Road debris tends to collect in areas where two-track vehicles such as cars and buses do not drive. In urban areas, this tends to be on the edges (shoulder) and on the crown of the road, and debris frequently collects around traffic islands and junctions. In rural areas, debris collects in the middle of the lane and on the outside of corners and bends. [ 10 ] Road debris can be especially dangerous to bicyclists , who may have to travel outside the cycle lane and into traffic to avoid debris.
Flooding can also occur if storm drains and street gutters are not kept clear of road debris and litter. Large quantities of water are sometimes thrown up from the road (road spray) by large vehicles, creating visibility problems for the drivers of oncoming, nearby, or following vehicles. Following vehicles may reduce the problem by slowing and increasing the following/separation distance . Headlights (or fog lights ) improve vehicle visibility for all drivers, including those dealing with the spray. Driving manuals advise against following vehicles too closely ( tailgating ) in these hazardous conditions. [ 2 ] Road spray can cause reduced visibility and dramatically reduce the safety of motorists. [ 11 ] Over time, road spray and gunk from [a bicycle's] brake pads coat the rim of the wheel, interfering with braking power. [ 12 ]
In motorsport racing, road debris can cause loss of traction and subsequent crashes. Usually, the yellow caution flag is used to indicate a track hazard, and the pace/safety car will come out.
Road debris can also cause other more specific problems and damage to vehicles. Rocks striking the catalytic converter can cause the internal mat to break and clog the converter. [ 13 ] Several recalls have occurred due to road debris. The 2005 Scion TC 's wind deflector was recalled because of potential shatter from road debris impact. [ 14 ] The 2004 Mitsubishi Endeavor was recalled in February 2010 when it was determined that a mixture of road salt and road debris (mud) might be trapped between a reinforcing bracket and the fuel filler pipe , causing corrosion. [ 15 ] The 2001 Chevrolet C/K chassis cab truck was also recalled on discovery that road debris could strike and damage its pressure relief valves . [ 16 ]
Small debris particles and dust (primarily from tire wear and vehicle exhaust particulates) constitute a significant problem when they are washed into the soil and leak into groundwater reservoirs through surface runoff , especially urban runoff . Roadside soil and water contamination can result when the concentration of harmful constituents is high enough. The greater the surface area of synthetic rubber waste fragments, the greater the potential for breakdown into harmful constituents. For leached tire debris, the potential environmental impact of the ingredients zinc and organic toxicants has been demonstrated. [ 17 ]
Additionally, debris from lawns in local communities can flush into local waterways. There are currently some laws against blowing organic matter such as grass clippings into the roadway because of its potential toxic effect on local waterways: grass is high in nitrogen, which can accumulate in waterways and cause algae blooms . [ 18 ] An example of such laws can be seen in the City of Davenport, Iowa's Clean Air and Water Act. [ 19 ]
A car bra can help reduce damage from minor road debris. Road spray is lessened on stone mastic asphalt and open-graded asphalt [ 11 ] and can be further reduced with fenders [ 20 ] (more so on a bicycle since most motor vehicles tend to already have fenders) and/or mud flaps . Street sweepers and winter service vehicles remove most solid road debris and the Adopt a Highway program also helps. Road signs and variable-message signs may warn drivers of special situations involving road debris.
The American Automobile Association (AAA) publishes the following recommendations: [ 1 ]
Ocean Colour Scene , an English Britpop band, made a song about Birmingham, England called "Debris Road" (reputed to be about the road running past the band's recording studios in Ladywood ) on their Marchin' Already 1997 album.
Some video games (particularly racing games ) include road debris that damages vehicles or obstructs visibility. [ 21 ] Spy Hunter (1983) features slippery, icy roads and puddles, oil slicks , and smoke screens . MotorStorm (2007) depicts air-borne mud that becomes accurately painted onto the body of each vehicle in real-time. Players can use this airborne debris strategically: a chunk of debris may be used to knock opponents off their motorcycles, and mud spatter on the wind-shields might temporarily blind them. [ 21 ] Fuel (2009) features "crazy windstorms that kick up leaves and debris." [ 22 ] | https://en.wikipedia.org/wiki/Road_debris |
A road maintenance depot is a depot used by road maintenance agencies for storing works equipment and organising maintenance operations. Road maintenance depots can range in size from small sheds storing just a few pieces of equipment, to vast buildings housing computer and closed-circuit television systems, allowing operators to monitor conditions across the road network.
Road maintenance depots carry gear for a number of tasks, including road works , snow removal , planting of road verges and central reservations , and storm drain maintenance. Most depots will have limited accommodation facilities for staff who are on-call , particularly during heavy winter storms, when travel between the worker's home and the depot may be restricted. Road maintenance depots also include garages and repair shops for the fleets of vehicles stored within, and large depots keep supplies of fuel and road salt for drivers. [ 1 ]
Depots carry a wide range of vehicles to cover most eventualities, depending on the location of the depot; small urban depots carry street sweeper vehicles and small gully emptiers , while larger rural and motorway-based depots hold fleets of winter service vehicles and engineering vehicles , and often tow trucks and breakdown vehicles for rescuing broken down or stranded equipment. [ 1 ] Other vehicles commonly kept at depots include lawnmowers , sprayers and road markers .
Along with garages, most depots also have either salt barns or brine tanks, to store de-icing agents for use in winter months, and filling stations to refuel vehicles, especially those that use red diesel , which is not available at public filling stations. Larger depots have vehicle washes and repair shops to maintain the fleet, and a cafeteria and on-call room for workers.
Road maintenance depots in Germany are known as Straßenmeisterei or Autobahnmeisterei . Responsibility for operating the depot depends on the type of road covered by the catchment area of the depot; those that cover the autobahn network and major road are owned by the Federal Government , while those that cover more urban roads are operated by the local city administration. [ 2 ]
All public road maintenance depots in the United Kingdom are owned by the Highways Agency or its contractors, although the depot on the privately owned M6 Toll is run by the operators of the road. [ 3 ] Most are located along the edge of motorways, and are signposted "Works Exit" or "Works Access Only". These signs are also used to disguise sliproads leading to sensitive military institutions such as RAF Welford and to dissuade members of the public from using emergency evacuation routes and short cuts designed for emergency vehicles . [ 4 ] | https://en.wikipedia.org/wiki/Road_maintenance_depot |
Soil stabilizers and road recyclers are engineering vehicles that were once similar machines; they have since developed into distinct, specialised pieces of road-making machinery. Other terms that are sometimes used are: road profiler, road reclaimer, road miller, road planer and pavement profiler. They are used in the process of full depth recycling .
Different countries sometimes use different terminology and it becomes difficult to identify the difference between the machines and the processes involved.
A soil stabiliser (also called a pulvimixer) is a construction vehicle with a powered metal drum that has rows of mixing blades or paddles. It makes soil cement by blending soil, a binder agent (usually Portland cement or lime ) and water together with paddles in the mixing chamber instead of a concrete mixer and usually does not cut or mill hard or very thick asphalt or concrete . Modern soil stabilisers are more powerful and often use carbide tips instead of paddles. Some are called single pass soil stabilizers because they can make soil cement in one pass where some of these machines take up to four passes. [ 1 ] In this way most soil stabilisers have become much more like road recyclers where they can also blend the old road surface in the mixture.
A road pavement mill is a construction vehicle with a powered metal drum that has rows of tungsten carbide tipped teeth that cut off the top surface of a paved concrete or asphalt road . Usually (since sustainability is now very important) it extracts the material for recycling into new asphalt. In some applications the entire road pavement can be removed. The reason for removal may be that the road surface has become damaged and needs replacing.
It is a very high-powered machine, with some using engines above 500 hp. It is usually mounted on four crawler tracks although sometimes on three crawler tracks or on wheels.
A road recycler or road reclaimer is an asphalt pavement grinder, or a combination grinder and soil stabilizer when it is equipped to blend cement, foamed asphalt and/or lime and water with the existing pavement (usually only very thin asphalt) to create a new, recycled road surface. It usually refers to the process of blending the asphalt road with a binder and base course in a single pass. [ 2 ] In the photo below of the milling cutter drums, the front drum with many teeth would be from a pavement mill and would be used to remove very hard asphalt or concrete surfaces. The drums behind, with fewer teeth, would be from a road recycler; the teeth are placed in a chevron pattern to reduce the load on the motor. Only a few teeth are cutting at one time, and this pattern of teeth placement also serves to auger the material to the centre where it can be picked up easily by a conveyor belt.
Key: SS = soil stabiliser; RM = road mill | https://en.wikipedia.org/wiki/Road_recycler |
A road roller (sometimes called a roller-compactor , or just roller [ 1 ] ) is a compactor -type engineering vehicle used to compact soil , gravel , concrete , or asphalt in the construction of roads and foundations . [ 1 ] Similar rollers are used also at landfills or in agriculture.
Road rollers are frequently referred to as steamrollers , regardless of their method of propulsion. [ 2 ]
The first road rollers were horse-drawn , and were probably borrowed farm implements (see Roller ) .
Since the effectiveness of a roller depends to a large extent on its weight, [ 3 ] self-powered vehicles replaced horse-drawn rollers from the mid-19th century. The first such vehicles were steam rollers . Single-cylinder steam rollers were generally used for base compaction and run with high engine revs with low gearing to promote bounce and vibration from the crankshaft through to the rolls in much the same way as a vibrating roller. The double cylinder or compound steam rollers became popular from around 1910 onwards and were used mainly for the rolling of hot-laid surfaces due to their smoother running engines, but both cylinder types are capable of rolling the finished surface. Steam rollers were often dedicated to a task by their gearing as the slower engines were for base compaction whereas the higher geared models were often referred to as "chip chasers" which followed the hot tar and chip laying machines. Some road companies in the US used steamrollers through the 1950s. In the UK some remained in service until the early 1970s.
As internal combustion engines improved during the 20th century, kerosene -, gasoline - (petrol), and diesel -powered rollers gradually replaced their steam -powered counterparts. The first internal-combustion powered road rollers were similar to the steam rollers they replaced. They used similar mechanisms to transmit power from the engine to the wheels, typically large, exposed spur gears. Some users disliked them in their infancy, as the engines of the era were typically difficult to start, particularly the kerosene-powered ones.
Virtually all road rollers in use today use diesel power.
Road rollers use the weight of the vehicle to compress the surface being rolled (static) or use mechanical advantage (vibrating). Initial compaction of the substrate on a road project is done using a padfoot or "sheep's foot" drum roller, which achieves higher compaction density due to the pads having less surface area. On large freeways, a four-wheel compactor with padfoot drum and a blade, such as a Caterpillar 815/825 series machine, would be used due to its high weight, speed, and the powerful pushing force to spread bulk material. On regional roads, a smaller single padfoot drum machine may be used.
The next machine is usually a single smooth drum compactor that compacts the high spots down until the soil is smooth. This is usually done in combination with a motor grader to obtain a level surface. Sometimes at this stage a pneumatic tyre roller is used. These rollers feature two rows (front and back) of overlapping pneumatic tyres; the flexibility of the tyres provides a kneading action that seals the surface, and some vertical movement of the wheels enables the roller to operate effectively on uneven ground. Once the soil base is flat, the pad drum compactor is no longer used on the road surface. [ citation needed ]
The next course (road base) is compacted using a smooth single drum, smooth tandem roller, or pneumatic tyre roller in combination with a grader and a water truck to achieve the desired flat surface with the correct moisture content for optimum compaction. Once the road base is compacted, the smooth single drum compactor is no longer used on the road surface (there is an exception if the single drum has special flat-wide-base tyres on the machine).
The final wear course of asphalt concrete (known as asphalt or blacktop in North America, or macadam in England [ citation needed ] ) is laid using a paver and compacted using a tandem smooth drum roller, a three-point roller or a pneumatic tyre roller. Three-point rollers on asphalt were once common and are still used, but tandem vibrating rollers are the usual choice now. A pneumatic tyre roller, with its kneading action, is often the last roller used, to seal the surface.
Rollers are also used in landfill compaction. Such compactors typically have padfoot drums, and do not achieve a smooth surface. The pads aid in compression, due to the smaller area contacting the ground.
The roller can be a simple drum with a handle that is operated by one person and weighs 45 kilograms (100 lb) or as large as a ride-on road roller weighing 20 tonnes (20 long tons; 22 short tons) and costing more than US$ 150,000. A landfill unit may weigh 54 tonnes (53 long tons; 60 short tons).
Drums are available in widths ranging from 610 to 2,130 millimetres (24 to 84 in).
Tyre rollers are available in widths ranging up to 2.7 metres (8.9 ft), with between 7 and 11 wheels (e.g. 3 wheels at front, 4 at back): 7 and 8 wheel types are normally used in Europe and Africa; 9 and 11 in America; and any type in Asia. Very heavy tyre rollers are used to compact soil.
| https://en.wikipedia.org/wiki/Road_roller
Road salt (also known as de-icing salt , rock salt , or snow salt ) is a salt used mainly as an anti-slip agent in winter road conditions , but also to prevent dust and snow build-up on roads . [ 1 ] Various kinds of salts are used as road salt, but calcium chloride and sodium chloride (rock salt) are among the most common. [ 2 ] [ 3 ] The more expensive magnesium chloride is generally considered safer, but is not as widely used because of its cost and effect on structural integrity. [ 4 ] [ 5 ] When used in its solid form, road salt is often pre-wet to accelerate the ice-melting process. [ 6 ]
Road salt and brine are generally spread using a winter service vehicle called a salt spreader . Salt spreaders are typically added to trucks , loaders , or in the case of brine, tankers . The salt is stored in the large hopper on the rear of the vehicle, with a wire mesh over the top to prevent foreign objects from entering the spreading mechanism and hence becoming jammed. The salt is generally spread across the roadway by an impeller , attached by a hydraulic drive system to a small onboard engine . However, until the 1970s, it was often spread manually either by workers shoveling salt from trucks or by smaller wheelbarrow-like vehicles, [ 7 ] the latter still being used today for personal use. [ 8 ] Some older spreading mechanisms still require it to be manually loaded into the impeller from the hopper.
Salt used for melting ice and snow works through a phenomenon called freezing-point depression , the lowering of a substance's freezing point by the addition of solutes . When road salt is applied to roads, aside from providing better friction for vehicles, it dissolves in the water of the ice, lowering its freezing point; as long as the temperature is above this lowered freezing point, the ice melts. [ 9 ] [ 10 ] Because of this, ordinary rock salt is only effective down to a range of −6 to −10 °C (21 to 14 °F). At colder temperatures, it can have the opposite effect. Road salt is sometimes used even in colder conditions, if milder weather is expected. In very cold and dry weather, the road surface becomes rough and the need for de-icing is reduced. However, during extreme cold and rain, the roads can become extremely difficult to pass and, in some cases, may need to be closed to traffic. [ 11 ]
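The scale of the effect can be sketched with the ideal dilute-solution law ΔTf = i·Kf·m. The following is a minimal illustration, assuming pure NaCl, the textbook cryoscopic constant for water, and ideal dissociation; concentrated real brines deviate from this model.

```python
# Freezing-point depression of water by road salt: dT = i * Kf * m.
# Assumed values: pure NaCl, ideal dissociation (i = 2), dilute-solution law.

KF_WATER = 1.86          # cryoscopic constant of water, K*kg/mol
VANT_HOFF_NACL = 2       # NaCl dissociates into two ions (Na+, Cl-)
MOLAR_MASS_NACL = 58.44  # g/mol

def freezing_point_c(grams_salt: float, kg_water: float) -> float:
    """Approximate freezing point (deg C) of water after dissolving NaCl."""
    molality = (grams_salt / MOLAR_MASS_NACL) / kg_water  # mol per kg water
    return -VANT_HOFF_NACL * KF_WATER * molality

# 100 g of salt per kilogram of meltwater gives roughly -6.4 degC, consistent
# with the -6 to -10 degC working range quoted above for rock salt.
print(f"{freezing_point_c(100, 1.0):.1f} degC")
```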
Sodium chloride is by far the most common kind of road salt, mainly because of its low cost and its large industrial infrastructure ; [ 12 ] it is used in many industrial and consumer applications. [ 13 ] While it is common and inexpensive, its effective temperature range usually does not extend below −6 to −10 °C (21 to 14 °F), and below these temperatures it is often counter-productive. When used in large quantities, it can also disrupt local ecosystems by heightening the salinity of bodies of water and the soil . Further, rock salt's abrasive nature erodes concrete or asphalt if used heavily. [ 1 ] [ 14 ]
Calcium chloride is less common compared to sodium chloride. While it is slightly more expensive, it can cover a far larger area and melts ice almost three times quicker. [ 15 ] It has recently started rising in popularity since it is not as environmentally damaging as sodium chloride, and also because of its heightened effectiveness at clearing ice. [ 16 ] [ 17 ]
Magnesium chloride is more expensive by far than the road salts in common use today. It has a very low environmental impact, and is quite effective at clearing ice. However, it has been discovered that magnesium chloride causes far more damage to concrete surfaces compared to the other salts, and its use as a de-icing chemical has largely been discontinued. [ 4 ] [ 5 ] It is still widespread as a highly effective dust clearer in warmer weather. [ 18 ]
The widespread use of road salt has significant environmental and infrastructural repercussions. While effective and relatively inexpensive, this practice incurs hidden costs because of its corrosive nature, leading to approximately $5 billion in annual repairs across the United States, according to the country's Environmental Protection Agency . [ 19 ]
One of the primary environmental concerns is the contamination of water sources. Road salt can infiltrate surface and ground water , elevating sodium and chloride levels in drinking water reservoirs and wells; one teaspoon of road salt can permanently pollute five gallons of water. [ 19 ] Elevated sodium levels pose health risks for individuals with hypertension , and high chloride concentrations are toxic to aquatic life .
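A back-of-envelope check makes the teaspoon figure plausible. The sketch below assumes roughly 6 g of NaCl per teaspoon and compares the resulting chloride concentration against the 230 mg/L chronic aquatic-life criterion used by the US EPA; both figures are assumptions for illustration, not values from the cited source.

```python
# Rough plausibility check: chloride concentration from one teaspoon of salt
# dissolved in five US gallons of water. Assumed: ~6 g NaCl per teaspoon.

MOLAR_MASS_NACL = 58.44  # g/mol
MOLAR_MASS_CL = 35.45    # g/mol
TEASPOON_SALT_G = 6.0    # assumed mass of one teaspoon of rock salt
FIVE_GALLONS_L = 5 * 3.785

chloride_fraction = MOLAR_MASS_CL / MOLAR_MASS_NACL  # ~0.61 of NaCl by mass
mg_chloride = TEASPOON_SALT_G * 1000 * chloride_fraction
print(f"{mg_chloride / FIVE_GALLONS_L:.0f} mg/L chloride")
# ~192 mg/L -- the same order as the 230 mg/L chronic toxicity criterion,
# so one teaspoon is indeed enough to degrade that volume of water.
```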
The accumulation of salt in roadside soils adversely affects vegetation by increasing soil salinity, which can hinder plant growth and lead to the death of sensitive species. This degradation of plant life not only disrupts local ecosystems but also contributes to soil erosion. Additionally, wildlife attracted to the salt (such as deer and moose) can be endangered, as they may ingest harmful amounts or be drawn to roadways, increasing the likelihood of vehicle collisions. The term " Salt Belt " refers to regions with heavy road salt usage, predominantly in the northeastern United States. In these areas, the cumulative effects of salt application are more pronounced, leading to higher concentrations of salt in the environment and exacerbating the associated negative impacts.
Alternatives to traditional road salt are being explored to mitigate its environmental and infrastructural damage. While magnesium chloride and calcium chloride are considered less harmful to the environment, they are more expensive and may require higher application rates. Other strategies that help reduce salt usage and discharge into waterways include spraying road surfaces with brine in anticipation of snowfall, as well as mixing salt with other substances such as sand to improve traction, dyes to aid in identification of salted areas, [ 20 ] and biodegradables like beet juice, pickle juice, and molasses. Innovative solutions, such as porous pavements, have also been developed to reduce ice accumulation and minimize the need for de-icing agents. | https://en.wikipedia.org/wiki/Road_salt |
Road slipperiness is a condition of low skid resistance due to insufficient road friction. It results from the effect of snow , ice , water , loose material and the texture of the road surface on the traction produced by the wheels of a vehicle . [ 1 ]
Road slipperiness can be measured either in terms of the friction between a freely-spinning wheel and the ground, or the braking distance of a braking vehicle, and is related to the coefficient of friction between the tyre and the road surface.
Public works agencies spend a sizeable portion of their budget measuring and reducing road slipperiness. Even a small increase in the slipperiness of a section of road can increase its accident rate tenfold. [ 2 ] Maintenance activities affecting slipperiness include drainage repair, snow removal and street sweeping . More intensive measures may include grinding or milling a surface that has worn smooth, a surface treatment such as a chipseal , or overlaying a new layer of asphalt.
A specific road safety problem is split friction or μ-split (mu-split): when the friction differs significantly between the left and the right wheelpath. The road may then not be perceived as hazardous when accelerating, cruising or even braking softly, but in a case of hard braking, the difference in friction will cause the vehicle to start to rotate towards the side offering higher grip. Split friction may cause jack-knifing of articulated trucks, while trucks with towed trailers may experience trailer swing. Split friction may be caused by an improper road spot repair that results in a high variance of texture and colour across the road section (thin ice on newly paved black spots thaws faster than ice on old greyish asphalt).
The two ways to measure road slipperiness are surface friction testing and stopping distance testing. Friction testing can use surface friction testers or portable friction testers, and involves allowing a freely moving object, usually a wheel, to move against the surface. By measuring the resistance experienced by the wheel, the friction between the ground and the wheel can be found. [ 1 ]
Stopping distance testing involves performing an emergency stop in a test vehicle and measuring the distance required to come to a stop. This can be measured either from the length of the skid marks left by the vehicle, or by the "chalk-to-gun" method, where the brakes are connected to a small gun filled with chalk powder, which marks the point when the brakes were applied. This has the advantage of measuring the full stopping distance, while simply measuring the skid marks only measures the distance from the point where the wheels began to lock or slip. [ 1 ]
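Both measurements relate to the simple constant-deceleration braking model d = v²/(2μg). The sketch below uses that model to convert between speed, friction coefficient and stopping distance; it is illustrative only, since real skid investigations apply correction factors for grade, brake efficiency and the distance travelled before the wheels lock.

```python
# Constant-deceleration braking model: d = v^2 / (2 * mu * g).
# Illustrative only; real skid investigations add correction factors.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    """Braking distance on a surface with friction coefficient mu."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * mu * G)

def mu_from_skid(speed_kmh: float, skid_m: float) -> float:
    """Inverse problem: estimate mu from a measured skid-mark length."""
    v = speed_kmh / 3.6
    return v ** 2 / (2 * G * skid_m)

print(f"{stopping_distance_m(80, 0.45):.1f} m")  # ~55.9 m on a decent wet road
print(f"{stopping_distance_m(80, 0.35):.1f} m")  # ~71.9 m after a dusting of wet snow
print(f"{mu_from_skid(80, 60):.2f}")             # mu ~0.42 inferred from a 60 m skid
```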
Measurement of skidding resistance is not yet universally harmonised, despite a number of attempts such as FEHRL 's HERMES project. [ 3 ] The European Standards Organisation ( CEN ) has been working on the topic for many years through its committee CEN/TC 227 - Road materials. [ 4 ] Contributions to this were made through the FP7 Tyrosafe project, [ 5 ] which aimed to raise awareness, to coordinate and prepare for European harmonisation, and to optimise the assessment and management of essential tyre/road interaction parameters in order to increase safety and support the greening of road transport. The project set out to provide a synopsis of the current state of scientific understanding and its application in different standards, to identify needs for future research, and to propose a way forward in the context of the future objectives of road administrations, in order to optimise three key properties of roads: skid resistance, rolling resistance and tyre/road noise emission.
Road slipperiness can contribute to car accidents . In 1997, over 53,000 accidents were caused by slippery roads in the United Kingdom out of an estimated 4,000,000 accidents (approximately 1.3 per cent). [ 6 ] A small change in road slipperiness can have a drastic effect on safety: decreasing the coefficient of friction from 0.45 to 0.35, equivalent to adding a dusting of wet snow, increased the accident rate by almost 1000%. [ 2 ] As such, road agencies have a number of approaches to decreasing road slipperiness. Most roads are designed with a convex camber to provide sufficient drainage , allowing surface water to drain off the road. Trouble sections include the entrances and exits of banked outer curves, where the cross slope is close to zero. Unless these sections have a longitudinal grade of at least 0.4–0.5%, the drainage gradient (the resultant of crossfall and longitudinal grade) will be lower than 0.5%, so water will not run off the road surface. Storm drains may be installed at regular intervals, and modern paving materials are designed to provide high friction in most conditions. Permeable paving allows water to soak through the paving material, reducing slipperiness in very adverse conditions.
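The drainage-gradient remark can be made concrete: the gradient along which water actually drains is the vector resultant of the crossfall and the longitudinal grade. A minimal sketch, with slopes expressed as percentages:

```python
# Resultant drainage gradient: the vector sum of crossfall (cross slope)
# and longitudinal grade, both expressed as percentages.

import math

def drainage_gradient(crossfall_pct: float, grade_pct: float) -> float:
    """Resultant slope (%) along which surface water drains."""
    return math.hypot(crossfall_pct, grade_pct)

# At the crossfall reversal point of a banked outer curve the crossfall
# passes through zero, so only the longitudinal grade moves water:
print(f"{drainage_gradient(0.0, 0.4):.2f} %")  # 0.40 % -- below the ~0.5 % needed
print(f"{drainage_gradient(2.5, 0.5):.2f} %")  # 2.55 % -- normal cambered section
```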
Road slipperiness can be prevented or delayed by proper pavement design. The aggregate used in the pavement should be selected with care, as certain aggregates such as dolomite may polish , or wear smooth under the action of tires. With asphalt pavements and surface treatments, using too much asphalt or asphalt emulsion can cause bleeding or flushing , a condition where excess asphalt rises to the top and fills in the road texture. Both problems increase slipperiness, especially when the pavement is wet.
Once lost, pavement texture can be restored with retexturing procedures such as diamond grinding of pavement , surface treatments such as chipsealing and resurfacing with asphalt concrete .
Snow and ice removal also decreases road slipperiness; snowploughs and snow blowers can remove the snow from the road surface, while gritters drop road salt and sand , which both melt the snow and ice and provide a rougher surface to grip onto. However, in dry conditions, sand and salt on the road surface can themselves increase road slipperiness and pose a danger to road traffic; roads are therefore swept by street sweepers after roadworks and gritting to ensure that all loose material is removed from the road surface.
"Volume 7" of the UK Design Manual for Roads and Bridges (DMRB), specifically HD 37/99: Section 5: Part 2: Chapter 11. | https://en.wikipedia.org/wiki/Road_slipperiness |
A road surface ( British English ) or pavement ( North American English ) is the durable surface material laid down on an area intended to sustain vehicular or foot traffic , such as a road or walkway . In the past, gravel road surfaces, macadam , hoggin , cobblestone and granite setts were extensively used, but these have mostly been replaced by asphalt or concrete laid on a compacted base course . Asphalt mixtures have been used in pavement construction since the beginning of the 20th century. Roads are of two types: metalled (hard-surfaced) and unmetalled. Metalled roadways are made to sustain vehicular load and so are usually built on frequently used routes. Unmetalled roads, also known as gravel roads or dirt roads, are rough and can sustain less weight. Road surfaces are frequently marked to guide traffic .
Today, permeable paving methods are beginning to be used for low-impact roadways and walkways to prevent flooding. Pavements are crucial to countries such as the United States and Canada , which depend heavily on road transportation. Therefore, research projects such as Long-Term Pavement Performance have been launched to optimize the life cycle of different road surfaces. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Pavement , in construction, is an outdoor floor or superficial surface covering. Paving materials include asphalt , concrete , stones such as flagstone , cobblestone , and setts , artificial stone , bricks , tiles , and sometimes wood. In landscape architecture , pavements are part of the hardscape and are used on sidewalks , road surfaces , patios , courtyards , etc.
The term pavement comes from Latin pavimentum , meaning a floor beaten or rammed down, through Old French pavement . [ 5 ] The meaning of a beaten-down floor was obsolete before the word entered English. [ 6 ]
Pavement, in the form of beaten gravel , dates back before the emergence of anatomically modern humans . Pavements laid in patterns such as mosaics were commonly used by the Romans. [ 7 ]
The bearing capacity and service life of a pavement can be raised dramatically by arranging good drainage, by an open ditch or covered drains, to reduce the moisture content in the pavement's subbase and subgrade .
Wheeled transport created the need for better roads. Generally, natural materials cannot be both soft enough to form well-graded surfaces and strong enough to bear wheeled vehicles, especially when wet, and stay intact. In urban areas it was worthwhile to build stone-paved streets and, in fact, the first paved streets appear to have been built in Ur in 4000 BC. Corduroy roads were built in Glastonbury , England in 3300 BC, [ 8 ] and brick-paved roads were built in the Indus Valley Civilisation on the Indian subcontinent from around the same time. Improvements in metallurgy meant that by 2000 BC stone-cutting tools were generally available in the Middle East and Greece allowing local streets to be paved. [ 9 ] Notably, in about 2000 BC, the Minoans built a 50 km paved road from Knossos in northern Crete through the mountains to Gortyn and Lebena , a port on the south coast of the island, which had side drains, a 200 mm thick pavement of sandstone blocks bound with clay - gypsum mortar , covered by a layer of basaltic flagstones and had separate shoulders . This road could be considered superior to any Roman road . [ 10 ] Roman roads varied from simple corduroy roads to paved roads using deep roadbeds of tamped rubble as an underlying layer to ensure that they kept dry, as the water would flow out from between the stones and fragments of rubble, instead of becoming mud in clay soils.
Although there were attempts to rediscover Roman methods, there was little useful innovation in road building before the 18th century. The first professional road builder to emerge during the Industrial Revolution was John Metcalf , who constructed about 290 kilometres (180 mi) of turnpike road , mainly in the north of England, from 1765, when Parliament passed an act authorising the creation of turnpike trusts to build toll funded roads in the Knaresborough area.
Pierre-Marie-Jérôme Trésaguet is widely credited with establishing the first scientific approach to road building in France at the same time as Metcalf. He wrote a memorandum on his method in 1775, which became general practice in France. It involved a layer of large rocks, covered by a layer of smaller gravel.
By the late 18th and early 19th centuries, new methods of highway construction had been pioneered by the work of two British engineers: Thomas Telford and John Loudon McAdam . Telford's method of road building involved the digging of a large trench in which a foundation of heavy rock was set. He designed his roads so that they sloped downwards from the centre, allowing drainage to take place, a major improvement on the work of Trésaguet. The surface of his roads consisted of broken stone. McAdam developed an inexpensive paving material of soil and stone aggregate (known as macadam ). His road building method was simpler than Telford's, yet more effective at protecting roadways: he discovered that massive foundations of rock upon rock were unnecessary, and asserted that native soil alone would support the road and traffic upon it, as long as it was covered by a road crust that would protect the soil underneath from water and wear. [ 11 ] Size of stones was central to McAdam's road building theory. The lower 200-millimetre (7.9 in) road thickness was restricted to stones no larger than 75 millimetres (3.0 in).
Modern tarmac was patented by British civil engineer Edgar Purnell Hooley , who noticed that spilled tar on the roadway kept the dust down and created a smooth surface. [ 12 ] He took out a patent in 1901 for tarmac. [ 13 ] Hooley's 1901 patent for tarmac involved mechanically mixing tar and aggregate prior to lay-down, and then compacting the mixture with a steamroller . The tar was modified by adding small amounts of Portland cement , resin , and pitch . [ 14 ]
Asphalt (specifically, asphalt concrete ), sometimes called flexible pavement since its viscosity causes minute deformations as it distributes loads, has been widely used since the 1920s. The viscous nature of the bitumen binder allows asphalt concrete to sustain significant plastic deformation , although fatigue from repeated loading over time is the most common failure mechanism. Most asphalt surfaces are laid on a gravel base, which is generally at least as thick as the asphalt layer, although some 'full depth' asphalt surfaces are laid directly on the native subgrade . In areas with very soft or expansive subgrades such as clay or peat , thick gravel bases or stabilization of the subgrade with Portland cement or lime may be required. Polypropylene and polyester geosynthetics are also used for this purpose, [ 15 ] and in some northern countries a layer of polystyrene boards are used to delay and minimize frost penetration into the subgrade. [ 16 ]
Depending on the temperature at which it is applied, asphalt is categorized as hot mix, warm mix, half warm mix, or cold mix. Hot mix asphalt is applied at temperatures over 150 °C (300 °F) with a free floating screed . Warm mix asphalt is applied at temperatures of 95–120 °C (200–250 °F), resulting in reduced energy usage and emissions of volatile organic compounds . [ 17 ] Cold mix asphalt is often used on lower-volume rural roads, where hot mix asphalt would cool too much on the long trip from the asphalt plant to the construction site. [ 18 ]
An asphalt concrete surface will generally be constructed for high-volume primary highways having an average annual daily traffic load greater than 1,200 vehicles per day. [ 19 ] Advantages of asphalt roadways include relatively low noise, relatively low cost compared with other paving methods, and perceived ease of repair. Disadvantages include less durability than other paving methods, less tensile strength than concrete, the tendency to become slick and soft in hot weather, and a certain amount of hydrocarbon pollution to soil and groundwater or waterways .
In the mid-1960s, rubberized asphalt was used for the first time, mixing crumb rubber from used tires with asphalt. [ 20 ] While a potential use for tires that would otherwise fill landfills and present a fire hazard, rubberized asphalt has shown greater incidence of wear in freeze-thaw cycles in temperate zones because of the non-homogeneous expansion and contraction with non-rubber components. The application of rubberized asphalt is more temperature-sensitive and in many locations can only be applied at certain times of the year. [ 21 ] Study results of the long-term acoustic benefits of rubberized asphalt are inconclusive. Initial application of rubberized asphalt may provide a reduction of 3–5 decibels (dB) in tire-pavement-source noise emissions; however, this translates to only 1–3 dB in total traffic-noise reduction when combined with the other components of traffic noise. Compared to traditional passive attenuating measures (e.g., noise walls and earth berms), rubberized asphalt provides shorter-lasting and lesser acoustic benefits at typically much greater expense. [ citation needed ]
Concrete surfaces (specifically, Portland cement concrete) are created using a concrete mix of Portland cement, coarse aggregate , sand , and water. In virtually all modern mixes there will also be various admixtures added to increase workability, reduce the required amount of water, mitigate harmful chemical reactions, and for other beneficial purposes. In many cases there will also be Portland cement substitutes added, such as fly ash . This can reduce the cost of the concrete and improve its physical properties. The material is applied in a freshly mixed slurry and worked mechanically to compact the interior and force some of the cement slurry to the surface to produce a smoother, denser surface free from honeycombing. The water allows the mix to combine molecularly in a chemical reaction called hydration .
Concrete surfaces have been classified into three common types: jointed plain (JPCP), jointed reinforced (JRCP) and continuously reinforced (CRCP). What distinguishes the three types is the jointing system used to control crack development.
One of the major advantages of concrete pavements is that they are typically stronger and more durable than asphalt roadways. The surface can be grooved to provide a durable skid-resistant surface. Concrete roads are more economical to drive on in terms of fuel consumption, they reflect light better, and they last significantly longer than other paving surfaces; but they have a much smaller market share than other paving solutions. [ 22 ] Modern paving and design methods have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be cheaper in initial cost and significantly cheaper over the life cycle. [ 23 ] Another important advantage is that permeable (pervious) concrete can be used, which eliminates the need to place storm drains next to the road and reduces the need for a slightly sloped driveway to drain rainwater. Avoiding rainwater discharge through storm drains also means less electricity is needed (otherwise more pumps would be needed in the water distribution system), and rainwater is not polluted because it no longer mixes with polluted water; rather, it is immediately absorbed by the ground. [ 24 ] A previous disadvantage was that concrete pavements had a higher initial cost and could be more time-consuming to construct. This cost can typically be offset through the long life cycle of the pavement and the higher cost of bitumen. Concrete pavement can be maintained over time using a series of methods known as concrete pavement restoration , which include diamond grinding , dowel bar retrofits , joint and crack sealing, cross-stitching, etc. Diamond grinding is also useful in reducing noise and restoring skid resistance in older concrete pavement. [ 25 ] [ 26 ]
The first street in the United States to be paved with concrete was Court Avenue in Bellefontaine, Ohio in 1893. [ 27 ] [ 28 ] The first mile of concrete pavement in the United States was on Woodward Avenue in Detroit, Michigan in 1909. [ 29 ] Following these pioneering uses, the Lincoln Highway Association , established in October 1913 to oversee the creation of one of the United States' earliest east-west transcontinental highways for the automobile, began to establish "seedling miles" of specifically concrete-paved roadbed in various places in the American Midwest , starting in 1914 west of Malta, Illinois , while using concrete with the specified concrete "ideal section" for the Lincoln Highway in Lake County, Indiana , during 1922 and 1923. [ 30 ]
Concrete roadways may produce more noise than asphalt from tire noise on cracks and expansion joints. A concrete pavement composed of multiple slabs of uniform size will produce a periodic sound and vibration in each vehicle as its tires pass over each expansion joint. These monotonous repeated sounds and vibrations can cause a fatiguing or hypnotic effect upon the driver over the course of a long journey.
Composite pavements combine a Portland cement concrete sublayer with an asphalt overlay. They are usually used to rehabilitate existing roadways rather than in new construction. Asphalt overlays are sometimes laid over distressed concrete to restore a smooth wearing surface. [ 31 ] A disadvantage of this method is that movement in the joints between the underlying concrete slabs, whether from thermal expansion and contraction, or from deflection of the concrete slabs from truck axle loads , usually causes reflective cracks in the asphalt.
To decrease reflective cracking, concrete pavement is broken apart through a break and seat, crack and seat , or rubblization process. Geosynthetics can be used for reflective crack control. [ 32 ] With break and seat and crack and seat processes, a heavy weight is dropped on the concrete to induce cracking, then a heavy roller is used to seat the resultant pieces into the subbase. The main difference between the two processes is the equipment used to break the concrete pavement and the size of the resulting pieces. The theory is that frequent small cracks will spread thermal stress over a wider area than infrequent large joints, reducing the stress on the overlying asphalt pavement. "Rubblization" is a more complete fracturing of the old, worn-out concrete, effectively converting the old pavement into an aggregate base for a new asphalt road. [ 33 ]
The whitetopping process uses Portland cement concrete to resurface a distressed asphalt road.
Distressed pavement can be reused when rehabilitating a roadway. The existing pavement is broken up and may be ground on-site through a process called milling . This pavement is commonly referred to as reclaimed asphalt pavement (RAP). RAP can be transported to an asphalt plant, where it will be stockpiled for use in new pavement mixes, [ 34 ] or it may be recycled in-place using the techniques described below.
Bituminous surface treatment (BST) or chipseal is used mainly on low-traffic roads, but also as a sealing coat to rejuvenate an asphalt concrete pavement. It generally consists of aggregate spread over a sprayed-on asphalt emulsion or cut-back asphalt cement. The aggregate is then embedded into the asphalt by rolling it, typically with a rubber-tired roller . This type of surface is described by a wide variety of regional terms including "chip seal", "tar and chip", "oil and stone", "seal coat", "sprayed seal", [ 38 ] "surface dressing", [ 39 ] "microsurfacing", [ 40 ] "seal", [ 41 ] or simply as "bitumen".
BST is used on hundreds of miles of the Alaska Highway and other similar roadways in Alaska , the Yukon Territory , and northern British Columbia . The ease of application of BST is one reason for its popularity, but another is its flexibility, which is important when roadways are laid down over unstable terrain that thaws and softens in the spring.
Other types of BSTs include micropaving, slurry seals and Novachip. These are laid down using specialized and proprietary equipment. They are most often used in urban areas where the roughness and loose stone associated with chip seals are considered undesirable.
A thin membrane surface (TMS) is an oil -treated aggregate which is laid down upon a gravel road bed, producing a dust-free road. [ 42 ] A TMS road reduces mud problems and provides stone-free roads for local residents where loaded truck traffic is negligible. The TMS layer adds no significant structural strength, and so is used on secondary highways with low traffic volume and minimal weight loading. Construction involves minimal subgrade preparation, followed by covering with a 50-to-100-millimetre (2–4 in) cold mix asphalt aggregate. [ 19 ] The Operation Division of the Ministry of Highways and Infrastructure in Saskatchewan has the responsibility of maintaining 6,102 kilometres (3,792 mi) of thin membrane surface (TMS) highways. [ 43 ]
Otta seal is a low-cost road surface using a 16–30-millimetre-thick ( 5 ⁄ 8 – 1 + 1 ⁄ 8 in) mixture of bitumen and crushed rock. [ 44 ]
Gravel is known to have been used extensively in the construction of roads by soldiers of the Roman Empire (see Roman road ) but in 1998 a limestone-surfaced road, thought to date back to the Bronze Age , was found at Yarnton in Oxfordshire, Britain. [ 45 ] Applying gravel, or " metalling ", has had two distinct usages in road surfacing. The term road metal refers to the broken stone or cinders used in the construction or repair of roads or railways , [ 46 ] and is derived from the Latin metallum , which means both " mine " and " quarry ". [ 47 ] The term originally referred to the process of creating a gravel roadway. The route of the roadway would first be dug down several feet and, depending on local conditions, French drains may or may not have been added. Next, large stones were placed and compacted, followed by successive layers of smaller stones, until the road surface was composed of small stones compacted into a hard, durable surface. "Road metal" later became the name of stone chippings mixed with tar to form the road-surfacing material tarmac . A road of such material is called a " metalled road " in Britain, a " paved road " in Canada and the US, or a " sealed road " in parts of Canada, Australia and New Zealand. [ 48 ]
A granular surface can be used with a traffic volume where the annual average daily traffic is 1,200 vehicles per day or less. [ citation needed ] There is some structural strength if the road surface combines a sub base and base and is topped with a double-graded seal aggregate with emulsion. [ 19 ] [ 49 ] Besides the 4,929 kilometres (3,063 mi) of granular pavements maintained in Saskatchewan, around 40% of New Zealand roads are unbound granular pavement structures. [ 43 ] [ 50 ]
The decision whether to pave a gravel road or not often hinges on traffic volume. It has been found that maintenance costs for gravel roads often exceed the maintenance costs for paved or surface-treated roads when traffic volumes exceed 200 vehicles per day. [ 51 ]
Some communities are finding it makes sense to convert their low-volume paved roads to aggregate surfaces. [ 52 ]
Pavers (or paviours ), generally in the form of pre-cast concrete blocks, are often used for aesthetic purposes, or sometimes at port facilities that see long-duration pavement loading. Pavers are rarely used in areas that see high-speed vehicle traffic.
Brick , cobblestone , sett , wood plank , and wood block pavements such as Nicolson pavement , were once common in urban areas throughout the world, but fell out of fashion in most countries, due to the high cost of labor required to lay and maintain them, and are typically only kept for historical or aesthetic reasons. [ citation needed ] In some countries, however, they are still common in local streets. In the Netherlands , brick paving has made something of a comeback since the adoption of a major nationwide traffic safety program in 1997. From 1998 through 2007, more than 41,000 km of city streets were converted to local access roads with a speed limit of 30 km/h, for the purpose of traffic calming . [ 53 ] One popular measure is to use brick paving - the noise and vibration slows motorists down. At the same time, it is not uncommon for cycle paths alongside a road to have a smoother surface than the road itself. [ 54 ] [ 55 ]
Although rarely constructed today, early-style macadam and tarmac pavements are sometimes found beneath modern asphalt concrete or Portland cement concrete pavements, because the cost of their removal at the time of renovation would not significantly benefit the durability and longevity of the newer surface.
There are ways to create the appearance of brick pavement, without the expense of actual bricks. The first method to create brick texture is to heat an asphalt pavement and use metal wires to imprint a brick pattern using a compactor to create stamped asphalt . A similar method is to use rubber imprinting tools to press over a thin layer of cement to create decorative concrete . Another method is to use a brick pattern stencil and apply a surfacing material over the stencil. Materials that can be applied to give the color of the brick and skid resistance can be in many forms. An example is to use colored polymer-modified concrete slurry which can be applied by screeding or spraying. [ 56 ] Another material is aggregate -reinforced thermoplastic which can be heat applied to the top layer of the brick-pattern surface. [ 57 ] Other coating materials over stamped asphalt are paints and two-part epoxy coating. [ 58 ]
Roadway surfacing choices are known to affect the intensity and spectrum of sound emanating from the tire/surface interaction. [ 59 ] Initial applications of noise studies occurred in the early 1970s. Noise phenomena are highly influenced by vehicle speed.
Roadway surface types contribute differential noise effects of up to 4 dB , with chip seal type and grooved roads being the loudest, and concrete surfaces without spacers being the quietest. Asphaltic surfaces perform intermediately relative to concrete and chip seal . Rubberized asphalt has been shown to give a 3–5 dB reduction in tire-pavement noise emissions, and a marginally discernible 1–3 dB reduction in total road noise emissions when compared to conventional asphalt applications.
As pavement systems primarily fail due to fatigue (in a manner similar to metals ), the damage done to pavement increases with the fourth power of the axle load of the vehicles traveling on it. According to the AASHO Road Test , heavily loaded trucks can do more than 10,000 times the damage done by a normal passenger car. Tax rates for trucks are higher than those for cars in most countries for this reason, though they are not levied in proportion to the damage done. [ 60 ] Passenger cars are considered to have little practical effect on a pavement's service life, from a materials fatigue perspective.
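The fourth-power relationship is easy to state as a load-equivalency calculation. The sketch below uses the common 80 kN (roughly 18,000 lb) standard axle as the reference; the exponent of 4 is the rule-of-thumb value derived from the AASHO Road Test, and the example loads are assumptions for illustration.

```python
# "Fourth power law": relative pavement damage of one axle pass scales
# roughly with (axle load / standard axle load)^4. Reference axle: 80 kN.

STANDARD_AXLE_KN = 80.0

def damage_factor(axle_load_kn: float) -> float:
    """Damage of one axle pass relative to one standard-axle pass."""
    return (axle_load_kn / STANDARD_AXLE_KN) ** 4

# A passenger-car axle (~5 kN) does a negligible fraction of the damage of
# a standard axle, while an overloaded 160 kN axle does 16 times as much:
print(f"{damage_factor(5):.1e}")    # ~1.5e-05
print(f"{damage_factor(160):.0f}")  # 16
```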
Other failure modes include aging and surface abrasion. As years go by, the binder in a bituminous wearing course gets stiffer and less flexible. When it gets "old" enough, the surface will start losing aggregates, and macrotexture depth increases dramatically. If no maintenance action is done quickly on the wearing course, potholes will form. The freeze-thaw cycle in cold climates will dramatically accelerate pavement deterioration, once water can penetrate the surface. Clay and fumed silica nanoparticles may potentially be used as efficient UV-anti aging coatings in asphalt pavements.
If the road is still structurally sound, a bituminous surface treatment, such as a chipseal or surface dressing can prolong the life of the road at low cost. In areas with cold climate, studded tires may be allowed on passenger cars. In Sweden and Finland, studded passenger car tires account for a very large share of pavement rutting . [ 61 ]
The physical properties of a stretch of pavement can be tested using a falling weight deflectometer .
Several design methods have been developed to determine the thickness and composition of road surfaces required to carry predicted traffic loads for a given period of time. Pavement design methods are continuously evolving. Among these are the Shell Pavement design method, and the American Association of State Highway and Transportation Officials (AASHTO) 1993/98 "Guide for Design of Pavement Structures". A mechanistic-empirical design guide was developed through the NCHRP process, resulting in the Mechanistic Empirical Pavement Design Guide (MEPDG), which was adopted by AASHTO in 2008, although MEPDG implementation by state departments of transportation has been slow. [ 62 ]
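As one concrete example of such a method, the AASHTO 1993 flexible-pavement procedure sizes the layers through a "structural number": each layer contributes its thickness times an empirical layer coefficient, with a drainage coefficient applied to unbound layers. The sketch below is a minimal illustration; the coefficients are typical textbook values, assumed here rather than taken from this article.

```python
# AASHTO-1993-style structural number for a flexible pavement:
# SN = sum over layers of (thickness * layer coefficient * drainage coeff).
# Coefficients below are typical textbook values, assumed for illustration.

def structural_number(layers):
    """layers: iterable of (thickness_in, layer_coeff, drainage_coeff)."""
    return sum(d * a * m for d, a, m in layers)

sn = structural_number([
    (4.0, 0.44, 1.0),   # asphalt concrete surface course
    (8.0, 0.14, 1.0),   # crushed stone base course
    (12.0, 0.11, 1.0),  # granular subbase
])
print(f"SN = {sn:.2f}")  # 1.76 + 1.12 + 1.32 = 4.20
```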
Further research by University College London into pavements has led to the development of an indoor, 80-sq-metre artificial pavement at a research centre called Pedestrian Accessibility and Movement Environment Laboratory (PAMELA). It is used to simulate everyday scenarios, from different pavement users to varying pavement conditions. [ 63 ] There also exists a research facility near Auburn University , the NCAT Pavement Test Track , that is used to test experimental asphalt pavements for durability.
In addition to repair costs, the condition of a road surface has economic effects for road users. Rolling resistance increases on rough pavement, as does wear and tear of vehicle components. It has been estimated that poor road surfaces cost the average US driver $324 per year in vehicle repairs, or a total of $67 billion. Also, it has been estimated that small improvements in road surface conditions can decrease fuel consumption between 1.8 and 4.7%. [ 64 ]
Road surface markings are used on paved roadways to provide guidance and information to drivers and pedestrians. They can take the form of mechanical markers such as cat's eyes , botts' dots and rumble strips , or non-mechanical markers such as paints, thermoplastic , plastic and epoxy . | https://en.wikipedia.org/wiki/Road_surface
A road train , also known as a land train or long combination vehicle (LCV), is a combination vehicle used to move road freight more efficiently than a conventional single-trailer semi-trailer truck. It consists of a prime mover hauling two or more trailers or semi-trailers connected together. [ 1 ] Road trains are often used in areas where other forms of heavy transport ( freight train , cargo aircraft , container ship ) are not feasible or practical.
Early road trains consisted of traction engines pulling multiple wagons. The first identified road trains operated into South Australia 's Flinders Ranges from the Port Augusta area in the mid-19th century. [ 2 ] They displaced bullock teams for the carriage of minerals to port and were, in turn, superseded by railways.
During the Crimean War , a traction engine was used to pull multiple open trucks. [ 3 ] By 1898 steam traction engine trains with up to four wagons were employed in military manoeuvres in England. [ 4 ]
In 1900, John Fowler & Co. provided armoured road trains for use by the British Armed Forces in the Second Boer War . [ 3 ] [ 5 ] Lord Kitchener stated that he had around 45 steam road trains at his disposal. [ 6 ]
A road train devised by Captain Charles Renard of the French Engineering Corps was displayed at the 1903 Paris Salon. After his death, Daimler , which had acquired the rights, attempted to market it in the United Kingdom. [ 7 ] [ 8 ] Four of these vehicles were successfully delivered to Queensland , Australia, before the company ceased production upon the start of World War I . [ 9 ]
In the 1930s/40s, the government of Australia operated an AEC Roadtrain to transport freight and supplies into the Northern Territory, replacing the Afghan camel trains that had been trekking through the deserts since the late 19th century. This truck pulled two or three 6 m (19 ft 8 in) Dyson four-axle self-tracking trailers. At 130 hp (97 kW ), the AEC was grossly underpowered by today's standards, and drivers and offsiders (a partner or assistant) routinely froze in winter and sweltered in summer due to the truck's open cab design and the position of the engine radiator, with its 1.5 m (4 ft 11 in) cooling fan, behind the seats.
Australian Kurt Johannsen , a bush mechanic, is recognised as the inventor of the modern road train. [ 10 ] After transporting stud bulls 200 mi (320 km) to an outback property, Johannsen was challenged to build a truck to carry 100 head of cattle instead of the original load of 20. Provided with financing of about 2000 pounds and inspired by the tracking abilities of the Government roadtrain, Johannsen began construction. Two years later his first road train was running. [ 11 ]
Johannsen's first road train consisted of a United States Army World War II surplus Diamond-T tank carrier , nicknamed "Bertha", and two home-built self-tracking trailers. Both wheel sets on each trailer could steer, and therefore could negotiate the tight and narrow tracks and creek crossings that existed throughout Central Australia in the earlier part of the 20th century. Freighter Trailers in Australia viewed this improved invention and went on to build self-tracking trailers for Kurt and other customers, and went on to become innovators in transport machinery for Australia.
This first example of the modern road train, along with the AEC Government Roadtrain, forms part of the huge collection at the National Road Transport Hall of Fame in Alice Springs , Northern Territory .
In 2023, Janus launched the first battery-electric (BEV) triple road train, with a 620 kWh battery; at 170 tonnes gross weight it was also the world's heaviest street-legal BEV truck. [ 12 ]
The term road train is used in Australia and typically means a prime mover hauling two or more trailers, other than a B-double. [ 13 ] In contrast with a more common semi-trailer towing one trailer or semi-trailer , the diesel prime mover of a road train hauls two or more trailers or semi-trailers. Australia has the longest and heaviest road-legal road trains in the world, weighing up to 200 tonnes (197 long tons; 220 short tons). [ 1 ]
Double (two-trailer) road train combinations are allowed on some roads in most states of Australia, including specified approaches to the ports and industrial areas of Adelaide , South Australia [ 14 ] and Perth , Western Australia . [ 15 ] An A-double road train should not be confused with a B-double , which is allowed access to most of the country and in all major cities. [ 16 ]
In South Australia, B-triples up to 35.0 metres (114 ft 10 in) and two-trailer road trains to 36.5 metres (119 ft 9 in) were only permitted to travel on a small number of approved routes in the north and west of the state, including access to Adelaide's north-western suburban industrial and export areas such as Port Adelaide , Gillman and Outer Harbour via Salisbury Highway , Port Wakefield Road and Augusta Highway before 2017. [ 14 ] A project named Improving Road Transport for the Agriculture Industry added 7,200 kilometres (4,500 mi) of key routes permitted to operate vehicles over 30 m (98 ft 5 in) in 2015–2018. [ 17 ]
Triple (three-trailer) road trains operate in western New South Wales , western Queensland , South Australia, Western Australia and the Northern Territory , with the last three states also allowing AB-quads (B double with two additional trailers coupled behind). Darwin is the only capital city in the world where triples and quads are allowed to within 1 km (0.62 mi) of the central business district (CBD). [ 16 ]
Strict regulations regarding licensing, registration, weights, and experience apply to all operators of road trains throughout Australia.
Road trains are used for transporting all manner of materials: common examples are livestock , fuel , mineral ores , and general freight. Their cost-effective transport has played a significant part in the economic development of remote areas; some communities are totally reliant on regular service.
When road trains get close to populated areas, the multiple dog-trailers are unhooked, the dollies removed and then connected individually to multiple trucks at "assembly" yards.
When the flat-top trailers of a road train need to be transported empty, it is common practice to stack them. This is commonly referred to as "doubled-up" or "doubling-up". Sometimes, if many trailers are required to be moved at one time, they will be triple-stacked, or "tripled-up".
Higher Mass Limits (HML) schemes now exist in all jurisdictions in Australia, allowing trucks to carry additional weight beyond general mass limits. Some roads in some states regularly allow up to 4 trailers at 53.5 metres (175 ft 6 in) long and 136 tonnes (134 long tons; 150 short tons). [ 18 ] On private property such as mines, highway restrictions on trailer length, weight and count may not apply. Some of the heaviest road trains carrying ore are multiple-unit combinations with a diesel engine in each trailer, controlled from the tractor. [ 19 ] [ 20 ]
Annual diesel sales in Australia are around 32 billion litres, [ 21 ] of which some is used by road trains. In order to reduce emissions and running costs, trials are being conducted with road trains powered by batteries . [ 22 ] [ 23 ]
In the United States, trucks on public roads are limited to two trailers (two 28 ft or 8.5 m and a dolly to connect; the limit is 63 ft or 19 m end to end). Some states allow three 28 ft or 8.5 m trailers, although triples are usually restricted to less populous states such as Idaho, Oregon, and Montana, plus the Ohio Turnpike [ 24 ] and Indiana East–West Toll Road . Triples are used for long-distance less-than-truckload freight hauling (in which case the trailers are shorter than a typical single-unit trailer) or resource hauling in the interior west (such as ore or aggregate ). Triples are sometimes marked with "LONG LOAD" banners both front and rear. "Turnpike doubles"—tractors towing two full-length trailers—are allowed on the New York Thruway and Massachusetts Turnpike ( Interstate 90 ), Florida's Turnpike , Kansas Turnpike (Kansas City – Wichita route) as well as the Ohio and Indiana toll roads. [ 25 ] Colorado allows what are known as "Rocky Mountain Doubles" which is one full length 53 ft or 16 m trailer and an additional 28 ft or 8.5 m trailer. The term "road train" is not commonly used in the United States; "turnpike train" has been used, generally in a pejorative sense. [ 26 ]
In the western United States LCVs are allowed on many Interstate highways. The only LCVs allowed nationwide are STAA doubles . [ 27 ]
On private property like farms, highway restrictions on trailer length and count do not apply. Bales of straw , for example, are sometimes moved in wagon trains of up to 20 trailers an eighth of a mile long (carrying a total of 3,600 bales). [ 28 ]
In Finland , Sweden , Germany , the Netherlands , Denmark , Belgium , and on some roads in Norway , trucks with trailers are allowed to be 25.25 m (82.8 ft) long. [ 29 ] In Finland, a length of 34.5 metres (113 ft) has been allowed since January 2019. In Sweden, this length has been allowed on several major roads, including all of E4 , since August 2023. [ 30 ] A length of 34.5 metres allows two 40-foot containers to be carried.
Elsewhere in the European Union , the limit is 18.75 m (61.5 ft) (Norway 19.5 m or 64 ft). The trucks are of a cab-over-engine design, with a flat front and a high floor, about 1.2 m (3.9 ft) above ground. The Scandinavian countries are less densely populated than the other EU countries, and distances, especially in Finland and Sweden, are long. Until the late 1960s, vehicle length was unlimited, giving rise to long vehicles to cost effectively handle goods. As traffic increased, truck lengths became more of a concern and they were limited, albeit at a more generous level than in the rest of Europe.
In the United Kingdom in 2009, a two-year desk study of Longer Heavier Vehicles (LHVs), including up to 11-axle, 34-metre (111.5 ft) long, 82- tonne (81- long-ton ; 90- short-ton ) combinations, ruled out all road-train-type vehicles for the foreseeable future.
In 2010, Sweden was performing tests on log-hauling trucks weighing up to 90 t (89 long tons; 99 short tons) and measuring 30 metres (98.4 ft), and on haulers for two 40 ft containers, measuring 32 metres (105 ft) in total. [ 31 ] [ 32 ] In 2015, a pilot began in Finland to test a 104-tonne timber lorry which was 33 metres (108 ft) long and had 13 axles. Testing of the special lorry was limited to a predefined route in northern Finland. [ 33 ] [ 34 ]
Since 2015, Spain has permitted B-doubles with a length of up to 25.25 metres (82.8 ft) and weighing up to 60 tonnes to travel on certain routes. [ 35 ] In July 2024, after 5 years of testing, HCTs have been permitted on Spanish territory, with lengths of up to 32 meters (105 ft) and 70 gross tonnes. [ 36 ]
Since 2016, Eoin Gavin Transport, Shannon and Dennison Trailers, Kildare have been trialling 25.25 metres (82.8 ft) B-doubles on the Irish motorways. [ 37 ] In February 2024, The Pallet Network announced four B-doubles to operate between Dublin, Cork and Galway. [ 38 ]
In 2020, a small number of road trains were operating between Belgium and the Netherlands.
In Mexico, road trains exist in a limited capacity due to the size of roads in its larger cities, and they are only allowed to pull two trailers joined with a pup or dolly created for this purpose. Recently, [ when? ] regulations have become more severe and strict to avoid overloading and accidents and to adhere to the federal rules of transportation. Truck drivers must obtain a certificate attesting that they are capable of handling and driving this type of vehicle. [ 39 ]
In addition to the normal safety requirements, all tractor vehicles performing road-train-type transport in the country need to carry visual warnings. [ 39 ]
Some major cargo enterprises in the country use this form of transport to cut the costs of carrying all types of goods in regions where other forms of transportation are comparatively too expensive, owing to the difficult geography of the country. [ 40 ]
The Mexican road train is equivalent to the A-double configuration in the Australian standard; the difference is that Mexican road trains can be hauled by a long-distance tractor truck.
In Zimbabwe, road trains are used on only one highway, the Ngezi–Makwiro road, which carries 42 m long road trains pulling three trailers.
On 15 February 2025, Volvo Trucks India and Delhivery , a Gurgaon-based logistics company, unveiled India's first road train, consisting of a Volvo FM 420 4x2 tractor and a B-double combination of a 24 ft lead trailer and a 44 ft semi-trailer coupled via fifth wheel, making the total vehicle length close to 80 ft, with approvals from the Ministry of Road Transport and Highways (MORTH) and the Automotive Research Association of India (ARAI) . Currently, road trains are only permitted to operate on the Mumbai–Nagpur Expressway . [ 41 ]
An A-double consists of a prime mover towing a normal lead trailer with a towing hitch, such as a Ringfeder coupling, affixed at the rear. A fifth-wheel dolly is then affixed to the hitch, allowing another standard trailer to be attached. Eleven-axle coal tipping sets carrying coal to Port Kembla , Australia, are described as A-doubles. One such set has a tare weight of 35.5 t (39.1 short tons) and is capable of carrying 50 t (55.1 short tons) of coal. [ 42 ] A shield at the front of the second trailer directs coal tipped from the first trailer downwards.
Pros include the ability to use standard semi-trailers and the potential for very large loads. Cons mainly include very tricky reversing due to the multiple articulation points across two different types of coupling.
A B-double consists of a prime mover towing a specialised lead trailer that has a fifth wheel mounted on the rear towing another semi-trailer, resulting in two articulation points . It may also be known as a B-train , interlink in South Africa, B-double in Australia, or tandem tractor-trailer, tandem rig or double in North America. B-doubles are typically up to 27.5 m (90 ft 3 in) long. The fifth wheel coupling is located at the rear of the lead (first) trailer and is mounted on a "tail" section commonly located immediately above the lead trailer axles. [ 43 ] In North America this area of the lead trailer is often referred to as the "bridge". The twin-trailer assembly is hooked up to a tractor unit via the tractor unit's fifth wheel in the customary manner.
An advantage of the B-train configuration is its inherent stability compared with most other twin-trailer combinations: because the turntable is mounted on the forward trailer, a B-train does not require a converter dolly, unlike all other road train configurations. [ 44 ] It is this feature above all else that has ensured its continued development and global acceptance. [ 45 ] Reversing is also simpler, as all articulation points are fifth wheel couplings.
B-train trailers are used to transport many types of load and examples include tanks for liquid and dry-bulk, flat-beds and curtain-siders for deck-loads, bulkers for aggregates and wood residuals, refrigerated trailers for chilled and frozen goods, vans for dry goods, logging trailers for forestry work and cattle liners for livestock.
In Australia, standard semi-trailers are permitted on almost any road. B-doubles are more heavily regulated, but routes are made available by state governments for almost anywhere that significant road freight movement is required. [ 46 ]
Around container ports in Australia exists what is known as a super B-double: a B-double with an extra axle (a total of four) on the lead trailer and either a three- or four-axle set on the rear trailer. This allows the super B-double to carry two 40-foot containers, four 20-foot containers, or one 40-foot container and two 20-foot containers. However, because of their length and low accessibility into narrow streets, these vehicles are restricted in where they can go and are generally used for terminal-to-terminal work, i.e. wharf to container holding park or wharf-to-wharf. The rear axle on each trailer can also pivot slightly while turning to prevent scrubbing out the edges of the tyres due to the heavy loads placed on them.
A B-triple is the same as a B-double, but with an additional lead trailer behind the prime mover . [ 47 ] The B-train principle has been exploited in Australia, where configurations such as B-triples, double-B-doubles and 2AB quads are permitted on some routes. These are run in most states of Australia where double road trains are allowed. Australia's National Transport Commission proposed a national framework for B-triple operations that includes basic vehicle specifications and operating conditions that the commission anticipates will replace the current state-by-state approach, which largely discourages the use of B-triples for interstate operation. [ 48 ] In South Australia, B-triples up to 35.0 metres (114 ft 10 in) and two-trailer road trains up to 36.5 metres (119 ft 9 in) are generally only permitted on specified routes, including access to industrial and export areas near Port Adelaide from the north. [ 46 ]
In 2018, B-quads were also allowed in Victoria, New South Wales and Queensland, enabling more economical transport. [ 49 ]
An AB-triple consists of a standard trailer with a B-double behind it coupled via a converter dolly, giving a trailer order of standard, dolly, B-train, standard. The final trailer may be either a B-train lead trailer with no trailer attached to it or a standard trailer. Alternatively, a BA-triple reverses this configuration, consisting of a B-double with a converter dolly and a standard trailer behind it.
In South Australia, larger road trains up to 53.5 metres (175 ft 6 in) (three full trailers) are only permitted on certain routes in the Far North . [ 46 ]
A BAB quad consists of two B-double units linked with a converter dolly, with a trailer order of prime mover, B-train, dolly, B-train.
An ABB quad consists of a standard trailer and a B-triple unit linked with a converter dolly.
An AAB quad consists of A-double and B-double units linked with a converter dolly. Alternatively, a BAA quad reverses this configuration: first the B-double, then the A-double.
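One way to read these letter codes, consistent with the combinations described above (a hypothetical decoding offered for illustration, not an official definition), is as the sequence of couplings behind the first trailer: 'A' for a converter-dolly (drawbar) connection and 'B' for a fifth wheel mounted on the trailer ahead. The sketch below spells this out in Python:

```python
# Toy decoding of multi-combination letter codes (illustrative assumption:
# each letter names the coupling between one trailer and the next).
COUPLING = {"A": "converter dolly (drawbar)", "B": "fifth wheel on the trailer ahead"}

def describe(code: str) -> str:
    """Spell out a combination code such as 'BAB' trailer by trailer."""
    parts = ["trailer 1 on the prime mover's fifth wheel"]
    for i, c in enumerate(code, start=2):
        parts.append(f"trailer {i} via {COUPLING[c]}")
    return ", ".join(parts)

for code in ("AB", "BAB", "ABB", "AAB"):
    print(f"{code}: {describe(code)}")
```

Under this reading, an n-letter code always describes n + 1 trailers, which matches the triple and quad names used above: 'AB' yields three trailers (a triple), while 'BAB', 'ABB' and 'AAB' each yield four (quads).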
In some parts of Australia, 'super quad' road trains up to 60 metres (196 ft 10 in) are permitted, consisting of four standard trailers connected via three converter dollies. [ 50 ] [ 51 ]
A C-train is a semi-trailer attached to a turntable on a C-dolly. Unlike in an A-train, the C-dolly is connected to the tractor or another trailer in front of it with two drawbars , thus eliminating the drawbar connection as an articulation point. One of the axles on a C-dolly is self-steerable to prevent tire scrubbing. C-dollies are not permitted in Australia, due to the lack of articulation.
A dog-trailer (also called a pup) is a short trailer with a permanent dolly, with a single A-frame drawbar that fits into the Ringfeder or pintle hook on the rear of the truck or trailer in front, giving the whole unit two or more articulation points and very little roll stiffness. These are commonly used in Australia, particularly for end-tipper applications. They are normally limited to a single dog trailer behind a short-bodied (independently load-carrying) truck, with a standard length limit of 19 metres (20 metres under design permits). A quad dog trailer in combination with a bodied truck is able to carry more weight than a truck and single semi-trailer of the same length limit and access restrictions, as well as carrying two different materials as separate loads, such as with tipper bodies and fluid tankers.
In 1991, at a special Premiers' Conference , Australian heads of government signed an inter-governmental agreement to establish a national heavy vehicle registration, regulation and charging scheme: the Federal Interstate Registration Scheme (FIRS). [ 52 ] Its requirements are as follows:
Due to the "eastern" and "western" mass limits in Australia, two different categories of registration were enacted. The second digit of the registration plate showed what mass limit was allowed for that vehicle. If a vehicle had a 'V' as the second letter, its mass limits were in line with the eastern states mass limits, which were:
If a vehicle had an X as the second letter, its mass limits were in line with the western states mass limits, which were:
A 'T' as the second character of the registration designates a trailer.
One of the main conditions of the registration is that intrastate operation is not permitted: the load has to come from one state or territory and be delivered to another. Many grain carriers were reported and prosecuted for cartage from the paddock to the silos. However, if the load went to a port silo, they were given the benefit of the doubt, as that grain was more than likely going overseas.
Australian road trains have horizontal signs front and back with 180 mm (7.1 in) high black uppercase letters on a reflective yellow background reading "ROAD TRAIN". The sign(s) must have a black border and be at least 1.02 m (3.3 ft) long and 220 mm (8.7 in) high and be placed between 500 mm (19.7 in) and 1.8 m (5.9 ft) above the ground on the fore or rearmost surface of the unit.
In the case of B-triples in Western Australia, they are signed front and rear with "ROAD TRAIN" until they cross the WA/SA border, after which they are signed front and rear with "LONG VEHICLE".
Converter dollies must have a sign affixed horizontally to the rearmost point, complying with the same conditions, reading "LONG VEHICLE". This is required for when a dolly is towed behind a trailer.
Operational weights are based on axle group masses, as follows:
Therefore,
The Australian national heavy vehicle speed limit is 100 km/h (62 mph), except in New South Wales and Queensland, where the speed limit for any road train is 90 km/h (56 mph). [ 54 ] B-triple road trains have a speed limit of 100 km/h (62 mph) in Queensland. [ 55 ]
In Canada, there has been no difference between the speed limits for cars and road trains, which range from 80 to 100 km/h (50 to 62 mph) on two-lane roads and between 100 and 110 km/h (62 and 68 mph) on three-lane roads. [ 56 ]
In Europe, the speed limit for heavy goods trucks is usually 80 km/h (50 mph), and a law requiring speed limiters makes it impossible to drive heavy trucks faster than 90 km/h (56 mph). [ 57 ] These limits are normally the same for road trains. Trucks are not encouraged to overtake slightly slower trucks on motorways because doing so obstructs the left lane, although it is common anyway, for example when heavy road trains lose speed uphill.
Below is a list of the longest road trains driven in the world. Most of these had no practical use, as they were assembled and driven across relatively short distances for the express purpose of record-breaking. | https://en.wikipedia.org/wiki/Road_train |
In business travel, a road warrior is a remote worker who uses mobile devices such as tablet computers , laptops , smartphones , and Internet access while traveling to conduct business. [ 1 ] The term has often been used with regard to salespeople who travel often and who seldom are in the office. Today it is used for anyone who works outside the office and travels for business. Unlike digital nomads , road warriors do not necessarily choose to travel; it is part of their work duties.
The term is believed to originate in the Mel Gibson movie Mad Max 2: The Road Warrior (1981).
In the pre-mobile technology era, road warriors were people whose jobs required a lot of travel, either by car or plane. The majority of this group were salespeople and professionals who needed to be with clients, such as accountants, consultants, etc. They typically would need to come back to their company's office for administrative duties. The office held limited resources (phones, fax machines, computers, etc.) that were best used by centralizing them.
As both computer and telecommunication technologies became more portable and less expensive, the need for road warriors to come back to offices for use of limited and costly resources began to wane.
Major technologies that impacted road warriors:
Road warriors use mobile devices and laptop computers that connect to companies' information systems. Specialized applications from software as a service (SaaS) providers are often used to conduct their work duties.
The term road warrior has been credited to the 1981 movie Mad Max 2 , subtitled The Road Warrior , starring Mel Gibson . Its harsh road life in a post-apocalyptic world was used to symbolize the hardship of modern business travel.
The 2009 movie " Up in the Air " starred George Clooney as a person who lives the road warrior life to the extreme.
| https://en.wikipedia.org/wiki/Road_warrior_(computing) |
A road is a thoroughfare used primarily for movement of traffic . Roads differ from streets , whose primary use is local access. They also differ from stroads , which combine the features of streets and roads. Most modern roads are paved .
The words "road" and "street" are commonly considered to be interchangeable, but the distinction is important in urban design .
There are many types of roads , including parkways , avenues , controlled-access highways (freeways, motorways, and expressways), tollways , interstates , highways , and local roads.
The primary features of roads include lanes , sidewalks (pavement), roadways (carriageways), medians , shoulders , verges , bike paths (cycle paths), and shared-use paths .
Historically, many roads were simply recognizable routes without any formal construction or maintenance. [ 1 ]
The Organization for Economic Co-operation and Development (OECD) defines a road as "a line of communication (travelled way) using a stabilized base other than rails or air strips open to public traffic, primarily for the use of road motor vehicles running on their own wheels", which includes "bridges, tunnels, supporting structures, junctions, crossings, interchanges, and toll roads, but not cycle paths". [ 2 ]
The Eurostat , ITF and UNECE Glossary for Transport Statistics Illustrated defines a road as a "Line of communication (traveled way) open to public traffic, primarily for the use of road motor vehicles, using a stabilized base other than rails or air strips. [...] Included are paved roads and other roads with a stabilized base, e.g. gravel roads. Roads also cover streets, bridges, tunnels, supporting structures, junctions, crossings and interchanges. Toll roads are also included. Excluded are dedicated cycle lanes." [ 3 ]
The 1968 Vienna Convention on Road Traffic defines a road as the entire surface of any way or street open to public traffic. [ 4 ]
In urban areas roads may diverge through a city or village and be named as streets, serving a dual function as urban space easement and route. [ 5 ] Modern roads are normally smoothed, paved, or otherwise prepared to allow easy travel . [ 6 ]
Part 2, Division 1, clauses 11–13 of the National Transport Commission Regulations 2006 defines a road in Australia as 'an area that is open to or used by the public and is developed for, or has as one of its main uses, the driving or riding of motor vehicles.' [ 7 ]
Further, it defines a shoulder (typically an area of the road outside the edge line, or the curb) and a road-related area, which includes green areas separating roads, areas designated for cyclists and areas generally accessible to the public for driving, riding or parking vehicles.
In New Zealand, the definition of a road is broad in common law [ 8 ] where the statutory definition includes areas the public has access to, by right or not. [ 9 ] Beaches, publicly accessible car parks and yards (even if privately owned), river beds, road shoulders (verges), wharves and bridges are included. [ 10 ] However, the definition of a road for insurance purposes may be restricted to reduce risk.
In the United Kingdom, The Highway Code details rules for "road users", but there is some ambiguity between the terms highway and road . [ 11 ] For the purposes of English law, the Highways Act 1980 , which covers England and Wales but not Scotland or Northern Ireland , defines a road as "any length of highway or of any other road to which the public has access, and includes bridges over which a road passes". [ 12 ] This includes footpaths, bridleways and cycle tracks, and also roads and driveways on private land and many car parks. [ 13 ] Vehicle Excise Duty , a road use tax , is payable on some vehicles used on the public road. [ 13 ]
The definition of a road depends on the definition of a highway; there is no formal definition for a highway in the relevant Act. A 1984 ruling said "the land over which a public right of way exists is known as a highway; and although most highways have been made up into roads, and most easements of way exist over footpaths, the presence or absence of a made road has nothing to do with the distinction". [ 14 ] [ 15 ] Another legal view is that while a highway historically included footpaths , bridleways , driftways, etc., it can now be used to mean those ways that allow the movement of motor vehicles , and the term rights of way can be used to cover the wider usage. [ 16 ]
In the United States, laws distinguish between public roads , which are open to public use, [ 17 ] and private roads , which are privately controlled. [ 18 ]
The assertion that the first pathways were the trails made by animals has not been universally accepted; in many cases animals do not follow constant paths. [ 1 ] Some believe that some roads originated from following animal trails. [ 25 ] [ 26 ] The Icknield Way may exemplify this type of road origination, where human and animal both selected the same natural line. [ 27 ] By about 10,000 BC human travelers used rough roads/pathways. [ 1 ]
In transport engineering , subgrade is the native material underneath a constructed road.
Road construction requires the creation of an engineered continuous right-of-way or roadbed , overcoming geographic obstacles and having grades low enough to permit vehicle or foot travel , [ 42 ] : 15 and may be required to meet standards set by law [ 43 ] or official guidelines. [ 44 ] The process is often begun with the removal of earth and rock by digging or blasting, construction of embankments , bridges and tunnels , and removal of vegetation (this may involve deforestation ) and followed by the laying of pavement material. A variety of road building equipment is employed in road building. [ 45 ] [ 46 ]
After design, approval , planning , legal, and environmental considerations have been addressed, the alignment of the road is set out by a surveyor . [ 37 ] The radii and gradient are designed and staked out to best suit the natural ground levels and minimize the amount of cut and fill. [ 44 ] : 34 Great care is taken to preserve reference benchmarks . [ 44 ] : 59
Roads are designed and built for primary use by vehicular and pedestrian traffic. Storm drainage and environmental considerations are a major concern. Erosion and sediment controls are constructed to prevent detrimental effects. Drainage lines are laid with sealed joints in the road easement with runoff coefficients and characteristics adequate for the land zoning and storm water system. Drainage systems must be capable of carrying the ultimate design flow from the upstream catchment with approval for the outfall from the appropriate authority to a watercourse , creek , river or the sea for drainage discharge. [ 44 ] : 38–40
A borrow pit (a source for obtaining fill, gravel, and rock) and a water source should be located near or within reasonable distance of the road construction site. Approval from local authorities may be required to draw water or for working (crushing and screening) of materials for construction needs. The topsoil and vegetation are removed from the borrow pit and stockpiled for subsequent rehabilitation of the extraction area. Side slopes in the excavation area should be no steeper than one vertical to two horizontal for safety reasons. [ 44 ] : 53–56
Old road surfaces, fences, and buildings may need to be removed before construction can begin. Trees in the road construction area may be marked for retention. These protected trees should not have the topsoil within the area of the tree's drip line removed and the area should be kept clear of construction material and equipment. Compensation or replacement may be required if a protected tree is damaged. Much of the vegetation may be mulched and put aside for use during reinstatement. The topsoil is usually stripped and stockpiled nearby for rehabilitation of newly constructed embankments along the road. Stumps and roots are removed and holes filled as required before the earthwork begins. Final rehabilitation after road construction is completed will include seeding, planting, watering and other activities to reinstate the area to be consistent with the untouched surrounding areas. [ 44 ] : 66–67
Processes during earthwork include excavation, removal of material to spoil, filling, compacting, construction and trimming. If rock or other unsuitable material is discovered, it is removed, its moisture content is managed, and it is replaced with standard fill compacted to meet the design requirements (generally 90–95% relative compaction). Blasting is not frequently used to excavate the roadbed, as the intact rock structure forms an ideal road base. When a depression must be filled to come up to the road grade, the native bed is compacted after the topsoil has been removed. The fill is made by the "compacted layer method", in which a layer of fill is spread then compacted to specifications, under saturated conditions. The process is repeated until the desired grade is reached. [ 44 ] : 68–69
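Here "relative compaction" follows its standard geotechnical definition (an assumption, since the text does not define the term): the dry density achieved in the field divided by the maximum dry density obtained in a laboratory compaction test, expressed as a percentage, R C = 100 ⋅ ρ field / ρ max {\displaystyle RC=100\cdot \rho _{\text{field}}/\rho _{\text{max}}} . A lift compacted to 95% relative compaction has therefore reached 95% of the density achievable in the reference laboratory test.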
General fill material should be free of organics , meet minimum California bearing ratio (CBR) results and have a low plasticity index . The lower fill generally comprises sand or a sand-rich mixture with fine gravel, which acts as an inhibitor to the growth of plants or other vegetable matter. The compacted fill also serves as lower-stratum drainage. Select second fill ( sieved ) should be composed of gravel , decomposed rock or broken rock below a specified particle size and be free of large lumps of clay . Sand clay fill may also be used. The roadbed must be "proof rolled" after each layer of fill is compacted. If a roller passes over an area without creating visible deformation or spring, the section is deemed to comply. [ 44 ] : 70–72
Geosynthetics such as geotextiles , geogrids , and geocells are frequently used in the various pavement layers to improve road quality. These materials and methods are used in low-traffic private roadways as well as public roads and highways. [ 47 ] Geosynthetics perform four main functions in roads: separation, reinforcement, filtration, and drainage; which increase the pavement performance, reduce construction costs and decrease maintenance. [ 48 ] [ self-published source ]
The completed roadway is finished by paving or left with a gravel or other natural surface. The type of road surface is dependent on economic factors and expected usage. Safety improvements such as traffic signs , crash barriers , raised pavement markers and other forms of road surface marking are installed.
According to a May 2009 report by the American Association of State Highway and Transportation Officials (AASHTO) and TRIP – a national transportation research organization – driving on rough roads costs the average American motorist approximately $400 a year in extra vehicle operating costs. Drivers living in urban areas with populations more than 250,000 are paying upwards of $750 more annually because of accelerated vehicle deterioration, increased maintenance, additional fuel consumption , and tire wear caused by poor road conditions.
When a single carriageway road is converted into dual carriageway by building a second separate carriageway alongside the first, it is usually referred to as duplication , [ 49 ] twinning or doubling . The original carriageway is changed from two-way to become one-way, while the new carriageway is one-way in the opposite direction. In the same way as converting railway lines from single track to double track , the new carriageway is not always constructed directly alongside the existing carriageway.
Roads that are intended for use by a particular mode of transport can be reallocated for another mode of transport, [ 50 ] i.e. by using traffic signs . For instance, in the ongoing road space reallocation effort, some roads (particularly in city centers) which are intended for use by cars are increasingly being repurposed for cycling and/or walking . [ 51 ] [ 52 ] [ 53 ]
Like all structures, roads deteriorate over time. Deterioration is primarily due to accumulated damage from vehicles; however, environmental effects such as frost heaves , thermal cracking and oxidation often contribute. [ 54 ] According to a series of experiments carried out in the late 1950s, called the AASHO Road Test , it was empirically determined that the effective damage done to the road is roughly proportional to the fourth power of axle weight . [ 55 ] A typical tractor-trailer weighing 80,000 pounds (36.287 t ) with 8,000 pounds (3.629 t) on the steer axle and 36,000 pounds (16.329 t) on each of the tandem axle groups is expected to do 7,800 times more damage than a passenger vehicle with 2,000 pounds (0.907 t) on each axle. Potholes on roads are caused by rain damage and vehicle braking or related construction work.
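To illustrate the fourth-power law, the sketch below recomputes the truck-versus-car damage ratio. The reference loads (18,000 lb for a single axle, 34,000 lb for a tandem group) and the simplified per-group treatment are assumptions not stated above; full AASHTO load-equivalency tables are more detailed, so this back-of-envelope estimate (about 8,400×) only approximates the quoted 7,800× figure.

```python
# Illustrative fourth-power-law estimate of relative pavement damage.
# Assumptions (not from the text above): damage of an axle group scales as
# (load / reference)**4, with an 18,000 lb single-axle reference and a
# 34,000 lb tandem-group reference. Real AASHTO equivalency factors are
# more detailed, so this only approximates the quoted 7,800x figure.

def esal(load_lb: float, reference_lb: float) -> float:
    """Equivalent single-axle loads contributed by one axle group."""
    return (load_lb / reference_lb) ** 4

# 80,000 lb tractor-trailer: 8,000 lb steer axle + two 36,000 lb tandem groups.
truck = esal(8_000, 18_000) + 2 * esal(36_000, 34_000)

# Passenger car: two axles carrying 2,000 lb each.
car = 2 * esal(2_000, 18_000)

print(f"truck: {truck:.2f} ESALs, car: {car:.6f} ESALs")
print(f"relative damage: about {truck / car:,.0f} times")  # roughly 8,400x
```

The tandem groups dominate the total: each one, loaded slightly above its reference value, contributes more than one equivalent single-axle load, while the car's lightly loaded axles contribute almost nothing, which is why damage estimates are so sensitive to heavy axles.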
Pavements are designed for an expected service life or design life . In some parts of the United Kingdom the standard design life is 40 years for new bitumen and concrete pavement. Maintenance is considered in the whole life cost of the road with service at 10, 20 and 30-year milestones. [ 56 ] Roads can be and are designed for a variety of lives (8-, 15-, 30-, and 60-year designs). When pavement lasts longer than its intended life, it may have been overbuilt, and the original costs may have been too high. When a pavement fails before its intended design life, the owner may have excessive repair and rehabilitation costs. Some asphalt pavements are designed as perpetual pavements with an expected structural life in excess of 50 years. [ 57 ]
Many asphalt pavements built over 35 years ago, despite not being specifically designed as a perpetual pavement, have remained in good condition long past their design life. [ 58 ] Many concrete pavements built since the 1950s have significantly outlived their intended design lives. [ 59 ] Some roads like Chicago 's Wacker Drive , a major two-level (and at one point, three-level) roadway in the downtown area, are being rebuilt with a designed service life of 100 years. [ 60 ]
Virtually all roads require some form of maintenance before they come to the end of their service life. Pro-active agencies use pavement management techniques to continually monitor road conditions and schedule preventive maintenance treatments as needed to prolong the lifespan of their roads. Technically advanced agencies monitor the road network surface condition with sophisticated equipment such as laser/inertial profilometers . These measurements include road curvature , cross slope , asperity , roughness , rutting and texture . Software algorithms use this data to recommend maintenance or new construction.
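As a toy illustration of how such software might map measured condition data to a treatment recommendation, consider the sketch below; the thresholds, units and function name are invented for illustration, whereas real pavement management systems rely on calibrated deterioration models and budget optimization.

```python
def recommend_treatment(iri_m_per_km: float, rut_depth_mm: float) -> str:
    """Toy rule-based recommendation from roughness (IRI) and rutting.

    Thresholds are hypothetical, chosen only to illustrate the idea of
    triggering heavier treatments as measured condition worsens.
    """
    if iri_m_per_km > 4.0 or rut_depth_mm > 20.0:
        return "rehabilitate: structural overlay or reconstruction"
    if iri_m_per_km > 2.5 or rut_depth_mm > 10.0:
        return "preventive: thin overlay or micro milling"
    return "routine: crack sealing and continued monitoring"

print(recommend_treatment(iri_m_per_km=3.1, rut_depth_mm=8.0))
```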
Maintenance treatments for asphalt concrete generally include thin asphalt overlays, crack sealing, surface rejuvenating, fog sealing, micro milling or diamond grinding and surface treatments . Thin surfacing preserves, protects and improves the functional condition of the road while reducing the need for routine maintenance, leading to extended service life without increasing structural capacity. [ 61 ]
Older concrete pavements that develop faults can be repaired with a dowel bar retrofit , in which slots are cut in the pavement at each joint, and dowel bars are placed in the slots, which are then filled with concrete patching material. This can extend the life of the concrete pavement for 15 years. [ 62 ]
Failure to maintain roads properly can create significant costs to society. A 2009 report released by the American Association of State Highway and Transportation Officials estimated that about 50% of the roads in the US are in bad condition, with urban areas worse. The report estimates that urban drivers pay an average of $746/year on vehicle repairs while the average US motorist pays about $335/year. In contrast, the average motorist pays about $171/year in road maintenance taxes (based on 600 gallons/year and $0.285/gallon tax).
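As an arithmetic check, the road-maintenance tax estimate follows directly from the report's stated assumptions: 600 gallons per year × $0.285 per gallon = $171 per year.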
Distress and serviceability loss on concrete roads can be caused by loss of support due to voids beneath the concrete pavement slabs. The voids usually occur near cracks or joints due to surface water infiltration . The most common causes of voids are pumping, consolidation, subgrade failure and bridge approach failure. Slab stabilization is a non-destructive method of solving this problem and is usually employed with other concrete pavement restoration methods including patching and diamond grinding. The technique restores support to concrete slabs by filling the small voids that develop underneath the concrete slab at joints, cracks or the pavement edge.
The process consists of pumping a cementitious grout or polyurethane mixture through holes drilled through the slab. The grout can fill small voids beneath the slab and/or sub-base. The grout also displaces free water and helps keep water from saturating and weakening support under the joints and slab edge after stabilization is complete. The three steps for this method after finding the voids are locating and drilling holes, grout injection and post-testing the stabilized slabs.
Slab stabilization does not correct depressions, increase the design structural capacity, stop erosion or eliminate faulting. It does, however, restore slab support, thereby decreasing deflections under load. Stabilization should only be performed at joints and cracks where loss of support exists. Visual inspection is the simplest way to find voids. Signs that repair is needed are transverse joint faulting, corner breaks, shoulder drop-off, and lines at or near joints and cracks. Deflection testing is another common procedure used to locate voids. It is recommended to do this testing at night, since at cooler temperatures joints open, aggregate interlock diminishes, and load deflections are at their highest.
Ground penetrating radar pulses electromagnetic waves into the pavement and measures and graphically displays the reflected signal. This can reveal voids and other defects.
The epoxy/core test detects voids by visual and mechanical methods. It consists of drilling a 25 to 50 millimeter hole through the pavement into the sub-base with a dry-bit roto-hammer . Next, a two-part epoxy (dyed for visual clarity) is poured into the hole. Once the epoxy hardens, technicians drill a core through the hole. If a void is present, the epoxy will stick to the core and provide physical evidence.
Common stabilization materials include pozzolan -cement grout and polyurethane. The requirements for slab stabilization are strength and the ability to flow into or expand to fill small voids. Colloidal mixing equipment is necessary to use the pozzolan-cement grouts. The contractor must place the grout using a positive-displacement injection pump or a non-pulsing progressive cavity pump. A drill is also necessary but it must produce a clean hole with no surface spalling or breakouts. The injection devices must include a grout packer capable of sealing the hole. The injection device must also have a return hose or a fast-control reverse switch, in case workers detect slab movement on the uplift gauge. The uplift beam helps to monitor the slab deflection and has to have sensitive dial gauges. [ 63 ] [ 64 ]
Also called joint and crack repair, this method's purpose is to minimize infiltration of surface water and incompressible material into the joint system. Joint sealants are also used to reduce dowel bar corrosion in concrete pavement restoration techniques. Successful resealing consists of old sealant removal, shaping and cleaning the reservoir, installing the backer rod and installing the sealant. Sawing, manual removal, plowing and cutting are methods used to remove the old sealant. Saws are used to shape the reservoir. When cleaning the reservoir, no dust, dirt or traces of old sealant should remain. Thus, it is recommended to water wash, sand-blast and then air blow to remove any sand, dirt or dust. The backer rod installation requires a double-wheeled, steel roller to insert the rod to the desired depth. After inserting the backer rod, the sealant is placed into the joint. There are various materials to choose for this method including hot pour bituminous liquid, silicone and preformed compression seals. [ 63 ] [ 65 ] [ 66 ] [ 67 ]
Careful design and construction of roads can increase road traffic safety and reduce the harm (deaths, injuries, and property damage) on the highway system from traffic collisions.
On neighborhood roads traffic calming , safety barriers , pedestrian crossings and cycle lanes can help protect pedestrians, cyclists, and drivers.
Lane markers in some countries and states are marked with Cat's eyes or Botts dots. Botts dots are not used where it is icy in the winter, because frost and snowplows can break the glue that holds them to the road, although they can be embedded in short, shallow trenches carved in the roadway, as is done in the mountainous regions of California.
For major roads risk can be reduced by providing limited access from properties and local roads, grade separated junctions and median dividers between opposite-direction traffic to reduce the likelihood of head-on collisions.
The placement of energy attenuation devices (e.g. guardrails, wide grassy areas, sand barrels) is also common. Some road fixtures such as road signs and fire hydrants are designed to collapse on impact. Light poles are designed to break at the base rather than violently stop a car that hits them. Highway authorities may also remove larger trees from the immediate vicinity of the road. During heavy rains, if the elevation of the road surface is not higher than the surrounding landscape, it may result in flooding. [ 68 ]
Speed limits can improve road traffic safety and reduce the number of road traffic casualties from traffic collisions . In its World Report on Road Traffic Injury Prevention , the World Health Organization (WHO) identifies speed control as one of various interventions likely to contribute to a reduction in road casualties.
Road conditions are the collection of factors describing the ease of driving on a particular stretch of road, or on the roads of a particular locality, including the quality of the pavement surface , potholes , road markings, and weather . It has been reported that "[p]roblems of transportation participants and road conditions are the main factors that lead to road traffic accidents". [ 69 ] It has further been specifically noted that "weather conditions and road conditions are interlinked as weather conditions affect the road conditions". [ 70 ] Specific aspects of road conditions can be of particular importance for particular purposes. For example, for autonomous vehicles such as self-driving cars , significant road conditions can include "shadowing and lighting changes, road surface texture changes, and road markings consisting of circular reflectors, dashed lines, and solid lines". [ 71 ]
Various government agencies and private entities, including local news services, track and report on road conditions to the public so that drivers going through a particular area can be aware of hazards that may exist in that area. News agencies, in turn, rely on tips from area residents with respect to certain aspects of road conditions in their coverage area. [ 72 ]
Careful design and construction of a road can reduce any negative environmental impacts.
Water management systems can be used to reduce the effect of pollutants from roads. [ 73 ] [ 74 ] Rainwater and snowmelt running off of roads tends to pick up gasoline, motor oil , heavy metals , trash and other pollutants and result in water pollution . Road runoff is a major source of nickel , copper, zinc , cadmium , lead and polycyclic aromatic hydrocarbons (PAHs), which are created as combustion byproducts of gasoline and other fossil fuels . [ 75 ]
De-icing chemicals and sand can run off into roadsides, contaminate groundwater and pollute surface waters ; [ 76 ] and road salts can be toxic to sensitive plants and animals. [ 77 ] Sand applied to icy roads can be ground up by traffic into fine particulates and contribute to air pollution.
Roads are a chief source of noise pollution . In the early 1970s, it was recognized that design of roads can be conducted to influence and minimize noise generation. [ 78 ] Noise barriers can reduce noise pollution near built-up areas. Regulations can restrict the use of engine braking .
Motor vehicle emissions contribute to air pollution . Concentrations of air pollutants and adverse respiratory health effects are greater near the road than at some distance away from the road. [ 79 ] Road dust kicked up by vehicles may trigger allergic reactions. [ 80 ] In addition, on-road transportation greenhouse gas emissions are the largest single cause of climate change, scientists say. [ 81 ]
Traffic flows on the right or on the left side of the road depending on the country. [ 82 ] In countries where traffic flows on the right, traffic signs are mostly on the right side of the road, roundabouts and traffic circles go counter-clockwise/anti-clockwise, and pedestrians crossing a two-way road should watch out for traffic from the left first. [ 83 ] In countries where traffic flows on the left, the reverse is true.
About 33% of the world by population drive on the left, and 67% keep right. By road distances, about 28% drive on the left, and 72% on the right, [ 84 ] even though originally most traffic drove on the left worldwide. [ 85 ]
Transport economics is used to understand both the relationship between the transport system and the wider economy and the complex effects of the road network structure when there are multiple paths and competing modes for both personal and freight transport (road/rail/air/ferry), and where induced demand can result in increased or decreased transport levels when road provision is increased by building new roads or decreased (for example California State Route 480 ). Roads are generally built and maintained by the public sector using taxation (although implementation may be through private contractors), [ 86 ] [ 87 ] or occasionally using road tolls . [ 88 ]
Public-private partnerships are a way for communities to address the rising cost by injecting private funds into the infrastructure. There are four main ones: [ 89 ]
Society depends heavily on efficient roads. In the European Union (EU), 44% of all goods are moved by trucks over roads and 85% of all people are transported by cars, buses or coaches on roads. [ 90 ] Historically, the term road was also commonly used to refer to roadsteads , waterways that lent themselves to use by shipping.
According to the New York State Thruway Authority, [ 91 ] some sample per-mile costs to construct multi-lane roads in several US northeastern states were:
The United States has the largest network of roads of any country with 4,050,717 miles (6,518,997 km) as of 2009. [ 92 ] The Republic of India has the second-largest road system globally with 4,689,842 kilometres (2,914,133 miles) of road (2013). [ 93 ] The People's Republic of China is third with 3,583,715 kilometres (2,226,817 mi) of road (2007). The Federative Republic of Brazil has the fourth-largest road system in the world with 1,751,868 kilometres (1,088,560 mi) (2002). See List of countries by road network size . When looking only at expressways , the National Trunk Highway System (NTHS) in China had a total length of 45,000 kilometres (28,000 mi) at the end of 2006, and 60,300 km at the end of 2008, second only to the United States with 90,000 kilometres (56,000 mi) in 2005. However, as of 2017, China has 130,000 km of expressways. [ 94 ] [ 95 ]
Eurasia, Africa, North America, South America, and Australia each have an extensive road network that connects most cities.
The North and South American road networks are separated by the Darién Gap , the only interruption in the Pan-American Highway . Eurasia and Africa are connected by roads on the Sinai Peninsula . The European Peninsula is connected to the Scandinavian Peninsula by the Øresund Bridge , and both have many connections to the mainland of Eurasia, including the bridges over the Bosphorus . Antarctica has very few roads and no continent-bridging network, though there are a few ice roads between bases, such as the South Pole Traverse . Bahrain is the only island country to be connected to a continental network by road (the King Fahd Causeway to Saudi Arabia).
Even well-connected road networks are controlled by many different legal jurisdictions, and laws such as which side of the road to drive on vary accordingly.
Many populated domestic islands are connected to the mainland by bridges. A very long example is the 113 mi (182 km) Overseas Highway connecting many of the Florida Keys with the continental United States.
Even on mainlands, some settlements have no roads connecting them with the primary continental network, due to natural obstacles like mountains or wetlands, or high cost compared to the population served. Unpaved roads or lack of roads are more common in developing countries , and these can become impassable in wet conditions. As of 2014, only 43% of rural Africans have access to an all-season road. [ 96 ] Due to steepness, mud, snow, or fords, roads can sometimes be passable only to four-wheel drive vehicles, those with snow chains or snow tires , or those capable of deep wading or amphibious operation .
Most disconnected settlements have local road networks connecting ports, buildings, and other points of interest.
Where demand for travel by road vehicle to a disconnected island or mainland settlement is high, roll-on/roll-off ferries are commonly available if the journey is relatively short. For long-distance trips, passengers usually travel by air and rent a car upon arrival. If facilities are available, vehicles and cargo can also be shipped to many disconnected settlements by boat, or air transport at much greater expense. The island of Great Britain is connected to the European road network by Eurotunnel Shuttle – an example of a car shuttle train which is a service used in other parts of Europe to travel under mountains and over wetlands.
In polar areas, disconnected settlements are often more easily reached by snowmobile or dogsled in cold weather, since the cold can produce sea ice that blocks ports and bad weather can prevent flying. For example, resupply aircraft are only flown to Amundsen–Scott South Pole Station from October to February, and many residents of coastal Alaska have bulk cargo shipped in only during the warmer months. Permanent darkness during the winter can also make long-distance travel more dangerous in polar areas. Continental road networks do reach into these areas, such as the Dalton Highway to the North Slope of Alaska, the R21 highway to Murmansk in Russia, and many roads in Scandinavia (though due to fjords water transport is sometimes faster). Large areas of Alaska, Canada, Greenland, and Siberia are sparsely connected. For example, all 25 communities of Nunavut are disconnected from each other and the main North American road network. [ 97 ]
Road transport of people and cargo may also be obstructed by border controls and travel restrictions. For example, travel from other parts of Asia to South Korea would require passage through the hostile country of North Korea. Moving between most countries in Africa and Eurasia would require passing through Egypt and Israel, which is a politically sensitive area.
Some places are intentionally car-free , and roads (if present) might be used by bicycles or pedestrians.
Roads are under construction to many remote places, such as the villages of the Annapurna Circuit , and a road was completed in 2013 to Mêdog County . However, in some remote mountain areas, road (or transport) development can be devastating to communities, causing tourism to be cut off and settlements to be abandoned. [ 98 ] Additional intercontinental and transoceanic fixed links have been proposed, including a Bering Strait crossing that would connect Eurasia-Africa and North America, a Malacca Strait Bridge to the largest island of Indonesia from Asia, and a Strait of Gibraltar crossing to connect Europe and Africa directly. | https://en.wikipedia.org/wiki/Roadbed |
A roadheader , also called a boom-type roadheader , road header machine , road header or just header machine , is a piece of excavating equipment consisting of a boom-mounted cutting head, a loading device usually involving a conveyor, and a crawler travelling track to move the entire machine forward into the rock face. [ 1 ]
The cutting head can be a general purpose rotating drum mounted in line or perpendicular to the boom, or can be special function heads such as jackhammer-like spikes, compression fracture micro-wheel heads like those on larger tunnel boring machines , a slicer head like a gigantic chain saw for dicing up rock, or simple jaw-like buckets of traditional excavators. [ 2 ]
The first roadheader patent was applied for by Dr. Z. Ajtay in Hungary, in 1949. [ 1 ] It was invented as a remote-operated miner for the exploitation of small-seam, close-walled deposits, typically in wet conditions.
Cutting Heads:
Roadheaders were initially used in coal mines. The first use in a civil engineering project was the construction of the City Loop (then called the Melbourne Underground Rail Loop) in the 1970s, where the machines enabled around 80% of the excavation to be performed mechanically. [ 3 ]
They are now widely used in applications such as tunneling, both for mining and municipal government projects, building wine caves , and building cave homes such as those in Coober Pedy , Australia .
On February 21, 2014, Waller Street, just south of Laurier Avenue, collapsed into an 8 m-wide and 12 m-deep sinkhole where a roadheader was excavating the eastern entrance to Ottawa 's LRT O-Train tunnel. [ 4 ] A similar incident occurred in June 2016, when a sinkhole opened up in Rideau Street during further construction of the tunnel and filled with water up to a depth of three metres. The CBC reported that one of Rideau Transit Group 's 135-tonne roadheaders was in a part of the tunnel where the flooding was the deepest. Three roadheaders were used in the construction of the O-Train. [ 5 ] | https://en.wikipedia.org/wiki/Roadheader |
Roadkill is a wild animal that has been killed by collision with motor vehicles. Wildlife-vehicle collisions (WVC) have increasingly been the topic of academic research to understand the causes, and how they can be mitigated. [ 1 ] [ 2 ] [ 3 ]
Essentially non-existent before the advent of mechanized transport, roadkill is associated with increasing automobile speed in the early 20th century. In 1920, naturalist Joseph Grinnell wrote of his observations in the state of California that "this is a relatively new source of fatality; and if one were to estimate the entire mileage of such roads in the state, the mortality must mount into the hundreds and perhaps thousands every 24 hours." [ 4 ]
In Europe and North America, deer are the animal most likely to cause vehicle damage.
The development of roads affects wildlife by altering and isolating habitat and populations, deterring the movement of wildlife, and resulting in extensive wildlife mortality. [ 5 ] One writer states that "our insulated industrialized culture keeps us disconnected from life beyond our windshields." [ 6 ] Driving "mindlessly" without paying attention to the movements of others in the vehicle's path, driving at speeds that do not allow stopping, and distractions contribute to the death toll. [ 6 ] Moreover, a culture of indifference and hopelessness is created if people learn to ignore lifeless bodies on roads. [ 6 ]
A study in Ontario , Canada in 1996 found many reptiles killed on portions of the road where vehicle tires do not usually pass over, which led to the inference that some drivers intentionally run over reptiles. [ 7 ] : 138 To test this hypothesis, a 2007 study used reptile decoys resembling snakes and turtles and found that 2.7% of drivers intentionally hit them. [ 7 ] Several drivers were seen to speed up when aiming for the decoys. [ 7 ] : 142 Male drivers hit the reptile decoys more often than female drivers. [ 7 ] : 140–141 However, 3.4% of male drivers and 3% of female drivers stopped to rescue the reptile decoys. [ 7 ] : 140
On roadways where rumble strips are installed to provide a tactile vibration alerting drivers when drifting from their lane, the rumble strips may collect road salt in regions where it is used. The accumulated salt can attract both small and large wildlife in search of salt licks ; these animals are at great risk of becoming roadkill or causing accidents. [ 8 ] [ 9 ] [ 10 ]
Very large numbers of mammals, birds, reptiles, amphibians and invertebrates are killed on the world's roads every day. [ 11 ] A Humane Society volunteer survey conducted over three Memorial Day weekends in the 1960s estimated that one million vertebrate animals are killed by vehicular traffic daily in the United States. [ 12 ] [ 13 ] [ 14 ] A 2008 Federal Highway Administration report estimates that 1 to 2 million accidents occur each year between large animals and vehicles. Extrapolating globally based on total length of roads, roughly 5.5 million vertebrates are killed per day, or over 2 billion annually. [ 15 ]
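As an arithmetic check, the global estimate is self-consistent: 5.5 million deaths per day × 365 days ≈ 2.0 billion deaths per year.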
The estimated number of birds killed on roads in different European countries ranges from 350,000 to 27 million, depending on factors such as the geography of the country and bird migration paths. [ 16 ]
Mortality resulting from roadkill can be very significant for species with small populations. Roadkill is estimated to be responsible for 50% of deaths of Florida panthers , and is the largest cause of badger deaths in England. Roadkill is considered to significantly contribute to the population decline of many threatened species, including the wolf, koala and eastern quoll . [ 17 ] In Tasmania, Australia, the most common species affected by roadkill are brushtail possums and Tasmanian pademelons . [ 17 ] In Bolivia, an Andean cat, a critically endangered species, has been reported killed by a collision with a car. [ 18 ]
In 1993, 25 schools throughout New England , United States, participated in a roadkill study involving 1,923 animal deaths. By category, the fatalities were: 81% mammals, 15% birds, 3% reptiles and amphibians, 1% indiscernible. [ 19 ] Extrapolating these data nationwide, Merritt Clifton (editor of Animal People Newspaper ) estimated that the following animals are being killed by motor vehicles in the United States annually: 41 million squirrels, 26 million cats, 22 million rats, 19 million Virginia opossums , 15 million raccoons , 6 million dogs, and 350,000 deer. [ 20 ] This study may not have considered differences in observability between taxa (e.g. dead raccoons are easier to see than dead frogs), and has not been published in peer-reviewed scientific literature. Observability, amongst other factors, may be the cause for mammal species to dominate roadkill reports, whereas bird and amphibian mortality are likely underestimated. [ 21 ]
A year-long study in northern India in an agricultural landscape covering only 20 km of road identified 133 road kills of 33 species comprising amphibians, reptiles, birds and mammals. The study compared road-killed animals with all species seen along the road and estimated that traffic killed individuals of 30% of amphibian species, 25% of reptile species, 16% of birds, and 27% of mammals that were seen in the area. [ 22 ]
A 2007 study showed that insects, too, face a very high risk of being killed on roads. [ 23 ] The research showed distinct patterns in insect roadkill in relation to vehicle density.
The decrease in insects being killed by cars is known as the " windshield phenomenon ". In 2003–2004, the Royal Society for the Protection of Birds investigated anecdotal reports of declining insect populations in the UK by asking drivers to affix a postcard-sized PVC rectangle, called a "splatometer", to the front of their cars. [ 24 ] Almost 40,000 drivers took part, and the results found one squashed insect for every 5 miles (8.0 km) driven. This contrasts with 30 years ago when cars were covered more completely with insects, supporting the idea that insect numbers had waned. [ 25 ]
In 2011, Dutch biologist Arnold van Vliet coordinated a similar study of insect deaths on car license plates. He found two insects killed on the license-plate area for every 10 kilometres (6.2 mi) driven. This implies about 1.6 trillion insect deaths by cars per year in the Netherlands, and about 32.5 trillion deaths in the United States if the figures are extrapolated there. [ 26 ] The number grows to 228 trillion per year if extended globally. [ 15 ]
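These totals rest on a simple linear extrapolation (the exact adjustment factors are not given here): an observed kill rate per kilometre driven, scaled up from the license-plate sample area to the vehicle's full frontal area, is multiplied by the total distance driven in a region, so the per-country figures are best read as order-of-magnitude estimates.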
One rarely considered positive aspect of roadkill is the regular availability of carrion it provides for scavenger species such as vultures, crows, ravens, foxes, opossums and a wide variety of carnivorous insects. Areas with robust scavenger populations tend to see roadkilled animal corpses being quickly carried off, sometimes within minutes of being struck. This can skew data and cause a lower estimation of the number of roadkill animals per year. [ 27 ] In particularly roadkill-prone areas, scavenging birds rely on roadkill for much of their daily nutritional requirements, and can even be seen observing the roadway from telephone poles, overhead wires and trees, waiting for animals, usually squirrels, opossums and raccoons to be struck so they can swoop down and feed. However, such scavengers are at greater risk of becoming roadkill themselves, and are subject to evolutionary pressure to be alert to traffic hazards.
In contrast, areas where scavengers have been driven out (such as many urban areas) often see roadkill rotting in place indefinitely on the roadways and being further macerated by traffic. The remains must be manually removed by dedicated disposal personnel and disposed of via cremation; this greatly increases the public nuisance inherent to roadkill, unnecessarily complicates its disposal, and consumes additional public money, time and fuel that could be spent on other roadway maintenance projects. [ citation needed ]
The study of roadkill has proven highly amenable to the application of citizen science observation methods. Since 2009, statewide roadkill observation systems have been started in the US, enrolling hundreds of observers in reporting roadkill on a website. The observers, who are usually naturalists or professional scientists, provide identification, location, and other information about the observations. The data are then displayed on a website for easy visualization and made available for studies of proximate causes of roadkill, actual wildlife distributions, wildlife movement, and other studies. Roadkill observation system websites are available for the US states of California, [ 28 ] Maine, [ 29 ] and Idaho. [ 30 ] In each case, index roads are used to help quantify total impact of vehicle collisions on specific vertebrate taxa. Researchers that use data from citizen science platforms may benefit from a large pool of data, specially for iconic, well known conspicuous species. Care must be taken when analyzing data for species that are not easy to identify, as studies have shown that misidentification is not uncommon amongst these platforms. [ 21 ]
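As an illustration of the kind of structured record such observation systems collect (species identification, location, time, observer), here is a minimal hypothetical sketch in Python; the schema, field names and example values are invented for illustration and do not correspond to any particular platform:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RoadkillObservation:
    """One volunteered roadkill report (hypothetical schema, for illustration)."""
    species: str                 # observer's identification, e.g. "Procyon lotor"
    latitude: float              # location of the carcass (decimal degrees)
    longitude: float
    observed_at: datetime        # when the carcass was observed
    observer_id: str             # who reported it (naturalist, scientist, ...)
    notes: Optional[str] = None  # free-text details: road type, weather, etc.

# Example report (all values invented):
obs = RoadkillObservation(
    species="Didelphis virginiana",
    latitude=38.5816,
    longitude=-121.4944,
    observed_at=datetime(2024, 5, 1, 7, 30),
    observer_id="volunteer-042",
)
print(obs.species, obs.observed_at.date())
```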
In the United Kingdom, "The Road Lab" (formerly Project Splatter) was started by Cardiff University in 2012, with the aim of estimating the impact of roads and motoring on British wildlife. [ 31 ] Since then it has gathered data on its website, and on several social media platforms including Facebook [ 32 ] and Twitter. [ 33 ]
In India, the project "Provide Animals Safe Transit on Highways" (PATH) was initiated by the Environment Conservation Group [ 34 ] in 2015, to study the impact of roads on Indian wildlife. [ 35 ] A team of five wildlife conservationists led by R. Mohammed Saleem had undertaken a forty-four-day expedition, traveling more than 17,000 kilometers across 22 states to study and spread awareness on roadkill. [ 36 ] [ 37 ] [ 38 ] It is also gathering data on its website, and social media platforms. [ 39 ] More focused scientific studies on impacts of traffic on animals have been conducted across India especially in the Western Ghats of south India documenting a large number of species of insects, other arthropods, amphibians, reptiles, birds and mammals killed. [ 40 ] [ 41 ] Another study conducted on 420 km of roads located along cultivated fields in Punjab showed granivorous birds to be killed far more than their availability, likely attracted to spilled grain on the roads. [ 42 ]
In the Czech Republic, an online animal-vehicle crash reporting system Srazenazver.cz is gathering both professional (Police, road maintenance) and volunteered data on roadkill and wildlife-vehicle crashes. [ 43 ] The application allows users to input, edit and browse data. The data is visualized in the form of maps, graphs or tables and analyzed online (KDE+ hotspots identification, area statistics). [ 44 ]
In Australia, wombat roadkill data is collected by the citizen science project, WomSAT. [ 45 ] [ 46 ] [ 47 ]
The first wildlife roadkill identification guide produced by a state agency in North America was published by the British Columbia Ministry of Transportation (BCMoT) in Canada in 2008. [ 48 ] BCMoT's "Wildlife Roadkill Identification Guide" focused on the most common large carnivores and ungulates found in British Columbia. The guide was developed to assist BCMoT's maintenance contractors in identifying wildlife carcasses found on provincial highways as part of their responsibilities for BCMoT's Wildlife Accident Reporting System (WARS). [ 49 ]
Collisions with animals can have many negative consequences:
Regardless of the spatial scale at which the mitigation measure is applied, there are two main types of roadkill mitigation measures: changing driver behavior, and changing wildlife behavior. [ 50 ]
There are three potential ways to change driver behavior. Primary methods focus on changing driver attitude by increasing public awareness and helping people understand that reducing roadkill will benefit their community. The second potential way is to make people aware of specific hazardous areas by use of signage, rumble strips or lighting. The third potential way is to slow traffic physically or psychologically, using chicanes or speed bumps.
There are three categories of measures for altering wildlife behavior. The first discourages wildlife from loitering on roadsides by reducing food and water resources, or by making the road surfaces lighter in color, which may make wildlife feel more exposed on the roadway. The second discourages wildlife from crossing roads, at least when cars are present, using equipment such as ultrasonic whistles, reflectors, and fencing. The third provides safe crossings such as overpasses, underpasses and escape routes.
Although it is not illegal to help wild animals that are in danger of becoming roadkill, stopping on the highway is potentially dangerous and may result in injury or death to the person helping them and/or to an inattentive driver who collides with their stopped vehicle. [ 51 ]
In the US, an estimated 1.25 million insurance claims are filed annually due to collisions with deer, elk, or moose, amounting to 1 out of 169 collision damage claims. [ 52 ]
Collisions with large animals with antlers (such as deer) are particularly dangerous, but any large, long-legged animal (e.g. horses, larger cattle, camels) can pose a similar cabin incursion hazard. [ 53 ] Injury to humans due to driver failure to maintain control of a vehicle either while avoiding, or during and immediately after an animal impact, is also common. Dusk and dawn are times of highest collision risk. [ 54 ] [ 55 ]
The recommended reaction to a large animal (such as a moose) is to slow down in lane, if at all possible, and to avoid swerving suddenly, which could cause loss of control. [ 52 ] [ 54 ] If a collision cannot be avoided, it is best to swerve towards the rear end of the animal, as it is more likely to run forward. [ 56 ] Drivers who see a deer near or in the roadway should be aware that it is very likely that other members of a herd are nearby. [ 57 ]
Acoustic warning deer horns can be mounted on vehicles to warn deer of approaching automobiles, though their effectiveness is disputed. [ 58 ] Ultrasonic wind-driven whistles are often promoted as a cheap, simple way to reduce the chance of wildlife-vehicle collisions. In one study, the sound pressure level of the whistle was 3 dB above the sound pressure level of the test vehicle, but caused no observable difference in behavior of animals when the whistles were activated and not activated, casting doubt on their effectiveness. [ 59 ]
In Australia, kangaroos are the most common species hit and killed by vehicles, [ 60 ] causing significant damage and even fatalities. Another large species hit and killed by vehicles are wombats. [ 61 ] Sightings of wombat roadkill can be logged at WomSAT to help support the implementation of mitigation strategies to reduce wombat deaths. [ 45 ] [ 46 ]
Squirrels, rabbits, birds, or other small animals are often crushed by vehicles. Serious accidents may result from motorists swerving or stopping for squirrels in the road. [ 62 ] [ 63 ] [ 64 ] [ 65 ] Such evasive maneuvers are often unproductive, since small rodents and birds are much more agile and quicker to react than motorists in heavy vehicles. There is very little a driver can do to avoid an unpredictably darting squirrel or rabbit, or even to intentionally hit one. The suggested course of action is to continue driving in a predictable, safe manner, and let the small animal decide on the spur of the moment which way to run or fly; the majority of vehicular encounters end with no harm to either party. [ 53 ] [ 66 ] [ 67 ]
Although strikes can happen at any time of day, deer tend to move at dusk and dawn, and are particularly active during the October–December mating season as well as late March and early April in the Northern Hemisphere. [ 57 ] Driving at night presents its own challenges: nocturnal species are active, and visibility, particularly side visibility, is reduced. Penguins, for example, are common roadkill traffic victims in Wellington, New Zealand due to their color and the fact that they come ashore at dusk and leave again around dawn. [ 68 ]
Night time drivers should reduce speed and use high beam headlights when possible to give themselves maximum time to avoid a collision. [ 57 ] However, when headlights approach a nocturnal animal, it is hard for the creature to see the approaching car (nocturnal animals see better in low than in bright light). Furthermore, the glare of oncoming vehicle headlights can dazzle some species, such as rabbits; they will freeze in the road rather than flee. It may be better to flash the headlights on and off, rather than leaving them on continuously while approaching an animal. [ 52 ]
The simple tactics of reducing speed and scanning both sides of the road for foraging deer can improve driver safety at night, and drivers may see the retro-reflection of an animal's eyes before seeing the animal itself. [ 54 ] [ 55 ] [ 67 ]
Wildlife crossings allow animals to travel over or underneath roads. They are most widely used in Europe, but have also been installed in a few US locations and in parts of Western Canada. As new highways cause habitats to become increasingly fragmented, these crossings can play an important role in protecting endangered species.
In the US, sections of road known to have heavy deer cross-traffic will usually have warning signs depicting a bounding deer; similar signs exist for moose, elk, and other species. In the American West, roads may pass through large areas designated as " open range ", meaning no fences separate drivers from large animals such as cattle or bison. A driver may round a bend to find a small herd standing in the road. Open range areas are generally marked with signage and protected by cattle grids .
In an attempt to mitigate US$1.2 billion in animal-related vehicular damage, a few US states now have sophisticated systems to protect motorists from large animals. [ 69 ] One of these systems is called the Roadway Animal Detection System (RADS). [ 70 ] [ 71 ] A solar powered sensor can detect large animals such as deer, bear, elk, and moose near the roadway, and thereafter flash a light to alert oncoming drivers. The sensor's detection distance ranges from 650 feet (200 m) to unlimited, depending on the terrain.
The removal of trees associated with road construction produces a gap in the forest canopy that forces arboreal (tree dwelling) species to come to the ground to travel across the gap. Canopy crossings have been constructed for red squirrels in Great Britain, colobus monkeys in Kenya, and ringtail possums in Far North Queensland, Australia. [ 73 ] The crossings have two purposes: to ensure that roads do not restrict movement of animals and also to reduce roadkill. Installation of the canopy crossings may be relatively quick and cheap.
Banks, cuttings and fences that trap animals on the road are associated with roadkill. [ 74 ] In order to increase the likelihood of escape from a main roadway, escape routes have been constructed on the access roads. Escape routes may be considered as one of the most useful measures, especially when new roads are being built or roads are being upgraded, widened or sealed. Research may be undertaken into the efficacy of escape routes by observation of animals’ response to vehicles in places with natural escape routes and barriers, rather than trialing purpose-built escape routes. [ citation needed ]
In the New Forest , in southern England, there is a proposal to fence roads to protect the New Forest pony . [ citation needed ] However, this proposal is controversial. [ 75 ]
Removing animal carcasses from roadways is considered essential to public safety. [ 76 ] The removal takes away the potential distraction and hazard of the carcass to other motorists. [ 77 ] Quick removal can also prevent deaths of other animals that may wish to feed on the carcass, as well as animals that may go into the road to try to move the body of an animal in their social group. [ 6 ] Sometimes rather than removal, the carcass is moved to a nearby public right-of-way where it can be consumed by scavengers, but not placed in a ditch or where waterways might be polluted. [ 76 ] [ 77 ] Covering the carcass with wood chips can aid in decomposition while minimizing odor. [ 76 ]
Local governments and other levels of government have services that pick up dead animals from roadways and will respond when advised about a dead animal.
New York City has an online request form which may be completed by residents of the city. [ 78 ] New York State has a process to report dead wildlife to the Department of Environmental Conservation; they are especially interested in marked/tagged wildlife and endangered or threatened species. [ 79 ]
In Toronto , Canada, the city accepts requests to remove a dead animal by telephone. [ 80 ] If an animal is found along a major highway, depending on who has jurisdiction for maintaining the highway, the request may be directed to the city, the provincial Ministry of Transportation , or a highway operations centre. [ 81 ] In Ontario, citizens may keep possession of roadkill in many circumstances, but may have to register their find. [ 82 ]
If fresh enough, roadkill can be eaten, and there are several recipe books dedicated to roadkill. The practice of eating animals killed on the road is usually derided, and most people consider it not to be safe, [ 79 ] sanitary, or wholesome. For example, when the Tennessee legislature attempted to legalize the use of accidentally killed animals, it became the subject of stereotyping and derisive humor. [ 83 ] Nevertheless, in some cultures there is a tradition of using fresh roadkill as a nutritious and economical source of meat, similar to that obtained by hunting.
Songwriter and performer Loudon Wainwright III released his deadpan humorous song, " Dead Skunk (in the Middle of the Road) " in 1972, and it peaked at number 16 on the Billboard Hot 100 . [ 84 ]
The American band Phish frequently [ 85 ] plays the song "Possum", originally from the album The Man Who Stepped into Yesterday at its concerts. The song describes an encounter with a roadkilled opossum and includes the lyric "Your end is the road".
The Horse Flies , an American alt rock/folk band from Ithaca, NY, released an upbeat homage to vehicularly mediated food security titled "Roadkil" [ 86 ] on their 1991 album "Gravity Dance", exhorting the listener to "Eat what you kill".
Roadkill is sometimes used as an art form. Several artists use traditional taxidermy preparation in their works, whilst others explore different art forms. International artist Claudia Terstappen photographs roadkill [ 87 ] and produces enormous prints that show the animals floating eerily in a void. [ 88 ] American artist Gary Michael Keyes photographs and transforms them into "RoadKill Totems" in his "Resurrection Gallery". [ 89 ] American artist Stephen Paternite has been exhibiting roadkill pieces since the 1970s. [ 90 ]
Canadian writer Timothy Findley wrote about the experience of seeing killed animals on highways during travels: "The dead by the road, or on it, testify to the presence of man. Their little gestures of pain—paws, wings and tails—are the saddest, the loneliest, most forlorn postures of the dead I can imagine. When we have stopped killing animals as though they were so much refuse, we will stop killing one another. But the highways show our indifference to death, so long as it is someone else's. It is an attitude of the human mind I do not grasp." [ 91 ]
In a 2013 essay, American anthropologist Jane Desmond examined at length the failure of American culture and public discourse to adequately confront the ubiquity of roadkill. She concluded: "The simplest answer is that these animal lives have little value for most of the populations in the United States, as these animals are unowned, lacking in monetary or emotional value, not pets or livestock, and without the charismatic following that megafauna like elephants and lions in zoos receive. This calculus of devaluation clears the way for such carnage to be ignored in public discourse and legal venues, to be out of mind while insistently in sight." [ 92 ]
There are driving video games where players can run over animals, such as the arcade version of Cruis'n USA , as well as video games where players control an animal that crosses roads to avoid becoming roadkill, such as Frogger and Crossy Road . [ citation needed ] | https://en.wikipedia.org/wiki/Roadkill |
The roadometer was a 19th-century device like an odometer for measuring mileage, mounted on a wagon wheel. One such device was invented in 1847 by William Clayton , Orson Pratt , and Appleton Harmon, pioneers of the Church of Jesus Christ of Latter-day Saints .
Brass odometers were used by many pioneers making the westward trek in the 1840s. [ 1 ] : 92–93 However, the design of Clayton, Pratt, and Harmon's odometer was new. [ 2 ] : 317 In 1847, William Clayton accompanied the first expedition to the Utah Territory as a writer and record-keeper. He initially counted revolutions of a wagon wheel to calculate the distance they had travelled. [ 1 ] : 87 Tiring of counting wheel revolutions, he wanted a device that could measure the distance automatically. Clayton asked Orson Pratt if it would be possible to make such a device, and Pratt created the design. [ 1 ] : 86–87, 89 Harmon carved the gears out of wood and may have further refined the design. [ 1 ] : 89–90 They started using the roadometer around May 12. [ 2 ] : 315 Three hundred and sixty revolutions of the wagon wheel equaled one mile. A piece on the hub of the wheel turned a shaft one revolution for every six revolutions of the wagon wheel. One revolution of that shaft then moved a 60-tooth gear by one tooth. [ 2 ] : 321 "The second gear wheel had forty teeth [and] overlaid the first gear and was turned by four teeth on the axle of that gear. One rotation of the second gear therefore represented ten miles each tooth being one quarter of a mile." [ 1 ] Unfortunately, the small four-toothed gear swelled in the rain and was not functional for much of the journey. [ 3 ] Clayton used their invention to provide an estimate of the distance their party traveled each day between Omaha, Nebraska, and Salt Lake City , Utah. William Clayton returned to Winter Quarters from the Salt Lake Valley with a new odometer, built by William A. King, that could measure a thousand miles for the return trip. [ 1 ] : 94 Clayton published the distances and other helpful travel information in his popular The Latter-day Saints' Emigrants' Guide . [ 2 ]
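The gear arithmetic described above can be checked with a short calculation. The following Python sketch is purely illustrative (the function names are invented), but the constants come directly from the account above:

```python
# Gear ratios from the account above: 6 wheel revolutions per shaft turn,
# one tooth of the 60-tooth gear per shaft turn, so one full rotation of
# that gear = 6 * 60 = 360 wheel revolutions = 1 mile.
REVS_PER_SHAFT_TURN = 6
FIRST_GEAR_TEETH = 60
SECOND_GEAR_MILES = 10  # 40 teeth at 1/4 mile each

def miles_travelled(wheel_revolutions: float) -> float:
    """Miles indicated by the first (60-tooth) gear: 1 rotation = 1 mile."""
    shaft_turns = wheel_revolutions / REVS_PER_SHAFT_TURN
    return shaft_turns / FIRST_GEAR_TEETH

def second_gear_reading(wheel_revolutions: float) -> float:
    """The 40-tooth gear reads in quarter miles and wraps every 10 miles."""
    return miles_travelled(wheel_revolutions) % SECOND_GEAR_MILES

print(miles_travelled(3600))      # 10.0 miles
print(second_gear_reading(3690))  # 0.25 miles past the 10-mile wrap
```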
A machine commonly displayed as Clayton's odometer is actually one built in 1876 by Thomas G. Lowe. Lowe created his odometer to calculate the distance between villages in northern Arizona. He gave his odometer to the Deseret Museum in Salt Lake City, and it was on display with accurate information from 1876 until the museum closed for a period in 1903. When the museum reopened in 1911, it displayed his odometer with the incorrect information that it had been made by Appleton Harmon and William Clayton. Lowe's odometer was visibly different from Clayton's; it had four toothed gears and a ratchet-like drive mechanism. Lowe attempted to correct the misinformation when he visited in 1921, but the information was not corrected until 1983. [ 1 ] : 96–98, 103–104 Steven Pratt created a replica of Clayton's odometer, which was on display at the Museum of Church History and Art. [ 1 ] : 99
A 1921 news article in the Deseret News claimed that Clayton's original odometer was "the first of its kind". The paper published a correction from an engineer, who clarified that odometers existed as early as 12 B.C. in Rome. The incorrect idea that Clayton's odometer was the first persisted. [ 1 ] : 100–101
Brigham Young University engineering professor Larry Howell built a replica of the roadometer in 2006. He stated that his replica was more accurate than Steven Pratt's. He published information about the rebuild at the 2006 symposium for the American Society of Mechanical Engineers. [ 4 ] According to Howell's calculations, the 60-tooth gear's diameter was 15 inches, the 40-tooth gear's diameter was 10 inches, and the 4-tooth gear's diameter was 1 inch. [ 2 ] : 311 | https://en.wikipedia.org/wiki/Roadometer_(odometer) |
Roadworthiness [ 1 ] or streetworthiness is the property of a car , bus , truck or any other kind of automobile to be in a suitable operating condition and to meet acceptable standards for safe driving and the transport of people, baggage or cargo on roads or streets , making it street-legal .
In Europe, roadworthy inspection is regulated by EU directives, notably Directive 2014/45/EU on periodic roadworthiness tests.
A Certificate of Roadworthiness (also known as a ‘roadworthy’ or ‘RWC’) attests that a vehicle is safe enough to be used on public roads. A roadworthy is required in the selling of a vehicle in some countries. It may also be required when the vehicle is re-registered, and to clear some problematic notices. [ 6 ]
"roadworthiness certificate" means a road-worthiness test report issued by the competent authority or a testing centre containing the result of the road-worthiness test
Roadworthy inspection is designed to check the vehicle to make sure that its important auto parts are in a condition good enough (though not necessarily top condition) for safe road use. [ 6 ]
Directive 2014/45/EU regulates the periodic testing of various kinds of vehicles.
18 of 27 EU member states have required motorcycle owners to have their vehicles checked for road-worthiness. Directive 2014/45/EU defines obligations and responsibilities, minimum requirements concerning road-worthiness tests, administrative provisions, and cooperation and exchange of information.
Minimum requirements concerning road-worthiness tests encompass date and frequency of testing, contents and methods of testing, assessment of deficiencies, road-worthiness certificate, follow-up of deficiencies and proof of test. [ 2 ]
The test shall cover at least the following areas:
(0) Identification of the vehicle;
(1) Braking equipment;
(2) Steering;
(3) Visibility;
(4) Lighting equipment and parts of the electrical system;
(5) Axles, wheels, tires, suspension;
(6) Chassis and chassis attachments;
(7) Other equipment;
(8) Nuisance;
(9) Supplementary tests for passenger-carrying vehicles of categories M2 and M3
Roald Hoffmann (born Roald Safran ; July 18, 1937) [ 2 ] is a Polish-American theoretical chemist who won the 1981 Nobel Prize in Chemistry . He has also published plays and poetry. He is the Frank H. T. Rhodes Professor of Humane Letters Emeritus at Cornell University . [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Hoffmann was born in Złoczów , Poland (now Zolochiv, Ukraine ), to a Polish-Jewish family, and was named in honor of the Norwegian explorer Roald Amundsen . His parents were Clara (Rosen), a teacher, and Hillel Safran, a civil engineer. [ 7 ] After Germany invaded Poland and occupied the town, his family was placed in a labor camp where his father, who was familiar with much of the local infrastructure, was a valued prisoner. As the situation grew more dangerous, with prisoners being transferred to extermination camps, the family bribed guards to allow an escape. They arranged with a Ukrainian neighbor named Mykola Dyuk for Hoffmann, his mother, two uncles and an aunt to hide in the attic and a storeroom of the local schoolhouse, where they remained for eighteen months, from January 1943 to June 1944, while Hoffmann was aged 5 to 7. [ 8 ] [ 9 ]
His father remained at the labor camp, but was able to visit occasionally, until he was tortured and killed by the Germans for his involvement in a plot to arm the camp prisoners. When she received the news, his mother attempted to contain her sorrow by writing down her feelings in a notebook her husband had been using to take notes on a relativity textbook he had been reading. While in hiding, his mother kept Hoffmann entertained by teaching him to read and having him memorize geography from textbooks stored in the attic, then quizzing him on it. He referred to the experience as having been enveloped in a cocoon of love. [ 10 ] [ 9 ] In 1944 they moved to Kraków, where his mother remarried. [ 4 ] They adopted her new husband's surname Hoffmann. [ 4 ]
Most of the rest of the family was killed in the Holocaust , though one grandmother and a few others survived. [ 11 ] They migrated to the United States on the troop carrier Ernie Pyle in 1949. [ 12 ]
Hoffmann visited Zolochiv with his adult son (by then a parent of a five-year-old) in 2006 and found that the attic where he had hidden was still intact, but the storeroom had been incorporated, ironically enough, into a chemistry classroom. In 2009, a monument to Holocaust victims was built in Zolochiv on Hoffmann's initiative. [ 13 ]
Hoffmann married Eva Börjesson in 1960. They have two children, Hillel Jan and Ingrid Helena. [ 14 ]
He describes himself as "an atheist who is moved by religion." [ 15 ]
Hoffmann graduated in 1955 from New York City's Stuyvesant High School , [ 16 ] [ 17 ] where he won a Westinghouse science scholarship . He received his Bachelor of Arts degree at Columbia University (Columbia College) in 1958. He earned his Master of Arts degree in 1960 from Harvard University . He earned his Doctor of Philosophy degree from Harvard University while working [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] under the joint supervision of Martin Gouterman and subsequent 1976 Nobel Prize in Chemistry winner William N. Lipscomb, Jr. Hoffmann worked on the molecular orbital theory of polyhedral molecules. [ 16 ] Under Lipscomb's direction, the extended Hückel method was developed by Lawrence Lohr and by Roald Hoffmann. [ 19 ] [ 23 ] This method was later extended by Hoffmann. [ 24 ] In 1965, he went to Cornell University , where he has remained and is now a professor emeritus.
Hoffmann's research and interests have been in the electronic structure of stable and unstable molecules, and in the study of transition states in reactions. [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 24 ] [ 23 ] He has investigated the structure and reactivity of both organic and inorganic molecules, and examined problems in organometallic and solid-state chemistry. [ 12 ] Hoffmann has developed semiempirical and nonempirical computational tools and methods such as the extended Hückel method, which he proposed in 1963 for determining molecular orbitals. [ 14 ]
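The extended Hückel method reduces electronic structure to a generalized eigenvalue problem built from orbital overlaps. Below is a minimal sketch using the standard Wolfsberg–Helmholz approximation for the off-diagonal elements, H_ij = K·S_ij·(H_ii + H_jj)/2 with K ≈ 1.75; the two-orbital system and its overlap value are invented for illustration, not taken from Hoffmann's work:

```python
import numpy as np
from scipy.linalg import eigh

K = 1.75  # Wolfsberg-Helmholz constant used in the extended Hückel method

def extended_huckel(h_diag, S):
    """Build the extended Hückel Hamiltonian and solve H C = S C E."""
    n = len(h_diag)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                H[i, i] = h_diag[i]  # valence-state ionization energy, eV
            else:
                H[i, j] = K * S[i, j] * (h_diag[i] + h_diag[j]) / 2.0
    energies, coeffs = eigh(H, S)  # generalized eigenvalue problem
    return energies, coeffs

# Toy two-orbital system (H2-like): 1s ionization energy -13.6 eV and an
# assumed overlap of 0.4 -- illustrative numbers, not a real calculation.
S = np.array([[1.0, 0.4], [0.4, 1.0]])
energies, _ = extended_huckel([-13.6, -13.6], S)
print(energies)  # bonding orbital below -13.6 eV, antibonding above
```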
With Robert Burns Woodward he developed the Woodward–Hoffmann rules for elucidating reaction mechanisms and their stereochemistry . They realized that chemical transformations could be approximately predicted from subtle symmetries and asymmetries in the electron orbitals of complex molecules. [ 25 ] Their rules predict differing outcomes, such as the types of products that will be formed when two compounds are activated by heat compared with those produced under activation by light. [ 26 ] For this work Hoffmann received the 1981 Nobel Prize in chemistry, sharing it with Japanese chemist Kenichi Fukui , [ 27 ] who had independently resolved similar issues. (Woodward was not included in the prize, which is given only to living persons, [ 28 ] although he had won the 1965 prize for other work.) In his Nobel Lecture, Hoffmann introduced the isolobal analogy for predicting the bonding properties of organometallic compounds . [ 29 ]
Some of Hoffmann's most recent work, with Neil Ashcroft and Vanessa Labet, examines bonding in matter under extremely high pressure. [ 12 ]
What gives me the greatest joy in this work? That as we tease apart what goes on in hydrogen under pressures such as those that one finds at the center of the earth, two explanations subtly contend with each other ... [physical and chemical] ... Hydrogen under extreme pressure is doing just what an inorganic molecule at 1 atmosphere does! [ 12 ]
In 1988 Hoffmann became the host of a 26-program PBS education series by Annenberg/CPB, The World of Chemistry , opposite series demonstrator Don Showalter . While Hoffmann introduced concepts and ideas, Showalter provided demonstrations and other visual representations to help students and viewers better understand the information.
Since the spring of 2001, Hoffmann has been the host of the monthly series Entertaining Science at New York City's Cornelia Street Cafe , [ 30 ] which explores the juncture between the arts and science.
He has published books on the connections between art and science: Roald Hoffmann on the Philosophy, Art, and Science of Chemistry and Beyond the Finite: The Sublime in Art and Science . [ 31 ]
Hoffmann is also a writer of poetry . [ 32 ] His collections include The Metamict State (1987, ISBN 0-8130-0869-7 ), [ 33 ] Gaps and Verges (1990, ISBN 0-8130-0943-X ), [ 25 ] and Chemistry Imagined (1993, ISBN 978-1-56098-539-6 , co-produced with artist Vivian Torrence). [ 25 ] [ 34 ]
He co-authored with Carl Djerassi the play Oxygen , about the discovery of oxygen and the experience of being a scientist. Hoffmann's play "Should've" (2006), about ethics in science and art, has been produced in workshops, as has a play based on his experiences in the Holocaust, "We Have Something That Belongs to You" (2009), later retitled "Something That Belongs to You". [ 31 ] [ 35 ]
In 1981, Hoffmann received the Nobel Prize in Chemistry , which he shared with Kenichi Fukui "for their theories, developed independently, concerning the course of chemical reactions". [ 28 ] [ 36 ]
Hoffmann has won many other awards, [ 37 ] and is the recipient of more than 25 honorary degrees. [ 38 ]
Hoffmann is a member of the International Academy of Quantum Molecular Science [ 63 ] and the Board of Sponsors of The Bulletin of the Atomic Scientists . [ 64 ]
In August 2007, the American Chemical Society held a symposium at its biannual national meeting to honor Hoffmann's 70th birthday. [ 65 ]
In 2008, the Göttingen Academy of Sciences and Humanities awarded him its Lichtenberg Medal .
In August 2017, another symposium was held at the 254th American Chemical Society National Meeting in Washington DC, to honor Hoffmann's 80th birthday. [ 66 ]
The Hoffmann Institute of Advanced Materials in Shenzhen, named after him, was founded in his honor in February 2018 [ 67 ] and formally opened in his presence in May 2019. [ 68 ]
In 2023, Roald Hoffmann was named by Carnegie Corporation of New York as an honoree of the Great Immigrants Awards . [ 69 ] | https://en.wikipedia.org/wiki/Roald_Hoffmann |
Roam is a California -based productivity and note-taking application developed by Roam Research Inc. The system is built on a directed graph, which frees it from the constraints of the classic filesystem tree. [ 1 ] It is viewed as a competitor to Notion . [ 2 ] [ 3 ]
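A directed graph of notes can be sketched in a few lines. The NoteGraph class below is a hypothetical illustration of the idea — pages as nodes, references as directed edges, with backlinks tracked automatically so a note can appear in many contexts instead of living in one folder — and is not Roam's actual implementation:

```python
from collections import defaultdict

class NoteGraph:
    """Minimal sketch of a graph-based note store: pages are nodes and
    wiki-style references are directed edges between them."""

    def __init__(self):
        self.links = defaultdict(set)      # page -> pages it links to
        self.backlinks = defaultdict(set)  # page -> pages linking to it

    def add_link(self, source: str, target: str) -> None:
        self.links[source].add(target)
        self.backlinks[target].add(source)

graph = NoteGraph()
graph.add_link("Daily note 2024-01-01", "Project X")
graph.add_link("Meeting notes", "Project X")
print(graph.backlinks["Project X"])  # both referencing pages
```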
A roaming SIM is a mobile phone SIM card that operates on more than one network within its home country. Roaming SIMs currently have two main applications: least-cost call routing for roaming mobile calls, and machine-to-machine (M2M) communication.
Using a normal network locked SIM , travelers can use their own roaming enabled mobile phone in any country that has a roaming agreement with their home network, or for global networks like Vodafone , with another Vodafone OpCo. This manifests itself to most users when they receive a text message welcoming the traveler to a local network. Once they return home, their SIM will only work on the network with which they have a contract.
A roaming SIM however, also known as a global roaming SIM, will work with whichever network it can detect, at home or abroad.
The most common use of roaming SIM cards is in normal voice applications such as mobile phone calls. The common application of roaming SIMs for voice is to route calls automatically to, and make them on, the least-cost network. This typically means that incoming calls are free, no matter which network a mobile user is on. It also means that a caller enjoys the lowest cost when making a call, significantly reducing call costs, especially compared to normal network charges for international roaming.
Global roaming SIMs are very often combined with callback technology, whereby the user dials a number in the normal way, but the call is intercepted by an application on the SIM card and turned from an outbound call into an inbound call which the user answers. This ensures that the call travels exclusively through the least-cost route, and takes advantage of the fact that inbound call charges are typically lower than outbound ones.
Some providers achieve this automatic call interception and callback by encoding a program onto the SIM card.
Other providers use Multi-IMSI ( International Mobile Subscriber Identity ) technology to lower the cost of roaming. In this case, there is a program on the SIM card that selects the lowest cost IMSI (or 'profile') to use in a specific country. [ 1 ]
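The profile-selection logic can be illustrated with a toy example. Everything below — the tariff table, profile names and function name — is invented; real multi-IMSI applets run on the SIM card itself and use operator-specific tariff data:

```python
# Hypothetical illustration of multi-IMSI profile selection.
TARIFFS = {  # (imsi_profile, visited country) -> cost per minute
    ("profile_eu", "DE"): 0.05,
    ("profile_us", "DE"): 0.30,
    ("profile_eu", "US"): 0.40,
    ("profile_us", "US"): 0.08,
}

def select_profile(country: str, profiles=("profile_eu", "profile_us")) -> str:
    """Pick the IMSI profile with the lowest tariff in the visited country."""
    return min(profiles, key=lambda p: TARIFFS.get((p, country), float("inf")))

print(select_profile("US"))  # profile_us
```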
Data services are increasingly being added to roaming SIM cards to reduce the cost of roaming data charges. Mobile users rely more and more on data services, and it can be very difficult to predict the cost of using data because it is invoiced by volume.
This technology is also used in various machine to machine (M2M) applications where devices communicate directly, such as vehicle tracking systems, smart meters, and industrial monitoring. By seamlessly switching between multiple networks, it ensures more comprehensive coverage, even in remote areas, while minimizing costs through least-cost routing, which selects the most economical network available. [ 2 ]
For some applications (particularly where regular travel between two countries is the main purpose) a Dual SIM can be considered as an alternative. [ 3 ] They have the advantage that it is possible to buy a local SIM card and use that next to the primary SIM card.
Voice over IP apps ( softphones ) may be installed on smartphones to inexpensively call international numbers. As these use Wi-Fi where available, costs may be substantially lower. Voice quality using VoIP for international calls may vary. | https://en.wikipedia.org/wiki/Roaming_SIM |
A railgrinder (or rail grinder) is a maintenance of way vehicle or train used to restore the profile and remove irregularities from worn tracks, extending their life and improving the ride of trains using the track. Rail grinding removes the deformation and corrosion that develop through use and friction, and rail grinders were developed to increase the lifespan of tracks affected by rail corrugation. [ 1 ] Railway tracks that experience continual use are more likely to experience corrugation and overall wear. Rail grinders are used to grind the tracks when rail corrugation is present, or before corrugation begins to form. Major freight train tracks schedule rail grinding based on intervals of tonnage carried, rather than time. [ 2 ] Transit systems and subways in major cities use scheduled rail grinding to combat the corrugation common to heavily used tracks. Rail-grinding equipment may be mounted on a single self-propelled vehicle or on a dedicated rail-grinding train which, when used on an extensive network, may include crew quarters. The grinding wheels, of which there may be more than 100, are set at controlled angles to restore the track to its correct profile.
The machines have been in use in North America and Europe since the early 20th century. They are made by specialist rail maintenance companies who may also operate them under contract.
The early 2000s saw several advancements in rail maintenance technology, most notably the introduction of track reprofiling by rail milling trains for which advantages in accuracy of the profile and quality of the processed surface are claimed. A second technology that is gaining widespread acceptance in Europe, Germany in particular, is high-speed grinding . While it cannot reprofile rails like milling or other grinding trains, its working speed of approximately 80 km/h allows defect removal and prevention to be achieved with little or no impact on other scheduled traffic.
The ERICO Company manufactures hand-held rail grinders and drills for the railway industry as maintenance of way tools. ERICO uses Honda four-stroke engines to power their railway drill and rail grinder. Rail grinders are used for rail preparation prior to the attachment of bonds, and serve as a multipurpose tool capable of rail preparation, maintenance and repair. [ 3 ]
The grinding quality index (GQI) is a software-based template used to measure the profile of a rail. This allows the desired rail profile to be compared to the actual rail profile. GQI software makes use of laser-based hardware mounted to the front and rear of the rail grinder. The use of laser-based hardware on maintenance of way vehicles such as rail grinders allows workers and contractors to take precise measurements of the rail profile before and after grinding. The GQI is rated from 0 (low priority) to 100 (high priority). Grinding Quality Software is able to record and document measurements independently and provide a GQI rating for each rail on the track for before and after each pass on the grinder. The advantage of using GQI software is the ability to produce post-grinding reports for later usage by planners to help further prioritize and monitor grinding profiles in the future. GQI reports also provide analysis on the consistency of profiling to determine if grinding operations are consistently improving or deteriorating the rail profile. The usage of GQI software also provides the ability to produce accurate assessments of rail grinder effectiveness in real-time which allows for work to be prioritized more efficiently and be executed in a timely manner. [ 4 ]
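The exact scoring formula used by commercial grinding-quality software is vendor-specific and not described above. Purely as an illustration, the sketch below assumes a simple definition — the percentage of profile sample points deviating from the target profile by more than a tolerance — so that a higher score means a higher grinding priority, matching the 0–100 scale described:

```python
import numpy as np

def grinding_quality_index(measured, target, tolerance=0.3):
    """Toy GQI: percentage of rail-profile sample points whose deviation
    from the target profile exceeds the tolerance (in mm). The real,
    vendor-specific formula is not published; this is an assumption."""
    deviation = np.abs(np.asarray(measured) - np.asarray(target))
    return 100.0 * np.mean(deviation > tolerance)

# 10 of 50 sample points are 0.5 mm off the target profile:
target = np.zeros(50)
measured = np.concatenate([np.full(10, 0.5), np.zeros(40)])
print(grinding_quality_index(measured, target))  # 20.0 -> low-moderate priority
```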
In the railway industry, there are risks associated with the prolonged use of maintenance of way vehicles during track maintenance and construction. A common risk is prolonged exposure to excessive whole-body vibration and shock along the vertical and horizontal axes of the lumbar spine and vertebral endplate, which can lead to spinal injury and/or long-term damage to the vertebral bone structure . The American Conference of Governmental Industrial Hygienists (ACGIH) has proposed thresholds for whole-body vibration, with certain guidelines also being based on ISO-2631 standards, but no exposure thresholds specific to maintenance of way vehicles have been widely published or enforced. The ACGIH threshold limit value (TLV) limits whole-body vibration exposure to no more than 8 hours. In the European Union, a risk assessment model (the VibRisk model) for structural failure of the lumbar spine in the lower back was proposed as a result of vibration risk research. The VibRisk model provides more specific risk assessments of vertebral endplate failure at individual lumbar levels, taking driver posture into account. When compared, risk assessments using the VibRisk model rate a higher risk of vertebral endplate failure at different lumbar levels than ISO-2631 Part 5 standards suggest. The main contributing factor that VibRisk incorporates, and that the ISO-2631 Part 5 standards lack, is the recognition of operator posture as an additional stress factor during exposure to vibration and multiple shocks. [ 5 ]
Rail corrugation or roaring rails is a type of track wear that develops over time from contact between the track and train wheel sets. Once this process has started, it grows exponentially worse as time progresses. The wear that develops from wheel–rail contact takes the form of troughs and crests left behind over time, which may or may not develop into rail corrugation, depending on the circumstances. Rails that are heavily used and put under continual, constant wear will develop rail corrugation. Rail corrugation is characterized by its wavelength. [ 1 ] Typically, heavily corrugated rails experience a concave deformation on the top of the railroad track at 20 mm to 200 mm intervals. [ 2 ] Significant rail corrugation can decrease the service life of tracks and make the replacement of the affected track necessary. Rail corrugation is caused by the friction between the rail and the train wheels tangentially, vertically, and axially. [ 2 ] Wear corrugation is a result of friction on the lower rail, which comes in contact with the train wheel. Excessive corrugation can be identified by the wavelength found on the higher, or outer, rail. [ 2 ] Rail corrugation may be limited or lessened with the use of heat-treated or alloyed rails, as opposed to traditional carbon composite rails. [ 2 ] The estimated tendency for wear is calculated by taking into account fluctuations in track and wheel set contact, which cause the amount of wear to vary. The dynamic properties of different lines of track can lead to varied degrees of rail corrugation through the use of high-speed wheel sets. In a study of high-speed railroad tracks, four types of track were studied for their tendency to develop corrugation (RHEDA 2000, AFTRAV, STEDEF, and high-performance ballasted track ); of the four, the ballasted track was the least prone to rail corrugation, with the AFTRAV track being the second most resistant. [ 6 ]
It is generally accepted that a few distinct causes lie behind different wavelengths of railroad corrugation. [ 7 ] [ 8 ] One study indicates that the specific short-wave railroad deformity is mainly caused by pinned-pinned resonance, in which the rail vibrates as a fixed beam, as if pinned between periodically placed sleepers . The dynamic train–track interaction that causes fixed-frequency vibrations at high speeds, commonly observed in light-load metro operations, together with the anti-resonance caused by the pinning of the rails on sleepers, causes deformation and the "roaring" corrugation of the rails.
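The pinned-pinned mechanism can be made concrete with a rough estimate: a rail of bending stiffness EI and mass per unit length m, treated as an Euler–Bernoulli beam pinned at sleepers spaced L apart, has a first resonance near f = (π/2L²)·√(EI/m), and a fixed frequency translates into a corrugation wavelength λ = v/f at train speed v. The figures below are typical assumed values for a heavy rail and 0.6 m sleeper spacing, not taken from the cited studies; shear deformation in a real rail lowers the frequency somewhat:

```python
import math

E = 210e9      # Young's modulus of rail steel, Pa
I = 3.055e-5   # second moment of area, vertical bending, m^4
m = 60.2       # rail mass per unit length, kg/m
L = 0.6        # sleeper spacing, m

# First pinned-pinned resonance of the rail spanning one sleeper bay:
f_pinned = (math.pi / (2 * L**2)) * math.sqrt(E * I / m)
print(f"{f_pinned:.0f} Hz")  # on the order of 1.4 kHz

# A fixed excitation frequency leaves a speed-dependent wavelength:
v = 100 / 3.6  # train speed of 100 km/h in m/s
print(f"{1000 * v / f_pinned:.0f} mm")  # ~20 mm, a short-pitch wavelength
```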
Rail corrugation may be prevented by selecting rails with material compositions that are more resistant to corrugation. Heat-treated alloy steel rails are the most resistant, as opposed to Bessemer steels, owing to their greater relative hardness. Rails with a Brinell hardness of 320 to 360 are best for corrugation resistance. [ 9 ] Trains may vary speed on the tracks in an effort to prevent corrugation from affecting sections of rail on a transit system. [ 9 ] Varying a train's speed, direction, and tonnage is beneficial for combating the growth of rail corrugation, as corrugation is caused by continually uniform friction. [ 2 ] On subways and major transit systems, it is not possible to vary the direction of trains, making regular annual or biennial rail grinding more applicable.
Preventive rail grinding is done before any signs of rail corrugation develop. Rail corrugation, which friction causes to grow exponentially, will worsen rapidly if its first signs are not ground out or serviced. [ 2 ] Preventive grinding removes the deformation caused by friction and the chemical breakdown of the tracks. [ 1 ] Regular rail grinding is the primary maintenance operation used to combat roaring rail, or short-pitched, corrugation. [ 9 ] Rail grinding operations occur periodically in order to prevent rail corrugation from developing. Rail grinding cars can be taken down freight lines that traverse long distances in the same direction if the freight railway is used continually. [ 2 ]
Rail corrugation is frequently the subject of community noise complaints. Often, vibrations of the corrugated track become progressively worse, generating more friction and metal-on-metal contact. Roaring rail corrugation is a common cause of noise complaints in urban and suburban communities and is most prevalent when trains travel at moderate speed. [ 2 ] It is often called short-pitch corrugation and is responsible for the majority of community reaction. [ 9 ] The loud and uncomfortable vibration caused by rail corrugation on transit systems affects both passengers and the local communities that the railroads pass through. Short-pitch corrugation creates significantly more noise than normal railroad track friction, with a tone of about 500 to 800 hertz. [ 9 ] Short-pitch corrugation is most commonly seen on railroads that do not receive regular rail grinding maintenance or that are rarely used. Rail support stiffness directly correlates with short-pitch corrugation.
Roark's Formulas for Stress and Strain is a mechanical engineering design book written by Richard G. Budynas and Ali M. Sadegh. It was first published in 1938 and the most current ninth edition was published in March 2020. [ 1 ]
The book covers various subjects, including bearing and shear stress , experimental stress analysis , stress concentrations , material behavior, and stress and strain measurement. It also features expanded tables and cases, improved notations and figures within the tables, consistent table and equation numbering, and verification of correction factors. The formulas are organized into tables in a hierarchical format: chapter, table, case, subcase, and each case and subcase is accompanied by diagrams.
The main topics of the book include:
• The behavior of bodies under stress
• Analytical, numerical, and experimental methods
• Tension, compression, shear, and combined stress
• Beams and curved beams
• Torsion, flat plates, and columns
• Shells of revolution, pressure vessels, and pipes
• Bodies under direct pressure and shear stress
• Elastic stability
• Dynamic and temperature stresses
• Stress concentration
• Fatigue and fracture
• Stresses in fasteners and joints
• Composite materials and solid biomechanics
The topics covered in the 7th Edition:
Chapter 1 – Introduction
Chapter 2 – Stress and Strain: Important Relationships
Chapter 3 – The Behavior of Bodies Under Stress
Chapter 4 – Principles and Analytical Methods
Chapter 5 – Numerical Methods
Chapter 6 – Experimental Methods
Chapter 7 – Tension, Compression, Shear, and Combined Stress
Chapter 8 – Beams; Flexure of Straight Bars
Chapter 9 – Bending of Curved Beams
Chapter 10 – Torsion
Chapter 11 – Flat Plates
Chapter 12 – Columns and Other Compression Members
Chapter 13 – Shells of Revolution; Pressure Vessels; Pipes
Chapter 14 – Bodies in Contact Undergoing Direct Bearing and Shear Stress
Chapter 15 – Elastic Stability
Chapter 16 – Dynamic and Temperature Stresses
Chapter 17 – Stress Concentration Factors
Appendix A – Properties of a Plane Area
Appendix B – Glossary
Appendix C – Composite Materials
In all, there are over 5,000 formulas for over 1,500 different load/support conditions for various structural members.
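As an example of how such tabulated cases are used, the snippet below evaluates one of the best-known ones — maximum deflection and bending stress of a simply supported beam under a central point load (δ = PL³/48EI, σ = Mc/I) — with assumed section and load values chosen only for illustration:

```python
P = 10_000.0   # central load, N
L = 2.0        # span, m
E = 200e9      # Young's modulus (steel), Pa
I = 8.33e-6    # second moment of area, m^4 (e.g. a 100 mm square section)
c = 0.05       # distance from neutral axis to outer fiber, m

delta_max = P * L**3 / (48 * E * I)  # midspan deflection
M_max = P * L / 4                    # midspan bending moment, N*m
sigma_max = M_max * c / I            # outer-fiber bending stress, Pa

print(f"deflection {delta_max * 1000:.2f} mm")   # 1.00 mm
print(f"stress {sigma_max / 1e6:.1f} MPa")       # 30.0 MPa
```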
Richard G. Budynas is professor of mechanical engineering at Rochester Institute of Technology . He is the author of a newly revised McGraw-Hill textbook, Advanced Strength and Applied Stress Analysis, 2nd Edition.
Ali M. Sadegh is a professor and the Founder and Director of the Center for Advanced Engineering Design at The City College of New York . He is a Licensed Professional Engineer, P.E., and a Certified Manufacturing Engineer, CMfgE.
Warren C. Young is professor emeritus in the department of mechanical engineering at the University of Wisconsin, Madison , where he was on the faculty for over 40 years. Dr. Young has also taught as a visiting professor at Bengal Engineering College in Calcutta , India , and served as chief of the Energy Manpower and Training Project sponsored by USAID in Bandung , Indonesia .
Roasting is a process of heating a sulfide ore to a high temperature in the presence of air. It is a step in the processing of certain ores . More specifically, roasting is often a metallurgical process involving gas–solid reactions at elevated temperatures with the goal of purifying the metal component(s). Often before roasting, the ore has already been partially purified, e.g. by froth flotation . The concentrate is mixed with other materials to facilitate the process. The technology is useful in making certain ores usable but it can also be a serious source of air pollution . [ 1 ]
Roasting consists of thermal gas–solid reactions, which can include oxidation, reduction, chlorination, sulfation, and pyrohydrolysis. In roasting, the ore or ore concentrate is treated with very hot air. This process is generally applied to sulfide minerals . During roasting, the sulfide is converted to an oxide, and sulfur is released as sulfur dioxide , a gas. For the ores Cu₂S ( chalcocite ) and ZnS ( sphalerite ), balanced equations for the roasting are:

2 Cu₂S + 3 O₂ → 2 Cu₂O + 2 SO₂

2 ZnS + 3 O₂ → 2 ZnO + 2 SO₂
The gaseous product of sulfide roasting, sulfur dioxide (SO₂), is often used to produce sulfuric acid . Many sulfide minerals contain other components, such as arsenic, that are released into the environment.
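The chalcocite equation above fixes the stoichiometry, so the sulfur dioxide (and potential sulfuric acid) yield per tonne of ore follows from molar masses alone, as in this worked sketch:

```python
# Each mole of Cu2S roasted releases one mole of SO2, which can in turn
# be converted to one mole of H2SO4 (via SO3 in the contact process).
M_CU2S, M_SO2, M_H2SO4 = 159.16, 64.07, 98.08  # molar masses, g/mol

def so2_per_tonne_cu2s(tonnes: float = 1.0) -> float:
    moles = tonnes * 1e6 / M_CU2S  # grams of ore to moles
    return moles * M_SO2 / 1e6     # tonnes of SO2 released

print(f"{so2_per_tonne_cu2s():.3f} t SO2 per t Cu2S")           # ~0.403 t
print(f"{so2_per_tonne_cu2s() * M_H2SO4 / M_SO2:.3f} t H2SO4")  # ~0.616 t
```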
Up until the early 20th century, roasting was started by burning wood on top of ore. This would raise the temperature of the ore to the point where its sulfur content would become its source of fuel, and the roasting process could continue without external fuel sources. Early sulfide roasting was practiced in this manner in "open hearth" roasters, which were manually stirred (a practice called "rabbling") using rake-like tools to expose unroasted ore to oxygen as the reaction proceeded.
This process released large amounts of acidic, metallic, and other toxic compounds. Results of this include areas that even after 60–80 years are still largely lifeless, often exactly corresponding to the area of the roast bed, some of which are hundreds of metres wide by kilometres long. Roasting is an exothermic process. [ 2 ] [ 3 ]
The following describe different forms of roasting: [ 4 ]
Oxidizing roasting, the most commonly practiced roasting process, involves heating the ore in excess air or oxygen, to burn out or replace the impurity element, generally sulfur, partly or completely by oxygen. For the roasting of a divalent metal sulfide MS, the general reaction can be given by:

2 MS + 3 O₂ → 2 MO + 2 SO₂
Roasting the sulfide ore, until almost complete removal of the sulfur from the ore, results in a dead roast . [ 5 ]
Volatilizing roasting involves oxidation of the ores at elevated temperatures to eliminate impurity elements in the form of their volatile oxides. Examples of such volatile oxides include As₂O₃ , Sb₂O₃ , ZnO and sulfur oxides. Careful control of the oxygen content in the roaster is necessary, as excessive oxidation can form non-volatile oxides.
Chloridizing roasting transforms certain metal compounds to chlorides through oxidation or reduction. Some metals such as uranium , titanium , beryllium and some rare earths are processed in their chloride form. For a divalent metal M, certain forms of chloridizing roasting may be represented by the overall reactions:

MS + Cl₂ → MCl₂ + S

2 MO + 2 Cl₂ + S → 2 MCl₂ + SO₂
The first reaction represents the chlorination of a sulfide ore involving an exothermic reaction. The second reaction involving an oxide ore is facilitated by addition of elemental sulfur. Carbonate ores react in a similar manner as the oxide ore, after decomposing to their oxide form at high temperatures. [ 6 ]
Sulfating roasting oxidizes certain sulfide ores to sulfates in a supply of air to enable leaching of the sulfate for further processing. [ citation needed ]
Magnetic roasting involves controlled roasting of the ore to convert it into a magnetic form, thus enabling easy separation and processing in subsequent steps. An example is the controlled reduction of haematite (non-magnetic Fe₂O₃ ) to magnetite (magnetic Fe₃O₄ ).
Reduction roasting partially reduces an oxide ore before the actual smelting process.
Sinter roasting involves heating the fine ores at high temperatures, where simultaneous oxidation and agglomeration of the ores take place. For example, lead sulfide ores are subjected to sinter roasting in a continuous process after froth flotation to convert the fine ores to workable agglomerates for further smelting operations. | https://en.wikipedia.org/wiki/Roasting_(metallurgy) |
Simon's reagent is used as a simple spot-test to presumptively identify alkaloids as well as other compounds. It reacts with secondary amines like MDMA and methamphetamine to give a blue solution.
The primary use of this reagent is for detecting secondary amines, such as MDMA and methamphetamine , and it is typically used after the Mecke or Marquis reagents to differentiate between the two mentioned and amphetamine or MDA . [ 1 ]
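That testing sequence amounts to a small decision table. The sketch below is a hypothetical illustration using commonly reported reagent colors; presumptive tests of this kind are never conclusive on their own and require laboratory confirmation:

```python
def interpret(marquis_color: str, simons_blue: bool) -> str:
    """Toy presumptive-test logic: a Marquis result narrows the family,
    then a blue Simon's result indicates a secondary amine."""
    if marquis_color == "purple-black":  # MDMA/MDA family
        return "MDMA (secondary amine)" if simons_blue else "MDA (primary amine)"
    if marquis_color == "orange-brown":  # amphetamine family
        return "methamphetamine" if simons_blue else "amphetamine"
    return "inconclusive"

print(interpret("purple-black", True))  # MDMA (secondary amine)
```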
The reagent is typically provided in two parts: solution A, an aqueous solution of sodium nitroprusside with acetaldehyde added, and solution B, an aqueous sodium carbonate solution. [ 2 ] [ 1 ] [ 3 ]
Separate storage of the aldehyde and base are necessary to prevent aldol polymerisation of the aldehyde.
When exposed to an amine, reaction with acetaldehyde produces the enamine , which subsequently reacts with sodium nitroprusside to give the imine . Finally, the iminium salt is hydrolysed to the bright blue [ 1 ] Simon-Awe complex. [ 3 ] [ 5 ]
Acetaldehyde can be replaced with acetone , in which case the reagent detects primary amines instead, giving a purple coloured product. [ 3 ]
A drop from each solution (A and B) is dripped onto the substance being tested, causing the two solutions to mix together. | https://en.wikipedia.org/wiki/Robadope_reagent |
In abstract algebra , a Robbins algebra is an algebra containing a single binary operation , usually denoted by ∨ {\displaystyle \lor } , and a single unary operation usually denoted by ¬ {\displaystyle \neg } satisfying the following axioms :
For all elements a , b , and c :

(1) Associativity: a ∨ ( b ∨ c ) = ( a ∨ b ) ∨ c

(2) Commutativity: a ∨ b = b ∨ a

(3) Robbins equation: ¬(¬( a ∨ b ) ∨ ¬( a ∨ ¬ b )) = a
For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras . This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra".
In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus the Huntington equation:

¬(¬ a ∨ b ) ∨ ¬(¬ a ∨ ¬ b ) = a
From these axioms, Huntington derived the usual axioms of Boolean algebra.
Very soon thereafter, Herbert Robbins posed the Robbins conjecture , namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra . ∨ {\displaystyle \lor } would interpret Boolean join and ¬ {\displaystyle \neg } Boolean complement . Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra."
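One direction of the conjecture is easy to check mechanically: an equation holds in every Boolean algebra exactly when it holds in the two-element one, so a brute-force test over {0, 1} confirms that every Boolean algebra is a Robbins algebra. The sketch below does this; McCune's result established the far harder converse:

```python
from itertools import product

def n(x): return 1 - x     # Boolean complement
def j(x, y): return x | y  # Boolean join

# Robbins equation: n(n(a v b) v n(a v n(b))) = a, checked over {0, 1}.
assert all(n(j(n(j(a, b)), n(j(a, n(b))))) == a
           for a, b in product((0, 1), repeat=2))
print("Robbins equation holds in the two-element Boolean algebra")
```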
Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski , and others worked on the problem, but failed to find a proof or counterexample.
William McCune proved the conjecture in 1996, using the automated theorem prover EQP . For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof. | https://en.wikipedia.org/wiki/Robbins_algebra |
A Robel pole is a device consisting of a 1 to 2 m (3 ft 3 in to 6 ft 7 in) vertical pole possessing alternating horizontal bands and a 4 m (13 ft) line of rope or cord. It is used by range ecologists, field biologists and other scientists to measure the density of vegetation and to quantify the volume of ground cover in a particular habitat using the visual obstruction (VO) measurement method . The Robel pole is named for Robert J. Robel, the scientist who developed the device and technique. [ 1 ] Modifications of Robel's original design have been developed and published; all use the VO method.
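As a hypothetical illustration of how VO readings are reduced to a single number, the sketch below averages the lowest visible band recorded from several directions around a sampling point; published protocols and modifications differ in band width, viewing height, and the number of readings taken:

```python
def visual_obstruction_reading(lowest_visible_bands):
    """Mean of the lowest-visible-band readings around one station.
    Each reading is the lowest band on the pole not hidden by
    vegetation, observed from the end of the attached 4 m cord."""
    return sum(lowest_visible_bands) / len(lowest_visible_bands)

# Four readings (band indices), one per cardinal direction:
print(visual_obstruction_reading([3, 4, 2, 3]))  # 3.0
```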
Robert "Bob" Swanson (1947–1999) was an American venture capitalist who co-founded Genentech in 1976 with Herbert Boyer . Genentech is one of the leading biotechnology companies in the world. He was CEO of Genentech from 1976 to 1990, and chairman from 1990 to 1996.
Swanson graduated from the Massachusetts Institute of Technology , where he was a member of the Sigma Chi fraternity. He completed a B.S. degree in Chemistry as well as a master's degree in Management from the MIT Sloan School of Management . Both degrees were conferred in 1970.
He is regarded as an instrumental figure in launching the biotechnology revolution . The authors of the book, 1,000 Years, 1,000 People: Ranking the Men and Women Who Shaped the Millennium ranked Mr. Swanson number 612. Mr. Swanson was inducted into the Junior Achievement U. S. Business Hall of Fame in 2006. [ 1 ] He received the 2000 Biotechnology Heritage Award posthumously with Herbert Boyer . [ 2 ] [ 3 ]
On December 6, 1999, he succumbed to glioblastoma , a type of brain cancer , at the age of 52. [ 4 ]
Robert A. Swanson was born in Brooklyn, New York , in 1947 to Arthur J. Swanson and Arline Baker Swanson. [ 5 ] [ 6 ] [ 7 ] Arthur Swanson was an airplane electrical maintenance crew leader, and worked in shifts. [ 5 ] [ 6 ] [ 7 ]
According to Swanson, he was taught from an early age that his generation would do better than the last generation of his family. [ 5 ] It was because of this that his family wanted him to be the first to obtain a college degree. [ 5 ] [ 6 ] His family was particularly interested in the Massachusetts Institute of Technology (MIT). [ 5 ] Much to his family's pride, Swanson was accepted into MIT in 1965. [ 5 ] [ 6 ]
Even though he was majoring in chemistry, he realized later during his undergraduate education that he preferred working with people, rather than in research. [ 5 ] [ 6 ] What follows is an excerpt from a 1996 interview that describes how he came to this realization: "At the end of my junior year, I... got a summer job working for a chemical company... One of the things I discovered was that I enjoyed people more than things. So I said, 'Gee, this probably isn't going to be what I'd want to do all my life,'". [ 5 ]
As a result, Swanson petitioned MIT to be able to take the first year's courses at the Alfred P. Sloan School of Management for a master's degree, and they allowed him to do so. [ 5 ] Thanks to the graduate courses he took, he realized that he was particularly interested in two things: organizational development, and the commercialization of innovative ideas. [ 5 ] He graduated from MIT in 1970, with an undergraduate degree in chemistry and a Master of Science degree in management. [ 5 ] [ 6 ] [ 7 ]
After graduating from MIT, Swanson took a job at Citibank, where he managed a venture investment group. [ 5 ] [ 6 ] [ 7 ] [ 8 ] His performance pleased his supervisors, and he and a colleague were chosen to open a San Francisco office for Citicorp Venture Capital. [ 5 ] [ 6 ] [ 7 ] [ 8 ] However, the new Citicorp investments were not doing well. [ 5 ] [ 6 ] One particular failure, which Swanson later believed to have been a lucky break, was the bankruptcy of Antex, a science-based company that Citicorp had invested in. [ 5 ] [ 6 ] He worked with Eugene Kleiner to attempt to get some money out of the company's bankruptcy; Kleiner was the cofounder of the venture capital partnership Kleiner & Perkins. [ 6 ]
Swanson left Citicorp and joined Kleiner & Perkins in 1974, at the recommendation of Eugene Kleiner himself. [ 6 ] As an associate, Swanson spent a great deal of time and effort attempting to convince the heads of the science company Cetus, in which Kleiner & Perkins had invested, to pursue genetic recombination projects. [ 5 ] [ 6 ] His interest in the technology had been piqued during a lunch with famed scientist and Nobel laureate Donald Glaser. [ 5 ] [ 6 ] However, the company refused to take on such a risky endeavor, and Kleiner & Perkins parted ways with it. [ 6 ] [ 5 ] This falling out was one of the main reasons for the group's decision to advise Swanson to look for another job. [ 5 ] [ 6 ] Kleiner & Perkins had decided that they would rather work alone, and by the end of 1975, Swanson's position there would be terminated. [ 5 ] [ 6 ]
A young Swanson now found himself unemployed and interviewing almost daily in an attempt to find a job. [ 5 ] [ 6 ] However, he was still fascinated by the potential of recombinant DNA technology, and decided to cold call scientists working on the technology, in the hope that one of them would be interested in commercializing it. [ 5 ] [ 6 ] [ 7 ] [ 8 ] One of the scientists he contacted, Herbert Boyer, expressed interest but was at first hesitant to meet with Swanson. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Boyer was an academic scientist, and was not well versed in matters of business. [ 6 ] Swanson convinced Boyer to meet, for a short time, at his University of California, San Francisco lab. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The short meeting stretched to three hours, and Boyer came out determined to commercialize the technology he had helped pioneer. He would handle the science behind the product, whereas Swanson would work on obtaining funds and managing the organization as a whole. [ 5 ] [ 6 ] The two agreed to form a partnership, and each put down $500 to cover legal fees. [ 5 ] [ 6 ] [ 8 ]
Swanson made the decision to pursue the creation of the company full-time, rather than take a job at an established institution or company. [ 5 ] [ 6 ] He explained his reasoning in an interview: "(I told myself) 'Look, I think this is important. If I don't do this, I'm not going to like myself so much for not having given it a shot.' So that was what made that decision." [ 5 ]
Swanson then set out to identify their first marketable product, and quickly focused on the human protein insulin . [ 5 ] [ 6 ] From a scientific standpoint, it was a well-characterized protein whose structure had already been elucidated, making it, in theory, easier to work with. [ 5 ] [ 6 ] Additionally, the insulin widely available at the time was pig insulin, to which many people had allergic reactions. [ 5 ] [ 6 ] Human insulin was therefore preferable, for it was believed that people would not have allergic reactions to it. [ 6 ] From a business standpoint, there was a large market for insulin; at the time, world sales exceeded $100 million and were growing. [ 5 ] [ 6 ] Boyer agreed that the insulin hormone should be their first target molecule. [ 5 ] [ 6 ]
After concluding the market research, Swanson prepared Genentech's first business proposal by March 1976. [ 5 ] [ 6 ] It was with this proposal that Swanson pitched Genentech to Kleiner & Perkins. [ 6 ] Perkins later explained that they considered the technical risks to be enormous: "(The risk of failure was) Very high. I figured better than 50–50 we'd lose it... (However) If it worked, the rewards would be obvious." [ 6 ] Boyer's scientific expertise and Swanson's business plan convinced the venture capitalists. [ 5 ] [ 6 ] While acknowledging the tremendous risk associated with the company, Kleiner and Perkins promised to invest $100,000 in Genentech. [ 5 ] [ 6 ] This was just a small fraction of Kleiner and Perkins's $8 million venture capital fund. [ 6 ]
With the Kleiner and Perkins investment in place, Swanson and Boyer dissolved their partnership and created the legal entity Genentech. [ 6 ] Kleiner and Perkins provided $100,000 at the May closing and acquired 20,000 shares of Genentech preferred stock. [ 6 ] Swanson was made president and treasurer of Genentech, receiving a $2,500 per month salary along with 25,000 shares. [ 6 ] This marked the end of Swanson's unemployment and the beginning of his career at Genentech. [ 5 ] [ 6 ]
With funding secured and the organizational structure formed, the first logical step was to begin experimenting with the procedure for the synthesis of insulin. [ 5 ] [ 6 ] Since Genentech lacked laboratories of its own, the Boyer lab and two other labs in the San Francisco area were subcontracted to carry out the experiments. [ 5 ] [ 6 ]
However, the scientists quickly realized that a stepwise approach would be more practical; rather than immediately engineer a bacterium that synthesized insulin, they would engineer a bacterium that could synthesize somatostatin , a smaller hormone. [ 6 ] Swanson resisted at first, believing that "if you are going to go for something, go for the real thing", the "real thing" in this case being insulin. [ 5 ] [ 6 ] He eventually agreed, albeit grudgingly. [ 5 ] [ 6 ]
With the new research goal set, Swanson proceeded to establish official research agreements with the University of California and the City of Hope . [ 5 ] [ 6 ] Then, in early 1977, Swanson began a second round of funding to jumpstart the somatostatin research, raising approximately $850,000 by February, enough to fund the somatostatin research projects. [ 5 ] [ 6 ] By August 1977, the research teams had created the first bacterium capable of synthesizing somatostatin. [ 5 ] [ 6 ] This was the proof of concept that the fledgling company sought. On December 2, 1977, Swanson and the scientists held a press conference announcing their findings. [ 5 ] [ 6 ]
Following this success, Swanson directed the scientists to pursue the creation of a bacterium that synthesized human insulin. [ 5 ] [ 6 ] Two other scientific teams were already attempting such a project, but Swanson moved quickly to ensure that Genentech synthesized it first. [ 5 ] [ 6 ] By early 1978, his priorities were to obtain lab space for the scientists, corporate contracts, and more funding for Genentech. [ 5 ] [ 6 ]
In order to attract the best scientists, Swanson, with the assistance of Boyer, tried to create an environment attractive to academic scientists. It was because of this that scientists at Genentech were allowed to publish their findings in scientific journals. [ 5 ] [ 6 ] The only restriction was that they could publish only after the appropriate patents had been filed. [ 5 ] [ 6 ]
By February 1978, Swanson had leased a 10,000-square-foot section of an airfreight warehouse, which would serve as Genentech's first lab space. [ 5 ] [ 6 ] Later that year, Swanson also secured a partnership with Eli Lilly: Genentech would receive $50,000 a month to pursue the human insulin project. [ 5 ] [ 6 ] By August 1978, the Genentech scientists were able to synthesize human insulin, and in that same month Swanson and colleagues negotiated a multimillion-dollar contract with Eli Lilly . [ 5 ] [ 6 ] The big company-small company relationship they developed became the template for later biotechnology start-ups. [ 6 ] While there was still plenty of work to be done on the human insulin synthesis, the new stream of revenue and the significant media coverage meant that Genentech could pursue other research projects. [ 6 ] By 1979, Genentech had projects on interferons, animal growth hormones, hepatitis B vaccines, and the hormone thymosin. [ 6 ]
By 1980, Swanson had decided to raise money by taking Genentech public. [ 5 ] [ 6 ] This was due to a variety of factors: Genentech needed more money to continue its development, and Swanson believed that the public interest in the technology should be capitalized on. [ 5 ] [ 6 ] The initial public offering took place on October 14, 1980, and it was the largest IPO in history to that point, raising $35 million for Genentech. [ 5 ] [ 6 ] A trip to Europe in September 1980 to raise interest from European investors before the IPO also served as his honeymoon. [ 9 ]
From then on, Swanson focused on pursuing his vision of Genentech as a self-sustaining biotechnology company, not a contract research operation. [ 5 ] [ 6 ] He believed that recombinant growth hormones had a large market in the United States and would be key to Genentech's corporate evolution. [ 5 ] [ 6 ] On October 18, 1985, the FDA approved the human growth hormone, developed almost entirely by Genentech, for sale in the United States under the commercial name Protropin. [ 5 ] [ 6 ] In just two decades, Protropin sales exceeded $2 billion. [ 6 ] Genentech had been able to manufacture, receive federal approval for, and market its own product, marking the successful execution of Swanson's plan to build Genentech into a self-sustaining biotech firm. [ 5 ] [ 6 ] [ 8 ] Swanson left his position as CEO in 1990, serving as chairman until his retirement from Genentech in 1996. [ 5 ] [ 6 ] [ 8 ]
Robert Swanson's legacy endures through the company he cofounded and led. Genentech is still producing drugs and treatments, and some of his policies, such as allowing company scientists to publish, remain in place. [ 5 ] [ 6 ] [ 8 ] Genentech scored many firsts under Swanson's leadership, such as developing the first drug produced via genetic engineering, being the first biotechnology company to go public, and being the first biotechnology company to sell its own drug. [ 7 ] [ 8 ] These accomplishments have earned Genentech, and Swanson, a place in the history of the biotechnology industry. [ 7 ] [ 8 ]
The following is a list of the awards and honors received by Robert Swanson. [ 5 ] [ 3 ] | https://en.wikipedia.org/wiki/Robert_A._Swanson |
Robert Blanché (1898–1975) was an associate professor of philosophy at the University of Toulouse . He wrote many books addressing the philosophy of mathematics .
Robert Blanché died in 1975. Nine years earlier, in 1966, he had published Structures intellectuelles with Vrin, in which he deals with the logical hexagon . Whereas the logical square, or square of Apuleius, represents four values (A, E, I, O), the logical hexagon represents six: not only A, E, I, O but also two new values, Y and U. See the article on the logical hexagon , as well as its treatment in Indian logic .
In La Logique et son histoire d'Aristote à Russell , published with Armand Colin in 1970, Robert Blanché, the author of Structures intellectuelles (Vrin, 1966), mentions that Józef Maria Bocheński speaks of a sort of Indian logical triangle to be compared with the square of Aristotle (or square of Apuleius), in other words with the square of opposition.
Robert Burns Woodward ForMemRS HonFRSE (April 10, 1917 – July 8, 1979) was an American organic chemist . He is considered by many to be the preeminent synthetic organic chemist of the twentieth century, [ 3 ] having made many key contributions to the subject, especially in the synthesis of complex natural products and the determination of their molecular structure . He worked closely with Roald Hoffmann on theoretical studies of chemical reactions . He was awarded the Nobel Prize in Chemistry in 1965.
Woodward was born in Boston, Massachusetts , on April 10, 1917. He was the son of Margaret Burns (an immigrant from Scotland who claimed to be a descendant of the poet Robert Burns ) and her husband, Arthur Chester Woodward, himself the son of the Roxbury apothecary Harlow Elliot Woodward.
His father was one of the many victims of the 1918 influenza pandemic .
From a very early age, Woodward was attracted to and engaged in private study of chemistry while he attended a public primary school, and then Quincy High School , [ 4 ] in Quincy, Massachusetts . By the time he entered high school, he had already managed to perform most of the experiments in Ludwig Gattermann 's then widely used textbook of experimental organic chemistry. In 1928, Woodward contacted the Consul-General of the German consulate in Boston (Baron von Tippelskirch [ 5 ] ), and through him, managed to obtain copies of a few original papers published in German journals. Later, in his Cope lecture, he recalled how he had been fascinated when, among these papers, he chanced upon Diels and Alder's original communication about the Diels–Alder reaction . Throughout his career, Woodward was to repeatedly and powerfully use and investigate this reaction, both in theoretical and experimental ways.

In 1933, he entered the Massachusetts Institute of Technology (MIT), but neglected his formal studies badly enough to be excluded at the end of the 1934 fall term. MIT readmitted him in the 1935 fall term, and by 1936 he had received the Bachelor of Science degree. Only one year later, MIT awarded him the doctorate , when his classmates were still graduating with their bachelor's degrees. [ 6 ] Woodward's doctoral work involved investigations related to the synthesis of the female sex hormone estrone . [ 7 ] MIT required that graduate students have research advisors. Woodward's advisors were James Flack Norris and Avery Adrian Morton, [ citation needed ] although it is not clear whether he actually took any of their advice.

After a short postdoctoral stint at the University of Illinois, he took a Junior Fellowship at Harvard University from 1937 to 1938, and remained at Harvard in various capacities for the rest of his life. In the 1960s, Woodward was named Donner Professor of Science, a title that freed him from teaching formal courses so that he could devote his entire time to research.
The first major contribution of Woodward's career in the early 1940s was a series of papers describing the application of ultraviolet spectroscopy in the elucidation of the structure of natural products. Woodward collected together a large amount of empirical data, and then devised a series of rules, later called Woodward's rules , which could be applied to finding out the structures of new natural substances, as well as non-natural synthesized molecules. The expedient use of newly developed instrumental techniques was a characteristic Woodward exemplified throughout his career, and it marked a radical change from the extremely tedious and long chemical methods of structural elucidation that had been used until then.
In 1944, with his postdoctoral researcher William von Eggers Doering , Woodward reported the synthesis of the alkaloid quinine , used to treat malaria . Although the synthesis was publicized as a breakthrough in procuring the hard-to-get medicinal compound, whose supply from Japanese-occupied Southeast Asia had been cut off, in reality it was too long and tedious to adopt on a practical scale. Nevertheless, it was a landmark for chemical synthesis. Woodward's particular insight in this synthesis was to realize that the German chemist Paul Rabe had converted a precursor of quinine called quinotoxine to quinine in 1905. Hence, a synthesis of quinotoxine (which Woodward actually synthesized) would establish a route to synthesizing quinine. When Woodward accomplished this feat, organic synthesis was still largely a matter of trial and error, and nobody thought that such complex structures could actually be constructed. Woodward showed that organic synthesis could be made into a rational science, and that synthesis could be aided by well-established principles of reactivity and structure. This synthesis was the first in a series of exceedingly complicated and elegant syntheses that he would undertake.
Culminating in the 1930s, the British chemists Christopher Ingold and Robert Robinson among others had investigated the mechanisms of organic reactions, and had come up with empirical rules which could predict reactivity of organic molecules. Woodward was perhaps the first synthetic organic chemist who used these ideas as a predictive framework in synthesis. Woodward's style was the inspiration for the work of hundreds of successive synthetic chemists who synthesized medicinally important and structurally complex natural products.
Beginning in the late 1940s, Woodward synthesized many complex natural products, including quinine , cholesterol , cortisone , strychnine , lysergic acid , reserpine , chlorophyll , cephalosporin , and colchicine . [ 8 ] With these, Woodward opened up a new era of synthesis, sometimes called the 'Woodwardian era', in which he showed that natural products could be synthesized by careful application of the principles of physical organic chemistry and by meticulous planning.
Many of Woodward's syntheses were described as spectacular by his colleagues and before he did them, it was thought by some that it would be impossible to create these substances in the lab. Woodward's syntheses were also described as having an element of art in them, and since then, synthetic chemists have always looked for elegance as well as utility in synthesis. His work also involved the exhaustive use of the then newly developed techniques of infrared spectroscopy and later, nuclear magnetic resonance spectroscopy . Another important feature of Woodward's syntheses was their attention to stereochemistry or the particular configuration of molecules in three-dimensional space. Most natural products of medicinal importance are effective, for example as drugs, only when they possess a specific stereochemistry. This creates the demand for ' stereoselective synthesis ', producing a compound with a defined stereochemistry. While today a typical synthetic route routinely involves such a procedure, Woodward was a pioneer in showing how, with exhaustive and rational planning, one could conduct reactions that were stereoselective. Many of his syntheses involved forcing a molecule into a certain configuration by installing rigid structural elements in it, another tactic that has become standard today. In this regard, especially his syntheses of reserpine and strychnine were landmarks.
During World War II, Woodward was an advisor to the War Production Board on the penicillin project. Although Woodward is often given credit for proposing the beta-lactam structure of penicillin , it was actually first proposed by chemists at Merck and Edward Abraham at Oxford, and then investigated by other groups as well (e.g., Shell). Woodward at first endorsed an incorrect tricyclic ( thiazolidine fused, amino bridged oxazinone) structure put forth by the penicillin group at Peoria. Subsequently, he put his imprimatur on the beta-lactam structure, all of this in opposition to the thiazolidine – oxazolone structure proposed by Robert Robinson , the then leading organic chemist of his generation. Ultimately, the beta-lactam structure was shown to be correct by Dorothy Hodgkin using X-ray crystallography in 1945.
Woodward also applied the technique of infrared spectroscopy and chemical degradation to determine the structures of complicated molecules. Notable among these structure determinations were santonic acid , strychnine, magnamycin and terramycin . In each one of these cases, Woodward again showed how rational facts and chemical principles, combined with chemical intuition, could be used to achieve the task.
In the early 1950s, Woodward, along with the British chemist Geoffrey Wilkinson , then at Harvard, postulated a novel structure for ferrocene , a compound consisting of a combination of an organic molecule with iron. [ 9 ] This marked the beginning of the field of transition metal organometallic chemistry which grew into an industrially very significant field. [ 10 ] Wilkinson won the Nobel Prize for this work in 1973, along with Ernst Otto Fischer . [ 11 ] Some historians think that Woodward should have shared this prize along with Wilkinson. Remarkably, Woodward himself thought so, and voiced his thoughts in a letter sent to the Nobel Committee. [ 12 ]
Woodward won the Nobel Prize in 1965 for his synthesis of complex organic molecules. He had been nominated a total of 111 times from 1946 to 1965. [ 13 ] In his Nobel lecture, he described the total synthesis of the antibiotic cephalosporin, and claimed that he had pushed the synthesis schedule so that it would be completed around the time of the Nobel ceremony.
In the early 1960s, Woodward began work on what was the most complex natural product synthesized to date: vitamin B 12 . In a remarkable collaboration with his colleague Albert Eschenmoser in Zurich, a team of almost one hundred students and postdoctoral workers worked for many years on the synthesis of this molecule. The work was finally published in 1973, and it marked a milestone in the history of organic chemistry. The synthesis included almost a hundred steps and involved the rigorous planning and analysis that had always characterized Woodward's work. This work, more than any other, convinced organic chemists that the synthesis of any complex substance was possible, given enough time and planning (see also palytoxin , synthesized by the research group of Yoshito Kishi , one of Woodward's postdoctoral students). As of 2019, no other total synthesis of vitamin B 12 has been published.
In 1965, based on observations that Woodward had made during the B 12 synthesis, he and Roald Hoffmann devised rules (now called the Woodward–Hoffmann rules ) for elucidating the stereochemistry of the products of organic reactions . [ 14 ] Woodward formulated his ideas (which were based on the symmetry properties of molecular orbitals ) from his experiences as a synthetic organic chemist; he asked Hoffmann to perform theoretical calculations to verify these ideas, which were done using Hoffmann's Extended Hückel method . The predictions of these rules were verified by many experiments. Hoffmann shared the 1981 Nobel Prize for this work along with Kenichi Fukui , a Japanese chemist who had done similar work using a different approach; Woodward had died in 1979, and Nobel Prizes are not awarded posthumously.
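In compact textbook form (the standard generalized statement, with a worked example added here rather than a quotation from the original papers): a thermal, ground-state pericyclic change is symmetry-allowed when the count

$$N = \#\{(4q+2)_s \ \text{components}\} + \#\{(4r)_a \ \text{components}\}$$

is odd, where the subscripts s and a denote suprafacial and antarafacial participation and $q$, $r$ run over non-negative integers. As a worked check, the Diels–Alder reaction analysed as $\pi 4_s + \pi 2_s$ contributes one $(4q+2)_s$ component (the $\pi 2_s$ unit, with $q = 0$) and no $(4r)_a$ components, so $N = 1$ is odd and the cycloaddition is thermally allowed.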
While at Harvard, Woodward took on the directorship of the Woodward Research Institute , based at Basel , Switzerland, in 1963. [ 15 ] He also became a trustee of his alma mater, MIT , from 1966 to 1971, and of the Weizmann Institute of Science in Israel. [ citation needed ]
Woodward died in Cambridge, Massachusetts , from a heart attack in his sleep. At the time, he was working on the synthesis of an antibiotic , erythromycin . A student of his said about him: [ citation needed ]
During his lifetime Woodward authored or coauthored almost 200 publications, of which 85 are full papers, the remainder comprising preliminary communications, the text of lectures, and reviews. The pace of his scientific activity soon outstripped his capacity to publish all experimental details, and much of the work in which he participated was not published until a few years after his death. Woodward trained more than two hundred Ph.D. students and postdoctoral workers, many of whom later went on to distinguished careers.
Some of his best-known students include Robert M. Williams (Colorado State), Harry Wasserman (Yale), Yoshito Kishi (Harvard), Stuart Schreiber (Harvard), William R. Roush ( Scripps-Florida ), Steven A. Benner (UF), James D. Wuest (Montreal), Christopher S. Foote (UCLA), Kendall Houk (UCLA), porphyrin chemist Kevin M. Smith (LSU), Thomas R. Hoye (University of Minnesota), Ronald Breslow (Columbia University) and David Dolphin (UBC).
Woodward had an encyclopedic knowledge of chemistry, and an extraordinary memory for detail. [ 16 ] Probably the quality that most set him apart from his peers was his remarkable ability to tie together disparate threads of knowledge from the chemical literature and bring them to bear on a chemical problem. [ 16 ]
For his work, Woodward received many awards, honors and honorary doctorates, including election to the American Academy of Arts and Sciences in 1948, [ 17 ] the National Academy of Sciences in 1953, [ 18 ] the American Philosophical Society in 1962, [ 19 ] and membership in academies around the world. He was also a consultant to many companies such as Polaroid, Pfizer , and Merck . Other awards include:
Woodward also received over twenty honorary degrees , [ 3 ] including honorary doctorates from the following universities:
In 1938, he married Irja Pullman; they had two daughters: Siiri Anna (b. 1939) and Jean Kirsten (b. 1944). In 1946, he married Eudoxia Muller , an artist and technician whom he met at the Polaroid Corp. This marriage, which lasted until 1972, produced a daughter, Crystal Elisabeth (b. 1947), and a son, Eric Richard Arthur (b. 1953). [ 6 ]
His lectures frequently lasted for three or four hours. [ 5 ] His longest known lecture defined the unit of time known as the "Woodward", after which his other lectures were deemed to be so many "milli-Woodwards" long. [ 22 ] In many of these, he eschewed the use of slides and drew structures by using multicolored chalk. Typically, to begin a lecture, Woodward would arrive and lay out two large white handkerchiefs on the countertop. Upon one would be four or five colors of chalk (new pieces), neatly sorted by color, in a long row. Upon the other handkerchief would be placed an equally impressive row of cigarettes. The previous cigarette would be used to light the next one. His Thursday seminars at Harvard often lasted well into the night. He had a fixation with blue: many of his suits, his car, and even his parking space were blue. [ 5 ]
In one of his laboratories, his students hung a large black and white photograph of the master from the ceiling, complete with a large blue "tie" appended. There it hung for some years (early 1970s), until scorched in a minor laboratory fire. [ citation needed ] He detested exercise, could get along with only a few hours of sleep every night, was a heavy smoker , and enjoyed Scotch whisky and martinis. [ 1 ] [ 23 ] | https://en.wikipedia.org/wiki/Robert_Burns_Woodward |
Robert Joseph Cava (born 1951) [ 4 ] is a solid-state chemist at Princeton University where he holds the title Russell Wellman Moore Professor of Chemistry . [ 5 ] Previously, Cava worked as a staff scientist at Bell Labs from 1979 to 1996, where he earned the title of Distinguished Member of the Technical Staff. As of 2016, his research investigates topological insulators , semimetals , superconductors , frustrated magnets and thermoelectrics . [ 1 ] [ 6 ] [ 7 ] [ 8 ]
Cava was educated at the Massachusetts Institute of Technology (MIT) where he was awarded Bachelor of Science and Master of Science degrees in Materials Science and Engineering in 1974 followed by a PhD in ceramics in 1978. [ 6 ] His PhD was supervised by Bernhardt J. Wuensch [ 2 ] and investigated the electrical mobility of ions in fast ion conductors . [ 9 ] [ 10 ] [ 11 ]
In his career, he has published over 500 peer-reviewed papers, 36 of them in Nature and 8 of them in Science . [ 12 ] These papers have been cited over 30,000 times, [ 12 ] including his seminal work on Ba 2 YCu 3 O 9−δ ( YBCO ), which has been cited almost 1500 times. [ 13 ] He holds 15 patents. [ citation needed ]
His former doctoral students include Leslie Schoop . [ 3 ]
In recognition of his contributions, he was elected in 1988 a fellow of the American Institute of Physics [ 14 ] and a Fellow of the American Physical Society . [ 15 ] He was elected in 2001 a Member of the National Academy of Sciences , [ 16 ] which specifically acknowledged his mastery of the ternary and quaternary oxides that produced materials possessing high-temperature superconductivity .
In 1996 Cava received the Bernd T. Matthias Prize for new superconducting materials. He received in 2011 the Humboldt Prize and in 2012 the Linus Pauling Award . In 2014 he received a Doctor Honoris Causa degree from the Gdańsk University of Technology . Cava also won the 2021 David Adler Lectureship Award in the Field of Materials Physics . [ 17 ] In addition to research, Cava's ability to connect with students while teaching has earned him several teaching awards, including the Fall 2002 Excellence in Teaching Award from Princeton University . [ 6 ]
He was elected a Foreign Member of the Royal Society (ForMemRS) in 2016. [ 18 ]
His biography at the Gdańsk University of Technology describes him as a New Yorker , dedicated supporter of the New York Yankees , passionate astronomer and amateur brewer . [ 4 ]
| https://en.wikipedia.org/wiki/Robert_Cava |
Robert Day Shannon (born 1935) is a retired research chemist formerly at DuPont de Nemours, Inc. [ 1 ]
Shannon received his B.S. and M.S. degrees in Ceramic Engineering from the University of Illinois in 1957 and 1959, and his Ph.D. in Ceramic Engineering from the University of California at Berkeley in 1960. He joined the DuPont Company as a research chemist, concentrating from 1964 to 1971 on high-pressure synthesis and precious metal oxide chemistry. He then spent 1971 conducting postdoctoral studies at McMaster University in Hamilton, Ontario , working with Chris Calvo on the crystal structures of a number of vanadates [ 2 ] and with David Brown on bond strength-bond length relationships useful in determining H locations in hydroxides and hydrates. [ 3 ] Next, he took a sabbatical leave from DuPont and spent 1972 at the CNRS and teaching at the University of Grenoble , France, as a visiting professor, where he presented a course on solid state chemistry and conducted research on the high-pressure chemistry of vanadates. [ 4 ] He returned to DuPont in 1973 to do research on new ionic conductors and precious metal oxide chemistry.
In 1982, he was granted another sabbatical leave from DuPont and worked on catalysis with zeolites at the Institut de Catalyse in Lyon, France. On completing the sabbatical, he returned to DuPont and worked for another ten years before retiring in 1992.
After retirement, he received a grant from the Alexander von Humboldt Foundation to continue his research on ion polarizabilities, collaborating in 1994 with Reinhard Fischer at the Universities of Mainz and Bremen in Germany and with Olaf Medenbach at the Ruhr-Universität in Bochum , Germany. There, he prepared three papers on refractive indices and electronic polarizabilities in oxides and other compounds. [ 5 ] He has since moved to Colorado, where he has been associated with the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder . [ 6 ]
Shannon was a member of the American Chemical Society and the American Crystallographic Association . He was elected a Fellow of the Mineralogical Society of America . [ 7 ] He has served on the Evaluation Panel for Materials Science at the National Bureau of Standards , and on the National Science Foundation Subcommittee for Oversight Review of Solid State Chemistry.
Shannon has about 164 publications that, together, have received over 77 thousand citations. [ 8 ] His work on ionic radii has drawn particularly wide attention. In a 2014 Nature paper, [ 9 ] his 1976 work on ionic radii [ 10 ] was recognized as the 22nd most cited paper in all of science. It has also been cited as the highest formally-cited database of all time. [ 9 ] He has a number of patents on glass compositions, zeolite catalysts, noble-metal oxide electrodes, and chemical compounds. [ 11 ]
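Shannon's radii are used additively: to a first approximation, a cation-anion bond length is the sum of the two effective ionic radii for the relevant coordination numbers (the full tables index radii by charge, coordination number and spin state). The sketch below illustrates the arithmetic with two widely quoted six-coordinate values from the 1976 compilation; it is a minimal illustration, not a substitute for the full tables.

```python
# Minimal illustration of using Shannon-style effective ionic radii:
# a cation-anion bond length is estimated as the sum of the two radii.
# The two entries below are widely quoted six-coordinate values from
# Shannon (1976); a real application would draw on the full tables.

SHANNON_RADII_CN6 = {  # effective ionic radii in angstroms, CN = 6
    ("Mg", +2): 0.72,
    ("O", -2): 1.40,
}

def estimated_bond_length(cation, anion, table=SHANNON_RADII_CN6):
    """Estimate a bond length (in angstroms) as cation + anion radius."""
    return table[cation] + table[anion]

# Rock-salt MgO: predicted Mg-O distance of about 2.12 angstroms, close
# to the observed value of roughly 2.11 angstroms (half of the ~4.21
# angstrom lattice parameter).
print(estimated_bond_length(("Mg", +2), ("O", -2)))  # about 2.12
```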
The mineral bobshannonite, [ 12 ] Na 2 KBa(Mn,Na) 8 (Nb,Ti) 4 (Si 2 O 7 ) 4 O 4 (OH) 4 (O,F) 2 , was named in his honor in recognition of his major contributions to the field of crystal chemistry in particular and mineralogy in general through his development of accurate and comprehensive ionic radii and his work on dielectric properties of minerals. [ 13 ] | https://en.wikipedia.org/wiki/Robert_D._Shannon |
Robert Eugene Rundle (1915 – 9 October 1963) was an American chemist and crystallographer . He was a professor at Iowa State University and fellow of the American Physical Society .
Rundle was born in Orleans, Nebraska in 1915. [ 1 ] [ 2 ] He attended University of Nebraska where he completed a bachelor of science in 1937 and a master's degree in 1938. He completed a Ph.D. in 1941 at the California Institute of Technology . [ 2 ] His advisors were Linus Pauling and J. Holmes Sturdivant. [ 3 ]
Rundle joined Iowa State University as an assistant professor of chemistry. From 1945 to 1946, he worked at Princeton University before returning to Iowa State University as a full professor. His research was focused on x-ray diffraction by crystals, inorganic solid-state chemistry , intermetallic and interstitial compounds , hydrogen-bonded substances, compounds of uranium and thorium, and electron-deficient compounds . He was a member of the American Crystallographic Association and served as the president of the organization in 1958. [ 2 ] He was a member of the American Association of University Professors . [ 4 ]
Rundle was a fellow of the American Physical Society . [ 2 ]
Rundle died from a stroke in Iowa Methodist Hospital on October 9, 1963. [ 2 ] He was survived by his wife and three sons. [ 4 ] | https://en.wikipedia.org/wiki/Robert_E._Rundle |
Robert Gerard Wilhelm (born June 27, 1960) is an American mechanical engineer.
Wilhelm holds the Kate Foster professorship in Mechanical and Materials Engineering at the University of Nebraska–Lincoln . From 2018 to 2023 he served as the Vice Chancellor for Research and Economic Development at UNL. [ 1 ]
Before joining the University of Nebraska–Lincoln, he served as Vice Chancellor for Research and Economic Development at the University of North Carolina at Charlotte . [ 2 ] There, he also held a faculty appointment as a professor. [ 3 ]
His expertise is in precision engineering and advanced manufacturing.
Bob Wilhelm was born June 27, 1960, in Mobile, Alabama . As a child, his family moved to Raleigh, North Carolina , where his father, William J. Wilhelm, earned a PhD in Civil Engineering at North Carolina State University . The family relocated to Morgantown, West Virginia , when William J. Wilhelm joined the West Virginia University civil and environmental engineering faculty. While there, Wilhelm's mother, Patricia Zietz, earned a Bachelor of Arts in elementary education and a Master of Arts in special education. [ 4 ] Later, his father joined Wichita State University as the Dean of the College of Engineering, and the family relocated to Wichita, Kansas. [ 5 ]
Wilhelm earned a Bachelor of Science in Industrial Engineering from Wichita State University in 1981, after beginning coursework at West Virginia University from 1977 to 1979. From 1981 to 1982, he studied the history of science and technology at the University of Leicester and the Ironbridge Gorge Museum as a Rotary Foundation Fellow. [ 3 ] In 1984, he earned a Master of Science in Industrial Engineering from Purdue University . In 1992, he received a Ph.D. in Mechanical Engineering from the University of Illinois . [ 3 ]
Early in his career, Wilhelm worked on naval structures and submarines. He also worked in restoration of historic structures including the original iron furnace at Ironbridge ( Coalbrookdale, United Kingdom ), Jackson's Mill ( Lewis County , West Virginia ), Staats Mill Covered Bridge and the Fink-Type Truss Bridge ( Hamden, New Jersey ).
Wilhelm has also worked in engineering at Cincinnati Milacron and the Palo Alto Laboratory of Rockwell Science Center . His engineering work has contributed to mechanical design and computational geometry, digital twin approaches to manufacturing for Caterpillar Inc. , aerospace design and manufacturing for the Boeing F/A-18E/F Super Hornet , and AI approaches to logistics for the US military program Dynamic Analysis and Replanning Tool .
He joined University of North Carolina at Charlotte in 1992 as a faculty member and later co-founded a high-tech company in Charlotte, OpSource. [ 6 ]
In 1994, he was recognized with the Young Investigator Award of the National Science Foundation. [ 7 ] [ 8 ] He was a founding faculty member at UNC Charlotte in 5 different PhD programs: Mechanical Engineering, Biology and Biotechnology, Information Technology, Optical Sciences, and Nanoscale Sciences. [ 9 ] Wilhelm was a very early and longstanding member of the Precision Engineering and Metrology Group at the University of North Carolina at Charlotte. [ 10 ]
Wilhelm's engineering research has addressed metrology and measurement theory for complex mechanical parts, [ 11 ] virtual manufacturing for design of manufacturing systems, [ 12 ] software, and automation and artificial intelligence for mechanical design and tolerance synthesis. [ 13 ]
As a higher education leader he has led university organizations at UNC Charlotte and UNL that envisioned, built and operated innovation campuses with partner companies working collaboratively on the university site. [ 14 ] In Charlotte, these organizations included The Charlotte Research Institute Campus at UNC Charlotte and the University Research Park. In Nebraska, Wilhelm led the Nebraska Innovation Campus during his time as vice chancellor at the University of Nebraska–Lincoln.
Wilhelm is a fellow of the National Academy of Inventors [ 15 ] and the International Academy for Production Engineering . [ 16 ]
In 2012, he received the Society of Manufacturing Engineers S.M. Wu Research Implementation Award. [ 17 ] | https://en.wikipedia.org/wiki/Robert_G._Wilhelm |
Robert Goulston Gilbert (born 1946) is a polymer chemist whose most significant contributions have been in the field of emulsion polymerisation . In 1970 he gained his PhD from the Australian National University , and worked at the University of Sydney from then until 2006. In 1982, he was elected a fellow of the Royal Australian Chemical Institute ; in 1994 he was elected a fellow of the Australian Academy of Science . In 1992, he was appointed full professor, and in 1999 he started the Key Centre for Polymer Colloids, funded by the Australian Research Council , the University and industry. He has served in leadership roles in the International Union of Pure and Applied Chemistry (IUPAC) , the world ‘governing body’ of chemistry. He was founding chair (1987–98) of the IUPAC Working Party on the Modelling of Kinetics Processes of Polymerisation, of which he remains a member, and is a member of the IUPAC scientific task groups on starch molecular weight measurements, and terminology. He was vice-president (1996–97) and president (1998–2001) of the IUPAC Macromolecular Division, and secretary of the International Polymer Colloids Group (1997–2001). As of 2007, he is Research Professor at the Centre of Nutrition and Food Science, University of Queensland , [1] where his research program concentrates on the relations between starch structure and nutrition.
His scientific advances have been based on developing novel theoretical and experimental methods to isolate individual processes in very complex systems. By revealing the mechanistic bases of these individual processes through a combination of theory and experiment, he has significantly deepened, and in some cases revolutionised, the understanding of whole systems in small ( gas-phase ) and giant ( polymer ) reaction dynamics. [ citation needed ]
Reactions in chemical processes are either unimolecular or bimolecular. The rate of a unimolecular reaction is an average over a vast ensemble of the rate coefficients for the microscopic events of collisional energy transfer and of reaction of a completely isolated molecule. Gilbert's work in the field of unimolecular processes started with the development of theorems for this relationship. [2] These theorems are elegant developments in matrix algebra , proving relations that had been previously known only for particular cases. His theorems also became the basis for numerical methods that he developed to perform the requisite calculations. For this purpose, he created a computer code, UNIMOL , which is widely used by researchers. [ citation needed ]
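In modern notation, the relationship in question is expressed by the energy-grained master equation; the following standard form is a sketch of the kind of equation codes such as UNIMOL solve, not a transcription of Gilbert's theorems:

$$\frac{\partial g(E,t)}{\partial t} = \omega \int_0^\infty \left[ P(E,E')\,g(E',t) - P(E',E)\,g(E,t) \right] \mathrm{d}E' - k(E)\,g(E,t),$$

where $g(E,t)$ is the population of reactant molecules with internal energy $E$, $\omega$ the collision frequency, $P(E,E')$ the probability that a collision carries a molecule from energy $E'$ to $E$, and $k(E)$ the microscopic rate coefficient of the isolated molecule. When the energy axis is discretized, the right-hand side becomes a matrix acting on a population vector, and the observed unimolecular rate coefficient emerges as the smallest-magnitude eigenvalue of that matrix; this is the setting in which the matrix-algebra theorems operate.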
With Prof J Troe he developed easily used approximate solutions for the pressure dependence of the rate coefficient. [3] He provided the first solutions for cases where angular momentum conservation needs to be incorporated. His methods are used by experimentalists to fit data and extrapolate to different pressure regimes, supplanting previous tools which were of dubious validity and accuracy. His coworkers and he obtained data on the collisional energy transfer process and used them to prove the conjecture that each collision involves only a small exchange of energy. He then developed the first rigorous means to calculate these quantities from basic theory, and the first physical model for the process. His work is widely used, both for basic understanding of the transition states and by atmospheric and combustion modellers. Predicting climate change and effects on the ozone layer rely critically on this modelling. [ citation needed ]
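For orientation, the pressure dependence being approximated reduces, at its simplest, to the Lindemann–Hinshelwood expression dressed with a broadening factor. The simplified form below is the version commonly quoted in combustion and atmospheric modelling, given here as a sketch rather than as Gilbert and Troe's full treatment:

$$k([\mathrm{M}]) = k_\infty \, \frac{P_r}{1+P_r} \, F, \qquad P_r = \frac{k_0[\mathrm{M}]}{k_\infty},$$

$$\log F \approx \frac{\log F_{\mathrm{cent}}}{1 + \left[ (\log P_r)/N \right]^2}, \qquad N \approx 0.75 - 1.27 \log F_{\mathrm{cent}},$$

where $k_0$ and $k_\infty$ are the low- and high-pressure limiting rate coefficients, $[\mathrm{M}]$ is the bath-gas concentration, and $F_{\mathrm{cent}}$ characterizes the depth of the falloff broadening ($F = 1$ recovers the pure Lindemann–Hinshelwood form).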
Emulsion polymerisation is the commonest means of making a wide variety of industrial polymers, such as paints, adhesives and tyre rubber. It is a complex process involving many simultaneous and separate sub-processes, and one for which historically only a few types of data were available. The complexity and the limited data types meant that conflicting assumptions could be forced to agree with experiment: there was no proper understanding of the process. Gilbert developed and applied mathematical and experimental tools whereby the effects resulting from individual processes could be isolated for the first time.
As with unimolecular reactions, the keys to the qualitative and quantitative understanding of the many processes in emulsion polymerisation are the rate coefficients of the individual steps. These steps are initiation (how quickly a growing chain starts), propagation (how quickly individual monomer units are added), radical loss processes (the termination and transfer of radical activity), and particle formation (nucleation). With Prof D Napper, Gilbert applied equations that he had solved in gas-phase chemistry to the area of emulsion polymerisation. This opened the way for him to develop—initially in collaboration with Napper—new theoretical and experimental methods for extracting the rate coefficients of elementary processes. He produced targeted data using these methods, particularly the time evolution of reaction rates and molecular-weight and particle-size distributions. This included novel types of systems, such as γ-radiolysis relaxation, in which events such as radical loss can be separated from radical propagation and growth.
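The population balance at the heart of this kinetics is usually written in Smith–Ewart form, which has the same gain-loss structure as the gas-phase master equation mentioned above; the equation below is the standard textbook statement, given as a sketch in common notation:

$$\frac{\mathrm{d}N_n}{\mathrm{d}t} = \rho\,(N_{n-1} - N_n) + k\left[(n+1)N_{n+1} - nN_n\right] + c\left[(n+2)(n+1)N_{n+2} - n(n-1)N_n\right],$$

where $N_n$ is the number of particles containing $n$ growing radicals, $\rho$ the radical entry rate coefficient, $k$ the exit (desorption) rate coefficient and $c$ the pseudo-first-order termination rate coefficient. The average number of radicals per particle, $\bar{n} = \sum_n n N_n / \sum_n N_n$, then controls the overall polymerisation rate.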
Gilbert's mathematical treatments and experimental techniques revealed the fundamentals controlling these steps by enabling each of the processes to be effectively studied in isolation. His advances allowed rate coefficients to be measured for virtually any process in emulsion polymerisation, values of these rate coefficients for simple systems to be predicted, and the reliability of new measurements to be checked from theory. [4] He used data from applying these methods to obtain the dependence of rate coefficients on controllable quantities, such as initiator concentration. Thus he tested existing models, developed new tests (some of which refuted extant models), and refined the older models that withstood his tests, making it possible to achieve consistency between supposed microscopic events and experiment, and, for the very first time in the field, to refute postulated models authoritatively. [ citation needed ]
Using these data he quantified radical loss from particles, showing that simple diffusion theory could explain the results. [5] Gilbert and his coworkers then revealed the mechanism for initiation in emulsion polymerisation by the entry of radicals into particles—in terms of fundamental thermodynamic and kinetic precepts—in a theory [6] that clarifies the process as being through production of surface-active species in the water phase. This model produced various qualitative predictions. One prediction, that of the independence of the entry-rate coefficient of the size and surface properties of particles, was widely seen as counterintuitive because of the deep-rooted belief in models that he had shown to be wrong. Subsequently, this prediction was experimentally verified by Gilbert and others. He used the understanding from this knowledge to develop a priori models for particle formation [7] and molecular-weight distribution. [8]
These developments led to an understanding of basic processes in free-radical polymerisation—the commonest industrial process. For the propagation reaction Gilbert led an international team that produced a methodology that overcame the long-standing problem of obtaining reliable rate coefficients for this process. [9] He showed that the Arrhenius parameters for different types of monomer take different classes of values, and developed qualitative and quantitative understanding of these classes from fundamental transition-state theory and quantum mechanics . [10] These new methods were based on those that he had developed in his work on unimolecular gas-phase processes. For the termination reaction, his data and models led to the qualitative and quantitative understanding of this process as diffusion-controlled.
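The quantities at issue are the Arrhenius parameters of the propagation step; as a reminder of the form involved (a textbook relation, with the clustering by monomer family paraphrased from the text above):

$$k_p = A \exp\!\left(-\frac{E_a}{RT}\right),$$

so that a plot of $\ln k_p$ against $1/T$ yields the activation energy $E_a$ from the slope and the pre-exponential factor $A$ from the intercept. Gilbert's observation was that chemically related monomers occupy distinct regions of $(A, E_a)$ space, a pattern that transition-state theory could then rationalize.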
As a result of Gilbert’s work, all individual processes in emulsion polymerisation, one of the commonest ways of making everyday products, are now qualitatively and quantitatively understood. It is now possible to polymerise simple systems and to predict the molecular architecture that will be formed under chosen conditions, while for more complex conditions, trends can be semiquantitatively predicted and understood. The international scientific and technical community in this field now uses the mechanistic knowledge that he obtained as the key to understanding current processes and creating new processes and products. His work has put this industrially important field on a rigorous scientific footing. [ citation needed ]
Gilbert and others have used this knowledge and understanding to develop means of creating new materials. One major example includes his role as leader of a collaborative project that has led to a new generation of surface coatings. He developed the first practical means to implement on industrially significant scales Dr E Rizzardo’s reversible addition-fragmentation chain transfer (RAFT) method of controlled radical polymerisation. [11]
Gilbert has developed a new way of understanding the biochemistry of the enzymatic processes involved in starch biosynthesis, in collaboration with Dr Melissa Fitzgerald, International Rice Research Institute , Manila. In this new field he applied the methods he had developed for understanding molecular-weight distributions in synthetic polymers to understanding those of natural ones. [12] He has thus created a powerful new technique for probing the enzymatic processes in starch biosynthesis in grains, again creating a methodology to obtain reliable mechanistic knowledge by isolating steps in highly complex systems. Each enzymatic step that creates individual chains (analysed by debranching the starch) can now be associated with particular regions in the molecular-weight distribution of a starch. This supported the applicability of one of two rival mechanistic postulates made by starch biochemists. [ citation needed ]
Robert Guillaumont [ 1 ] (born 26 February 1933 in Lyon) is a French chemist and honorary professor at the University of Paris-Saclay in Orsay (1967-1998), Member of the French Academy of Sciences [ 2 ] and the French Academy of Technologies . [ 3 ]
Robert Guillaumont is a specialist in radiochemistry and actinide chemistry. He prepared his doctorate at the Institut du Radium de Paris, Curie Laboratory, University of Paris VI (1966). He continued his research in this institute and then at the Radiochemistry Laboratory of the Orsay Institute of Nuclear Physics [ fr ] (1968–98), which he directed for twelve years (1979–90). He taught chemistry and radiochemistry at the University of Paris XI-Orsay (1967–98). His expertise covers the chemistry of the nuclear fuel cycle (from uranium mining to waste management and spent fuel reprocessing) and nuclear energy issues. He has been a member or chairman of numerous French and international committees dealing with the nuclear fuel cycle, nuclear energy, radioactive waste management and the synthesis and use of radionuclides for medicine. He was a member of the National Commission for the Evaluation of Research on Nuclear Materials and Radioactive Waste [ 4 ] (1994-2019).
Robert Guillaumont began his research in 1959 on the solution chemistry of protactinium . [ 5 ] He showed that the electronic filling of the 5f subshell begins with this element: the UV absorption spectrum of Pa 4+ is typical of a 5f 1 6d 1 transition (Pa atom: 5f 2 6d 1 7s 2 ). Together with his collaborators, he extended his methodology for studying the behaviour of radioelements in imponderable quantities to other actinides. The rest of his work can be linked to a common thread: the consequences of the filling of the 5f atomic subshell for the physicochemical properties of the actinides. This filling plays an essential role in the behaviour of the 15 actinides, especially when these electrons are delocalized, from protactinium (Pa) to americium (Am). It results in a rich variety of oxidation states for the first actinides (usually from 3 to 6) and in particular effects across the series (electronic states characterized by the quantum number J). He studied the thermodynamic consequences of the population of the 5f subshell on a series of solution complexes [ 6 ] (citric complexes of the trivalent actinides from Am to fermium (Fm)). He showed the existence of the "tetrad effect" for trivalent actinide complexes, an effect that reflects an extra stabilization of the ground state of the actinides at 1/4, 1/2 and 3/4 filling of the 5f subshell. Beyond curium (Cm), carrying out such experiments requires synthesizing isotopes of berkelium (Bk), einsteinium (Es) and Fm by nuclear reactions at particle accelerators [ 7 ] [ 8 ] [ 9 ] and separating them from the irradiated targets, which he did at Orsay. To conduct most of his research he developed a methodology for studying species, and the equilibria between species, in extremely dilute solutions (which radioactivity permits down to about 10 −14 M), and at the theoretical level he pushed the description of the thermodynamic behaviour of a few atoms in terms of deviations from the law of mass action , [ 10 ] which gave a foundation to chemical experiments on the 6d elements (Z > 103), produced atom by atom by radiochemists at accelerators. [ 11 ]
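The "few atoms" problem admits a simple statement; what follows is the usual textbook summary of the idea rather than Guillaumont's own formalism. For a single atom partitioned between two states, $\mathrm{A} \rightleftharpoons \mathrm{B}$, concentrations lose their meaning, and the mass-action constant

$$K = \frac{[\mathrm{B}]}{[\mathrm{A}]}$$

must be reread as a ratio of probabilities of finding the one atom in each state,

$$K = \frac{p_{\mathrm{B}}}{p_{\mathrm{A}}},$$

so that a single experiment returns only "A" or "B", and the macroscopic $K$ is recovered only as an average over many repeated, identical experiments. This is the footing on which atom-at-a-time chemistry of the heaviest elements rests.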
At the same time, he participated in the study of the thermodynamic [ 12 ] [ 13 ] and spectroscopic [ 14 ] [ 15 ] properties of the 5f (and 4f) elements in connection with electronic transfer between these elements and their environment: covalence in two-phase solvent extraction systems and crystal-field effects in solids, in particular single crystals examined at 4 K.
Finally, he continued his research on the fundamental problems of radionuclide migration in the environment [ 16 ] (speciation, concentration effects, retention on colloids ) and on the selective separation of actinides and lanthanides from the elements constituting spent nuclear fuel. [ 17 ] Guillaumont's research themes lie upstream of the many chemistry and radiochemistry problems encountered in the nuclear field: the chemistry of the actinides from uranium to curium in the various stages of nuclear fuel cycles and radioactive waste management.
He has published more than 200 scientific articles, popular articles [ 18 ] [ 19 ] [ 20 ] and has written several books. [ 21 ] | https://en.wikipedia.org/wiki/Robert_Guillaumont |
Robert Brill is an American archaeologist, best known for his work on the chemical analysis of ancient glass . Born in the US in 1929, Brill attended West Side High School in Newark, New Jersey, before going on to study for his B.S. degree at Upsala College (Brill 1993a, Brill 2006, Getty Conservation Institute 2009). Having completed his Ph.D. in physical chemistry at Rutgers University in 1954, Brill returned to Upsala College to teach chemistry. In 1960, he joined the staff of the Corning Museum of Glass as their second research scientist.
Throughout his career at Corning, where a four-year directorship punctuated his time as a research scientist, Brill was a forerunner in the scientific investigation of glass, glazes and colorants, developing and challenging the usefulness of emerging techniques. His pioneering work with the application of lead and oxygen isotope analysis in archaeology led him occasionally to add the investigation of metal objects to his portfolio so that, together, his published works number more than 160 (Brill and Wampler 1967). Perhaps the most famous of these is his Chemical Analyses of Early Glass , a sum of his 39 years of work and now a seminal reference guide in the field (Brill 1999).
Since 1982, Brill has served on the International Commission on Glass . Within this, he founded TC17, the technical committee for the Archaeometry of Glass, which lists among its aims the ‘promotion of collaboration among glass specialists in widely separated countries’ and the stimulation and encouragement of glass scientists ‘in developing countries’ (Archaeometry of Glass 2005). His internationalism is aptly demonstrated by his study of glasses from around the world, with his attentions most recently being focused on those from the Silk Road. It seems he was attracted by the lack of previous study and the need for further development in the field. Seeing a disparity between contemporary knowledge of glasses from the western world and those from East Asia, Brill was keen to add insight to a hitherto unexploited field and, as such, has gone on to contribute a great deal to Silk Road studies (Brill 1993b).
The 1960s saw Brill beginning to develop the analytical techniques that would define the early years of his career at Corning, and yet the scope of his interest within glass remained vast. Indeed, in 1961 Brill penned a letter to Nature with a colleague that was, according to Newton, a 'bombshell' in the field of glass-dating (1971, 3). Here Brill suggested that the rather enigmatic weathering crust found to form on buried glass objects over time could be used to date the object, in a method rather similar to dendrochronology , using the separate layers of the shiny lamination (Brill 1961, Brill and Hood 1961, Newton 1971). Whilst in dendrochronology the tree-rings simply record the tree's annual growth, Brill suggested that in the weathering crust on glass the accumulation of each layer of laminate might respond to some kind of annual climatic event (Brill 1961). Unfortunately, despite the examples of the method's successful application provided by Brill, such as the nearly accurate count of 156 layers on a bottle-base from the York River submerged in 1781 and excavated in 1935 (roughly one layer for each of the 154 years of burial), the technique largely failed to convince and did not see widespread adoption (Brill 1961, Newton 1971).
The most important of these techniques would prove to be Brill's pioneering application of lead isotope analysis, hitherto used only in geology , to archaeological objects. Brill first presented this idea at the 1965 Seminar in Examination of Works of Art, held at the Museum of Fine Arts Boston, but the first widely published account of the method seems to be Brill and Wampler's 1967 article in the American Journal of Archaeology . Here, Brill and Wampler outlined how the technique could be used to provenance the lead contents of archaeological objects to lead ore sources around the world, based on the isotopic signature of various leads, which relates them to ‘ores occurring in different geographical areas’ (1967, 63). These different areas have different signatures because they are of varying geological age, something reflected by the individual lead isotopes, which form only after the radioactive decay of uranium and thorium (Brill et al. 1965, Brill and Wampler 1967). While the lead isotope ratios used for provenancing differ between ore sources, they are not unique: geologically similar areas will yield similar lead isotope signatures (Brill 1970). Furthermore, if leads were salvaged and mixed in ancient times, the isotope ratio will be compromised (Brill 1970). Aside from these two limitations, there is little else that could affect the lead isotope reading an object would yield. As such, Brill's method was greeted enthusiastically and he went on to develop the technique, as well as oxygen isotope analysis, in his 1970 publication. Here he demonstrated how the technique could be used both to classify early glasses and, to a certain extent, to characterize the ingredients from which they were made (1970, 143).
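The logic of the method lends itself to a small computational sketch. The fragment below matches a measured signature to the nearest of several candidate ore fields by simple distance in ratio space; the reference values are invented placeholders, real studies compare error ellipses against published ore-field databases, and, as noted above, even a close match cannot distinguish geologically similar sources.

```python
# Sketch of isotope-ratio provenancing (illustrative only): match a
# measured lead isotope signature to the nearest candidate ore field.
# The reference signatures below are invented placeholders, not real
# ore-field data.
import math

# Hypothetical signatures: (206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb)
ORE_FIELDS = {
    "ore field A": (18.85, 15.66, 38.85),
    "ore field B": (18.40, 15.61, 38.35),
    "ore field C": (18.75, 15.63, 38.80),
}

def nearest_field(sample, fields=ORE_FIELDS):
    """Return the (name, ratios) of the field closest to the sample."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fields.items(), key=lambda item: distance(sample, item[1]))

measured = (18.83, 15.65, 38.82)  # hypothetical glass colorant reading
name, ratios = nearest_field(measured)
print(f"Closest match: {name} {ratios}")  # -> ore field A
```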
In 1965, Brill launched another important innovation in glass analysis: the comparison of interlaboratory experiments in order to verify analytical results (Brill 1965). ‘Originally inspired by a plea from W E S Turner’, according to Freestone, Brill first mooted his idea at the VIIth International Congress on Glass , in Brussels (Brill 1965a, I. Freestone, pers. comm. 2009). It wasn’t until the VIIIth International Congress on Glass in 1968, however, that Brill fully launched his concept of an ‘analytical round robin ’, having distributed a number of reference glasses to be tested in different laboratories using a range of current techniques including X-ray fluorescence and neutron activation analysis (1968, 49). When discussing his motive for the experiment, Brill aptly stated: 'The truth is that the chemical analysis of glasses is a difficult undertaking and still remains in some senses an art' (1968, 49). By conducting the round robin experiment, Brill hoped the results gathered from different laboratories would help ‘correlate [...] earlier results’ and ‘ calibrate future analyses in reference to one another’, as well as suggest which of the analytical procedures used was the most accurate and effective (1968, 49). The results of the round robin were presented at the IXth International Congress on Glass in 1971, and showed that, as Brill suspected, there was poor agreement for certain elements, which might therefore be ‘troublesome’ generally across analyses (1971, 97). These included calcium, aluminium, lead and barium, among others (Brill 1971). Aside from their corrective potential, the results, from 45 different laboratories in 15 countries, also provided an enormous data set from which, Brill suggested, the participants could ‘evaluate their own methods and procedures against the findings of other analysts’ (1971, 97). At the time, Brill could hardly have suspected that the data would go on to have such great import, but the preferred glass compositions that Croegaard generated from statistical analysis of the data were used successfully by many people until Brill's own reference guide was published in 1999 (I. Freestone, 'pers. comm.', 2009).
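The statistical idea behind the round robin can be shown in miniature. The sketch below pools one reference glass's results from several hypothetical laboratories and flags poor inter-laboratory agreement by relative spread; the numbers and the 5% threshold are invented for illustration, not taken from the 1971 data set.

```python
# Miniature round-robin comparison (invented numbers): pool each
# element's results across laboratories and flag poor agreement by
# coefficient of variation. The 5% threshold is illustrative.
from statistics import mean, stdev

results = {  # wt% oxide reported by five hypothetical labs
    "CaO":  [8.9, 9.4, 8.1, 9.8, 8.5],
    "SiO2": [66.2, 66.5, 66.1, 66.8, 66.4],
}

for oxide, values in results.items():
    m, s = mean(values), stdev(values)
    cv = 100 * s / m  # relative spread between laboratories, percent
    verdict = "troublesome" if cv > 5 else "acceptable"
    print(f"{oxide}: consensus mean {m:.2f} wt%, CV {cv:.1f}% -> {verdict}")
```

A consensus composition for each reference glass is then simply a robust average across laboratories, which is, in spirit, how the preferred values later derived from the round-robin data were constructed.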
Brill made various trips to the Middle East, including accompanying Theodore Wertime 's 1968 survey of the ancient technologies of Iran, alongside other notable figures such as the ceramicist Frederick Matson (UCL Institute for Archaeo-Metallurgical Studies 2007). In 1963-1964, the Corning Museum of Glass and the University of Missouri, following a long history of excavation at the necropolis of Beth She'arim , examined a huge slab, some 2,000 years old, that had been languishing in an ancient cistern (Brill and Wosinski 1965). Brill cannot recall who first suggested that this slab, measuring 3.4 m by 1.94 m, could be made of glass, but the only way to test the idea was to drill a core through its 45 cm thickness and analyse it (Brill 1967, Brill and Wosinski 1965). On analysing the core, Brill found that the glass was devitrified, stained and not very homogeneous, with wollastonite crystals present throughout (1965, 219.2). Investigation of the manufacturing technology suggested that producing such a slab would have required heating over eleven tons of batch material and sustaining it at around 1050 °C for between five and ten days (Brill 1967). His initial interpretation was that the glass must have been heated either from above or from the sides using a kind of tank furnace; this hypothesis was borne out when excavation underneath the slab suggested it had been melted in situ , in a tank whose floor was a bed of limestone blocks with a thin parting layer of clay (Brill and Wosinski 1965, Brill 1967). Brill's interpretation that the slab and its surroundings suggest 'some early form of reverberatory furnace ' was the first suggestion of the use of tank furnaces in early glassmaking (1967, 92). The evidence at Beth She'arim encouraged further innovative thought because, whilst the slab represented glass production on a grand scale, no associated evidence for glass working was found. Brill had already suspected that historical glassmaking occurred in two phases: the heavy 'engineering' stage, when the glass is formed from the batch ingredients, and the 'crafting' stage, when the glass is formed into artefacts (Brill, pers. comm., 2009). These stages could occur in combination at one location or at two different locales, and the time span of production after the initial glass melt is highly flexible. For Brill, the idea of this 'dual nature of all glassmaking' was 'crystallized' at Beth She'arim, where only raw glass production was represented, and would be reinforced later by the contrasting evidence, where working was favoured over production, found at Jalame , as discussed below (Brill, pers. comm., 2009).
Aside from the published results of his analytical round robin and his lead and oxygen isotope studies in the early 1970s, Brill published comparatively little during that decade, perhaps owing to his post as director of The Corning Museum of Glass. The publications he did pen are largely concerned with the development of lead isotope analysis and are listed in the further reading section. Before Brill could be named director, however, the museum was blighted by an enormous flood, 'possibly the greatest single catastrophe borne by an American museum' according to Buechner , Brill's successor in 1976 (1977, 7).
The flood was brought to Corning by Hurricane Agnes , a tropical storm that filled the Chemung River system to bursting until, on the morning of June 23, 1972, the river breached its banks and devastated the town (Martin and Edwards 1977). The Corning Glass Centre was under around twenty feet of water on the lower level's west side, while the museum itself was filled to a water level of five feet four inches (Martin and Edwards 1977). Some 528 of the museum's objects were damaged, the library's rare books were ruined, and paper index systems, data and catalogues were all lost (Martin and Edwards 1977). In the wake of this destruction, Brill was named director, and his time in the position, from 1972 to 1975, was spent overseeing the complete restoration of the museum. Buechner praises how Brill 'painstakingly' prepared the insurance claim that would support the museum throughout the renovation process and facilitate the replacement of many wonderful objects (1977, 7). Under Brill's auspices, the Corning Museum of Glass reopened just 39 days after the event, on August 1, but it would be another four years before the collection and library were restored to their former glory (Buechner 1977).
In 1982, Brill joined the International Commission on Glass (Corning Museum of Glass 2009). The Commission functions through various technical committees, among which Brill saw an opening for TC17, the committee for the Archaeometry of Glass, which he founded shortly after joining. The main purpose of TC17, whose members met for the first time in Beijing in 1984, is 'to bring together glass scientists, archaeologists and museum curators to present and discuss the results of research on early glass and glassmaking and on the conservation of historical glass objects', as expressed in its mission statement (Archaeometry of Glass 2005). Brill chaired the committee until 2004 and, on his departure, received the W E S Turner Award from the International Commission on Glass in recognition of his contribution as founder (Corning Museum of Glass 2009).
One long-running project of the Corning Museum of Glass culminated in the excavation report from its many field seasons at the ancient glass factory at Jalame , in Late Roman Palestine (Brill 1988, Schreurs and Brill 1984). Brill was called upon to conduct scientific investigations of the huge amount of material generated at the site, in order to exploit the full potential of the artefacts; after all, the site was being excavated specifically because of its role as a glass factory (Brill 1988). Of the vast quantity of glass fragments from Jalame, both vessel sherds and cullet , most were aqua and green, and all were soda-lime-silica glasses melted in highly reducing conditions (Schreurs and Brill 1984). Where the melting conditions had been increasingly reducing, a ferri-sulfide chromophore complex was shown to have formed, changing the aqua-blue colour of the glass to an olive or even an amber shade (Schreurs and Brill 1984). Despite these colour variations, Brill's further chemical analysis showed the vessel glasses to be highly homogeneous in composition, apart from a clear divide whereby around 40 glasses demonstrated the intentional addition of manganese (Brill 1988). Brill also investigated the furnace at Jalame, nicknamed the Red Room, in which there was a mysterious absence of glass finds of any kind (Brill 1988). Whilst work at Beth She'arim had eventually identified five firing chambers heating the one tank, the fragmentary remains at Jalame made the furnace set-up very difficult to interpret, beyond the belief that there had been only one firing chamber (Brill 1988).
In the late eighties, Brill contributed to various studies with the Institute of Nautical Archaeology, following the excavation of a number of exciting shipwrecks, including the Serçe Liman and the Ulu Burun (Barnes et al. 1986, Brill 1989). Brill's own technique of lead isotope analysis provided a means of provenancing items aboard ship, and thus of determining the ship's origin and her ports of call. The excavators of the Serçe Liman wanted to know whether she was Byzantine or Islamic, a complicated question for lead isotope analysis because the lead ores of the Eastern Mediterranean are geologically similar and their signatures therefore overlap (Barnes et al. 1986). Using 900 lead net sinkers divided into six loose groupings, Brill found groups III, V and VI to be Byzantine, that is, with ores found in modern-day Turkey (Barnes et al. 1986). Group I, however, was taken to be most indicative of the ship's origin; this group contained net sinkers, but also two ceramic glazes and three glass vessels, all sharing virtually identical lead ores with only one isotopic match, 'an ore from Anguran, northwest of Tehran ', according to Barnes et al. (1986, 7).
Brill's submissions to the XIVth International Congress on Glass , which took place in New Delhi in 1986, represent the origins of his work on the Great Silk Road, the trade route carrying goods from the East through India to Europe. Here, chemical analysis of Early Indian glasses helped Brill determine the ingredients and techniques of production, 'to make certain broad generalizations as to regions or periods of manufacture', and therefore to follow an object's movement along the trade route (1987, 1). For the XIVth Congress, Brill conducted atomic absorption spectroscopy (AAS) and optical emission spectroscopy (OES) on samples of 38 glasses from India, and the success of his method was made clear when he was able to separate 21 samples from those made in the Middle East and Europe (Brill 1987). The glasses were shown to have mixed-alkali compositions, a feature that is 'rare among glasses from more westerly sources', and Brill therefore concluded that they had been manufactured in India (1987, 4). Brill also collaborated with McKinnon to conduct chemical analyses of glass samples from Sumatra , Indonesia, the results of which would be the 'first data of their kind from this island' (1987, 1). McKinnon and Brill hoped that the results of the study, which also used samples from Java, another important location on the Silk Road, would 'stimulate a greater awareness of glass in the economy [...] of ancient Sumatra and further new lines of research in the archaeology of the region' (1987, 1).
The beginning of the 1990s saw Brill accorded the Archaeological Institute of America's Pomerance Award for scientific contributions to archaeology; the decade, however, mostly reflects Brill's continuing dedication to Asian glasses and the study of the Silk Road (Archaeological Institute of America 2009). In Scientific Research in Early Chinese Glass , Brill reflected that, in comparison with knowledge of glassmaking in the West, 'little is known about Chinese glass and about the role it played in the overall unfolding of glass history on a worldwide basis' (1991, vii). One reason is that glass was never produced in the East in the quantities seen in the West; another is that archaeological Chinese glasses are often prone to analytical problems (Brill 1991). These difficulties were reflected later in the publication where, following the chemical investigation of 71 samples, Brill found that identifying the 'basic formulation', or 'any of the primary batch materials', of the glasses was still almost impossible (Brill et al. 1991). Brill had greater success in differentiating between Chinese glass samples using lead isotope analysis, a method that has proven effective in identifying Chinese glass in the first instance, since the leads used there differ from those found anywhere else in the world (Brill, Barnes et al. 1991). Brill found his Chinese samples to fall into two distinct groups, possessing on one hand the highest, and on the other the lowest, lead isotope ratios he had ever encountered (Brill, Barnes et al. 1991). As such, he was able to show that despite the striking similarity in the glasses' chemical composition and appearance, the ores from which their leads were sourced must have come from very geologically different mines (Brill, Barnes et al. 1991).
Brill conducted further investigations of ancient Asian glasses for the Nara Symposium on the Silk Road's maritime route in 1991, 'to demonstrate [...] that chemical analyses can be useful for learning how glass was traded along the Desert, Steppe, and Maritime Routes of the Silk Road' (1993a, 71), as well as providing a more technical discussion of glass and glassmaking in China for the Glass Art Society's Toledo Conference in 1993 (Brill 1993b). Further lead isotope analysis, this time on Chinese and central Asian pigments, was conducted with a larger team for the Getty's Conservation of Ancient Sites on the Silk Road, in which Brill et al. launched studies with considerable potential for understanding 'chronological or stylistic differences among Buddhist cave paintings', or for 'distinguish[ing] between original and repainted parts of individual works' (1993, 371).
In 1999, Brill published the sum of 39 years' worth of results from his chemical investigations at Corning in two volumes of reference material, with a third forthcoming (Brill 1999). Brill was reluctant to publish the data without accompanying interpretation, but he felt the most important thing was to release the material quickly into a wider sphere, made 'readily accessible to the scientific community' (1999, 8). Of Corning's 10,000 research artefacts, the master catalogue contains 6,400 samples, presented as an abbreviated catalogue, or AbbCat, in the two volumes (1999, 11). Nineteen geographical, typological or chronological categories of glass samples are recorded, spanning Brill's various research projects and collaborations, from Egypt to the East (Brill 1999). The work also records the results of oxygen isotope analyses, reminding us that Brill was ever one for integrating different investigative methods.
Since 2000, Dr Brill's interest in Silk Road studies and ancient glass compositions has continued, but his publication rate has slowed somewhat. His years of prolific publication, however, and his willingness to analyse glass from almost every situation have provided the archaeometry of glass with a bounty of reference material, as reflected by the Chemical Analyses of Early Glasses . Despite his official retirement from the Corning Museum of Glass on May 31, 2008, he returned to the laboratory the next day and continues to work, showing no intention of enjoying a retirement proper any time soon (Brill, pers. comm. , 2009). | https://en.wikipedia.org/wiki/Robert_H._Brill |
Robert Howard Crabtree FRS [ 2 ] (born 17 April 1948) is a British-American chemist . He is the Conkey P. Whitehead Professor Emeritus of Chemistry at Yale University in the United States, and a naturalized American citizen. [ 3 ] Crabtree is particularly known for his work on " Crabtree's catalyst " for hydrogenations and for his textbook on organometallic chemistry . [ 4 ]
Robert Howard Crabtree studied at Brighton College (1959–1966) and earned a Bachelor of Arts degree from the University of Oxford in 1970, where he was a student at New College and studied under Malcolm Green . He received his PhD from the University of Sussex in 1973, supervised by Joseph Chatt . [ 5 ]
After his PhD, he worked with Hugh Felkin at the Institut de Chimie des Substances Naturelles at Gif-sur-Yvette , near Paris, first as a postdoctoral fellow (1973–1975), then as attaché de recherche (1975–1977), and by the end of that time as chargé de recherche. In 1977 Crabtree took an assistant professorship in Inorganic Chemistry at Yale University . He served as associate professor from 1982 to 1985 and as full professor from 1985 to 2021. [ 6 ] In retirement, he serves as an emeritus professor of chemistry. [ 7 ]
Robert Crabtree is renowned for his influential work on hydrogenation , particularly his contributions to the development of the Crabtree catalyst . [ 10 ] This catalyst, which uses iridium as the active metal, displays exceptional efficiency and regio- and stereoselectivity in hydrogenation reactions. Notably, when terpinen-4-ol undergoes hydrogenation, the Crabtree catalyst exhibits a remarkable 1000:1 preference for adding hydrogen to the substrate face bearing the OH group; by contrast, hydrogenation with palladium on carbon achieves a selectivity ratio of only 20:80. The chelation of the alcohol to the catalyst is evident from the identification of a catalyst-substrate complex involving norbornene-2-ol. [ 11 ] [ 12 ]
During his early research, Crabtree also focused on C–H bond activation . [ 13 ] His groundbreaking contribution in this area was to reverse the hydrogenation reactions he had developed earlier, particularly in stoichiometric alkane dehydrogenation . He used tert-butylethylene as a hydrogen acceptor to facilitate the release of hydrogen during the dehydrogenation of cyclooctane , forming bound cyclooctadiene . This was one of the earliest instances of intermolecular C–H activation by a homogeneous metal complex , and the achievement played a significant role in his tenure award and academic success.
Another part of Crabtree's research centers on a novel form of hydrogen bonding that involves metal hydrides , resulting in unconventional bonding interactions. [ 14 ] [ 15 ] Traditional hydrogen bonds feature a protic hydrogen donor and an electronegative acceptor, while Crabtree's discoveries include aromatic ring π electrons as weaker acceptors in X–H···π hydrogen bonds (X = N, O). Surprisingly, Crabtree also observed Y–H σ bonds (Y = B or a metal) acting as acceptors, leading to X–H···H–Y structures with significantly shorter H···H distances compared to typical contacts. Known as "dihydrogen bonds," these interactions exhibit bond lengths of approximately 1.8 Å, in contrast to the regular H···H length of 2.4 Å. Crabtree's findings shed light on the diverse nature of hydrogen bonding, with implications for understanding molecular structures and designing catalysts with tailored properties.
Crabtree has made significant contributions to carbene chemistry, particularly the exploration of mesoionic carbenes (MICs), so-called "abnormal carbenes". These carbenes offer advantages as ligand systems in organometallic complexes and catalytic applications. Unlike C2-coordinated imidazolylidenes, mesoionic carbenes possess only charge-separated electronic resonance structures , allowing greater adaptability to metal centers within catalytic cycles. Crabtree has developed novel methods for generating and isolating abnormal carbenes, providing insights into their structures and stability under different conditions. Notably, he introduced the first example of an abnormal carbene complex, an iridium complex with a C4-coordinated imidazolylidene, which found application in transfer hydrogenation catalysis. [ 16 ]
Crabtree's research has significantly advanced understanding of O–O bond formation in manganese di-μ-oxo dimers involved in oxygen evolution . [ 17 ] [ 18 ] He has put forward a simplified proposal for the mechanism by which oxygen is generated when a manganese di-μ-oxo dimer reacts with NaClO . Oxidation of the IV/IV dimer produces a Mn(V)=O dimer; the O–O bond could then form through nucleophilic attack of OH− on the oxo group. Oxygen-18 isotope labeling experiments have demonstrated that the oxygen atoms in the evolved molecular oxygen originate from water . The system thus serves as a functional model for photosynthetic water oxidation.
Crabtree has made significant contributions in C–H bond activation , water oxidation , and hydrogenation . His approach entails selecting unique projects, conducting early critical experiments, transitioning between problems, developing air-stable catalysts, and educating through his writing.
| https://en.wikipedia.org/wiki/Robert_H._Crabtree |
Robert Miller Hazen (born November 1, 1948) is an American mineralogist and astrobiologist . He is a research scientist at the Carnegie Institution of Washington 's Geophysical Laboratory and Clarence Robinson Professor of Earth Science at George Mason University , in the United States . Hazen is the Executive Director of the Deep Carbon Observatory .
Hazen was born in Rockville Centre , New York, on November 1, 1948. His parents were Peggy Hazen ( née Dorothy Ellen Chapin; 1918–2002) and Dan Hazen ( né Daniel Francis Hazen, Jr.; 1918–2016). [ 3 ] [ 4 ] He spent his early childhood in Cleveland , near a fossil quarry where he collected his first trilobite at the age of about 9. [ 5 ]
The Hazen family moved to New Jersey , where Robert's eighth-grade teacher, Bill Welsh, observed Robert's interest in his collection of minerals. Hazen later recalled "He gave me a starter collection of 100 specimens, mineral field guides, and mimeographed directions to Paterson and Franklin, New Jersey." [ 6 ] Hazen also had an early interest in music, starting the piano at age 5, the violin at 6 and the trumpet at 9. [ 7 ]
Hazen earned his B.S. and S.M. (Master of Science) in Earth Science at the Massachusetts Institute of Technology , completing both in 1971. He started with the intention of going into chemical engineering, but he was captivated by the enthusiasm of David Wones and converted to mineralogy . With Wones as advisor, he completed a master's thesis on cation substitution in trioctahedral micas ; his publication in American Mineralogist was his first to be highly cited. [ 8 ] [ 9 ] He completed a Ph.D. in Mineralogy & Crystallography at Harvard University in 1975. His thesis, with Charles Burnham as advisor, involved learning to use a four-circle diffractometer for high-pressure X-ray crystallography and applying it to olivine . This became a focus of his early career. [ 8 ] [ 10 ] [ 6 ]
While a NATO Postdoctoral Fellow at Cambridge University in England, Hazen worked with Charles Prewitt to determine empirical relations for the effect of temperature and pressure on interatomic distances in oxides and silicates . [ 8 ] [ 11 ]
In 1976, Hazen joined the Carnegie Institution's Geophysical Laboratory as a research associate. [ 1 ] After a brief stint measuring optical properties of lunar minerals with Peter Bell and David Mao , he started to do X-ray crystallography with Larry Finger. [ 8 ] He later recalled, "It was a match made in mineralogical heaven: Larry loved to write code, build machines, and analyze data; I loved to mount crystals, run the diffractometers, and write papers." [ 6 ] They collaborated for two decades and determined about a thousand crystal structures at variable pressures and temperatures, work summarized in their 1982 book Comparative Crystal Chemistry . [ 8 ] [ 12 ]
Much of the work that Hazen was doing could be classified as mineral physics , a cross between geophysics and mineralogy. Although the field had pioneering contributions from the Nobel Prize winner Percy Bridgman and a student of his, Francis Birch , in the early- to mid-20th century, it did not have a name until the 1960s, and in the 1970s some scientists were concerned that a more interdisciplinary approach was needed to understand the relationship between interatomic forces and mineral properties. Hazen and Prewitt co-convened the first mineral physics conference; it was held on October 17–19, 1977 at the Airlie House in Warrenton, Virginia . [ 13 ]
Cooled to very low temperatures, some materials experience a sudden transition where electrical resistance drops to zero and any magnetic fields are expelled. This phenomenon is called superconductivity . Superconductors have a host of applications including powerful electromagnets , fast digital circuits and sensitive magnetometers , but the very low temperatures needed make the applications more difficult and expensive. Until the 1980s, no superconductors existed above 21 K (−252.2 °C). Then in 1986 two IBM researchers, Georg Bednorz and K. Alex Müller , found a ceramic material with a critical temperature of 35 K (−238.2 °C). This set off a frenzied search for higher critical temperatures. [ 15 ]
A group led by Paul Chu at the University of Houston explored some materials made of yttrium , barium , copper and oxygen (YBCO) and was the first to obtain a critical temperature above the boiling point of liquid nitrogen . The YBCO samples contained a mixture of black and green phases, and although the researchers knew the average composition, they did not know the compositions of the two phases. In February 1987, Chu turned to Mao and Hazen because they could determine the composition of small mineral grains in rocks. It took Mao and Hazen a week to determine the compositions; the black phase, which turned out to be the superconductor, was YBa 2 Cu 3 O 7−δ . [ 16 ]
Mao and Hazen determined that the crystal structure of the superconducting phase was like that of perovskite , an important mineral in Earth's mantle . [ 17 ] Subsequently, Hazen's group identified twelve more high-temperature oxide superconductors, all with perovskite structures , and worked on organic superconductors . [ 18 ]
By the mid-1990s, Hazen felt that his research had reached a "respectable plateau" where the main principles of how crystals compress were known. The questions he was asking were increasingly narrow and the answers rarely surprising. So he changed research directions to study life's chemical origins. [ 19 ] This opportunity came when a colleague at George Mason University , Harold Morowitz , realized that the temperature and pressure at a hydrothermal vent might change the properties of water, allowing chemical reactions that ordinarily require the help of an enzyme . Enlisting the help of Hatten Yoder , a specialist in high pressure mineralogy, they tried subjecting pyruvate in water to high pressure, hoping for a simple reaction that would return oxaloacetate . Instead, an analysis by an organic geochemist, George Cody, found that they obtained tens of thousands of molecules. [ 20 ]
The publication of their results, which seemed to support the deep sea vent hypothesis , [ 21 ] met with heavy criticism, especially from Stanley Miller and colleagues, who believed that life emerged at the surface. Along with the general criticism that organic compounds would not survive long in hot, high-pressure conditions, they pointed out several flaws in the experiment. In his book Genesis , Hazen acknowledges that Stanley Miller "was basically right" about the experiments, but argues that "the art of science isn't necessarily to avoid mistakes; rather, progress is often made by making mistakes as fast as possible, while avoiding making the same mistake twice." [ 22 ] In subsequent work, the group formed biomolecules from carbon dioxide and water and catalyzed the formation of amino acids using oxides and sulfides of transition metals , finding that different transition elements catalyze different organic reactions. [ 18 ] [ 6 ]
Organic molecules often come in two mirror-image forms, often referred to as "right-handed" and "left-handed". This handedness is called chirality . For example, the amino acid alanine comes in a right-handed (D-alanine) and a left-handed (L-alanine) form. Living cells are very selective, choosing amino acids only in the left-handed form and sugars in the right-handed form. [ 23 ] However, most abiotic processes produce an equal amount of each. Somehow life must have developed this preference ( homochirality ); but while scientists have proposed several theories, they have no consensus on the mechanism. [ 24 ]
Hazen investigated the possibility that organic molecules might acquire a chiral asymmetry when grown on the faces of mineral crystals. Some, like quartz , come in mirror-image forms; others, like calcite , are symmetric about their centers but their faces come in pairs with opposite chirality. [ 25 ] With Tim Filley, an expert at organic chemical analysis, and Glenn Goodfriend, a geochemist, Hazen cleaned large calcite crystals and dipped them into aspartic acid . They found that each face of the crystal had a small preference for either left- or right-handed forms of aspartate. They proposed that a similar mechanism might work on other amino acids and sugars. [ 26 ] This work attracted a lot of interest from the pharmaceutical industry, which needs to produce some of their drugs with a pure chirality. [ 8 ]
At a Christmas party in 2006, the biophysicist Harold Morowitz asked Hazen whether there were clay minerals during the Archean Eon. Hazen could not recall a mineralogist ever having asked whether a given mineral existed in a given era, [ 27 ] [ 28 ] and it occurred to him that no one had ever explored how Earth's mineralogy changed over time. He worked on this question for a year with his closest colleague, geochemist Dimitri Sverjensky at Johns Hopkins University , and some other collaborators including a mineralogist, Robert Downs; a petrologist , John Ferry; and a geobiologist , Dominic Papineau. The result was a paper in American Mineralogist that provided a new historical context to mineralogy that they called mineral evolution . [ 29 ]
Based on a review of the literature, Hazen and his co-authors estimated that the number of minerals in the Solar System has grown from about a dozen at the time of its formation to over 4300 at present (as of 2017, the latter number had grown to 5300 [ 30 ] ). They described a systematic increase in the number of mineral species over time, and identified three main eras of change: the formation of the Solar System and planets; the reworking of crust and mantle and the onset of plate tectonics ; and the appearance of life. After the first era, there were 250 minerals; after the second, 1500. The remainder were made possible by the action of living organisms, particularly the addition of oxygen to the atmosphere. [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ] This paper was followed over the next few years by several studies concentrating on one chemical element at a time and mapping out the first appearances of minerals involving each element. [ 36 ]
Hazen and his colleagues started the Carbon Mineral Challenge , a citizen science project dedicated to accelerating the discovery of "missing" carbon-bearing minerals . [ 37 ]
As the Clarence B. Robinson Professor at George Mason University, Hazen developed innovative courses to promote scientific literacy in both scientists and non-scientists. [ 38 ] With physicist James Trefil , he developed a course that they described as "science appreciation", aimed at non-scientists. It was organized around a total of 20 "Great Ideas of Science" that were later reduced to 18. [ 39 ] [ 40 ] In addition to writing about their ideas in several magazines, they turned the course into a book, Science Matters: Achieving Scientific Literacy. They used the principles to organize explanations of a "vast number of socially significant, fundamental, or environmentally crucial topics." [ 41 ] This was published with an amount of advance publicity that was unusual for a popular science book, including an article they wrote for the New York Times Sunday Magazine , [ 42 ] praise from prolific author Isaac Asimov and physics Nobelist Leon Lederman , and a publicity tour. [ 41 ] For an article in Science about the book, they provided the author with the original list of 20 ideas and invited readers to send in their comments. [ 39 ] About 200 readers responded, generally supporting the idea of such a list while vehemently criticizing many of the particulars, including an informal style and sometimes vague language. Particularly criticized were numbers 1 ("The universe is regular and predictable") and 16 ("Everything on the earth operates in cycles"). [ 43 ] Hazen and Trefil argued, in defense of point 1, that it was not intended as a defense of determinism and that they covered unpredictable phenomena like chaos ; [ 43 ] but they also used the responses to modify the list of ideas in subsequent editions.
Hazen and Trefil went on to write three undergraduate textbooks: The Sciences: An Integrated Approach (1993), [ 44 ] The Physical Sciences: An Integrated Approach (1995), [ 45 ] and Physics Matters: An Introduction to Conceptual Physics (2004). [ 46 ] Hazen used these as the basis for a 60-lecture video and audio course called The Joy of Science . [ 47 ] [ 38 ]
In 2008, Hazen was an outgoing member of the AAAS Committee on Public Understanding of Science and Technology. Noting that although it is important for scientists to engage with the public, doing so does not help them earn tenure, he and his wife Margee proposed a new award, the Early Career Award for Public Engagement with Science, and established a fund for it. [ 48 ] The first award, with a monetary prize of $5,000, was announced in 2010. [ 49 ]
Hazen is a Fellow of the American Association for the Advancement of Science .
The Mineralogical Society of America presented Hazen with the Mineralogical Society of America Award in 1982 and the Distinguished Public Service Medal in 2009. [ 38 ] [ 50 ] In 2016, he received its highest award, the Roebling Medal . [ 8 ] [ 6 ] He also served as Distinguished Lecturer and is a Past President of the Society. A mineral that was discovered in Mono Lake was named hazenite in his honor by Hexiong Yang, a former student of his. [ 34 ]
In 1986, Hazen received the Ipatieff Prize , which the American Chemical Society awards in recognition of "outstanding chemical experimental work in the field of catalysis or high pressure". [ 51 ]
For the book The Music Men , he and his wife Margaret received the Deems Taylor Award from the American Society of Composers, Authors and Publishers in 1989. [ 52 ]
For his popular and educational science writing, Hazen received the E.A. Wood Science Writing Award from the American Crystallographic Association in 1998, [ 53 ]
In 2012, the State Council of Higher Education for Virginia presented Hazen with its Outstanding Faculty Award. [ 54 ]
Hazen has presented numerous named lectures at universities. He gave a Directorate for Biological Sciences Distinguished Lecture at the National Science Foundation in 2007, [ 55 ] and was named the Sigma Xi Distinguished Lecturer for 2008–2010. [ 56 ] [ 57 ]
In 2019, Hazen was named a Fellow of the American Geophysical Union. [ 58 ]
In 2021, Hazen was awarded the Medal of Excellence in Mineralogical Sciences from the International Mineralogical Association . [ 59 ]
Hazen is author of more than 350 articles and 20 books on science, history, and music.
Hazen has 289 refereed publications that have been cited a total of over 11,000 times, for an h-index of 58.
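For readers unfamiliar with the metric, an h-index of 58 means that 58 of a researcher's papers have each been cited at least 58 times. A minimal sketch of the computation, using made-up citation counts rather than Hazen's actual record:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank cites
        else:
            break
    return h

# Made-up example: five papers cited 10, 8, 5, 4 and 3 times -> h = 4.
print(h_index([10, 8, 5, 4, 3]))
```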
Hazen's wife, Margee (née Margaret Joan Hindle), is a science writer and published historian. [ 65 ] Her late father, Howard Brooke Hindle, PhD (1918–2001), was a historian who studied the role of material culture in the history of the United States and served as Director of the National Museum of American History from 1974 to 1978. [ 66 ] Hazen's late brother, Dan Chapin Hazen, PhD (1947–2015), was an academic research librarian affiliated with the libraries at Harvard , particularly recognized for his contributions to the Center for Research Libraries and his advocacy of collections from Latin America; Harvard has memorialized him by establishing two chairs in his name. [ 67 ] The Hazens have two children: Benjamin Hindle Hazen (born 1976) and Elizabeth Brooke Hazen (born 1978). [ 1 ] | https://en.wikipedia.org/wiki/Robert_Hazen |
Sir Robert Howson Pickard FRS (27 September 1874 – 18 October 1949) was a chemist who did pioneering work in stereochemistry and also worked for the cotton industry in Lancashire . He was involved in educational administration and was Vice-Chancellor of the University of London from 1937 to 1939. [ 1 ] He was Principal of Battersea Polytechnic (which later became the University of Surrey ) from 1920 to 1927. [ 2 ]
He was born in Balsall Heath , Birmingham , Warwickshire (now the West Midlands ), England, [ 1 ] the son of Joseph Henry Pickard, a tool maker, and his wife Alice, the daughter of Robert Howson of Birmingham. [ 3 ] From 1883 to 1891 [ 3 ] he attended King Edward VI's Grammar School . In 1891 he began studying chemistry at Mason Science College (which later became the University of Birmingham ) under Percy F. Frankland and obtained a first-class BSc , then awarded by the University of London . In 1896 he attended the University of Munich as an 1851 Exhibitioner , [ 3 ] being awarded a PhD summa cum laude in 1898. [ 1 ]
After a year of chemical research in Birmingham, he was appointed head of chemistry at Blackburn Technical School in Blackburn , Lancashire, and was its principal from 1908 to 1920. While at Blackburn he was involved in the publication of 35 papers in the Journal of the Chemical Society . He did original work on chemical structure and optical isomerism and as a result became a Fellow of the Royal Society (FRS) in 1917. Pickard was Principal of Battersea Polytechnic (which later became the University of Surrey ) from 1920 to 1927. [ 2 ]
He was also consulted by the cotton industry and later served as director of the British Cotton Industry Research Association (then the Shirley Institute) in Manchester from 1927 to 1943, expanding its technical facilities extensively in 1936. [ 1 ]
He had considerable organisational skills and was active in several scientific organisations, including the Royal Society (council); the Society of Chemical Industry (president 1932–33); the Royal Institute of Chemistry [ 4 ] (now the Royal Society of Chemistry ) (president 1936–1939); the Chemical Society (vice-president); the now-defunct Chemical Council (chairman); and various positions over a long period with the University of London , including Vice-Chancellor , 1937–1939. [ 1 ]
He married Ethel Marian Wood in 1901. She died in 1944. They had a daughter, who predeceased her father, and a son. He died at his son's home in Headley , Surrey . [ 1 ] | https://en.wikipedia.org/wiki/Robert_Howson_Pickard |
Robert Hurst , CBE , GM (3 January 1915 – 16 May 1996) was a New Zealand -born scientist. He was the first director of the experimental fast-breeder reactor complex at Dounreay , and later the director of the British Ship Research Association. During World War II , he worked in bomb disposal and mine detection, and was awarded the George Medal for his work as part of the team that defused the first V-1 flying bomb found intact in Britain.
Hurst was born in Nelson, New Zealand in 1915, [ 1 ] the son of Percy Cecil Hurst, a commercial traveller, and his wife Margery Whitwell. He attended Nelson College from 1927 to 1932, [ 2 ] and then was a student at Canterbury University College in Christchurch , from where he graduated with a Master of Science degree in physical chemistry in 1939. [ 1 ] [ 3 ] While at Canterbury, Hurst was a member of a student group assisting European Jews to escape the Nazis. [ 1 ]
Following his graduation, Hurst was en route to the United Kingdom by ship to undertake doctoral studies at Emmanuel College, Cambridge , working his passage as a radio operator, when World War II broke out. [ 1 ] [ 3 ]
In 1940, after completing the first year of his PhD studies, Hurst volunteered as a civilian experimental scientist with the Ministry of Supply, undertaking bomb disposal and mine detection duties. In 1944, he was part of the team, led by John Pilkington Hudson , that defused the first V-1 flying bomb found intact in Britain. It took the team a week of painstaking work to successfully complete the task, made more difficult by toxic fumes from the explosive and ongoing bombing raids. As a result, Hurst was awarded the George Medal: his citation noted his "sustained courage when engaged in hazardous operations". [ 1 ] [ 3 ] [ 4 ]
At the end of the war, Hurst saw service in Berlin, assisting with defusing unexploded Allied bombs. [ 1 ]
Hurst returned to Cambridge after the war to complete his PhD in physical chemistry. In 1948, he joined the Atomic Energy Research Establishment at Harwell , working first on the chemistry of plutonium , before heading a team that investigated the potential of different types of nuclear reactors. In 1957, he was appointed chief chemist at the Atomic Energy Authority Industrial Research and Development branch at Risley, Cheshire, and in 1958 he was named as the first director of the Dounreay experimental fast-breeder reactor complex. [ 1 ] [ 3 ]
In 1963, Hurst left Dounreay to take up the directorship of the British Ship Research Association. [ 1 ]
In the 1973 Queen's Birthday Honours , Hurst was appointed a Commander of the Order of the British Empire . [ 5 ]
Hurst retired to Poole , Dorset , in 1976, where he did voluntary work with the Royal National Lifeboat Institution . [ 1 ] He died in 1996 following his third heart attack. [ 3 ] | https://en.wikipedia.org/wiki/Robert_Hurst_(nuclear_chemist) |
The Robert J. Trumpler Award of the Astronomical Society of the Pacific is given annually to a recent recipient of the Ph.D degree whose thesis is judged particularly significant to astronomy. [ 1 ] The award is named after Robert Julius Trumpler , a notable Swiss-American astronomer (1886–1956).
| https://en.wikipedia.org/wiki/Robert_J._Trumpler_Award |
Robert Travis Kennedy is an American chemist specializing in bioanalytical chemistry including liquid chromatography , capillary electrophoresis , and microfluidics . He is currently the Hobart H. Willard Distinguished University Professor of Chemistry and the chair of the department of chemistry at the University of Michigan . [ 1 ] He holds joint appointments with the Department of Pharmacology and Department Macromolecular Science and Engineering. [ 2 ] Kennedy is an Associate Editor of Analytical Chemistry and ACS Measurement Science AU. [ 3 ] [ 4 ]
Kennedy was born on November 11, 1962, in Sault Ste. Marie, Michigan . He earned a Bachelor of Science degree in chemistry at the University of Florida in 1984 and a Ph.D. from the University of North Carolina at Chapel Hill (UNC) in 1988, working under James Jorgenson . He was an NSF post-doctoral fellow at UNC from 1989 to 1991 with R. Mark Wightman . [ 5 ]
Kennedy became a professor of chemistry at the University of Florida in 1991. After 11 years, he moved to the University of Michigan. He has graduated approximately 80 graduate students. Kennedy's research focuses on developing analytical instrumentation and methods that help solve biological problems. [ 6 ] He is considered a leader in the field of analytical chemistry [ 7 ] and an expert in endocrinology, neurochemistry, and high-throughput analysis. His major contributions to analytical chemistry include affinity probe capillary electrophoresis, [ 8 ] [ 9 ] in vivo neurochemical measurements, [ 10 ] and ultra-high-pressure liquid chromatography. [ 11 ] He has been a Lilly Analytical Research Fellow, Alfred P. Sloan Fellow , NSF Presidential Faculty Fellow , and AAAS Fellow . [ 12 ] | https://en.wikipedia.org/wiki/Robert_Kennedy_(chemist) |
Robert Nicholas "Bob" Klein II is a stem cell advocate. He initiated California Proposition 71 , which succeeded in establishing the California Institute for Regenerative Medicine , of which Klein was the chairman of the governing board. [ 1 ]
Before getting involved in stem cell advocacy, he was a housing developer and lawyer. He lives in Portola Valley, California, and works in Palo Alto , where he used to live.
He was a chief author of Proposition 71 and was the chair of the Yes on 71 campaign. He donated $3 million to the cause, the largest donation, and ran the campaign from the Klein Financial Corporation. [ 2 ]
After the election, Proposition 71 became Article XXXV of the California Constitution and the Yes on 71 campaign became the California Research and Cures Coalition, a stem cell advocacy organization. Klein headed that organization until he took the position at the California Institute for Regenerative Medicine , the organization created by the ballot initiative. In 2005, he was named one of TIME Magazine's 100 Most Influential People , and that same year Scientific American named Klein one of “The Scientific American 50” as a leader shaping the future of science. Klein was honored at the 2010 BIO International Convention as the second annual Biotech Humanitarian. [ 3 ] Also in 2010, Klein received the Research!America Gordon and Llura Gund Leadership Award for his advocacy of stem cell and diabetes research.
By 2020, the original funding for the California Institute for Regenerative Medicine had run out, so Klein spearheaded another funding initiative, known as Proposition 14 . [ 4 ]
Klein has a Bachelor of Arts in History with Honors from Stanford University and a Juris Doctor from Stanford University Law School, 1970. Additional education includes: Executive Summer Finance Program at Stanford University Business School and an internship with the United Nations Economic and Social Council in Switzerland on Economic Development Policy.
Soon after graduating law school, he joined the firm of William Glikbarg , a Southern California housing developer who also taught housing law at Stanford.
He made his multimillion-dollar fortune primarily in the Modesto area of California's Central Valley, developing low-income housing . He included market-rate units within subsidized projects to help generate financing for the projects.
When Nixon administration housing secretary George W. Romney ended public housing subsidies in January 1973, Klein and an associate, Michael J. BeVier , successfully persuaded the California legislature to create the California Housing Finance Agency , which subsidizes housing developments with low-interest bonds. (To eliminate the potential for a conflict of interest, Klein did not use CHFA money in his own real estate deals.) BeVier wrote about this in the book "Politics Backstage."
Robert lives in Portola Valley with his wife Danielle Guttman Klein, as well as her daughter Alyssa. He has two sons and a daughter: Robert, Jordan, and Lauren. Lauren and her husband, Daryl Baltazar have one son named Bennett. Robert cites his son Jordan's autoimmune-mediated (type 1) diabetes as a primary source of his involvement in stem cell research.
Klein's father, Robert Klein Sr. ( Harvard , UCLA ), was an administrator for the cities of San Jose , Fresno , Santa Cruz and Menlo Park . | https://en.wikipedia.org/wiki/Robert_N._Klein_II |
Robert Elliot Pollack (born September 2, 1940) is an American academic, administrator, biologist, and philosopher, who served as a long-time Professor of Biological Sciences at Columbia University .
Born in Brooklyn , Pollack earned a Bachelor of Arts in physics at Columbia College in 1961. He received a PhD in Biological Sciences from Brandeis University in 1966, and subsequently was a postdoctoral Fellow in Pathology at NYU Langone Health and the Weizmann Institute of Science . He was a senior staff scientist at Cold Spring Harbor Laboratory for nearly a decade before becoming Associate Professor of Microbiology at Stony Brook University in 1975. He returned to Columbia University as a Professor of Biological Sciences in 1978 and served as Dean of Columbia College from 1982 to 1989. He founded the Center for the Study of Science and Religion (CSSR) in 1999, dedicated to exploring the intersection between faith and science, and served as Director of the University Seminars at Columbia University from 2011 to 2019. He retired as Director of the CSSR, later renamed the Research Cluster on Science and Subjectivity, in 2023.
Pollack has been credited as the father of reversion therapy, for his observation that cancer cells infected with different types of viruses could revert to non-oncogenic phenotypes. [ 1 ] Subsequently, he published nearly one hundred scientific articles related to reversion. He later became a philosopher, examining his faith with a scientific lens , and, at the same time, reinterpreting science through faith . Pollack has authored over 200 scientific articles, seven books, and dozens of speeches, mostly delivered at Columbia University .
As the first Jewish Dean of an Ivy League institution, Pollack faced significant fundraising challenges, the AIDS epidemic , and conflict surrounding the issue of South African divestment . A scientific activist , he was the first to raise concerns about recombinant DNA technology, which eventually led to the Asilomar Conference . He also decried the corrupting relationship between scientific academia and industry and promoted scientific literacy among the general public. He set the stage for the inclusion of science in the Columbia College Core Curriculum . He ultimately converted the Research Cluster on Science and Subjectivity into an institution promoting undergraduates, encouraging a legacy of student-centered innovation. He has collaborated with and mentored many prominent scientists, including Nancy Hopkins and Bettie Steinberg .
Robert Elliot Pollack was born on September 2, 1940, in Brooklyn, NY , growing up in the neighborhood of Seagate . [ 2 ] His parents did not finish high school; [ 3 ] his father ran a factory, manufacturing cardboard boxes. [ 2 ] He attended Abraham Lincoln High School and studied at Columbia College , graduating in 1961 with a physics major. [ 4 ] While at Columbia, he was a member of Jester Magazine [ 5 ] and Columbia Daily Spectator . [ 6 ] [ 7 ] [ 8 ] [ 9 ] He took a freshman Core Course with Robert Belknap , [ 10 ] whom he later succeeded as the Director of University Seminars at Columbia University . [ 11 ] His favorite professors were Sidney Morgenbesser and Richard Neustadt , who taught philosophy and government, respectively. [ 2 ] He worked as a laboratory assistant under the direction of Arno Penzias , then a graduate student in the lab of Charles H. Townes . [ 12 ] Upon graduation, Pollack received a New York State Regents Teaching Fellowship to pursue graduate work at Brandeis University , [ 13 ] examining differential expression of leucine transfer RNA in different strains of Escherichia coli following T2 or T4 virus infection. [ 14 ]
In 1968, while working for Howard Green , Pollack published the first demonstration of reversion, a phenomenon whereby certain cancer cells demonstrated decreased growth and increased contact inhibition, and were therefore considered to have reverted to a more normal, non-oncogenic phenotype. [ 15 ] Reversion was later suggested as a potential treatment for cancer. [ 16 ] Pollack's work sparked a novel subfield of oncogenic research, elucidating the distinct mechanisms directing cell reversion. [ 17 ]
Graduating with a PhD in Biology from Brandeis University in 1966, he spent sixteen years as a research scientist, completing postdoctoral work at both N.Y.U. Medical Center and the Weizmann Institute in Israel. He thereafter served as a senior scientist at Cold Spring Harbor Laboratory from 1971 to 1975, was an Associate Professor of Biology at Stony Brook University from 1975 to 1978, and finally headed his own laboratory as a full Professor of Biology at Columbia University from 1978 to 1994. [ 2 ]
Pollack served as Dean of Columbia College from 1982 to 1989. [ 18 ] [ 19 ] At the time of his appointment, the College was firmly in the Sovern era, facing a severe financial crisis, student protests related to South African divestment and concerns regarding the quality of student life, following the institution of co-education and subsequently declining admissions rates. [ 20 ] [ 21 ] [ 22 ] [ 23 ] During his tenure, he joined with the Columbia College faculty to oppose a merger with the faculties of other schools at Columbia University . [ 24 ] Upon his resignation, he was praised for his honesty, independence, and involvement in student affairs. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ]
Pollack took a variety of academic stances during his tenure. At his encouragement, the faculty of the College voted to move up the pass-fail course registration deadline by one month. [ 32 ] Pollack opposed the inclusion of computer science in the Core Curriculum . [ 33 ] He organized faculty committees to examine the development of additional majors in both African-American studies and gender studies. [ 34 ] [ 35 ] In 1983, Pollack awarded an honorary degree to Isaac Asimov , who had been forced, due to racial quotas, to attend Seth Low Junior College , later folded into the Columbia University School of General Studies . [ 36 ] [ 37 ] [ 38 ]
Pollack supported the founding of the Rabi Scholars cohort, named after Nobel Laureate Isidor Isaac Rabi . [ 39 ] The program is designed to encourage talented students in the sciences to attend Columbia College . [ 40 ] In 1989, Pollack applied for and received a one-million-dollar grant from the Howard Hughes Medical Institute , aimed at enhancing undergraduate science education and community outreach, which ensured long-term financial support for the Rabi Scholars program. [ 41 ] Additionally, he founded the Summer Undergraduate Research Fellowship program in the Department of Biology, funding students pursuing on-campus summer research internships. [ 42 ] He encouraged these students to consider careers in academia after graduation. [ 43 ]
Notably, Pollack oversaw the admission of the first female-inclusive class in 1983, [ 44 ] [ 45 ] appointing a co-education coordinator to facilitate the transition. [ 46 ] [ 47 ] [ 48 ] At the same time, he engineered a merger between the athletics programs of Barnard College and Columbia College . [ 49 ] He pushed for renovations to the main student life center, later rebuilt as Alfred Lerner Hall . [ 50 ]
Pollack put forward initiatives to ensure guaranteed housing for all students. [ 51 ] [ 52 ] A contemporary editorial by the Managing Board of the Columbia Daily Spectator noted: "College Dean Robert Pollack is clinging to his guarantee of housing for all freshmen like a mother bear to its threatened cub." [ 53 ] In addition to the acquisition of the Carlton Arms dormitory, he pushed for the construction of a new dorm on 115th Street, [ 54 ] which eventually became Schapiro Hall . [ 55 ] He successfully convinced Morris Schapiro to donate an additional two million dollars to fund a student center for the arts in the basement of this dorm. [ 56 ] Additionally, the college negotiated directly with manufacturers to install computer labs in residence halls. [ 57 ]
In the face of significant financial constraints, [ 58 ] [ 59 ] [ 60 ] Pollack vigorously and successfully defended Columbia College 's need-blind admissions policy with alumni donations. [ 61 ] [ 62 ] [ 63 ] A focus within his tenure was to support a more racially and ethnically diverse student body. [ 64 ] [ 65 ] [ 66 ] To this end, he supported the development of an intercultural resource center, bolstering undergraduate student life. [ 67 ] [ 68 ] [ 69 ]
Pollack was one of the first university administrators to meet with LGBTQ groups during the AIDS epidemic . [ 70 ] He later led an initiative to formulate an AIDS-related policy for Columbia's campus. [ 71 ] [ 72 ] [ 73 ] Additionally, Pollack called for the development of an AIDS vaccine . [ 74 ]
In response to increasing student activism related to divestment from South Africa , the Columbia University Senate voted on March 25, 1983, to recommend total divestment, a recommendation that was rejected by the Trustees of the University . [ 75 ] The University Senate then appointed Pollack, alongside Louis Henkin and then-student Barbara Ransby , to a seven-member committee charged with researching university divestment and reporting its results to the trustees . [ 76 ] Pollack was selected to chair the committee. [ 77 ] Owing to opposition from Ransby , the report could not be presented to the University Senate by the end of the 1984 academic year. [ 78 ] [ 79 ] Pollack therefore directly requested that Columbia University President Michael Sovern recommend that the trustees freeze investments in South Africa, [ 80 ] a principal recommendation of the report, which thereafter became known as the Pollack Report . [ 81 ] [ 82 ] [ 83 ] The trustees responded favorably, instituting a freeze on new investments in June 1984. [ 84 ] The committee, with a new student representative, [ 85 ] approved the report on November 15, 1984, [ 86 ] and the University Senate ratified it in December 1984. [ 87 ] In addition to a freeze on investments, the report recommended the formation of a consortium of universities to organize against apartheid , the continuous monitoring of current South African investments by a standing committee, and the funding of educational programs to study social politics in South Africa . [ 88 ] Although Pollack strongly defended the committee's work, [ 89 ] student activists continued to push for total divestment , organizing a fast [ 90 ] and a simultaneous protest [ 91 ] that blockaded the entrance to Hamilton Hall for three weeks. [ 92 ] [ 93 ] [ 94 ] While the trustees accepted only three proposals from the Pollack Report , choosing to maintain the temporary investment freeze agreed with Pollack in 1984, [ 95 ] a worsening human rights situation in South Africa led Pollack and other university administrators to push for total divestment as well. [ 96 ] The trustees thereafter accepted a two-year divestment plan in October 1985, making Columbia University the first private institution to move toward total divestment . [ 97 ] [ 98 ] [ 99 ] To fund the educational programs recommended by the Pollack Report , the University received a one-million-dollar grant in 1986 from the Ford Foundation to support interdisciplinary courses in human rights. [ 100 ]
In 1987, Columbia College celebrated its bicentennial, commemorating the 1787 signing of the College charter. [ 101 ] [ 102 ] Pollack led a series of reflection sessions in advance of the event, championing recent advances in African American and Women's studies. [ 103 ] [ 104 ] [ 105 ] [ 106 ] He later gave speeches at major gatherings and parades, celebrating the close ties between Columbia College and New York City . [ 107 ] [ 108 ] [ 109 ]
Protests by hundreds of students erupted following a racially motivated fight between students in the College in March 1987. [ 110 ] [ 111 ] In response, Pollack organized a meeting between black student leaders and Columbia University President Michael Sovern . [ 112 ] [ 113 ] [ 114 ] Sovern next met with the Columbia College student council, yielding limited results. [ 115 ] As a result, approximately one month later, student leaders organized a protest blockading Hamilton Hall, reminiscent of the protests during the South African divestment campaign. [ 116 ] Fifty students were arrested, sparking a protest nearly one thousand strong. [ 117 ] [ 118 ] Pollack then released a report regarding the March 22nd fight, charging junior Drew Krause with racial harassment and suspending him for one semester. [ 119 ] [ 120 ] [ 121 ] [ 122 ] Krause responded by suing Columbia University for discrimination, winning in federal court and overturning his suspension. [ 123 ] [ 124 ] When the University appealed this ruling, the two parties entered arbitration, settling outside of court. [ 125 ] [ 126 ]
In 1984, Pollack came out against an African-American studies major, favoring a more broadly encompassing minority studies major. [ 127 ] As a result, in 1986, minority studies became an approved major, while proposals for African-American studies languished. [ 128 ] [ 129 ] Four days after the March 22nd fight, the African-American studies proposal was brought before the committee on instruction with Pollack's approval, and ratified by the faculty nearly a month afterwards. [ 130 ] [ 131 ] [ 132 ] The 1987–1988 academic year therefore became the first in which African-American studies was an offered major. [ 133 ]
Over the course of the fall 1987 semester, Pollack developed a plan to use a 25 million dollar donation from John Kluge to encourage graduate studies for underrepresented groups. [ 134 ] [ 135 ] He additionally appointed a race relations committee, headed by Professor Charles Hamilton . [ 136 ] The committee provided fourteen recommendations, which Pollack accepted, including an investment in the Columbia University Double Discovery Center along with increased hiring of minority faculty. [ 137 ] [ 138 ] [ 139 ]
Alongside his administrative responsibilities, Pollack maintained an active role in scientific research. [ 140 ] [ 141 ] His work focused on understanding the molecular mechanisms of cellular differentiation and cancer cell transformation, specifically investigating the role of viral proteins and the cellular cytoskeleton in oncogenesis. Notable publications include studies on insulin binding by 3T3 cells, [ 142 ] the role of the cytoskeleton in colonic epithelial cells, [ 143 ] and adipocyte differentiation by DNA transfection. [ 144 ] Additionally, he spoke out regarding the relationship between academia and industry science. [ 145 ] [ 146 ]
Near the end of his term as Dean and afterwards, Pollack was considered for a wide variety of academic positions at other universities , including as provost at University of Pennsylvania , [ 147 ] [ 148 ] as president of University of Vermont , [ 149 ] [ 150 ] as president of Bowdoin College , [ 151 ] and as president of Brandeis University . [ 152 ] He ultimately continued as Professor of Biological Sciences at Columbia University , becoming the Co-Chair of the Jewish Campus Life Fund. [ 153 ] [ 12 ] In this role, he convinced Robert Kraft to donate the necessary funds to establish the Robert K. Kraft Family Center for Jewish Student Life at Columbia, which opened in 2000. [ 154 ] [ 155 ] [ 156 ] [ 157 ] [ 158 ] He continued to comment on current issues, defending David Baltimore during the Imanishi-Kari case [ 159 ] and advocating for need-blind admissions policies. [ 160 ]
He was awarded a Guggenheim Fellowship in 1993 to write a book on the definition of disease. [ 161 ] [ 162 ] From these efforts arose Pollack's first book geared for the general public, entitled Signs of Life: the Language and Meanings of DNA (1994). [ 163 ] [ 164 ] In 1999, Pollack published his second book, The Missing Moment: How the Unconscious Shapes Modern Science (Houghton Mifflin), which offers reflection on mortality, morality, and the role of science in society. [ 165 ] The Missing Moment ultimately critiques the biomedical field's tendency to overlook human needs by operating within a paradigm that denies personal mortality. [ 166 ]
Pollack founded the Center for the Study of Science and Religion , later renamed the Research Cluster for Science and Subjectivity, in 1999, receiving a number of notable grants to power its operations, spanning diverse colloquia, undergraduate course support, and a medical writer-in-residence program. [ 167 ] In 2000, he published The Faith of Biology and the Biology of Faith: Order, meaning and free will in modern science , examining the relationship between religious belief and scientific practice. [ 168 ] Originally presented at the Columbia University Seminar 1999 Leonard Hastings Schoff Memorial Lecture, [ 169 ] the text was republished in 2013, with a new preface emphasizing individual responsibility over scientific institutions in discussing the role of free will in scientific practice. [ 170 ] In a 2003 interview with Robert Wright , he underscored his approach to finding balance and meaning at the intersection of scientific inquiry and spiritual belief. [ 171 ] He partnered with Jeffrey Sachs , moving the CSSR to the Earth Institute and turning his attention to the study of climate change during the later 2000s. [ 172 ]
From 2011 to 2019, Pollack concurrently served as the Director of the Columbia University seminars, a movement fostering interdisciplinary conversations between academics, founded by Frank Tannenbaum . [ 173 ] [ 174 ] [ 175 ] [ 176 ] In his role as Director, he played an important role in the creation of the University Seminar Archive. [ 177 ]
Starting in 2014, Pollack changed the mission of the RCSS to focus on empowering undergraduate projects. [ 178 ] He received an endowment from College alumnus Harvey Krueger ’51 to perpetually fund these undergraduate efforts. [ 179 ] An exemplar of this vision is the fully-funded Black Undergraduate Mentorship Program in Biology at Columbia, providing summer research housing stipends and significant individualized mentorship, with support from both Harmen Bussemaker and Nobel Laureate Martin Chalfie . [ 180 ] Pollack retired as director in 2023. [ 181 ] He continues to serve on the advisory board of the RCSS and as an executive committee member for the Columbia University Center for Science and Society. [ 182 ] [ 183 ]
As a research scientist in Nobel Laureate James Watson 's laboratory, [ 184 ] Pollack taught a yearly summer course on animal cells and viruses. [ 185 ] In 1971, his class heard a presentation from Janet Mertz , then a graduate student in the laboratory of Paul Berg , who proposed an experiment cloning SV40 genes from monkeys into bacteria. [ 186 ] Pollack reacted to this presentation by directly calling Berg to relay his concerns and drafting an unsent letter calling for a moratorium on this kind of cloning. [ 187 ] [ 188 ] His concerns centered on the potential for these bacteria to be capable of inducing cancer, which could possibly spread rapidly through the human population. [ 189 ] Berg accepted these concerns, starting a voluntary moratorium. [ 190 ] In 1973, a conference was held at Asilomar, with Pollack editing the proceedings into a book entitled Biohazards in Biological Research , which specifically identified the experiments necessary to deem recombinant DNA technologies safe. [ 191 ] In 1974, this expanded into the first national moratorium on a specific subset of scientific research, followed by the more famous Asilomar Conference in 1975, which addressed many of the safety concerns about recombinant DNA technologies raised at the 1973 conference, thereby paving the way to lift the national moratorium. [ 192 ] The response of the scientific community to recombinant DNA technologies has been scrutinized in debates regarding CRISPR -based genome modification technologies. [ 193 ]
Pollack has taught a variety of lecture and seminar style courses at Columbia University , including, [BIOL W2001] Environmental Biology, [BIOL W3500] Independent research, [BIOL G4065] Molecular Biology of Disease, [RELI V2660] Science & Religion East & West, and [EEEBGU4321] Human Nature: DNA, Race & Identity: Our Bodies, Our Selves. [ 194 ] [ 195 ] Arriving at Columbia in 1978, [ 196 ] he soon joined the Columbia College Committee on Instruction, [ 197 ] responsible for approving academic policy changes, new courses, and new major proposals. [ 198 ] Pollack has been a consistent supporter of the Core Curriculum as a mandatory component of undergraduate education . [ 199 ] [ 200 ]
Pollack was an early advocate for the inclusion of science curriculum within Columbia's Core Curriculum . [ 201 ] [ 202 ] [ 203 ] To accomplish this goal, Pollack, alongside Herbert Goldstein and Jonathon Gross, developed a course entitled the Theory and Practice of Science, aimed at providing scientific literacy to the general student population, funded by a $30,000 grant from the Exxon Mobil Foundation along with an anonymous $30,000 donation, later revealed to be a personal donation from Columbia University President Michael Sovern . [ 204 ] [ 205 ] [ 206 ] Based on a belief that fundamental scientific papers double as literary masterpieces, [ 207 ] Pollack's portion of the course was organized around key publications in biochemistry , evolution , and genetics . [ 208 ] [ 209 ] In 1983, the course received an additional $240,000 in support from the Mellon Foundation . [ 210 ] Although the course was taught for at least fourteen years, [ 211 ] it failed to enter the core curriculum, due to concerns regarding the breadth of technical concepts within the discussed works. [ 212 ] Pollack later contributed [ 213 ] to and taught [ 214 ] in Frontiers of Science, [ 215 ] a general science curriculum developed by David Helfand [ 216 ] and Darcy Kelley , former instructors for The Theory and Practice of Science, [ 217 ] which was added to the Core Curriculum in 2005. [ 218 ] [ 219 ] [ 220 ]
Pollack has received the Alexander Hamilton Medal from Columbia University , Columbia College's most distinguished award for alumni. [ 221 ] He has additionally received the Gershom Mendes Seixas Award from the Columbia/Barnard Hillel organization. [ 222 ] His book Signs of Life: the Language and Meanings of DNA (1994) [ 223 ] received the Lionel Trilling Award. [ 224 ] In 1986, he was appointed by NYC mayor Ed Koch to an advisory committee on science and technology. [ 225 ]
Pollack is married to Amy Steinberg, an artist. [ 226 ] [ 227 ] [ 228 ] They co-authored The Course of Nature: A Book of Drawings on Natural Selection and Its Consequences (2014), consisting of Steinberg's drawings and Pollack's commentary. [ 229 ] Their daughter Marya Pollack, who graduated as a member of the first coeducational class of students from Columbia College in 1987, [ 227 ] is an attending physician at New York Presbyterian Hospital and Assistant Clinical Professor in Psychiatry at Columbia University Vagelos College of Physicians and Surgeons . [ 230 ]
In addition to his academic and administrative positions, Pollack has written many articles and books on diverse subjects, ranging from laboratory science to religious ethics. | https://en.wikipedia.org/wiki/Robert_Pollack_(biologist) |
Robert 'Bob' Ramage FRS [ 1 ] (4 October 1935 – 16 October 2019) was an organic chemist , born in Glasgow , who specialised in the synthesis and biosynthesis of natural products , peptides , and proteins .
Following his undergraduate degree in chemistry at the University of Glasgow , he stayed on for a PhD in organic chemistry. After his time at Glasgow, he followed his interest in natural products synthesis to Harvard and then Basel , before taking up a lectureship in organic chemistry at the University of Liverpool , where his attention was drawn to peptides.
His peptide synthesis research continued at the University of Manchester Institute of Science and Technology (UMIST) , where he also served as head of department. He returned to Scotland in 1984, taking up the Forbes chair of organic chemistry at the University of Edinburgh , where he remained until retirement in 2000. [ 2 ]
Outside of academia, in 1994 he founded the company Albachem, which utilised his work with peptides.
He was elected Fellow of the Royal Society of Chemistry (1977), Royal Society of Edinburgh (1986), and the Royal Society (1992). | https://en.wikipedia.org/wiki/Robert_Ramage_(chemist) |
Robert S. Roth (August 21, 1926 – July 16, 2012) was an American materials scientist known for his comprehensive research into the phase diagrams of ceramic materials and the structures of nonstoichiometric compounds . [ 1 ]
Roth studied geology at Coe College and the University of Illinois Urbana-Champaign , where he obtained his PhD in 1951. He worked at the United States Geological Survey as a field assistant, and after his PhD, he joined the National Bureau of Standards (later NIST ), where he remained for most of his career. From 1981, he was a senior editor of the book series Phase Diagrams for Ceramists , a major set of reference books in the field of ceramic materials. [ 2 ]
While visiting CSIRO in Melbourne , Australia in the 1960s, Roth collaborated with the Australian materials scientist Arthur D. Wadsley to understand the structures of transition metal oxides, which led to a series of publications. The ordered phases of transition metal oxides exhibiting shear structures are now referred to as the Wadsley-Roth phases . [ 3 ]
Roth received the United States Department of Commerce Gold Medal in 1986. He received the Sosman Award in 1991, the John Jeppson Award in 1995, and the Spriggs Phase Equilibria Award in 2003, all from the American Ceramic Society . He received the Buessem Award from the Center for Dielectric Studies in 2001. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Robert_S._Roth |
Robert Sidney Cahn (9 June 1899 – 15 June 1981) was a British chemist, best known for his contributions to chemical nomenclature and stereochemistry , particularly the Cahn–Ingold–Prelog priority rules , which he proposed in 1956 with Christopher Kelk Ingold and Vladimir Prelog . [ 1 ] Cahn was the first to report the structure of cannabinol (CBN), found in Cannabis , in the early 1930s. [ 2 ] [ 3 ]
Cahn was born in Hampstead , London. He became a fellow of the Royal Institute of Chemistry [ 4 ] and was editor of the Journal of the Chemical Society from 1949 until 1963, and he remained with the Society as Director of Publications Research until his retirement in 1965. [ 5 ]
| https://en.wikipedia.org/wiki/Robert_Sidney_Cahn |
Robert Vaßen is a German physicist and holds a teaching professorship at the Ruhr University Bochum at the Institute of Materials in the Department of Ceramics Technology. [ 1 ] He is head of the department "Materials for High Temperature Technologies" and deputy head of the Institute of Energy Materials and Devices (IMD-2): Materials Synthesis and Processing at Forschungszentrum Jülich . [ 2 ]
Vaßen studied physics at the RWTH Aachen University from 1980 to 1986, where he received his diploma . At the same university, he received his PhD in solid-state physics under Prof. Uhlmaier with the thesis Diffusion of Helium in Body-Centered Cubic and Hexagonal Metals in 1990. After his PhD, he was a scientific assistant at IEK-1 Institute for Energy and Climate Research (now Institute of Energy Materials and Devices IMD-2), Forschungszentrum Jülich, where he became head of department in 1998 and has been deputy head of the institute since 2014. During this time, he habilitated at Ruhr University Bochum in 2004 with the topic of development of new oxide thermal barrier coatings for applications in stationary and aero-gas turbines . Since 2010, he has been a visiting professor at the University West, Trollhättan , Sweden. In 2014, he turned down a call for a W3 professorship in coating technology at the Technische Universität Berlin .
Since 2014, Vaßen has been a PhD supervisor of more than 75 students, 40 of them his own PhD students at the Ruhr University Bochum and the others as co-lecturer at various universities such as University West, Trollhättan, [ 3 ] Sweden, University of Cambridge , Imperial College London and University of Manchester , all three United Kingdom, University of Stuttgart , University of Bayreuth , Mines Paris Tech , and others. [ 4 ]
Vaßen's research focuses on the development of high-temperature materials and coatings also with additional functional properties such as sensing properties, self-healing capabilities or enhanced strain tolerance. He is also active in the development of functional coatings for solid oxide fuel cells and membranes for oxygen and hydrogen separation. Recently, repair technologies, especially by cold gas spraying , aerosol deposition processes, and coating solutions for alkaline and PEM electrolysis have also been developed. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Robert_Vaßen |
Robert W. Bussard (August 11, 1928 – October 6, 2007) was an American physicist who worked primarily in nuclear fusion energy research. He was the recipient of the Schreiber-Spence Achievement Award for STAIF-2004. [ 1 ] He was also a fellow of the International Academy of Astronautics and held a Ph.D. from Princeton University . [ 2 ]
In June 1955 Bussard moved to Los Alamos and joined the Nuclear Propulsion Division's Project Rover designing nuclear thermal rocket engines. [ 3 ] Bussard and R.D. DeLauer wrote two important monographs on nuclear propulsion, Nuclear Rocket Propulsion [ 4 ] and Fundamentals of Nuclear Flight . [ 5 ]
In 1960, Bussard [ 6 ] conceived of the Bussard ramjet , an interstellar space drive powered by hydrogen fusion using hydrogen collected with a magnetic field from the interstellar gas. Due to the presence of high-energy particles throughout space, much of the interstellar hydrogen exists in an ionized state (H II regions) that can be manipulated by magnetic or electric fields . Bussard proposed to "scoop" up ionized hydrogen and funnel it into a fusion reactor, using the exhaust from the reactor as a rocket engine. [ 7 ]
It appears the energy gain in the reactor must be extremely high for the ramjet to work at all; any hydrogen picked up by the scoop must be sped up to the same speed as the ship in order to provide thrust, and the energy required to do so increases with the ship's speed. Hydrogen itself does not fuse very well (unlike deuterium, which is rare in the interstellar medium), and so cannot be used directly to produce energy, a fact which accounts for the billion-year scale of stellar lifetimes. According to Bussard, this problem can be solved in principle by use of the stellar CNO cycle , in which carbon acts as a catalyst to burn hydrogen via the strong nuclear interaction. [ 8 ]
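A rough non-relativistic estimate makes the constraint concrete (an illustrative calculation under stated assumptions, not from Bussard's paper): suppose the scoop collects the interstellar gas inelastically, so its kinetic energy relative to the ship is lost, and suppose hydrogen fusion returns about ε ≈ 0.007 of the fuel's rest energy as usable exhaust energy. The exhaust speed in the ship frame is then capped, and net thrust requires

```latex
w=\sqrt{2\,\epsilon\,c^{2}}\approx 0.12\,c,
\qquad
\text{net thrust} \propto w-v>0 \;\Longrightarrow\; v \lesssim 0.12\,c ,
```

so under these assumptions the ramjet only accelerates below roughly a tenth of the speed of light; a higher effective yield or partial recovery of the ingested kinetic energy would raise this ceiling.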
Bussard Ramjets are common plot devices in science fiction.
Larry Niven uses them in his Known Space setting to propel interstellar flight. Following a standard hi-tech faster/cheaper/better learning curve, he started with robot probes during the early stages of interstellar colonization and eventually plotted them as affordable to wealthy individuals relocating their families off a too-crowded Earth (in "The Ethics of Madness"). Niven also employed Bussard Ramjets as the propulsion / stabilizing engine of the Ringworld (four novels), which were also set in Known Space.
In the Star Trek universe, a variation called the Bussard Hydrogen Collector or Bussard Ramscoop appears as part of the matter / antimatter propulsion system that allows Starfleet ships to travel faster than the speed of light . The ramscoops attach to the front of the warp nacelles , and when the ship's internal supply of deuterium runs low, they collect interstellar hydrogen and convert it to deuterium and anti-deuterium for use as the primary fuel in a starship's warp drive .
In the early 1970s Bussard became Assistant Director under Director Robert Hirsch at the Controlled Thermonuclear Reaction Division of what was then known as the Atomic Energy Commission . They founded the mainline fusion program for the United States: the Tokamak . In June 1995, Bussard claimed in a letter to all fusion laboratories, as well as to key members of the US Congress, that he and the other founders of the program supported the Tokamak not out of conviction that it was the best technical approach but rather as a vehicle for generating political support, thereby allowing them to pursue "all the hopeful new things the mainline labs would not try". [ 9 ]
In a 1998 Analog magazine article, fellow fusion researcher Tom Ligon described an easily built demonstration fusor system along with some of Bussard's ideas for fusion reactors and incredibly powerful spacecraft propulsion systems, with which spacecraft could swiftly move throughout the solar system. [ 10 ]
Bussard worked on a promising new type of inertial electrostatic confinement (IEC) fusor, called the Polywell , that has a magnetically shielded grid (MaGrid). He founded Energy/Matter Conversion Corporation, Inc. (EMC2) in 1985 to validate his theory, [ 11 ] and tested fifteen experimental devices from 1994 through 2006. The U.S. Navy contract funding that supported the work expired while experiments were still small. However, the final tests of the last device, WB-6, reputedly solved the last remaining physics problem just as the funding expired and the EMC2 labs had to be shut down.
Further funding was eventually found; the work continued, the WB-7 prototype was constructed and tested, and the research is ongoing. [ 12 ]
During 2006 and 2007, Bussard sought the large-scale funding necessary to design and construct a full-scale Polywell fusion power plant. [ 13 ] His fusor design is feasible enough, he asserted, to render unnecessary the construction of larger and larger test models still too small to achieve break-even . Also, the scaling of power with size goes as the seventh power of the machine radius, while the gain scales as the fifth power, so there is little incentive to build half-scale systems; one might as well build the real thing.
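To make the quoted scalings concrete (simple arithmetic on the stated exponents, not a figure from Bussard): a half-scale machine would deliver

```latex
\frac{P_{1/2}}{P_{\text{full}}}=\left(\tfrac{1}{2}\right)^{7}=\frac{1}{128}\approx 0.8\,\%,
\qquad
\frac{G_{1/2}}{G_{\text{full}}}=\left(\tfrac{1}{2}\right)^{5}=\frac{1}{32}\approx 3\,\%,
```

of the full-scale power and gain, which is the arithmetic behind "one might as well build the real thing."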
On March 29, 2006, Bussard claimed on the fusor.net internet forum that EMC² had developed an inertial electrostatic confinement fusion process that was 100,000 times more efficient than previous designs, but that the US Navy budget line item that supported the work was zero-funded in FY2006. [ 14 ]
Bussard provided more details of his breakthrough and the circumstances surrounding the end of his Navy funding in a letter to the James Randi Educational Foundation internet forum on June 23. [ 15 ]
From October 2, 2006, to October 6, 2006, Bussard presented an informal overview of the previous decade of his work at the 57th International Astronautical Congress. [ 16 ] This was the first publication of this work in 11 years, as the U.S. Navy had placed an embargo on publication of the research in 1994.
Bussard presented further details of his IEC fusion research at a Google Tech Talk on November 9, 2006, of which a video was widely circulated. [ 13 ]
Bussard presented more of his thoughts on the potential world impact of fusion power at a Yahoo! Tech Talk on April 10, 2007. [ 17 ] (The video is only available internally for Yahoo employees.) He also spoke on the internet talk radio show The Space Show , Broadcast 709, on May 7, 2007. [ 18 ]
He founded a non-profit organization to solicit tax-deductible donations to restart the work in 2007, EMC2 Fusion Development Corporation. [ 19 ]
"Thus, we have the ability to do away with oil (and other fossil fuels) but it will take 4–6 years and ca. $100–200M to build the full-scale plant and demonstrate it." [ 14 ]
"Somebody will build it; and when it's built, it will work; and when it works people will begin to use it, and it will begin to displace all other forms of energy." [ 18 ]
Bussard died from multiple myeloma on October 6, 2007, at age 79. [ 20 ] | https://en.wikipedia.org/wiki/Robert_W._Bussard |
Robert William Holley (January 28, 1922 – February 11, 1993) was an American biochemist . He shared the Nobel Prize in Physiology or Medicine in 1968 (with Har Gobind Khorana and Marshall Warren Nirenberg ) for describing the structure of alanine transfer RNA , linking DNA and protein synthesis .
Holley was born in Urbana, Illinois , and graduated from Urbana High School in 1938. He went on to study chemistry at the University of Illinois at Urbana-Champaign , graduating in 1942 and commencing his PhD studies in organic chemistry at Cornell University . During World War II Holley spent two years working under Professor Vincent du Vigneaud at Cornell University Medical College, where he was involved in the first chemical synthesis of penicillin . Holley completed his PhD studies in 1947. [ 1 ] [ 2 ] [ 3 ]
Following his graduate studies Holley remained associated with Cornell. He became an assistant professor of organic chemistry in 1948, and was appointed as professor of biochemistry in 1962. He began his research on RNA after spending a year's sabbatical (1955–1956) studying with James F. Bonner at the California Institute of Technology .
Holley's research on RNA focused first on isolating transfer RNA (tRNA), and later on determining the sequence and structure of alanine tRNA , the molecule that incorporates the amino acid alanine into proteins . Holley's team of researchers determined the tRNA's structure by using two ribonucleases to split the tRNA molecule into pieces. Each enzyme split the molecule at location points for specific nucleotides. By a process of "puzzling out" the structure of the pieces split by the two different enzymes, then comparing the pieces from both enzyme splits, the team eventually determined the entire structure of the molecule. The group of researchers included Elizabeth Beach Keller , who developed the cloverleaf model that describes transfer RNA during the course of the research. [ 4 ]
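The "puzzling out" step is, at heart, a combinatorial overlap problem: an ordering of one enzyme's fragments is consistent only if the concatenated sequence can also be partitioned into the other enzyme's fragments. A toy brute-force sketch in Python (the short sequence, cut rules, and fragment sets are simplified illustrations, not Holley's data):

```python
from itertools import permutations

def reconstruct(frags_a, frags_b):
    """Return an ordering of frags_a whose concatenation can also be
    spelled by some ordering of frags_b (a toy two-digest puzzle)."""
    spellings_b = {"".join(p) for p in permutations(frags_b)}
    for p in permutations(frags_a):
        candidate = "".join(p)
        if candidate in spellings_b:
            return candidate
    return None

# Toy RNA "AGCUGG": one enzyme cuts after every G,
# the other after every pyrimidine (C or U).
t1_digest   = ["AG", "CUG", "G"]
panc_digest = ["AGC", "U", "GG"]
print(reconstruct(t1_digest, panc_digest))  # -> AGCUGG (unique here)
```

Exhaustive search is only workable for toy inputs; the point is the cross-digest consistency constraint, which Holley's team exploited fragment by fragment.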
The structure was completed in 1964, [ 5 ] [ 6 ] and was a key discovery in explaining the synthesis of proteins from messenger RNA . It was also the first nucleotide sequence of a ribonucleic acid ever determined. Holley was awarded the Nobel Prize in Physiology or Medicine in 1968 for this discovery, [ 7 ] and Har Gobind Khorana and Marshall W. Nirenberg were also awarded the prize that year for contributions to the understanding of protein synthesis.
Using the Holley team's method, other scientists determined the structures of the remaining tRNAs. A few years later the method was modified to help track the sequence of nucleotides in various bacterial, plant, and human viruses .
In 1968 Holley became a resident fellow at the Salk Institute for Biological Studies in La Jolla, California .
According to the New York Times obituary, "He was an avid outdoorsman and an amateur sculptor of bronze." His widow Ann died in 1996. | https://en.wikipedia.org/wiki/Robert_W._Holley |
The Roberts loom was a cast-iron power loom introduced by Richard Roberts in 1830. It was the first loom that was more viable than a hand loom and was easily adjustable and reliable, which led to its widespread use in the Lancashire cotton industry.
Roberts was born at Llanymynech , on the border between England and Wales . He was the son of William Roberts, a shoemaker, who also kept the New Bridge tollgate. Roberts was educated by the parish priest, and early found employment with a boatman on the Ellesmere Canal and later at the local limestone quarries. He received some instruction in drawing from Robert Bough, a road surveyor, who was working under Thomas Telford .
He was responsible for developing ever more precise machine tools , working eventually from 15 Deansgate, Manchester . Here he worked on improving textile machinery. He patented the cast-iron loom in 1822 and in 1830 patented the self-acting mule, thus revolutionising both the spinning and weaving industries.
The major components of the loom are the warp beam, heddles, harnesses, shuttle, reed and takeup roll. In the loom, yarn processing includes shedding, picking, battening and taking-up operations.
The Roberts loom of 1830 incorporated ideas embodied in an 1822 patent.
The frame of the loom was cast iron. There were two side frames cast as single pieces. The three cross rails were machined for accurate assembly. The great arched rail at the top supports the healds. The front and back cross rails bifurcate at each side to give a larger binding surface. [ 1 ]
The warp passes from the warp beam over a friction guide roller, then horizontally through the loom to a breastbeam. Here it turns vertically to the cloth beam. Even tension is essential, as any variation will lead to broken threads. As the warp beam empties, its effective diameter changes, making the warp slacker; tension is maintained by adding a wooden pulley to the beam, around which are two turns of rope attached to mill weights, thus retarding the beam through friction. The cloth beam bears a toothed wheel which works a pinion. A ratchet wheel is attached with a click lever to take up the slack in the cloth. This was Roberts's invention.
The heddles are of standard construction. They are arranged in groups of four: even threads and odd threads must go up and down alternately, but two heddles are used for the evens and two for the odds so that adjacent threads do not rub. The lower end of the heddle leaves is attached to treadles or marches. These are depressed by cams referred to as eccentrics.
The loom is powered by a steam-driven leather belt, which drives the driving shaft. Here there are a flywheel to smooth the motion, a crank mechanism to drive the battens (swords), and a toothed wheel. This engages a second shaft, known as the tappet shaft or wiper shaft, whose job is to lower the treadles and throw the shuttle. This turns at half the speed of the driving shaft, so its toothed wheel is twice the size.
The shuttle is thrown by two levers attached to the side frame, but activated by a friction roller on the tappet shaft. As the shuttle enters the shuttle-box at the end of its travel, it depresses a lever which acts as a brake. If this lever is not depressed then the loom is stopped. [ 2 ]
The Roberts loom was made at a time when the power-loom industry was set to expand. Until this moment, hand looms were more common than power looms. The reliable Roberts loom was quickly adopted, and it was again the spinning side that was short of capacity. Roberts then addressed this with the construction of a self-acting (automatic) spinning mule. Essentially, textile production was no longer a skilled craft but an industrial process that could be manned by semi-skilled labour. Mule spinning became a man's occupation, and weaving a girl's occupation. [ citation needed ] | https://en.wikipedia.org/wiki/Roberts_Loom |
The Roberval balance is a weighing scale presented to the French Academy of Sciences by the French mathematician Gilles Personne de Roberval in 1669.
In this scale, two identical horizontal beams are attached, one directly above the other, to a vertical column, which is attached to a stable base. On each side, both horizontal beams are attached to a vertical beam. The six attachment points are pivots. Two horizontal plates, suitable for placing objects to be weighed, are fixed to the top of the two vertical beams. An arrow on the lower horizontal beam (and perpendicular to it) and a mark on the vertical column may be added to aid in leveling the scale.
The object to be weighed is placed on one plate, and calibrated masses are added to and subtracted from the other plate until level is reached. The mass of the object is equal to the mass of the calibrated masses regardless of where on the plates the items are placed. Since the vertical beams are always vertical, and the weighing platforms always horizontal, the potential energy lost by a weight as its platform goes down a certain distance will always be the same, so it makes no difference where the weight is placed. For maximum accuracy, Roberval balances require that their top fulcrum be placed on the line between the left and right pivot so that tipping will not result in the net transfer of weight to either the left or right side of the scale: a fulcrum placed below the ideal pivot point will tend to cause a net shift in the direction of any downward-moving vertical column (in a kind of positive feedback loop ); likewise, a fulcrum placed above this point will tend to level out the arms of the balance rather than respond to small changes in weight (in a negative feedback loop ).
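This placement-independence can be stated as a one-line virtual-work argument (a standard textbook sketch, not tied to the sources cited here): because the parallelogram linkage keeps both platforms horizontal, a small tilt lowers one platform by δ and raises the other by the same δ, wherever the loads sit on the plates, so the total virtual work is

```latex
\delta W = W_{1}\,\delta - W_{2}\,\delta = \left(W_{1}-W_{2}\right)\delta ,
```

which vanishes for every virtual displacement precisely when W₁ = W₂: equilibrium depends only on the weights, never on their positions.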
An off-center weight on the plate exerts a downward force and a torque on the vertical column supporting the plate. The downward force is carried by the bearing at the top beam in most balance scales, the lower beam just being supported horizontally at midpoint by the body of the scales by a simple peg-in-slot arrangement, so it effectively hangs beneath the top beam and stops the platforms from rotating. The torque on the column is taken by a pair of equal and opposite forces in the horizontal beams. If the offset weight sits toward the outside of the platform, further from the centre of the scales, the top beam will be in tension and the bottom beam will be in compression. These tensions and compressions are carried by horizontal reactions from the central supports; the other side of the scales is not affected at all, nor is the balance of the scales.
Certain presumptions are made in a theoretical Roberval balance in order for it to appear level in its natural state and to balance theoretical masses.
The Roberval balance is arguably less accurate and more difficult to manufacture than a beam balance with suspended plates. The beam balance, however, has the significant disadvantage of requiring suspensory strings, chains, or rods. For over three hundred years the Roberval balance has instead been popular for applications requiring convenience and only moderate accuracy, notably in retail trade.
Well known manufacturers of Roberval balances include W & T Avery Ltd. and George Salter & Co. Ltd. in the United Kingdom and Trayvou in France. Henry Troemner , who designed scales for the United States Department of Treasury , was the first American to use the design. [ 1 ] | https://en.wikipedia.org/wiki/Roberval_balance |
Robinose is a disaccharide composed of 6″-O-α- rhamnopyranosyl -β- galactopyranoside . The sugar can be found in Acalypha hispida . [ 1 ]
Robinin is a kaempferol -3- O -robinoside-7- O -rhamnoside.
| https://en.wikipedia.org/wiki/Robinose |
Robinson's joint consistency theorem is an important theorem of mathematical logic . It is related to Craig interpolation and Beth definability .
The classical formulation of Robinson's joint consistency theorem is as follows:
Let T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} be first-order theories . If T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} are consistent and the intersection T 1 ∩ T 2 {\displaystyle T_{1}\cap T_{2}} is complete (in the common language of T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} ), then the union T 1 ∪ T 2 {\displaystyle T_{1}\cup T_{2}} is consistent. A theory T {\displaystyle T} is called complete if it decides every sentence, meaning that for every sentence φ , {\displaystyle \varphi ,} the theory contains the sentence or its negation but not both (that is, either T ⊢ φ {\displaystyle T\vdash \varphi } or T ⊢ ¬ φ {\displaystyle T\vdash \neg \varphi } ).
Since the completeness assumption is quite hard to fulfill, there is a variant of the theorem:
Let T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} be first-order theories. If T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} are consistent and if there is no formula φ {\displaystyle \varphi } in the common language of T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} such that T 1 ⊢ φ {\displaystyle T_{1}\vdash \varphi } and T 2 ⊢ ¬ φ , {\displaystyle T_{2}\vdash \neg \varphi ,} then the union T 1 ∪ T 2 {\displaystyle T_{1}\cup T_{2}} is consistent.
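A standard sketch derives this variant from the Craig interpolation theorem (an illustrative derivation; the cited treatments may proceed differently). If T 1 ∪ T 2 were inconsistent, compactness would give a conjunction α of T 1 -sentences and a conjunction β of T 2 -sentences with α ⊢ ¬β. Craig interpolation then yields a sentence θ in the common language such that

```latex
\alpha \vdash \theta \qquad\text{and}\qquad \theta \vdash \neg\beta ,
```

whence T 1 ⊢ θ and T 2 ⊢ ¬θ, contradicting the hypothesis that no such formula exists.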
| https://en.wikipedia.org/wiki/Robinson's_joint_consistency_theorem |
The Robinson annulation is a chemical reaction used in organic chemistry for ring formation. It was discovered by Robert Robinson in 1935 as a method to create a six-membered ring by forming three new carbon–carbon bonds. [ 1 ] The method uses a ketone and methyl vinyl ketone to form an α,β-unsaturated ketone in a cyclohexane ring by a Michael addition followed by an aldol condensation . This procedure is one of the key methods to form fused ring systems.
Formation of cyclohexenone and derivatives are important in chemistry for their application to the synthesis of many natural products and other interesting organic compounds such as antibiotics and steroids . [ 2 ] Specifically, the synthesis of cortisone is completed through the use of the Robinson annulation. [ 3 ]
The initial paper on the Robinson annulation was published by William Rapson and Robert Robinson while Rapson studied at Oxford with professor Robinson. Before their work, cyclohexenone syntheses were not derived from the α,β-unsaturated ketone component. Initial approaches coupled the methyl vinyl ketone with a naphthol to give a naphtholoxide, but this procedure was not sufficient to form the desired cyclohexenone. This was attributed to unsuitable conditions of the reaction. [ 1 ]
Robinson and Rapson found in 1935 that the interaction between cyclohexanone and an α,β-unsaturated ketone afforded the desired cyclohexenone. It remains one of the key methods for the construction of six-membered ring compounds. Since it is so widely used, many aspects of the reaction have been investigated, such as variations of the substrates and reaction conditions, as discussed in the scope and variations section. [ 4 ] Robert Robinson won the Nobel Prize for Chemistry in 1947 for his contribution to the study of alkaloids. [ 5 ]
The original procedure of the Robinson annulation begins with the nucleophilic attack of a ketone in a Michael reaction on a vinyl ketone to produce the intermediate Michael adduct. Subsequent aldol type ring closure leads to the keto alcohol, which is then followed by dehydration to produce the annulation product.
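Because the sequence terminates in a dehydration, the overall annulation loses one equivalent of water. The following minimal Python sketch checks the mass balance for the classic cyclohexanone plus methyl vinyl ketone case (standard atomic masses; the function name and formula dictionaries are illustrative choices):

```python
# Mass balance for a Robinson annulation:
#   ketone + enone -> bicyclic enone (octalone) + H2O
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def mol_weight(formula):
    """Molecular weight from an element-count dictionary."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

cyclohexanone = {"C": 6, "H": 10, "O": 1}   # Michael donor (after enolization)
mvk           = {"C": 4, "H": 6,  "O": 1}   # methyl vinyl ketone, the Michael acceptor
water         = {"H": 2, "O": 1}            # lost in the final dehydration
octalone      = {"C": 10, "H": 14, "O": 1}  # the fused bicyclic enone product

lhs = mol_weight(cyclohexanone) + mol_weight(mvk)
rhs = mol_weight(octalone) + mol_weight(water)
print(f"reactants: {lhs:.3f} g/mol, products: {rhs:.3f} g/mol")
assert abs(lhs - rhs) < 1e-6   # balances once the water loss is counted
```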
In the Michael reaction, the ketone is deprotonated by a base to form an enolate nucleophile which attacks the electron acceptor (in red). This acceptor is generally an α,β-unsaturated ketone, although aldehydes , acid derivatives and similar compounds can work as well (see scope). In the example shown here, regioselectivity is dictated by the formation of the thermodynamic enolate. Alternatively, the regioselectivity is often controlled by using a β-diketone or β-ketoester as the enolate component, since deprotonation at the carbon flanked by the carbonyl groups is strongly favored. The intramolecular aldol condensation then takes place in such a way that installs the six-membered ring. In the final product, the three carbon atoms of the α,β-unsaturated system and the carbon α to its carbonyl group make up the four-carbon bridge of the newly installed ring.
In order to avoid a reaction between the original enolate and the cyclohexenone product, the initial Michael adduct is often isolated first and then cyclized to give the desired octalone in a separate step. [ 6 ]
Studies have been completed on the formation of the hydroxy ketones in the Robinson annulation reaction scheme. The trans compound is favored due to antiperiplanar effects of the final aldol condensation in kinetically controlled reactions. It has also been found, however, that the cyclization can proceed in a synclinal orientation. The figure below shows the three possible stereochemical pathways, assuming a chair transition state. [ 7 ]
It has been postulated that the difference in the formation of these transition states and their corresponding products is due to solvent interactions. Scanio found that changing the solvent of the reaction from dioxane to DMSO gives different stereochemistry in step D above. This suggests that the presence of protic or aprotic solvents gives rise to different transition states. [ 8 ]
Robinson annulation is one notable example of a wider class of chemical transformations termed Tandem Michael-aldol reactions, that sequentially combine Michael addition and aldol reaction into a single reaction. As is the case with Robinson annulation, Michael addition usually happens first to tether the two reactants together, then aldol reaction proceeds intramolecularly to generate the ring system in the product. Usually five- or six-membered rings are generated.
Although the Robinson annulation is generally conducted under basic conditions, reactions have been conducted under a variety of conditions. Heathcock and Ellis report similar results to the base-catalyzed method using sulfuric acid . [ 2 ] The Michael reaction can occur under neutral conditions through an enamine . A Mannich base can be heated in the presence of the ketone to produce the Michael adduct. [ 6 ] Successful preparation of compounds using the Robinson annulation methods have been reported. [ 9 ]
A typical Michael acceptor is an α,β-unsaturated ketone, although aldehydes and acid derivatives work as well. In addition, Bergmann et al. reports that donors such as nitriles , nitro compounds, sulfones and certain hydrocarbons can be used as acceptors. [ 10 ] Overall, Michael acceptors are generally activated olefins such as those shown below where EWG refers to an electron withdrawing group such as cyano, keto, or ester as shown.
The Wichterle reaction is a variant of the Robinson annulation that replaces methyl vinyl ketone with 1,3-dichloro- cis -2-butene. This gives an example of using a different Michael acceptor from the typical α,β-unsaturated ketone. The 1,3-dichloro- cis -2-butene is employed to avoid undesirable polymerization or condensation during the Michael addition. [ 11 ]
The reaction sequence in the related Hauser annulation is a Michael addition followed by a Dieckmann condensation and finally an elimination. The Dieckmann condensation is a similar ring closing intramolecular chemical reaction of diesters with base to give β-ketoesters. The Hauser donor is an aromatic sulfone or methylene sulfoxide with a carboxylic ester group in the ortho position. The Hauser acceptor is a Michael acceptor . In the original Hauser publication ethyl 2-carboxybenzyl phenyl sulfoxide reacts with pent-3-ene-2-one with LDA as a base in THF at −78 °C. [ 12 ]
Asymmetric synthesis of Robinson annulation products most often involves the use of a proline catalyst . Studies report the use of L-proline as well as several other chiral amines as catalysts during both steps of the Robinson annulation reaction. [ 13 ] The advantage of using optically active proline catalysts is that they are stereoselective, with enantiomeric excesses of 60–70%. [ 14 ]
Wang, et al. reported the one-pot synthesis of chiral thiochromenes by such an organocatalytic Robinson annulation. [ 15 ]
The Wieland–Miescher ketone is the Robinson annulation product of 2-methyl-cyclohexane-1,3-dione and methyl vinyl ketone. This compound is used in the syntheses of many steroids possessing important biological properties and can be made enantiopure using proline catalysis. [ 14 ]
F. Dean Toste and co-workers [ 16 ] have used Robinson annulation in the total synthesis of (+)-fawcettimine, a tetracyclic Lycopodium alkaloid that has potential application to inhibiting the acetylcholine esterase .
Scientists at Merck discovered platensimycin , a novel antibiotic lead compound with potential medicinal applications as seen in the adjacent picture. [ 17 ]
Initial synthesis gave a racemic form of the compound using an intramolecular etherification reaction of the alcohol motifs and the double bond. Yamamoto and coworkers report the use of an alternative intramolecular Robinson annulation to provide a straightforward enantioselective synthesis of tetracyclic core of platensimycin. The key Robinson annulation step was reported to be accomplished in one pot using L-proline for chiral control. The reaction conditions can be seen below. [ 18 ] | https://en.wikipedia.org/wiki/Robinson_annulation |
In mathematics , Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by Raphael M. Robinson in 1950. [ 1 ] It is usually denoted Q .
Q is almost [ clarification needed ] PA without the axiom schema of mathematical induction . Q is weaker than PA but it has the same language, and both theories are incomplete . Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable .
The background logic of Q is first-order logic with identity , denoted by infix '='. The individuals, called natural numbers , are members of a set called N with a distinguished member 0 , called zero . There are three operations over N : a unary operation called successor , denoted by prefix S ; and two binary operations, addition and multiplication , denoted by infix + and · respectively.
The following axioms for Q are Q1–Q7 in Burgess (2005 , p. 42) (cf. also the axioms of first-order arithmetic ). Variables not bound by an existential quantifier are bound by an implicit universal quantifier :
1. Sx ≠ 0
2. (Sx = Sy) → x = y
3. y = 0 ∨ ∃x (Sx = y)
4. x + 0 = x
5. x + Sy = S(x + y)
6. x·0 = 0
7. x·Sy = (x·y) + x
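A minimal sketch of these seven axioms as a bare Lean 4 axiomatization (the names N, zero, succ, add, mul and Q1–Q7 are illustrative choices, not from the cited sources):

```lean
-- Robinson arithmetic Q, sketched as unproven Lean 4 axioms.
axiom N : Type
axiom zero : N
axiom succ : N → N
axiom add : N → N → N
axiom mul : N → N → N

axiom Q1 : ∀ x : N, succ x ≠ zero                    -- 0 is not a successor
axiom Q2 : ∀ x y : N, succ x = succ y → x = y        -- successor is injective
axiom Q3 : ∀ y : N, y = zero ∨ ∃ x : N, succ x = y   -- every nonzero element is a successor
axiom Q4 : ∀ x : N, add x zero = x
axiom Q5 : ∀ x y : N, add x (succ y) = succ (add x y)
axiom Q6 : ∀ x : N, mul x zero = zero
axiom Q7 : ∀ x y : N, mul x (succ y) = add (mul x y) x
```

Note the single existential quantifier in Q3, matching the later remark that the axioms of Q contain only one existential quantifier.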
The axioms in Robinson (1950) are (1)–(13) in Mendelson (2015 , pp. 202–203). The first 6 of Robinson's 13 axioms are required only when, unlike here, the background logic does not include identity.
The usual strict total order on N , "less than" (denoted by "<"), can be defined in terms of addition via the rule x < y ↔ ∃ z ( Sz + x = y ) . Equivalently, we get a definitional conservative extension of Q by taking "<" as primitive and adding this rule as an eighth axiom; this system is termed " Robinson arithmetic R " in Boolos, Burgess & Jeffrey (2002 , Sec 16.4).
A different extension of Q , which we temporarily call Q+ , is obtained if we take "<" as primitive and add (instead of the last definitional axiom) the following three axioms to axioms (1)–(7) of Q :
¬(x < 0)
x < Sy ↔ (x < y ∨ x = y)
x < y ∨ x = y ∨ y < x
Q+ is still a conservative extension of Q , in the sense that any formula provable in Q+ not containing the symbol "<" is already provable in Q . (Adding only the first two of the above three axioms to Q gives a conservative extension of Q that is equivalent to what Burgess (2005 , p. 56) calls Q* . See also Burgess (2005 , p. 230, fn. 24), but note that the second of the above three axioms cannot be deduced from "the pure definitional extension" of Q obtained by adding only the axiom x < y ↔ ∃ z ( Sz + x = y ) .)
Among the axioms (1)–(7) of Q , axiom (3) needs an inner existential quantifier. Shoenfield (1967 , p. 22) gives an axiomatization that has only (implicit) outer universal quantifiers, by dispensing with axiom (3) of Q but adding the above three axioms with < as primitive. That is, Shoenfield's system is Q+ minus axiom (3), and is strictly weaker than Q+ , since axiom (3) is independent of the other axioms (for example, the ordinals less than ω ω {\displaystyle \omega ^{\omega }} form a model for all axioms except (3) when Sv is interpreted as v + 1). Shoenfield's system also appears in Boolos, Burgess & Jeffrey (2002 , Sec 16.2), where it is called the " minimal arithmetic " (also denoted by Q ). A closely related axiomatization, that uses "≤" instead of "<", may be found in Machover (1996 , pp. 256–257).
On the metamathematics of Q see Boolos, Burgess & Jeffrey (2002 , chpt. 16), Tarski, Mostowski & Robinson (1953) , Smullyan (1991) , Mendelson (2015 , pp. 202–203) and Burgess (2005 , §§1.5a, 2.2). The intended interpretation of Q is the natural numbers and their usual arithmetic in which addition and multiplication have their customary meaning, identity is equality , Sx = x + 1, and 0 is the natural number zero .
Any model (structure) that satisfies all axioms of Q except possibly axiom (3) has a unique submodel ("the standard part") isomorphic to the standard natural numbers ( N , +, ·, S, 0) . (Axiom (3) need not be satisfied; for example the polynomials with non-negative integer coefficients form a model that satisfies all axioms except (3).)
Q , like Peano arithmetic , has nonstandard models of all infinite cardinalities . However, unlike Peano arithmetic, Tennenbaum's theorem does not apply to Q , and it has computable non-standard models. For instance, there is a computable model of Q consisting of integer-coefficient polynomials with positive leading coefficient, plus the zero polynomial, with their usual arithmetic.
A notable characteristic of Q is the absence of the axiom scheme of induction . Hence it is often possible to prove in Q every specific instance of a fact about the natural numbers, but not the associated general theorem. For example, 5 + 7 = 7 + 5 is provable in Q , but the general statement x + y = y + x is not. Similarly, one cannot prove that Sx ≠ x . [ 2 ] A model of Q that fails many of the standard facts is obtained by adjoining two distinct new elements a and b to the standard model of natural numbers and defining Sa = a , Sb = b , x + a = b and x + b = a for all x , a + n = a and b + n = b if n is a standard natural number, x ·0 = 0 for all x , a · n = b and b · n = a if n is a non-zero standard natural number, x · a = a for all x except x = a , x · b = b for all x except x = b , a · a = b , and b · b = a . [ 3 ]
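The model just described can be checked mechanically on a finite sample. A small Python sketch (encoding the two new elements as the strings "a" and "b" is an illustrative choice) implements the stated rules for S, + and ·, verifies axioms (4)–(7) on a sample, and exhibits the failure of commutativity:

```python
# Elements of the model: standard naturals as Python ints, plus the two
# new points, encoded here as the strings "a" and "b".

def S(x):
    return x + 1 if isinstance(x, int) else x      # Sa = a and Sb = b

def add(x, y):
    if isinstance(y, int):                          # a + n = a, b + n = b
        return x + y if isinstance(x, int) else x
    return "b" if y == "a" else "a"                 # x + a = b, x + b = a, for all x

def mul(x, y):
    if y == 0:                                      # x * 0 = 0 for all x
        return 0
    if isinstance(y, int):                          # nonzero standard y
        if x == "a": return "b"                     # a * n = b
        if x == "b": return "a"                     # b * n = a
        return x * y
    if y == "a":
        return "b" if x == "a" else "a"             # a * a = b, else x * a = a
    return "a" if x == "b" else "b"                 # b * b = a, else x * b = b

sample = list(range(4)) + ["a", "b"]

# The recursion axioms (4)-(7) hold on the sample...
assert all(add(x, 0) == x for x in sample)
assert all(add(x, S(y)) == S(add(x, y)) for x in sample for y in sample)
assert all(mul(x, 0) == 0 for x in sample)
assert all(mul(x, S(y)) == add(mul(x, y), x) for x in sample for y in sample)

# ...yet addition fails to commute: a + 0 = a while 0 + a = b.
assert add("a", 0) == "a" and add(0, "a") == "b"
```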
Q is interpretable in a fragment of Zermelo's axiomatic set theory , consisting of extensionality , existence of the empty set , and the axiom of adjunction . This theory is S' in Tarski, Mostowski & Robinson (1953 , p. 34) and ST in Burgess (2005 , pp. 90–91, 223). See general set theory for more details.
Q is a finitely axiomatized first-order theory that is considerably weaker than Peano arithmetic (PA), and whose axioms contain only one existential quantifier . Yet like PA it is incomplete and incompletable in the sense of Gödel's incompleteness theorems , and essentially undecidable. Robinson (1950) derived the Q axioms (1)–(7) above by noting just what PA axioms are required [ 4 ] to prove that every computable function is representable in PA. [ 5 ] The only use this proof makes of the PA axiom schema of induction is to prove a statement that is axiom (3) above, and so, all computable functions are representable in Q . [ 6 ] [ 7 ] [ 8 ] The conclusion of Gödel's second incompleteness theorem also holds for Q : no consistent recursively axiomatized extension of Q can prove its own consistency, even if we additionally restrict Gödel numbers of proofs to a definable cut. [ 9 ] [ 10 ] [ 11 ]
The first incompleteness theorem applies only to axiomatic systems defining sufficient arithmetic to carry out the necessary coding constructions (of which Gödel numbering forms a part). The axioms of Q were chosen specifically to ensure they are strong enough for this purpose. Thus the usual proof of the first incompleteness theorem can be used to show that Q is incomplete and undecidable. This indicates that the incompleteness and undecidability of PA cannot be blamed on the only aspect of PA differentiating it from Q , namely the axiom schema of induction .
Gödel's theorems do not hold when any one of the seven axioms above is dropped. These fragments of Q remain undecidable, but they are no longer essentially undecidable: they have consistent decidable extensions, as well as uninteresting models (i.e., models that are not end-extensions of the standard natural numbers). [ citation needed ] | https://en.wikipedia.org/wiki/Robinson_arithmetic |
The Robinson oscillator is an electronic oscillator circuit originally devised for use in the field of continuous wave (CW) nuclear magnetic resonance (NMR). It was a development of the marginal oscillator . Strictly one should distinguish between the marginal oscillator and the Robinson oscillator, although sometimes they are conflated and referred to as a Robinson marginal oscillator . Modern magnetic resonance imaging (MRI) systems are based on pulsed (or Fourier transform) NMR; they do not rely on the use of such oscillators.
The key feature of a Robinson oscillator is a limiter in the feedback loop. This means that a square wave current, of accurately-fixed amplitude, is fed back to the tank circuit. The tank selects the fundamental of the square wave, which is amplified and fed back. This results in an oscillation with well-defined amplitude; the voltage across the tank circuit is proportional to its Q-factor.
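The amplitude statement can be made quantitative with standard circuit analysis (an illustrative derivation, not drawn from the cited literature): a square-wave current of amplitude I₀ has a fundamental Fourier component of amplitude 4I₀/π, and a parallel RLC tank at its resonant frequency ω₀ presents a purely resistive impedance Q·ω₀L, so the tank voltage is

```latex
V_{\text{tank}} \approx \frac{4 I_{0}}{\pi}\, Q\, \omega_{0} L ,
```

directly proportional to the Q-factor, as stated.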
The marginal oscillator has no limiter. It is arranged for the working point of one of the amplifier elements to operate at a nonlinear part of its characteristic and this determines the amplitude of oscillation. This is not as stable as the Robinson arrangement.
The Robinson oscillator was invented by British physicist Neville Robinson .
| https://en.wikipedia.org/wiki/Robinson_oscillator |