This article is a list of suicide bombings during 2007 in Iraq.
January 8: A suicide truck bomber attacked a checkpoint in Ramadi killing two policemen.
January 10: Two suicide bombers attacked separately in Tal Afar killing five people and wounding 15 others.
January 15: A suicide bomber attacked an office of the Kurdistan Democratic Party in Mosul killing 5 people and wounding 28.
January 16: Mustansiriya University bombings: A double car bombing, including one suicide attack, killed 70 people and wounded 180 at the Mustansiriya University in Baghdad. Shortly after a bomb exploded in a Baghdad motorcycle market a suicide bomber attacked the police and first responders who had arrived at the scene, killing 13 people.
January 17: A suicide car bomb struck a market in Sadr City, killing 17 people. A suicide bomber attacked near a police headquarters in Kirkuk killing 10 people. Police shot and killed a suicide bomber after he attempted to attack a police checkpoint in Ramadi.
January 18: A suicide bomber attacked a police patrol in Mosul killing one civilian and wounding six people, including four policemen.
January 21: A suicide bomber attacked an Iraqi army patrol in Mosul killing one woman.
January 22: Bab al-Sharqi market bombings: A parked car bomb followed immediately by a suicide car bomber struck a predominantly Shiite commercial area in the Bab al-Sharqi market in central Baghdad, killing 88 people.
January 23: A suicide car bomber attacked the Kurdistan Youth Federation, an affiliate of the KDP, in Mosul, killing and wounding nine people.
January 24: A suicide bomber attacked a police patrol in Baghdad killing four policemen.
January 25: A suicide car bomber killed 26 people and wounded 55 at a busy intersection in the Karrada district of Baghdad.
January 26: A suicide bomber attacked a Shi'ite mosque near Mosul killing one person. A suicide bomber attacked an army patrol in Baghdad killing two soldiers. Police killed a suicide car bomber attempting to attack their checkpoint in Ramadi.
January 27: 13 people were killed in a double suicide bombing in Baghdad. A suicide car bomb exploded outside a Shiite mosque in Kirkuk killing the vehicle driver and passenger.
January 28: In the first attack of its kind, a suicide bomber targeted an Emergency Response Unit in Ramadi with a chlorine-laden truck bomb. 16 people were killed in the blast, but the chlorine did not appear to injure anyone. A suicide bomber blew himself up in Kirkuk, killing eight people.
January 29: A suicide car bomber attacked a police checkpoint in the Hurriyah district of Baghdad killing 4 people and wounding 5.
January 30: A suicide bomber killed 23 people and wounded 57 in an attack on a Shi'ite mosque in Balad Ruz. A suicide bomber attacked a checkpoint protecting religious pilgrims commemorating the Ashura holiday in Hafriya killing two people.
February 1: Six people were killed and 12 wounded when a suicide bomber blew himself up in a minibus in the central Baghdad district of Karrada. Two suicide bombers blew themselves up in a crowded outdoor market in Hilla, south of Baghdad, killing 73 people.
February 3: Sadriyah market bombing: A suicide bomber blew up his vehicle in Baghdad's Sadriya market, killing 135 people and wounding 305, in the deadliest single bombing since the 2003 US-led invasion. A suicide bomber inadvertently veered into and blew up an ambulance in Mosul, killing a pregnant woman and wounding two others. Police sources suggested his intended target was the Al Boursah Market.
February 8: A suicide bomber attacked an Iraqi police checkpoint north of Haditha in Anbar province, killing seven policemen and wounding three.
February 10: A suicide car bomber killed five people and wounded 10 near a queue outside a bakery in the mainly Shi'ite district of Karrada. A suicide car bomber killed one Iraqi soldier and wounded five people, including three civilians, as it targeted an army checkpoint in the northern Iraqi town of Tal Afar.
February 11: In Tikrit, 80 miles north of Baghdad, a suicide truck bomber slammed into a crowd of police lining up for duty outside their station, collapsing the building, killing at least 30 people and wounding 50. One policeman was wounded when a suicide bomber exploded near a Shi'ite mosque in the Ilaam district in southern Baghdad.
February 13: A suicide bomber blew up a truck near a Baghdad college in the western district of Iskan, killing 18 people and wounding 40.
February 14: A suicide car bomber killed at least eight policemen and wounded 20 others when he blew up his vehicle at the entrance of a police station in the western Iraqi city of Ramadi, police sources said. The officer in charge of the station, Colonel Salam al-Dulaimi, died in the blast.
February 17: A suicide bomber attacked a checkpoint near Kerbala wounding two policemen. A double suicide attack killed 10 people and wounded 83 in Kirkuk.
February 19: At least one, and possibly as many as three suicide car bombers attacked a US combat outpost north of Baghdad, killing two American soldiers and wounding 29 others. Two suicide bombers killed 11 people, including five police officers, when they attacked the house of a tribal leader in Ramadi. A chlorine-laden truck was detonated by a suicide bomber in Ramadi, killing two members of the Iraqi security forces. A suicide bomber attacked the house of the army chief in Dhuluiya, killing five and wounding fifteen.
February 20: Seven people were killed by a suicide bomber during a funeral in Baghdad. A suicide car bomber hit a vegetable market in a Shiite enclave of the Sunni Dora district in southern Baghdad. At least five people were killed and seven injured.
February 21: A suicide bombing on a police checkpoint in Najaf killed 12 people, including seven policemen.
February 24: A suicide truck bomber killed 52 people at a mosque in Habbaniyah. A suicide car bomber killed one civilian in southern Baghdad. A suicide bomber attacked outside an SCIRI compound in Baghdad, killing three people, although the compound was not his target.
February 25: A suicide bomber attacked a college campus in Baghdad killing 41 people, mostly students.
February 26: A suicide bomber attacked a police station in Ramadi killing 14 people. A suicide bomber attacked a checkpoint near Kirkuk killing one Iraqi soldier.
February 27: A suicide bomber attacked an Iraqi police station in Mosul killing 7 policemen and wounding 47 people, including 15 other policemen. A suicide bomber killed four people near Mosul.
February 28: A suicide bomber attacked an Iraqi police station in Baghdad killing 2 policemen and wounding another 4.
March 3: A suicide bomber killed 3 policemen and 9 civilians in Ramadi.
March 5: A suicide bomber killed 38 and wounded 105 people at Mutanabbi Street book market in Baghdad.
March 6: Hillah bombings: Two suicide vest bombers killed 120 people and wounded 190 when they targeted Shi'ite pilgrims in Hillah.
March 7: A suicide bomber killed 30 at a restaurant in Balad Ruz, in the Diyala province. A suicide bomber attacked a checkpoint in Baghdad killing 12 policemen and 10 civilians.
March 8: A suicide car bomber struck a police patrol in Mosul, killing four policemen.
March 10: A suicide bomber targeting a military patrol in Sadr City killed 18 people, including 6 soldiers, and wounded 48.
March 11: A suicide car bomber rammed into a truck carrying Shiite pilgrims returning from a religious commemoration, killing 32 people. A suicide bomber attacked the offices of the Iraqi Islamic Party in Mosul, killing three guards. A suicide bomber killed 10 people in an attack between Talbiya Bridge and Mustansiriya Square.
March 14: A man wearing an explosives belt strolled into an outdoor market in Tuz Khormato and blew himself up, killing 8 and wounding 25. A suicide car bomber slammed into an Iraqi army checkpoint in the Sunni neighborhood of Yarmouk, killing 2 civilians and wounding 4 others.
March 15: A suicide bomber attacked an Iraqi army and police checkpoint in central Baghdad, killing eight policemen and soldiers and wounding 25. A suicide bomber targeted an Iraqi army checkpoint killing one Iraqi soldier in the Yarmouk district in Baghdad. A suicide bomber struck in the Karada district in Baghdad killing two civilians. A suicide bomber attacked a military checkpoint under construction west of Baquba wounding 10 Iraqi Army soldiers. A suicide bomber rammed his car into a bus killing four people in Iskandariyah.
March 16: Three suicide bombers driving chlorine-laden trucks wounded 350 Iraqis in co-ordinated attacks across Al Anbar province. The bombers struck in Ramadi, Amiriyah, and the Albu Issa tribal region south of Fallujah. A suicide bomber wounded 11 people, including 4 policemen, in Diyala province.
March 17: A suicide car bomb hit a checkpoint in Baghdad's Harthiya district, killing three and wounding five. Iraqi soldiers from 3rd Brigade, 5th Iraqi Army Division killed a suicide bomber south of Shakarat. The bomber ignored several verbal warnings to stop, and upon being shot his vest detonated.
March 18: An insurgent car bomb was waved through a security checkpoint in Azamiya, Northern Baghdad, after troops noticed two children were sitting in the back seats. Using the children as a decoy, the driver then gained permission to leave his vehicle parked next to a crowded marketplace in the district. With both minors still on board the car bomb detonated, killing them along with at least three other people.
March 19: A suicide bomber attacked a Shiite mosque in Baghdad killing 6 people and wounding 32.
March 20: A suicide car bomber targeted an Iraqi army checkpoint in the Jami'a district of Baghdad, killing one soldier and wounding another.
March 21: A suicide truck bomber killed five and wounded 40 when he attacked the headquarters of a Kurdish party, the Patriotic Union of Kurdistan, in Mosul.
March 23: Deputy prime minister Salam Al-Zubaie was seriously injured in a high-profile assassination attempt by a suicide bomber at a prayer hall in his own residential compound. Eight members of his entourage were killed, and there were reports the bomber could have been one of his own bodyguards.
March 24: A suicide truck bombing destroyed a Baghdad police station, killing 33 officers and wounding another 44 people. In Haswa, a suicide truck bomber killed 11 people and wounded 45 more near a mosque. A suicide bomber blew himself up in a Tal Afar marketplace, killing ten and wounding three people. A suicide bomber attacked a US-Iraqi joint checkpoint in Ramadi wounding three Iraqi soldiers. Three suicide bombers attacked a police station and two checkpoints near Al Qaim on the Syrian border killing 17 policemen and 3 civilians.
March 25: Two soldiers died after a suicide car bomber struck an Iraqi army checkpoint in Baqouba.
March 26: Near the Shorja marketplace in central Baghdad, a suicide car bomber killed two people and injured five others. Two suicide truck bombers attacked a U.S. military outpost near Fallujah wounding 8 American soldiers.
March 27: Tal Afar bombing: In the deadliest single blast of the four-year-old insurgency, 152 people were killed and 347 wounded when a suicide truck bomber targeted a Shi'ite district of Tal Afar. 100 homes were destroyed in the blast. Outside Ramadi, a suicide truck bomber attacked a roadside restaurant where he killed 17 people and wounded 32 others. In an internal conflict between insurgent groups, a suicide bomber killed a leader of a rival group in the Abu Ghraib suburb of Baghdad. In Ramadi, a suicide bomber killed one person and injured seven others. A suicide bomber killed himself and two policemen in Baquba.
March 28: Two suicide truck bombs, one of which contained chlorine gas, detonated outside the Fallujah Government Center. The initial blasts were followed by a sustained attack involving gunfire and two suicide bombers on foot. In total 14 US personnel and 57 Iraqi forces suffered injuries. A suicide car bomber drove into an Iraqi army post in Hay al-Jamiya in Baghdad killing one soldier and wounding three others. A suicide bomber attacked a school used by U.S. forces in Haditha.
March 29: Al-Shaab market bombings: A pair of suicide bombers on foot killed 82 people in a market in Baghdad's Shaab neighborhood. Three suicide car bombers attacked a market in the town of Khalis, killing 43-53 people.
March 31: In Tuz Khormato, a suicide car bombing killed two Shi’ite laborers and wounded 11 more.
April 1: East of Mosul at an Army base in Sinaea, two suicide truck bombs killed two people and wounded 17 others.
April 2: A suicide bomber attacked a police station in Kirkuk killing 15 people, including one U.S. soldier. In Baghdad, a suicide car bomber drove into a police checkpoint in the Doura neighborhood where he killed two people and wounded five others. A suicide bomber killed three people and wounded 20 near a popular Khalis restaurant.
April 5: A suicide truck bomber attacked a Baghdad satellite television station run by Iraq's biggest Sunni political party, killing one person and wounding three.
April 6: A suicide truck bomb containing chlorine detonated at a police checkpoint in Ramadi, killing 27 people. A suicide car bomb with two attackers on board hit a checkpoint south of Baghdad, but only the bombers were harmed.
April 7: A suicide bomber attacked a security checkpoint in Samarra killing five policemen. A suicide bomber in Baghdad killed one Iraqi soldier in an attack on a checkpoint in Sadr City.
April 8: A suicide bomber killed seven people in the Ilaam district of Baghdad.
April 10: A female suicide bomber on foot killed 17 recruits and injured 33 others outside a police station in the majority Sunni Muslim town of Muqdadiya.
April 12: Parliament bombing: A suicide bomber penetrated the Green Zone and exploded himself in a cafeteria within the parliament building, killing Iraqi MP Mohammed Awadh and wounding more than twenty other people. A suicide truck bomb killed 10 people when it detonated in the middle of Baghdad's al-Sarafiya bridge, collapsing large parts of the steel structure and sending cars plunging into the river below.
April 14: A suicide car bomber killed at least 44 people and wounded 224 at a crowded bus station near a major Shi'ite shrine in Kerbala. A suicide car bomber detonated his device near a checkpoint at Baghdad's Jadriyah bridge, killing 10 people. A suicide car bomber killed five Iraqi soldiers and wounded four others when he targeted a checkpoint in Baiji. Four would-be suicide attackers were killed in Kirkuk when one of them detonated his explosives belt prematurely, said Police Brig. Adil Zain-Alabideen. No civilians were hurt.
April 15: A suicide bomber blew himself up on a small bus killing six people and wounding 11 in a Shiite area of northwestern Baghdad. In Mosul six people were killed in a double suicide car bomb attack on an Iraqi army base. Four Iraqi soldiers were among the dead.
April 16: Nine people were killed and ten wounded when a suicide car bomber targeted a police directorate in Ishaqi.
April 17: A suicide bomber in a tanker targeted a police patrol east of Mosul, killing one civilian and wounding four Iraqi soldiers.
April 18: Baghdad bombings: A suicide bomber killed 41, including 5 policemen, and wounded 76 in Sadr City. A suicide bomber attacked a police checkpoint in Baghdad's Sadiyah district killing two policemen and wounding eight. A suicide bomber killed two policemen and wounded four people when he targeted a police patrol near Baghdad. A suicide bomber injured seven people near Mosul.
April 19: In Baghdad a suicide car bomber drove his vehicle into a fuel tanker killing 12 and wounding 34 people.
April 20: A suicide truck bomber killed a civilian and wounded 8 U.S. troops when he detonated his vehicle under a highway overpass near Saqlawiya. A suicide truck bomber targeted a police station near Falluja, killing two civilians and wounding 37.
April 22: A double suicide attack on a police station in Baghdad killed 12 people and wounded 95 others. Most of the dead were civilians.
April 23: Nine American soldiers were killed and 20 wounded in a double suicide truck bombing at a military base in Diyala province. The U.S. military claimed only one vehicle was involved, but witnesses and Al Qaeda insisted two separate suicide truck bombs had been used. A suicide belt bomber attacked a restaurant near the entrance to the Green Zone, killing seven people and wounding 16. Three suicide car bombs hit a restaurant and two checkpoints in Ramadi, killing between 20 and 29 people. A suicide car bomber killed 10 people and wounded 20 at a PDK office near Mosul. A suicide car bomber killed 10 policemen, including the chief of police, and wounded 23 more when he targeted a gathering of senior police officials in Baquba. A suicide car bomber targeted Diyala Governorate's hall, killing 4 and injuring 25.
April 24: A suicide truck bomb targeted a police patrol in the Albufarraj area near Ramadi, killing 25 people and wounding 44.
April 25: A suicide vest bomber attacked a police station in Balad Ruz killing nine people, including at least four policemen, and wounding 16 others.
April 26: A suicide car bomber killed at least ten Iraqi soldiers and wounded 15 other people at an Iraqi army checkpoint in Khalis. Two suicide bombers detonated 50 yards from a PDK office in Zumar near Mosul, killing three security guards.
April 27: A suicide bomber attacked the home of the chief of police in Hit, killing 10-15 people. A suicide bomber exploded himself near a checkpoint in Kisk north of Kirkuk, killing four policemen.
April 28: A suicide car bomber killed 60 people in Karbala when he struck a checkpoint outside the al-Abbas shrine. A suicide bomber attacked a military checkpoint in Khalis, killing one Iraqi soldier and wounding three others.
April 30: A suicide vest bomber targeted a Shi'ite funeral in Khalis, killing at least 32 people. A suicide car bomber detonated his vehicle inside a subway tunnel near Nisour square in Baghdad, killing two civilians and injuring 15. A suicide car bomb injured four people when it exploded in Baghdad's Hay Al-Ja'mia neighborhood near the Mula Huaish mosque.
May 2: A suicide car bomber struck a police car near Al Rafidein police station in Sadr City, killing between four and nine people.
May 4: A suicide car bomber targeted the national police HQ in Baghdad's Doura neighbourhood, but it was not clear if any casualties were caused.
May 5: A suicide car bomber killed one person when he targeted the Karkh police directorate in Baghdad's Yarmuk neighbourhood. A suicide vest bomber exploded himself amongst a queue of Iraqi army recruits in Abu Ghraib, killing 15. McClatchy reported that the attack was caused by two suicide car bombs, but all other news reports, as well as Al Qaeda's own communique, attributed it to a lone vest bomber.
May 6: A suicide car bomb exploded near the police directorate in Samarra, killing up to 12 police officers. CNN reported that two US soldiers were also killed in the attack.
May 7: Two suicide car bombers struck a market and a police checkpoint near Ramadi, killing 13 people. A suicide car bomber attacked a police checkpoint on the outskirts of Baghdad, killing eight policemen and wounding 12.
May 8: A suicide car bomber struck a market in the Shi'ite city of Kufa, killing 16 people and wounding 70 others. A suicide vest bomber wearing a police uniform exploded himself inside a police station in the town of Jalawla during morning roll call, killing two to five police officers.
May 9: A suicide truck bomb detonated outside the Interior Ministry in Irbil, killing at least 19 people and wounding 80.
May 11: A pair of suicide car bombers hit Iraqi police checkpoints on two bridges crossing the Diyala River, a Tigris tributary. The attacks on the southern edge of Baghdad in a Shi'ite area killed 23 people, including 11 police officers, and badly damaged one of the bridges. A third truck bomb struck a bridge near the town of Taji just north of Baghdad, followed immediately by a car bomb which killed four soldiers, but agencies did not report whether either of those bombings were suicide attacks.
May 12: According to McClatchy, police commandos manning a checkpoint opened fire on a truck bomb as it was being driven up to a petrol station in Baghdad's Al-Meda'en neighbourhood, causing it to explode and kill just the driver. CNN, however, reported that the explosion killed two civilians and was caused by a parked car bomb.
May 13: 50 people were killed in a suicide truck bombing targeting a KDP office in the town of Makhmoor in northern Iraq.
May 14: Two Iraqi soldiers were killed when a suicide car bomber attacked a military checkpoint in Baghdad's Mansour neighborhood.
May 15: McClatchy reported that a suicide car bomb struck a market in Abu Saida town, Diyala province, killing 12 and injuring 22. Reuters meanwhile put the death toll as high as 45 and reported that the attack was a chlorine bombing, but made no reference to it being a suicide attack. A suicide car bomber hit an Iraqi army checkpoint near Mosul, wounding four soldiers.
May 16: Heavy street fighting erupted in Mosul in which there were up to 10 car bombs exploding, seven of which were suicide bombings. 10 police officers, one soldier, one civilian and 15 insurgents were killed in the fighting. Seven tribesmen were killed during a suicide bombing at a checkpoint near Fallujah. A soldier died in a suicide bombing at a checkpoint in the Hadeed area of west Baquba.
May 18: A suicide bomber attacked an Iraqi police checkpoint in Mussayab killing three people and wounding four, mostly policemen. A suicide bomber killed three policemen and wounded two in Hilla. A suicide bomber detonated his cargo near a U.S. convoy in Fallujah.
May 20: Two suicide bombers targeted an Iraqi army checkpoint and military HQ in Baghdad, killing one soldier and one civilian. A suicide truck bomber using chlorine gas attacked a police checkpoint in Zangora district west of Ramadi, killing between two and 11 people.
May 21: A suicide car bomber rammed his cargo into a checkpoint in Fallujah. No casualty figures were released.
May 22: A 17-year-old suicide vest bomber blew himself up in the house of two brothers affiliated with the Anbar Salvation Council. Ten people were killed, including the intended targets Sheik Mohammed Ali and police Lt. Col. Abed Ali, as well as their wives and children. A suicide car bomber targeted a police checkpoint on the Al Mikaneek bridge in Baghdad's Doura district, killing one police officer and wounding three other people.
May 23: A suicide vest bomber killed 15-20 people in a cafe in Mandali, a mainly Shiite Kurd town near the Iranian border. A suicide bomber killed a policeman and wounded three others in the Doura section of Baghdad.
May 24: Reuters reported that a suicide car bomber targeted a funeral procession in Fallujah, killing at least 28 people. AP attributed the explosion to a parked car bomb, however. A suicide car bomber killed an Iraqi soldier and wounded three others when he struck an Iraqi army checkpoint in northern Baghdad. A suicide vest bomber killed three civilians on a minibus in eastern Baghdad.
May 26: In Baghdad's Ghazaliya district, two people were killed and 11 wounded during a suicide car bomb attack on a checkpoint.
May 28: A suicide car bomber rammed his vehicle into a police checkpoint, injuring three officers and a child.
May 31: A suicide bomber killed 25 people, including 10 policemen, and wounded 30 more at a police recruitment center in Fallujah. In Ramadi, a suicide truck bomber killed five people and wounded 15 more. A suicide car bomber attacked a U.S. military checkpoint in Baghdad wounding 8 U.S. soldiers and 3 civilians.
June 1: A suicide truck bomber attacked what is thought to be an al-Qaeda safehouse; at least two insurgents were killed. A suicide truck bomber attacked a police lieutenant colonel's home in Shurqat killing 12 civilians.
June 2: A suicide car bomber at a checkpoint in Shurqat killed five Iraqis, including two soldiers and two policemen. A suicide bomber attacked a U.S. military patrol in Babil province killing one soldier. Another bomber was killed when his vest detonated after the soldiers fired on him.
June 3: A chlorine-laden car bomb – possibly driven by a suicide attacker – targeted FOB Warhorse near Baquba. In the aftermath of the attack at least 62 soldiers were sickened by noxious gas, but no-one was seriously injured. A suicide car bomber killed at least 10 people and injured 30 others when he targeted a police convoy in a busy market area in Balad Ruz.
June 4: Three Iraqi soldiers were killed when a suicide car bomber attacked their checkpoint near Taji. Two guards and 11 other people were injured when a suicide truck bomber attacked the home of a police brigadier.
June 5: A suicide car bomber killed 19 people and wounded 25 in a Fallujah marketplace. In Baghdad, a female suicide bomber detonated prematurely after security forces opened fire on her at an Interior Ministry police recruitment center in the Sadr al-Qanat neighborhood. Three police commandos were injured during the incident.
June 7: Near the Syrian border at Rabea, a suicide bomber killed 10 people, while wounding at least 30 Iraqis and five British contractors. Six people were wounded during a botched suicide truck attack at a police checkpoint near Ramadi; police fired at the driver and blew the truck up before it reached its destination.
June 9: A suicide truck bomber killed 14 Iraqi soldiers and wounded 30 more during an attack at a checkpoint near Hilla. In Baquba, two suicide bombers at a police checkpoint killed one officer.
June 10: A suicide truck bomber killed 14 policemen and wounded 42 more at a police station in Tikrit. A suicide truck bomber destroyed a pillar of a bridge over the main highway between Mahmudiya and Baghdad collapsing part of the bridge and killing 3 U.S. soldiers and wounding 6 soldiers and an Iraqi interpreter. South of Baquba, a suicide bomber killed two policemen and wounded three others at a police station.
June 12: A suicide car bomber in Ramadi killed three policemen and wounded 15 others.
June 13: In Ramadi, four policemen were killed and 11 wounded during a suicide car bombing at a checkpoint outside town. A suicide bomber in a Mandali police station killed three people, including the police chief, and wounded five others. A suicide bomber was killed in Baquba before he could detonate his cargo.
June 14: A gunman blew himself up in front of the Arabic Advisory Council office in Diyala province. A suicide bomber killed two policemen and injured five others in an attack in Fallujah.
June 17: A suicide vest bomber killed at least four civilians when he detonated himself amongst a crowd gathering to renew their Falluja residency badges in Jbil district. Three policemen were killed and seven more were wounded during a suicide car bombing in Baiji.
June 18: A suicide truck bomber targeted Iraqi security troops occupying the Al Mutawakil school in central Samarra. Gunmen also attacked the building as a diversion, and in total four soldiers and one civilian were killed.
June 19: Al-Khilani Mosque bombing: A suicide bomber killed 87 people and wounded some 200 more when he rammed his truck into the Khilani Shi'ite mosque in Baghdad.
June 20: A suicide car bomber killed five policemen and wounded 13 other officers in Ramadi.
June 21: A suicide truck bomber killed at least 20 people and wounded 75 when he rammed his vehicle into the municipal headquarters of Sulaiman Bek, about 90 km south of Kirkuk. A suicide truck bomb detonated near a building housing police commandos in Madaen 45 km south of Baghdad, killing three policemen and wounding 12.
June 22: Aswat Aliraq reported that a suicide bomber targeted a police checkpoint in al-Baghdadi, killing 20 policemen and wounding 10. A suicide bomber killed two people and wounded four when he blew himself up in a telecommunications office in Falluja. A suicide vest bomber attacked a police checkpoint at al-Somoud bridge in western Fallujah, killing three policemen.
June 23: A suicide vest bomber killed two policemen inside Fallujah market after being confronted by them. A car bomb with two apparent suicide bombers on board targeted a U.S. military patrol in Tikrit. The soldiers fired at the car, killing both occupants and causing the vehicle to crash without its cargo being detonated.
June 25: A suicide vest bomber blew himself up in the lobby of the Mansour Hotel in Baghdad, killing at least 12 people. Amongst the dead were six tribal leaders, two of their bodyguards, and an anchorman with Iraqiya state television. A suicide bomber in a fuel tanker struck Baiji police headquarters in northern Iraq, killing 27 people including up to 17 policemen. A suicide car bomber targeted a government compound in Hilla, killing at least eight people. Shortly after midday a suicide vest bomber detonated on a side-road near Al Waziriyah fuel station in Baghdad. No casualties were reported. In Siniyah, a suicide bomber killed two Iraqi soldiers and wounded three others at a checkpoint.
June 27: A suicide car bomber killed one police commando and wounded six others at a police checkpoint in al-Jaderiyia in Baghdad.
June 29: A suicide car bomber killed four people and wounded 11 when he targeted an Iraqi army position in the Tarmiya neighborhood of Baghdad. A suicide truck bomber killed six Iraqi soldiers and wounded five at an army post in Mishada.
June 30: A suicide bomber dressed as a policeman killed up to 25 people, mostly policemen and volunteers, when he blew himself up outside a police recruitment centre in Muqdadiya.
July 1: A suicide truck bomb hit a police checkpoint in Fallujah, killing two policemen. In Ramadi a suicide car bomb struck a police station or checkpoint, killing five policemen. It was also reported that a suicide truck bomb exploded north of Ramadi on a bridge crossing the Euphrates, damaging the bridge and injuring two civilians, though this and the other Ramadi attack were likely one and the same. A suicide bomber killed one civilian and wounded four others when he detonated his cargo during an approach to a police checkpoint near al-Jadriya bridge in Baghdad.
July 2: A suicide vest bomber targeted a Fallujah tribal leader, Sheik Kamel Mohammed al-Essawi, killing four civilians and wounding 10 others.
July 4: A suicide car bomber killed 15 people at a checkpoint near Ramadi. A suicide car bomber killed between three and seven people when he targeted a police patrol outside a restaurant in Baiji. A suicide car bomber killed two policemen and wounded seven others at a police checkpoint in al-Salam district of Baghdad. A suicide bomber killed four police commandos and wounded eight more in Doura district of Baghdad.
July 5: A suicide car bomber struck the convoy of a wedding party in Baghdad, killing 17 people.
July 6: A suicide car bomb detonated outside a cafe in the Shiite Kurdish village of Ahmad Maref near the Iranian border, killing 26 people. A suicide vest bomber attacked a funeral tent in the Shiite Kurdish village of Zargosh in Jalawla, killing 22. A Saudi man was detained while trying to carry out a suicide bomb attack in a truck carrying canisters of chlorine in Ramadi.
July 7: Amirli bombing: Approximately 150 Iraqis were killed and 250 wounded when a suicide truck bomb resembling an Iraqi military vehicle exploded in a busy market in the village of Amirli near Tuz Khurmatu. Some reports put the death toll higher than 160, which would make it the deadliest single insurgent bombing since the 2003 invasion. A suicide car bomber killed five Iraqi soldiers and one other person at an Iraqi army checkpoint in the Zayuna neighborhood of southeastern Baghdad. A suicide bomber attacked a military checkpoint in eastern Baghdad, reportedly wounding 23 people, though it was unclear if this and the Zayuna attack were one and the same.
July 8: A suicide bomber attacked a truck carrying military recruits south of Baghdad near Haswa, killing 23 recruits and wounding 27 more. A suicide bomber attacked a U.S. military patrol just west of Baghdad, killing one American soldier and wounding three others. A suicide bomber was killed along with three accomplices in Hilla when their bomb exploded prematurely.
July 9: A suicide car bomber killed three Iraqi soldiers and four policemen in an attack on a checkpoint in the Doura district of Baghdad. An unknown number of people were killed or wounded during a suicide car bombing at a funeral in the village of Zarghosh.
July 10: A suicide bomber killed one police commando and injured eight in an attack in Saidiya district of Baghdad. A suicide vest bomber on a bicycle detonated next to two police vehicles in the Al Jumhuriyah area of central Fallujah, wounding between one and three people.
July 11: In the town of Garmah, two suicide vest bombers blew themselves up amongst a crowd of the al-Jumailat tribe in the house of Sheikh Meshhin al-Khalaf. Later, two more suicide vest bombers mingled with people evacuating the casualties before detonating their explosives. In total some 21 people were killed and 50 wounded, many critically. In the Al Saidiyah neighborhood of Baghdad, police manning a checkpoint opened fire on an approaching car bomb, causing it to detonate, killing the driver.
July 12: Seven people were killed when a suicide vest bomber targeted guests celebrating the wedding of an Iraqi policeman in Tal Afar. For the second time in three days, a suicide vest bomber on a bicycle wounded a policeman at a checkpoint in Falluja. A suicide bomber killed two people when he targeted a police recruitment centre in Fallujah.
July 14: A suicide bomber plowed his explosives-packed vehicle into a line of cars queuing at a Baghdad gas station, killing seven people.
July 16: A double suicide car and truck bomb attack in Kirkuk left at least 85 people dead. The targets were the headquarters of the Patriotic Union of Kurdistan, and the Haseer food market. In Baghdad a suicide car bombing struck a police checkpoint on a road leading to an Interior Ministry building, killing four policemen and a civilian.
July 17: A suicide car bomb targeting an Iraqi Army patrol in Baghdad's Zayouna district killed between eight and 20 people.
July 22: Two suicide bombers in a minivan struck a house in Taji where Sunni tribal leaders opposed to al Qaeda were meeting, killing between three and five people.
July 23: Seven policemen were killed when a female suicide bomber detonated her explosives at a police checkpoint in Ramadi.
July 24: A suicide truck bomber struck a crowded market near a children's hospital in Hilla, killing 26 people.
July 25: Two suicide car bombers in Baghdad killed 50 Iraqi soccer fans celebrating their national team's semi-final victory in the Asian Cup. The first struck in Baghdad's Mansour district, and the second hit an army checkpoint in the east of the city.
July 26: A suicide vest bomber blew himself up at the gate of a police station in the northern Tal Abta area, killing five policemen and one civilian.
July 30: A suicide truck bomb targeting a joint Iraqi army and police checkpoint killed six security members near the town of Balad.
August 1: A suicide bomber killed 50 people after luring motorists to an explosives-laden fuel truck near a petrol station in Baghdad's Mansour district. A suicide car bomb killed 15-20 people near a popular ice cream shop in the al-Hurriya Square of Baghdad's Karrada district.
August 2: A suicide car bomber targeted recruits lining up outside a police station in the northern town of Hibhib, killing 13 people.
August 5: A suicide car bomb targeted a vehicle workshop at the entrance to the town of Mahmudiya, killing two people and wounding five.
August 6: A suicide truck bomber killed at least 28 people including 19 children in Tal Afar.
August 7: A suicide bomber killed seven people and wounded eight near a market in the village of Salih Al Khalaf, north of Baghdad. A suicide car bomber struck a checkpoint near the Arab Shoka village north of Baquba, killing one soldier.
August 8: A suicide bomber blew himself up in a barber shop in the Gatoon neighbourhood in Baquba, killing five people and wounding eight.
August 9: Police detained a suicide bomber about to detonate an explosive vest in a crowded market in Ba'quba.
August 10: In Kirkuk a suicide car bomber killed 11 people and wounded 45 in an attack on a market. Four Peshmerga fighters were killed and 14 wounded when a suicide car bomber struck their convoy in Ein Zala village north of Mosul.
August 14: Qahtaniya bombings: Four suicide vehicle bombers massacred hundreds of members of northern Iraq's Yazidi sect in the deadliest post-war attack to date. The final death toll given by the Iraqi government was 411, but the Iraqi Red Crescent reported that over 500 people had been killed and 1500 wounded. A suicide truck bomber struck the Thiraa Dijla Bridge near Taji, killing ten people and sending three civilian vehicles plunging into the river below.
August 15: A suicide car bomber killed between two and five people when he targeted a senior judge in Hilla. A suicide car bomb struck a police patrol in Mosul, killing one officer. Two suicide bombers were killed in heavy fighting in Buhriz, near Baquba, when their vests detonated prematurely. In total the battle killed 21 insurgents and six civilians.
August 21: A suicide vest bomber wounded eight people when he targeted a queue outside a police station in Fallujah. In al-Arafiya, a man died preventing a suicide bomber from reaching a meeting between US soldiers and members of a civilian defence force.
August 22: A suicide fuel tanker bombing killed 27 people at a Baiji police station. Officials initially put the death toll at 45, but later revised that figure down. Ten people were killed when a suicide motorcycle bomber struck a police patrol in a Muqdadiyah marketplace. Two suicide car bombers killed four Iraqi soldiers and wounded 11 US soldiers in an attack on a joint US-Iraqi outpost in Taji.
August 23: A suicide bomber was killed by police when he targeted a checkpoint in Fallujah's Dam street. Two people were wounded.
August 26: The Iraqi army foiled three suicide truck bomb attacks in Mosul, resulting in the death of one of the bombers and the detention of the other two.
August 27: A suicide vest bomber in Fallujah killed 12 people at the al-Raqeeb Mosque after evening prayers.
August 28: Police reportedly killed a gunman wearing a suicide vest in Mosul.
August 31: A suicide car bomber killed four police commandos and wounded seven when he targeted their patrol in al Jallam village near Samarra.
September 1: A suicide car bomber wounded six people when he targeted an Iraqi army patrol in Mosul.
September 2: A suicide car bomber killed two soldiers and injured eight when he targeted the first gate of an Iraqi Army base in Taji.
September 3: A suicide car bomb targeted a police checkpoint in the al-Jazeera area near Ramadi, killing two policemen and wounding 13 other people. US forces killed five gunmen who had attacked a police station in al-Saqlawiyah, including one who was wearing a suicide vest.
September 5: A suicide car bomb at a Mosul checkpoint killed one policeman and wounded 28 other people.
September 6: A suicide truck bomber attacked a Marine security checkpoint in Al Anbar province killing 4 Marines.
September 8: A suicide car bomb near a Sadr City police station in Baghdad killed 15 and wounded 45 others.
September 9: A suicide fuel tanker bombing hit an Iraqi army checkpoint near a bridge in Balad, Salahuddin province. Four soldiers were killed and 15 wounded. Two people were killed and six wounded when a suicide car bomber targeted an Iraqi army checkpoint in Mahmudiya.
September 10: A suicide truck bomber killed at least ten people and wounded 60 others when he targeted the offices of the Kurdistan Democratic Party in the village of Tal Marag, near Mosul. A suicide bomber attacked a Saqlawiyah police checkpoint, killing two policemen and two civilians and wounding two other policemen.
September 14: A suicide bomber attacked a police checkpoint at a restaurant in Baiji killing 11 people, including nine policemen, and wounding 15 others.
September 15: Ten people were killed and 15 injured when a suicide car bomber blew up his vehicle near a bakery in Baghdad's southwestern Amil district as Muslims were preparing to break the Ramadan fast. The Iraqi military detained a would-be suicide bomber in Mosul.
September 16: A bomb – either attached to a suicide bomber or to a booby-trapped bicycle – killed six people at an outdoor cafe in the northern town of Tuz Khurmatu.
September 18: A suicide bomber killed a civilian in the Baladiyat neighborhood of Baghdad. A suicide bomber attacked a mobile phone shop in Jalawlaa, killing four people and wounding 15 others. In Mosul, Iraqi soldiers killed a suicide bomber before he could attack their convoy, but two soldiers were still wounded. A suicide bomber attacked a U.S. military patrol in central Iraq, killing a U.S. civilian translator.
September 22: In Hibhib a suicide bomber attacked an army checkpoint wounding five people, including Iraqi soldiers.
September 24: A suicide bomber attacked a gathering of local leaders in Baquba; the chief of police was among the 28 dead, another police official was also killed, and the roughly 50 wounded included two U.S. soldiers. In Abu Maria, a suicide truck bomber killed six people, including two policemen and an Iraqi soldier, and wounded 17 others at a checkpoint.
September 25: A suicide car bomber in Basra killed three policemen and wounded 20 during an attack on a police station. In Mosul, a suicide bomber detonated his vest near a police colonel; ten were wounded, including the police officer and a judge. A suicide car bomber targeted the head of the Hawija City Council; the chairman, two guards, and a civilian were wounded.
September 26: A suicide bomber killed a civilian and wounded another in Baghdad. A suicide bomber killed three people and wounded 50 in Mosul. Also, police killed a suicide bomber before he could detonate his cargo. A suicide bomber in Fallujah caused no casualties.
September 29: A suicide bomber in Mosul killed five policemen and one civilian and wounded 21.
October 1: In Mosul a suicide car bomber killed a university professor and wounded seven other people.
October 2: In Khalis a suicide bomber attacked a police station killing six people, including two policemen.
October 4: A suicide bomber in Tal Afar killed three people and wounded 57 at a marketplace.
October 8: A suicide truck bomber destroyed a police station in the village of Dijlah, north of Baghdad, killing 13 people, including 3 policemen, and wounding 22 others. A suicide bomber attacked a police checkpoint in Tikrit, killing 3 policemen and 1 civilian and wounding 10 others. A suicide bomber wounded seven policemen in Khalis. In the Arab Jabour area on Baghdad's southern outskirts, a suicide bomber blew himself up to avoid capture, killing one other insurgent and wounding two U.S. special forces soldiers.
October 9: 22 people were killed and 30 were wounded in an attack by two suicide truck bombers in Baiji.
October 10: A suicide bomber attacked the KDP headquarters in Mosul killing seven people and wounding 20. A suicide bomber attacked an army base in al-Zab killing one Iraqi soldier and wounding seven others.
October 11: A suicide bomber attacked a cafe in Baghdad, killing 8 people and wounding 25. A suicide bomber attacked the PUK headquarters in Mosul, wounding 8 people, including 4 PUK guards.
October 14: A suicide car bomber killed 18 people and wounded 37 in Samarra in an attack near a mosque. A suicide bomber also attacked a police commando station in Samarra, killing 4 police commandos and wounding 9. In the village of Baghdadi near Ramadi, a suicide car bomber killed four members of a police major's family and wounded eight others.
October 15: In Baghdad, a suicide car bomber killed four people and wounded 25 others. A suicide car bomber attacked a checkpoint of a U.S.-allied Iraqi militia in Balad killing 6 militiamen and wounding 8. A policeman and four members of his family were killed when a suicide bomber drove into the policeman's home in Heit.
October 16: A suicide truck bomber killed 16 people and wounded 80 in an attack on a police station in Mosul.
October 26: A suicide bomber killed a woman and wounded four other people in an attack on the headquarters of the U.S.-allied Iraqi militia, 1920s Revolution Brigades, in Muqdadiyah.
See also
Terrorist incidents in Iraq in 2007
References
Explosions in 2007
2007 in Iraq
2007
Lists of explosions | List of 2007 suicide bombings in Iraq | Chemistry | 9,385 |
58,800,759 | https://en.wikipedia.org/wiki/Suzanne%20Blum | Suzanne A. Blum is an American professor of chemistry at the University of California, Irvine. Blum works on mechanistic chemistry, most recently focusing on borylation reactions and the development of single-molecule and single-particle fluorescence microscopy to study organic chemistry and catalysis. She received the American Chemical Society's Arthur C. Cope Scholar Award in 2023.
Education
Blum studied chemistry as an undergraduate at the University of Michigan. She participated in multiple teaching and research projects, winning an outstanding American Chemical Society student chapter award, the UM Alumni Leadership Award, and a National Science Foundation fellowship to attend graduate school at the University of California, Berkeley, where she earned a PhD working with Robert G. Bergman. Blum published multiple first-author papers and received teaching awards during her graduate studies. She completed a postdoctoral fellowship at Harvard Medical School in 2006.
Research
Blum began her independent research career in 2006 at the University of California, Irvine (UCI). Blum's research focuses on the development and mechanistic study of reactions in organic, organometallic, catalytic, and materials chemistry, and on monitoring reaction intermediates by a combination of traditional spectroscopy and fluorescence microscopy methods. While many of her initial independent research publications were based on activated complexes of gold or palladium catalysts, she has more recently focused on borylation reactions to make advanced oxygen-, nitrogen-, or sulfur-containing heterocycles amenable to pharmaceutical and agricultural derivatization. Since starting her independent career, Blum has developed single-molecule and single-particle techniques, often borrowed from biological or physical contexts, to study chemical processes, including observing intermediates in "classical" reactions. Blum was elected Fellow of the American Association for the Advancement of Science (AAAS) in 2017 for distinguished contributions to molecular chemistry, particularly for the development of synthetic methods and of fluorescence microscopy tools to study chemical processes.
Awards
2023: Arthur C. Cope Scholar Award (American Chemical Society)
2018: University of California, Irvine Physical Sciences Outstanding Contributions to Undergraduate Education
2017: Fellow of the AAAS
2013-2016: Humboldt Fellowship
2013: Japan Society for the Promotion of Science Fellowship
2008: NSF CAREER Award
2005-2006: National Institutes of Health Postdoctoral Fellow
References
Living people
American women chemists
21st-century American chemists
Organometallic chemistry
University of Michigan College of Literature, Science, and the Arts alumni
University of California, Berkeley alumni
University of California, Irvine faculty
Fellows of the American Association for the Advancement of Science
Year of birth missing (living people)
21st-century American women scientists | Suzanne Blum | Chemistry | 534 |
3,247,560 | https://en.wikipedia.org/wiki/ASCII%20stereogram |
ASCII stereograms are a form of ASCII art based on stereograms to produce the optical illusion of a three-dimensional image by crossing the eyes appropriately using a single image or a pair of images next to each other.
To obtain the 3D effect (in Figure 1 for instance), the viewer must diverge their eyes so that two adjacent letters in the same row come together. To help in focusing, try to make the two capital Os at the top look like three. Ensure that the image of the central dot is stable and in focus. Once this has been done, look down at the rest of the image and the 3D effect should become apparent. If the Os at the bottom of Figure 1 look like three, then the effect is reversed. It is also possible to obtain opposite 3D effects by crossing the eyes rather than diverging them.
O O
n n n n n n n n n n n n n n n n n
f f f f f f f f f f f f f f
e e e e e e e e e e e e e e e e e
a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a
r r r r r r r r r r r r r r
r r r r r r r r r r r r r r r r r
O O
Figure 2 demonstrates the effect even more dramatically. Once the 3D image effect has been achieved, moving the viewer's head away from the screen further increases the stereo effect. Moving horizontally and vertically a little also produces interesting effects.
Figure 3 shows a Single Image Random Text Stereogram (SIRTS), based on the same idea as a Single Image Random Dot Stereogram (SIRDS). The word "Hi" in relief can be seen when the image clicks into place.
Some people have included stereograms in their "signature" at the end of electronic mail messages and news articles. Figure 4 is such an example.
O O
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
. . . . . . .
. . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
O O
OIWEQPOISDFBKJFOIWEQPOISDFBKJFOIWEQPOISDFBKJFOIWEQPOISDFBKJF
EDGHOUIEROUIYWEVDGHOXUIEROIYWEVDGHEOXUIEOIYWEVDGHEOXUIEOIYWE
KJBSVDBOIWERTBAKJBSVEDBOIWRTBAKJBSOVEDBOWRTBAKJBSOVEDBOWRTBA
SFDHNWECTBYUVRGSFDHNYWECTBUVRGSFDHCNYWECBUVRGSFDHCNYWECBUVRG
HNOWFHLSFDGWVRGHNOWFGHLSFDWVRGHNOWSFGHLSDWVRGHNLOWSFGLSDWVRG
YPOWVXTNWFECHRGYPOWVEXTNWFCHRGYPOWNVEXTNFCHRGYPWOWNVETNFCHRG
SVYUWXRGTWVETUISVYUWVXRGTWVETUISVYUWVXRGWVETUISVYUWVXRGWVETU
WVERBYOIAWEYUIVWVERBEYOIAWEYUIVWVERBEYOIWEYUIVWLVERBEOIWEYUI
EUIOETOUINWEBYOEUIOEWTOUINWEBYOEUIOEWTOUNWEBYOETUIOEWOUNWEBY
WFVEWVETN9PUW4TWFVEWPVETN9UW4TWFVETWPVET9UW4TWFBVETWPET9UW4T
NOUWQERFECHIBYWNOUWQXERFECIBYWNOUWFQXERFCIBYWNOFUWFQXRFCIBYW
VEHWETUQECRFVE[VEHWERTUQECFVE[VEHWQERTUQCFVE[VEOHWQERUQCFVE[
UIWTUIRTWUYWQCRUIWTUYIRTWUWQCRUIWTXUYIRTUWQCRUIBWTXUYRTUWQCR
IYPOWOXNPWTHIECIYPOWTOXNPWHIECIYPONWTOXNWHIECIYLPONWTXNWHIEC
R9UHWVETPUNRQYBR9UHWVETPUNRQYBR9UHWVETPUNRQYBR9UHWVETPUNRQYB
IIIIIIIIIIIIIII IIIIIIIIIIIIIII
H ( ) \|/ H H ( ) \|/ H
H( ) -O- H H ( ) -O- H
H )/|\ H H ( ) /|\ H
H======^======H H======^======H
H- |----@-----H H----| ---@---H
H /|\ @\|/ @ H H /|\@ \|/@ H
H \|/ \|/ H H \|/ \|/H
III^IIIIIII^III III^IIIIIII^III
Wide eyed stereo Wide eyed stereo
Moving animated versions of ASCII stereograms are possible too.
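The repeating-pattern idea behind Figure 3 can be sketched in a few lines of code. The following Python function is an illustrative sketch only (the function name, the `period` parameter, and the one-level depth encoding are this example's own, not taken from any of the generators linked below): each row is filled with random letters that repeat at a fixed interval, and the repeat distance shrinks by one column wherever the depth map is raised, which diverged eyes read as a nearer surface.

```python
import random
import string

def make_sirts(depth_map, period=8):
    """Build a Single Image Random Text Stereogram from a depth map.

    depth_map is a list of equal-length strings; any non-space
    character marks a region raised by one level.  Characters repeat
    every `period` columns over flat ground and every `period - 1`
    columns over raised ground, and that shorter repeat distance is
    what the diverged eyes interpret as a nearer surface.
    """
    rows = []
    for line in depth_map:
        row = []
        for x, ch in enumerate(line):
            sep = period - (0 if ch == " " else 1)  # repeat distance here
            if x < sep:
                row.append(random.choice(string.ascii_uppercase))
            else:
                row.append(row[x - sep])  # copy the matching character
        rows.append("".join(row))
    return "\n".join(rows)

# A raised rectangle on a flat background:
depth = ["          " * 4] * 3 + \
        ["          " + "XXXXXXXXXX" * 2 + "          "] * 6 + \
        ["          " * 4] * 3
print(make_sirts(depth))
```

Viewed with diverged eyes as described for Figure 1, the rectangle should appear to float above the surrounding noise.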
Text emphasis
The stereo effect can be used to highlight individual words in a text, as a sort of "secret message". The effect can be disguised when the paragraph is block justified.
According to the According to the
police inspector, police inspector,
Edward John Billings, Edward John Billings,
there are too many there are too many
individuals too close individuals too close
to the case to make to the case to make
an arrest. I asked an arrest. I asked
Mary Smith to comment Mary Smith to comment
on the case, but she on the case, but she
declined to comment, declined to comment,
because she is soon because she is soon
to be married to to be married to
Howard D. Fredericks, Howard D. Fredericks,
the victim's uncle. the victim's uncle.
Charles Wilson, the Charles Wilson, the
victim's brother, victim's brother,
stated that the chaos stated that the chaos
was responsible for was responsible for
at least five suicide at least five suicide
attempts last week attempts last week
alone. alone.
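The word-shifting trick behind Figure 5 can also be automated. The Python sketch below is illustrative only (the function name, `gap` parameter, and example text are this example's own): in the left-hand copy it moves the space that precedes a chosen word to the other side of the word, shifting the word one column while keeping the line length unchanged, so the altered separation between the two copies places the word at a different apparent depth. In block-justified text the borrowed space hides among the variable word gaps.

```python
def stereo_pair(lines, secret, gap="    "):
    """Place two copies of `lines` side by side, nudging `secret` one
    column left in the left-hand copy.  When the pair is viewed
    wide-eyed, the changed separation between the two copies of the
    word makes it stand at a different depth from the rest of the text.
    """
    width = max(len(line) for line in lines)
    rows = []
    for line in lines:
        left = line
        if " " + secret in line:
            # move one space from before the word to after it
            left = line.replace(" " + secret, secret + " ", 1)
        rows.append(left.ljust(width) + gap + line.ljust(width))
    return "\n".join(rows)

print(stereo_pair(["the inspector,  Edward Billings,",
                   "declined to comment on the case."],
                  "Edward"))
```

Because every line keeps its original length, the pair stays aligned however many lines carry the hidden word.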
Sources
Figures 1, 2, 3 and 4 are due to David B. Thomas, Jonathan Bowen, Charles Durst and Marty Hewes respectively. These four stereograms appeared on the publicly accessible alt.3d USENET newsgroup. Figure 5 was invented on the spot by a Wikipedian.
Originally adapted from an article on ASCII Stereograms by the author of that article (and with his permission).
References
External links
3D Stereogram Ascii Image Generator and Movie Generator
ASCII Stereograms by Jonathan Bowen
ASCII art stereogram generator from AA-Project
IOCCC 2001 winner "herrmann2", an ASCII stereogram generator (for which the source code is itself an ASCII stereogram)
Online ASCII Stereogram Generator
Basic ASCII Stereogram Maker
Optical illusions
3D imaging
ASCII art
Digital art
Wikipedia articles with ASCII art | ASCII stereogram | Physics | 1,760 |
11,127,545 | https://en.wikipedia.org/wiki/Microdochium%20bolleyi | Microdochium bolleyi is a fungal plant pathogen that causes root rot in flax and wheat.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Wheat diseases
Xylariales
Fungus species
Fungi described in 1957 | Microdochium bolleyi | Biology | 54 |
4,474,492 | https://en.wikipedia.org/wiki/Intraspecific%20antagonism | Intraspecific antagonism means a disharmonious or antagonistic interaction between two individuals of the same species. As such, it could be a sociological term, but was actually coined by Alan Rayner and Norman Todd working at Exeter University in the late 1970s, to characterise a particular kind of zone line formed between wood-rotting fungal mycelia. Intraspecific antagonism is one of the expressions of a phenomenon known as vegetative or somatic incompatibility.
Fungal individualism
Zone lines form in wood for many reasons, including host reactions against parasitic encroachment, and inter-specific interactions, but the lines observed by Rayner and Todd when transversely-cut sections of brown-rotted birch tree trunk or branch were incubated in plastic bags appeared to be due to a reaction between different individuals of the same species of fungus.
This was a startling inference at a time when the prevailing orthodoxy within the mycological community was that of the "unit mycelium". This was the theory that when two different individuals of the same species of basidiomycete wood rotting fungi grew and met within the substratum, they fused, cooperated, and shared nuclei freely. Rayner and Todd's insight was that basidiomycete fungi individuals do, in most "adult" or dikaryotic cases anyway, retain their individuality.
A small stable of postgraduate and postdoctoral students helped elucidate the mechanisms underlying these intermycelial interactions, at Exeter University (Todd) and the University of Bath (Rayner), over the next few years.
Applications of intraspecific antagonism
Although the attribution of individual status to the mycelia confined by intraspecific zone lines is a comparatively new idea, zone lines themselves have been known since time immemorial. The term spalting is applied by woodworkers to wood showing strongly-figured zone lines, particularly those cases where the area of "no-man's land" between two antagonistic conspecific mycelia is colonised by another species of fungus. Dematiaceous hyphomycetes, with their dark-coloured mycelia, produce particularly attractive black zone lines when they colonise the areas occupied by two antagonistic basidiomycete individuals. Spalted wood can be difficult to work, since different individual wood-rotting fungi have different decay efficiencies, and thus produce zones of different softness, and the zone lines themselves are usually unrotted and hard.
Intraspecific antagonism can also sometimes be of assistance in quickly recognising the membership of clones in fungi, particularly root rots such as Armillaria, where individual mycelia may colonise large areas or more than one tree.
It is even the subject of a recent patent.
References
Mycology
Fungal morphology and anatomy
Wood | Intraspecific antagonism | Biology | 598 |
77,825,222 | https://en.wikipedia.org/wiki/Wetland%20virus | Wetland virus or WELV is a tick-borne orthonairovirus which can infect humans. Infection can produce fever, headache, dizziness, malaise, and arthritis, and, less commonly, petechiae and localized lymphadenopathy. Complications may include neurological symptoms.
Virology
The Wetland virus (WELV) is a member of the genus Orthonairovirus in the family Nairoviridae of RNA viruses. It was first identified in 2019 in a Chinese patient in Jinzhou, Liaoning province, northeastern China, after a visit to a wetland park in Yakeshi, Inner Mongolia. Three different strains were identified: one from the patient and two from ticks.
Its sequence is most similar to the Tofla virus from Japan.
Hosts and transmission
The Wetland virus was found in mice, sheep, pigs, and horses, but not dogs or cattle. It was found in about 2% of 14,500 different ticks in Northeast China with the highest prevalence (6%) in Haemaphysalis concinna.
Experimental infection showed that WELV caused lethal disease even in immunocompetent mice, unlike most other viruses in the family Nairoviridae.
Signs and symptoms
Symptoms of infection with the Wetland virus are fever, headache, dizziness, malaise, myalgia (muscle pain), arthritis, and back pain. Less commonly there are petechiae and localized lymphadenopathy. One person also had severe neurological symptoms, but all recovered without sequelae. Symptoms and signs resemble those of Crimean–Congo hemorrhagic fever, and the differential diagnosis includes severe fever with thrombocytopenia syndrome and spotted fever.
References
Nairoviridae
Unaccepted virus taxa | Wetland virus | Biology | 362 |
23,520,833 | https://en.wikipedia.org/wiki/Gene%20signature | A gene signature or gene expression signature is a single or combined group of genes in a cell with a uniquely characteristic pattern of gene expression that occurs as a result of an altered or unaltered biological process or pathogenic medical condition. This is not to be confused with the concept of gene expression profiling. Activating pathways in a regular physiological process or a physiological response to a stimulus results in a cascade of signal transduction and interactions that elicit altered levels of gene expression, which is classified as the gene signature of that physiological process or response. The clinical applications of gene signatures break down into prognostic, diagnostic and predictive signatures. The phenotypes that may theoretically be defined by a gene expression signature range from those that predict the survival or prognosis of an individual with a disease, through those used to differentiate between subtypes of a disease, to those that predict activation of a particular pathway. Ideally, gene signatures can be used to select a group of patients for whom a particular treatment will be effective.
Timeline of gene signature detection
In 1995, two studies identified unique approaches to analyzing the global gene expression of a genome, which collectively promoted the value of identifying and analyzing gene signatures for physiological relevance. The first study reported a technique, known as Serial Analysis of Gene Expression (SAGE), that improves expressed sequence tag (EST) analysis; it hinged on sequencing and quantifying mRNA samples to acquire levels of gene expression that eventually revealed characteristic gene expression patterns.
The second study identified a technique now widely known as the microarray, which quantifies complementary DNA (cDNA) hybridization on a glass slide to analyze the expression of many genes in parallel. These studies drew greater attention to the wealth of information that analysis of gene signatures bears, which may or may not be physiologically relevant.
The latter technique has since revolutionized research in genetics and DNA chip technology, as it is widely adopted for profiling gene expression signatures so that these physiological responses can be cataloged in repositories such as the NCBI Gene Expression Omnibus. This catalogue of prognostic, diagnostic and predictive gene expression signatures allows for predictions of the onset of pathogenic diseases in patients, tumour and cancer classification, and enhanced therapeutic strategies that identify the optimal candidate subjects and target genes.
Today, microarrays and other quantitative methods that encompass gene expression profiling, such as RNA-seq, are moving towards the re-analysis and integration of the large, publicly available database of gene expression signatures and profiles, to uncover the full extent of the information these expression signatures hold.
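To make the idea of comparing profiled samples against a signature concrete, the sketch below shows one simple scoring scheme (the gene names, expression values, and the plain mean-expression score are hypothetical illustrations, not drawn from any cited study): a sample's normalized expression is averaged over the signature's genes, and samples are then ranked or thresholded on that score.

```python
import statistics

def signature_score(expression, signature_genes):
    """Average normalized expression of the signature genes in one
    sample; a common first step before ranking or classifying samples."""
    return statistics.mean(expression[g] for g in signature_genes)

# Hypothetical two-gene signature and two profiled samples:
signature = ["GENE_A", "GENE_B"]
case    = {"GENE_A": 2.1, "GENE_B": 3.3, "GENE_C": 0.2}
control = {"GENE_A": 0.3, "GENE_B": 0.5, "GENE_C": 0.2}

assert signature_score(case, signature) > signature_score(control, signature)
```

Real prognostic, diagnostic, and predictive signatures use more elaborate weighting and validation, but they share this basic step of reducing a full expression profile to a score over the signature's genes.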
Types of gene signatures
Prognostic gene signature
Prognostic refers to predicting the likely outcome or course of a disease. Classifying a biological phenotype or medical condition based on one or more gene signatures can serve as a prognostic biomarker for the associated phenotype or condition. This concept, termed a prognostic gene signature, offers insight into the overall outcome of the condition regardless of therapeutic intervention. Several studies have focused on identifying prognostic gene signatures in the hope of improving the diagnostic methods and therapeutic courses adopted in clinical settings. Prognostic gene signatures are not a target of therapy; they offer additional information to consider when deciding details such as duration, dosage, or drug sensitivity in therapeutic intervention. To be deemed a prognostic marker, a gene signature must demonstrate association with the outcomes of the condition, reproducibility and validation of that association in an independent group of patients, and prognostic value independent of other standard factors in a multivariate analysis. Applications of these prognostic signatures include prognostic assays for breast cancer, hepatocellular carcinoma, and leukaemia, and assays are continually being developed for other types of cancers and disorders as well.
Diagnostic gene signatures
A diagnostic gene signature serves as a biomarker that distinguishes phenotypically similar medical conditions spanning mild, moderate, and severe phenotypes. Establishing validated methods of diagnosing clinically indolent and clinically significant cases allows practitioners to provide more accurate care and therapeutic options, ranging from no therapy to preventative care to symptomatic relief. These diagnostic signatures also allow for a more accurate representation of test samples used in research. As with the validation of prognostic gene signatures, criteria for classifying a gene signature as a biomarker for a disorder or disease have been outlined by Chau et al.
Predictive gene signatures
A predictive gene signature is similar to a predictive biomarker in that it predicts the effect of treatment in patients or study participants who exhibit a particular disease phenotype. Unlike a prognostic gene signature, a predictive gene signature can be a target for therapy. The information predictive signatures provide is more rigorous than that of prognostic signatures, as it is based on treatment groups receiving therapeutic intervention and reflects the likely benefit from treatment, independent of prognosis. Predictive gene signatures address the paramount need for ways to personalize and tailor therapeutic intervention in disease. These signatures have implications for facilitating personalized medicine by identifying novel therapeutic targets and the subjects most likely to benefit from specific treatments.
See also
Genomic signature
Mutational signatures
Gene expression profiling
Gene expression profiling in cancer
References
Genetics
The M109 Group (also known as the NGC 3992 Group or Ursa Major cloud) is a group of galaxies about 55 million light-years away in the constellation Ursa Major. The group is named after the brightest galaxy within the group, the spiral galaxy M109.
Members
The table below lists galaxies that have been consistently identified as group members in the Nearby Galaxies Catalog, the survey of Fouque et al., the Lyons Groups of Galaxies (LGG) Catalogue, and the three group lists created from the Nearby Optical Galaxy sample of Giuricin et al.
Galaxies frequently but not consistently listed as group members in the above references (i.e. galaxies listed in four of the above lists) include NGC 3631, NGC 3657, NGC 3733, NGC 3756, NGC 3850, NGC 3898, NGC 3985, NGC 3990, NGC 3998, NGC 4217, NGC 4220, UGC 6773, UGC 6802, UGC 6816, UGC 6922, and UGC 6969. The exact membership and the exact number of galaxies in the group are somewhat uncertain.
Fouque et al. lists these galaxies as two separate groups named Ursa Major I North and Ursa Major I South, both of which were used to compile the above table. Most other references, however, identify this as a single group, as is specifically noted in the LGG Catalogue.
References
Ursa Major Cluster
Ursa Major
Virgo Supercluster
MT Pacific Cobalt is a Singaporean oil tanker built in 2020 and owned by Eastern Pacific Shipping. It is one of the first and largest ships to be installed with an onboard filtration and carbon capture system.
Description
Pacific Cobalt is an oil and chemical tanker with an overall length of . It is wide and has an average draft of . It has an identical sister ship named Pacific Gold.
History
In May 2022, Eastern Pacific Shipping announced that it would be working with the Netherlands-based maritime carbon capture company Value Maritime to install prefabricated "Filtree" systems. The installation was finished in February 2023 after a seventeen-day construction period, and Pacific Cobalt steamed from Rotterdam to Venice shortly after the installation was completed.
References
2020 ships
Merchant ships of Singapore
Carbon capture and storage
Oil tankers
The Benesi–Hildebrand method is a mathematical approach used in physical chemistry for the determination of the equilibrium constant K and stoichiometry of non-bonding interactions. This method has been typically applied to reaction equilibria that form one-to-one complexes, such as charge-transfer complexes and host–guest molecular complexation.
H + G <=> HG
The theoretical foundation of this method is the assumption that when either one of the reactants is present in excess amounts over the other reactant, the characteristic electronic absorption spectra of the other reactant are transparent in the collective absorption/emission range of the reaction system. Therefore, by measuring the absorption spectra of the reaction before and after the formation of the product and its equilibrium, the association constant of the reaction can be determined.
History
This method was first developed by Benesi and Hildebrand in 1949, as a means to explain a phenomenon where iodine changes color in various aromatic solvents. This was attributed to the formation of an iodine-solvent complex through acid-base interactions, leading to the observed shifts in the absorption spectrum. Following this development, the Benesi–Hildebrand method has become one of the most common strategies for determining association constants based on absorbance spectra.
Derivation
To observe one-to-one binding between a single host (H) and guest (G) using UV/Vis absorbance, the Benesi–Hildebrand method can be employed. The basis behind this method is that the acquired absorbance should be a mixture of the host, guest, and the host–guest complex.
With the assumption that the initial concentration of the guest ([G]0) is much larger than the initial concentration of the host ([H]0), the absorbance contribution from the host should be negligible.
The absorbance can be collected before and following the formation of the HG complex. This change in absorbance (ΔA) is what is experimentally acquired, with A0 being the initial absorbance before the interaction of HG and A being the absorbance taken at any point of the reaction.
Using the Beer–Lambert law, the equation can be rewritten with the absorption coefficients and concentrations of each component.
Due to the previous assumption that [G]0 ≫ [H]0, one can expect that [G] = [G]0. Δε represents the difference between εHG and εG.
A binding isotherm can be described as "the theoretical change in the concentration of one component as a function of the concentration of another component at constant temperature." For 1:1 host–guest binding it takes the form:

[HG] = [H]0Ka[G]/(1 + Ka[G])
By substituting the binding isotherm equation into the previous equation, the equilibrium constant Ka can now be correlated to the change in absorbance due to the formation of the HG complex.
Further rearrangement results in an equation from which a double-reciprocal plot can be made, with 1/ΔA as a function of 1/[G]0:

1/ΔA = (1/(KaΔε[H]0)) × (1/[G]0) + 1/(Δε[H]0)

Δε can be derived from the intercept, while Ka can be calculated from the slope.
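As a numerical sketch of the double-reciprocal analysis, the fit reduces to an ordinary linear regression of 1/ΔA against 1/[G]0; the constants, concentrations, and "data" below are synthetic and purely illustrative:

```python
import numpy as np

# Illustrative parameters: association constant (M^-1), delta-epsilon
# (M^-1 cm^-1), and host concentration (M); path length taken as 1 cm.
Ka_true, d_eps, H0 = 150.0, 4000.0, 1e-5
G0 = np.array([0.002, 0.004, 0.008, 0.016, 0.032])  # guest, with G0 >> H0

# Synthetic change in absorbance from the 1:1 binding isotherm.
dA = d_eps * H0 * Ka_true * G0 / (1 + Ka_true * G0)

# Double-reciprocal plot: 1/dA vs 1/G0 is linear with
# slope = 1/(Ka*d_eps*H0) and intercept = 1/(d_eps*H0).
slope, intercept = np.polyfit(1 / G0, 1 / dA, 1)
Ka_fit = intercept / slope      # Ka falls out as intercept/slope
d_eps_fit = 1 / (intercept * H0)
print(Ka_fit, d_eps_fit)        # recovers approximately 150 and 4000
```

With real, noisy data the double-reciprocal transform weights low-concentration points heavily, which is one source of the problems discussed below.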
Limitations and alternatives
In many cases, the Benesi–Hildebrand method provides excellent linear plots, and reasonable values for K and ε. However, various problems arising from experimental data have been noted from time to time. Some of these issues include: different values of ε with different concentration scales, lack of consistency between the Benesi–Hildebrand values and those obtained from other methods (e.g. equilibrium constants from partition measurements), and zero and negative intercepts. Concerns have also surfaced over the accuracy of the Benesi–Hildebrand method as certain conditions cause these calculations to become invalid. For instance, the reactant concentrations must always obey the assumption that the initial concentration of the guest ([G]0) is much larger than the initial concentration of the host ([H]0). In the case when this breaks down, the Benesi–Hildebrand plot deviates from its linear nature and exhibits scatter plot characteristics. Also, in the case of determining the equilibrium constants for weakly bound complexes, it is common for the formation of 2:1 complexes to occur in solution. It has been observed that the existence of these 2:1 complexes generate inappropriate parameters that significantly interfere with the accurate determination of association constants. Due to this fact, one of the criticisms of this method is the inflexibility of only being able to study reactions with 1:1 product complexes.
These limitations can be overcome by using a computational method which is more generally applicable: non-linear least-squares minimization. The two parameters, K and ε, are determined by using the Solver module of a spreadsheet, minimizing a sum of squared differences between observed and calculated quantities with respect to the equilibrium constant and the molar absorbance or chemical shift values of the individual chemical species involved. The use of this and more sophisticated methods has the additional advantage that they are not limited to systems where a single complex is formed.
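A self-contained sketch of the non-linear least-squares idea, with a simple grid search standing in for a spreadsheet Solver (all numbers are synthetic and illustrative):

```python
import numpy as np

def binding_curve(G0, Ka):
    """1:1 isotherm shape factor: Ka*G0/(1 + Ka*G0)."""
    return Ka * G0 / (1 + Ka * G0)

def fit_Ka(G0, dA_obs, Ka_grid):
    """Non-linear least squares by grid search over Ka.

    For each trial Ka the best dA_max follows in closed form, because the
    model dA = dA_max * f(G0; Ka) is linear in dA_max; keep the
    (Ka, dA_max) pair with the minimum sum of squared residuals (SSR).
    """
    best = None
    for Ka in Ka_grid:
        f = binding_curve(G0, Ka)
        dA_max = float(f @ dA_obs) / float(f @ f)   # linear least-squares step
        ssr = float(np.sum((dA_obs - dA_max * f) ** 2))
        if best is None or ssr < best[0]:
            best = (ssr, Ka, dA_max)
    return best[1], best[2]

# Synthetic "observed" titration generated with Ka = 150 M^-1, dA_max = 0.04.
G0 = np.array([0.002, 0.004, 0.008, 0.016, 0.032])
dA_obs = 0.04 * binding_curve(G0, 150.0)

Ka_fit, dA_max_fit = fit_Ka(G0, dA_obs, np.arange(50.0, 300.0, 1.0))
print(Ka_fit, dA_max_fit)   # recovers approximately 150 and 0.04
```

Unlike the double-reciprocal plot, this approach fits the untransformed data directly and extends naturally to models with more than one complex.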
Modifications
Although initially used in conjunction with UV/Vis spectroscopy, many modifications have been made that allow the B–H method to be applied to other spectroscopic techniques involving fluorescence, infrared, and NMR.
Modifications have also been done to further improve the accuracy in the determination of K and ε based on the Benesi–Hildebrand equations. One such modification was done by Rose and Drago. The equation that they developed is as follows:
Their method relied on a set of chosen values of ε and the collection of absorbance data and initial concentrations of the host and guest. This would thus allow the calculation of K−1. By plotting a graph of εHG versus K−1, the result would be a linear relationship. When the procedure is repeated for a series of concentrations and plotted on the same graph, the lines intersect at a point giving the optimum value of εHG and K−1. However, some problems have surfaced with this modified method as some examples displayed an imprecise point of intersection or no intersection at all.
More recently, another graphical procedure has been developed in order to evaluate K and ε independently of each other. This approach relies on a more complex mathematical rearrangement of the Benesi–Hildebrand method but has proven to be quite accurate when compared to standard values.
See also
Chemical equilibrium
Ultraviolet–visible spectroscopy
Job plot
References
Spectroscopy
Physical chemistry
Analytical chemistry
The American Association of University Women (AAUW), officially founded in 1881, is a non-profit organization that advances equity for women and girls through advocacy, education, and research. The organization has a nationwide network of 170,000 members and supporters, 1,000 local branches, and 800 college and university partners. Its headquarters are in Washington, D.C. AAUW's CEO is Gloria L. Blackwell.
History
19th century
In 1881, Emily Fairbanks Talbot, Marion Talbot and Ellen Swallow Richards invited 15 alumnae from 8 colleges to a meeting in Boston, Massachusetts. The purpose of this meeting was to create an organization of women college graduates that would assist women in finding greater opportunities to use their education, as well as promoting and assisting other women's college attendance. The Association of Collegiate Alumnae or ACA (AAUW's predecessor organization) was officially founded on January 14, 1882. The ACA also worked to improve standards of education for women so that men and women's higher education was more equal in scope and difficulty.
At the beginning of 1884, the ACA had been meeting only in Boston. However, as more women across the country became interested in its work, the Association saw that expansion into branches was necessary to carry on its work. Washington, D.C., was the first branch to be created in 1884, and New York, Pacific (San Francisco), Philadelphia, and Boston branches followed in 1886.
In 1885, the organization took on one of its first major projects: they essentially had to justify their right to exist. A common belief held at the time that a college education would harm a woman's health and result in infertility. This myth was supported by Harvard-educated Boston physician Dr. Edward H. Clarke. An ACA committee led by Annie Howes created a series of questions that were sent to 1,290 ACA members; 705 replies were received. After the results were tabulated, the data demonstrated that higher education did not harm women's health. The report, "Health Statistics of Female College Graduates", was published in 1885 in conjunction with the Massachusetts Bureau of Statistics of Labor. This first research report is one of many conducted by AAUW during its history.
In 1887, a fellowship program for women was established. Supporting the education of women through fellowships would continually remain a critical part of AAUW's mission.
Back in 1883, a similar group of college women had considered forming a Chicago, Illinois branch of the ACA; however, they had reconsidered and formed their own independent organization. They formed the Western Association of Collegiate Alumnae (WACA) with Jane M. Bancroft as its first president. WACA was broad in purpose and consisted of five committees: fine arts, outdoor occupations, domestic professions, press and journalism, and higher education of women in the West. In 1888, WACA awarded its first fellowship of $350 to Ida Street, a Vassar College graduate, to conduct research at the University of Michigan. In 1889, WACA merged with the ACA, further expanding the groups' capacity.
20th century
In 1919, the ACA participated in a larger effort led by a group of American women which ultimately raised $156,413 to purchase a gram of radium for Marie Curie for her experiments.
In 1921, the ACA merged with the Southern Association of College Women to create the AAUW, although local branches continued to be the backbone of AAUW. The policy of expansion greatly increased both the size and the impact of the Association, from a small, local organization to a nationwide network of college educated women, and by 1929, there were 31,647 members and 475 branches.
During World War II, AAUW officially began raising money to assist female scholars displaced by the Nazi led occupation who were unable to continue their work. The War Relief Fund received numerous pleas for help and worked tirelessly to find teaching and other positions for refugee women at American schools and universities and in other countries. Individual branch members of AAUW also participated by signing immigration affidavits of support. During 1940, its inaugural year, the War Relief Committee raised $29,950 for distribution with 350 branches contributing.
The organization was "largely apolitical" until the 1960s. Meanwhile, women's participation in the workforce had grown to the point that they made up 38% of workers by the end of the 1960s. Women graduating from college were looking for good employment. Membership in 1960 stood at 147,920 women, most of them middle class.
Activities
AAUW is one of the world's largest sources of funding exclusively for women who have graduated from college. Each year, AAUW provides $3.5 to $4 million in fellowships, grants, and awards for women and for community action projects. The Foundation also funds pioneering research on women, girls, and education, including studies germane to the education of women.
The AAUW Legal Advocacy Fund (LAF), a program of the Foundation, is the United States' largest legal fund focused solely on sex discrimination against women in higher education. LAF provides funds and a support system for women seeking judicial redress for sex discrimination in higher education. Since 1981, LAF has helped female students, faculty, and administrators challenge sex discrimination, including sexual harassment, pay inequity, denial of tenure and promotion, and inequality in women's athletics programs.
AAUW sponsors grassroots and advocacy efforts, research, and Campus Action Projects and other educational programs in conjunction with its ongoing programmatic theme, Education as the Gateway to Women's Economic Security. Along with three other organizations, it founded the CTM Madison Family Theatre in 1965. AAUW joined forces with other women's organizations in August 2011 to launch HERVotes to mobilize women voters in 2012 on preserving health and economic rights. In 2011, the AAUW Action Fund launched an initiative to encourage women to vote in the 2012 election. The campaign was aimed to increase the number of votes by women and to advance initiatives supporting education and equity for women and girls.
AAUW's 2011 research report addresses sexual harassment in grades seven through 12.
AAUW's national convention is held biennially. AAUW sponsors a student leadership conference, called the National Conference of College Women Student Leaders (NCCWSL) designed to help women college students access the resources, skills, and networks they need to lead change on campuses and in communities nationwide. The student leadership conference is held annually in Washington, D.C.
Local chapters frequently host speakers who highlight a variety of topics related to women such as Molly Murphy MacGregor, a co-founder of the National Women's History Alliance.
A statement by 16 women's rights organizations including the American Association of University Women, the National Women's Law Center, the National Women's Political Caucus, Girls, Inc., Legal Momentum, End Rape on Campus, Equal Rights Advocates and the Women's Sports Foundation said that, "as organizations that fight every day for equal opportunities for all women and girls, we speak from experience and expertise when we say that nondiscrimination protections for transgender people—including women and girls who are transgender—are not at odds with women's equality or well-being, but advance them" and that "we support laws and policies that protect transgender people from discrimination, including in participation in sports, and reject the suggestion that cisgender women and girls benefit from the exclusion of women and girls who happen to be transgender."
Notable members
Virginia Cleaver Bacon
C. Louise Boehringer
Pauline Suing Bloom
Kate Brousseau
Esther Caukin Brunauer
Marjorie Bell Chambers
Frances St John Chappelle
Vinnie B. Clark
Katherine M. Cook
R. Belle Colver
Della Prell Darknell Campbell
Blanche Hinman Dow
Permeal J. French
Robin Gee
Anne King Gregorie
Harriet A. Haas
Sarah Harder
Winifred M. Hausam
Winifred G. Helmes
Arleen McCarty Hynes
Reba Hurn
Lois Carter Kimball Mathews Rosenberry
Kate Wetzel Jameson
Rachel Fitch Kent
Angie Turner King
Nancy A. Leatherwood
Eva Frederica French LeFevre
Lillien Jane Martin
Lena B. Mathes
Bernice McCoy
Kathryn McHale (general director of AAUW, 1929-1950)
Ruth Karr McKee
Eva Perry Moore
Ruth Crosby Noble
Helen Matusevich Oujesky
Bernice Orpha Redington
Cora Rigby
E. Ruth Rockwood
Wanda Brown Shaw
M. Elizabeth Shellabarger
Sarah K. Smith
Rachel Applegate Solomon
Fanny J. Bayrhoffer Thelen
Violet Richardson Ward
Wilhelmine Wissman Yoakum
Mary Yost
See also
List of feminist periodicals in the United States
Younger Women's Task Force
References
External links
American Association of University Women records, 1935–1955 from the Smithsonian Archives of American Art
American Association of University Women Papers at Smith College
American Association of University Women. Boston Branch. Records, 1886–1978
American Association of University Women. Massachusetts State Division. Records, 1930–1976.
American Association of University Women (AAUW) Collection, 1929-2011 at James Madison University
Archived records of the Association of Collegiate Alumnae, 1882–1921, at Smith College.
Maryland Division of the American Association of University Women (AAUW) and the Metropolitan Area Mass Media Committee records, at University of Maryland libraries.
American Association of University Women, New York State Division records, Rare Books, Special Collections, and Preservation, River Campus Libraries, University of Rochester
Women and education
Women's occupational organizations
Women's organizations based in the United States
Women's political advocacy groups in the United States
1882 establishments in Massachusetts
American education-related professional associations
Educational organizations based in the United States
Feminist organizations in the United States
Professional associations based in the United States
Organizations for women in science and technology
Organizations established in 1882
Women in Washington, D.C.
Yellow fever vaccine is a vaccine that protects against yellow fever. Yellow fever is a viral infection that occurs in Africa and South America. Most people begin to develop immunity within ten days of vaccination, and 99% are protected within one month; this protection appears to be lifelong. The vaccine can be used to control outbreaks of disease. It is given either by injection into a muscle or just under the skin.
The World Health Organization (WHO) recommends routine immunization in all countries where the disease is common. This should typically occur between nine and twelve months of age. Those traveling to areas where the disease occurs should also be immunized. Additional doses after the first are generally not needed.
The yellow fever vaccine is generally safe. This includes in those with HIV infection but without symptoms. Mild side effects may include headache, muscle pains, pain at the injection site, fever, and rash. Severe allergies occur in about eight per million doses, serious neurological problems occur in about four per million doses, and organ failure occurs in about three per million doses. It appears to be safe in pregnancy and is therefore recommended among those who will be potentially exposed. It should not be given to those with very poor immune function.
Yellow fever vaccine came into use in 1938. It is on the World Health Organization's List of Essential Medicines. The vaccine is made from weakened yellow fever virus. Some countries require a yellow fever vaccination certificate before entry from a country where the disease is common.
Medical uses
Targeting
Medical experts recommend vaccinating people most at risk of contracting the virus, such as woodcutters working in tropical areas. Insecticides, protective clothing, and screening of houses are helpful, but not always sufficient for mosquito control; medical experts recommend using personal insecticide spray in endemic areas. In affected areas, mosquito control methods have proven effective in decreasing the number of cases.
Travellers need to have the vaccine ten days before being in an endemic area to ensure full immunity.
Duration and effectiveness
For most people, the vaccine remains effective permanently. People who are HIV positive at vaccination can benefit from a booster after ten years.
On 17 May 2013, the World Health Organization (WHO) Strategic Advisory Group of Experts on immunization (SAGE) announced that a booster dose of yellow fever (YF) vaccine, ten years after a primary dose, is not necessary. Since yellow fever vaccination began in the 1930s, only 12 known cases of yellow fever post-vaccination have been identified after 600 million doses have been dispensed. Evidence showed that among this small number of "vaccine failures", all cases developed the disease within five years of vaccination. This demonstrates that immunity does not decrease with time.
Schedule
The World Health Organization recommends the vaccine between the ages of 9 and 12 months in areas where the disease is common. Anyone over the age of nine months who has not been previously immunized and either lives in or is traveling to an area where the disease occurs should also be immunized.
Side effects
The yellow fever 17D vaccine is considered safe, with over 500 million doses given and very few documented cases of vaccine-associated illness (62 confirmed cases and 35 deaths as of January 2019). In no case of vaccine-related illness has there been evidence of the virus reverting to a virulent phenotype.
The majority of adverse reactions to the 17D vaccine result from allergic reactions to the eggs in which the vaccine is grown. Persons with known egg allergy should discuss this with their physician before vaccination. In addition, there is a small risk of neurologic disease and encephalitis, particularly in individuals with compromised immune systems and very young children. The 17D vaccine is contraindicated in (among others) infants between zero and six months, people with thymus disorders associated with abnormal immune cell function, people with primary immunodeficiencies, and anyone with a diminished immune capacity including those taking immunosuppressant drugs.
There is a small risk of more severe yellow fever-like disease associated with the vaccine. This reaction, known as yellow fever vaccine-associated acute viscerotropic disease (YEL-AVD), causes a fairly severe disease closely resembling yellow fever caused by virulent strains of the virus. The risk factors for YEL-AVD are not known, although it has been suggested that it may be genetic. The 2'-5'-oligoadenylate synthase (OAS) component of the innate immune response is particularly important in protection from Flavivirus infection. Another reaction to the yellow fever vaccine is known as yellow fever vaccine-associated acute neurotropic disease (YEL-AND).
The Canadian Medical Association published a 2001 CMAJ article entitled "Yellow fever vaccination: be sure the patient needs it". The article begins by stating that of the seven people who developed system failure within two to five days of vaccination in 1996–2001, six died, "including 2 who were vaccinated even though they were planning to travel to countries where yellow fever has never been reported." The article notes that "3 demonstrated histopathologic changes consistent with wild yellow fever virus." The author recommends vaccination only for non-contraindicated travelers (see the article's list) who are going where yellow fever activity is reported or to the endemic zone, which can be found mapped on the CDC website cited below. In addition, the 2010 online edition of the Centers for Disease Control and Prevention's Traveler's Health Yellow Book states that between 1970 and 2002 only "nine cases of yellow fever were reported in unvaccinated travelers from the United States and Europe who traveled" to West Africa and South America, and 8 of the 9 died. However, it goes on to cite "only 1 documented case of yellow fever in a vaccinated traveler. This nonfatal case occurred in a traveler from Spain who visited several West African countries in 1988".
History
African tropical cultures had adopted burial traditions in which the deceased, including those who died of yellow fever, were buried near their habitations. This ensured that people within these cultures gained immunity through an acquired childhood case of "endemic" yellow fever. It also led to a lasting misperception, first among colonial authorities and foreign medical experts, that Africans had a "natural immunity" to the illness. In the nineteenth century, health provisioners forced the abandonment of these traditional burial practices, after which local populations began dying of yellow fever as frequently as populations without such burial customs, such as settlers.
The first modern attempts to develop a yellow fever vaccine followed the opening of the Panama Canal in 1912, which increased global exposure to the disease. The Japanese bacteriologist Hideyo Noguchi led investigations for the Rockefeller Foundation in Ecuador that resulted in a vaccine based on his theory that the disease was caused by a leptospiral bacterium. However, other investigators could not duplicate his results and the ineffective vaccine was eventually abandoned.
Another vaccine was developed from the "French strain" of the virus, obtained by Pasteur Institute scientists from a man in Dakar, Senegal, who survived his bout with the disease. This vaccine could be administered by scarification, like the smallpox vaccine, and was given in combination to produce immunity to both diseases, but it also had severe systemic and neurologic complications in a few cases. Attempts to attenuate the virus used in the vaccine failed. Scientists at the Rockefeller Foundation developed another vaccine derived from the serum of an African named Asibi in 1927, the first isolation of the virus from a human. It was safer but involved the use of large amounts of human serum, which limited widespread use. Both vaccines were in use for several years, the Rockefeller vaccine in the Western hemisphere and England, and the Pasteur Institute vaccine in France and its African colonies.
In 1937, Max Theiler, working with Hugh Smith and Eugen Haagen at the Rockefeller Foundation to improve the vaccine from the "Asibi" strain, discovered that a favorable chance mutation in the attenuated virus had produced a highly effective strain that was named 17D. Following the work of Ernest Goodpasture, Theiler used chicken eggs to culture the virus. After field trials in Brazil, over one million people were vaccinated by 1939, without severe complications. This vaccine was widely used by the U.S. Army during World War II. For his work on the yellow fever vaccine, Theiler received the 1951 Nobel Prize in Physiology or Medicine. Only the 17D vaccine remains in use today.
Theiler's vaccine was responsible for the largest outbreak of hepatitis B in history, infecting 330,000 soldiers and giving 50,000 jaundice between 1941 and 1942. At the time, chronic infectious hepatitis was not known, so when human serum was used in vaccine preparation, serum drawn from chronic hepatitis B virus (HBV) carriers contaminated the yellow fever vaccine. In 1941, researchers at Rocky Mountain Laboratories developed a safer alternative, an "aqueous-base" version of the 17D vaccine using distilled water combined with the virus grown in chicken eggs. Since 1971, screening technology for HBV has been available and is routinely used in situations where HBV contamination is possible including vaccine preparation.
Also in the 1930s, a French team developed the French neurotropic vaccine (FNV), which was extracted from mouse brain tissue. Since this vaccine was associated with a higher incidence of encephalitis, FNV was not recommended after 1961. Vaccine 17D is still in use, and more than 400 million doses have been distributed. Little research has been done to develop new vaccines. Newer vaccines, based on vero cells, are in development (as of 2018).
Manufacture and global supply
Increases in cases of yellow fever in endemic areas of Africa and South America in the 1980s were addressed by the WHO Yellow Fever Initiative launched in the mid-2000s. The initiative was supported by the Gavi Alliance, a collaboration of the WHO, UNICEF, vaccine manufacturers, and private philanthropists such as the Bill & Melinda Gates Foundation. Gavi-supported vaccination campaigns since 2011 have covered 88 million people in 14 countries considered at "high-risk" of a yellow fever outbreak (Angola was considered "medium risk"). As of 2013, there were four WHO-qualified manufacturers: Bio-Manguinhos in Brazil (with the Oswaldo Cruz Foundation), Institute Pasteur in Dakar, Senegal, the Federal State Unitary Enterprise of Chumakov Institute in Russia, and Sanofi Pasteur, the French pharmaceutical company. Two other manufacturers supply domestic markets: Wuhan Institute of Biological Products in China and Sanofi Pasteur in the United States.
Demand for yellow fever vaccine for preventive campaigns has increased from about five million doses per year to a projected 62 million per year by 2014. UNICEF reported in 2013 that supplies were insufficient. Manufacturers are producing about 35 million of the 64 million doses needed per year. Demand for the yellow fever vaccine has continued to increase due to the growing number of countries implementing yellow fever vaccination as part of their routine immunization programmes.
The outbreak of yellow fever in Angola and the Democratic Republic of Congo in 2016 has raised concerns about whether the global supply of the vaccine is adequate to meet the need during a large epidemic or pandemic of the disease. Routine childhood immunization was suspended in other African countries to ensure an adequate supply in the vaccination campaign against the outbreak in Angola. Emergency stockpiles of vaccine diverted to Angola, which consisted of about 10 million doses at the end of March 2016, had become exhausted, but were being replenished by May 2016. However, in August it was reported that about one million doses of six million shipped in February had been sent to the wrong place or not kept cold enough to ensure efficacy, resulting in shortages to fight the spreading epidemic in DR Congo. As an emergency measure, experts suggested fractional dose vaccination, using a fractional dose (1/5 or 1/10 of the usual dose) to extend existing supplies of vaccine. Others have noted that switching manufacturing processes to modern cell-culture technology might improve vaccine supply shortfalls, as the manufacture of the current vaccine in chicken eggs is slow and laborious. On 17 June 2016, the WHO agreed to the use of 1/5 the usual dose as an emergency measure during the ongoing outbreak in Angola and the DR Congo. The fractional dose would not qualify for a yellow fever certificate of vaccination for travelers. Later studies found that the fractional dose was just as protective as the full dose, even 10 years after vaccination.
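The supply arithmetic behind fractional dosing is simple to sketch. The stockpile size below reuses the six-million-dose shipment mentioned above purely as an illustration, and the function name is a made-up example, not part of any WHO tooling:

```python
from fractions import Fraction

def people_covered(doses_available: int, dose_fraction: Fraction) -> int:
    """How many people a stockpile can vaccinate when each person
    receives the given fraction of a standard dose."""
    return int(Fraction(doses_available) / dose_fraction)

standard = people_covered(6_000_000, Fraction(1))    # full dosing
fifth = people_covered(6_000_000, Fraction(1, 5))    # WHO emergency measure
print(standard, fifth)  # 6000000 30000000
```

A 1/10 dose would stretch the same stockpile to 60 million people, which is why fractional dosing was attractive while manufacture of the egg-based vaccine remained slow and laborious.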
As of February 2021, UNICEF reported awarded contract prices ranging from to per dose under multi-year contracts with various suppliers.
Travel requirements
Travellers who wish to enter certain countries or territories must be vaccinated against yellow fever 10 days before crossing the border, and be able to present a vaccination record/certificate at the border checks. In most cases, this travel requirement depends on whether the country they are travelling from has been designated by the World Health Organization as being a 'country with risk of yellow fever transmission'. In a few countries, it does not matter which country the traveller comes from: everyone who wants to enter these countries must be vaccinated against yellow fever. There are exemptions for newborn children; in most cases, any child who is at least 9 months or 1 year old needs to be vaccinated.
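The entry rule described above can be restated as a tiny check. This is an illustrative simplification: the function name is invented, and applying a flat 10-day cutoff with a certificate requirement glosses over the country- and age-specific exemptions noted above.

```python
from datetime import date, timedelta

def may_enter(vaccination_date: date, entry_date: date,
              has_certificate: bool) -> bool:
    # The vaccine must have been given at least 10 days before the
    # border crossing, and the certificate must be presentable.
    return has_certificate and (entry_date - vaccination_date) >= timedelta(days=10)

print(may_enter(date(2024, 1, 1), date(2024, 1, 11), True))   # True: 10 days elapsed
print(may_enter(date(2024, 1, 5), date(2024, 1, 11), True))   # False: only 6 days
```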
References
External links
Live vaccines
Vaccines
World Health Organization essential medicines (vaccines)
Yellow fever
Wikipedia medicine articles ready to translate
We Can Do It!

"We Can Do It!" is an American World War II wartime poster produced by J. Howard Miller in 1943 for Westinghouse Electric as an inspirational image to boost female worker morale.
The poster was little seen during World War II. It was rediscovered in the early 1980s and widely reproduced in many forms, often mistakenly called "Rosie the Riveter", which is a different depiction of a female war production worker. The "We Can Do It!" image was used to promote feminism and other political issues beginning in the 1980s. The image made the cover of the Smithsonian magazine in 1994 and was fashioned into a US first-class mail stamp in 1999. It was incorporated in 2008 into campaign materials for several American politicians, and was reworked by an artist in 2010 to celebrate the first woman becoming prime minister of Australia. The poster is one of the ten most-requested images at the National Archives and Records Administration.
After its rediscovery, observers often assumed that the image was always used as a call to inspire women workers to join the military war effort. However, during the war the image was strictly internal to Westinghouse, displayed only during February 1943, and was not for recruitment but to exhort already-hired women to work harder. People have seized upon the uplifting attitude and apparent message to remake the image into many different forms, including self empowerment, campaign promotion, advertising, and parodies.
After she saw the Smithsonian cover image in 1994, Geraldine Hoff Doyle mistakenly said that she was the subject of the poster. Doyle thought that she had also been captured in a wartime photograph of a woman factory worker, and she assumed that this photo inspired Miller's poster. Conflating her as "Rosie the Riveter", Doyle was honored by many organizations including the Michigan Women's Historical Center and Hall of Fame. However, in 2015, the woman in the wartime photograph was identified as then 20-year-old Naomi Parker, working in early 1942 before Doyle had graduated from high school. Doyle's notion that the photograph inspired the poster cannot be proved or disproved, so neither Doyle nor Parker can be confirmed as the model for "We Can Do It!".
Background
After the Japanese attack on Pearl Harbor, the U.S. government called upon manufacturers to produce greater amounts of war goods. The workplace atmosphere at large factories was often tense because of resentment built up between management and labor unions throughout the 1930s. Directors of companies such as General Motors (GM) sought to minimize past friction and encourage teamwork. In response to a rumored public relations campaign by the United Auto Workers union, GM quickly produced a propaganda poster in 1942 showing both labor and management rolling up their sleeves, aligned toward maintaining a steady rate of war production. The poster read, "Together We Can Do It!" and "Keep 'Em Firing!" In creating such posters, corporations wished to increase production by tapping popular pro-war sentiment, with the ultimate goal of preventing the government from exerting greater control over production.
J. Howard Miller
J. Howard Miller was an American graphic artist. He painted posters during World War II in support of the war effort, among them the famous "We Can Do It!" poster. Aside from the iconic poster, Miller remains largely unknown. For many years, little had been written about Miller's life, with uncertainty extending to his birth and death dates. In 2022, Professor James J. Kimble uncovered more of Miller's personal information, setting the birth year at 1898, and the death at 1985. Miller was married to Mabel Adair McCauley. Their marriage was childless; surviving family members are related through Miller's siblings.
Miller studied at the Art Institute of Pittsburgh, graduating in 1939. He lived in Pittsburgh during the war. His work came to the attention of the Westinghouse Company (later, the Westinghouse War Production Co-Ordinating Committee), and he was hired to create a series of posters. The posters were sponsored by the company's internal War Production Co-Ordinating Committee, one of the hundreds of labor-management committees organized under the supervision of the national War Production Board. Aside from his commercial work, Miller painted landscapes and studies in oil; Miller's family kept all of his works in their homes.
Westinghouse Electric
In 1942, Miller was hired by Westinghouse Electric's internal War Production Coordinating Committee, through an advertising agency, to create a series of posters to display to the company's workers. The intent of the poster project was to raise worker morale, to reduce absenteeism, to direct workers' questions to management, and to lower the likelihood of labor unrest or a factory strike. Each of the more than 42 posters designed by Miller was displayed in the factory for two weeks, then replaced by the next one in the series. Most of the posters featured men; they emphasized traditional roles for men and women. One of the posters pictured a smiling male manager with the words "Any Questions About Your Work? ... Ask your Supervisor."
No more than 1,800 copies of the 17-by-22-inch (559 by 432 mm) "We Can Do It!" poster were printed. It was not initially seen beyond several Westinghouse factories in East Pittsburgh, Pennsylvania, and the midwestern U.S., where it was scheduled to be displayed for two five-day work weeks starting Monday, February 15, 1943. The targeted factories were making plasticized helmet liners impregnated with Micarta, a phenolic resin invented by Westinghouse. Mostly women were employed in this enterprise, which yielded some 13 million helmet liners over the course of the war. The slogan "We Can Do It!" was probably not interpreted by the factory workers as empowering to women alone; they had been subjected to a series of paternalistic, controlling posters promoting management authority, employee capability and company unity, and the workers would likely have understood the image to mean "Westinghouse Employees Can Do It", all working together. The upbeat image served as gentle propaganda to boost employee morale and keep production from lagging. The badge on the "We Can Do It!" worker's collar identifies her as a Westinghouse Electric plant floor employee; the pictured red, white and blue clothing was a subtle call to patriotism, one of the frequent tactics of corporate war production committees.
Rosie the Riveter
During World War II, the "We Can Do It!" poster was not connected to the 1942 song "Rosie the Riveter", nor to the widely seen Norman Rockwell painting called Rosie the Riveter that appeared on the cover of the Memorial Day issue of the Saturday Evening Post, May 29, 1943. The Westinghouse poster was not associated with any of the women nicknamed "Rosie" who came forward to promote women working for war production on the home front. Rather, after being displayed for two weeks in February 1943 to some Westinghouse factory workers, it disappeared for nearly four decades. Other "Rosie" images prevailed, often photographs of actual workers. The Office of War Information geared up for a massive nationwide advertising campaign to sell the war, but "We Can Do It!" was not part of it.
Rockwell's emblematic Rosie the Riveter painting was loaned by the Post to the U.S. Treasury Department for use in posters and campaigns promoting war bonds. Following the war, the Rockwell painting gradually sank from public memory because it was copyrighted; all of Rockwell's paintings were vigorously defended by his estate after his death. This protection resulted in the original painting gaining value—it sold for nearly $5 million in 2002. Conversely, the lack of protection for the "We Can Do It!" image is one of the reasons it experienced a rebirth.
Ed Reis, a volunteer historian for Westinghouse, noted that the original image was not shown to female riveters during the war, so the recent association with "Rosie the Riveter" was unjustified. Rather, it was targeted at women who were making helmet liners out of Micarta. Reis joked that the woman in the image was more likely to have been named "Molly the Micarta Molder or Helen the Helmet Liner Maker."
Rediscovery
In 1982, the "We Can Do It!" poster was reproduced in a magazine article, "Poster Art for Patriotism's Sake", a Washington Post Magazine article about posters in the collection of the National Archives.
In subsequent years, the poster was re-appropriated to promote feminism. Feminists saw in the image an embodiment of female empowerment. The "We" was understood to mean "We Women", uniting all women in a sisterhood fighting against gender inequality. This was very different from the poster's 1943 use to control employees and to discourage labor unrest. History professor Jeremiah Axelrod commented on the image's combination of femininity with the "masculine (almost macho) composition and body language."
Smithsonian magazine put the image on its cover in March 1994, to invite the viewer to read a featured article about wartime posters. The US Postal Service created a 33¢ stamp in February 1999 based on the image, with the added words "Women Support War Effort". A Westinghouse poster from 1943 was put on display at the National Museum of American History, part of the exhibit showing items from the 1930s and '40s.
Wire service photograph
In 1984, former war worker Geraldine Hoff Doyle came across an article in Modern Maturity magazine which showed a wartime photograph of a young woman working at a lathe, and she assumed that the photograph was taken of her in mid-to-late 1942 when she was working briefly in a factory. Ten years later, Doyle saw the "We Can Do It!" poster on the front of the Smithsonian magazine and assumed the poster was an image of herself. Without intending to profit from the connection, Doyle decided that the 1942 wartime photograph had inspired Miller to create the poster, making Doyle herself the model for the poster. Subsequently, Doyle was widely credited as the inspiration for Miller's poster. From an archive of Acme news photographs, Professor James J. Kimble obtained the original photographic print, including its yellowed caption identifying the woman as Naomi Parker. The photo is one of a series of photographs taken at Naval Air Station Alameda in California, showing Parker and her sister working at their war jobs during March 1942. These images were published in various newspapers and magazines beginning in April 1942, during a time when Doyle was still attending high school in Michigan. In February 2015, Kimble interviewed the Parker sisters: Naomi Fern Fraley, 93, and her sister Ada Wyn Morford, 91; he found out that they had known for five years about the incorrect identification of the photo, and had been rebuffed in their attempt to correct the historical record. Naomi died at age 96 on January 20, 2018.
Although many publications have repeated Doyle's unsupported assertion that the wartime photograph inspired Miller's poster, Westinghouse historian Charles A. Ruch, a Pittsburgh resident who had been friends with J. Howard Miller, said that Miller was not in the habit of working from photographs, but rather live models. However, the photograph of Naomi Parker did appear in the Pittsburgh Press on July 5, 1942, making it possible that Miller saw it as he was creating the poster.
Legacy
Today, the image has become very widely known, far beyond its narrowly defined purpose during World War II. It has adorned T-shirts, tattoos, coffee cups and refrigerator magnets—so many different products that The Washington Post called it the "most over-exposed" souvenir item available in Washington, D.C. It was used in 2008 by some of the various regional campaigners working to elect Sarah Palin, Ron Paul and Hillary Clinton. Michelle Obama was worked into the image by some attendees of the 2010 Rally to Restore Sanity and/or Fear. The image has been employed by corporations such as Clorox who used it in advertisements for household cleaners, the pictured woman provided in this instance with a wedding ring on her left hand. Parodies of the image have included famous women, men, animals and fictional characters. A bobblehead doll and an action figure toy have been produced. The Children's Museum of Indianapolis showed a replica made by artist Kristen Cumings from thousands of Jelly Belly candies.
After Julia Gillard became the first female prime minister of Australia in June 2010, a street artist in Melbourne calling himself Phoenix pasted Gillard's face into a new monochrome version of the "We Can Do It!" poster. AnOther Magazine published a photograph of the poster taken on Hosier Lane, Melbourne, in July 2010, showing that the original "War Production Co-ordinating Committee" mark in the lower right had been replaced with a URL pointing to Phoenix's Flickr photostream. In March 2011, Phoenix produced a color version which stated "She Did It!" in the lower right, then in January 2012 he pasted "Too Sad" diagonally across the poster to represent his disappointment with developments in Australian politics.
Geraldine Doyle died in December 2010. Utne Reader went ahead with their scheduled January–February 2011 cover image: a parody of "We Can Do It!" featuring Marge Simpson raising her right hand in a fist. The editors of the magazine expressed regret at the passing of Doyle.
A stereoscopic image of "We Can Do It!" was created for the closing credits of the 2011 superhero film Captain America: The First Avenger. The image served as the background for the title card of English actress Hayley Atwell.
The Ad Council claimed the poster was developed in 1942 by its precursor, the War Advertising Committee, as part of a "Women in War Jobs" campaign, helping to bring "over two million women" into war production. In February 2012 during the Ad Council's 70th anniversary celebration, an interactive application designed by Animax's HelpsGood digital agency was linked to the Ad Council's Facebook page. The Facebook app was called "Rosify Yourself", referring to Rosie the Riveter; it allowed viewers to upload images of their faces to be incorporated into the "We Can Do It!" poster, then saved to be shared with friends. Ad Council President and CEO Peggy Conlon posted her own "Rosified" face on Huffington Post in an article she wrote about the group's 70-year history. The staff of the television show Today posted two "Rosified" images on their website, using the faces of news anchors Matt Lauer and Ann Curry. However, Seton Hall University professor James J. Kimble and University of Pittsburgh professor Lester C. Olson researched the origins of the poster and determined that it was not produced by the Ad Council nor was it used for recruiting women workers.
In 2010, American singer Pink recreated the poster in the music video for her song "Raise Your Glass".
The poster continues to inspire artists such as Kate Bergen. She has painted images of COVID-19 medical workers in a similar style, initially to cope with the stress of her work but also to encourage others and support front line workers.
See also
American propaganda during World War II
Bras d'honneur
Keep Calm and Carry On, another WWII poster that became famous only decades later
References
External links
"We Can Do It!" poster at the National Museum of American History
Library of Congress Webcast
J. Howard Miller (1918–2004)
1943 works
American art
American propaganda during World War II
Feminist art
Propaganda posters
Westinghouse Electric Company
1943 quotations
American advertising slogans
Motivation
Wave Energy Scotland

Wave Energy Scotland (WES) is a technology development body set up by the Scottish Government to facilitate the development of wave energy in Scotland. It was set up in 2015 and is a subsidiary of Highlands and Islands Enterprise (HIE) based in Inverness.
WES has managed numerous projects resulting from pre-commercial procurement funding calls in six main topic areas: power take-off, novel wave energy converters, structural materials and manufacturing processes, control systems, quick connection systems, and next generation wave energy. Each of these uses a stage-gate process, with fewer successful projects passing to later stages. WES has also commissioned eight landscaping studies in two phases.
In 2020, together with the Basque Energy Agency (Ente Vasco de la Energía, or EVE), WES set up the EuropeWave programme to develop and test the most promising wave energy technologies, of which three concepts will be tested at sea. This is supported by European Horizon 2020 funding.
Inception
The Scottish Government took positive action to support the ailing wave energy sector in Scotland following the demise of one of its leading developers, Pelamis Wave Power. The Energy Minister, Fergus Ewing, announced an initial budget for the body of £14.3 million over 13 months at the RenewableUK conference in February 2015.
Organisation objectives
The original objectives for WES were set out by the Scottish Government as:
Seek to retain the intellectual property and know-how from device development in Scotland for future benefit;
Enable Scotland’s indigenous technologies to reach commercial readiness in the most efficient and effective manner, and in a way that allows the public sector to exit in due course;
Ensure that the learning gained from support for wave device development and deployment to date, in particular the learning from Scotland’s leading wave technologies, is retained and used to benefit the wave energy industry;
Avoid duplication in funding, encourage collaboration between companies and research institutes and foster greater standardisation across the industry;
Ensure value for money from public sector investment;
Promote greater confidence in the technical performance of wave energy systems in order to encourage the return of private sector investment; and
Promote innovation in wave energy technology and encourage collaboration between industry, academia, and government.
Stage gate selections
The WES development programme uses a series of stage-gates to evaluate technology progress.
Through collaboration with the International Energy Agency's Ocean Energy Systems programme, Wave Energy Scotland has helped to develop "An International Evaluation and Guidance Framework for Ocean Energy Technology", first published in 2021. This sets out a clear evaluation methodology for the technical development and cost-effectiveness of wave and tidal energy technologies. A second edition was published in October 2023, adding the important aspect of environmental acceptability, which had been missing from the first edition.
The framework consists of six sequential stages of development, which are equivalent to those used in the IEC guidelines for testing early-stage WECs, and can be linked to the widely used Technology Readiness Level (TRL) scale.
Project calls
The WES development programme uses a staged approach with projects progressing from concept (stage 1), through design (stage 2), to demonstration (stage 3). To date, WES has held funding calls to start five development programmes, listed below. The successful projects in each stage are tabulated in List of projects funded by Wave Energy Scotland.
In 2023, a sixth area of Next Generation Wave Energy was introduced, focusing on flexible generators.
Power Take-off (PTO)
In March 2015, WES announced the first call of their development programme, for innovative power take-off systems. Depending on the status of the technology, projects of £100k to £4m were sought, with successful applicants eligible to claim 100% of the cost of development. A total of 42 applications were made for this £7m call, with contracts awarded to nine consortia.
In July 2016, a total of 16 Power Take-Off projects were awarded, with over £7m total funding.
Nine projects in Stage 1, at around £90k each.
Five projects directly into Stage 2, at between £300k and £500k each.
One project starting in Stage 3, the CorPower Ocean HiDrive project with £1.9m in funding.
In September 2016, four of the nine PTO Stage 1 projects progressed to Stage 2, each awarded funding of around £490k.
In March 2017, three of the original Stage 2 projects progressed to Stage 3, with nearly £2.5m funding each.
In February 2018, one of the original Stage 1 projects also progressed to Stage 3 and was awarded £2.5m.
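Collecting the PTO figures above gives a rough picture of the stage-gate funnel. The project counts come from the awards listed; the per-project funding values are approximate midpoints of the quoted ranges, not official totals.

```python
# WES Power Take-Off call: projects entering each stage, with rough
# per-project funding (GBP) drawn from the figures quoted above.
pto_funnel = {
    "Stage 1": {"projects": 9, "approx_funding": 90_000},
    "Stage 2": {"projects": 5 + 4, "approx_funding": 450_000},    # direct entries + promotions
    "Stage 3": {"projects": 1 + 3 + 1, "approx_funding": 2_400_000},
}
for stage, info in pto_funnel.items():
    total = info["projects"] * info["approx_funding"]
    print(f"{stage}: {info['projects']} projects, ~£{total:,} total")
```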
Novel Wave Energy Converter Call (NWEC)
In June 2015, the second call was announced, this time for "truly novel" wave energy converters. Eight projects were funded for the first stage of the NWEC call, out of 37 applications.
In November 2015, eight projects were each awarded between £250k-£300k for 12 month NWEC Stage 1 projects, a total of £2.25m in funding.
Four of these projects progressed to Stage 2 in April 2017, awarded around £700k in further funding.
In January 2019, Mocean Energy and AWS Ocean Energy were awarded £7.7m between them for Stage 3 projects. Both companies planned to build half-scale devices and test them at the European Marine Energy Centre in real-sea conditions.
Structural Materials and Manufacturing Processes Call
A third call for Structural Materials and Manufacturing Processes was launched in July 2016, looking for materials for the WEC structure or prime mover that would facilitate a step-change reduction in the levelised cost of energy (LCOE).
In January 2017, 10 awards of around £250k each were made, for 12 month Stage 1 projects.
In July 2018, three of these projects progressed to Stage 2, with a further £1.4m in funding between them.
In March 2020, two projects then progressed to Stage 3: one, led by Arup, investigating the use of concrete as a structural material; the second, led by Tension Technology International, looking into a flexible buoyant pod.
Control Systems
In April 2017, a call for feasibility studies on Control Systems was announced, particularly welcoming experience from other related sectors. This was for initial projects of up to £47k lasting three months.
In September 2017, 13 projects were awarded at Stage 1, with a total budget of £660k.
Three of these projects progressed to Stage 2 in March 2018.
In May 2019, two then progressed to Stage 3, sharing a budget of almost £1m.
Quick Connection System
A call was launched in July 2019 for systems that facilitate rapid connection and disconnection of a WEC from the moorings/electrical system, which was expected to speed up installation and operations, both leading to reduced costs.
Seven projects were awarded at Stage 1 in December 2019.
Of these, four progressed to Stage 2 in July 2020.
Three then progressed to Stage 3 in July 2021, with almost £1.8m in funding.
Next Generation Wave Energy
In July 2023, a call was launched for concept designs that would directly convert motion into electricity, harnessing novel flexible electrostatic polymers and elastomers. Five projects were awarded up to £50k for 12-14 week concept designs investigating dielectric elastomer generators, and dielectric fluid generators.
Two projects, led by 4c Engineering and TTI Marine Renewables, were awarded a further £400k funding in August 2024 for Stage 2. Over the following nine months, they are expected to form collaborations and progress their concepts for flexible wave energy devices.
EuropeWave
In December 2020, together with the Basque Energy Agency (Ente Vasco de la Energía, or EVE), WES set up the EuropeWave programme. This builds on the WES programme, using the same staged approach and pre-commercial procurement model. The programme has a budget of over €22.5m, comprising national, regional, and European Horizon 2020 funding. Trade association Ocean Energy Europe is also part of the consortium.
As with the WES Novel Wave Energy Converter call, the programme will consist of three stages (1–3), culminating in scaled demonstration in real sea conditions for a year, at either the European Marine Energy Centre, Orkney, Scotland, or the Biscay Marine Energy Platform (BiMEP) near Armintza, Basque Country.
Seven companies, listed in the table below, were selected in December 2021 to develop their device concepts, sharing a budget of €2.4m. After completing Stage 1, the five most promising technologies progressed to Stage 2 to perform more extensive modelling and testing to optimise their design.
In September 2023, it was announced that CETO Wave Energy Ireland's ACHIEVE, IDOM Consulting's MARMOK-Atlantic, and Mocean Energy's Blue Horizon 250 had progressed to the final stage of the EuropeWave programme with a shared budget of €13.4m. In April 2024, CETO secured a berth to test at BiMEP and also passed the authorisation-to-proceed milestone, enabling them to award the first contracts for fabrication of the device. Mocean plan to test their 250 kW device at the EMEC Billia Croo site, aiming to launch in 2025.
Intellectual property
WES acquired intellectual property developed by the now-defunct Scottish wave energy companies Pelamis Wave Power and Aquamarine Power. The former acquisition came as part of the inception of Wave Energy Scotland, which hired 12 former Pelamis employees including CEO Richard Yemm; the latter was completed in September 2016.
Knowledge Library
WES maintain an online Knowledge Library as part of their website, to provide access to information and documents from their extensive technology development programmes. It also contains reports from the knowledge capture projects from Pelamis Wave Power, Aquamarine Power, and AWS Ocean Energy.
Annual conference
With the exception of 2020 and 2021, WES has held an annual conference since 2016 to showcase progress in the sector.
The first Wave Energy Scotland annual conference was held on 2 December 2016 at Pollock Halls in Edinburgh. It provided an update on ongoing and future calls, plus quick-fire updates from participants in the ongoing PTO and NWEC calls.
A second annual conference was held on 28 November 2017.
The third annual conference was held on 6 December 2018 at the Edinburgh International Conference Centre.
External links
Wave Energy Scotland website
Wave Energy Scotland Knowledge Library
See also
Wave power
Renewable energy in the United Kingdom
List of wave power projects
Marine energy
Renewable energy in Scotland
References
Renewable energy organizations
Organisations supported by the Scottish Government
Organisations based in Inverness
Wave power
Freedom on the Net

Freedom on the Net is an annual report providing analytical reports and numerical ratings regarding the state of Internet freedom for countries worldwide, published by the American non-profit research and advocacy group Freedom House. The countries surveyed represent a sample with a broad range of geographical diversity and levels of economic development, as well as varying levels of political and media freedom.
Methodology
The surveys ask a set of questions designed to measure each country's level of Internet and digital media freedom, as well as the access and openness of other digital means of transmitting information, particularly mobile phones and text messaging services. Results are presented for three areas:
Obstacles to Access: infrastructural and economic barriers to access; governmental efforts to block specific applications or technologies; legal and ownership control over internet and mobile phone access providers.
Limits on Content: filtering and blocking of websites; other forms of censorship and self-censorship; manipulation of content; the diversity of online news media; and usage of digital media for social and political activism.
Violations of User Rights: legal protections and restrictions on online activity; surveillance and limits on privacy; and repercussions for online activity, such as legal prosecution, imprisonment, physical attacks, or other forms of harassment.
The results from the three areas are combined into a total score for a country (from 100 for "Most Free" to 0 for "Least Free") and countries are rated as "Free" (100 to 70), "Partly Free" (69 to 40), or "Not Free" (39 to 0) based on the totals.
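The banding rule above can be expressed directly. The function name is invented for illustration, but the thresholds are the ones given in the report:

```python
def rate_country(total_score: int) -> str:
    """Map a Freedom on the Net total (0 = least free, 100 = most free)
    to its status band, per the published thresholds."""
    if not 0 <= total_score <= 100:
        raise ValueError("score must be between 0 and 100")
    if total_score >= 70:
        return "Free"
    if total_score >= 40:
        return "Partly Free"
    return "Not Free"

print(rate_country(70))  # Free
print(rate_country(40))  # Partly Free
print(rate_country(39))  # Not Free
```

Note that each band is inclusive at both ends as published: a score of exactly 70 is "Free" and exactly 40 is "Partly Free".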
Results
Starting in 2009, Freedom House has produced eleven editions of the report.
There was no report in 2010. The reports generally cover the period from June through May.
2020 results
Comparison with Other Datasets
Several other organizations measure internet freedom, such as the V-Dem Institute, Access Now, and the OpenNet Initiative. V-Dem's Digital Society project measures a range of questions related to internet censorship, misinformation online, and internet shutdowns using surveys of experts. Access Now maintains an annual list of internet shutdowns, throttling, and blockages as part of the #KeepItOn project. The OpenNet Initiative formerly kept data on internet censorship of particular websites. Freedom on the Net's report covers a range of concepts that the other datasets do not, such as new legislation passed, but lacks the country coverage of other datasets.
Expert surveys such as Freedom House's and V-Dem's have been found to be more prone to false positives (they are more likely to report uncorroborated instances of censorship), while remote-sensing approaches such as those of Access Now and the OpenNet Initiative are more prone to false negatives (they may miss some instances of real censorship).
The Millennium Challenge Corporation used the Key Internet Controls portion of the Freedom on the Net report to inform its country selection process until 2020 when this report was replaced with data on internet shutdowns from Access Now.
References
Digital rights
Works about the Internet | Freedom on the Net | Technology | 623 |
9,002,628 | https://en.wikipedia.org/wiki/Robbins%20pentagon | In geometry, a Robbins pentagon is a cyclic pentagon whose side lengths and area are all rational numbers.
History
Robbins pentagons were named by Buchholz and MacDougall after David P. Robbins, who had previously given a formula for the area of a cyclic pentagon as a function of its edge lengths. Buchholz and MacDougall chose this name by analogy with the naming of Heron triangles after Hero of Alexandria, the discoverer of Heron's formula for the area of a triangle as a function of its edge lengths.
Area and perimeter
Every Robbins pentagon may be scaled so that its sides and area are integers. More strongly, Buchholz and MacDougall showed that if the side lengths are all integers and the area is rational, then the area is necessarily also an integer, and the perimeter is necessarily an even number.
Diagonals
Buchholz and MacDougall also showed that, in every Robbins pentagon, either all five of the internal diagonals are rational numbers or none of them are. If the five diagonals are rational (a case sometimes called a Brahmagupta pentagon), then the radius of its circumscribed circle must also be rational, and the pentagon may be partitioned into three Heronian triangles by cutting it along any two non-crossing diagonals, or into five Heronian triangles by cutting it along the five radii from the circle center to its vertices.
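Whether a triangle with integer sides is Heronian can be tested exactly with Heron's formula; a small sketch (the helper name and example triangles are ours):

```python
from math import isqrt

def is_heronian(a, b, c):
    """True if the integer-sided triangle (a, b, c) has integer area.

    Heron's formula gives 16 * area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c),
    so the area is an integer exactly when that product is a perfect
    square whose integer square root is divisible by 4.
    """
    p = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    if p <= 0:               # degenerate or impossible triangle
        return False
    r = isqrt(p)
    return r * r == p and r % 4 == 0

# The classic (13, 14, 15) triangle is Heronian, with area 84.
```

The same exact-arithmetic approach (rational rather than floating-point) underlies the computational searches mentioned above.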
Buchholz and MacDougall performed computational searches for Robbins pentagons with irrational diagonals but were unable to find any. On the basis of this negative result they suggested that Robbins pentagons with irrational diagonals may not exist.
References
.
.
.
Arithmetic problems of plane geometry
Circles
Types of polygons | Robbins pentagon | Mathematics | 342 |
77,602,852 | https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%2095%E2%80%93100%20light-years | This is a list of star systems within 95–100 light years of Earth.
See also
List of nearest stars
List of star systems within 90–95 light-years
List of star systems within 100–150 light-years
References
Lists of stars
Star systems
Lists by distance | List of star systems within 95–100 light-years | Physics,Astronomy | 55 |
992,586 | https://en.wikipedia.org/wiki/NGC%20381 | NGC 381 is an open cluster of stars in the northern constellation of Cassiopeia, located at a distance of approximately from the Sun. Credit for the discovery of this cluster was given to Caroline Herschel by her brother William in 1787, although she may never have actually seen it.
This is a Trumpler class cluster of intermediate age, estimated at 316 million years. This class indicates the cluster is relatively weakly concentrated, with a small brightness range and an intermediate richness of stars. A total of 350 probable members have been identified, down to 20th magnitude, and the cluster contains about 32 times the mass of the Sun. The cluster has a core angular radius of and an outer cluster radius of . It has a physical tidal radius of . No giant stars have been discovered in this cluster. Four candidate variable stars have been found in the field of NGC 381, one of which is a suspected cluster member. The eclipsing binary OX Cassiopeiae was once thought to be a member, but is now known to be a background star system.
References
External links
SEDS – NGC 381
NGC 0381
0381
17871103 | NGC 381 | Astronomy | 235 |
965,817 | https://en.wikipedia.org/wiki/Center%20tap | In electronics, a center tap (CT) is a contact made to a point halfway along a winding of a transformer or inductor, or along the element of a resistor or a potentiometer.
Taps are sometimes used on inductors for the coupling of signals, and may not necessarily be at the half-way point, but rather, closer to one end. A common application of this is in the Hartley oscillator. Inductors with taps also permit the transformation of the amplitude of alternating current (AC) voltages for the purpose of power conversion, in which case, they are referred to as autotransformers, since there is only one winding. An example of an autotransformer is an automobile ignition coil.
Potentiometer tapping provides one or more connections along the device's element, along with the usual connections at each of the two ends of the element, and the slider connection. Potentiometer taps allow for circuit functions that would otherwise not be available with the usual construction of just the two end connections and one slider connection.
Volts center tapped
Volts center tapped (VCT) describes the voltage output of a center tapped transformer. For example, a 24 VCT transformer will measure 24 VAC across the outer two taps (winding as a whole), and 12 VAC from each outer tap to the center-tap (half winding). These two 12 VAC supplies are 180 degrees out of phase with each other, measured with respect to the tap, thus making it easy to derive positive and negative 12 volt DC power supplies from them.
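As a quick numerical check of the 24 VCT example, the two half-winding voltages can be modeled as ideal sinusoids; a sketch (the 60 Hz frequency and sampling grid are assumptions for illustration):

```python
import numpy as np

# Ideal 24 VCT secondary: the two half-windings, measured against the
# center tap, are 12 VAC RMS each and 180 degrees out of phase, and
# together they reconstruct the 24 VAC RMS full winding.
t = np.linspace(0, 1 / 60, 1200, endpoint=False)         # one 60 Hz cycle
v_full = 24 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)    # outer tap to outer tap
v_a = v_full / 2      # outer tap A to center tap
v_b = -v_full / 2     # outer tap B to center tap (inverted, i.e. 180 deg shifted)

def rms(v):
    return float(np.sqrt(np.mean(v ** 2)))
```

Here `rms(v_a)` and `rms(v_b)` both come out to 12 V, and the difference `v_a - v_b` equals the full 24 V winding voltage, matching the description above.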
Applications and history
In vacuum tube audio amplifiers, center-tapped transformers were sometimes used as the phase inverter to drive the two output tubes of a push-pull stage. The technique is nearly as old as electronic amplification and is well documented, for example, in The Radiotron Designer's Handbook, Third Edition of 1940. This technique was carried over into transistor designs also, part of the reason for which was that capacitors were large, expensive and unreliable. However, since that era, capacitors have become vastly smaller, cheaper and more reliable, whereas transformers are still relatively expensive. Furthermore, as designers acquired more experience with transistors, they stopped trying to treat them like tubes. Coupling a class A intermediate amplification stage to a class AB power stage using a transformer doesn't make sense anymore even in small systems powered from a single-voltage supply. Modern higher-end equipment is based on dual-supply designs which eliminates coupling. It is possible for an amplifier, from the input all the way to the loudspeaker, to be DC coupled without any capacitance or inductance. Nevertheless, this use is still relevant in the 21st century because tubes and tube amplifiers continue to be produced for niche markets.
In analog telecommunications systems center-tapped transformers can be used to provide a DC path around an AC coupled amplifier for signalling purposes.
Three-wire power distribution can be used, e.g. with 240 VCT to provide two 120 VAC circuits in the US and Canada.
Low-frequency mains transformers often have center taps. Historically, rectifier costs were high, so DC power supplies with a center-tapped transformer and two diodes justified the extra cost of copper windings and iron laminations, even though only half of the secondary coil is used per half-cycle. Consumer products like cassette recorders often used 18 VCT transformers to obtain 9 VDC until the 1980s. With four diodes, both halves can be used, which leads to efficient designs for symmetrical voltages with the center tap as common ground. For example, in arcade machines like Atari Asteroids (1979), a 36 VCT transformer is used in a four-diode configuration to produce +/- 15 VDC (after regulation), while the same power supply provides 10.3 VDC unregulated from a two-diode configuration. In the late 1970s, bridge rectifiers became the better business case and simplified assembly.
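The two-diode versus four-diode trade-off described above can be sketched with ideal diodes, using the 18 VCT cassette-recorder example from the text (the sampling grid and the neglect of diode drops and filtering are assumptions):

```python
import numpy as np

# Ideal full-wave rectification of an 18 VCT secondary (9 VAC RMS per
# half-winding). Diode drops, filtering and regulation are ignored.
t = np.linspace(0, 2 / 60, 2400, endpoint=False)           # two 60 Hz cycles
v_half = 9 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)       # one half-winding

# Two diodes + center tap: the conducting half alternates each half-cycle,
# so the load sees the rectified half-winding voltage.
v_two_diode = np.abs(v_half)
# A four-diode bridge across the full 18 V winding uses both halves.
v_bridge = np.abs(2 * v_half)
```

The two-diode output peaks at about 12.7 V (9 x sqrt(2)), the bridge at twice that, which is why the bridge configuration makes fuller use of the same winding.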
In switch-mode power supplies, center-tapped transformers are often used, sometimes with single diodes or a dual diode half-bridge to optimize their dynamic electromagnetic behavior at the expense of the extra windings.
Phantom power can be supplied to a condenser microphone using center tap transformers. One method, called "direct center tap" uses two center tap transformers, one at the microphone body and one at the microphone preamp. Filtered DC voltage is connected to the microphone preamp center tap, and the microphone body center tap is grounded through the cable shield. The second method uses the same center tap transformer topology at the microphone body, but at the microphone preamp, a matched pair of resistors spanning the signal lines in series creates an "artificial center tap".
References
F. Langford Smith, The Radiotron Designer's Handbook Third Edition, (1940), The Wireless Press, Sydney, Australia, no ISBN, no Library of Congress card
Electrical circuits
Electric transformers
| Center tap | Engineering | 1,046 |
14,293,466 | https://en.wikipedia.org/wiki/Tin%20cry | Tin cry is the characteristic sound heard when a bar made of tin is bent. Variously described as a "screaming" or "crackling" sound, the effect is caused by the crystal twinning in the metal. The sound is not particularly loud, despite terms like "crying" and "screaming". It is very noticeable when a hot-dip tin coated sheet metal is bent at high speed over rollers during processing.
Tin cry is often demonstrated using a simple science experiment. A bar of tin will "cry" repeatedly when bent until it breaks. The experiment can then be recycled by melting and recrystallizing the metal. The low melting point of tin, , makes re-casting easy. Tin anneals at reasonably low temperature as well, normalizing tin's microstructure of crystallites/grains.
Although the cry is most typical of tin, a similar effect occurs in other metals, such as niobium, indium, zinc, cadmium, gallium, and solid mercury.
References
External links
Tin cry on YouTube
Mercury cry on YouTube
Tin
Materials degradation
Fracture mechanics | Tin cry | Materials_science,Engineering | 225 |
51,937,029 | https://en.wikipedia.org/wiki/Oilfield%20scale%20inhibition | Oilfield scale inhibition is the process of preventing the formation of scale from blocking or hindering fluid flow through pipelines, valves, and pumps used in oil production and processing. Scale inhibitors (SIs) are a class of specialty chemicals that are used to slow or prevent scaling in water systems. Oilfield scaling is the precipitation and accumulation of insoluble crystals (salts) from a mixture of incompatible aqueous phases in oil processing systems. is a common term in the oil industry used to describe solid deposits that grow over time, blocking and hindering fluid flow through pipelines, valves, pumps etc. with significant reduction in production rates and equipment damages. Scaling represents a major challenge for flow assurance in the oil and gas industry. Examples of oilfield scales are calcium carbonate (limescale), iron sulfides, barium sulfate and strontium sulfate. Scale inhibition encompasses the processes or techniques employed to treat scaling problems.
Background
The three prevailing water-related problems that upset oil companies today are corrosion, gas hydrates and scaling in production systems. The reservoir water has a high composition of dissolved minerals equilibrated over millions of years at constant physicochemical conditions. As the reservoir fluids are pumped from the ground, changes in temperature, pressure and chemical composition shift the equilibria and cause precipitation and deposition of sparingly soluble salts that build up over time with the potential of blocking vital assets in the oil production setups. Scaling can occur at all stages of oil/gas production systems (upstream, midstream and downstream) and causes blockages of well-bore perforations, casing, pipelines, pumps, valves etc. Severe scaling issues have been reported in Russia and certain North Sea production systems.
Types of scales
Two main classifications of scales are known; inorganic and organic scales and the two types are mutually inclusive, occurring simultaneously in the same system, referred to as mixed scale. Mixed scales may result in highly complex structured scales that are difficult to treat. Such scales require aggressive, severe and sometimes costly remediation techniques. Paraffin wax, asphaltenes and gas hydrates are the most often encountered organic scales in the oil industry. This article focuses on the simplest and common form of scales encountered; inorganic scales.
Inorganic scale
Inorganic scales refer to mineral deposits that occur when the formation water mixes with different brines such as injection water. The mixing causes reactions between incompatible ions and changes the thermodynamic and equilibrium state of the reservoir fluids, leading to supersaturation and subsequent deposition of the inorganic salts. The most common types of inorganic scales known to the oil/gas industry are carbonates and sulfates; sulfides and chlorites are also often encountered.
While the solubility of most inorganic salts (NaCl, KCl, ...) increases with temperature (endothermic dissolution reaction), some inorganic salts such as calcium carbonate and calcium sulfate have also a retrograde solubility, i.e., their solubility decreases with temperature. In the case of calcium carbonate, it is due to the degassing of CO2 whose solubility decreases with temperature as is the case for most of the gases (exothermic dissolution reaction in water). In calcium sulfate, the reason is that the dissolution reaction of calcium sulfate itself is exothermic and therefore is favored when the temperature decreases (then, the dissolution heat is more easily evacuated; see Le Chatelier's principle). In other terms, the solubility of calcium carbonate and calcium sulfate increases at low temperature and decreases at high temperature, as it is also the case for calcium hydroxide (portlandite), often cited as a didactic case study to explain the reason of retrograde solubility.
Calcium carbonate scale
Water, noted for its high solvation power, can dissolve certain gases such as carbon dioxide (CO2) to form aqueous CO2(aq). Under the right conditions of temperature and/or pressure, H2O and CO2(aq) molecules react to yield carbonic acid (H2CO3), whose solubility increases at low temperature and high pressure. The slightest changes in pressure and temperature cause H2CO3(aq) to dissociate according to reaction (2), forming hydronium and bicarbonate (HCO3−(aq)) ions.
1. CO2(aq) + H2O(l) ↔ H2CO3(aq)
2. H2CO3(aq) ↔ H+(aq) + HCO3−(aq)
3. 2 HCO3−(aq) ↔ CO32−(aq) + H2O(l) + CO2(g)
4. Ca2+(aq) + CO32−(aq) ↔ CaCO3(s)
Reactions (3) and (4) describe the equilibrium between bicarbonate ions (HCO3−), which are highly soluble in water, and calcium carbonate (CaCO3) salt. According to Le Chatelier's principle, drilling operations and extraction of the oil from the well bore decrease the pressure of the formation, and the equilibrium shifts to the right in reaction (3) to increase the production of CO2 to offset the change in pressure. After years of oil production, wells may experience significant pressure drops, resulting in large CaCO3 deposits as the equilibrium shifts to offset the pressure changes.
Sulfate scales
Sulfates of Group (II) metal ions (M2+) generally decrease in solubility down the group. The most difficult scales to remove are those of barium sulfate, because its very low solubility produces very hard scale deposits. The general reaction is:
5. M2+(aq) + SO42−(aq) → MSO4(s)
Sulfate scale usually forms when formation water and injected seawater mix. The relationship between the mixed ion concentrations and the degree of supersaturation is crucial in estimating the amount of sulfate salts that will precipitate in the system. Seawater has a high concentration of sulfate ions, while the formation water contains many Ca2+ and other M2+ ions. Severe problems with sulfate scale are common in reservoirs where seawater has been injected to enhance oil recovery.
Due to its relatively high solubility in water, calcium sulfate is the easiest sulfate scale to remove chemically, compared to strontium and barium sulfate. Scale crystals are initially dispersed in production systems until stable crystals of insoluble sulfates accumulate and scale growth occurs at nucleation centers. Uneven pipeline surfaces and production equipment such as pumps and valves cause rapid scale growth to levels that can block pipelines.
The scaling-tendency of an oil-well can be predicted based on the prevailing conditions such as pH, temperature, pressure, ionic strength and the mole fraction of CO2 in the vapor and aqueous phases. For instance the saturation index for CaCO3 scale is calculated using the formula;
Fs = [Ca2+][CO32−] / Ksp
where Fs is the scale saturation ratio, defined as the ratio of the activity product to the solubility product of the salt. Activity is defined as the product of the activity coefficients and the concentrations of Ca2+ and CO32− ions. The ionic strength is a measure of the concentration of the dissociated ions dissolved in water, also called the "total dissolved solids" (TDS).
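The saturation-ratio formula above can be evaluated directly; a sketch that takes activity coefficients as 1 (so concentrations stand in for activities) and uses an approximate handbook Ksp for calcite at 25 °C, both simplifying assumptions:

```python
# Saturation-ratio sketch for the CaCO3 case above. Activity coefficients
# are taken as 1, so molar concentrations stand in for activities; the
# Ksp is a typical handbook value for calcite at 25 degrees C.
KSP_CALCITE = 3.3e-9  # (mol/L)^2, approximate

def saturation_ratio(ca_molar, co3_molar, ksp=KSP_CALCITE):
    """Fs = [Ca2+][CO3 2-] / Ksp; Fs > 1 signals a scaling tendency."""
    return (ca_molar * co3_molar) / ksp

# Illustrative brine: 1 mM Ca2+ and 10 uM CO3 2- gives
# Fs = (1e-3 * 1e-5) / 3.3e-9, about 3, i.e. supersaturated.
fs_example = saturation_ratio(1e-3, 1e-5)
```

In practice the activity coefficients (via the ionic strength/TDS discussed above) and the temperature- and pressure-dependence of Ksp must also be accounted for.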
Scale remediation
Different oilfield scale remediation techniques are known, but the majority are based on three basic themes:
Sulfate ion sequestering from sea injection waters
Chemical or mechanical scale removal/dissolution
Application of Scale Inhibitors (SIs) for scale prevention
The first two methods may be used for short-term treatment and are effective for mild scaling conditions; however, continuous injection or chemical scale-squeeze treatment with SIs has proven over the years to be the most efficient and cost-effective preventative technique.
Scale inhibitors
Scale inhibitors are specialty chemicals that are added to oil production systems to delay, reduce and/or prevent scale deposition. Acrylic acid polymers, maleic acid polymers and phosphonates have been used extensively for scale treatment in water systems due to their excellent solubility, thermal stability and dosage efficiency. In the water treatment industry, the major classes of SIs have inorganic phosphate, organophosphorous and organic polymer backbones; common examples are PBTC (phosphonobutane-1,2,4-tricarboxylic acid), ATMP (amino-trimethylene phosphonic acid), HEDP (1-hydroxyethylidene-1,1-diphosphonic acid), polyacrylic acid (PAA), phosphinopolyacrylates (such as PPCA), polymaleic acids (PMA), maleic acid terpolymers (MAT), sulfonic acid copolymers such as SPOCA (sulfonated phosphonocarboxylic acid), and polyvinyl sulfonates. Two common oilfield mineral SIs are poly-phosphono carboxylic acid (PPCA) and diethylenetriamine-penta(methylene phosphonic acid) (DTPMP).
Inhibition of calcium carbonate scale deposition and crystal studies of its polymorphs have been conducted. Different SIs are designed for specific scaling conditions and biodegradability properties. The inhibitor molecules essentially bind ions in aqueous phase of production fluids that could potentially precipitate as scales. For instance, to bind positively charged ions in the water, anions must be present in the inhibitor molecular backbone structure and vice versa. Group (II) metal ions are commonly sequestered by SIs with the following functionalities;
- Phosphonate ions (-PO3H−)
- Phosphate ions (-OPO3H−)
- Phosphinate ions (-PO2H−)
- Sulphonate ions (-SO3−)
- Carboxylate ions (-CO2−)
An SI with a combination of two or more of these functional groups is more efficient in managing scale problems. Usually the sodium salts of the carboxylic derivatives are synthesized as the anionic derivatives and are known to be the most effective due to their high solubilities. Interactions of these functional groups tend to block the crystal growth sites using dissociated or un-dissociated groups. The dissociation state is determined by the pH of the system, hence knowledge of the pKa values of the chemicals is important for different pH environments. Again, the inhibition efficiency of the SI depends on its compatibility with other production chemicals such as corrosion inhibitors.
Environmental considerations
Generally, the environmental impacts of SIs are complicated further by combination of other chemicals applied through exploratory, drilling, well-completion and start-up operations. Produced fluids, and other wastes from oil and gas operations with high content of different toxic compounds are hazardous and harmful to human health, water supplies, marine and freshwater organisms. For instance trails of increased turbidity resulting from oil and gas exploratory activities on the eastern shelf of Sakhalin in Russia have been reported with consequential adverse effects on salmon, cod and littoral amphipods.
Efforts to develop more environmentally friendly SIs have been made since the late 1990s, and an increasing number of such SIs are becoming commercially available. Growing environmental awareness over the past 15 years has resulted in the production and application of more environmentally friendly SIs, otherwise called 'Green Scale Inhibitors' (GSIs). These GSIs are designed to have reduced bio-accumulation and high biodegradability, and therefore reduce pollution of the waters around oil production systems. Phosphate ester SIs, commonly employed for treating calcium carbonate scales, are known to be environmentally friendly but have poor inhibition efficiency. Release of SIs containing nitrogen and phosphorus distorts the natural equilibrium of the immediate water body, with adverse effects on aquatic life.
Another alternative, polysaccharide SIs, meet the requirements for environmentally friendly materials; they contain no phosphorus or nitrogen and are noted for their non-toxic, renewable, and biodegradable properties. Carboxymethyl inulin (CMI), which is isolated from the roots of Inula helenium, has been used in oil exploration, and its very low toxicity and crystal-growth inhibition power for treating calcite scales have been reported. Poorly biodegradable SIs such as the amino-phosphonate and acrylate-based SIs are being phased out by stringent environmental regulations, as demonstrated in the North Sea by Norway's zero-discharge policy.
Another modern alternative to SI use for environmental protection is the development of materials or coatings that intrinsically resist the formation of inorganic scale in the first place. A variety of strategies can be used to accomplish this aim, including engineering of wettability properties and engineering of epitaxial properties to prevent mineral growth or to make minerals easier to remove following growth. Recent work has demonstrated that some classes of hydrophobic and superhydrophobic surfaces can cause self-ejection of scale grown during evaporation.
References
Petroleum industry
Chemistry
Engineering | Oilfield scale inhibition | Chemistry | 2,721 |
30,700,069 | https://en.wikipedia.org/wiki/Otto%20Redlich | Otto Redlich (November 4, 1896 – August 14, 1978) was an Austrian physical chemist who is best known for his development of equations of state like the Redlich-Kwong equation. Redlich also made numerous other contributions to science. He won the Haitinger Prize of the Austrian Academy of Sciences in 1932.
Biography
Redlich was born 1896 in Vienna, Austria. He went to school in the Döbling district of Vienna. After finishing school in 1915 he joined the Austrian Hungarian Army and served as an artillery officer, mainly at the Italian front, in World War I. He was wounded and became a prisoner of war in August 1918. He returned to Vienna after the war in 1919. He studied chemistry and received his doctorate in 1922 for work on the equilibrium of nitric acid, nitrous and nitric oxide. Redlich worked for one year in industry before joining Emil Abel at the University of Vienna. He became a lecturer in 1929 and a professor in 1937. During this time he developed the Teller-Redlich isotopic product rule. After the Anschluss in March 1938, Austria became a part of Nazi Germany, and with the implementation of the Nuremberg Laws all government employed Jews lost their jobs, including academics. Like many other scientists, Redlich tried to leave Nazi-governed Austria.
With the help of the Emergency Committee in Aid of Displaced Foreign Scholars, Redlich was able to emigrate to the United States in December 1938. He gave lectures at several universities and met Gilbert N. Lewis and Linus Pauling. Harold Urey helped him to obtain a position in Washington State College. In 1945 he left the college to work in industry, at Shell Development Co. in Emeryville, California. He published his paper on the improvement of the ideal gas equation in 1949, today known as the Redlich–Kwong equation of state.
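The Redlich–Kwong equation of state mentioned here computes pressure from temperature and molar volume via two constants fixed by the substance's critical point; a minimal sketch (the CO2 critical constants are standard values; the state point is chosen purely for illustration):

```python
from math import sqrt

R = 8.314  # J/(mol*K)

def rk_pressure(T, Vm, Tc, Pc):
    """Redlich-Kwong equation of state:
    P = R*T/(Vm - b) - a / (sqrt(T) * Vm * (Vm + b)),
    with a = 0.42748 R^2 Tc^2.5 / Pc and b = 0.08664 R Tc / Pc.
    """
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    return R * T / (Vm - b) - a / (sqrt(T) * Vm * (Vm + b))

# CO2 (Tc = 304.1 K, Pc = 7.38 MPa) at 300 K and 24 L/mol: the RK
# pressure falls slightly below the ideal-gas value, as expected when
# intermolecular attraction dominates at low density.
p_rk = rk_pressure(300.0, 0.024, 304.1, 7.38e6)
p_ideal = R * 300.0 / 0.024
```

At large molar volumes the correction terms vanish and the equation reduces to the ideal gas law it was designed to improve upon.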
In 1962 Redlich retired from Shell and received a position at University of California at Berkeley. He died in California in 1978.
Bibliography
References
Jewish emigrants from Austria after the Anschluss to the United States
Washington State University faculty
Academic staff of the University of Vienna
University of California, Berkeley faculty
Jewish American scientists
Scientists from Vienna
Thermodynamicists
Austrian physical chemists
American physical chemists
1978 deaths
1896 births
Austrian chemical engineers
American chemical engineers
20th-century American engineers
20th-century American Jews
20th-century American chemists | Otto Redlich | Physics,Chemistry | 482 |
13,222,629 | https://en.wikipedia.org/wiki/ADP-ribosylation | ADP-ribosylation is the addition of one or more ADP-ribose moieties to a protein. It is a reversible post-translational modification that is involved in many cellular processes, including cell signaling, DNA repair, gene regulation and apoptosis.
Improper ADP-ribosylation has been implicated in some forms of cancer. It is also the basis for the toxicity of bacterial compounds such as cholera toxin, diphtheria toxin, and others.
History
The first suggestion of ADP-ribosylation surfaced during the early 1960s. At this time, Pierre Chambon and coworkers observed the incorporation of ATP into hen liver nuclei extract. After extensive studies on the acid insoluble fraction, several different research laboratories were able to identify ADP-ribose, derived from NAD+, as the incorporated group. Several years later, the enzymes responsible for this incorporation were identified and given the name poly(ADP-ribose)polymerase. Originally, this group was thought to be a linear sequence of ADP-ribose units covalently bonded through a ribose glycosidic bond. It was later reported that branching can occur every 20 to 30 ADP residues.
The first appearance of mono(ADP-ribosyl)ation occurred a year later during a study of toxins: the diphtheria toxin of Corynebacterium diphtheriae was shown to be dependent on NAD+ in order for it to be completely effective, leading to the discovery of enzymatic conjugation of a single ADP-ribose group by mono(ADP-ribosyl)transferase.
It was initially thought that ADP-ribosylation was a post translational modification involved solely in gene regulation. However, as more enzymes with the ability to ADP-ribosylate proteins were discovered, the multifunctional nature of ADP-ribosylation became apparent. The first mammalian enzyme with poly(ADP-ribose)transferase activity was discovered during the late 1980s. For the next 15 years, it was thought to be the only enzyme capable of adding a chain of ADP-ribose in mammalian cells. During the late 1980s, ADP-ribosyl cyclases, which catalyze the addition of cyclic-ADP-ribose groups to proteins, were discovered. Finally, sirtuins, a family of enzymes that also possess NAD+-dependent deacylation activity, were discovered to also possess mono(ADP-ribosyl)transferase activity.
Catalytic mechanism
The source of ADP-ribose for most enzymes that perform this modification is the redox cofactor NAD+. In this transfer reaction, the N-glycosidic bond of NAD+ that bridges the ADP-ribose molecule and the nicotinamide group is cleaved, followed by nucleophilic attack by the target amino acid side chain. (ADP-ribosyl)transferases can perform two types of modifications: mono(ADP-ribosyl)ation and poly(ADP-ribosyl)ation.
Mono(ADP-ribosyl)ation
Mono(ADP-ribosyl)transferases commonly catalyze the addition of ADP-ribose to arginine side chains using a highly conserved R-S-EXE motif of the enzyme. The reaction proceeds by breaking the bond between nicotinamide and ribose to form an oxonium ion. Next, the arginine side chain of the target protein acts as a nucleophile, attacking the electrophilic carbon adjacent to the oxonium ion. In order for this step to occur, the arginine nucleophile is deprotonated by a glutamate residue on the catalyzing enzyme. Another conserved glutamate residue forms a hydrogen bond with one of the hydroxyl groups on the ribose chain to further facilitate this nucleophilic attack. As a result of the cleavage reaction, nicotinamide is released. The modification can be reversed by (ADP-ribosyl)hydrolases, which cleave the N-glycosidic bond between arginine and ribose to release ADP-ribose and unmodified protein; NAD+ is not restored by the reverse reaction.
Poly(ADP-ribosyl)ation
Poly(ADP-ribose)polymerases (PARPs) are found mostly in eukaryotes and catalyze the transfer of multiple ADP-ribose molecules to target proteins. As with mono(ADP-ribosyl)ation, the source of ADP-ribose is NAD+. PARPs use a catalytic triad of His-Tyr-Glu to facilitate binding of NAD+ and positioning of the end of the existing poly(ADP-ribose) chain on the target protein; the Glu facilitates catalysis and formation of a (1''→2') O-glycosidic linkage between two ribose molecules.
There are several other enzymes that recognize poly(ADP-ribose) chains, hydrolyse them or form branches; over 800 proteins have been annotated to contain the loosely defined poly(ADP-ribose) binding motif; therefore, in addition to this modification altering target protein conformation and structure, it may also be used as a tag to recruit other proteins or for regulation of the target protein.
Amino acid specificity
Many different amino acid side chains have been described as ADP-ribose acceptors. From a chemical perspective, this modification represents protein glycosylation: the transfer of ADP-ribose occurs onto amino acid side chains with a nucleophilic oxygen, nitrogen, or sulfur, resulting in N-, O-, or S-glycosidic linkage to the ribose of the ADP-ribose. Originally, acidic amino acids (glutamate and aspartate) were described as the main sites of ADP-ribosylation. However, many other ADP-ribose acceptor sites such as serine, arginine, cysteine, lysine, diphthamide, phosphoserine, and asparagine have been identified in subsequent works.
Function
Apoptosis
During DNA damage or cellular stress, PARPs are activated, leading to an increase in the amount of poly(ADP-ribose) and a decrease in the amount of NAD+. For over a decade it was thought that PARP1 was the only poly(ADP-ribose)polymerase in mammalian cells, so this enzyme has been the most studied. Caspases are a family of cysteine proteases that are known to play an essential role in programmed cell death. These proteases cleave PARP-1 into two fragments, leaving it completely inactive and limiting poly(ADP-ribose) production. One of the fragments migrates from the nucleus to the cytoplasm and is thought to become a target of autoimmunity.
During caspase-independent apoptosis, also called parthanatos, poly(ADP-ribose) accumulation can occur due to activation of PARPs or inactivation of poly(ADP-ribose)glycohydrolase, an enzyme that hydrolyses poly(ADP-ribose) to produce free ADP-ribose. Studies have shown poly(ADP-ribose) drives the translocation of the apoptosis inducing factor protein to the nucleus where it will mediate DNA fragmentation. It has been suggested that if a failure of caspase activation under stress conditions were to occur, necroptosis would take place. Overactivation of PARPs has led to a necrotic cell death regulated by the tumor necrosis factor protein. Though the mechanism is not yet understood, PARP inhibitors have been shown to affect necroptosis.
Gene regulation
ADP-ribosylation can affect gene expression at nearly every level of regulation, including chromatin organization, transcription factor recruitment and binding, and mRNA processing.
The organization of nucleosomes is key to regulation of gene expression: the spacing and organization of nucleosomes changes what regions of DNA are available for transcription machinery to bind and transcribe DNA. PARP1, a poly-ADP ribose polymerase, has been shown to affect chromatin structure and promote changes in the organization of nucleosomes through modification of histones.
PARPs have been shown to affect transcription factor structure and to recruit many transcription factors into complexes at DNA, eliciting transcription. Mono(ADP-ribosyl)transferases also affect transcription factor binding at promoters; for example, PARP14, a mono(ADP-ribosyl)transferase, has been shown to affect STAT transcription factor binding.
Other (ADP-ribosyl)transferases have been shown to modify proteins that bind mRNA, which can cause silencing of that gene transcript.
DNA repair
Poly(ADP-ribose) polymerases (PARPs) function in the repair of both single-strand and double-strand DNA breaks. In single-strand break repair (base excision repair), the PARP can facilitate either removal of an oxidized sugar or strand cleavage. PARP1 binds single-strand breaks and pulls nearby base excision repair intermediates close. These intermediates, which include XRCC1 and APLF, can be recruited directly or through the PBZ domain of APLF. This leads to the synthesis of poly(ADP-ribose). The PBZ domain is present in many proteins involved in DNA repair; it allows binding to the PARP, and the resulting ADP-ribosylation recruits repair factors to interact at the break site. PARP2 is a secondary responder to DNA damage and provides functional redundancy in DNA repair.
There are many mechanisms for the repair of damaged double-stranded DNA. PARP1 may function as a synapsis factor in alternative non-homologous end joining. Additionally, it has been proposed that PARP1 is required to slow replication forks following DNA damage and that it promotes homologous recombination at replication forks that may be dysfunctional. It is possible that PARP1 and PARP3 work together in the repair of double-stranded DNA, and PARP3 has been shown to be critical for double-strand break resolution. There are two hypotheses for how PARP1 and PARP3 coincide. The first is that the two (ADP-ribosyl)transferases compensate for each other's loss of activity: if PARP3 is lost, single-strand breaks result, and PARP1 is recruited. The second is that the two enzymes work together: PARP3 catalyzes mono(ADP-ribosyl)ation and short poly(ADP-ribosyl)ation and serves to activate PARP1.
The PARPs have many protein targets at the site of DNA damage. The Ku protein and DNA-PKcs are both double-strand break repair components with unknown sites of ADP-ribosylation. Histones are another protein target of the PARPs: all core histones and the linker histone H1 are ADP-ribosylated following DNA damage. The function of these modifications is still unknown, but it has been proposed that ADP-ribosylation modulates higher-order chromatin structure to make damage sites more accessible to repair factors.
Protein degradation
The ubiquitin-proteasome system (UPS) figures prominently in protein degradation. The 26S proteasome consists of a catalytic subunit (the 20S core particle), and a regulatory subunit (the 19S cap). Poly-ubiquitin chains tag proteins for degradation by the proteasome, which causes hydrolysis of tagged proteins into smaller peptides.
Physiologically, PI31 binds the 20S catalytic core of the 26S proteasome, decreasing proteasome activity. The (ADP-ribosyl)transferase tankyrase (TNKS) ADP-ribosylates PI31, which in turn increases proteasome activity. Inhibition of TNKS also reduces 26S proteasome assembly. ADP-ribosylation therefore promotes 26S proteasome activity in both Drosophila and human cells.
Enzyme regulation
The activity of some enzymes is regulated by ADP-ribosylation. For instance, the activity of Rhodospirillum rubrum dinitrogenase reductase is turned off by ADP-ribosylation of an arginine residue and reactivated by removal of the ADP-ribosyl group.
Clinical significance
Cancer
PARP1 is involved in base excision repair (BER), single- and double-strand break repair, and chromosomal stability. It is also involved in transcriptional regulation through its facilitation of protein–protein interactions. PARP1 uses NAD+ to perform its function in apoptosis. If a PARP becomes overactive, the cell's levels of the NAD+ cofactor and of ATP decrease, and the cell undergoes necrosis. This is important in carcinogenesis because it could lead to the selection of PARP1-deficient (but not PARP1-depleted) cells, owing to their survival advantage during cancer growth.
Susceptibility to carcinogenesis under PARP1 deficiency depends significantly on the type of DNA damage incurred. There are many implications that various PARPs are involved in preventing carcinogenesis. As stated previously, PARP1 and PARP2 are involved in BER and chromosomal stability. PARP3 is involved in centrosome regulation. Tankyrase is another (ADP-ribosyl)polymerase that is involved in telomere length regulation.
PARP1 inhibition has also been widely studied in anticancer therapeutics. The mechanism of action of a PARP1 inhibitor is to enhance the damage done by chemotherapy to cancerous DNA by blocking the reparative function of PARP1 in BRCA1/2-deficient individuals.
PARP14 is another ADP-ribosylating enzyme that has been well studied as a cancer therapy target; it interacts with signal transducer and activator of transcription 6 (STAT6) and has been shown to be associated with the aggressiveness of B-cell lymphomas.
Bacterial toxins
Bacterial ADP-ribosylating exotoxins (bAREs) covalently transfer an ADP-ribose moiety of NAD+ to target proteins of infected eukaryotes, yielding nicotinamide and a free hydrogen ion. bAREs are produced as enzyme precursors consisting of "A" and "B" domains: the "A" domain is responsible for ADP-ribosylation activity, and the "B" domain for translocation of the enzyme across the cell membrane. These domains can exist in three forms: first, as single polypeptide chains with A and B domains covalently linked; second, in multi-protein complexes with A and B domains bound by non-covalent interactions; and third, in multi-protein complexes with A and B domains not directly interacting prior to processing.
Upon activation, bAREs ADP-ribosylate any number of eukaryotic proteins; this mechanism is crucial to the instigation of the disease states associated with ADP-ribosylation. GTP-binding proteins, in particular, are well established in bARE pathophysiology. For example, cholera toxin and heat-labile enterotoxin target the α-subunit of Gs of heterotrimeric GTP-binding proteins. Because the ADP-ribosylated α-subunit is locked in an "active", GTP-bound state, intracellular cyclic AMP rises, stimulating the release of fluid and ions from intestinal epithelial cells. Furthermore, C. botulinum C3 ADP-ribosylates the GTP-binding proteins Rho and Ras, and pertussis toxin ADP-ribosylates Gi, Go, and Gt. Diphtheria toxin ADP-ribosylates ribosomal elongation factor EF-2, which attenuates protein synthesis.
There are a variety of bacteria which employ bAREs in infection: CARDS toxin of Mycoplasma pneumoniae, cholera toxin of Vibrio cholerae; heat-labile enterotoxin of E. coli; exotoxin A of Pseudomonas aeruginosa; pertussis toxin of B. pertussis; C3 toxin of C. botulinum; and diphtheria toxin of Corynebacterium diphtheriae.
See also
Histone code
Cell signaling
PARP-1
Cholera toxin
NAD+ ADP-ribosyltransferase
Pertussis toxin
Post-translational modification
References
Further reading
Cell biology
Signal transduction
Post-translational modification | ADP-ribosylation | Chemistry,Biology | 3,628 |
11,807,208 | https://en.wikipedia.org/wiki/Millennium%20Tower%20%28Tokyo%29 | Millennium Tower was a proposed 180-floor skyscraper envisioned by architect Sir Norman Foster in 1989. He intended for it to be built in Tokyo Bay, 2 km offshore from Tokyo, Japan.
Design
The design calls for a cone-shaped pyramid 840 meters high, with a base about as big as the Tokyo Olympic Stadium and glass sides for natural lighting. It is intended to be constructed on water, with boat and bridge access. Since the tower was planned for an area with frequent earthquakes and hurricane-strength winds, the shape is aerodynamic to reduce wind stress, and helical bands are wrapped around the tower for structural support. Steel tanks at the top of the tower are filled with water, and can be rotated as a counterweight against wind.
The tower is a self-contained arcology containing one million square meters of commercial development and housing for 60,000 people, split into sections. Offices and light or clean industries are in the lower levels, apartments above, and the top section houses communications systems and wind or solar generators. Restaurants and viewing platforms would be interspersed through all sections.
Horizontal and vertical high-speed metro lines provide long-distance travel, with cars designed to carry 160 people stopping at intermediate five-story 'sky centers' on every thirtieth floor. Each 'sky center' is decorated with gardens and mezzanines and provides a particular service, such as hotels or restaurants. Short-distance travel is by elevators or escalators.
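As a rough illustration of the layout described above (the arithmetic below is illustrative only and not part of Foster's published design), the sky-center floors of a 180-floor tower with one center every thirtieth floor work out as:

```python
# Illustrative arithmetic only: floor count and spacing are taken from the
# description above; the actual floor numbering of the design is unknown.
FLOORS = 180   # total floors in the tower
SPACING = 30   # one five-story 'sky center' every thirtieth floor

sky_center_floors = list(range(SPACING, FLOORS + 1, SPACING))
print(sky_center_floors)       # [30, 60, 90, 120, 150, 180]
print(len(sky_center_floors))  # 6
```

That is, six sky centers punctuate the tower, each serving the thirty floors below it.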
History
The tower design was commissioned by the Obayashi Corporation as an arcology, intended to address land shortage and overpopulation in Tokyo. The design firm's web site states that "the project demonstrates that high-density or high-rise living does not mean overcrowding or hardship; it can lead to an improved quality of life, where housing, work and leisure facilities are all close at hand".
References
(Foster & Partners)
"Millenium Tower", Skyscraperpage
Skyscrapers in Tokyo
Proposed skyscrapers in Japan
Unbuilt buildings and structures in Japan
Unbuilt skyscrapers
Proposed arcologies | Millennium Tower (Tokyo) | Technology | 422 |
52,193,820 | https://en.wikipedia.org/wiki/PSR%20J1841%E2%88%920500 | PSR J1841−0500 is a pulsar located 22,800 light-years from the Sun in the Scutum–Centaurus Arm of the Milky Way. It was discovered in December 2008 by Fernando Camilo, who was using the Parkes Observatory when he discovered the object. At the time of discovery, it was spinning once every 0.9 seconds. However, in 2009, it stopped emitting pulses completely.
Most pulsars that stop emitting pulses do so for only a few minutes, but PSR J1841−0500 remained silent for 580 days before it began pulsing again in August 2011. Only one other pulsar is known to stop pulsing for more than a few minutes: PSR B1931+24 turns on for about a week and then stops emitting pulses for about a month, in a repeating cycle.
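The on/off behaviour quoted above can be put in rough numbers. This is a back-of-the-envelope sketch using only the durations given in the text; real pulsar timing is far more irregular:

```python
# Back-of-the-envelope figures from the durations quoted in the text.
def duty_cycle(on_days: float, off_days: float) -> float:
    """Fraction of one on/off cycle during which the pulsar is emitting."""
    return on_days / (on_days + off_days)

# PSR B1931+24: pulses for about a week, then falls silent for about a month.
print(round(duty_cycle(7, 30), 2))  # 0.19

# PSR J1841-0500: rotations that went unobserved during the 580-day quiet
# spell, assuming its 0.9-second spin period held throughout.
missed_rotations = 580 * 86_400 / 0.9
print(f"{missed_rotations:.2e}")  # 5.57e+07
```

So PSR B1931+24 emits for roughly a fifth of each cycle, while PSR J1841−0500 completed tens of millions of unseen rotations during its silence.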
References
Pulsars
Scutum–Centaurus Arm
Scutum (constellation) | PSR J1841−0500 | Astronomy | 196 |
28,187,419 | https://en.wikipedia.org/wiki/Wetland%20indicator%20status | Wetland indicator status denotes the probability of individual species of vascular plants occurring in freshwater, brackish and saltwater wetlands in the United States. The wetland status of 7,000 plants is determined upon information contained in a list compiled in the National Wetland Inventory undertaken by the U.S. Fish and Wildlife Service and developed in cooperation with a federal inter-agency review panel (Reed, 1988). The National List was compiled in 1988 with subsequent revisions in 1996 and 1998.
The wetland indicator status of a species is based on that species' occurrence in wetlands in 13 separate regions within the United States. In some instances the specified regions contain all or part of different floristic provinces and the tension zones that occur between them.
While many Obligate Wetland (OBL) species do occur in permanently or semi-permanently flooded wetlands, there are also a number of obligates that occur in temporary or seasonally flooded wetlands. A few species are restricted entirely to these transient-type wetland environments.
Plant species are general indicators of various degrees of environmental factors; they are however not precise. The presence of a plant species at a specific site depends on a variety of climatic, edaphic and biotic factors, and the effect of individual factors such as degree of substrate saturation and depth and duration of standing water is impossible to isolate.
A plant's indicator status applies to the species as a whole; however, variation exists within species. "Ecotypes" are individual plants that have adapted to specific environments, such as a microhabitat, and are not indicative of the species as a whole. The morphological differences between these ecotypes and the typical form of the species may or may not be easily discerned.
Indicator categories
Obligate wetland (OBL) - Almost always occurs in wetlands under natural conditions (estimated probability > 99%).
Facultative wetland (FACW) - Usually occurs in wetlands (estimated probability 67% – 99%), but occasionally found in non-wetlands (estimated probability 1% – 33%).
Facultative (FAC) - Equally likely to occur in wetlands and non-wetlands (estimated probability 34% – 66%).
Facultative upland (FACU) - Usually occurs in non-wetlands (estimated probability 67% – 99%), but occasionally found in wetlands (estimated probability 1% – 33%).
Obligate upland (UPL) - Almost always occurs in non-wetlands under natural conditions (estimated probability > 99%).
A positive (+) or negative (−) sign is used for the facultative categories. The (+) sign indicates a frequency towards the wetter end of the category (more frequently found in wetlands) and the (−) sign indicates a frequency towards the drier end of the category (less frequently found in wetlands).
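The categories above amount to a small lookup table. In the sketch below, the probability ranges follow the definitions given in the list, while the code structure and names are illustrative only:

```python
# Estimated probability (%) that a species occurs in wetlands, per category.
WETLAND_STATUS = {
    "OBL":  ("Obligate wetland",    (99, 100)),
    "FACW": ("Facultative wetland", (67, 99)),
    "FAC":  ("Facultative",         (34, 66)),
    "FACU": ("Facultative upland",  (1, 33)),
    "UPL":  ("Obligate upland",     (0, 1)),
}

def indicator(code: str):
    """Look up a status code; a trailing + or - (e.g. 'FACW-') marks the
    wetter or drier end of the same category's range."""
    name, prob_range = WETLAND_STATUS[code.rstrip("+-")]
    return name, prob_range

print(indicator("FACW+"))  # ('Facultative wetland', (67, 99))
```

Note that a signed code such as "FACW+" still falls within the FACW range; the sign only indicates where in that range the species tends to occur.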
Wetland regions
Corps wetland regions are defined as follows:
AGCP = Atlantic and Gulf Coastal Plain
AW = Arid West
CB = Caribbean
EMP = Eastern Mountains and Piedmont
GP = Great Plains
HI = Hawaii
MW = Midwest
NCNE = Northcentral and Northeast
WMVC = Western Mountains, Valleys, and Coast
AK = Alaska
External links
1988 National List of Vascular Plant Species that Occur in Wetlands
2016 USACE National Wetland Plant List
References
Wetlands | Wetland indicator status | Environmental_science | 675 |
1,181,748 | https://en.wikipedia.org/wiki/Dermatophyte | Dermatophyte (from Greek derma "skin" (GEN dermatos) and phyton "plant") is a common label for a group of fungus of Arthrodermataceae that commonly causes skin disease in animals and humans. Traditionally, these anamorphic (asexual or imperfect fungi) mold genera are: Microsporum, Epidermophyton and Trichophyton. There are about 40 species in these three genera. Species capable of reproducing sexually belong in the teleomorphic genus Arthroderma, of the Ascomycota (see Teleomorph, anamorph and holomorph for more information on this type of fungal life cycle). As of 2019 a total of nine genera are identified and new phylogenetic taxonomy has been proposed.
Dermatophytes cause infections of the skin, hair, and nails, obtaining nutrients from keratinized material. The organisms colonize the keratin tissues causing inflammation as the host responds to metabolic byproducts. Colonies of dermatophytes are usually restricted to the nonliving cornified layer of the epidermis because of their inability to penetrate the viable tissue of an immunocompetent host. Invasion does elicit a host response ranging from mild to severe. Acid proteinases (proteases), elastase, keratinases, and other proteinases reportedly act as virulence factors. Additionally, the products of these degradative enzymes serve as nutrients for the fungi. The development of cell-mediated immunity correlated with delayed hypersensitivity and an inflammatory response is associated with clinical cure, whereas the lack of or defective cell-mediated immunity predisposes the host to chronic or recurrent dermatophyte infection.
Some of these skin infections are known as ringworm or tinea (which is the Latin word for "worm"), though infections are not caused by worms. It is thought that the word tinea (worm) is used to describe the snake-like appearance of the dermatophyte on the skin. Toenail and fingernail infections are referred to as onychomycosis. Dermatophytes usually do not invade living tissues, but colonize the outer layer of the skin. Occasionally the organisms do invade subcutaneous tissues, resulting in kerion development.
Types of infections
Infections by dermatophytes, which affect the superficial skin, hair, and nails, are named using "tinea" followed by the Latin term for the affected area. Manifestations of infection tend to involve erythema, induration, itching, and scaling. Dermatophytoses tend to occur in moist areas and skin folds. The degree of infection depends on the specific site, the fungal species, and the host inflammatory response.
Although symptoms can be barely noticeable in some cases, dermatophytoses can produce "chronic progressive eruptions that last months or years, causing considerable discomfort and disfiguration." Dermatophytoses are generally painless and are not life-threatening.
Tinea pedis or athlete's foot
Contrary to the name, tinea pedis does not solely affect athletes. Tinea pedis affects men more than women, and is uncommon in children. Even in developed countries, tinea pedis is one of the most common superficial skin infections by fungi.
The infection can be seen between toes (interdigital pattern) and may spread to the sole of the foot in a "moccasin" pattern. In some cases, the infection may progress into a "vesiculobullous pattern" in which small, fluid-filled blisters are present. The lesions may be accompanied by peeling, maceration (peeling due to moisture), and itching.
Later stages of tinea pedis might include hyperkeratosis (thickened skin) of the soles, as well as bacterial infection (by streptococcus and staphylococcus) or cellulitis due to fissures developing between the toes.
Another implication of tinea pedis, especially for older adults or those with vascular disease, diabetes mellitus, or nail trauma, is onychomycosis of the toenails. Nails become thick, discolored, and brittle, and often onycholysis (painless separation of nail from nail bed) occurs.
Tinea cruris or jock itch
Tinea cruris occurs more commonly in men than in women and may be exacerbated by sweat and tight clothing (hence the term "jock itch"). Frequently the feet are also involved: the theory is that the feet become infected first from contact with the ground, and the fungal spores are then carried to the groin by scratching or when putting on underclothing or pants. The infection frequently extends from the groin to the perianal skin and gluteal cleft.
The rash appears red, scaly, and pustular and is often accompanied by itching. Tinea cruris should be differentiated from similar dermal conditions such as intertriginous candidiasis, erythrasma, and psoriasis.
Tinea corporis or ringworm of the body
Lesions appear as round, red, scaly patches with well-defined, raised edges, often with central clearing, and are very itchy (usually on the trunk and limbs, but also on other parts of the body). The lesions can be confused with contact dermatitis, eczema, and psoriasis.
Tinea faciei or facial ringworm
Round or ring-shaped red patches may occur on non-bearded areas of the face. This type of dermatophytosis can have a subtle appearance, sometimes known as "tinea incognito". It can be misdiagnosed as other conditions such as psoriasis or discoid lupus, and it may be aggravated by treatment with immunosuppressive topical steroid creams.
Tinea capitis or scalp ("blackdot") ringworm
Children aged 3–7 are most commonly infected with tinea capitis. Trichophyton tonsurans is the most common cause of outbreaks of tinea capitis in children and is the main cause of endothrix (inside-hair) infections. Trichophyton schoenleinii is the classic cause of favus, a form of tinea capitis in which crusts are seen on the scalp.
Infected hair shafts break off at the base, leaving a black dot just under the surface of the skin, and alopecia can result. Scraping these residual black dots yields the best diagnostic material for microscopic examination: numerous green arthrospores can be seen inside the stubs of broken hair shafts at 400× magnification. Tinea capitis cannot be treated topically and must be treated systemically with antifungals.
Tinea manuum or ringworm of the hands
In most cases of tinea manuum, only one hand is involved. Frequently both feet are involved concurrently, thus the saying "one hand, two feet".
Onychomycosis, tinea unguium, or ringworm of the nail
See Onychomycosis
Tinea incognito
Ringworm infections modified by corticosteroids, systemic or topical, prescribed for some pre-existing pathology or given mistakenly for the treatment of misdiagnosed tinea.
Pathogenesis
In order for dermatophytoses to occur, the fungus must directly contact the skin. Likelihood of infection is increased if the skin integrity is compromised, as in minor breaks.
The fungi use various proteinases to establish infection in the keratinized stratum corneum. Some studies also suggest that proteins containing LysM domains coat the fungal cell wall to help the fungi evade the host immune response.
The course of infection varies between each case, and may be determined by several factors including: "the anatomic location, the degree of skin moisture, the dynamics of skin growth and desquamation, the speed and extent of the inflammatory response, and the infecting species."
The ring shape of dermatophyte lesions results from outward growth of the fungi. The fungi spread in a centrifugal pattern in the stratum corneum, which is the outermost keratinized layer of the skin.
For nail infections, the growth initiates through the lateral or superficial nail plates, then continues throughout the nail. For hair infections, fungal invasion begins at the hair shaft.
Symptoms manifest from inflammatory reactions due to the fungal antigens. The rapid turnover of desquamation, or skin peeling, due to inflammation limits dermatophytoses, as the fungi are pushed out of the skin.
Dermatophytoses rarely cause serious illness, as the fungi infection tends to be limited to the superficial skin. The infection tends to self-resolve so long as the fungal growth does not exceed inflammatory response and desquamation rate is sufficient. If immune response is insufficient, however, infection may progress to chronic inflammation.
Immune response
Dermatophytoses typically progress from the inflammatory stage to spontaneous healing, which is largely cell-mediated. Fungi are destroyed via oxidative pathways by phagocytes, both intracellularly and extracellularly. A T-cell-mediated response involving TH1 cells is likely responsible for controlling infection. It is unclear whether the antifungal antibodies formed in response to the infection play a role in immunity.
Infection may become chronic and widespread if the host has a compromised immune system and is receiving treatment that reduces T-lymphocyte function. The species responsible for chronic infections in both normal and immunocompromised patients tends to be Trichophyton rubrum, against which the immune response tends to be hyporeactive. However, "the clinical manifestations of these infections are largely due to delayed-type hypersensitivity responses to these agents rather than from direct effects of the fungus on the host."
Diagnosis and identification
Usually, dermatophyte infections can be diagnosed by their appearance. However, a confirmatory rapid in-office test can also be conducted: a scalpel is used to scrape a lesion sample from the nail, skin, or scalp onto a slide, potassium hydroxide (KOH) is added, and the sample is examined under a microscope for the presence of hyphae. Care should be taken in procuring the sample, as false-negative results may occur if the patient is already using an antifungal, if too small a sample is obtained, or if the sample is collected from the wrong site.
Additionally, a Wood's lamp examination (ultraviolet light) may be used to diagnose specific dermatophytes that fluoresce. Should there be an outbreak or if a patient is not responding well to therapy, sometimes a fungal culture is indicated. A fungal culture is also used when long-term oral therapy is being considered.
Fungal culture medium can be used for positive identification of the species. The fungi tend to grow well at 25 degrees Celsius on Sabouraud agar within a few days to a few weeks. In culture, characteristic septate hyphae can be seen interspersed among the epithelial cells, and conidia may form either on the hyphae or on conidiophores. Trichophyton tonsurans, the causative agent of tinea capitis (scalp infection), can be seen as solidly packed arthrospores within the broken hair shafts scraped from the plugged black dots of the scalp. Microscopic morphology of the micro- and macroconidia is the most reliable identification character, but good slide preparation and, in some strains, stimulation of sporulation are needed. While the small microconidia may not always form, the larger macroconidia aid in identification of the fungal species.
Culture characteristics such as surface texture, topography and pigmentation are variable, so they are the least reliable criteria for identification. Clinical information such as the appearance of the lesion, site, geographic location, travel history, animal contacts and race is also important, especially in identifying rare non-sporulating species like Trichophyton concentricum, Microsporum audouinii and Trichophyton schoenleinii.
A special agar called Dermatophyte Test Medium (DTM) has been formulated to grow and identify dermatophytes. Without having to examine the colony, the hyphae, or the macroconidia, one can identify a dermatophyte with a simple color test. The specimen (a scraping from skin, nail, or hair) is embedded in the DTM culture medium and incubated at room temperature for 10 to 14 days. If the fungus is a dermatophyte, the medium turns bright red; if not, no color change occurs. If kept beyond 14 days, false positives can result even with non-dermatophytes. A specimen from the DTM can be sent for species identification if desired.
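The DTM color test amounts to a simple decision rule. The sketch below only illustrates that rule (the 14-day window and red color change come from the description above; the function and its wording are hypothetical) and is not a lab protocol:

```python
def dtm_interpretation(color_change_day=None):
    """Interpret a Dermatophyte Test Medium (DTM) culture.

    color_change_day: day on which the medium turned bright red,
    or None if no color change was observed.
    """
    if color_change_day is None:
        return "no color change: not a dermatophyte"
    if color_change_day <= 14:
        return "red within 14 days: dermatophyte"
    return "red after day 14: possible false positive, even for a non-dermatophyte"

print(dtm_interpretation(12))  # red within 14 days: dermatophyte
```

A reading taken after the 10–14 day incubation window is deliberately flagged as unreliable, mirroring the false-positive caveat above.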
Often dermatophyte infection may resemble other inflammatory skin disorders or dermatitis, thus leading to misdiagnosis of fungal infections.
Transmission
Dermatophytes are transmitted by direct contact
with an infected host (human or animal) or by direct or indirect contact
with infected shed skin or hair in fomites such as clothing, combs, hair brushes, theatre seats, caps, furniture, bed linens, shoes, socks, towels, hotel rugs, sauna, bathhouse, and locker room floors. Also, transmission may occur from soil-to-skin contact. Depending on the species the organism may be viable in the environment for up to 15 months.
While even healthy individuals may become infected, susceptibility to infection is increased when there is a preexisting injury to the skin, such as scars or burns, or exposure to excessive temperature and humidity. Adaptation to growth on humans by most geophilic species has resulted in diminished sporulation, sexuality, and other soil-associated characteristics.
Classification
Dermatophytes are classified as anthropophilic (humans), zoophilic (animals) or geophilic (soil) according to their normal habitat.
Anthropophilic dermatophytes are restricted to human hosts and produce a mild, chronic inflammation.
Zoophilic organisms are found primarily in animals and cause marked inflammatory reactions in humans who have contact with infected cats, dogs, cattle, horses, birds, or other animals. Infection may also be transmitted via indirect contact with infected animals, such as by their hair. This is followed by a rapid termination of the infection.
Geophilic species are usually recovered from the soil but occasionally infect humans and animals. They cause a marked inflammatory reaction, which limits the spread of the infection and may lead to a spontaneous cure but may also leave scars.
Sexual reproduction
Dermatophytes reproduce sexually by either of two modes, heterothallism or homothallism. In heterothallic species, interaction of two individuals with compatible mating types are required in order for sexual reproduction to occur. In contrast, homothallic fungi are self-fertile and can complete a sexual cycle without a partner of opposite mating type. Both types of sexual reproduction involve meiosis.
Frequency of species
In North America and Europe, the nine most common dermatophyte species are:
Trichophyton: rubrum, tonsurans, mentagrophytes, verrucosum, and schoenlenii
Microsporum: canis, audouinii, and gypseum
Epidermophyton: floccosum
About 76% of the dermatophyte species isolated from humans are Trichophyton rubrum.
27% are Trichophyton mentagrophytes
7% are Trichophyton verrucosum
3% are Trichophyton tonsurans
Infrequently isolated (less than 1%) are Epidermophyton floccosum, Microsporum audouinii, Microsporum canis, Microsporum equinum, Microsporum nanum, Microsporum versicolor, Trichophyton equinum, Trichophyton kanei, Trichophyton raubitschekii, and Trichophyton violaceum.
The mixture of species is quite different in domesticated animals and pets (see ringworm for details).
Epidemiology
Since dermatophytes are found worldwide, infections by these fungi are extremely common.
Infections occur more often in males than in females, as progesterone, a predominantly female hormone, inhibits the growth of dermatophyte fungi.
Medications
General medications for dermatophyte infections include topical and oral antifungals.
Topical medications include clotrimazole, butenafine, miconazole, and terbinafine.
Systemic (oral) medications include fluconazole, griseofulvin, terbinafine, and itraconazole.
For extensive skin lesions, itraconazole and terbinafine can speed up healing. Terbinafine is preferred over itraconazole due to fewer drug interactions.
Treatment
Tinea corporis (body), tinea manuum (hands), tinea cruris (groin), tinea pedis (foot), and tinea faciei (face) can be treated topically.
Tinea unguium (nails) usually will require oral treatment with terbinafine, itraconazole, or griseofulvin. Griseofulvin is usually not as effective as terbinafine or itraconazole. A lacquer (Penlac) can be used daily, but is ineffective unless combined with aggressive debridement of the affected nail.
Tinea capitis (scalp) must be treated orally, as the medication must reach deep into the hair follicles to eradicate the fungus. Usually griseofulvin is given orally for 2 to 3 months. Clinically, doses up to twice the recommended amount may be used because of the relative resistance of some strains of dermatophytes.
Tinea pedis is usually treated with topical medicines such as ketoconazole or terbinafine, with oral medications, or with medicines that contain miconazole, clotrimazole, or tolnaftate. Antibiotics may be necessary to treat secondary bacterial infections that occur in addition to the fungus (for example, from scratching).
Tinea cruris (groin) should be kept dry as much as possible.
See also
Hair perforation test
References
External links
Images and descriptions of dermatophytes
Animal fungal diseases
Fungus common names | Dermatophyte | Biology | 3,914 |
13,599,122 | https://en.wikipedia.org/wiki/Canyon%20Lake%20Gorge | Canyon Lake Gorge is a limestone gorge in Texas, which is around long, hundreds of yards (metres) wide, and up to or more deep, which was exposed in 2002 when extensive flooding of the Guadalupe River led to a huge amount of water going over the spillway from Canyon Lake reservoir and removing the sediment from the gorge. The gorge provides a valuable exposure of rock strata as old as 111 million years showing fossils and a set of dinosaur tracks, and forms a new ecosystem for wildlife with carp and other creatures in a series of pools fed by springs and waterfalls.
The Gorge Preservation Society formed as a local citizen's group to develop long-term plans for the Gorge in partnership with the Guadalupe-Blanco River Authority and the U.S. Army Corps of Engineers. Public access to the gorge is restricted to guided tours by the Society along a designated route for a hike lasting about three hours. Availability of tours is limited, no pets are permitted and no rock or fossil collecting is allowed. Research permits can be obtained by university or scientific research groups.
The flood of 2002
In July 2002 up to of water per second flowed over the spillway of Canyon Lake, Texas for approximately six weeks, the first time the spillway had been in use since the reservoir dam was constructed in 1964. Normally, the flow out of the reservoir is around of water per second. The Guadalupe River basin forms a part of "Flash Flood Alley" which is one of the river basins most prone to flash flooding in the world. Nine people were killed by the flood over a stretch of the river, which damaged or destroyed 48,000 homes and cost around $1 billion in damages, but the Canyon Lake manager has stated that even though the floodwaters went over the spillway, the dam still prevented an estimated $38.6 million in damages downstream during the event.
Educational and natural resource
On November 29, 2005, a ceremony was held in which representatives of the Guadalupe-Blanco River Authority and the U.S. Army Corps of Engineers signed an agreement to develop the gorge as an educational and natural resource.
Significance for geologists
The 2002 flood at Canyon Lake and subsequent rapid formation of Canyon Lake Gorge presented a unique opportunity to study the geomorphological power of rapidly moving water and to better understand the process of canyon formation.
In their 2010 study, Michael Lamb of the California Institute of Technology and Mark Fonstad of Texas State University documented the dramatic transformation of a section of the Guadalupe River Valley landscape into a steep-walled bedrock canyon in just three days. The scientists documented the excavation of bedrock limestone to an average depth of over 20 feet and average width of 130–200 feet for a distance of over one mile. The “plucking” and transport of massive boulders from the site resulted in the formation of several waterfalls, inner channels, and bedrock terraces. The abrasion of rock by sediment-loaded water sculpted walls and created plunge pools and teardrop-shaped “streamlined islands”. Although some of the geological formations present in the gorge are known to be associated with rapidly flowing flood water (such as the streamlined islands), other formations (such as the inner channels, knickpoints and terraces) have traditionally been interpreted through the “long ago and very slow” paradigm of geologic time in response to shifting climate or tectonic forcing.
Typically, a steep-walled narrow gorge is inferred to represent slow persistent erosion, but because many of the geological formations of Canyon Lake Gorge are virtually indistinguishable from other formations which have been attributed to long term (slower) processes, the data collected from Canyon Lake Gorge lends further credence to the hypothesis that some of the most spectacular canyons on Earth may have been carved rapidly during ancient megaflood events. Additionally, because the flood conditions under which the gorge was formed are known, Canyon Lake Gorge provides a means of developing improved computer model reconstructions of pre-historic floods to determine water volume, flood duration and erosion rates.
References
External links
Canyons and gorges of Texas
Protected areas of Comal County, Texas
Nature reserves in Texas
Landforms of Comal County, Texas
United States Army Corps of Engineers
Guadalupe-Blanco River Authority | Canyon Lake Gorge | Engineering | 846 |
31,619,549 | https://en.wikipedia.org/wiki/Mentat%20Portable%20Streams | Mentat Portable Streams (MPS) was a platform independent implementation of the UNIX System V STREAMS networking protocol stack, normally sold with the Mentat TCP stack providing TCP/IP support. Portable Streams was used in a number of commercial products, including Apple Computer's Open Transport, AIX, VxWorks, Palm OS's Cobalt, Novell's UnixWare and other systems. Mentat also ported the system to Linux and Windows NT as a standalone product. Portable Streams was written by Mentat, who was purchased by Packeteer in 2004.
References
Internet Protocol based network software | Mentat Portable Streams | Technology | 120 |
37,832,906 | https://en.wikipedia.org/wiki/SAF-T | SAF-T (Standard Audit File for Tax) is an international standard for electronic exchange of reliable accounting data from organizations to a national tax authority or external auditors. The standard is defined by the Organisation for Economic Co-operation and Development (OECD). The file requirements are expressed using XML, but the OECD does not impose any particular file format, recommending that (para 6.28) "It is entirely a matter for revenue bodies to develop their policies for implementation of SAF-T, including its representation in XML. However, revenue bodies should consider data formats that permit audit automation today while minimising potential costs to all stakeholders when moving to new global open standards for business and financial data such as XBRL, and XBRL_GL in particular."
The standard is now increasingly adopted within European countries as a means to file tax returns electronically.
The standard was adopted in 2008 by Portugal and has since spread to other European countries, e.g. Luxembourg, Austria, Germany and France. From 1 January 2022, SAF-T has also been rolled out in Romania, where it applies to large Romanian-resident companies and certain foreign companies.
Although SAF-T is formally standardized, both with respect to syntax (format) and semantics (meaning) to allow for and fulfill automatic data interchange and tools support, e.g. across country borders or common computerized systems, it does include some room for revenue bodies (tax administrations) to add individual elements, e.g. to cover special needs in a taxation or audit system. For example, in Portugal the SAF-T (PT) v1.04_01 standard – based on SAF-T v1.0 – includes some special elements and types relevant to the standard in Portugal.
Standards
In May 2005, the OECD Committee on Fiscal Affairs (CFA) published the first version of the SAF-T guidance. Version 1.0 was based on entries as found in a General Ledger Chart of Accounts, together with master file data for customers and suppliers and details of invoices, orders, payments, and adjustments. The standard describes a set of messages for data exchange between accounting software and national tax authorities or auditors. The syntax is proprietary and based on XML. There are multiple localized versions available which are compatible with the general v1.0 standard. The schema was originally defined in the old DTD format, a precursor to today's XML Schema.
The revised version (2.0) extended the standard to include information on Inventory and Fixed Assets. The opportunity was also taken to enhance the original SAF-T specification to take account of suggestions from OECD member countries and others. The schema was changed to the XML Schema format, and it is not fully backward compatible with v1.0.
Country adoptions
The following countries/organizations have laws adopting SAF-T:
See also
XBRL GL
UN/CEFACT
SIE (file format)
External links
SAF-T v2.0 XML schema http://www.oecd.org/ctp/taxadministration/45167181.pdf
OECD http://www.oecd.org
XBRL
UN/CEFACT
SIE
References
Saf-t para faturação Portuguese Edition, Edited 7 July 2019.
Data interchange standards
Accounting software | SAF-T | Technology | 702 |
22,635,490 | https://en.wikipedia.org/wiki/Neopentyl%20glycol | Neopentyl glycol (IUPAC name: 2,2-dimethylpropane-1,3-diol) is an organic chemical compound. It is used in the synthesis of polyesters, paints, lubricants, and plasticizers. When used in the manufacture of polyesters, it enhances the stability of the product towards heat, light, and water. By esterification reaction with fatty or carboxylic acids, synthetic lubricating esters with reduced potential for oxidation or hydrolysis, compared to natural esters, can be produced.
Reactions
Neopentyl glycol is synthesized industrially by the aldol reaction of formaldehyde and isobutyraldehyde. This creates the intermediate hydroxypivaldehyde, which can be converted to neopentyl glycol by either a Cannizzaro reaction with excess formaldehyde, or by hydrogenation using palladium on carbon.
Owing to its tendency to form cyclic derivatives (see Thorpe-Ingold effect), it is used as a protecting group for ketones, for example in gestodene synthesis. Similarly, it gives boronic acid esters, which can be useful in cross-coupling reactions.
A condensation reaction of neopentyl glycol with 2,6-di-tert-butylphenol gives CGP-7930.
Neopentyl glycol is a precursor to neopentyl glycol diglycidyl ether. The sequence begins with alkylation with epichlorohydrin using a Lewis acid catalyst. Dehydrochlorination of the resulting halohydrin with sodium hydroxide affords the desired ether.
Research
It has been reported that plastic crystals of neopentyl glycol exhibit a colossal barocaloric effect (CBCE), a cooling effect caused by pressure-induced phase transitions. The observed entropy changes are about 389 joules per kilogram per kelvin near room temperature. This effect is likely to be very useful in future solid-state refrigeration technologies.
See also
Pentaerythritol
Trimethylolethane
Trimethylolpropane
References
Monomers
Plasticizers
Alkanediols | Neopentyl glycol | Chemistry,Materials_science | 477 |
751,379 | https://en.wikipedia.org/wiki/ConScript%20Unicode%20Registry | The ConScript Unicode Registry is a volunteer project to coordinate the assignment of code points in the Unicode Private Use Areas (PUA) for the encoding of artificial scripts, such as those for constructed languages. It was founded by John Cowan and was maintained by him and Michael Everson. It is not affiliated with the Unicode Consortium.
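Because the registry coordinates only Private Use Area assignments, whether a code point is even eligible for a CSUR assignment can be checked against the PUA ranges defined by the Unicode standard (the BMP PUA plus the two supplementary PUA planes). The helper below is an illustrative sketch; the function name is ours, but the ranges are those defined by Unicode.

```python
# BMP Private Use Area plus the two supplementary PUA planes, per Unicode:
# U+E000-U+F8FF, U+F0000-U+FFFFD, and U+100000-U+10FFFD.
PUA_RANGES = [(0xE000, 0xF8FF), (0xF0000, 0xFFFFD), (0x100000, 0x10FFFD)]

def in_private_use_area(cp):
    """True if a code point lies in an area the registry can assign,
    since CSUR only coordinates Private Use Area code points."""
    return any(lo <= cp <= hi for lo, hi in PUA_RANGES)

print(in_private_use_area(0xE000))   # True: start of the BMP PUA
print(in_private_use_area(0x0041))   # False: 'A' is standard Unicode
```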
History
The ConScript Unicode Registry is a volunteer project that was founded by John Cowan in the early 1990s. It is a joint project of John Cowan and Michael Everson.
Historically, it was hosted on both Cowan and Everson's websites (branded as the North American and European sites, respectively); in 2002, the site was transitioned to be hosted exclusively on Everson's site.
Since 2008, the ConScript Unicode Registry has been largely unmaintained; in 2008, Cowan explained that Everson was too busy to continue maintaining the project. Due to this inactivity, Rebecca Bettencourt founded the Under-ConScript Unicode Registry, aiming to coordinate code points for constructed languages until they can be formally added to the ConScript Unicode Registry. Scripts added to the Under-ConScript Unicode Registry include Sitelen Pona (for Toki Pona) and Cirth.
Scripts
The CSUR and UCSUR include the following scripts:
Font support
Some fonts support ConScript Unicode specified code points:
Constructium, a proportional font based on SIL Gentium
Fairfax, a monospaced font family intended for text editors and terminals
GNU Unifont, a bitmap font intended as a fallback font, includes CSUR and UCSUR characters in the separate Unifont CSUR package
Horta
Kurinto Font Folio
Nishiki-teki
See also
Medieval Unicode Font Initiative
References
External links
ConScript Unicode Registry
Under-ConScript Unicode Registry
Unicode
Constructed languages
Information technology organizations | ConScript Unicode Registry | Technology | 368 |
4,435,023 | https://en.wikipedia.org/wiki/Mud%20logging | Mud logging is the creation of a detailed record (well log) of a borehole by examining the cuttings of rock brought to the surface by the circulating drilling medium (most commonly drilling mud). Mud logging is usually performed by a third-party mud logging company. This provides well owners and producers with information about the lithology and fluid content of the borehole while drilling. Historically it is the earliest type of well log. Under some circumstances compressed air is employed as a circulating fluid, rather than mud. Although most commonly used in petroleum exploration, mud logging is also sometimes used when drilling water wells and in other mineral exploration, where drilling fluid is the circulating medium used to lift cuttings out of the hole. In hydrocarbon exploration, hydrocarbon surface gas detectors record the level of natural gas brought up in the mud. A mobile laboratory is situated near the drilling rig, or on the deck of an offshore drilling rig or drill ship.
The services
Mud logging technicians in an oil field drilling operation determine positions of hydrocarbons with respect to depth, identify downhole lithology, monitor natural gas entering the drilling mud stream, and draw well logs for use by oil company geologists. Rock cuttings circulated to the surface in drilling mud are sampled and examined.
The mud logging company is normally contracted by the oil company (or operator). They then organize this information in the form of a graphic log, showing the data charted on a representation of the wellbore.
The oil company representative (Company Man, or "CoMan"), together with the tool pusher and well-site geologist (WSG), provides mud loggers with their instructions. The mud logging company is contracted specifically as to when to start well-logging activity and what services to provide. Mud logging may begin on the first day of drilling, known as the "spud in" date, but is more likely at some later time (and depth) determined by the oil industry geologist's research. The mud logger may also possess logs from wells drilled in the surrounding area. This information (known as "offset data") can provide valuable clues as to the characteristics of the particular geostrata that the rig crew is about to drill through.
Mud loggers connect various sensors to the drilling apparatus and install specialized equipment to monitor or "log" drill activity. This can be physically and mentally challenging, especially when having to be done during drilling activity. Much of the equipment will require precise calibration or alignment by the mud logger to provide accurate readings.
Mud logging technicians observe and interpret the indicators in the mud returns during the drilling process, and at regular intervals log properties such as drilling rate, mud weight, flowline temperature, oil indicators, pump pressure, pump rate, lithology (rock type) of the drilled cuttings, and other data. Mud logging requires a good deal of diligence and attention. Sampling the drilled cuttings must be performed at predetermined intervals, and can be difficult during rapid drilling.
Another important task of the mud logger is to monitor gas levels (and types) and notify other personnel on the rig when gas levels may be reaching dangerous levels, so appropriate steps can be taken to avoid a dangerous well blowout condition.
Because of the lag time between drilling and the time required for the mud and cuttings to return to the surface, a modern augmentation has come into use: measurement while drilling (MWD). The MWD technician, often a separate service company employee, logs data in a similar manner, but the data is different in source and content. Most of the data logged by an MWD technician comes from expensive and complex, sometimes electronic, tools installed downhole at or near the drill bit.
Scope
Mud logging includes observation and microscopic examination of drill cuttings (formation rock chips), and evaluation of gas hydrocarbon and its constituents, basic chemical and mechanical parameters of drilling fluid or drilling mud (such as chlorides and temperature), as well as compiling other information about the drilling parameters. The data is then plotted on a graphic log called a mud log.
Other real-time drilling parameters that may be compiled include, but are not limited to; rate of penetration (ROP) of the bit (sometimes called the drill rate), pump rate (quantity of fluid being pumped), pump pressure, weight on bit, drill string weight, rotary speed, rotary torque, RPM (Revolutions per minute), SPM (Strokes per minute) mud volumes, mud weight and mud viscosity. This information is usually obtained by attaching monitoring devices to the drilling rig's equipment with a few exceptions such as the mud weight and mud viscosity which are measured by the derrickhand or the mud engineer.
Rate of drilling is affected by the pressure of the column of mud in the borehole and its relative counterbalance to the internal pore pressures of the encountered rock. A rock pressure greater than the mud fluid will tend to cause rock fragments to spall as it is cut and can increase the drilling rate. "D-exponents" are mathematical trend lines which estimate this internal pressure. Thus both visual evidence of spalling and mathematical plotting assist in formulating recommendations for optimum drilling mud densities for both safety (blowout prevention) and economics. (Faster drilling is generally preferred.)
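The "d-exponent" trend mentioned above is commonly computed with the Jorden-Shirley normalisation, which combines drilling rate, rotary speed, weight on bit and bit diameter into a dimensionless number. The sketch below uses that standard formula; the parameter values are illustrative, not from any particular well.

```python
import math

def d_exponent(rop_ft_hr, rpm, wob_lbf, bit_diameter_in):
    """Jorden-Shirley d-exponent: a normalized drilling-rate trend used to
    estimate internal pore pressure. A drop in the trend at constant weight
    and rotary speed can indicate an overpressured (spalling) zone."""
    return (math.log10(rop_ft_hr / (60.0 * rpm))
            / math.log10(12.0 * wob_lbf / (1e6 * bit_diameter_in)))

# Faster drilling at the same weight and rotary speed lowers the d-exponent,
# consistent with rock fragments spalling when pore pressure exceeds mud pressure.
normal  = d_exponent(rop_ft_hr=50, rpm=120, wob_lbf=30000, bit_diameter_in=8.5)
spalling = d_exponent(rop_ft_hr=90, rpm=120, wob_lbf=30000, bit_diameter_in=8.5)
print(round(normal, 2), round(spalling, 2))
```

In practice the computed trend is plotted against depth alongside the visual spalling evidence when recommending mud densities.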
Mud logging is often written as a single word "mudlogging". The finished product can be called a "mud log" or "mudlog". The occupational description is "mud logger" or "mudlogger". In most cases, the two word usage seems to be more common. The mud log provides a reliable time log of drilled formations.
Details
The rate of penetration in Figure 1 & 2 is represented by the black line on the left side of the log. The farther to the left that the line goes, the faster the rate of penetration. On this mud log, ROP is measured in feet per hour, but on some older, hand-drawn mud logs, it is measured in minutes per foot.
The porosity in Figure 1 is represented by the blue line farthest to the left of the log. It indicates the pore space within the rock structure. Oil and gas reside within this pore space. Note how far to the left the porosity goes, where all the sand (in yellow) is. This indicates that the sand has good porosity. Porosity is not a direct or physical measurement of the pore space but rather an extrapolation from other drilling parameters and, therefore, is not always reliable.
The lithology in Figure 1 & 2 is represented by the cyan, gray/black and yellow blocks of color. Cyan = lime, gray/black = shale and yellow = sand. More yellow represents more sand identified at that depth. The lithology is measured as a percentage of the total sample as visually inspected under a microscope, normally at 10× magnification (Figure 3). These are but a fraction of the different types of formations that might be encountered. (Color coding is not necessarily standardized among different mud logging companies, though the symbol representations for each are very similar.) In Figure 3, a sample of cuttings is seen under a microscope at 10× magnification after they have been washed off. Some of the larger shale and lime fragments are separated from this sample by running it through sieves and must be considered when estimating percentages. Also, this image view is only a fragment of the total sample, and some of the sand at the bottom of the tray cannot be seen and must also be considered in the total estimation. Thus, this sample would be considered to be about 90% shale, 5% sand and 5% lime (in 5% increments).
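The 5%-increment estimate described above can be mimicked in code. The rounding-and-adjust rule below is our own assumption for illustration, not a documented logging procedure; it simply snaps raw visual estimates to 5% steps while keeping the total at 100%.

```python
def snap_to_increments(fractions, step=5):
    """Snap raw visual estimates (in percent) to the nearest `step`, then
    nudge the largest component so the total is exactly 100, mirroring a
    logger's record such as '90% shale, 5% sand, 5% lime'."""
    snapped = {k: step * round(v / step) for k, v in fractions.items()}
    drift = 100 - sum(snapped.values())
    biggest = max(snapped, key=snapped.get)
    snapped[biggest] += drift
    return snapped

print(snap_to_increments({"shale": 88.0, "sand": 6.0, "lime": 6.0}))
```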
The gas in Figure 1 & 2 is represented by the green line and is measured in units as the quantity of total gas, but does not represent the actual quantity of oil or gas the reservoir contains. In (Figure 1) the squared-off dash-dot lines just to the right of the sand (in yellow) and left of the gas (in green) represents the heavier hydrocarbons detected. Cyan = C2 (ethane), purple = C3 (propane) and blue = C4 (butane). Detecting and analyzing these heavy gases help to determine the type of oil or gas the formation contains.
See also
Directional drilling
Drilling fluid (mud)
Geosteering
LWD (Logging While Drilling)
MWD (Measurement While Drilling)
Well logging
References
Further reading
Chambre Syndicale de la recherche et de la production du petrole et du gaz naturel, 1982, Geological and mud logging in drilling control: catalogue of typical cases, Houston, TX: Gulf Publishing Company and Paris: Editions technip, 81 p.
Exlog, 1979, Field geologist's training guide: an introduction to oilfield geology, mud logging and formation evaluation, Sacramento, CA: Exploration Logging, Inc., 301 p. Privately published with no ISBN
Whittaker, Alun, 1991, Mud logging handbook, Englewood Cliffs, NJ: Prentice Hall, 531 p.
External links
Articles and books on mud logging
Hand drawn mud logs
Geoservices definition of Mud Logging
Maverick Energy Lexicon
Mud Logging Gas Detectors
Well logging
Petroleum geology | Mud logging | Chemistry,Engineering | 1,906 |
325,156 | https://en.wikipedia.org/wiki/Mezzanine | A mezzanine (; or in Italian, a mezzanino) is an intermediate floor in a building which is partly open to the double-height ceilinged floor below, or which does not extend over the whole floorspace of the building, a loft with non-sloped walls. However, the term is often used loosely for the floor above the ground floor, especially where a very high-ceilinged original ground floor has been split horizontally into two floors.
Mezzanines may serve a wide variety of functions. Industrial mezzanines, such as those used in warehouses, may be temporary or semi-permanent structures.
In Royal Italian architecture, mezzanino also means a chamber created by partitioning that does not go up all the way to the arch vaulting or ceiling; these were historically common in Italy and France, for example in the palaces for the nobility at the Quirinal Palace.
Definition
A mezzanine is an intermediate floor (or floors) in a building which is open to the floor below. It is placed halfway (mezzo means 'half' in Italian) up the wall on a floor which has a ceiling at least twice as high as a floor with minimum height. A mezzanine does not count as one of the floors in a building, and generally does not count in determining maximum floorspace. The International Building Code permits a mezzanine to have as much as one-third of the floor space of the floor below. Local building codes may vary somewhat from this standard. A space may have more than one mezzanine, as long as the sum total of floor space of all the mezzanines is not greater than one-third the floor space of the complete floor below.
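The one-third limit described above reduces to simple arithmetic. The function below is an illustrative sketch of that rule only, not a code-compliance check, since, as noted, local building codes may vary from the IBC standard.

```python
def max_additional_mezzanine_area(floor_area, existing_mezzanines=()):
    """Under the one-third rule, the combined floor space of all mezzanines
    may not exceed one third of the floor below. Returns how much mezzanine
    area can still be added (zero if the cap is already reached)."""
    cap = floor_area / 3.0
    used = sum(existing_mezzanines)
    return max(0.0, cap - used)

# A 9,000 sq ft floor allows 3,000 sq ft of total mezzanine space;
# with a 1,200 sq ft mezzanine already installed, 1,800 sq ft remains.
print(max_additional_mezzanine_area(9000, existing_mezzanines=[1200]))
```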
Mezzanines help to make a high-ceilinged space feel more personal and less vast, and can create additional floor space. Mezzanines, however, may have lower-than-normal ceilings due to their location. The term "mezzanine" does not imply any particular function; mezzanines can be used for a wide array of purposes.
Mezzanines are commonly used in modern architecture, which places a heavy emphasis on light and space.
Industrial mezzanines
In industrial settings, mezzanines may be installed (rather than built as part of the structure) in high-ceilinged spaces such as warehouses. These semi-permanent structures are usually free-standing, can be dismantled and relocated, and are sold commercially. Industrial mezzanine structures can be supported by structural steel columns and elements, or by racks or shelves. Depending on the span and the run of the mezzanine, different materials may be used for the mezzanine's deck like fibre cement boards. Some industrial mezzanines may also include enclosed, paneled office space on their upper levels. There are three basic types of industrial mezzanines: custom, standard or modular.
A structural engineer is sometimes hired to help determine whether the floor of the building can support a mezzanine (and how heavy the mezzanine may be), and to design the appropriate mezzanine.
Custom mezzanines
Custom Mezzanines are steel, raised industrial platform structures that are designed specifically to match the space and capacity needs of a given facility. It will, at a minimum, include a stairway for accessing the mezzanine. These structures typically are the strongest in terms of support capacity.
Standard mezzanines
Standard Mezzanines are steel, raised industrial platform structures that are completely self-supporting and are sold in predetermined sizes and shapes. These off-the-shelf structures are usually strong (in terms of support capacity) and less expensive than custom mezzanines.
Safety
Employees in material handling and manufacturing are often at risk of falls when they are on the job. Recent figures show approximately 20,000 serious injuries and nearly 100 fatalities a year in industrial facilities. Falls of people and objects from mezzanines are of particular concern.
In many industrial operations, openings are cut into the guardrail on mezzanines and elevated work platforms to allow palletized material to be loaded and unloaded to upper levels, often with a fork truck. The Occupational Safety and Health Administration (OSHA) and the International Building Code (IBC) set out requirements for fall protection, and the American National Standards Institute (ANSI) has published standards for securing pallet drop areas to protect workers who work on elevated platforms and are exposed to openings.
In most cases, safety gates are used to secure these openings. OSHA requires openings 48 inches or taller to be secured with a fall protection system. Removable sections of railing or gates that swing or slide open would be used to open up the area and allow the transfer of material, and then close once the material is removed. However, current ANSI standards require dual-gate safety systems for fall protection.
Dual-gate safety systems were created to secure these areas, allowing a barrier to be in place at all times, even while pallets are being loaded or removed. Dual-gate systems create a completely enclosed workstation providing protection for the worker during loading and off-loading operations. When the rear-side gate opens, the ledge gate automatically closes, ensuring there is always a gate between the operator and the ledge.
See also
Overhead storage
References
Bibliography
External links
Proper safeguarding for elevated work platforms (1:37 min. video)
Video showing the main construction of an industrial mezzanine floor (2:46 min video)
Architectural elements
Floors
Industrial equipment | Mezzanine | Technology,Engineering | 1,134 |
39,439,707 | https://en.wikipedia.org/wiki/Footlight%20%28typeface%29 | Footlight is a serif typeface designed by Malaysian type designer Ong Chong Wah in 1986 for the Monotype Corporation. Footlight is an irregular design. It is sold in weights from light to extra-bold with matching italics. It was originally designed as an italic font; a roman version was made later.
Footlight MT
A version of Footlight's light style called "Footlight MT" (without italic) has been bundled with some Microsoft software.
Distribution
It has been distributed in the following products:
Access 97 SR2
Office 2000 Premium
Office 2007
Office 2007 Professional Edition
Office 2010
Office 4.3 Professional
Office 97 Small Business Edition SR2
Office 97 SR1a
Office Professional Edition 2003
PhotoDraw 2000
Picture It! 2000
Picture It! 2002
Picture It! 98
Publisher 2000
Publisher 2007
Publisher 97
Publisher 98
Windows Small Business Server 2003
Unicode
Footlight MT has support for the following Unicode blocks:
Basic Latin
Latin-1 Supplement
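The two blocks listed above correspond to the code-point ranges U+0000-U+007F (Basic Latin) and U+0080-U+00FF (Latin-1 Supplement), which together cover only Western European text. A small sketch classifying a string against those ranges (the function name is ours):

```python
# Block names and code-point ranges per the Unicode standard.
BLOCKS = {
    "Basic Latin": range(0x0000, 0x0080),
    "Latin-1 Supplement": range(0x0080, 0x0100),
}

def covered_blocks(text):
    """Return which of Footlight MT's supported blocks the characters of
    `text` fall in, with 'outside' for anything beyond Latin-1."""
    names = set()
    for ch in text:
        for name, rng in BLOCKS.items():
            if ord(ch) in rng:
                names.add(name)
                break
        else:
            names.add("outside")
    return names

print(covered_blocks("Café"))     # both supported blocks
print(covered_blocks("Čeština"))  # Č and š lie in Latin Extended-A, so 'outside'
```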
References
External links
Microsoft Typography - Footlight MT
Microsoft Typography - Footlight MT Light - Version 1.51
Monotype's page for the entire Footlight family
Identifont - Footlight
Monotype typefaces
Typefaces and fonts introduced in 1985
Display typefaces
Serif typefaces | Footlight (typeface) | Technology | 256 |
67,567,062 | https://en.wikipedia.org/wiki/Neocalyptrella | Neocalyptrella is a genus of diatoms belonging to the family Rhizosoleniaceae.
Species:
Neocalyptrella robusta (G.Norman ex Ralfs) Hernández-Becerril & Castillo
References
Diatoms
Diatom genera | Neocalyptrella | Biology | 57 |
27,458,316 | https://en.wikipedia.org/wiki/Institute%20for%20Philanthropy | The Institute for Philanthropy is a not-for-profit organisation which provides information and educational programmes to philanthropists and to charitable organizations. Originally established in 2000 by Hilary Browne-Wilkinson, a former solicitor at University College London, the Institute currently operates from offices in London and New York.
The Institute carries out research about charitable organizations and charitable tax law, and provides advice to potential donors on the efficient utilisation of funding.
The Institute works to increase effective philanthropy in the United Kingdom and internationally, by raising awareness and understanding of philanthropy, providing donor education and building donor networks.
Programmes
The Institute has developed several international philanthropy programmes:
The Philanthropy Workshop, implemented in 1995 as an offshoot of the Rockefeller Foundation, is a series of three confidential one-week workshops which inform, educate, and connect wealthy donors so they are able to manage their own philanthropic activities more effectively.
The Youth and Philanthropy Initiative (YPI) was launched in Canada by the Toskan-Casale Foundation in 2002 at the Royal St. George's College in Toronto and has been directed by the Institute for Philanthropy since 2007, working with the Toskan Casale Foundation and the Wood Family Trust. It is a school-based programme which works with local charities to help increase community awareness and knowledge of philanthropy among young people. As of 2013, it is part of the curriculum in 75 secondary schools. Pupils visit their chosen local charity and prepare presentations showing why that charity is worthy of support. The group judged to have made the best presentation in each school is granted £3,000 to award to their charity. Over 10,000 pupils have participated in the program.
Next Generation Philanthropy is an educational program directed in partnership with the Institute for Family Business. It provides information and education to younger philanthropists in a group setting.
Think Philanthropy is a series of lectures and workshops discussing and providing information about current issues and trends in the field of philanthropy, such as effective charitable asset management, climate change, funding in areas of high risk, and funding in an economic downturn. The talks are led by philanthropists and by experts such as Paul Collier, Professor of Economics, Oxford University; Professor David Swensen, Chief Investment Officer, Yale University; Dr. Steve Howard, CEO, Climate Group; and Dr. Sigrid Rausing, Director of the Sigrid Rausing Trust.
Partnerships
The Institute has partnered with several leading organisations including Credit Suisse, Goldman Sachs, The Royal Bank of Canada and Arcapita. It has also worked with charitable foundations such as The Rockefeller Foundation, The Wellcome Trust and The Bill and Melinda Gates Foundation. It also provided advice and nominations for the Inaugural Happy List.
References
Organizations established in 2000
Non-profit organisations based in London
Non-profit organizations based in New York City
Philanthropy | Institute for Philanthropy | Biology | 557 |
36,108,052 | https://en.wikipedia.org/wiki/Contextualization%20%28computer%20science%29 | In computer science, contextualization is the process of identifying the data relevant to an entity (e.g., a person or a city) based on the entity's contextual information.
Definition
Context or contextual information is any information about any entity that can be used to effectively reduce the amount of reasoning required (via filtering, aggregation, and inference) for decision making within the scope of a specific application. Contextualisation is then the process of identifying the data relevant to an entity based on the entity's contextual information. Contextualisation excludes irrelevant data from consideration and has the potential to reduce data along several dimensions, including volume, velocity, and variety, in large-scale data-intensive applications (Yavari et al.).
Usage
The main usage of "contextualization" is in improving data processing:
Reduce the amount of data: Contextualization has the potential to reduce the amount of data based on the interests of applications, services, and users. Contextualization can improve the scalability and efficiency of data processing, querying, and delivery by excluding irrelevant data.
As an example, ConTaaS facilitates contextualization of data for IoT applications and can improve processing for large-scale IoT applications across various Big Data aspects, including volume, velocity, and variety.
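The data-reduction idea above can be sketched in a few lines of Python. This is an illustrative sketch, not ConTaaS itself: the `Context` fields and the sensor readings are hypothetical, and the point is only that filtering by an entity's context discards irrelevant data before any heavier processing.

```python
# Hypothetical sketch of contextualization for an IoT data stream:
# keep only the readings relevant to an entity's context.

from dataclasses import dataclass

@dataclass
class Context:
    entity_id: str
    location: str
    interests: set  # sensor types this application cares about

def contextualize(readings, ctx):
    """Filter raw readings down to those relevant to the entity's context."""
    return [r for r in readings
            if r["location"] == ctx.location and r["type"] in ctx.interests]

readings = [
    {"type": "temperature", "location": "melbourne", "value": 21.5},
    {"type": "humidity", "location": "melbourne", "value": 60},
    {"type": "temperature", "location": "sydney", "value": 25.0},
]
ctx = Context("user-1", "melbourne", {"temperature"})
relevant = contextualize(readings, ctx)  # only the first reading survives
```

Here two of the three readings are excluded from consideration, reducing the volume of data the application must reason over.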
Example domains
Object-oriented programming: Contextualization consists of providing adequate initialization parameters to a class constructor at object creation time.
Virtualization: Contextualization permits setting or overriding, at the end of VM instantiation, VM data that had unknown or default values when the live CD image was created: typically the hostname, IP address, .ssh/authorized_keys, ...
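The object-oriented sense above can be illustrated with a small sketch. The `Connection` class and its parameters are hypothetical; the point is that the same class is contextualized differently at creation time depending on its deployment environment.

```python
# Illustrative sketch: contextualization in the object-oriented sense means
# supplying context-appropriate parameters to a constructor at creation time.
# The Connection class and the environments below are made up for the example.

class Connection:
    def __init__(self, host, port, timeout):
        self.host, self.port, self.timeout = host, port, timeout

def make_connection(env):
    """Contextualize the object for its deployment environment."""
    params = {
        "production": {"host": "db.internal", "port": 5432, "timeout": 5},
        "development": {"host": "localhost", "port": 5432, "timeout": 60},
    }[env]
    return Connection(**params)

conn = make_connection("development")
```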
References
Computing terminology | Contextualization (computer science) | Technology | 355 |
78,649,718 | https://en.wikipedia.org/wiki/Grayanic%20acid | Grayanic acid is an organic compound found in certain lichens, particularly Cladonia grayi, where it serves as a secondary metabolite with notable taxonomic importance. Identified in the 1930s, it is now recognised as a chemotaxonomic marker that helps distinguish closely related species within the Cladonia chlorophaea species group. Grayanic acid crystallises as colourless, needle-like structures, melts at approximately , and displays distinctive fluorescence under ultraviolet light, aiding in its detection and study.
Chemically, grayanic acid is a depsidone, featuring two aromatic rings linked by ester and ether bonds. Its biosynthesis occurs in the fungal partner of the lichen and does not require the presence of the algal symbiont. Genetic research has identified a key biosynthetic gene cluster responsible for its formation, highlighting biochemical pathways and enzymes that convert precursor compounds into grayanic acid and related metabolites such as sphaerophorin.
Beyond its chemical characteristics, grayanic acid has proven invaluable in refining lichen taxonomy, as variations in its presence and concentration underpin subtle species distinctions. By comparing grayanic acid profiles across different populations and geographic regions, researchers have gained insights into evolutionary relationships, species distribution patterns, and the ecological roles that these fungal–algal partnerships play in diverse environments.
History
Grayanic acid was first isolated in the 1930s by Yasuhiko Asahina and Zyozi Simosato from the lichen species Cladonia grayi. In their initial study, they determined it to be a crystalline acid with a melting point of 185 °C and proposed a molecular formula of C21H24O7. However, further investigation was limited at the time due to a shortage of material.
By 1943, Alexander W. Evans highlighted the utility of Asahina's microchemical methods, including microcrystallisation, in identifying grayanic acid. Evans described its needle-like crystals, which often formed radiating clusters under specific conditions, and noted a melting point near 185 °C, consistent with Asahina's findings.
In 1963, Shoji Shibata and Hsiich-Ching Chiang revised the molecular formula to C23H26O7 and refined the melting point to 186–189 °C, aligning it with subsequent modern analyses. Their work also supported Asahina's classification of the Cladonia chlorophaea complex into distinct species based on chemical markers, such as grayanic acid, cryptochlorophaeic acid, and merochlorophaeic acid. However, Elke Mackenzie suggested that such differences were better explained as chemical strains (chemotypes) within a single species. Later synthetic studies in 1976 determined a slightly lower range of 181.5–182.5 °C for synthetic grayanic acid, highlighting minor variations attributable to synthetic purity.
Structure
The molecular structure of grayanic acid consists of a depside skeleton with two benzene rings connected by both ester (-CO-O-) and ether (-O-) linkages, forming a depsidone. The molecule contains one methoxy group (H3CO-), one free hydroxyl group (-OH), and a chelated carboxyl group (-COOH). Nuclear magnetic resonance studies revealed the presence of alkyl side chains, specifically determined to be either (1) CH3 and C7H15 or (2) C2H5 and C6H13. The complete systematic name for the compound is 6-heptyl-8-hydroxy-3-methoxy-1-methyl-11-oxo-11H-dibenzo[b,e][1,4]dioxepin-7-carboxylic acid.
While the initial structural assignment was based primarily on spectroscopic evidence, some uncertainty remained regarding the precise positions of the alkyl groups. This ambiguity was definitively resolved through total synthesis in 1976, which confirmed the original structural proposal. The compound's structure is notably similar to sphaerophorin, another lichen metabolite found in the genus Sphaerophorus.
Properties
Physical properties
Grayanic acid forms radiating clusters of colourless needles upon crystallisation, and has a melting point of 186–189 °C. It dissolves readily in ethyl acetate, methyl acetate, ethanol, and chloroform, is sparingly soluble in benzene, and is insoluble in hexane and petroleum ether. These solubility characteristics facilitate its extraction and crystallisation from lichen material. Synthetic material provided a more precise melting point, measured at 181.5–182.5 °C.
Nuclear magnetic resonance spectroscopy identifies signals at δ 0.89 (deformed triplet, methyl), 1.26 (broad signal, five methylene groups), 2.50 (singlet, methyl), 3.24 (broad signal, ArCH₂), 3.83 (singlet, methoxy), and 6.62–6.72 (aromatic protons). Mass spectrometry detects a molecular ion peak at m/z 414 (M+, C23H26O7), with characteristic fragmentation patterns including peaks at m/z 396 (M+-H₂O), 370 (M+-CO₂), and 165 (A-ring fragment). High-resolution mass spectrometry verifies the molecular formula, providing an exact mass of 414.1679. The compound has identical Rf values across multiple solvent systems when compared with authentic natural samples.
The compound fluoresces blue under ultraviolet light, a distinctive property. This fluorescence aids in studying its accumulation in laboratory cultures of the fungal partner. When the fungus is grown in culture, grayanic acid forms visible extracellular deposits on aerial fungal filaments (hyphae). These deposits appear as patches or bands along the hyphae, accumulating more densely in older regions farther from the growing tips. The deposits dissolve readily in acetone or methanol, leaving only the fungal cell walls' natural fluorescence.
Chemical properties
The chemical behaviour of grayanic acid includes several distinctive reactions and spectroscopic characteristics. In ethanolic solution, it forms a violet colour with 1% ferric chloride, and a pale yellow colour with diazonium reagent. Its ultraviolet absorption spectrum shows two peaks (λmax): one at 258 nm (log ε 4.10), and another at 300–310 nm (log ε of 3.5). Infrared spectroscopy identifies structural features such as a chelated carboxyl group at 1650 cm⁻¹, a lactonic linkage at 1750 cm⁻¹, and benzenoid rings with bands at 1570 and 1610 cm⁻¹. The compound remains stable under methanolysis, showing no changes after boiling in methanol for 18 hours.
Nuclear magnetic resonance studies of grayanic acid in chloroform show proton signals at τ = 9.10 (terminal methyl groups of long alkyl chains), τ = 8.63 (intermediate methylenes), and τ = 6.75 (end methylenes attached to the benzene ring). These signals, compared with those of similar compounds, helped identify the positions of functional groups in the molecule. In acetone, benzene ring protons exhibit chemical shifts at 6.13, 6.66, and 6.80 ppm, matching the pattern of related compounds like sphaerophorin.
Thin-layer chromatography shows grayanic acid as a UV+ pale blue spot before heating, which becomes pale pinkish-brown with a UV+ purple hue after acid spray and heating. This chromatographic behaviour aids in identifying grayanic acid in complex lichen extracts, especially in chemotaxonomic studies distinguishing species like Neophyllis melacarpa and N. pachyphylla by their metabolite profiles.
Grayanic acid displays characteristic behaviour in solvents and chemical tests. During bicarbonate solution tests, it forms an oily layer between ether and aqueous phases, in addition to its standard solubility properties. It fluoresces green when treated with potassium hydroxide and chloral hydrate but gives a negative result in the homofluorescein reaction. These chemical properties helped classify grayanic acid as an orcinol-type depsidone rather than a simple depside.
Reactivity
Grayanic acid undergoes chemical transformations that aid in understanding its structure and reactivity. It readily forms a mono-acetate derivative (melting point 155–157°C) and can be converted to a methyl ether methyl ester (melting point 88–90°C). Acetylgrayanic acid is prepared by treating grayanic acid with acetic anhydride and sulfuric acid. The resulting crystals melt at 57–59°C after recrystallisation from benzene and n-hexane.
Under ice-cooling, potassium hydroxide converts grayanic acid into grayanoldicarboxylic acid, while barium hydroxide treatment yields grayanolic acid. These reactions illustrate the compound's reactivity with bases and its capacity to form structurally distinct derivatives.
Grayanic acid also shows characteristic solubility behaviour in chemical tests. For example, when shaken with aqueous sodium bicarbonate, it forms an oily layer between the ethereal and aqueous phases, a property that facilitates its separation during analysis.
Occurrence
Grayanic acid was first discovered and isolated from Cladonia grayi. Initial extractions yielded about 0.7% grayanic acid from raw lichen material, producing 350 milligrams of pure crystals from 50 grams of lichen. Ethanol and chloroform facilitated this yield, aiding the purification process.
Although initially identified only in C. grayi, later research detected grayanic acid in other Cladonia species. One example is Cladonia anitae, an endemic species discovered in 1982 along the Atlantic Coast of southeastern North Carolina. In this species, grayanic acid is a major metabolite, found with usnic acid and rhodocladonic acid. Grayanic acid is also a major secondary metabolite in Jarmania tristis, a byssoid lichen endemic to Tasmania's cool temperate rainforests. In J. tristis, it co-occurs with usnic acid and 4-O-demethylgrayanic acid, shaping the species' distinctive chemistry.
Grayanic acid production varies geographically among C. grayi populations. Caribbean specimens exhibit chemical variants, with some populations producing grayanic acid alongside related compounds like stenosporonic and divaronic acids. This variation appears geographically influenced, with West Indian specimens showing different proportions of these compounds compared to North American ones. For example, Jamaican specimens typically contain grayanic acid and stenosporonic acid as major constituents, while other populations often produce grayanic acid alone.
Laboratory cultivation has revealed the conditions required for grayanic acid production by the fungal partner (mycobiont) of C. grayi. Isolated from its algal partner, the fungus produces substantial grayanic acid, particularly on solid media under dry conditions. Production starts days after transferring the fungus from liquid to solid growth medium and increases as aerial fungal filaments develop. Under optimal conditions, the cultured fungus can achieve production rates comparable to those of some non-lichen fungi producing similar compounds. The fungus's ability to synthesise grayanic acid in pure culture shows that the compound, while characteristic of the intact lichen, does not require the algal partner.
Taxonomic significance
Grayanic acid is integral to lichen taxonomy, particularly for distinguishing species in the Cladonia chlorophaea complex. Initially used with taste tests to separate species, detailed studies in the 1970s revealed more nuanced relationships between chemical composition and morphology.
Studies of North Carolina populations showed a correlation between grayanic acid and specific morphological traits. C. grayi, which contains grayanic acid, consistently exhibits smaller granules (soredia) in its podetial cups than C. cryptochlorophaea. These differences, unaffected by fumarprotocetraric acid content, indicate grayanic acid's taxonomic relevance. Similarly, in the Australasian genus Neophyllis, grayanic acid is a key chemotaxonomic marker distinguishing N. melacarpa from N. pachyphylla. N. melacarpa consistently produces grayanic acid with melacarpic acid and sometimes fumarprotocetraric acid, whereas N. pachyphylla contains only melacarpic acid. These chemical distinctions help resolve taxonomic ambiguities between the two species.
Taxonomic interpretations of chemical variation in these lichens have changed over time. Early classifications focused on the presence or absence of fumarprotocetraric acid (a bitter compound), but later studies suggested this variation reflects different genotypes of the same species rather than separate species. This pattern mirrors chemical variation seen in other lichens, such as the Cetraria islandica complex.
North American distribution studies reveal that specimens with both grayanic acid and fumarprotocetraric acid are more common in mountainous regions, while coastal populations primarily contain grayanic acid alone. Despite these chemical differences, the variants seem to belong to the same species, sharing consistent morphology aside from fumarprotocetraric acid presence.
Synthesis
The first total synthesis of grayanic acid was accomplished by Peter Djura and Melvyn Sargent in 1976 at the University of Western Australia. The key step in their synthetic route was an Ullmann reaction to construct the diaryl ether linkage. Their successful synthesis not only provided access to the compound but also definitively confirmed its structural assignment.
The synthetic pathway proceeded through several key intermediates. Initially, the researchers constructed the two aromatic rings separately. The first ring component was prepared from methyl acetoacetate and (E)-methyl dec-2-enoate through a series of transformations. The second ring was synthesised starting from a benzyl-protected hydroxybenzoate.
The crucial Ullmann coupling reaction joined these two components with a 73% yield, forming the diaryl ether intermediate. Following this step, hydrogenolysis produced a hydroxy acid which was then converted to methyl O-methylgrayanate through lactonisation with trifluoroacetic anhydride. The final stages of the synthesis involved careful manipulation of protecting groups to yield grayanic acid, which was identical in all respects to the natural product isolated from lichens.
Biosynthesis
The biosynthesis of grayanic acid involves fungal polyketide synthases and subsequent modifications, following a pathway similar to other lichen depsidones. Grayanic acid shares biosynthetic origins with sphaerophorin, a known lichen depside. Structural similarities and chemical transformation studies led Shibata and Chiang to propose sphaerophorin as a biosynthetic precursor to grayanic acid. The relationship is supported by shared structural features, such as similar methoxy and hydroxyl group arrangements on their benzenoid rings.
These foundational insights have been refined through genetic and biochemical studies. A 1985 study showed that grayanic acid biosynthesis depends entirely on the fungal genetics of C. grayi. Resynthesised lichens, formed by pairing fungal spores from grayanic acid-producing chemotypes with algal symbionts from unrelated lichens, consistently produced grayanic acid. This finding confirmed that the algal partner does not influence the chemotype, establishing the fungal component as the sole regulator of secondary metabolite production.
A 1992 study demonstrated that the fungal partner (mycobiont) of Cladonia grayi produces grayanic acid independently of its algal partner. Biosynthesis was linked to the development of aerial hyphae—thread-like fungal filaments that develop blue-fluorescent patches of grayanic acid under ultraviolet light. Production increased significantly under conditions of water stress and air exposure.
Genetic studies have elucidated the molecular mechanisms of grayanic acid biosynthesis. A biosynthetic gene cluster in C. grayi, including CgrPKS16 (a polyketide synthase that assembles the depside precursor 4-O-demethylsphaerophorin), drives the process. The pathway includes CYP682BG1, a cytochrome P450 monooxygenase for oxidative coupling, and an O-methyltransferase that adds a methyl group to complete the synthesis.
Grayanic acid belongs to a broader family of orcinol-type depsidones produced by lichens in the Cladonia chlorophaea group. These compounds form via biosequential patterns, with simpler depsides converting into more complex depsidones. This dynamic biosynthetic network produces related compounds, such as stenosporonic and divaronic acids, which exhibit variations in their carbon side-chain lengths across populations. This variation highlights the ecological and taxonomic relevance of grayanic acid in lichen communities.
The biosynthetic process shows distinct patterns during laboratory cultivation. Under suitable growing conditions, fungi first produce simpler depsides like 4-O-demethylsphaerophorin, followed by more complex depsidones like grayanic acid. This sequential process reflects the gene-driven enzymatic pathway and demonstrates the metabolic flexibility of lichen fungi.
Related compounds
Grayanic acid shares key structural features with sphaerophorin, a depside found in Sphaerophorus lichens. Cryptochlorophaeic acid and merochlorophaeic acid, structurally related to grayanic acid, were first identified in the Cladonia chlorophaea complex. These compounds, described in detail by Shibata and Chiang, share structural similarities with grayanic acid, including benzenoid and ester group arrangements.
In 1985, two additional related depsidones were reported: stenosporonic acid (C23H26O7) and divaronic acid (C21H22O7). These compounds are lower homologs in the same chemical series as grayanic acid, sharing its basic structure but differing in carbon side-chain lengths. Both compounds were first identified in Caribbean populations of C. grayi, where they occur alongside grayanic acid in varying proportions. Mass spectrometry confirmed their structures, with stenosporonic acid displaying a characteristic molecular ion at m/z (mass-to-charge ratio) 414 and divaronic acid at m/z 386.
Discovered in 1982, 4-O-demethylgrayanic acid (C22H24O7) naturally co-occurs with grayanic acid in several lichen species. This compound is present in all studied grayanic acid-producing lichens, including Cladonia and Gymnoderma melacarpum. Congrayanic acid, another related compound, may result from the nonenzymatic hydrolysis of grayanic acid, though it usually appears in trace amounts and is challenging to detect in unmanipulated extracts.
In 1980, congrayanic acid (C23H28O8) was first synthesised by treating grayanic acid with aqueous sodium hydroxide, cleaving the ester linkage. It crystallises as colorless prisms with a melting point of 183–183.5°C. This process confirmed structural aspects of grayanic acid, as congrayanic acid retained key spectroscopic features of the parent compound.
Researchers have prepared several derivatives of grayanic acid, including:
Methyl O-methylgrayanate, which forms needles with a melting point of 86.5–87.5°C
Benzyl grayanate, crystallising as prisms with a melting point of 101.5–102°C
Grayanoldicarboxylic acid, produced by treatment with potassium hydroxide
Grayanic acid belongs to the broader depsidone class, presumably formed through the oxidative cyclisation of p-depsides. This relationship is supported by the occasional, though rare, co-occurrence of depside-depsidone pairs in lichens.
References
Lichen products
Benzoic acids
Phenols
O-methylated natural phenols
Heptyl compounds
Benzodioxepines
Methoxy compounds
Heterocyclic compounds with 3 rings | Grayanic acid | Chemistry | 4,305 |
8,036,355 | https://en.wikipedia.org/wiki/Framework-specific%20modeling%20language | A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework.
FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
An FSML concept can be configured by selecting features and providing values for features.
Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept.
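The notion of a concept configuration can be sketched in a few lines. This is not an actual FSML tool: the concept name, the feature names, and the `com.example.myview` identifier are all hypothetical, chosen to echo the Eclipse workbench example later in the article. The sketch only shows the data shape: a concept decomposed into features, where selecting features and assigning values prescribes how the framework completion code must look.

```python
# Illustrative sketch of an FSML concept configuration (not a real FSML tool).
# A concept is configured by selecting features and providing values for them.

from dataclasses import dataclass, field

@dataclass
class ConceptConfiguration:
    concept: str                      # e.g. a hypothetical "View" concept
    features: dict = field(default_factory=dict)

    def select(self, name, value=True):
        """Select a feature, optionally with a value (an implementation choice)."""
        self.features[name] = value
        return self

# Configure a hypothetical "View" concept: the selected features state which
# implementation steps the framework completion code must realize.
cfg = ConceptConfiguration("View")
cfg.select("extendsViewPart").select("registeredId", "com.example.myview")
```

A forward-engineering step would then read such a configuration and generate the corresponding completion code; a reverse-engineering step would recover the configuration from existing code.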
Applications
FSMLs are used in model-driven development for creating models or specifications of software to be built.
FSMLs enable
the creation of the models from the framework completion code (that is, automated reverse engineering)
the creation of the framework completion code from the models (that is, automated forward engineering)
code verification through constraint checking on the model
automated round-trip engineering
Examples
Eclipse Workbench Part Interaction FSML
An example FSML for modeling Eclipse Parts (that is, editors and views) and Part Interactions (for example, listens to parts, requires adapter, provides selection).
The prototype implementation supports automated round-trip engineering of Eclipse plug-ins that implement workbench parts and part interactions.
See also
General-purpose modeling (GPM)
Model-driven engineering (MDE)
Domain-specific language (DSL)
Model-driven architecture (MDA)
Meta-Object Facility (MOF)
References
Specification languages
Modeling languages | Framework-specific modeling language | Engineering | 312 |
2,368,154 | https://en.wikipedia.org/wiki/Clinical%20decision%20support%20system | A clinical decision support system (CDSS) is a health information technology that provides clinicians, staff, patients, and other individuals with knowledge and person-specific information to help health and health care. CDSS encompasses a variety of tools to enhance decision-making in the clinical workflow. These tools include computerized alerts and reminders to care providers and patients, clinical guidelines, condition-specific order sets, focused patient data reports and summaries, documentation templates, diagnostic support, and contextually relevant reference information, among other tools. CDSSs constitute a major topic in artificial intelligence in medicine.
Characteristics
A clinical decision support system is an active knowledge system that uses variables of patient data to produce advice regarding health care. This implies that a CDSS is simply a decision support system focused on using knowledge management.
Purpose
The main purpose of modern CDSS is to assist clinicians at the point of care. This means that clinicians interact with a CDSS to help analyze and reach a diagnosis based on patient data for different diseases.
In the early days, CDSSs were conceived to literally make decisions for the clinician. The clinician would input the information and wait for the CDSS to output the "right" choice, and the clinician would simply act on that output. The modern methodology, however, uses CDSSs to assist: the clinician interacts with the CDSS, combining their own knowledge with the CDSS's to produce a better analysis of the patient's data than either the human or the CDSS could produce alone. Typically, a CDSS makes suggestions for the clinician to review, and the clinician is expected to pick out useful information from the presented results and discount erroneous CDSS suggestions.
The two main types of CDSS are knowledge-based and non-knowledge-based.
An example of how a clinician might use a clinical decision support system is a diagnosis decision support system (DDSS). DDSS requests some of the patients' data and, in response, proposes a set of appropriate diagnoses. The physician then takes the output of the DDSS and determines which diagnoses might be relevant and which are not, and, if necessary, orders further tests to narrow down the diagnosis.
Another example of a CDSS would be a case-based reasoning (CBR) system. A CBR system might use previous case data to help determine the appropriate amount of beams and the optimal beam angles for use in radiotherapy for brain cancer patients; medical physicists and oncologists would then review the recommended treatment plan to determine its viability.
Another important classification of a CDSS is based on the timing of its use. Physicians use these systems at the point of care to help them as they are dealing with a patient, with the timing of use being either pre-diagnosis, during diagnosis, or post-diagnosis. Pre-diagnosis CDSSs help the physician prepare a diagnosis. CDSSs used during diagnosis help review and filter the physician's preliminary diagnostic choices to improve outcomes. Post-diagnosis CDSSs are used to mine data to derive connections between patients, their past medical history, and clinical research in order to predict future events. As of 2012, it has been claimed that decision support will begin to replace clinicians in common tasks in the future.
Another approach, used by the National Health Service in England, is to use a DDSS to triage medical conditions out of hours by suggesting a suitable next step to the patient (e.g. call an ambulance, or see a general practitioner on the next working day). The suggestion, which may be disregarded by either the patient or the phone operative if common sense or caution suggests otherwise, is based on the known information and an implicit conclusion about what the worst-case diagnosis is likely to be; it is not always revealed to the patient because it might well be incorrect and is not based on a medically-trained person's opinion - it is only used for initial triage purposes.
Knowledge-based
Most CDSSs consist of three parts: the knowledge base, an inference engine, and a mechanism to communicate. The knowledge base contains the rules and associations of compiled data which most often take the form of IF-THEN rules. If this was a system for determining drug interactions, then a rule might be that IF drug X is taken AND drug Y is taken THEN alert the user. Using another interface, an advanced user could edit the knowledge base to keep it up to date with new drugs. The inference engine combines the rules from the knowledge base with the patient's data. The communication mechanism allows the system to show the results to the user as well as have input into the system.
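The three-part architecture described above can be sketched in miniature. The drug pairs and alert messages here are illustrative examples, not clinical guidance: the knowledge base is a list of IF-THEN rules, the inference engine matches rules against the patient's medication list, and the returned alerts stand in for the communication mechanism.

```python
# Minimal sketch of a knowledge-based CDSS for drug-interaction checking.
# The knowledge base holds IF-THEN rules; the inference engine combines
# them with patient data; alerts are the communication mechanism.
# Drug names and messages are illustrative only.

knowledge_base = [
    {"if": {"warfarin", "aspirin"}, "then": "Bleeding risk: warfarin + aspirin"},
    {"if": {"sildenafil", "nitroglycerin"}, "then": "Severe hypotension risk"},
]

def inference_engine(patient_drugs, rules):
    """Fire every rule whose IF conditions are all present in the patient data."""
    taken = set(patient_drugs)
    return [rule["then"] for rule in rules if rule["if"] <= taken]

alerts = inference_engine(["warfarin", "aspirin", "metformin"], knowledge_base)
# alerts == ["Bleeding risk: warfarin + aspirin"]
```

An advanced user updating the knowledge base, as the text describes, would simply add or edit entries in the rule list without touching the inference engine.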
An expression language such as GELLO or CQL (Clinical Quality Language) is needed for expressing knowledge artefacts in a computable manner. For example: if a patient has diabetes mellitus, and if the last haemoglobin A1c test result was less than 7%, recommend re-testing if it has been over six months, but if the last test result was greater than or equal to 7%, then recommend re-testing if it has been over three months.
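The haemoglobin A1c recommendation above can be written as executable logic. This is a plain-Python stand-in for a CQL or GELLO knowledge artefact, not actual CQL syntax; the function name and thresholds simply transcribe the rule stated in the text.

```python
# The A1c re-testing rule from the text, as a computable sketch
# (a plain-Python stand-in for a CQL/GELLO knowledge artefact).

def recommend_a1c_retest(has_diabetes, last_a1c_pct, months_since_test):
    if not has_diabetes:
        return False
    if last_a1c_pct < 7.0:
        return months_since_test > 6   # below 7%: re-test after six months
    return months_since_test > 3       # 7% or above: re-test after three months
```

For example, a diabetic patient whose last result was 6.5% needs re-testing only once more than six months have elapsed, while one at 7.2% is flagged after three months.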
The current focus of the HL7 CDS WG is to build on the Clinical Quality Language (CQL). The U.S. Centers for Medicare & Medicaid Services (CMS) has announced that it plans to use CQL for the specification of Electronic Clinical Quality Measures (eCQMs).
Non-knowledge-based
CDSSs which do not use a knowledge base use a form of artificial intelligence called machine learning, which allows computers to learn from past experience and/or find patterns in clinical data. This eliminates the need for writing rules and for expert input. However, since systems based on machine learning cannot explain the reasons for their conclusions, most clinicians do not use them directly for diagnoses, for reasons of reliability and accountability. Nevertheless, they can be useful as post-diagnostic systems, suggesting patterns for clinicians to look into in more depth.
As of 2012, three types of non-knowledge-based systems are support-vector machines, artificial neural networks and genetic algorithms.
Artificial neural networks use nodes and weighted connections between them to analyse the patterns found in patient data to derive associations between symptoms and a diagnosis.
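A single artificial neuron already shows the mechanism described above: weighted connections adjusted from data until symptoms become associated with a diagnosis. The symptom vectors and labels below are synthetic toy data, not clinical data.

```python
# A single logistic neuron trained on toy symptom data, illustrating how
# weighted connections come to associate symptoms with a diagnosis.
# All data here is synthetic and for illustration only.

import math

# rows: [fever, cough, rash] -> 1 if the (hypothetical) diagnosis is present
data = [([1, 1, 0], 1), ([1, 0, 0], 1), ([0, 0, 1], 0), ([0, 1, 1], 0)]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))       # probability the diagnosis is present

for _ in range(1000):                   # gradient-descent training loop
    for x, y in data:
        err = predict(x) - y
        for i in range(3):
            w[i] -= lr * err * x[i]
        b -= lr * err
```

After training, the neuron assigns a high probability to symptom patterns that co-occurred with the diagnosis in the training data and a low probability to the rest.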
Genetic algorithms are based on simplified evolutionary processes using directed selection to achieve optimal CDSS results. The selection algorithms evaluate components of random sets of solutions to a problem. The solutions that come out on top are then recombined and mutated and run through the process again. This happens over and over until the proper solution is discovered. They are functionally similar to neural networks in that they are also "black boxes" that attempt to derive knowledge from patient data.
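The select-recombine-mutate loop described above can be shown with a toy genetic algorithm. The fitness function here (matching a fixed target bit pattern, standing in for "a good solution") is purely illustrative.

```python
# Toy genetic algorithm illustrating directed selection: evaluate random
# candidate solutions, keep the best, recombine and mutate them, repeat.
# The bit-pattern fitness target is a stand-in for a real objective.

import random
random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(solution):
    return sum(a == b for a, b in zip(solution, TARGET))

def evolve(pop_size=20, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]             # recombination (crossover)
            if random.random() < 0.1:             # mutation
                i = random.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the best solutions survive each round unchanged, fitness never decreases across generations, which is why the loop reliably converges toward the target.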
Non-knowledge-based networks often focus on a narrow list of symptoms, such as symptoms for a single disease, as opposed to the knowledge-based approach, which covers the diagnosis of many diseases.
An example of a non-knowledge-based CDSS is a web server developed using a support vector machine for the prediction of gestational diabetes in Ireland.
Regulations
History, United States
The Institute of Medicine (IOM) published a report in 1999, To Err Is Human, which focused on the patient safety crisis in the United States, pointing to the high number of deaths attributable to medical error. This statistic attracted great attention to the quality of patient care. The IOM promoted the usage of health information technology, including clinical decision support systems, to advance the quality of patient care.
With the enactment of the American Recovery and Reinvestment Act of 2009 (ARRA), there was a push for widespread adoption of health information technology through the Health Information Technology for Economic and Clinical Health Act (HITECH). Through these initiatives, more hospitals and clinics were integrating electronic medical records (EMRs) and computerized physician order entry (CPOE) within their health information processing and storage.
Despite the absence of laws, the CDSS vendors would almost certainly be viewed as having a legal duty of care to both the patients who may adversely be affected due to CDSS usage and the clinicians who may use the technology for patient care. However, duties of care legal regulations are not explicitly defined yet.
With the enactment of the HITECH Act included in the ARRA, encouraging the adoption of health IT, more detailed case laws for CDSS and EMRs were still being defined by the Office of the National Coordinator for Health Information Technology (ONC) and approved by the Department of Health and Human Services (HHS). A definition of "meaningful use" had yet to be published.
Effectiveness
The evidence of the effectiveness of CDSS is mixed. There are certain diseases which benefit more from CDSS than other disease entities. A 2018 systematic review identified six medical conditions in which CDSS improved patient outcomes in hospital settings, including blood glucose management, blood transfusion management, physiologic deterioration prevention, pressure ulcer prevention, acute kidney injury prevention, and venous thromboembolism prophylaxis.
A 2014 systematic review did not find a benefit in terms of risk of death when the CDSS was combined with the electronic health record. There may be some benefits, however, in terms of other outcomes.
A 2005 systematic review concluded that CDSSs improved practitioner performance in 64% of the studies and patient outcomes in 13% of the studies. CDSS features associated with improved practitioner performance included automatic electronic prompts rather than requiring user activation of the system.
A 2005 systematic review found "Decision support systems significantly improved clinical practice in 68% of trials." The CDSS features associated with success included integration into the clinical workflow rather than as a separate log-in or screen, electronic rather than paper-based templates, providing decision support at the time and location of care rather than prior, and providing care recommendations.
However, later systematic reviews were less optimistic about the effects of CDS, with one from 2011 stating "There is a large gap between the postulated and empirically demonstrated benefits of [CDSS and other] eHealth technologies... their cost-effectiveness has yet to be demonstrated".
A five-year evaluation of the effectiveness of a CDSS in implementing rational treatment of bacterial infections for antimicrobial stewardship was published in 2014; according to the authors, it was the first long-term study of a CDSS.
Challenges to adoption
Clinical challenges
Much effort has been put forth by many medical institutions and software companies to produce viable CDSSs to support all aspects of clinical tasks. However, with the complexity of clinical workflows and the demands on staff time high, care must be taken by the institution deploying the support system to ensure that the system becomes an integral part of the clinical workflow. Some CDSSs have met with varying amounts of success, while others have suffered from common problems preventing or reducing successful adoption and acceptance.
Two sectors of the healthcare domain in which CDSSs have had a large impact are the pharmacy and billing sectors. Commonly used pharmacy and prescription-ordering systems now perform batch-based checking of orders for negative drug interactions and report warnings to the ordering professional. Another sector of success for CDSS is in billing and claims filing. Since many hospitals rely on Medicare reimbursements to stay in operation, systems have been created to help examine both a proposed treatment plan and the current rules of Medicare to suggest a plan that attempts to address both the care of the patient and the financial needs of the institution.
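The batch interaction check described above can be sketched as follows. The interaction table, drug names, and warning texts are illustrative placeholders, not taken from any real formulary or vendor API:

```python
# Hypothetical interaction table: each interacting pair maps to a warning.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "myopathy risk",
}

def check_orders(order: list[str]) -> list[str]:
    """Return a warning for every interacting pair found in a medication order."""
    warnings = []
    for i, drug_a in enumerate(order):
        for drug_b in order[i + 1:]:
            note = INTERACTIONS.get(frozenset({drug_a, drug_b}))
            if note:
                warnings.append(f"{drug_a} + {drug_b}: {note}")
    return warnings

print(check_orders(["warfarin", "aspirin", "metformin"]))
# ['warfarin + aspirin: increased bleeding risk']
```

A real system would draw on a curated, regularly updated interaction database rather than a hard-coded table, but the pairwise-scan shape of the check is the same.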
Other CDSSs that are aimed at diagnostic tasks have found success, but are often very limited in deployment and scope. The Leeds Abdominal Pain System went operational in 1971 for the University of Leeds hospital. It was reported to have produced a correct diagnosis in 91.8% of cases, compared to the clinicians' success rate of 79.6%.
Despite the wide range of efforts by institutions to produce and use these systems, widespread adoption and acceptance have still not yet been achieved for most offerings. One large roadblock to acceptance has historically been workflow integration. A tendency to focus only on the functional decision-making core of the CDSS existed, causing a deficiency in planning how the clinician will use the product in situ. CDSSs were stand-alone applications, requiring the clinician to cease working on their current system, switch to the CDSS, input the necessary data (even if it had already been inputted into another system), and examine the results produced. The additional steps break the flow from the clinician's perspective and cost precious time.
Technical challenges and barriers to implementation
Clinical decision support systems face steep technical challenges in a number of areas. Biological systems are profoundly complicated, and a clinical decision may utilise an enormous range of potentially relevant data. For example, an electronic evidence-based medicine system may potentially consider a patient's symptoms, medical history, family history and genetics, as well as historical and geographical trends of disease occurrence, and published clinical data on therapeutic effectiveness when recommending a patient's course of treatment.
Clinically, a large deterrent to CDSS acceptance is workflow integration.
While it has been shown that clinicians require explanations of machine-learning-based CDSS in order to be able to understand and trust their suggestions, there is an overall distinct lack of application of explainable artificial intelligence in the context of CDSS, adding another barrier to the adoption of these systems.
Another source of contention with many medical support systems is that they produce a massive number of alerts. When systems produce a high volume of warnings (especially those that do not require escalation), besides the annoyance, clinicians may pay less attention to warnings, causing potentially critical alerts to be missed. This phenomenon is called alert fatigue.
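One common mitigation for alert fatigue is to triage alerts by severity so that only the most serious interrupt the clinician, while the rest are logged passively. The sketch below uses a hypothetical severity scale and threshold:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    severity: int  # 1 = informational ... 5 = critical (hypothetical scale)

def triage(alerts, interrupt_threshold=4):
    """Split alerts into those shown interruptively and those logged quietly."""
    interruptive = [a for a in alerts if a.severity >= interrupt_threshold]
    passive = [a for a in alerts if a.severity < interrupt_threshold]
    return interruptive, passive

alerts = [Alert("duplicate therapy", 2), Alert("severe drug interaction", 5)]
shown, logged = triage(alerts)
print([a.message for a in shown])  # ['severe drug interaction']
```

Tuning the threshold is itself a clinical decision: set it too high and critical warnings are buried; too low and the desensitisation described above returns.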
Maintenance
One of the core challenges facing CDSS is difficulty in incorporating the extensive quantity of clinical research being published on an ongoing basis. In a given year, tens of thousands of clinical trials are published. Currently, each one of these studies must be manually read, evaluated for scientific legitimacy, and incorporated into the CDSS in an accurate way. In 2004, it was stated that the process of gathering clinical data and medical knowledge and putting them into a form that computers can manipulate to assist in clinical decision-support is "still in its infancy".
Nevertheless, it is more feasible for a business to do this centrally, even if incompletely, than for each doctor to try to keep up with all the research being published.
In addition to being laborious, integration of new data can sometimes be difficult to quantify or incorporate into the existing decision support schema, particularly in instances where different clinical papers may appear conflicting. Properly resolving these sorts of discrepancies is often the subject of clinical papers itself (see meta-analysis), which often take months to complete.
Evaluation
In order for a CDSS to offer value, it must demonstrably improve clinical workflow or outcome. Evaluation of CDSS quantifies its value to improve a system's quality and measure its effectiveness. Because different CDSSs serve different purposes, no generic metric applies to all such systems; however, attributes such as consistency (with itself and with experts) often apply across a wide spectrum of systems.
The evaluation benchmark for a CDSS depends on the system's goal: for example, a diagnostic decision support system may be rated based upon the consistency and accuracy of its classification of disease (as compared to physicians or other decision support systems). An evidence-based medicine system might be rated based upon a high incidence of patient improvement or higher financial reimbursement for care providers.
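For a diagnostic system, the accuracy benchmark above can be made concrete as simple agreement with a physician reference standard. This is a minimal sketch with made-up diagnosis labels; real evaluations typically also use chance-corrected agreement measures:

```python
def diagnostic_accuracy(system_labels, reference_labels):
    """Fraction of cases where the system's diagnosis matches the reference."""
    if len(system_labels) != len(reference_labels):
        raise ValueError("label lists must be the same length")
    hits = sum(s == r for s, r in zip(system_labels, reference_labels))
    return hits / len(reference_labels)

# Hypothetical: the system agrees with physicians on 2 of 3 cases.
print(diagnostic_accuracy(["flu", "uti", "flu"], ["flu", "uti", "cold"]))
```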
Combining with electronic health records
Implementing EHRs is an inevitable challenge, because it is a relatively uncharted area with many issues and complications arising during the implementation phase, as the numerous studies undertaken show. Although the challenges of implementing electronic health records (EHRs) have received some attention, less is known about transitioning from legacy EHRs to newer systems.
EHRs are a way to capture and utilise real-time data to provide high-quality patient care, ensuring efficiency and effective use of time and resources. Incorporating EHR and CDSS together into the process of medicine has the potential to change the way medicine has been taught and practiced. It has been said that "the highest level of EHR is a CDSS".
Since "clinical decision support systems (CDSS) are computer systems designed to impact clinician decision making about individual patients at the point in time that these decisions are made", it is clear that it would be beneficial to have a fully integrated CDSS and EHR.
Even though the benefits can be seen, fully implementing a CDSS integrated with an EHR has historically required significant planning by the healthcare facility/organisation for the CDSS to be successful and effective.
The success and effectiveness can be measured by the increased patient care being delivered and reduced adverse events occurring. In addition, there would be a saving of time and resources and benefits in terms of autonomy and financial benefits to the healthcare facility/organisation.
Benefits
A successful CDSS/EHR integration will allow the provision of best practice, high-quality care to the patient, which is the ultimate goal of healthcare. Three areas that can be addressed with the implementation of CDSS and Electronic Health Records (EHRs), are:
Medication prescription errors
Adverse drug events
Other medical errors
CDSSs will be most beneficial in the future when healthcare facilities are "100% electronic" in terms of real-time patient information, thus simplifying the number of modifications that have to occur to ensure that all the systems are up to date with each other.
The measurable benefits of clinical decision support systems on physician performance and patient outcomes remain the subject of ongoing research.
Barriers
Implementing electronic health records (EHR) in healthcare settings incurs challenges, none more important than maintaining efficiency and safety during rollout. For the implementation process to be effective, an understanding of the EHR users' perspectives is key to the success of EHR implementation projects. In addition, adoption needs to be actively fostered through a bottom-up, clinical-needs-first approach. The same can be said for CDSS.
As of 2007, the main areas of concern with moving into a fully integrated EHR/CDSS system have been:
Privacy
Confidentiality
User-friendliness
Document accuracy and completeness
Integration
Uniformity
Acceptance
Alert desensitisation
as well as the key aspects of data entry that need to be addressed when implementing a CDSS to avoid potential adverse events from occurring. These aspects include whether:
correct data is being used
all the data has been entered into the system
current best practice is being followed
the data is evidence-based
A service oriented architecture has been proposed as a technical means to address some of these barriers.
Status in Australia
As of July 2015, the planned transition to EHRs in Australia is facing difficulties. Most healthcare facilities are still running completely paper-based systems; some are in a transition phase of scanned EHRs or moving towards such a transition phase.
Victoria has attempted to implement EHR across the state with its HealthSMART program, but it has cancelled the project due to unexpectedly high costs.
South Australia (SA) however is slightly more successful than Victoria in the implementation of an EHR. This may be because all public healthcare organisations in SA are centrally run.
SA is in the process of implementing the Enterprise Patient Administration System (EPAS). This system is the foundation for an EHR in all public hospitals and health care sites in SA, and it was expected that by the end of 2014 all facilities in SA would be connected to it. This would allow for successful integration of CDSS into SA and increase the benefits of the EHR.
By July 2015 it was reported that only 3 out of 75 health care facilities implemented EPAS.
With the largest health system in the country and a federated rather than a centrally administered model, New South Wales is making consistent progress towards statewide implementation of EHRs. The current iteration of the state's technology, eMR2, includes CDSS features such as a sepsis pathway for identifying at-risk patients based upon data input to the electronic record. As of June 2016, 93 of 194 sites in-scope for the initial roll-out had implemented eMR2.
Status in Finland
The EBMEDS Clinical Decision Support service provided by Duodecim Medical Publications Ltd is used by more than 60% of Finnish public health care doctors.
Status in India
There have been many recent initiatives in India to promote digital health. New platforms such as Eka.care, Clinisio, and Raxa are emerging in India, providing EHR-integrated clinical decision support.
Research
Prescription errors
A study in the UK tested the Salford Medication Safety Dashboard (SMASH), a web-based CDSS application that helps GPs and pharmacists find people in their electronic health records who might face safety hazards due to prescription errors. The dashboard was successfully used to identify and help patients with already-registered unsafe prescriptions, and it later helped to monitor new cases as they appeared.
See also
Gello Expression Language
International Health Terminology Standards Development Organisation
Medical algorithm
Medical informatics
Personal Health Information Protection Act (a law in force in Ontario)
Treatment decision support (decision support tools for patients)
Artificial intelligence in healthcare
References
External links
Duodecim EBMEDS Clinical Decision Support
Decision support chapter from Coiera's Guide to Health Informatics
OpenClinical maintains an extensive archive of Artificial Intelligence systems in routine clinical use.
Robert Trowbridge/ Scott Weingarten. Chapter 53. Clinical Decision Support Systems
Stanford CDSS
Information systems
Health informatics
Medical software
Medical expert systems
Applications of artificial intelligence
Decision support systems
| Clinical decision support system | Technology,Biology | 5,186 |
896 | https://en.wikipedia.org/wiki/Argon | Argon is a chemical element; it has symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust.
Nearly all argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas.
The name "argon" is derived from the Greek word ἀργόν, neuter singular form of ἀργός, meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is extracted industrially by the fractional distillation of liquid air. It is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. It is also used in incandescent and fluorescent lighting, and other gas-discharge tubes. It makes a distinctive blue-green gas laser. It is also used in fluorescent glow starters.
Characteristics
Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature.
Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculations predict several more argon compounds that should be stable but have not yet been synthesized.
History
Argon (Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity, which impressed the scientists who named this first noble gas to be discovered. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785.
Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon.
Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements.
Prior to 1957, the symbol for argon was "A". This was changed to Ar after the International Union of Pure and Applied Chemistry published the work Nomenclature of Inorganic Chemistry in 1957.
Occurrence
Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively.
Isotopes
The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating.
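The K–Ar dating mentioned above rests on the decay relation between 40K and the radiogenic 40Ar that accumulates in a rock. This is a minimal sketch using the half-life and branching fraction quoted in this section; the function name is illustrative:

```python
import math

# Decay constants for 40K, from the values given in the text:
# half-life 1.25 billion years, 11.2% of decays yielding 40Ar.
HALF_LIFE_40K_YR = 1.25e9
LAMBDA_TOTAL = math.log(2) / HALF_LIFE_40K_YR   # total decay constant, per year
BRANCH_TO_AR = 0.112                            # fraction of decays producing 40Ar

def k_ar_age(ar40_to_k40: float) -> float:
    """Age in years from the measured radiogenic 40Ar*/40K ratio.

    Standard K-Ar age equation:
        t = (1/lambda) * ln(1 + (lambda/lambda_Ar) * 40Ar*/40K)
    """
    lam_ar = BRANCH_TO_AR * LAMBDA_TOTAL
    return math.log(1 + (LAMBDA_TOTAL / lam_ar) * ar40_to_k40) / LAMBDA_TOTAL

# A rock with no accumulated radiogenic argon is newly formed:
print(round(k_ar_age(0.0)))  # 0
```

The forward model is the accumulation law 40Ar*/40K = (λ_Ar/λ)(e^{λt} − 1); the function above is its exact inverse.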
In Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created by neutron capture by 40Ca followed by an alpha particle emission, as a result of subsurface nuclear explosions. It has a half-life of 35 days.
Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
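The parenthetical arithmetic above (31.5 ppmv = 9340 ppmv × 0.337%) simply scales the total atmospheric argon abundance by each isotopic fraction; as a quick check:

```python
# Partial abundances (ppmv) of each argon isotope in Earth's atmosphere,
# from the 9340 ppmv total and the isotopic fractions given in this section.
TOTAL_AR_PPMV = 9340
ISOTOPE_FRACTIONS = {"40Ar": 0.996, "36Ar": 0.00337, "38Ar": 0.00063}

partial = {iso: TOTAL_AR_PPMV * f for iso, f in ISOTOPE_FRACTIONS.items()}
print(round(partial["36Ar"], 1))  # 31.5, matching the figure in the text
```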
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar.
The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table).
Compounds
Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound, with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975, though it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery prompted the recognition that argon could form weakly bound compounds, even though HArF was not the first. It is stable up to 17 kelvins (−256 °C). The metastable ArCF2^2+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium, ArH+) ions, has been detected in interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space.
Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production
Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year.
Applications
Argon has several desirable properties:
Argon is a chemically inert gas.
Argon is the cheapest alternative when nitrogen is not sufficiently inert.
Argon has low thermal conductivity.
Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications.
Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. It is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of its applications arise simply because it is inert and relatively cheap.
Industrial processes
Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium.
Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life.
Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam.
Scientific research
Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine grained three-dimensional imaging of neutrino interactions.
At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials.
Preservative
Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon.
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry.
Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced.
Laboratory equipment
Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus.
Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication.
Medical use
Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient.
Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects.
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood.
Lighting
Incandescent lights are filled with argon to protect the filaments from oxidation at high temperature. Argon is also used for the specific way it ionizes and emits light, such as in plasma globes and in calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers.
Miscellaneous uses
Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity.
Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas, stored at high pressure, is allowed to expand to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and of other missiles that use cooled thermal seeker heads.
Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks.
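The dating applications above rest on the radioactive decay law. As an illustrative sketch (not from the source), the age of a sample can be recovered from the fraction of the parent isotope remaining, using the 269-year half-life of argon-39 stated above:

```python
import math

# Illustrative sketch: radiometric dating uses the decay law
# N(t) = N0 * (1/2)**(t / t_half). Solving for t gives the age
# from the fraction of the parent isotope still present.

AR39_HALF_LIFE_YEARS = 269  # half-life of argon-39, as stated above

def age_from_remaining_fraction(remaining: float, half_life: float) -> float:
    """Age in the half-life's units, given the fraction of isotope remaining."""
    return half_life * math.log(1 / remaining) / math.log(2)

# A sample retaining half its original Ar-39 is one half-life old:
print(age_from_remaining_fraction(0.5, AR39_HALF_LIFE_YEARS))   # → 269.0
print(age_from_remaining_fraction(0.25, AR39_HALF_LIFE_YEARS))  # → 538.0
```

Potassium–argon dating uses the same principle, but measures the ratio of the daughter product (argon-40) to the remaining parent (potassium-40) rather than the parent fraction alone.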
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse.
Safety
Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
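The "38% more dense" figure can be checked from molar masses, since ideal-gas densities at the same temperature and pressure scale with molar mass. A back-of-envelope sketch (the molar-mass values are standard reference figures, not from the source):

```python
# Back-of-envelope check of the "38% denser than air" claim.
# Assumed molar masses: Ar ≈ 39.95 g/mol, dry air ≈ 28.96 g/mol.
# At equal temperature and pressure, ideal-gas density ∝ molar mass.

M_ARGON = 39.95  # g/mol
M_AIR = 28.96    # g/mol (average for dry air)

excess = M_ARGON / M_AIR - 1
print(f"Argon is about {excess:.0%} denser than air")
```

Being denser than air is what lets leaked argon pool in low-lying, confined spaces such as the pipe section described above.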
See also
Industrial gas
Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors.
References
Further reading
On triple point pressure at 69 kPa.
On triple point pressure at 83.8058 K.
External links
Argon at The Periodic Table of Videos (University of Nottingham)
USGS Periodic Table – Argon
Diving applications: Why Argon?
Chemical elements
E-number additives
Noble gases
Industrial gases | Argon | Physics,Chemistry,Materials_science | 3,963 |
19,810,946 | https://en.wikipedia.org/wiki/Exophilia | Exophilia is "a fetishism whose object is the sexuality of extraterrestrials." In other words, it is a desire for extraterrestrial, alien, or other non-human life forms. The term was coined by the writer Supervert in the 2001 book Extraterrestrial Sex Fetish. Because exophilia originated in a literary work, it is not a paraphilia recognized by experts such as the American Psychiatric Association in its Diagnostic and Statistical Manual, Fifth Edition (DSM).
References
Further reading
Sexuality | Exophilia | Biology | 115 |
75,993,685 | https://en.wikipedia.org/wiki/Wairarapa%20Dark%20Sky%20Reserve | The Wairarapa Dark Sky Reserve is an International Dark Sky Reserve in the Wairarapa region in the southern part of the North Island of New Zealand. The reserve was designated by DarkSky International in January 2023. It was the second dark sky reserve to be certified in New Zealand (after the Aoraki Mackenzie International Dark Sky Reserve was recognised in 2012). The reserve includes the Aorangi Forest Park and the South Wairarapa and Carterton Districts.
The reserve is certified as an International Dark Sky Reserve, requiring a dark "core" zone that is surrounded by a populated area where policy controls protect the darkness of the core. For the Wairarapa reserve, the dark core is the entire area of the Aorangi Forest Park in the south of the reserve. All measurements of night sky luminance in the core area are darker than 21.3 mag/arcsec2 (corresponding to Bortle scale 3), and in places are as dark as 21.8 mag/arcsec2 (Bortle scale 1). Large parts of the Wairarapa region outside the core of the reserve exceed the minimum value of 21.2 mag/arcsec2 required for the core. Measurements taken in the town of Martinborough show that although it is located in the periphery of the reserve, it almost meets the minimum requirements for the core.
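On the mag/arcsec2 scale used above, larger numbers mean darker skies, with each 1-magnitude step a factor of 10^0.4 ≈ 2.512 in surface brightness. A minimal sketch of checking readings against the core minimum, using a commonly quoted approximate zero point of about 1.08×10^5 cd/m² at 0 mag/arcsec² (the zero point is an assumption, not from the source):

```python
# Sketch, with assumptions noted in the lead-in: convert sky-quality
# readings in mag/arcsec^2 to luminance and test them against the
# reserve's core minimum. Larger mag/arcsec^2 values = darker skies.

def sky_luminance_cd_m2(mag_per_arcsec2: float) -> float:
    """Approximate luminance, using an assumed 1.08e5 cd/m^2 zero point."""
    return 1.08e5 * 10 ** (-0.4 * mag_per_arcsec2)

CORE_MINIMUM = 21.2  # minimum required for the reserve's core, per above

for reading in (21.3, 21.8):
    meets = reading >= CORE_MINIMUM  # darker than (or equal to) the minimum
    print(f"{reading} mag/arcsec^2 -> {sky_luminance_cd_m2(reading):.2e} cd/m^2,"
          f" meets core minimum: {meets}")
```

The half-magnitude spread between the core's readings (21.3 to 21.8) thus corresponds to roughly a 60% difference in sky brightness.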
History
Proposals for a dark sky reserve in the South Wairarapa District were initially developed in 2017 and presented to an initial public meeting in Martinborough. In 2018, consultation about the proposals included the Carterton and Masterton districts. At that time, the mayor of Carterton stated that their lighting already complied with the standards, and that they would join with the South Wairarapa District in making an application for designation.
An application for Dark Sky Reserve status was submitted in December 2022. The certification by DarkSky International in 2023 was the result of 5 years of volunteer work by the Wairarapa Dark Sky Association Incorporated (a registered charity in New Zealand), and the South Wairarapa and Carterton district councils, together with other local interested parties.
In 2023, the Masterton District Council, governing an area adjacent to the designated reserve, began planning and consultation for potentially expanding the Wairarapa Dark Sky Reserve to include the Masterton District. The work involved in making an application includes dark sky measurements and photos, a plan for lighting, and reductions in artificial lighting including changes to types of lighting and installation of shields.
References
External links
Official website
Dark Sky Reserve at South Wairarapa District Council
Star Safari Star-gazing business in Ponatahi, Wairarapa
2023 establishments in New Zealand
Dark-sky preserves in New Zealand
International Dark Sky Reserves
Wairarapa | Wairarapa Dark Sky Reserve | Astronomy | 564 |
282,377 | https://en.wikipedia.org/wiki/Anvil | An anvil is a metalworking tool consisting of a large block of metal (usually forged or cast steel), with a flattened top surface, upon which another object is struck (or "worked").
Anvils are massive because the higher their inertia, the more efficiently they cause the energy of striking tools to be transferred to the work piece. In most cases the anvil is used as a forging tool. Before the advent of modern welding technology, it was the primary tool of metal workers.
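The mass argument can be illustrated with a deliberately simplified collision model (this model is an assumption for illustration, not from the source): momentum conservation sends roughly a fraction m/(m+M) of the hammer's energy into anvil recoil, so the heavier the anvil, the less of each blow is wasted.

```python
# Simplified two-body model (an illustrative assumption, not from the
# source): energy carried off by anvil recoil is treated as "lost".
# For a hammer of mass m striking work on an anvil of mass M, the
# recoil takes roughly m / (m + M) of the blow's energy.

def energy_fraction_into_work(hammer_kg: float, anvil_kg: float) -> float:
    """Fraction of the hammer's energy NOT lost to anvil recoil."""
    return anvil_kg / (hammer_kg + anvil_kg)

for anvil_kg in (10, 50, 150):
    f = energy_fraction_into_work(1.5, anvil_kg)
    print(f"1.5 kg hammer on {anvil_kg} kg anvil: {f:.1%} of blow retained")
```

Under this toy model a 150 kg anvil retains over 99% of a 1.5 kg hammer's blow, while a 10 kg block loses more than a tenth of it, which is why makeshift light anvils tire the smith.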
The great majority of modern anvils are made of cast steel that has been heat treated by either flame or electric induction. Inexpensive anvils have been made of cast iron and low-quality steel, but are considered unsuitable for serious use, as they deform and lack rebound when struck.
The largest single-piece, heat-treated tool-steel anvil weighs 1,600 pounds; it was made in 2023 by Oak Lawn Blacksmith. Larger anvils made from multiple pieces exist, such as the "mile long anvil" made by Napier, which weighs 6,500 pounds but is neither heat treated nor made from tool steel.
Structure
The primary work surface of the anvil is known as the face. It is generally made of hardened steel and should be flat and smooth with rounded edges for most work. Any marks on the face will be transferred to the work. Also, sharp edges tend to cut into the metal being worked and may cause cracks to form in the workpiece. The face is hardened and tempered to resist the blows of the smith's hammer, so the anvil face does not deform under repeated use. A hard anvil face also reduces the amount of force lost in each hammer blow. Hammers, tools, and work pieces of hardened steel should never directly strike the anvil face with full force, as they may damage it; this can result in chipping or deforming of the anvil face.
The horn of the anvil is a conical projection used to form various round shapes and is generally unhardened steel or iron. The horn is used mostly in bending operations. It also is used by some smiths as an aid in "drawing down" stock (making it longer and thinner). Some anvils, mainly European, are made with two horns, one square and one round. Also, some anvils are made with side horns or clips for specialized work.
The step is the area of the anvil between the "horn" and the "face". It is soft and is used for cutting; its purpose is to prevent damaging the steel face of the anvil by conducting such operations there and so as not to damage the cutting edge of the chisel, though many smiths shun this practice as it will damage the anvil over time.
There have also been other additions to the anvil such as an upsetting block; this is used to upset steel, generally in long strips/bars as it is placed between the feet of the anvil. Upsetting is a technique often used by blacksmiths for making the steel workpiece short and thick, having probably been originally long and thin.
The hardy hole is a square hole into which specialized forming and cutting tools, called hardy tools, are placed. It is also used in punching and bending operations. These are not to be confused with swage blocks, although their purpose is similar.
The pritchel hole is a small round hole that is present on most modern anvils. Some anvils have more than one. It is used mostly for punching. At times, smiths will fit a second tool to this hole to allow the smith more flexibility when using more than one anvil tool.
Placement
The anvil is placed as near to the forge as is convenient, generally no more than one step from the forge to prevent heat loss in the work piece.
An anvil needs to be placed upon a sturdy base made from an impact and fire resistant material. Common methods of attaching an anvil are spikes, chains, steel or iron straps, clips, bolts where there are holes provided, and cables.
The most common base traditionally was a hard wood log or large timber buried several feet into the floor of the forge shop. In the industrial era, cast iron bases became available. They had the advantage of adding additional weight to the anvil, making it more stable. These bases are highly sought after by collectors today. When concrete became widely available, there was a trend to make steel reinforced anvil bases by some smiths, though this practice has largely been abandoned. In more modern times, anvils have been placed upon bases fabricated from steel, often a short thick section of a large I-beam. In addition, bases have been made from dimensional lumber bolted together to form a large block or steel drums full of oil-saturated sand to provide a damping effect. In recent times, tripod bases of fabricated steel have become popular.
Types
There are many designs for anvils, which are often tailored for a specific purpose or to meet the needs of a particular smith. For example, there were anvils specifically made for farriers, general smiths, cutlers, chain makers, armorers, saw tuners, coach makers, coopers, and many other types of metal workers. Most of these anvil types look similar, but some are radically different. Saw maker anvils, for instance, are generally a large rectangular block of steel that have a harder surface than most other anvils due to hammering on a harder steel for saws. Bladesmith anvils tend to be rectangular with a hardy and pritchel, but no horn. Such designs have originated in diverse geographic locations. Several styles of anvils include, Bavarian, French Pig anvil, Austrian, and Chinese turtle anvil.
The Bavarian style is known for its sloped brow. The brow was first used in medieval times to make armor on the church windows below the brow. Common manufacturers include Söding Halbach and Holthaus. This style of anvil is known not to sway in the face because of the extra mass of the brow.
The common blacksmith's anvil is made of either forged or cast steel, forged wrought iron with a hard steel face or cast iron with a hard steel face. Cast iron anvils are not used for forging as they are incapable of standing up to the impact and will crack and dent. Also, cast iron anvils without a hard steel face do not have the rebound that a harder anvil would and will tire out the smith. Historically, some anvils have been made with a smooth top working face of hardened steel welded to a cast iron or wrought iron body, though this manufacturing method is no longer in use. At one end, the common smith's anvil has a projecting conical bick (beak, horn) used for hammering curved work pieces. The other end is typically called the heel. Occasionally, the other end is also provided with a bick, partly rectangular in section. Most anvils made since the late 18th century also have a hardy hole and a pritchel hole where various tools, such as the anvil-cutter or hot chisel, can be inserted and held by the anvil. Some anvils have several hardy and pritchel holes, to accommodate a wider variety of hardy tools and pritchels. An anvil may also have a softer pad for chisel work.
An anvil for a power hammer is usually supported on a massive anvil block, sometimes weighing over 800 tons for a 12-ton hammer; this again rests on a strong foundation of timber and masonry or concrete.
An anvil may have a marking indicating its weight, manufacturer, or place of origin. American-made anvils were often marked in pounds. European anvils are sometimes marked in kilograms. English anvils were often marked in hundredweight, the marking consisting of three numbers indicating hundredweight, quarter-hundredweight, and pounds. For example, a 3-1-5, if such an anvil existed, would weigh 3×112 lb + 1×28 lb + 5 lb = 369 lb ≈ 167 kg.
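The hundredweight marking described above can be sketched as a small conversion, using the stated factors of 112 lb per hundredweight and 28 lb per quarter:

```python
# Worked example of the English hundredweight anvil marking:
# three numbers = (hundredweight, quarter-hundredweight, pounds),
# with 1 cwt = 112 lb and 1 quarter-cwt = 28 lb.

def cwt_marking_to_pounds(cwt: int, quarters: int, pounds: int) -> int:
    return cwt * 112 + quarters * 28 + pounds

lb = cwt_marking_to_pounds(3, 1, 5)
print(lb, "lb =", round(lb * 0.45359237), "kg")  # → 369 lb = 167 kg
```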
Cheap anvils made from inferior steel or cast iron and often sold at retail hardware stores, are considered unsuitable for serious use, and are often derisively referred to as "ASOs", or "anvil shaped objects". Amateur smiths have used lengths of railroad rail, forklift tines, or even simple blocks of steel as makeshift anvils.
A metalworking vise may have a small anvil integrated into its design.
History
Anvils were first made of stone, then bronze, and later wrought iron. As steel became more readily available, anvils were faced with it. This was done to give the anvil a hard face and to stop the anvil from deforming from impact. Many regional styles of anvils evolved through time from the simple block that was first used by smiths. The majority of anvils found today in the US are based on the London pattern anvil of the mid-19th century.
The wrought iron steel faced anvil was produced up until the early 20th century. Through the 19th and very early 20th centuries, this method of construction evolved to produce extremely high quality anvils. The basic process involved forge-welding billets of wrought iron together to produce the desired shape. The sequence and location of the forge-welds varied between different anvil makers and the kind of anvil being made. At the same time cast iron anvils with steel faces were being made in the United States. At the dawn of the 20th century solid cast steel anvils began to be produced, as well as two piece forged anvils made from closed die forgings. Modern anvils are generally made entirely from steel.
There are many references to anvils in ancient Greek and Egyptian writings, including Homer's works. They have been found at the Calico Early Man Site in North America.
Anvils have since lost their former commonness, along with the smiths who used them. Mechanized production has made cheap and abundant manufactured goods readily available. The one-off handmade products of the blacksmith are less economically viable in the modern world, while in the past they were an absolute necessity. However, anvils are still used by blacksmiths and metal workers of all kinds in producing custom work. They are also essential to the work done by farriers.
In popular culture
Firing
Anvil firing is the practice of firing an anvil into the air using gunpowder. It has been popular in California, the eastern United States and the southern United States, much like how fireworks are used today. There is a growing interest in re-enacting this "ancient tradition" in the US, which has now spread to England.
Television and film
A typical metalworker's anvil, with horn at one end and flat face at the other, is a standard prop for cartoon gags, as the epitome of a heavy and clumsy object that is perfect for dropping onto an antagonist. This visual metaphor is common, for example, in Warner Brothers' Looney Tunes and Merrie Melodies shorts, such as those with Wile E. Coyote and the Road Runner. Anvils in cartoons were also referenced in an episode of Gilmore Girls, where one of the main characters tries to have a conversation about "Where did all the anvils go?", a reference to their falling out of use on a general scale. Animaniacs made frequent gags on the topic throughout its run, even having a kingdom named Anvilania, whose sole national product is anvils.
Books
Dwarves were blacksmiths who used anvils for metalworking in C. S. Lewis's The Chronicles of Narnia, most iconically on The Magician's Nephew and Prince Caspian; as well as in J. R. R. Tolkien's The Hobbit.
Music
Anvils have been used as percussion instruments in several famous musical compositions, including:
Louis Andriessen: De Materie (Part I), which features an extended solo for two anvils
Kurt Atterberg: Symphony No. 5
Daniel Auber: opera Le Maçon
Alan Silvestri: The Mummy Returns
Arnold Bax: Symphony No. 3
The Beatles: "Maxwell's Silver Hammer" makes prominent use of the anvil. Their roadie Mal Evans played the anvil.
Benjamin Britten: The Burning Fiery Furnace
Aaron Copland: Symphony No. 3
Don Davis: The Matrix trilogy
Brad Fiedel: The Terminator
Neil Finn: "Song of the Lonely Mountain," written for the end credits of The Hobbit: An Unexpected Journey
Gustav Holst: Second Suite in F for Military Band, which includes a movement titled "Song of the Blacksmith"
Nicholas Hooper: Harry Potter and the Half-Blood Prince
James Horner: Used it extensively in Aliens, and his other films such as Flightplan, The Forgotten and Titanic
Metallica: "For Whom the Bell Tolls"
Randy Newman: Toy Story 3
Carl Orff: Antigone
Howard Shore: The Lord of the Rings film trilogy. Used predominantly for the theme of Isengard.
Juan María Solare: Veinticinco de agosto, 1983 and Un ángel de hielo y fuego
John Philip Sousa: Dwellers of the Western World, in which the second movement, The White Man, calls for two pairs of anvils, the one small, the other large
Johann Strauss II: Der Zigeunerbaron (The Gipsy Baron; 1885): Ja, da wird das Eisen gefüge
Josef Strauss: Feuerfest!, op. 269 (1869). The title means "fireproof". This was the slogan of the Wertheim fireproof safe company, which commissioned the work.
Edgard Varèse: Ionisation
Giuseppe Verdi: Il Trovatore, featuring the famous Anvil Chorus
Richard Wagner: Der Ring des Nibelungen in Das Rheingold in scene 3, using 18 anvils tuned in F in three octaves, and Siegfried in act I, notably Siegfried's "Forging Song" (Nothung! Nothung! Neidliches Schwert!)
William Walton: Belshazzar's Feast
John Williams: Jaws, Star Wars: Episode III – Revenge of the Sith
Carl Michael Ziehrer: Der Traum eines österreichischen Reservisten (1890)
Wagner's Ring des Nibelungen is notable in using the anvil as pitched percussion. The vast majority of extant works use the anvil as unpitched. However tuned anvils are available as musical instruments, albeit unusual. These are not to be confused with the "sawyers' anvils" used to "tune" big circular saw blades. Steel anvils are used for tuning for use as musical instruments, because those based partly on cast iron and similar materials give a duller sound; this is actually valued in industry, as pure steel anvils are troublesomely noisy, though energetically more efficient. The hammer and anvil have enjoyed varying popularity in orchestral roles. Robert Donington pointed out that Sebastian Virdung notes them in his book of 1510, and Martin Agricola includes it in his list of instruments (Musica instrumentalis deudsch, 1529) largely as a compliment to Pythagoras. In pre-modern or modern times anvils occasionally appear in operatic works by Berlioz, Bizet, Gounod, Verdi, and Wagner for example. Commonly pairs of anvils tuned a third apart are used.
In practice modern orchestras commonly substitute a brake drum or other suitable steel structure that is easier to tune than an actual anvil, although a visibly convincing anvil-shaped prop may be shown as desired. In Das Rheingold Wagner scored for nine little, six mid-sized, and three large anvils, but orchestras seldom can afford instrumentation on such a scale.
See also
Anvil Chorus
Diamond anvil cell
Anvil cloud
References
Further reading
External links
Heraldic charges
Lithics
Metalworking tools
European percussion instruments
Workbenches
Metallic objects | Anvil | Physics | 3,343 |
4,529,099 | https://en.wikipedia.org/wiki/Gaoqiao%2C%20Kai%20County | Gaoqiao () is a town located in a valley in Kai County, in the northeast of Chongqing municipality in Southwest China. Central Chongqing lies to the southwest.
On 23 December 2003 at 21:15, a gas well burst and released highly toxic hydrogen sulfide. According to China Daily, 233 people died and at least 9,000 were injured.
The well was called “” and belonged to PetroChina's Southwest Oil and Gas Field Branch (). It was located in the Chuandongbei gas field () in Gaoqiao's Xiaoyang village ().
In 2007, Chevron Corporation and China National Petroleum Corporation signed a contract to share production in Chuandongbei, with Chevron getting 49 percent of the venture, operating the project and supplying the technology.
See also
List of township-level divisions of Chongqing
References
External links
Gas well blowout kills at least 191 – Article in the China Daily, dated 2003-12-25
China tries to plug burst natural gas well – Article on MSNBC, dated 2003-12-25
Man-made disasters in China
Chemical disasters
Towns in Chongqing | Gaoqiao, Kai County | Chemistry | 232 |
2,696,845 | https://en.wikipedia.org/wiki/Digital%20rights | Digital rights are those human rights and legal rights that allow individuals to access, use, create, and publish digital media or to access and use computers, other electronic devices, and telecommunications networks. The concept is particularly related to the protection and realization of existing rights, such as the right to privacy and freedom of expression, in the context of digital technologies, especially the Internet. The laws of several countries recognize a right to Internet access.
Human rights and the Internet
A number of human rights have been identified as relevant with regard to the Internet. These include freedom of expression, privacy, and freedom of association. Furthermore, the right to education and multilingualism, consumer rights, and capacity building in the context of the right to development have also been identified.
APC Internet Rights Charter (2001)
The APC Internet Rights Charter was established by the Association for Progressive Communications (APC) at the APC Europe Internet Rights Workshop, held in Prague, February 2001. The Charter draws on the People's Communications Charter and develops seven themes: internet access for all; freedom of expression and association; access to knowledge, shared learning and creation - free and open source software and technology development; privacy, surveillance and encryption; governance of the internet; awareness, protection and realization of rights. The APC states that "the ability to share information and communicate freely using the internet is vital to the realization of human rights as enshrined in the Universal Declaration of Human Rights, the International Covenant on Economic, Social and Cultural Rights, the International Covenant on Civil and Political Rights and the Convention on the Elimination of All Forms of Discrimination against Women." The APC Internet Rights Charter is an early example of a so-called Internet bill of rights, an important element of digital constitutionalism.
World Summit on the Information Society (WSIS) (2003-2004)
In December 2003 the World Summit on the Information Society (WSIS) was convened under the auspice of the United Nations (UN). After lengthy negotiations between governments, businesses and civil society representatives the WSIS Declaration of Principles was adopted reaffirming human rights:
We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.
The WSIS Declaration also makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:
We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers.
The 2004 WSIS Declaration of Principles also acknowledged the need to prevent the use of information and technologies for criminal purposes, while respecting human rights. Wolfgang Benedek comments that the WSIS Declaration only contains a number of references to human rights and does not spell out any procedures or mechanism to assure that human rights are considered in practice.
Internet Bill of Rights and Charter on Internet Rights and Principles (2007-2010)
The Dynamic Coalition for an Internet Bill of Rights held a large preparatory Dialogue Forum on Internet Rights in Rome, September 2007 and presented its ideas at the Internet Governance Forum (IGF) in Rio in November 2007 leading to a joint declaration on internet rights.
At the IGF in Hyderabad in 2008 a merger between the Dynamic Coalitions on Human Rights for the Internet and on Principles for the Internet led to the Dynamic Coalition on Internet Rights and Principles, which based on the APC Internet Rights Charter and the Universal Declaration of Human Rights elaborated the Charter of Human Rights and Principles for the Internet presented at the IGF in Vilnius in 2010, which since has been translated into several languages.
Global Network Initiative (2008)
On October 29, 2008, the Global Network Initiative (GNI) was founded upon its "Principles on Freedom of Expression and Privacy". The Initiative was launched in the 60th Anniversary year of the Universal Declaration of Human Rights (UDHR) and is based on internationally recognized laws and standards for human rights on freedom of expression and privacy set out in the UDHR, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). Participants in the Initiative include the Electronic Frontier Foundation, Human Rights Watch, Google, Microsoft, Yahoo, other major companies, human rights NGOs, investors, and academics.
John Harrington dismissed the impact of the GNI as a voluntary code of conduct, calling instead for bylaws that force boards of directors to accept human rights responsibilities.
United Nations Human Rights Council (2011-2012)
Some of the 88 recommendations made by the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly supported the argument that internet access itself is or should become a fundamental human right.
67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an "enabler" of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole...
79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.
The United Nations Human Rights Council declared internet freedom a Human Right in 2012.
Notable laws by place
Several countries and unions have laws dealing with digital rights:
Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica gave the fundamental right of access to digital technologies, especially the Internet.
Estonia: In 2000, the parliament launched a massive program to expand internet access to the countryside, arguing that it is essential for life in the 21st century.
European Union: In 2023, adopted a Declaration on Digital Rights.
Finland: By July 2010, every person in Finland was to have access to a one-megabit per second broadband connection, according to the Ministry of Transport and Communications. And by 2015, access to a 100 Mbit/s connection.
France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly-worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.
Greece: Article 5A of the Constitution of Greece states that all persons have the right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion, and access to electronically transmitted information.
Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabyte per second throughout Spain.
United States: The Electronic Frontier Foundation has criticized the United States government in 2012 for considering during the Megaupload seizure process that people lose property rights by storing data on a cloud computing service.
Surveys
BBC World Service global public opinion poll (2009-2010)
A poll of 27,973 adults in 26 countries, including 14,306 Internet users, was conducted for the BBC World Service by the international polling firm GlobeScan using telephone and in-person interviews between 30 November 2009 and 7 February 2010. GlobeScan Chairman Doug Miller interpreted the results as showing that people around the world see access to the internet as their fundamental right, a force for good, and most do not want governments to regulate it.
Findings from the poll include:
Nearly four in five (78%) Internet users felt that the Internet had brought them greater freedom.
Users in Europe and China were more supportive of government regulation of the Internet than those in South Korea or Nigeria.
Opinion was evenly split between Internet users who felt that "the internet is a safe place to express my opinions" (48%) and those who disagreed (49%).
The aspects of the Internet that cause the most concern include: fraud (32%), violent and explicit content (27%), threats to privacy (20%), state censorship of content (6%), and the extent of corporate presence (3%).
Almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right (50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion).
Internet Society's Global Internet User Survey (2012)
In July and August 2012 the Internet Society conducted online interviews of more than 10,000 Internet users in 20 countries, including questions on digital rights.
Digital rights advocacy groups
AccessNow
Alliance for Universal Digital Rights (AUDRi)
Center for Democracy and Technology
Global Digital Human Rights (Global Shapers Moscow)
Digital Rights Ireland
Digital Rights Watch
Electronic Frontier Foundation
Entertainment Consumers Association
European Digital Rights
Free Software Foundation
FreedomBox
IT-Political Association of Denmark
Open Rights Group
Open Web Advocacy
Paradigm Initiative, a Pan African digital rights group
Public Knowledge
SMEX
TestPAC
World Wide Web Foundation
Xnet
See also
Advocacy groups
La Quadrature du Net
Open Rights Group
TestPAC, a US political action committee that defends American digital rights
References
External links
Internet Rights Charter by Association for Progressive Communications
Digital Rights by Electronic Privacy Information Center
Internet Rights & Principles Coalition
Computing and society
Access to Knowledge movement | Digital rights | Technology | 2,087 |
7,968,026 | https://en.wikipedia.org/wiki/Centrifugal%20evaporator | A centrifugal evaporator is a device used in chemical and biochemical laboratories for the efficient and gentle evaporation of solvents from many samples at the same time, including samples contained in microtitre plates. If only one sample requires evaporation, a rotary evaporator is most often used. The most advanced modern centrifugal evaporators not only concentrate many samples at the same time, they eliminate solvent bumping and can handle solvents with boiling points of up to 220 °C. This is more than adequate for the modern high-throughput laboratory.
History
The centrifugal evaporator dates from the second half of the 1800s. Patent US158764 was granted in 1875 to Conrad Wendel and William Florich for an improvement in centrifugal evaporators.
Design
A centrifugal evaporator often comprises a vacuum pump connected to a centrifuge chamber in which the samples are placed. Many systems also have a cold trap or solvent condenser placed in line between the vacuum pump and the centrifuge chamber to collect the evaporated solvents. The most efficient systems also have a cold trap on the pump exhaust. There are many further developments available from manufacturers to speed up the process, and to provide protection for delicate samples.
Mechanism
The system works by lowering the pressure in the centrifuge system - as the pressure drops so does the boiling point of the solvent(s) in the system. When the pressure is sufficiently low that the boiling points of the solvents are below the temperature of the sample holder, then they will boil. This enables solvent to be rapidly removed while the samples themselves are not heated to damaging temperatures. High performance systems can remove very high boiling solvents such as dimethyl sulfoxide (DMSO) or N-methyl-2-pyrrolidone (NMP) while keeping sample temperatures below 40 °C at all times.
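The pressure–boiling-point relationship described here can be illustrated with the Antoine equation; the coefficients below are standard published values for water (valid roughly 1–100 °C), and the sketch is purely illustrative rather than a model of any particular evaporator.

```python
import math

# Antoine equation: log10(P_mmHg) = A - B / (C + T_celsius)
# Published coefficients for water, valid roughly 1-100 degrees C.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point(p_mmhg):
    """Temperature (deg C) at which water boils at the given pressure."""
    return B / (A - math.log10(p_mmhg)) - C

print(round(boiling_point(760), 1))  # ~100.0 deg C at atmospheric pressure
print(round(boiling_point(20), 1))   # ~22 deg C under a ~20 mmHg vacuum
```

As the vacuum pump pulls the chamber down toward 20 mmHg, water's boiling point drops below typical room temperature, which is how solvent can be removed without heating the samples.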
The centrifugal force generated by spinning the centrifuge rotor creates a pressure gradient within the solvent contained in the tubes or vials, which means that the samples boil from the top down, helping to prevent "bumping". The most advanced systems apply the vacuum slowly and run the rotor at speeds generating around 500 × g; this approach is proven to prevent bumping and was patented by Genevac in the late 1990s.
References
External links
Decanter Centrifuge
evaporator
Evaporators
Laboratory equipment | Centrifugal evaporator | Chemistry,Engineering | 505 |
68,425,612 | https://en.wikipedia.org/wiki/Time%20in%20Mozambique | Time in Mozambique is given by a single time zone, officially denoted as Central Africa Time (CAT; UTC+02:00). Mozambique has never observed daylight saving time.
History
Mozambique adopted UTC+02:00 (Central Africa Time; CAT) unofficially in 1903, and officially on 26 May 1911.
IANA time zone database
In the IANA time zone database, Mozambique is given one zone in the file zone.tab – Africa/Maputo. "MZ" refers to the country's ISO 3166-1 alpha-2 country code.
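The single-zone, no-DST situation can be checked directly against the IANA database with Python's zoneinfo module (Python 3.9+; on some platforms the third-party tzdata package supplies the data files):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # reads the IANA time zone database

maputo = ZoneInfo("Africa/Maputo")
jan = datetime(2023, 1, 15, 12, 0, tzinfo=maputo)
jul = datetime(2023, 7, 15, 12, 0, tzinfo=maputo)

print(jan.tzname(), jan.utcoffset())  # CAT 2:00:00
# Mozambique has never observed DST, so the offset is constant year-round:
assert jan.utcoffset() == jul.utcoffset() == timedelta(hours=2)
```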
See also
List of time zones by country
List of UTC time offsets
References
External links
Current time in Mozambique at Time.is
Time in Mozambique at TimeAndDate.com
Time by country
Geography of Mozambique
Time in Africa | Time in Mozambique | Physics | 186 |
44,372,858 | https://en.wikipedia.org/wiki/Traction%20force%20microscopy | In cellular biology, traction force microscopy (TFM) is an experimental method for determining the tractions on the surface of a cell by obtaining measurements of the surrounding displacement field within an in vitro extracellular matrix (ECM).
Overview
The dynamic mechanical behavior of cell-ECM and cell-cell interactions is known to influence a vast range of cellular functions, including necrosis, differentiation, adhesion, migration, locomotion, and growth. TFM utilizes experimentally observed ECM displacements to calculate the traction, or stress vector, at the surface of a cell.
Before TFM, cellular tractions were inferred from the wrinkling of silicone rubber substrata around cells; however, accurate quantification of tractions with this technique is difficult owing to the nonlinear and unpredictable behavior of the wrinkling. Several years later, the term TFM was introduced to describe a more advanced computational procedure that was created to convert measurements of substrate deformation into estimated traction stresses.
General Methodology
In conventional TFM, cellular cultures are seeded on, or within, an optically transparent 3D ECM embedded with fluorescent microspheres (typically latex beads with diameters ranging from 0.2–1 μm). A wide range of natural and synthetic hydrogels can be used for this purpose, with the prerequisite that the mechanical behavior of the material is well characterized and the hydrogel is capable of maintaining cellular viability. The cells exert their own forces on this substrate, which consequently displaces the beads in the surrounding ECM. In some studies, a detergent, enzyme, or drug is used to disturb the cytoskeleton, thereby altering, or sometimes completely eliminating, the tractions generated by the cell.
First, a continuous displacement field is computed from a pair of images: the first image being the reference configuration of microspheres surrounding an isolated cell, and the second image being the same isolated cell surrounded by microspheres that are now displaced due to the cellular-generated tractions. Confocal fluorescence microscopy is usually employed to image the cell surface and fluorescent beads. After computing the translational displacement field between a deformed and undeformed configuration, a strain field can be calculated, often using a regularization approach such as elastic net regularization. From the strain field, the stress field surrounding the cell can be calculated with knowledge of the stress-strain behavior, or constitutive model, of the surrounding hydrogel material. It is possible to proceed one step further, and use the stress field to compute the tractions at the surface of the cell, if the normal vectors to the cell surface can be obtained from a 3D image stack. Although this procedure is a common way to obtain the cellular tractions from microsphere displacement, some studies have successfully utilized an inverse computational algorithm to yield the traction field.
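The displacement-to-strain-to-stress chain described above can be sketched for a linear elastic material in 2D. The grid spacing, Young's modulus, and Poisson's ratio below are illustrative assumptions, not values tied to any particular hydrogel, and real TFM pipelines add regularization and a full 3D treatment.

```python
import numpy as np

def strain_from_displacement(u, v, dx=1.0):
    """Small-strain tensor components from a 2D displacement field on a grid.

    u, v: displacement components as arrays indexed [row = y, col = x].
    """
    du_dy, du_dx = np.gradient(u, dx)  # gradients along axis 0 (y), axis 1 (x)
    dv_dy, dv_dx = np.gradient(v, dx)
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)        # tensorial shear strain
    return exx, eyy, exy

def stress_plane_stress(exx, eyy, exy, E=1000.0, nu=0.45):
    """Isotropic linear elasticity (Hooke's law) in plane-stress form.

    E (Pa) and nu are assumed, illustrative hydrogel parameters.
    """
    c = E / (1.0 - nu**2)
    sxx = c * (exx + nu * eyy)
    syy = c * (eyy + nu * exx)
    sxy = E / (1.0 + nu) * exy
    return sxx, syy, sxy

# Synthetic check: a uniform 1% uniaxial stretch along x.
X, Y = np.meshgrid(np.arange(16, dtype=float), np.arange(16, dtype=float))
exx, eyy, exy = strain_from_displacement(0.01 * X, 0.0 * Y)
print(exx.mean())  # ~0.01 everywhere for this linear displacement field
```

In a real pipeline the displacement arrays come from bead tracking between the deformed and reference image stacks, and the resulting stress field is projected onto the cell-surface normals to obtain tractions.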
Limitations
The spatial resolution of the traction field that can be recovered with TFM is limited by the number of displacement measurements per area. The spacing of independent displacement measurements varies with experimental setups, but is usually on the order of one micrometer. The traction patterns produced by cells frequently contain local maxima and minima smaller than this spacing, and detecting these fine variations in local cellular traction with TFM remains challenging.
Advancements
In 2D TFM, cells are cultured as a monolayer on the surface of a thin substrate with a tunable stiffness, and the microspheres near the surface of the substrate undergo deformation through cell-ECM connections. 2.5D cell cultures are similarly grown on top of a thin layer of ECM, and diluted structural ECM proteins are mixed into the medium added above the cells and substrate. Although most of the seminal work in TFM was performed in 2D, or 2.5D, many cell types require the complex biophysical and biochemical cues from a 3D ECM to behave in a truly physiologically realistic manner within an in vitro environment.
When the rotation or stretch of a sub volume is large, errors can be introduced into the calculation of cell surface tractions since most TFM techniques employ a computational framework based on linear elasticity. Recent advances in TFM have shown that cells are capable of exerting deformations with strain magnitudes up to 40%, which requires usage of a finite deformation theory approach to account for large strain magnitudes.
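The discrepancy can be made concrete by comparing strain measures for a uniaxial stretch ratio λ. At the 40% deformations cited above, the linear (engineering) measure and the finite-deformation Green-Lagrange measure already disagree by about 20% (simple illustrative arithmetic, not data from any study):

```python
lam = 1.40                       # stretch ratio for a 40% uniaxial deformation
eng = lam - 1.0                  # engineering (infinitesimal) strain
green = 0.5 * (lam**2 - 1.0)     # Green-Lagrange strain E = (lambda^2 - 1) / 2
print(eng, green)                # 0.40 vs 0.48
print((green - eng) / eng)       # ~0.2, i.e. ~20% relative discrepancy
```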
Applications
Although TFM is frequently used to observe tractions at the surface of a spatially isolated individual cell, a variation of TFM can also be used to analyze the collective behavior of multicellular systems. For example, cellular migration velocities and plithotaxis are observed alongside a computed stress variation map of a monolayer sheet of cells, in an approach termed monolayer stress microscopy. The mechanical behavior of single cells versus a confluent layer of cells differ in that the monolayer experiences a "tug-of-war" state. There is also evidence of a redistribution of tractions that can take place earlier than changes in cell polarity and migration.
TFM has proven particularly useful to study durotaxis as well.
TFM has recently been applied to explore the mechanics of cancer cell invasion with the hypothesis that cells which generate large tractions are more invasive than cells with lower tractions. It is also hoped that recent findings from TFM will contribute to the design of optimal scaffolds for tissue engineering and regeneration of the peripheral nervous system, artery grafts, and epithelial skin cells.
References
Cells
Microscopy | Traction force microscopy | Chemistry | 1,124 |
17,600,071 | https://en.wikipedia.org/wiki/Polydioctylfluorene | Polydioctylfluorene (PFO) is an organic compound, a polymer of 9,9-dioctylfluorene, with formula (C13H6(C8H17)2)n. It is an electroluminescent conductive polymer that characteristically emits blue light. Like other polyfluorene polymers, it has been studied as a possible material for light-emitting diodes.
Structure
The monomer has an aromatic fluorene core -C13H6- with two aliphatic n-octyl -C8H17 tails attached to the central carbon. Polydioctylfluorene (PFO) can adopt liquid-crystalline, glassy, amorphous, semi-crystalline or β-chain forms. This variety arises from the intermolecular forces in which PFO can participate. The secondary forces present in PFO are typically van der Waals forces, which are relatively weak; these weak forces make it a solid that can also be used as a film on a substrate. The glassy films formed by PFO chains dissolve in good solvents, meaning PFO is at least partially soluble. These van der Waals forces also add complexity to the microstructure of PFO, which is why it has such a wide range of solid forms. The solid forms, however, are typically of low density because of the polymer's low cooling rate. The density of polydioctylfluorene is measured using ultraviolet photoelectron spectroscopy. Chain stiffness is also prominent in PFO; because of this, its molecular weight is predicted to be a factor of 2.7 lower than that of polystyrene, which gives an approximation of 190 repeat units in a standard PFO chain. Changing the strain and temperature applied to the polymer's structure alters the PFO's properties. Thermal treatment such as friction transfer is one way to alter these properties: friction transfer aligns the structure so that it becomes crystalline or liquid crystalline. Polymer 196 is the most commonly studied type of polydioctylfluorene; in studies it has shown the most promising properties and the best crystallinity. Within the crystal structure of polymer 196, octyl side chains are inserted between the layers of the polymer to provide more space for efficient structuring of the material.
In studies, the structure of polydioctylfluorene was observed by using grazing-incidence X-ray diffraction after applying friction to the structure. Experiments revealed that PFO formed crystalline and liquid-crystalline films after friction transfer and cooling. As a result of the friction exerted, the twofold symmetry of PFO was broken. The friction transfer used to obtain a single-crystal film is important in the process of fabricating polarized light-emitting diodes.
Properties
Polydioctylfluorene is also known as polymer 196 among the polyfluorenes. The molar mass of PFO ranges between 24,000–41,600 g/mol, and because of this variation many other properties vary as well. For example, the glass transition temperature can fall somewhere between 72–113 °C. The absorption maximum of PFO lies between 386–389 nm in CHCl3 solution and around 389 nm in THF; the film absorption maximum falls between 380–394 nm. The melting point of crystalline PFO is predicted to be about 150 °C.
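The quoted molar-mass range can be related to chain length from the repeat-unit formula C13H6(C8H17)2 = C29H40, using standard atomic masses. The resulting degree-of-polymerization figures are rough back-of-envelope estimates and will differ from values derived by other molecular-weight methods (such as the polystyrene-calibrated estimate mentioned earlier):

```python
m_C, m_H = 12.011, 1.008               # standard atomic masses (g/mol)
repeat = 29 * m_C + 40 * m_H           # C29H40 repeat unit: ~388.6 g/mol
for mn in (24_000, 41_600):            # quoted molar-mass range (g/mol)
    print(mn, round(mn / repeat))      # ~62 and ~107 repeat units
```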
There have also been reports that some solid states of polydioctylfluorene are composed of sheet-like layers about 50–100 nm thick. From these sheets, the glassy and semicrystalline states can be formed (excluding the amorphous, liquid crystalline, and beta-chain states). When cooled quickly, the chains align tightly, giving PFO a close packing factor, though because of the high complexity of the chains this packing is sometimes disrupted, creating the amorphous state. The parts of the molecule that add this complexity are the carbon rings located in the backbone, which make the molecule large overall.
Applications
Beta-phase chains in PFO can be formed through dip-pen nanolithography to produce wavelength changes in metamaterials; the dip-pen technique allows features on a scale of around 500 nm to be resolved. The beta chains can be converted into glassy films by adding extra stress to the main fluorene backbone unit; whether beta chains have formed is determined from peaks in the absorption spectrum. Beta chains can also be confirmed using solvent/non-solvent mixtures: dipping the material into such a mixture for ten seconds can produce beta chains without dissolution of the films.
Polydioctylfluorene is used in polymer light-emitting devices (PLEDs) and bonds covalently through its carbon-hydrogen chains. PFO is a copolymer of basic polyfluorene, which enables it to emit phosphorescent light. The basic fluorene backbone strengthens the molecule on account of its carbon rings. Cross-linking in the polydioctylfluorene structure provides an efficient route for hole-transport layers to emit light, and adding a solvent-polymer compound allows the β-phase crystalline structure to be maintained. Current efficiency can reach a maximum of about 17 cd/A, and the maximum luminance obtained can be approximately 14,000 cd/m2. The hole-transport layers (HTLs) improve the polymer's anode hole injection and greatly increase electron blocking. The capability to control the microstructure of phase domains provides an opportunity to optimize the optoelectronic properties of PFO-based products. When the optoelectronic emission requirements are met in polydioctylfluorene, the electroluminescence given off is dependent on the active layer of the conjugated polymer. Another way to affect the optoelectronic properties is by altering how densely the phase-chain segments are ordered: low densities can be achieved by extremely slow crystallization, while directional crystallization can be achieved by the use of thermal gradients.
References
Organic polymers
Conductive polymers | Polydioctylfluorene | Chemistry | 1,350 |
24,449,712 | https://en.wikipedia.org/wiki/Improved%20Turbine%20Engine%20Program | The Improved Turbine Engine Program (ITEP), formerly the Advanced Affordable Turbine Engine (AATE) program, is a United States Army project to develop a General Electric T700 replacement for the UH-60 Black Hawk and AH-64 Apache, improving fuel consumption, power, durability and cost. Honeywell and Pratt & Whitney formed the ATEC joint venture to develop the T900, while GE Aviation builds the T901. In February 2019, the US Army selected the GE T901 as the winner of the program.
History
In December 2006, the U.S. Army's Aviation Applied Technology Directorate (AATD) solicited proposals for the 3000 shp Advanced Affordable Turbine Engine (AATE) free-turbine turboshaft to replace the GE T700 that currently power the UH-60 Black Hawk and AH-64 Apache rotorcraft, leveraging the DoD/NASA/DOE VAATE program. Refitting the existing fleet of twin-engine Black Hawks and Apaches would require a total of 6,215 engines, including spares.
Any aircraft that currently uses the T700 or its commercial derivative, the CT7, also could be re-powered by the AATE, including commercial rotorcraft like the Sikorsky S-92. Sikorsky is considering it for the single-turbine S-97 Raider instead of its single GE CT7/T700; in addition, AATE could also power the Joint Multi-Role (JMR) helicopter.
In 2007, Honeywell and Pratt & Whitney formed the Advanced Turbine Engine Company (ATEC) joint venture. The science & technology phase to subsidize development of AATE consisted of two contracts: one was awarded in May 2008 to ATEC for $108 million to develop the HPW3000, and the other was awarded to GE Aviation in late 2007 for the GE3000. The four-and-a-half year science and technology phase covered durability and performance demonstration testing and was scheduled to conclude in 2013, but tests continued through 2014.
In July 2009, the United States Army announced the development of AATE would continue under the Improved Turbine Engine Program (ITEP); ITEP would result in an engine that would improve the AH-64 and UH-60 hot and high capacities and increase combat radius. In August 2016, ATEC and GE were awarded 24-month contracts under ITEP to take their engines through preliminary design review; this phase culminated in April 2018 with ATEC and GE demonstrating their prototypes to the Army.
On 1 February 2019, the US Army selected the GE T901 as the winner of the ITEP program, awarding the Engineering and Manufacturing Development (EMD) contract for $517 million. Later that same month, ATEC protested the selection of the GE T901 over its T900 in a filing with the Government Accountability Office (GAO). The GAO denied the protest in a filing posted on May 30, 2019.
Critical design review in the second quarter of 2020 will lead to first engine testing in the third quarter of 2021 before flight tests, and a production decision in 2024.
Design and performance goals
In addition to 3,000-shp output, the targets for AATE were a 25% reduction in fuel consumption, a 65% improvement in power to weight, a 20% improvement in design life (more than 6,000 hours and 15,000 cycles), a 35% reduction in production cost (less than $650k per engine) and maintenance cost, and a 15% reduction in product development cost. The 3,000-shp goal for AATE is a 50% increase over the most powerful T700-701D variant, but would also require upgrades to the gearbox, transmission, rotor blades, and tail rotor.
Both the ATEC and GE designs can start without an auxiliary power unit (APU), using the battery alone. The UH-60 and AH-64 are currently equipped with Honeywell GTCP 36-150 APUs.
Using ITEP, the combat radius and the hot-and-high service ceiling are both projected to increase. Performance targets have been determined in part by operations in Afghanistan and Iraq, as well as growing airframe weights.
Contenders
The major difference between the ATEC HPW3000 (T900) and the GE3000 (T901) engines is in the number of rotating compressor/turbine assemblies in the gas generator stage. The ATEC T900 is a dual-spool turboshaft, while the GE T901 is a single-spool design. In the single-spool design, the compressor is driven by a single turbine; the dual-spool design uses separate shafts for a two-stage compressor, requiring two turbines. Both are free-turbine turboshafts, in which an independent turbine in the exhaust stream downstream of the gas generator extracts power. Although the dual-spool design allows each compressor stage to run in its optimal range, it makes the machine more complex.
ATEC HPW3000 (T900)
The Advanced Turbine Engine Company (ATEC) is a 50/50 joint venture created in 2007 between Honeywell Aerospace and Pratt & Whitney Military Engines. ATEC completed a Core Engine (High Pressure system only) test in mid-2011 on the two-spool HPW3000 and completed Gas Generator (both High and Low Pressure systems) testing in January 2012. Durability testing of the first HPW3000 completed in October 2013. A second HPW3000 was tested for performance and sand ingestion during late summer 2014. In February 2017, the Army designated the HPW3000 design as the T900-HPW-900 engine.
General Electric GE3000 (T901)
Since 2010, GE has been developing and testing T901-specific technologies. The second GE3000 was tested for performance, endurance, and sand ingestion in late spring 2014.
In 2016, the Army awarded GE Aviation a 24-month contract for the T901 preliminary design review, and the prototype six month testing was completed in October 2017. The GE3000 engine was officially designated as the T901-GE-900 engine in January 2017.
See also
Adaptive Versatile Engine Technology (ADVENT)
List of aircraft engines
References
External links
ATEC
GE
Aircraft engines
Turboshaft engines | Improved Turbine Engine Program | Technology | 1,312 |
10,204,402 | https://en.wikipedia.org/wiki/New%20York%20statistical%20areas | The U.S. state of New York currently has 34 statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated seven combined statistical areas, 13 metropolitan statistical areas and 14 micropolitan statistical areas in the state. As of 2023, the largest of these is the New York-Newark, NY-NJ-CT-PA CSA, which includes New York City and its surrounding suburbs; with over 21 million people, it is the largest primary statistical area in the United States.
Table
Primary statistical areas
Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 34 statistical areas of New York, 14 are PSAs, consisting of seven combined statistical areas, three metropolitan statistical areas and four micropolitan statistical areas.
See also
Geography of New York (state)
Demographics of New York (state)
Notes
References
External links
Office of Management and Budget
United States Census Bureau
United States statistical areas
Statistical Areas Of New York
Statistical Areas Of New York | New York statistical areas | Mathematics | 228 |
576,510 | https://en.wikipedia.org/wiki/Durham%20tube | Durham tubes are used in microbiology to detect production of gas by microorganisms. They are simply smaller test tubes inserted upside down in another test tube so they are freely movable. The culture media to be tested is then added to the larger tube and sterilized, which also eliminates the initial air gap produced when the tube is inserted upside down. The culture media typically contains a single substance to be tested with the organism, such as to determine whether an organism can ferment a particular carbohydrate. After inoculation and incubation, any gas that is produced will form a visible gas bubble inside the small tube. Litmus solution can also be added to the culture media to give a visual representation of pH changes that occur during the production of gas. The method was first reported in 1898 by British microbiologist Herbert Durham.
One limitation of the Durham tube is that it does not allow for precise determination of the type of gas that is produced within the inner tube, or measurement of the quantity of gas produced. However, Durham argued that quantitative measurements are of limited value because the culture solution will absorb some of the gas in unknown, variable proportions. Additionally, using Durham tubes to provide evidence of fermentation may fail to detect slow- or weakly-fermenting organisms when the resultant carbon dioxide diffuses back into the solution as quickly as it is formed, so a negative test using Durham tubes does not carry decisive physiological significance.
References
Microbiology equipment | Durham tube | Biology | 306 |
48,317,186 | https://en.wikipedia.org/wiki/S%C3%BBrtab | Sûrtab S.A. is a Haitian technology company headquartered in Port-au-Prince, Haiti, that designs, develops, and sells computer hardware and consumer electronics, most notably, tablet computers.
Etymology
The name Sûrtab is a contraction of the French word "sûr", used to designate things that are certain, reliable, and beyond question, and the English word "tablet".
See also
Comparison of tablet computers
References
2013 establishments in Haiti
Companies based in Port-au-Prince
Computer companies established in 2013
Haitian companies established in 2013
Computer hardware companies
Display technology companies
Electronics companies established in 2013
Haitian brands
Retail companies of Haiti
Tablet computers
Tablet computers introduced in 2013
Companies of Haiti
Touchscreen portable media players | Sûrtab | Technology | 166 |
965,610 | https://en.wikipedia.org/wiki/Tim%20Russ | Timothy Darrell Russ (born June 22, 1956) is an American actor, musician, screenwriter, director and amateur astronomer. He is best known for his roles as Lieutenant Commander Tuvok on Star Trek: Voyager, Robert Johnson in Crossroads (1986), Casey in East of Hope Street (1998), Frank on Samantha Who?, Principal Franklin on the Nickelodeon sitcom iCarly, and D. C. Montana on The Highwaymen (1987–1988). He appeared in The Rookie: Feds (2022) and reprised his role as Captain Tuvok on Season 3 of Star Trek: Picard.
Early life, family and education
Russ was born in Washington, D.C., on June 22, 1956, to a government employee mother and a U.S. Air Force officer father. He spent part of his childhood in Turkey. He attended his senior year of high school at Rome Free Academy, from which he graduated in 1974. He graduated from St. Edward's University with a degree in theater arts. He additionally attended graduate school at Illinois State University where he was inducted into its Hall of Fame.
Career
Acting
In 1985, Russ appeared in The Twilight Zone episode "Kentucky Rye" as Officer #2. He made a brief appearance in the comedy film Spaceballs as a trooper who "combs" the desert with a giant comb. Russ had a prominent role in the Charles Bronson film Death Wish 4.
Russ has been involved in the Star Trek franchise as a voice and film actor, writer, director, and producer. He played several minor roles before landing the role of the main character Tuvok in Star Trek: Voyager. In 1987, Russ screen-tested for the role of Geordi La Forge on Star Trek: The Next Generation before later being cast as Tuvok. Russ went into Voyager as a dedicated Trekkie with an extensive knowledge of Vulcan lore, and has played the following roles in the Star Trek universe:
Devor, a mercenary aboard the Enterprise-D disguised as a service engineer in The Next Generation episode "Starship Mine" (1993)
T'Kar, a Klingon in the Deep Space Nine episode "Invasive Procedures" (1993)
A human tactical Lieutenant on the USS Enterprise-B in the film Star Trek Generations (1994).
Tuvok's Mirror Universe counterpart in the Deep Space Nine episode "Through the Looking Glass" (1995).
A changeling impersonating Tuvok in Star Trek: Picard season 3.
In 1995, Russ co-wrote the story for the Malibu Comics Star Trek: Deep Space Nine #29 and 30, with Mark Paniccia. Russ performed voice acting roles as Tuvok for the video games Star Trek: Voyager – Elite Force and Star Trek: Elite Force II. Russ is the director and one of the stars of the fan series Star Trek: Of Gods and Men, the first third of which was released in December 2007, with the remaining two-thirds released in 2008.
Russ's character's name D. C. Montana in The Highwayman was a reference to Trek writer D. C. Fontana.
In 1990, he appeared in an episode of Freddy's Nightmares.
Russ directed and co-starred in Star Trek: Renegades, and in both 2013 and 2014 reprised his role as the voice of Tuvok in the massively multiplayer online game Star Trek Online.
Later work
Russ appeared as Frank, a sarcastic doorman in the sitcom Samantha Who? from 2007 to 2009, and appeared for six seasons as Principal Ted Franklin in Nickelodeon's show iCarly. He also portrayed a doctor on an episode of Hannah Montana, "I Am Hannah, Hear Me Croak."
Russ won an Emmy Award in 2014 for public service ads he did for the FBI's Los Angeles Field Office on intellectual property theft and cyberbullying.
He played Captain Kells in the 2015 Bethesda Game Studios video game Fallout 4.
Music and astronomy
Russ has been a lifelong musician and singer. In addition, Russ has been an avid amateur astronomer for most of his adult life, and is a member of the Los Angeles Astronomical Society. In 2021 he was among a small group of citizen astronomers who assisted in the detection of the asteroid 617 Patroclus in preparation for NASA's Lucy probe. In February 2022, he stated that he owned a 10-inch Dobsonian telescope, an 8" Schmidt-Cassegrain telescope, and a Unistellar eVscope.
Filmography
References
External links
1956 births
Living people
African-American film directors
African-American male singers
African-American male writers
African-American screenwriters
African-American television directors
American expatriates in Turkey
American male film actors
American male screenwriters
American male singers
American male television actors
American male video game actors
American male voice actors
American television directors
Film directors from Washington, D.C.
Illinois State University alumni
Male actors from Washington, D.C.
Screenwriters from Washington, D.C.
Singers from Washington, D.C.
St. Edward's University alumni
20th-century African-American male actors
20th-century American male actors
21st-century African-American male actors
21st-century American male actors
20th-century American screenwriters
21st-century American screenwriters
20th-century American singers
21st-century American singers
Amateur astronomers | Tim Russ | Astronomy | 1,078 |
74,368,203 | https://en.wikipedia.org/wiki/Californium%28III%29%20oxybromide | Californium(III) oxybromide is an inorganic compound of californium, bromine, and oxygen with the formula CfOBr.
Physical properties
Californium bromide is obtained by heating in HBr.
The compound is isostructural with CfOCl. Both are prepared by the same method.
References
Californium compounds
Oxybromides | Californium(III) oxybromide | Chemistry | 83 |
14,132,752 | https://en.wikipedia.org/wiki/CYR61 | Cysteine-rich angiogenic inducer 61 (CYR61) or CCN family member 1 (CCN1), is a matricellular protein that in humans is encoded by the CYR61 gene.
CYR61 is a secreted, extracellular matrix (ECM)-associated signaling protein of the CCN family (CCN intercellular signaling protein). CYR61 is capable of regulating a broad range of cellular activities, including cell adhesion, migration, proliferation, differentiation, apoptosis, and senescence through interaction with cell surface integrin receptors and heparan sulfate proteoglycans. During embryonic development, CYR61 is critical for cardiac septal morphogenesis, blood vessel formation in placenta, and vascular integrity. In adulthood CYR61 plays important roles in inflammation and tissue repair, and is associated with diseases related to chronic inflammation, including rheumatoid arthritis, atherosclerosis, diabetes-related nephropathy and retinopathy, and many different forms of cancers.
CCN protein family
CYR61 was first identified as a protein encoded by a serum-inducible gene in mouse fibroblasts. Other highly conserved homologs were later identified to comprise the CCN protein family (CCN intercellular signaling protein). The CCN acronym is derived from the first three members of the family identified, namely CYR61 (CCN1), CTGF (connective tissue growth factor, or CCN2), and NOV (nephroblastoma overexpressed, or CCN3). These proteins, together with WISP1 (CCN4), WISP2 (CCN5), and WISP3 (CCN6) comprise the six members of the family in vertebrates and have been renamed CCN1-6 in order of their discovery by international consensus. CCN proteins function as matricellular proteins, which are extracellular matrix proteins that play regulatory roles, particularly in the context of wound repair.
Gene structure and regulation
CYR61 is located at human chromosome 1p22.3, whereas the mouse Cyr61 gene is located at chromosome 3, 72.9 cM. The mouse CYR61 coding region spans ~3.2 kb, containing 5 exons interspaced with 4 introns. The first exon encodes the 5’-UTR sequence and the first several amino acids of the secretory signal peptide. The remaining four exons each encode a distinct CCN1 domain. The 5th exon also contains the 3’-UTR sequence, which has 5 copies of AU-rich elements that confer a short mRNA half-life, and a miR-155 target site.
The CYR61 promoter is a TATA box containing promoter, with binding sites for many transcription factors including AP1, ATF, E2F, HNF3b, NF1, NFκB, SP1, and SRF, and 2 poly(CA) stretches that may form Z-DNA structure. Transcriptional activation of CYR61 is exquisitely sensitive to a wide range of environmental perturbations, including stimulation by platelet-derived growth factor and basic fibroblast growth factor, transforming growth factor β1 (TGF-β1), growth hormone, the phorbol ester 12-O-tetradecanoylphorbol-13-acetate (TPA), cAMP, vitamin D3, estrogen and tamoxifen, angiotensin II, hypoxia, UV light, and mechanical stretch.
Protein structure and function
Structural domains
Full-length CYR61 protein contains 381 amino acids with an N-terminal secretory signal peptide followed by four structurally distinct domains. The four CYR61 domains are, from N- to C-termini, the insulin-like growth factor binding protein (IGFBP) domain, von Willebrand type C repeats (vWC) domain, thrombospondin type 1 repeat domain (TSR), and the C-terminal (CT) domain that contains a cysteine-knot motif. CCN1 has unusually high cysteine residue content (10% or 38 in total). The number and spacing of cysteine residues are completely conserved among CYR61 (CCN1), CTGF (CCN2), NOV (CCN3), and WISP-1 (CCN4), and are largely conserved with WISP-2 (CCN5), which lacks precisely the CT domain, and WISP3 (CCN6), which lacks 4 cysteines in the vWC domain. CYR61 is glycosylated, although the regulation and function of glycosylation are unknown.
Integrin binding
CYR61 binds directly to various integrin receptors in a cell type-dependent manner, including integrin αvβ3 in endothelial cells, α6β1 and heparan sulfate proteoglycans (HSPGs) in fibroblasts and smooth muscle cells, αIIbβ3 in activated platelets, αMβ2 in monocytes and macrophages, and αDβ2 in macrophage foam cells. Where examined, syndecan-4 has been identified as the HSPG critical for CCN1 functions. The CYR61 binding sites for some of these integrins have been mapped (Figure 1). Due to the cell type specificity of integrin expression, CYR61 acts through distinct integrins to mediate specific functions in different types of cells. For example, CYR61 induces angiogenic functions in endothelial cells through αvβ3, and in fibroblasts promotes cellular senescence and enables TNFα to induce apoptosis through binding to α6β1-HSPGs. However, CYR61 supports cell adhesion through all of the integrins identified above.
Cell signaling and function
As a cell adhesive substrate, CYR61 induces the activation of focal adhesion kinase, paxillin, RAC, and sustained activation of MAPK/ERK1-2. In macrophages, CYR61 also activates the transcription factor NFκB and stimulates M1 polarization. CYR61 activates Akt signaling in thymic epithelial cells, promoting their proliferation and thus thymic size growth. CYR61 has potent angiogenic activity upon endothelial cells and induces neovascularization, first demonstrated in a corneal micropocket implant assay and subsequently confirmed in a rabbit ischemic hindlimb model. CYR61 also accelerates and promotes the chondrogenic differentiation of mouse limb bud mesenchymal cells, and stimulates osteoblast differentiation but inhibits osteoclastogenesis. CYR61 is a strong inducer of reactive oxygen species accumulation in fibroblastic cells, and this activity underlies much of CYR61-induced apoptosis and senescence. CYR61 is able to support cell adhesion, stimulate cell migration, promote growth factor-induced cell proliferation and differentiation in some cell types, promote apoptosis in synergy with TNF family cytokines, and induce cellular senescence in fibroblasts.
Embryonic development
During embryo development in mice, Cyr61 is highly expressed in the cardiovascular, skeletal, and neuronal systems. Cyr61 knockout mice are embryonic lethal due to defects in cardiac septal morphogenesis, deficient blood vessel formation in placenta, and compromised vascular integrity. In Xenopus laevis, Cyr61 is required for normal gastrulation and modulation of Wnt signaling.
Clinical relevance
CYR61 is highly expressed at sites of inflammation and wound repair, and is associated with diseases involving chronic inflammation and tissue injury.
Wound healing and fibrosis
In skin wound healing, CYR61 is highly expressed in the granulation tissue by myofibroblasts, which proliferate and rapidly synthesize ECM to maintain tissue integrity and to promote regeneration of parenchymal cells. However, excessive matrix deposition can lead to fibrosis, scarring, and loss of tissue function. In skin wounds, CYR61 accumulates in the granulation tissue as myofibroblasts proliferate, and eventually reaches a sufficiently high level to drive the myofibroblasts themselves into senescence, whereupon these cells cease to proliferate and express matrix-degrading enzymes. Thus, CYR61 limits synthesis and deposition of ECM by myofibroblasts, reducing the risk of fibrosis during wound healing. In addition to skin wound healing, CYR61 expression is elevated in remodeling cardiomyocytes after myocardial infarction, in vascular injury, and in the long bones during fracture repair. Blockade of CYR61 by antibodies inhibits bone fracture healing in mice. In the kidney, CYR61 is expressed in podocytes in normal adult and embryonic glomeruli, but expression is decreased in IgA nephropathy, diabetic nephropathy, and membranous nephropathy, particularly in diseased kidneys with severe mesangial expansion. CYR61 induction of cellular senescence in the kidney is a potential therapy to limit fibrosis.
Inflammation
CYR61 promotes the apoptotic functions of inflammatory cytokines such as TNFα, FasL, and TRAIL. It also reprograms macrophages towards M1 polarization through αMβ2-mediated activation of NF-κB. CYR61 is upregulated in patients with Crohn's disease and ulcerative colitis. CYR61 supports the patrolling behavior of murine resident Ly6Clow monocytes along the endothelium in the steady state and is required for their accumulation under viral-mimicking vascular inflammation.
Arthritis
CYR61 is highly expressed in collagen-induced arthritis in rodents, and inhibition of CCN1 expression correlates with suppression of inflammatory arthritis. CYR61 is also found in articular cartilage from patients with osteoarthritis and appears to suppress ADAMTS4 (aggrecanase) activity, possibly leading to cartilage cell (chondrocyte) cloning.
Vascular diseases
CYR61 is overexpressed in vascular smooth muscle cells of atherosclerotic lesions and in the neointima of restenosis after balloon angioplasty, both in rodent models and in humans. Suppression of CYR61 expression results in reduced neointimal hyperplasia after balloon angioplasty, an effect that is reversed by delivery of CYR61 via gene transfer. In a mouse model of oxygen-induced retinopathy, expression of CYR61 in the vitreous humor produced significant beneficial effects in repairing damaged vasculature.
Cancer
Angiogenesis is essential for the supply of oxygen and nutrients to nourish the growing tumor. CYR61 is a powerful angiogenic inducer in vivo, and it can also promote cancer cell proliferation, invasion, survival, epithelial–mesenchymal transition, and metastasis. Accordingly, forced overexpression of CYR61 enhanced tumor growth in xenografts of breast cancer cells, prostate cancer cells, ovarian carcinoma cells, and squamous carcinoma cells. Clinically, CYR61 expression correlates with the tumor stage, tumor size, lymph node positivity, and poor prognosis in several cancers, including breast cancer, prostate cancer, glioma, gastric adenocarcinoma, and squamous cell carcinoma.
However, CYR61 can also induce apoptosis and cellular senescence, two well-established mechanisms of tumor suppression. Thus, whereas CYR61 can promote the proliferation of prostate cancer cells, it can also exacerbate apoptosis of these cells in the presence of the immune surveillance molecule TRAIL. CYR61 has an inhibitory effect on some cancers, and suppresses tumor growth of non-small-cell lung cancer (NSCLC) cells, endometrial adenocarcinoma cells, and melanoma cells.
References
Aging-related genes
Aging-related proteins
CCN proteins | CYR61 | Biology | 2,643 |
1,416,951 | https://en.wikipedia.org/wiki/Oxoglutarate%20dehydrogenase%20complex | The oxoglutarate dehydrogenase complex (OGDC) or α-ketoglutarate dehydrogenase complex is an enzyme complex, most commonly known for its role in the citric acid cycle.
Units
Much like pyruvate dehydrogenase complex (PDC), this enzyme forms a complex composed of three components:
Three classes of these multienzyme complexes have been characterized: one specific for pyruvate, a second specific for 2-oxoglutarate, and a third specific for branched-chain α-keto acids. The oxoglutarate dehydrogenase complex has the same subunit structure and thus uses the same cofactors as the pyruvate dehydrogenase complex and the branched-chain alpha-keto acid dehydrogenase complex (TPP, CoA, lipoate, FAD and NAD). Only the E3 subunit is shared among the three complexes.
Properties
Metabolic pathways
This enzyme participates in three different pathways:
Citric acid cycle (KEGG link: MAP00020)
Lysine degradation (KEGG link: MAP00310)
Tryptophan metabolism (KEGG link: MAP00380)
Kinetic properties
The following values are from Azotobacter vinelandii (1):
KM: 0.14 ± 0.04 mM
Vmax : 9 ± 3 μmol.min−1.mg−1
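These constants imply simple Michaelis–Menten behavior, under which the rate at any substrate concentration is v = Vmax·[S] / (KM + [S]). A minimal illustrative sketch (the function name and the assumption of ideal Michaelis–Menten kinetics are mine, not from the source):

```python
def ogdc_rate(s_mM, vmax=9.0, km=0.14):
    """Michaelis-Menten rate (umol min^-1 mg^-1) at substrate
    concentration s_mM (mM), using the A. vinelandii constants above."""
    return vmax * s_mM / (km + s_mM)

# At [S] = KM the enzyme runs at half its maximal rate.
print(round(ogdc_rate(0.14), 2))  # 4.5
```

At very high substrate concentrations the rate approaches Vmax asymptotically, which is why Vmax is reported together with KM.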
Citric acid cycle
Reaction
The reaction catalyzed by this enzyme in the citric acid cycle is:
α-ketoglutarate + NAD+ + CoA → Succinyl CoA + CO2 + NADH
This reaction proceeds in three steps:
decarboxylation of α-ketoglutarate,
reduction of NAD+ to NADH,
and subsequent transfer to CoA, which forms the end product, succinyl CoA.
ΔG°' for this reaction is -7.2 kcal mol−1. The energy needed for this oxidation is conserved in the formation of a thioester bond of succinyl CoA.
Regulation
Oxoglutarate dehydrogenase is a key control point in the citric acid cycle. It is inhibited by its products, succinyl CoA and NADH. A high energy charge in the cell will also be inhibitive. ADP and calcium ions are allosteric activators of the enzyme.
By controlling the amount of available reducing equivalents generated by the Krebs cycle, oxoglutarate dehydrogenase has a downstream regulatory effect on oxidative phosphorylation and ATP production. Reducing equivalents (such as NAD+/NADH) supply the electrons that run through the electron transport chain of oxidative phosphorylation. Increased oxoglutarate dehydrogenase activity serves to increase the concentration of NADH relative to NAD+. High NADH concentrations stimulate an increase in flux through oxidative phosphorylation.
While an increase in flux through this pathway generates ATP for the cell, the pathway also generates free radical species as a side product, which can cause oxidative stress to the cells if left to accumulate.
Oxoglutarate dehydrogenase is considered to be a redox sensor in the mitochondria, and has an ability to change the functioning level of mitochondria to help prevent oxidative damage. In the presence of a high concentration of free radical species, oxoglutarate dehydrogenase undergoes fully reversible free radical-mediated inhibition. In extreme cases, the enzyme can also undergo complete oxidative inhibition.
When mitochondria are treated with excess hydrogen peroxide, flux through the electron transport chain is reduced, and NADH production is halted. Upon consumption and removal of the free radical source, normal mitochondrial function is restored.
It is believed that the temporary inhibition of mitochondrial function stems from the reversible glutathionylation of the E2 lipoic acid domain of oxoglutarate dehydrogenase. Glutathionylation, a form of post-translational modification, occurs during times of increased concentrations of free radicals, and can be undone after hydrogen peroxide consumption via glutaredoxin. Glutathionylation "protects" the lipoic acid of the E2 domain from undergoing oxidative damage, which helps spare the oxoglutarate dehydrogenase complex from oxidative stress.
Oxoglutarate dehydrogenase activity is turned off in the presence of free radicals in order to protect the enzyme from damage. Once free radicals are consumed by the cell, the enzyme's activity is turned back on via glutaredoxin. The reduction in activity of the enzyme under times of oxidative stress also serves to slow the flux through the electron transport chain, which slows production of free radicals.
In addition to free radicals and the mitochondrial redox state, oxoglutarate dehydrogenase activity is also regulated by the ATP/ADP ratio, the ratio of succinyl-CoA to CoA-SH, and the concentrations of various metal ion cofactors (Mg2+, Ca2+). Many of these allosteric regulators act at the E1 domain of the enzyme complex, but all three domains of the enzyme complex can be allosterically controlled. The activity of the enzyme complex is upregulated by high levels of ADP and Pi, Ca2+, and CoA-SH. The enzyme is inhibited by high ATP levels, high NADH levels, and high succinyl-CoA concentrations.
Stress response
Oxoglutarate dehydrogenase plays a role in the cellular response to stress. The enzyme complex undergoes a stress-mediated temporary inhibition upon acute exposure to stress. The temporary inhibition period sparks a stronger up-regulation response, allowing an increased level of oxoglutarate dehydrogenase activity to compensate for the acute stress exposure. Acute exposures to stress are usually at lower, tolerable levels for the cell.
Pathophysiologies can arise when the stress becomes cumulative or develops into chronic stress. The up-regulation response that occurs after acute exposure can become exhausted if the inhibition of the enzyme complex becomes too strong. Stress in cells can cause a deregulation in the biosynthesis of the neurotransmitter glutamate. Glutamate toxicity in the brain is caused by a buildup of glutamate under times of stress. If oxoglutarate dehydrogenase activity is dysfunctional (no adaptive stress compensation), the build-up of glutamate cannot be fixed, and brain pathologies can ensue. Dysfunctional oxoglutarate dehydrogenase may also predispose the cell to damage from other toxins that can cause neurodegeneration.
Pathology
2-Oxo-glutarate dehydrogenase is an autoantigen recognized in primary biliary cirrhosis, a form of acute liver failure. These antibodies appear to recognize oxidized protein that has resulted from inflammatory immune responses. Some of these inflammatory responses are explained by gluten sensitivity. Other mitochondrial autoantigens include pyruvate dehydrogenase and branched-chain alpha-keto acid dehydrogenase complex, which are antigens recognized by anti-mitochondrial antibodies.
Activity of the 2-oxoglutarate dehydrogenase complex is decreased in many neurodegenerative diseases. Alzheimer's disease, Parkinson's disease, Huntington disease, and supranuclear palsy are all associated with an increased oxidative stress level in the brain. Specifically for Alzheimer Disease patients, the activity of oxoglutarate dehydrogenase is significantly diminished. This leads to a possibility that the portion of the TCA cycle responsible for causing the build-up of free radical species in the brain of patients is a malfunctioning oxoglutarate dehydrogenase complex. The mechanism for disease-related inhibition of this enzyme complex remains relatively unknown.
In the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, mitochondrial fatty acid synthesis (mtFASII) is impaired, which is the precursor reaction of lipoic acid biosynthesis. The result is a reduced lipoylation degree of important mitochondrial enzymes, such as oxoglutarate dehydrogenase complex (OGDC).
References
Further reading
External links
EC 1.2.4
Autoantigens
Citric acid cycle | Oxoglutarate dehydrogenase complex | Chemistry | 1,784 |
32,222,497 | https://en.wikipedia.org/wiki/A%20Ergo | The A Ergo is a stand-in stacker forklift truck pioneered by the Swedish truck manufacturing company Atlet AB. When Atlet was founded in 1958, the truck market consisted of many types of trucks, including reach trucks. In 1961, to improve handling efficiency and safety, Atlet launched the stand-in stacker A Ergo as the "impossible truck", an alternative to the pedestrian stackers and reach trucks already on the market.
It handles open load carriers with standard straddle legs, and closed load carriers between wide straddle legs. High drive and lift/lowering speeds, together with the ergonomic design, contribute to high throughput and productivity, and foldable side stabilizers are offered for higher residual capacity. Atlet's modular concept makes it possible to customize each truck for specific needs. For multi-shift applications, the battery is placed on rollers for quick, easy changes.
Other features:
Atlet Modular Concept design for highest First Visit Fix Rate.
AC motor for reduced maintenance plus maximum acceleration and drive speed.
Atlet Stability Support System S3.
References
External links
Atlet Ergo AjN/ASN
Engineering vehicles | A Ergo | Engineering | 251 |
12,953,288 | https://en.wikipedia.org/wiki/IEEE%20Alexander%20Graham%20Bell%20Medal | The IEEE Alexander Graham Bell Medal is an award honoring "exceptional contributions to communications and networking sciences and engineering" in the field of telecommunications. The medal is one of the highest honors awarded by the Institute of Electrical and Electronics Engineers (IEEE) for achievements in telecommunication sciences and engineering.
It was instituted in 1976 by the directors of IEEE, commemorating the centennial of the invention of the telephone by Alexander Graham Bell. The award is presented either to an individual, or to a team of two or three persons.
The institute's reasoning for the award was described thus:
Recipients of the award receive a gold medal, bronze replica, certificate, and an honorarium.
Recipients
As listed by the IEEE:
1976 Amos E. Joel, Jr., William Keister, and Raymond W. Ketchledge
1977 Eberhardt Rechtin
1978 M. Robert Aaron, John S. Mayo, and Eric E. Sumner
1979 A. Christian Jacobaeus
1980 Richard R. Hough
1981 David Slepian
1982 Harold A. Rosen
1983 Stephen O. Rice
1984 Andrew J. Viterbi
1985 Charles K. Kao
1986 Bernard Widrow
1987 Joel S. Engel, Richard H. Frenkiel, and William C. Jakes, Jr.
1988 Robert M. Metcalfe
1989 Gerald R. Ash and Billy B. Oliver
1990 Paul Baran
1991 C. Chapin Cutler, John O. Limb, and Arun N. Netravali
1992 James L. Massey
1993 Donald C. Cox
1994 Hiroshi Inose
1995 Irwin M. Jacobs
1996 Tadahiro Sekimoto
1997 Vinton G. Cerf and Robert E. Kahn
1998 Richard E. Blahut
1999 David G. Messerschmitt
2000 Vladimir A. Kotelnikov
2002 Tsuneo Nakahara
2003 Joachim Hagenauer
2005 Jim K. Omura
2006 John Wozencraft
2007 Norman Abramson
2008 Gerard J. Foschini
2009 Robert McEliece
2010 John Cioffi
2011 Arogyaswami Paulraj
2012 Leonard Kleinrock
2013 Andrew Chraplyvy, Robert Tkach
2014 Dariush Divsalar
2015 Frank Kelly
2016 Roberto Padovani
2017 H. Vincent Poor
2018 Nambirajan Seshadri
2019 Teresa H. Meng
2020 Rajiv Laroia
2021 Nick McKeown
2022 Panganamala R. Kumar
2023 Erwin Hochmair, Ingeborg Hochmair
2025 Richard D. Gitlin
See also
Alexander Graham Bell honors and tributes
IEEE Medal of Honor
IEEE awards
World Communication Awards
References
Alexander Graham Bell Medal
Alexander Graham Bell
Telecommunications engineering | IEEE Alexander Graham Bell Medal | Engineering | 522 |
5,292,724 | https://en.wikipedia.org/wiki/Lithium%20peroxide | Lithium peroxide is the inorganic compound with the formula Li2O2. Lithium peroxide is a white solid, and unlike most other alkali metal peroxides, it is nonhygroscopic. Because of its high oxygen:mass and oxygen:volume ratios, the solid has been used to remove CO2 from and release O2 to the atmosphere in spacecraft.
Preparation
It is prepared by the reaction of hydrogen peroxide and lithium hydroxide. This reaction initially produces lithium hydroperoxide:
LiOH + H2O2 → LiOOH + H2O
This lithium hydroperoxide may exist as lithium peroxide monoperoxohydrate trihydrate (Li2O2·H2O2·3H2O).
Dehydration of this material gives the anhydrous peroxide salt:
2 LiOOH → Li2O2 + H2O2
Li2O2 decomposes at about 450 °C to give lithium oxide:
2 Li2O2 → 2 Li2O + O2
The structure of solid Li2O2 has been determined by X-ray crystallography and density functional theory. The solid features eclipsed "ethane-like" Li6O2 subunits with an O-O distance of around 1.5 Å.
Uses
Air Purification
It is used in air purifiers where weight is important, e.g., spacecraft or other sealed spaces and apparatuses to absorb carbon dioxide and release oxygen in the reaction:
2 Li2O2 + 2 CO2 → 2 Li2CO3 + O2
Like lithium hydroxide, which reacts with carbon dioxide to release Li2CO3 and H2O, lithium peroxide has a high absorption capacity; weight for weight it absorbs more CO2 than lithium hydroxide, and it offers the bonus of releasing oxygen instead of water.
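The weight-for-weight comparison follows from the stoichiometry: two moles of LiOH bind one mole of CO2 (2 LiOH + CO2 → Li2CO3 + H2O), while each mole of Li2O2 binds one mole of CO2. A short sketch using standard atomic weights (variable names are illustrative):

```python
# Molar masses (g/mol), standard atomic weights
M = {"Li": 6.94, "O": 16.00, "H": 1.008, "C": 12.011}

M_LiOH = M["Li"] + M["O"] + M["H"]      # ~23.95 g/mol
M_Li2O2 = 2 * M["Li"] + 2 * M["O"]      # ~45.88 g/mol
M_CO2 = M["C"] + 2 * M["O"]             # ~44.01 g/mol

# 2 LiOH + CO2 -> Li2CO3 + H2O : one CO2 per two LiOH
co2_per_g_lioh = M_CO2 / (2 * M_LiOH)
# 2 Li2O2 + 2 CO2 -> 2 Li2CO3 + O2 : one CO2 per Li2O2
co2_per_g_li2o2 = M_CO2 / M_Li2O2

print(round(co2_per_g_lioh, 3), round(co2_per_g_li2o2, 3))  # 0.919 0.959
```

So lithium peroxide absorbs roughly 4% more CO2 per gram than lithium hydroxide, while additionally releasing oxygen rather than water.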
Polymerization of Styrene
Lithium peroxide can also act as a catalyst for the polymerization of styrene to polystyrene. This polymerization typically relies on radical initiators acting through a free-radical chain mechanism, but lithium peroxide can also initiate radical polymerization under certain conditions, although it is not widely used for this purpose.
Lithium-air Battery
The reversible lithium peroxide reaction is the basis for a prototype lithium–air battery. Using oxygen from the atmosphere allows the battery to eliminate storage of oxygen for its reaction, saving battery weight and size.
See also
Lithium oxide
References
External links
WebElements entry
Peroxides
Lithium compounds
Oxidizing agents | Lithium peroxide | Chemistry | 523 |
24,529,342 | https://en.wikipedia.org/wiki/Super%20Bit%20Mapping | Super Bit Mapping (SBM) is a noise shaping process, developed by Sony for CD mastering.
Sony claims that the Super Bit Mapping process converts a 20-bit signal from a master recording into a 16-bit signal with almost no loss of sound quality, using noise shaping to improve the signal-to-noise ratio over the frequency bands most acutely perceived by human hearing.
Audible quantization error is reduced by noise shaping the error according to an equal-loudness contour.
This processing takes place in dedicated hardware inside the recording device. A similar process is used in Sony's DSD to PCM conversion and is called SBM Direct.
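Sony's actual shaping filter is proprietary, but the error-feedback principle behind noise shaping can be sketched generically. The sketch below is an assumed first-order shaper, not SBM itself (the function name and filter coefficients are illustrative): it re-quantizes samples to 16 bits while feeding each quantization error back into the next sample, so the error spectrum is shifted by the feedback filter rather than spread uniformly.

```python
def noise_shaped_quantize(samples, bits=16, coeffs=(1.0,)):
    """Quantize floating-point samples in [-1, 1) to `bits` bits,
    subtracting filtered past quantization errors from each input so
    the quantization noise is pushed toward frequencies selected by
    the error-feedback FIR filter `coeffs`."""
    scale = 2 ** (bits - 1)
    err = [0.0] * len(coeffs)   # recent quantization errors, newest first
    out = []
    for sample in samples:
        shaped = sample - sum(c * e for c, e in zip(coeffs, err))
        quantized = round(shaped * scale) / scale
        out.append(quantized)
        err = [quantized - shaped] + err[:-1]   # push newest error
    return out
```

With coeffs=(1.0,) the error is highpass-shaped (pushed toward high frequencies); a perceptually tuned shaper would use a higher-order filter whose response follows an equal-loudness contour, which is the idea SBM applies in dedicated hardware.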
See also
Extended Resolution Compact Disc (XRCD)
High Definition Compatible Digital (HDCD)
References
Sound technology
Digital signal processing
Optical disc authoring
Digital audio storage | Super Bit Mapping | Technology | 162 |
7,754,843 | https://en.wikipedia.org/wiki/Swedish%20Accident%20Investigation%20Authority | The Swedish Accident Investigation Authority (SHK), formerly the Swedish Accident Investigation Board in English, is a Swedish government agency tasked with investigating all types of serious civil or military accidents that can occur on land, on the sea or in the air. Incidents are also to be investigated if there was a serious risk of an accident. Its headquarters are in Stockholm.
Directors General
Directors General:
1978-07-01 – 1987-06-30: Göran Steen
1987-07-01 – 1997-05-29: Olof Forssberg
1997-05-30 – 1997-06-08: S-E Sigfridsson (acting)
1997-06-09 – 2002-01-06: Ann-Louise Eksborg
2002-01-07 – 2004-01-31: Lena Svenaeus
2004-02-01 – 2004-05-31: Carin Hellner (acting)
2004-04-01 – 2011-04-17: Åsa Kastman Heuman
2011-04-18 – 2020-05-01: Hans Ytterberg
2020-05-01 – present: John Ahlberk
Notable investigations
Scandinavian Airlines Flight 751 (1991)
M/S Estonia (1994)
Falsterbo Swedish Coast Guard C-212 crash (2006)
MV Finnbirch (2006)
Norwegian Air Force C-130 crash (2012)
Saltsjöbanan train crash (2013)
West Air Sweden Flight 294 (2016)
Skydive Umeå Gippsland GA8 Airvan crash (2019)
See also
Swedish Civil Aviation Administration
Swedish Maritime Administration
References
External links
Rail accident investigators
Organizations investigating aviation accidents and incidents
Aviation in Sweden
Automotive safety
Accident Investigation Board
Transport organizations based in Sweden
Transport safety organizations | Swedish Accident Investigation Authority | Technology | 349 |
11,524,334 | https://en.wikipedia.org/wiki/Standpipe%20%28street%29 | A standpipe is a freestanding pipe fitted with a tap which is installed outdoors to dispense water in areas which do not have a running water supply to the buildings.
Use
In the United Kingdom, an "Emergency Drought Order" permits a water company to shut off the primary water supply to homes, and to supply water instead from tanks or standpipes in the streets. This was done in some areas during the 1976 heat wave, for example.
In some Middle Eastern, Caribbean and North African countries a standpipe is used as a communal water supply for neighbourhoods which lack individual housing water service. In areas such as Morocco, standpipes often yield unreliable service and lead to water scarcity for large numbers of people.
Freeze resistance
In areas where the air or surface ground temperatures reach below freezing point for part or all of the year, some standpipes are equipped with a feature whereby the same mechanism that valves the water for the bib also uncovers a drainage hole (the 'weep hole') at the base of the pipe when the standpipe is closed, ensuring that the column of water drains into the ground rather than remaining in the pipe where it might freeze and expand, bursting the plumbing. Standpipes that are equipped with this feature are sometimes referred to as 'frost-free hydrants' although frost buildup can still occur to a lesser extent.
Gallery
References
External links
Water supply | Standpipe (street) | Chemistry,Engineering,Environmental_science | 283 |
69,287,506 | https://en.wikipedia.org/wiki/S/2019%20S%201 | S/2019 S 1 is a natural satellite of Saturn. Its discovery was announced by Edward Ashton, Brett J. Gladman, Jean-Marc Petit, and Mike Alexandersen on 16 November 2021 from Canada–France–Hawaii Telescope observations taken between 1 July 2019 and 14 June 2021.
S/2019 S 1 is about 5 kilometres in diameter, and orbits Saturn at an average distance of in 443.78 days, at an inclination of 44° to the ecliptic, in a prograde direction and with an eccentricity of 0.623. It belongs to the Inuit group of prograde irregular satellites, and is among the innermost irregular satellites of Saturn. It might be a collisional fragment of Kiviuq and Ijiraq, which share very similar orbital elements.
This moon's eccentric orbit takes it closer than to Iapetus several times per millennium.
References
Inuit group
Irregular satellites
Moons of Saturn
20211116
Moons with a prograde orbit | S/2019 S 1 | Astronomy | 198 |
31,178,109 | https://en.wikipedia.org/wiki/Logarithmically%20concave%20sequence | In mathematics, a sequence a = (a_0, a_1, ..., a_n) of nonnegative real numbers is called a logarithmically concave sequence, or a log-concave sequence for short, if a_i^2 ≥ a_{i-1} a_{i+1} holds for 0 < i < n.
Remark: some authors (explicitly or not) add two further conditions in the definition of log-concave sequences:
the sequence a is non-negative
a has no internal zeros; in other words, the support of a is an interval of ℤ.
These conditions mirror the ones required for log-concave functions.
Sequences that fulfill the three conditions are also called Pólya Frequency sequences of order 2 (PF2 sequences). Refer to chapter 2 of for a discussion on the two notions. For instance, the sequence satisfies the concavity inequalities but not the internal zeros condition.
Examples of log-concave sequences are given by the binomial coefficients along any row of Pascal's triangle and the elementary symmetric means of a finite sequence of real numbers.
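The defining inequality can be checked directly; a small sketch (function name illustrative) that verifies a row of Pascal's triangle, plus a hypothetical sequence satisfying the concavity inequalities while containing internal zeros (so it fails the PF2 conditions):

```python
from math import comb

def is_log_concave(a):
    """Check a[i]**2 >= a[i-1] * a[i+1] at every interior index."""
    return all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

row = [comb(10, k) for k in range(11)]   # a row of Pascal's triangle
assert is_log_concave(row)

# Satisfies the concavity inequalities, yet has internal zeros,
# so its support is not an interval:
assert is_log_concave([1, 1, 0, 0, 1])
```

The check costs O(n) and only compares adjacent triples, which is why log-concavity is easy to verify even for long combinatorial sequences.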
References
See also
Unimodality
Logarithmically concave function
Logarithmically concave measure
Sequences and series | Logarithmically concave sequence | Mathematics | 220 |
419,993 | https://en.wikipedia.org/wiki/Fibrocartilage%20callus | A fibrocartilage callus is a temporary formation of fibroblasts and chondroblasts which forms at the area of a bone fracture as the bone attempts to heal itself. The cells eventually dissipate and become dormant, lying in the resulting extracellular matrix that is the new bone.
The callus is the first sign of union visible on x-rays, usually 3 weeks after the fracture. Callus formation is slower in adults than in children, and in cortical bones than in cancellous bones.
See also
Bone healing
References
Morgan, Elise F., et al. “Overview of Skeletal Repair (Fracture Healing and Its Assessment).” Methods in Molecular Biology Skeletal Development and Repair, 2014, pp. 13–31.
External links
Bone fractures
Physiology | Fibrocartilage callus | Biology | 160 |
53,844,682 | https://en.wikipedia.org/wiki/Renyi%20Zhang | Renyi Zhang is an American geoscientist, currently a university distinguished professor and Harold G. Haynes Chair at Texas A&M University and an elected fellow of the American Association for the Advancement of Science and American Geophysical Union.
References
Year of birth missing (living people)
Living people
Fellows of the American Association for the Advancement of Science
Texas A&M University faculty
American geochemists | Renyi Zhang | Chemistry | 80 |
3,100,179 | https://en.wikipedia.org/wiki/Locally%20nilpotent | In the mathematical field of commutative algebra, an ideal I in a commutative ring A is locally nilpotent at a prime ideal p if Ip, the localization of I at p, is a nilpotent ideal in Ap.
In non-commutative algebra and group theory, an algebra or group is locally nilpotent if and only if every finitely generated subalgebra or subgroup is nilpotent. The subgroup generated by the normal locally nilpotent subgroups is called the Hirsch–Plotkin radical and is the generalization of the Fitting subgroup to groups without the ascending chain condition on normal subgroups.
A locally nilpotent ring is one in which every finitely generated subring is nilpotent: locally nilpotent rings form a radical class, giving rise to the Levitzki radical.
References
Commutative algebra | Locally nilpotent | Mathematics | 185 |
14,617,864 | https://en.wikipedia.org/wiki/Czech%20chemical%20nomenclature | Foundations of the Czech chemical nomenclature () and terminology were laid during the 1820s and 1830s. These early naming conventions fit the Czech language and, being mostly the work of a single person, Jan Svatopluk Presl, provided a consistent way to name chemical compounds. Over time, the nomenclature expanded considerably, following the recommendations by the International Union of Pure and Applied Chemistry (IUPAC) in the recent era.
Unlike the nomenclature that is used in biology or medicine, the chemical nomenclature stays closer to the Czech language and uses Czech pronunciation and inflection rules, but developed its own, very complex, system of morphemes (taken from Greek and Latin), grammar, syntax, punctuation and use of brackets and numerals. Certain terms (such as ) use the phonetic transcription, but the rules for spelling are inconsistent.
History
Medieval alchemists in the Czech lands used obscure and inconsistent terminology to describe their experiments. Edward Kelley, an alchemist at the court of Rudolf II, even invented his own secret language. Growth of the industry in the region during the 19th century, and the nationalistic fervour of the Czech National Revival, led to the development of Czech terminologies for natural and applied sciences.
Jan Svatopluk Presl (1791–1849), an all-round natural scientist, proposed a new Czech nomenclature and terminology in the books Lučba čili chemie zkusná (1828–1835) and Nerostopis (1837). Presl had invented Czech neologisms for most of the then known chemical elements; ten of these, including , , , and , have entered the language. Presl also created naming conventions for oxides, in which the electronegative component of the compound became the noun and the electropositive component became an adjective. The adjectives were associated with a suffix, according to the valence number of the component they represented. Originally there were five suffixes: , , , , and . These were later expanded to eight by Vojtěch Šafařík: , , , , and , , , and , representing oxidation numbers from 1 to 8. For example, corresponds to and to .
Salts were identified by the suffix added to the noun. Many of the terms created by Presl derive from Latin, German or Russian; only some were retained in use.
A similar attempt published in Orbis pictus (1852) by Karel Slavoj Amerling (1807–1884) to create Czech names for the chemical elements (and to order the elements into a structure, similar to the work of Russian chemist Nikolay Beketov) was not successful.
Later work on the nomenclature was performed by Vojtěch Šafařík (1829–1902). In 1876 Šafařík started to publish the journal Listy chemické, the first chemistry journal in Austria-Hungary (today issued under the name Chemické Listy), and this journal has played an important role in the codification of the nomenclature and terminology. During a congress of Czech chemists in 1914, the nomenclature was reworked, and the new system became normative in 1918. Alexandr Sommer-Batěk (1874–1944) and Emil Votoček (1872–1950) were the major proponents of this change. Presl's original conventions remained in use, but formed only a small part of the naming system.
Several changes were applied to the basic terminology during the second half of the 20th century, usually moving closer to the international nomenclature. For example, the former term was officially replaced by , by and later even . The spelling of some chemical elements also changed: should now be written . Adoption of these changes by the Czech public has been quite slow, and the older terms are still used decades later.
The Czechoslovak Academy of Sciences, founded in 1953, took over responsibility for maintenance of the nomenclature and proper implementation of the IUPAC recommendations. Since the Velvet Revolution (1989) this activity has slowed down considerably.
Oxidation state suffixes
Notes
External links
Website about the early history of the Czech chemical nomenclature (in Czech)
Article in a Czech Academy of Sciences bulletin: current problems faced by the Czech chemical nomenclature (2000, section "Současný stav a problémy českého chemického názvosloví")
Organizations
Journal Chemické listy (nomenclature related articles are in Czech, ISSN 1213-7103, printed version ISSN 0009-2770)
Czech Chemical Society (Česká společnost chemická, ČSCH, founded in 1866)
National IUPAC Centre for the Czech Republic
Czech language
Science and technology in the Czech Republic
Chemical nomenclature | Czech chemical nomenclature | Chemistry | 946 |
5,111,875 | https://en.wikipedia.org/wiki/Cadmium%20iodide | Cadmium iodide is an inorganic compound with the formula CdI2. It is a white hygroscopic solid. It also can be obtained as a mono- and tetrahydrate. It has few applications. It is notable for its crystal structure, which is typical for compounds of the form MX2 with strong polarization effects.
Preparation
Cadmium iodide is prepared by the addition of cadmium metal, or its oxide, hydroxide or carbonate to hydroiodic acid. Also, the compound can be made by heating cadmium with iodine.
Applications
Historically, cadmium iodide was used as a catalyst for the Henkel process, a high-temperature isomerisation of dipotassium phthalate to yield the terephthalate. The salt was then treated with acetic acid to yield potassium acetate and commercially valuable terephthalic acid.
While uneconomical compared to the production of terephthalic acid from p-xylene, the Henkel method has been proposed as a potential route to produce terephthalic acid from furfural. As existing Bio-PET is still reliant on petroleum as a source of p-xylene, the Henkel process could theoretically offer a completely bioplastic route to polyethylene terephthalate.
Crystal structure
In cadmium iodide the iodide anions form a hexagonal closely packed arrangement while the cadmium cations fill all of the octahedral sites in alternate layers. The resultant structure consists of a layered lattice. This same basic structure is found in many other salts and minerals. Cadmium iodide is mostly ionically bonded but with partial covalent character.
Cadmium iodide's crystal structure is the prototype on which the crystal structures of many other compounds can be considered to be based. Compounds with any of the following characteristics tend to adopt the CdI2 structure:
Iodides of moderately polarising cations; bromides and chlorides of strongly polarising cations
Hydroxides of dications, i.e. compounds with the general formula M(OH)2
Sulfides, selenides and tellurides (chalcogenides) of tetracations, i.e. compounds with the general formula MX2, where X = S, Se, Te
References
Cadmium compounds
Iodides
Metal halides
Photographic chemicals
Crystal structure types | Cadmium iodide | Chemistry,Materials_science | 505 |
57,294,720 | https://en.wikipedia.org/wiki/Dinnie%20Stones | The Dinnie Stones (also called Stanes or Steens) are a pair of Scottish lifting stones located in Potarch, Aberdeenshire. They were made famous by strongman Donald Dinnie, who reportedly carried the stones barehanded across the width of the Potarch Bridge, a distance of , in 1860. They remain in use as lifting stones.
The stones are composed of granite, with iron rings affixed. They have a combined weight of , with the larger stone weighing and the smaller stone weighing .
The stones were reportedly selected in the 1830s as counterweights for use in maintaining the Potarch Bridge. They were lost following World War I, but were rediscovered in 1953 by David P. Webster.
Replicas
Replicas of the Dinnie Stones (pioneered by Gordon Dinnie) have been used in international competitions most notably during the Rogue record breakers event of the Arnold Strongman Classic.
While the replica Dinnie Stones are very close in weight (with the replicas being 1 lb heavier), there are several differences between the sets of stones. The replica stones have slightly different handles, the sets of stones are different shapes, and the replicas sit one inch higher than the original stones. The rules for the walk also differ, with lifters being allowed one 10-second drop while walking with the replica stones.
World records
Carrying
Original method: The ultimate challenge is to replicate the 1860 performance of Donald Dinnie, by walking the original stones (heavier stone to be gripped from the front and the lighter stone from the back) over the historical Potarch Bridge distance of . Only 6 other men have ever been recorded as matching this feat (unassisted without using any weightlifting straps). The first to replicate it was Donald Dinnie's father, Robert Dinnie, though some sources state that Robert in fact performed the carry before Donald did. The feat then went unrepeated for 113 years, until Northern Irishman Jack Shanks did so on 3 June 1973. It was subsequently matched by Mark Haydock (2012), Mark Felix (2014), Brian Irwin (2017) and Pete Seddon (2019).
Farmer's walk method: Another feat of strength is to pick up the stones from the sides and walk them in a farmers walk style carry until dropping them. Picking up of the stones this way is more challenging than the original method because it makes the range of motion of the lift longer and takes the wider sumo stance out of the equation. This record, with the original stones, is held by Laurence Shahlaei, who carried them a distance of in 2023. Mitchell Hooper holds the record for the longest distance walked with the Rogue replica Dinnie stones, carrying them a distance of in 2024.
Holding
The record for lifting and holding the stones up unassisted (which is regarded as a world class feat of grip strength) for the longest time is 46.30 seconds, set on 18 May 2019 by Mark Haydock of England. This record was first introduced at the Aboyne Highland Games in 2016, and the first holder of the record was James Gardner. Annika Eilmann of Finland holds the women's record in this with a time of 10.31 seconds, also set in 2019. Kevin Faires holds the record with the Rogue replica Dinnie stones with 41.31 seconds while Gabi Dixon holds the women's record with 6.86 seconds, both achieved during 2023 Rogue Record Breakers.
Lifting
, 370 individuals have managed to lift the original stones off the ground (also known as putting the wind under the stones, i.e. just lifting/ not walking with them). David Prowse was the first to do so assisted (with straps) in October, 1963, followed by Charlie McLaggan, Ken Morrison and Bill Bangert (1971). Jack Shanks was the first to lift them unassisted (raw grip without straps) in 1972, followed by Syd Strachan, Jim Splaine, Imlach Shearer (1973) and Jim Fraser (1978). 13 women have also managed to lift the stones. The first was Jan Todd in 1979, a feat which was not matched by any woman for the next 39 years until Leigh Holland-Keen in 2018 (both assisted with straps). In January 2019, Emmajane Smith lifted the stones without straps, making her the first woman to do so. In June 2019, Annika Eilmann lifted the stones without straps and also held them, making her the first woman to do so. In October 2019, Chloe Brennan at a bodyweight of 64 kg (141 lb) lifted the stones (unassisted partial lift) and became the lightest lifter to put the wind beneath the stones. In May 2019, Kristin Rhodes became the first woman to lift the Rogue replica Dinnie stones unassisted.
Most number of lifts: Jim Splaine became the first person to lift the Dinnie Stones more than 50 times, a feat he went on to achieve a total of 67 times from 1973 to 1990. Most of his early lifts were done at a bodyweight of 65 kg (143 lb) and with his son sitting on his shoulders. Brett Nicol is the current record holder for lifting the Dinnie Stones for the most number of times, with 499 lifts from 2008 to date. In 2012 Mark Haydock set a record by lifting the stones 25 times in a single day, including 10 times within 1 minute.
Notes:
See also
History of physical training and fitness
Húsafell Stone
References
Stones
Sport in Aberdeenshire
Tourist attractions in Aberdeenshire
History of Aberdeenshire
Weightlifting in Scotland
Highland games in Scotland
Lost objects | Dinnie Stones | Physics | 1,147 |
57,803,107 | https://en.wikipedia.org/wiki/Alruba | Alruba, a name derived from Arabic for "the foal", is a suspected astrometric binary star system in the northern circumpolar constellation of Draco. It is just barely visible to the naked eye as a dim point of light with an apparent visual magnitude of 5.76. Based on parallax measurements obtained during the Gaia mission, it is located at a distance of about from the Sun. The system is drifting closer with a radial velocity of −2 km/s.
The visible component is an A-type main-sequence star with a stellar classification of A0 V. It is about 58 million years old with three times the mass of the Sun and has a high rate of spin, showing a projected rotational velocity of 170 km/s. The star is radiating 147 times the luminosity of the Sun from its photosphere at an effective temperature of 9,226 K. The system is a source for X-ray emission, which is most likely coming from the unseen companion.
Nomenclature
In the Henry Draper catalogue this system has the designation HD 161693, while it has the identifier HR 6618 in the Bright Star Catalogue.
It bore the traditional Arabic name الربع Al Rubaʽ "the foal" (specifically a young camel born in the spring), a member of the Mother Camels asterism in early Arabic astronomy.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Alruba for this star on 1 June 2018 and it is now so entered on the List of IAU-approved Star Names.
References
A-type main-sequence stars
Draco (constellation)
161693
086782
6618
Alruba
BD+53 1978 | Alruba | Astronomy | 382 |
41,460,890 | https://en.wikipedia.org/wiki/Iota1%20Muscae | {{DISPLAYTITLE:Iota1 Muscae}}
ι1 Muscae, Latinised as Iota1 Muscae, is a solitary star in the southern constellation of Musca, near the southern constellation border with Chamaeleon. It is visible to the naked eye as a dim, orange-hued star with an apparent visual magnitude of 5.05. The star is located around 222 light-years distant from the Sun based on parallax, and is drifting further away with a radial velocity of 27.5 km/s.
This object is an aging giant star with a stellar classification of K0III; a star that has used up its core hydrogen and is cooling and expanding. At present it has nearly 12 times the girth of the Sun. The star is radiating 56.5 times the luminosity of the Sun from its swollen photosphere at an effective temperature of about .
References
K-type giants
Musca
Muscae, Iota1
Durchmusterung objects
116244
065468
5042 | Iota1 Muscae | Astronomy | 219 |
33,550,497 | https://en.wikipedia.org/wiki/R%C3%A9union%27s%20coral%20reef | Réunion is an island located near the eastern coast of Madagascar in the Indian Ocean. Its coral reef covers a concentrated part of the western littoral. The coral reef is located between St Leu and St Gilles. It is more than long and ranges in width from in its northern part at St. Gilles to in the south. Since the island is distant from the continental shelf, the sea becomes deep not far from the coast. The presence of nearby deeper ocean currents supports a rich biodiversity fauna and flora in the reef environment.
Environment and threats
Coral reefs are among the most densely populated marine environments. The coral reef fringing Réunion is a rich habitat for soft and hard corals that provide food and habitat for an abundance of fish and shellfish. The corals feed on a diet of plankton and algae which they catch with their tentacles. The growth cycle of corals is relatively slow, with growth of about per year. Male corals disseminate sperm cells into the water, which fertilize free-floating eggs released by the female corals. Reproduction is limited due to the proportionately small number of male corals.
The integrity of Réunion's reef has recently become threatened by an increase in pollution and human exploitation of the ecologically-sensitive marine life. Ocean water quality and the health of coral species are both symbiotic and reflexive; polluted water negatively impacts the health of coral reefs, and unhealthy reefs add to the degradation of ocean water quality. Studies show that ocean pollution inhibits coral growth and therefore contributes to algal blooms, which result in hypoxic conditions for coral reefs.
Coral is very sensitive to variations in water temperature. The effects of global warming on oceans have a direct impact on the health of coral reefs. The optimal water temperature for coral at Réunion is between 23 °C and 28 °C. Increased surface water temperature related to global warming can contribute to disease and bleaching of coral reefs, as is apparent on Réunion.
Natural cyclones and anthropogenic disturbances such as urban development of catchment areas affect the reefs of Réunion. These ecological disturbances have led to an increase in coral reef nutrient levels via submarine groundwater discharge. Increased nutrient levels have resulted in the modification of flora and fauna in the benthic zone. The coral reef is a natural barrier that protects the coast from typhoons.
Management
The least vulnerable sectors of the reef are in St Paul and l’Etang-Salé (in the north), due to the absence of a reef ecosystem and the stability of the shoreline. Due to pressures from the nearby urban areas, the sector in Grande Anse and Boucan Canot is moderately vulnerable, but there are few severe impacts. The highly vulnerable points are located at la Pointe des Aigrettes, la Pointe au Sel, la Ravine Blanche, St-Pierre and Grand Bois (in the south). The main factors causing vulnerability include beach erosion, a poor recovery rate of coral reef flats, hydrodynamic conditions, and urbanisation, which puts pressure on the soil and aggravates coastal erosion, degrades the landscape, and generates pollution. The reef areas near the dock at Pirogue and St-Pierre Ville are suffering irreversible deterioration due to increasing development.
The Marine Nature Reserve, created in 2007, plays a role in protecting the reef. By educating visitors, instilling respect and awareness of protective regulations, and working to decrease poaching, the Marine Nature Reserve has already made some progress in returning the reef to health.
One way to assess the degeneration is to measure the carrying capacity, one of the key indicators for coast management. Carrying capacity refers to the limit beyond which the risk of irreparable degradation of the environment and the social climate is high. It is evaluated using 45 criteria, including anarchic development (unplanned construction), unmanaged use of land, and a lack of regard for the local architecture. Anarchic development compromises the environmental health of the water and the reef: it visually degrades the coast, increases atmospheric pollution through car and boat emissions, increases the release of waste water and solid waste in the absence of appropriate treatment infrastructure, saturates service infrastructure during the high season, accelerates the loss of habitat and biodiversity, drives overconsumption of natural resources by a growing population demanding facilities such as golf courses, causes the abandonment of traditional, conservation-friendly activities, and alters socio-cultural identities through changes in lifestyle.
The International Coral Reef Initiative (ICRI), in partnership with the Global Fund for Coral Reefs (GFCR) and the UN Climate Change High-Level Climate Champions, has launched the Coral Reef Breakthrough. This initiative aims to secure the future of at least 125,000 km² of shallow-water tropical coral reefs by 2030 through investments of at least US$12 billion. The strategy focuses on mitigating local drivers of loss, doubling the area of coral reefs under effective protection, accelerating restoration, and securing significant investments from both public and private sources.
Economic impact
The coral barrier is an important resource for Réunion. It sustains the standards of living of these parts of the island due to increased tourism profits and its influence on real estate prices.
Many activities are organized around the coral reef, making it an essential part of the island's economy. Activities include scuba diving, snorkelling, boat trips, fishing, helicopter flights, and paragliding. Many diving schools suggest training sessions and first dives at reef sites. Local businesses, organizations, and government localities organize photography contests and other events to boost tourism around the reef.
In a recent study, specialists estimated the benefits of leisure activities such as scuba diving to be above €2,076,150, water sports above €118,018, and diving boats and submarines above €1,851,614. The coral reef influences the price of local housing; homes close to a beach protected by corals enjoy lower real estate prices.
See also
List of reefs
References
Further reading
Chabanet, P.; L. Bigot, Naim, O.; Garnier, R.; Tessier E.; Moyne-Picard, M. Coral reef monitoring at Reunion island (Western Indian Ocean) using the GCRMN method, Oct. 2000., Proceedings 9th International Coral Reef Symposium, Bali, Indonesia
Holland, J.S., 2011, 'Une fragile muraille', National Geographic France, 27 September 2011, p. 2.
Lison, C., 2011, 'Rencontre sous la mer', National Geographic France, 27 September 2011, p. 24.
Montaggioni, L., 2007, Coraux et Récifs, archives du climat., société géologique de France Vuibert,
Saffache, P., Université des Antilles et de la Guyane, Campus de Schoelcher, Département de Géographie, From Degradation to Environmental Management: Case in Point: The Reunion Island Sea Bed, Martinique
External links
http://www.reunion.fr/modules/rechercher/rechercher-dans-les-services-touristiques.html
http://coraux.univ-reunion.fr/
http://www.liledelareunion.com/Fr/economie/index.php
http://www.aquaportail.com/modules/news/index.php?storytopic=12&start=586
http://coraux.univ-reunion.fr/spip.php?article5
http://www.liledelareunion.com/
http://en.ird.fr/the-media-library/scientific-news-sheets/354-reunion-island-coral-reefs-in-poor-health
http://vieoceane.free.fr/paf/ficheb3.html
Landforms of Réunion
Reefs of France
Coral reefs | Réunion's coral reef | Biology | 1,712 |
10,261,692 | https://en.wikipedia.org/wiki/Satellite%20ground%20track | A satellite ground track or satellite ground trace is the path on the surface of a planet directly below a satellite's trajectory. It is also known as a suborbital track or subsatellite track, and is the vertical projection of the satellite's orbit onto the surface of the Earth (or whatever body the satellite is orbiting).
A satellite ground track may be thought of as a path along the Earth's surface that traces the movement of an imaginary line between the satellite and the center of the Earth. In other words, the ground track is the set of points at which the satellite will pass directly overhead, or cross the zenith, in the frame of reference of a ground observer.
The ground track of a satellite can take a number of different forms, depending on the values of the orbital elements, the parameters that define the size, shape, and orientation of the satellite's orbit. Identifying a satellite from its track nonetheless relies on recognizing the physical object in motion; this was emphasised during speculation over the Vela incident, in which the identity of the object in question was the subject of numerous theories.
Direct and retrograde motion
Typically, satellites have a roughly sinusoidal ground track. A satellite with an orbital inclination between zero and ninety degrees is said to be in what is called a direct or prograde orbit, meaning that it orbits in the same direction as the planet's rotation. A satellite with an orbital inclination between 90° and 180° (or, equivalently, between 0° and −90°) is said to be in a retrograde orbit.
A satellite in a direct orbit with an orbital period less than one day will tend to move from west to east along its ground track. This is called "apparent direct" motion. A satellite in a direct orbit with an orbital period greater than one day will tend to move from east to west along its ground track, in what is called "apparent retrograde" motion. This effect occurs because the satellite orbits more slowly than the speed at which the Earth rotates beneath it. Any satellite in a true retrograde orbit will always move from east to west along its ground track, regardless of the length of its orbital period.
Because a satellite in an eccentric orbit moves faster near perigee and slower near apogee, it is possible for a satellite to track eastward during part of its orbit and westward during another part. This phenomenon allows for ground tracks that cross over themselves in a single orbit, as in the geosynchronous and Molniya orbits discussed below.
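For a circular orbit, the roughly sinusoidal ground track described above can be computed directly from the inclination and period. The following Python sketch illustrates this; the function name and the convention of an ascending-node crossing at longitude 0 at t = 0 are illustrative assumptions, not from the source:

```python
from math import sin, cos, asin, atan2, degrees, radians, pi

def ground_track(incl_deg, period_s, t_s, sidereal_day_s=86164.1):
    """Sub-satellite latitude/longitude (degrees) for a circular orbit,
    measured from an ascending-node crossing at longitude 0 at t = 0."""
    i = radians(incl_deg)
    u = 2 * pi * t_s / period_s              # argument of latitude
    lat = asin(sin(i) * sin(u))              # spherical trigonometry
    lon_inertial = atan2(cos(i) * sin(u), cos(u))
    lon = lon_inertial - 2 * pi * t_s / sidereal_day_s  # Earth rotates beneath
    lon_deg = (degrees(lon) + 180.0) % 360.0 - 180.0    # wrap to (-180, 180]
    return degrees(lat), lon_deg

# Quarter of a 90-minute, 51.6-degree-inclination orbit (ISS-like):
lat, lon = ground_track(51.6, 5400.0, 1350.0)
# lat equals the inclination (51.6) here: the northernmost point of the track
```

Note the longitude term subtracted for Earth's rotation: for short-period direct orbits it is small per orbit, producing the apparent eastward motion described above, while for periods over a day it dominates and the track appears to move west.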
Effect of orbital period
A satellite whose orbital period is an integer fraction of a day (e.g., 24 hours, 12 hours, 8 hours, etc.) will follow roughly the same ground track every day. This ground track is shifted east or west depending on the longitude of the ascending node, which can vary over time due to perturbations of the orbit. If the period of the satellite is slightly longer than an integer fraction of a day, the ground track will shift west over time; if it is slightly shorter, the ground track will shift east.
As the orbital period of a satellite increases, approaching the rotational period of the Earth (in other words, as its average orbital speed slows towards the rotational speed of the Earth), its sinusoidal ground track will become compressed longitudinally, meaning that the "nodes" (the points at which it crosses the equator) will become closer together until at geosynchronous orbit they lie directly on top of each other. For orbital periods longer than the Earth's rotational period, an increase in the orbital period corresponds to a longitudinal stretching out of the (apparent retrograde) ground track.
A satellite whose orbital period is equal to the rotational period of the Earth is said to be in a geosynchronous orbit. Its ground track will have a "figure eight" shape over a fixed location on the Earth, crossing the equator twice each day. It will track eastward when it is on the part of its orbit closest to perigee, and westward when it is closest to apogee.
A special case of the geosynchronous orbit, the geostationary orbit, has an eccentricity of zero (meaning the orbit is circular), and an inclination of zero in the Earth-Centered, Earth-Fixed coordinate system (meaning the orbital plane is not tilted relative to the Earth's equator). The "ground track" in this case consists of a single point on the Earth's equator, above which the satellite sits at all times. Note that the satellite is still orbiting the Earth — its apparent lack of motion is due to the fact that the Earth is rotating about its own center of mass at the same rate as the satellite is orbiting.
Effect of inclination
Orbital inclination is the angle formed between the plane of an orbit and the equatorial plane of the Earth. The geographic latitudes covered by the ground track will range from –i to i, where i is the orbital inclination. In other words, the greater the inclination of a satellite's orbit, the further north and south its ground track will pass. A satellite with an inclination of exactly 90° is said to be in a polar orbit, meaning it passes over the Earth's north and south poles.
Launch sites at lower latitudes are often preferred partly for the flexibility they allow in orbital inclination; the initial inclination of an orbit is constrained to be greater than or equal to the launch latitude. Vehicles launched from Cape Canaveral, for instance, will have an initial orbital inclination of at least 28°27′, the latitude of the launch site—and to achieve this minimum requires launching with a due east azimuth, which may not always be feasible given other launch constraints. At the extremes, a launch site located on the equator can launch directly into any desired inclination, while a hypothetical launch site at the north or south pole would only be able to launch into polar orbits. (While it is possible to perform an orbital inclination change maneuver once on orbit, such maneuvers are typically among the most costly, in terms of fuel, of all orbital maneuvers, and are typically avoided or minimized to the extent possible.)
In addition to providing for a wider range of initial orbit inclinations, low-latitude launch sites offer the benefit of requiring less energy to make orbit (at least for prograde orbits, which comprise the vast majority of launches), due to the initial velocity provided by the Earth's rotation. The desire for equatorial launch sites, coupled with geopolitical and logistical realities, has fostered the development of floating launch platforms, most notably Sea Launch.
Effect of argument of perigee
If the argument of perigee is zero, meaning that perigee and apogee lie in the equatorial plane, then the ground track of the satellite will appear the same above and below the equator (i.e., it will exhibit 180° rotational symmetry about the orbital nodes.) If the argument of perigee is non-zero, however, the satellite will behave differently in the northern and southern hemispheres. The Molniya orbit, with an argument of perigee near −90°, is an example of such a case. In a Molniya orbit, apogee occurs at a high latitude (63°), and the orbit is highly eccentric (e = 0.72). This causes the satellite to "hover" over a region of the northern hemisphere for a long time, while spending very little time over the southern hemisphere. This phenomenon is known as "apogee dwell", and is desirable for communications for high latitude regions.
Repeat orbits
As orbital operations are often required to monitor a specific location on Earth, orbits that cover the same ground track periodically are often used. On Earth, these orbits are commonly referred to as Earth-repeat orbits, and are often designed with "frozen orbit" parameters to achieve a repeat ground track orbit with stable (minimally time-varying) orbital elements. These orbits use the nodal precession effect to shift the orbit so the ground track coincides with that of a previous orbit, essentially balancing out the offset in the revolution of the orbited body. The longitudinal rotation of the planet after a certain period of time is given by:
ΔL₁ = −2π (t / T_E)
where
t is the time elapsed
T_E is the time for a full revolution of the orbited body, in the case of Earth one sidereal day
The effect of the nodal precession on the ground track can be quantified as:
ΔL₂ = −3π (J₂ R² cos i / (a² (1 − e²)²)) (t / T)
where
J₂ is the body's second dynamic form factor
R is the body's radius
i is the orbital inclination
a is the orbit's semi-major axis
e is the orbital eccentricity
T is the orbital period of the satellite
These two effects must combine to a whole number of revolutions after a set number of orbital revolutions j and (sidereal) days k. Hence, equating the elapsed time to j orbital periods of the satellite (t = jT, with T = 2π √(a³/μ)) and combining the above two equations yields an equation which holds for any orbit that is a repeat orbit:
2π k = j (2π (T / T_E) + 3π J₂ R² cos i / (a² (1 − e²)²))
where
μ is the standard gravitational parameter for the body being orbited
j is the number of orbital revolutions after which the same ground track is covered
k is the number of sidereal days after which the same ground track is covered
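As a numeric illustration of the nodal precession term (standard textbook Earth constants, not values from this article), the sketch below computes the J₂ precession rate of the ascending node for a circular 800 km orbit and solves for the inclination that makes the orbit sun-synchronous, i.e. with the node drifting eastward at about 0.9856° per day to track the Sun:

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_E = 6378.137     # km, Earth's equatorial radius
J2 = 1.08263e-3    # Earth's second dynamic form factor

def nodal_precession_rate(a, e, i_deg):
    """J2 secular rate of the ascending node in rad/s (negative = westward)."""
    n = math.sqrt(MU / a**3)   # mean motion
    p = a * (1 - e**2)         # semi-latus rectum
    return -1.5 * n * J2 * (R_E / p)**2 * math.cos(math.radians(i_deg))

def sun_synchronous_inclination(a, e=0.0):
    """Inclination giving a ~0.9856 deg/day eastward node drift."""
    target = math.radians(360.0 / 365.2422) / 86400.0  # rad/s
    n = math.sqrt(MU / a**3)
    p = a * (1 - e**2)
    cos_i = -target / (1.5 * n * J2 * (R_E / p)**2)
    return math.degrees(math.acos(cos_i))

a = R_E + 800.0  # 800 km circular orbit
print(f"sun-synchronous inclination: {sun_synchronous_inclination(a):.2f} deg")
```

This yields about 98.6°, the familiar inclination of sun-synchronous Earth-observation satellites; the same precession rate feeds the repeat-orbit condition discussed above.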
See also
Course (navigation)
Ground tracking station
Pass (spaceflight), the period in which a spacecraft is visible above the local horizon
Satellite revisit period, the time elapsed between observations of the same point on Earth by a satellite
Satellite watching, as a hobby
Subsolar point
Terminator (solar), the moving line that separates the illuminated day side and the dark night side of a planetary body
Notes
References
Lyle, S. and Capderou, Michel (2006) Satellites: Orbits and Missions Springer pp 175–264
External links
Satellite Tracker at eoPortal.org
satview.org
heavens-above.com
https://isstracker.pl ISS Tracker
Small Satellites (software code)
infosatellites.com
n2yo.com
Astrodynamics
Curves
Satellites
Air navigation | Satellite ground track | Astronomy,Engineering | 2,026 |
14,055,458 | https://en.wikipedia.org/wiki/Reactive%20nitrogen%20species | Reactive nitrogen species (RNS) are a family of antimicrobial molecules derived from nitric oxide (•NO) and superoxide (O2•−) produced via the enzymatic activity of inducible nitric oxide synthase 2 (NOS2) and NADPH oxidase respectively. NOS2 is expressed primarily in macrophages after induction by cytokines and microbial products, notably interferon-gamma (IFN-γ) and lipopolysaccharide (LPS).
Reactive nitrogen species act together with reactive oxygen species (ROS) to damage cells, causing nitrosative stress. Therefore, these two species are often collectively referred to as ROS/RNS.
Reactive nitrogen species are also continuously produced in plants as by-products of aerobic metabolism or in response to stress.
Types
RNS are produced in animals starting with the reaction of nitric oxide (•NO) with superoxide (O2•−) to form peroxynitrite (ONOO−):
•NO (nitric oxide) + O2•− (superoxide) → ONOO− (peroxynitrite)
Superoxide anion (O2−) is a reactive oxygen species that reacts quickly with nitric oxide (NO) in the vasculature. The reaction produces peroxynitrite and depletes the bioactivity of NO. This is important because NO is a key mediator in many important vascular functions including regulation of smooth muscle tone and blood pressure, platelet activation, and vascular cell signaling.
Peroxynitrite itself is a highly reactive species which can directly react with various biological targets and components of the cell including lipids, thiols, amino acid residues, DNA bases, and low-molecular weight antioxidants. However, these reactions happen at a relatively slow rate. This slow reaction rate allows it to react more selectively throughout the cell. Peroxynitrite is able to get across cell membranes to some extent through anion channels. Additionally peroxynitrite can react with other molecules to form additional types of RNS including nitrogen dioxide (•NO2) and dinitrogen trioxide (N2O3) as well as other types of chemically reactive free radicals. Important reactions involving RNS include:
ONOO− + H+ → ONOOH (peroxynitrous acid) → •NO2 (nitrogen dioxide) + •OH (hydroxyl radical)
ONOO− + CO2 (carbon dioxide) → ONOOCO2− (nitrosoperoxycarbonate)
ONOOCO2− → •NO2 (nitrogen dioxide) + O=C(O•)O− (carbonate radical)
•NO + •NO2 ⇌ N2O3 (dinitrogen trioxide)
Biological targets
Peroxynitrite can react directly with proteins that contain transition metal centers. Therefore, it can modify proteins such as hemoglobin, myoglobin, and cytochrome c by oxidizing ferrous heme into its corresponding ferric forms. Peroxynitrite may also be able to change protein structure through the reaction with various amino acids in the peptide chain. The most common reaction with amino acids is cysteine oxidation. Another reaction is tyrosine nitration; however peroxynitrite does not react directly with tyrosine. Tyrosine reacts with other RNS that are produced by peroxynitrite. All of these reactions affect protein structure and function and thus have the potential to cause changes in the catalytic activity of enzymes, altered cytoskeletal organization, and impaired cell signal transduction.
See also
Reactive oxygen species
Reactive sulfur species
Reactive carbonyl species
References
External links
Short article on RN chemistry
Article on global RN trends
Nitrogen compounds
Free radicals | Reactive nitrogen species | Chemistry,Biology | 785 |
150,049 | https://en.wikipedia.org/wiki/S/PDIF | S/PDIF (Sony/Philips Digital Interface) is a type of digital audio interface used in consumer audio equipment to output audio over relatively short distances. The signal is transmitted over either a coaxial cable using RCA or BNC connectors, or a fibre-optic cable using TOSLINK connectors. S/PDIF interconnects components in home theaters and other digital high-fidelity systems.
S/PDIF is based on the AES3 interconnect standard. S/PDIF can carry two channels of uncompressed PCM audio or compressed 5.1 surround sound; it cannot support lossless surround formats that require greater bandwidth.
S/PDIF is a data link layer protocol as well as a set of physical layer specifications for carrying digital audio signals over either optical or electrical cable. The name stands for Sony/Philips Digital Interconnect Format but is also known as Sony/Philips Digital Interface. Sony and Philips were the primary designers of S/PDIF. S/PDIF is standardized in IEC 60958 as IEC 60958 type II (IEC 958 before 1998).
Applications
A common use is to carry two channels of uncompressed digital audio from a CD player to an amplifying receiver.
The S/PDIF interface is also used to carry compressed digital audio for surround sound as defined by the IEC 61937 standard. This mode is used to connect the output of a Blu-ray, DVD player or computer, via optical or coax, to a home theatre amplifying receiver that supports Dolby Digital or DTS Digital Surround decoding.
Hardware specifications
S/PDIF was developed at the same time as the main standard, AES3, used to interconnect professional audio equipment. This resulted from the desire of the various stakeholders to have at least sufficient similarities between the two interfaces to allow the use of the same, or very similar, designs for interfacing ICs. S/PDIF is nearly identical at the protocol level, but uses either coaxial cable (with RCA connectors) or optical fibre (TOSLINK; i.e., JIS F05 or EIAJ optical), both of which cost less than the XLR connection used by AES3. The RCA connectors are typically colour-coded orange to differentiate from other RCA connector uses such as composite video. S/PDIF uses 75 Ω coaxial cable while AES3 uses 110 Ω balanced twisted pair.
Signals transmitted over consumer-grade TOSLINK connections are identical in content to those transmitted over coaxial connectors. Optical provides electrical isolation that can help address ground loop issues in systems. The electrical connection can be more robust and supports longer connections.
Protocol specifications
S/PDIF is used to transmit digital signals in a number of formats, the most common being the 48 kHz sample rate format (used in Digital Audio Tape) and the 44.1 kHz format, used in CD audio. In order to support both sample rates, as well as others that might be needed, the format has no defined bit rate. Instead, the data is sent using biphase mark code, which has either one or two transitions for every bit, allowing the original word clock to be extracted from the signal itself.
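The biphase mark scheme just described is easy to sketch: each data bit occupies two half-bit cells, the line level always toggles at the start of a bit, and a 1 toggles again mid-bit (a 0 does not), so the signal is DC-balanced, polarity-insensitive, and carries its own clock. The following is an illustrative encoder/decoder (a hypothetical helper, not code from any S/PDIF library), ignoring the subframe preambles a real S/PDIF framer would add:

```python
def bmc_encode(bits, level=0):
    """Encode a bit sequence as a list of biphase-mark half-cell levels."""
    cells = []
    for b in bits:
        level ^= 1            # transition at every bit boundary
        cells.append(level)
        if b:                 # a '1' adds a mid-bit transition
            level ^= 1
        cells.append(level)
    return cells

def bmc_decode(cells):
    """Recover bits: unequal half-cells -> 1, equal half-cells -> 0."""
    return [1 if cells[i] != cells[i + 1] else 0
            for i in range(0, len(cells), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
assert bmc_decode(bmc_encode(data)) == data
```

Because decoding only compares adjacent half-cells, the result is the same whichever initial line level the encoder starts from, which is why the receiver can recover the word clock from transitions alone.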
S/PDIF protocol differs from AES3 only in the channel status bits; see the AES3 article for the high-level view. Both protocols group 192 samples into an audio block, and transmit one channel status bit per sample, providing one 192-bit channel status word per channel per audio block. For S/PDIF, the 192-bit status word is identical between the two channels and is divided into 12 words of 16 bits each, with the first 16 bits being a control code.
Data framing
S/PDIF is meant to be used for transmitting 20-bit audio data streams plus other related information. S/PDIF can also transport 24-bit samples by way of four extra bits; however, not all equipment supports this, and these extra bits may be ignored.
To transmit sources with less than 20 bits of sample accuracy, the superfluous bits will be set to zero, and the 4:1–3 bits (sample length) are set accordingly.
IEC 61937 encapsulation
IEC 61937 defines a way to transmit compressed, multi-channel data over S/PDIF.
The control word bit 0:1 is set to indicate the presence of non-linear-PCM data.
The sample rate is set to maintain the needed symbol (data) rate. The symbol rate is usually 64 times the sample rate.
Data is packed into blocks. Each data block is given an IEC 61937 preamble, containing two 16-bit sync words and indicating the state and identity (type, validity, bitstream number, length) of the encapsulated data. Padding is added to fill the block to full size as required by timing.
A number of encodings are available over IEC 61937, including Dolby AC-3/E-AC-3, Dolby TrueHD, MP3, AAC, ATRAC, DTS, and WMA Pro.
Limitations
The receiver does not control the data rate, so it must avoid bit slip by synchronizing its reception with the source clock. Many S/PDIF implementations cannot fully decouple the final signal from influence of the source or the interconnect. Specifically, the process of clock recovery used to synchronize reception may produce jitter. If the DAC does not have a stable clock reference then noise will be introduced into the resulting analog signal. However, receivers can implement various strategies that limit this influence.
See also
ADAT Lightpipe
I2S
McASP
Notes
References
External links
S/PDIF at Epanorama.net
More about channel data bits
Interfacing AES3 and S/PDIF
Audio communications protocols
Computer hardware standards
IEC 60958
Digital audio transport
Digital audio connectors | S/PDIF | Technology | 1,224 |
639,767 | https://en.wikipedia.org/wiki/Holmdel%20Horn%20Antenna | The Holmdel Horn Antenna is a large microwave horn antenna that was used as a satellite communication antenna and radio telescope during the 1960s at the Bell Telephone Laboratories facility located on Crawford Hill in Holmdel Township, New Jersey, United States. It was designated a National Historic Landmark in 1989 because of its association with the research work of two radio astronomers, Arno Penzias and Robert Wilson.
In 1965, while using this antenna, Penzias and Wilson discovered the cosmic microwave background radiation (CMBR) that permeates the universe. This was one of the most important discoveries in physical cosmology since Edwin Hubble demonstrated in the 1920s that the universe was expanding. It provided the evidence that confirmed George Gamow's and Georges Lemaître's "Big Bang" theory of the creation of the universe. This helped change the science of cosmology, the study of the universe's history, from a field for unlimited theoretical speculation into a discipline of direct observation. In 1978 Penzias and Wilson received the Nobel Prize for Physics for their discovery.
Description
The horn antenna at Bell Telephone Laboratories in Holmdel, New Jersey, was constructed on Crawford Hill in 1959 to support Project Echo, the National Aeronautics and Space Administration's passive communications satellite project, which used large aluminized plastic satellite balloons as reflectors to bounce radio signals from one point on the Earth to another.
The antenna is in length with a radiating aperture of and is constructed of aluminum. The antenna's elevation wheel, which surrounds the midsection of the horn, is in diameter and supports the structure's weight using rollers mounted on a base frame. All axial or thrust loads are taken by a large ball bearing at the narrow apex end of the horn. The horn continues through this bearing into the equipment building or cab. The ability to locate receiver equipment at the horn apex, thus eliminating the noise contribution of a connecting line, is an important feature of the antenna. A radiometer for measuring the intensity of radiant energy is located in the cab.
The triangular base frame of the antenna is made from structural steel. It rotates on wheels about a center pintle ball bearing on a turntable track in diameter. The track consists of stress-relieved, planed steel plates individually adjusted to produce a track that is flat to about . The faces of the wheels are cone-shaped to minimize contact friction. A tangential force of 100 pounds (400 N) is sufficient to start the antenna rotating on the turntable. The antenna beam can be directed to any part of the sky using the turntable for azimuth adjustments and the elevation wheel to change the elevation angle or altitude above the horizon.
Except for the steel base frame, which a local steel company made, the Holmdel Laboratory shops fabricated and assembled the antenna under the direction of Mr. H. W. Anderson, who also collaborated on the design. Assistance in the design was also given by Messrs. R. O'Regan and S. A. Darby. Construction of the antenna was completed under the direction of Arthur Crawford.
When not in use, the turntable azimuth sprocket drive is disengaged, allowing the structure to "weathervane" and seek a position of minimum wind resistance. The antenna was designed to withstand winds of , and the entire structure weighs 18 short tons (16 tonnes).
A plastic clapboarded utility shed with two windows, a double door, and a sheet-metal roof, is located on the ground next to the antenna. This structure houses equipment and controls for the antenna and is included as a part of the designation as a National Historic Landmark.
The antenna has not been used for several decades.
Technical
This type of antenna is called a Hogg or horn-reflector antenna, invented by Alfred C. Beck and Harald T. Friis in 1941. It was built by David C. Hogg. It consists of a flaring metal horn with a curved reflecting surface mounted in its mouth at a 45° angle to the long axis of the horn. The reflector is a segment of a parabolic reflector, so the antenna is a parabolic antenna that is fed off-axis. A Hogg horn combines several characteristics useful for radio astronomy. It is extremely broad-band, has calculable aperture efficiency, and the walls of the horn shield it from radiation coming from angles outside the main beam axis. Therefore, the back and side lobes are so minimal that scarcely any thermal energy is received from the ground. Consequently, it is an ideal radio telescope for accurately measuring low levels of weak background radiation. The antenna has a gain of about 43.3 dBi and a beamwidth of about 1.5° at 2.39 GHz and an aperture efficiency of 76%.
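The gain, beamwidth, and efficiency figures quoted above can be cross-checked with the standard aperture-antenna relation G = 4πA_e/λ². The sketch below is an illustrative calculation (not part of the landmark documentation) that recovers the effective aperture implied by the quoted 43.3 dBi gain at 2.39 GHz:

```python
import math

c = 299_792_458.0   # m/s, speed of light
f = 2.39e9          # Hz, frequency quoted for the gain figure
gain_dbi = 43.3     # quoted gain
efficiency = 0.76   # quoted aperture efficiency

lam = c / f                               # wavelength, ~0.125 m
gain = 10 ** (gain_dbi / 10)              # linear gain, ~21,000
a_eff = gain * lam**2 / (4 * math.pi)     # effective aperture, m^2
print(f"effective aperture: {a_eff:.1f} m^2")
# Dividing by the aperture efficiency gives the physical collecting
# area implied by these numbers:
print(f"implied physical aperture: {a_eff / efficiency:.1f} m^2")
```

The effective aperture comes out near 27 m², and the implied physical aperture near 35 m², consistent with an aperture a few metres on a side.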
Preservation
In 2021, the Crawford Hill site was sold to a developer who was interested in building a residential development. This triggered a "Save Holmdel's Horn Antenna" petition to preserve the property as a park, with advocates arguing that the antenna and its site deserved a better fate than destruction to make way for the planned development.
As of October 2023, the site is planned to be preserved. After public support for the preservation of the horn antenna emerged—demonstrated in part by more than 8,000 signatures on a petition disseminated by community groups—the Holmdel Township Committee agreed to pay $5.5 million for the land, including that on which the antenna sits. The town plans to turn the land into a public park.
See also
Andover Earth Station, location of another large Hogg horn antenna
References
Footnotes
Aaronson, Steve. "The Light of Creation: An Interview with Arno A. Penzias and Robert W. Wilson." Bell Laboratories Record. January 1979, pp. 12–18.
Abell, George O. Exploration of the Universe. 4th ed., Philadelphia: Saunders College Publishing, 1982.
Asimov, Isaac. Asimov's Biographical Encyclopedia of Science and Technology. 2nd ed., New York: Doubleday & Company, Inc., 1982.
Bernstein, Jeremy. Three Degrees Above Zero: Bell Labs in the Information Age. New York: Charles Scribner's Sons, 1984.
Chown, Marcus. "A Cosmic Relic in Three Degrees," New Scientist, September 29, 1988, pp. 51–55.
Crawford, A.B., D.C. Hogg and L.E. Hunt. "Project Echo: A Horn-Reflector Antenna for Space Communication," The Bell System Technical Journal, July 1961, pp. 1095–1099.
Disney, Michael. The Hidden Universe. New York: Macmillan Publishing Company, 1984.
Ferris, Timothy. The Red Limit: The Search for the Edge of the Universe. 2nd ed., New York: Quill Press, 1978.
Friedman, Herbert. The Amazing Universe. Washington, DC: National Geographic Society, 1975.
Hey, J.S. The Evolution of Radio Astronomy. New York: Neale Watson Academic Publications, Inc., 1973.
Jastrow, Robert. God and the Astronomers. New York : W. W. Norton & Company, Inc., 1978.
H.T. Kirby-Smith U.S. Observatories: A Directory and Travel Guide. New York: Van Nostrand Reinhold Company, 1976.
Penzias, A.A., and R. W. Wilson. "A Measurement of the Flux Density of CAS A At 4080 Mc/s," Astrophysical Journal Letters, May 1965, pp. 1149–1154.
Further reading
External links
Buildings and structures in Monmouth County, New Jersey
Holmdel Township, New Jersey
National Historic Landmarks in New Jersey
Physical cosmology
Radio telescopes
National Register of Historic Places in Monmouth County, New Jersey | Holmdel Horn Antenna | Physics,Astronomy | 1,598 |
55,193,805 | https://en.wikipedia.org/wiki/Atherton%E2%80%93Todd%20reaction | The Atherton-Todd reaction is a name reaction in organic chemistry, which goes back to the British chemists F. R. Atherton, H. T. Openshaw and A. R. Todd. These described the reaction for the first time in 1945 as a method of converting dialkyl phosphites into dialkyl chlorophosphates. The dialkyl chlorophosphates formed are often too reactive to be isolated, though. For this reason, the synthesis of phosphates or phosphoramidates can follow the Atherton-Todd reaction in the presence of alcohols or amines. The following equation gives an overview over the Atherton-Todd reaction using the reactant dimethyl phosphite as an example:
The reaction takes place after the addition of tetrachloromethane and a base. This base is usually a primary, secondary or tertiary amine. Instead of methyl groups other alkyl or benzyl groups may be present.
Reaction mechanism
A possible reaction mechanism for the Atherton–Todd reaction is presented here for dimethyl phosphite, as in the overview reaction:
First, a tertiary amine is used to cleave a methyl group of dimethyl phosphite. The intermediate 1 results from this reaction step.
Subsequently, intermediate 1 deprotonates the starting compound dimethyl phosphite, forming intermediates 2a and 2b. Intermediate 1 is then regenerated from intermediate 2a.
Finally, intermediate 2b is chlorinated by tetrachloromethane and dimethyl chlorophosphate 3 is formed.
Possible subsequent reactions
After the synthesis of the dimethyl chlorophosphate, a further reaction (for example with a primary amine like aniline) is possible by the following reaction equation:
Atom economy
In this reaction, in addition to the starting compound dialkyl phosphite, tetrachloromethane and a base (an amine) are used in stoichiometric amounts. Only chloroform, formed from tetrachloromethane over two reaction steps, is relevant as a waste product when assessing the atom economy. It should furthermore be kept in mind that the product of the reaction has a greater molar mass than the starting compound. The atom economy of this reaction can therefore be classified as relatively good.
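The chlorination step can be checked for mass balance and atom economy with a quick molar-mass tally. The sketch below is illustrative (standard atomic weights; the amine base is treated as regenerated and excluded, as in the discussion above), for the step (MeO)₂P(O)H + CCl₄ → (MeO)₂P(O)Cl + CHCl₃:

```python
# Standard atomic weights (g/mol)
W = {'H': 1.008, 'C': 12.011, 'O': 15.999, 'P': 30.974, 'Cl': 35.453}

def mw(formula):
    """Molar mass from an {element: count} dict."""
    return sum(W[el] * n for el, n in formula.items())

dimethyl_phosphite = mw({'C': 2, 'H': 7, 'O': 3, 'P': 1})              # (MeO)2P(O)H
tetrachloromethane = mw({'C': 1, 'Cl': 4})                             # CCl4
chlorophosphate    = mw({'C': 2, 'H': 6, 'Cl': 1, 'O': 3, 'P': 1})     # (MeO)2P(O)Cl
chloroform         = mw({'C': 1, 'H': 1, 'Cl': 3})                     # CHCl3

reactants = dimethyl_phosphite + tetrachloromethane
products = chlorophosphate + chloroform
assert abs(reactants - products) < 1e-9   # the equation is balanced

atom_economy = chlorophosphate / reactants
print(f"atom economy of the chlorination step: {atom_economy:.1%}")
```

This gives about 55% for the chlorination step alone, and confirms that the chlorophosphate (≈144 g/mol) outweighs the starting phosphite (≈110 g/mol), with chloroform the only by-product.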
See also
The Atherton-Todd reaction is related to the Appel reaction. In the Appel reaction, tetrachloromethane is used for chlorination as well.
References
Name reactions | Atherton–Todd reaction | Chemistry | 550 |
491,714 | https://en.wikipedia.org/wiki/Gerard%20%27t%20Hooft | Gerardus "Gerard" 't Hooft (; born July 5, 1946) is a Dutch theoretical physicist and professor at Utrecht University, the Netherlands. He shared the 1999 Nobel Prize in Physics with his thesis advisor Martinus J. G. Veltman "for elucidating the quantum structure of electroweak interactions".
His work concentrates on gauge theory, black holes, quantum gravity and fundamental aspects of quantum mechanics. His contributions to physics include a proof that gauge theories are renormalizable, dimensional regularization and the holographic principle.
Biography
Early life
Gerard 't Hooft was born in Den Helder on July 5, 1946, but grew up in The Hague. He was the middle child of a family of three. He comes from a family of scholars. His great uncle was Nobel prize laureate Frits Zernike, and his grandmother was married to Pieter Nicolaas van Kampen, a professor of zoology at Leiden University. His uncle Nico van Kampen was an (emeritus) professor of theoretical physics at Utrecht University, and his mother married a maritime engineer. Following his family's footsteps, he showed interest in science at an early age. When his primary school teacher asked him what he wanted to be when he grew up, he replied, "a man who knows everything."
After primary school Gerard attended the Dalton Lyceum, a school that applied the ideas of the Dalton Plan, an educational method that suited him well. He excelled at science and mathematics courses. At the age of sixteen he won a silver medal in the second Dutch Math Olympiad.
Education
After Gerard 't Hooft passed his high school exams in 1964, he enrolled in the physics program at Utrecht University. He opted for Utrecht instead of the much closer Leiden, because his uncle was a professor there and he wanted to attend his lectures. Because he was so focused on science, his father insisted that he join the Utrechtsch Studenten Corps, a student association, in the hope that he would do something else besides studying. This worked to some extent; during his studies he was a coxswain with their rowing club "Triton" and organized a national congress for science students with their science discussion club "Christiaan Huygens".
In the course of his studies he decided he wanted to go into what he perceived as the heart of theoretical physics, elementary particles. His uncle had grown to dislike the subject and in particular its practitioners, so when it became time to write his doctoraalscriptie (former name of the Dutch equivalent of a master's thesis) in 1968, 't Hooft turned to the newly appointed professor Martinus Veltman, who specialized in Yang–Mills theory, a relatively fringe subject at the time because it was thought that these could not be renormalized. His assignment was to study the Adler–Bell–Jackiw anomaly, a mismatch in the theory of the decay of neutral pions; formal arguments forbid the decay into photons, whereas practical calculations and experiments showed that this was the primary form of decay. The resolution of the problem was completely unknown at the time, and 't Hooft was unable to provide one.
In 1969, 't Hooft started on his doctoral research with Martinus Veltman as his advisor. He would work on the same subject Veltman was working on, the renormalization of Yang–Mills theories. In 1971 his first paper was published. In it he showed how to renormalize massless Yang–Mills fields, and was able to derive relations between amplitudes, which would be generalized by Andrei Slavnov and John C. Taylor, and become known as the Slavnov–Taylor identities.
The world took little notice, but Veltman was excited because he saw that the problem he had been working on was solved. A period of intense collaboration followed in which they developed the technique of dimensional regularization. Soon 't Hooft's second paper was ready to be published, in which he showed that Yang–Mills theories with massive fields due to spontaneous symmetry breaking could be renormalized. This paper earned them worldwide recognition, and would ultimately earn the pair the 1999 Nobel Prize in Physics.
These two papers formed the basis of 't Hooft's dissertation, The Renormalization procedure for Yang–Mills Fields, and he obtained his PhD degree in 1972. In the same year he married his wife, Albertha A. Schik, a student of medicine in Utrecht.
Career
After obtaining his doctorate 't Hooft went to CERN in Geneva, where he had a fellowship. He further refined his methods for Yang–Mills theories with Veltman (who went back to Geneva). In this time he became interested in the possibility that the strong interaction could be described as a massless Yang–Mills theory, i.e. one of a type that he had just proved to be renormalizable and hence be susceptible to detailed calculation and comparison with experiment.
According to 't Hooft's calculations, this type of theory possessed just the right kind of scaling properties (asymptotic freedom) that this theory should have according to deep inelastic scattering experiments. This was contrary to popular perception of Yang–Mills theories at the time, that like gravitation and electrodynamics, their intensity should decrease with increasing distance between the interacting particles; such conventional behaviour with distance was unable to explain the results of deep inelastic scattering, whereas 't Hooft's calculations could.
When 't Hooft mentioned his results at a small conference at Marseilles in 1972, Kurt Symanzik urged him to publish this result; but 't Hooft did not, and the result was eventually rediscovered and published by Hugh David Politzer, David Gross, and Frank Wilczek in 1973, which led to their earning the 2004 Nobel Prize in Physics.
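The scaling property at issue, asymptotic freedom, hinges on the sign of the one-loop beta function. The sketch below uses the standard one-loop coefficient b₀ = 11N/3 − 2n_f/3 for SU(N) gauge theory with n_f quark flavours (a textbook result, not a calculation reproduced from 't Hooft's papers):

```python
def b0(n_colors, n_flavors):
    """One-loop beta-function coefficient for SU(N) with n_f flavours.

    beta(g) = -b0 * g**3 / (16 * pi**2), so b0 > 0 means the coupling
    weakens at short distances (asymptotic freedom), the behaviour
    matching deep inelastic scattering experiments.
    """
    return 11 * n_colors / 3 - 2 * n_flavors / 3

# QCD: three colors, six known quark flavours
assert b0(3, 6) > 0                      # asymptotically free
# Asymptotic freedom would be lost only beyond 16 flavours:
assert b0(3, 16) > 0 and b0(3, 17) < 0
```

The negative beta function for QCD is precisely the "right kind of scaling property" referred to above, contrary to the distance-dependence familiar from electrodynamics.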
In 1974, 't Hooft returned to Utrecht where he became assistant professor. In 1976, he was invited for a guest position at Stanford and a position at Harvard as Morris Loeb lecturer. His eldest daughter, Saskia Anne, was born in Boston, while his second daughter, Ellen Marga, was born in 1978 after he returned to Utrecht, where he was made full professor. In the academic year 1987–1988 't Hooft spent a sabbatical in the Boston University Physics Department along with Howard Georgi, Robert Jaffe and others arranged by the then new Department chair Lawrence Sulak.
In 2007 't Hooft became editor-in-chief for Foundations of Physics, where he sought to distance the journal from the controversy of ECE theory. 't Hooft held the position until 2016.
On July 1, 2011 he was appointed Distinguished professor by Utrecht University.
Personal life
He is married to Albertha Schik (Betteke) and has two daughters.
Honors
In 1999 't Hooft shared the Nobel prize in Physics with his thesis adviser Veltman for "elucidating the quantum structure of the electroweak interactions in physics". Before that time his work had already been recognized by other notable awards. In 1981, he was awarded the Wolf Prize, possibly the most prestigious prize in physics after the Nobel prize. Five years later he received the Lorentz Medal, awarded every four years in recognition of the most important contributions in theoretical physics. In 1995, he was one of the first recipients of the Spinozapremie, the highest award available to scientists in the Netherlands. In the same year he was also honoured with a Franklin Medal. In 2000, 't Hooft received the Golden Plate Award of the American Academy of Achievement.
Since his Nobel Prize, 't Hooft has received a slew of awards, honorary doctorates and honorary professorships. He was knighted commander in the Order of the Netherlands Lion, and officer in the French Legion of Honor. The asteroid 9491 Thooft has been named in his honor, and he has written a constitution for its future inhabitants.
He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1982, where he was made academy professor in 2003. He is also a foreign member of many other science academies, including the French Académie des Sciences, the American National Academy of Sciences and American Academy of Arts and Sciences and the Britain and Ireland based Institute of Physics.
't Hooft has appeared in season 3 of Through the Wormhole with Morgan Freeman.
Research
't Hooft's research interest can be divided in three main directions: 'gauge theories in elementary particle physics', 'quantum gravity and black holes', and 'foundational aspects of quantum mechanics'.
Gauge theories in elementary particle physics
't Hooft is most famous for his contributions to the development of gauge theories in particle physics. The best known of these is the proof in his PhD thesis that Yang–Mills theories are renormalizable, for which he shared the 1999 Nobel Prize in Physics. For this proof he introduced (with his adviser Veltman) the technique of dimensional regularization.
After his PhD, he became interested in the role of gauge theories in the strong interaction, the leading theory of which is called quantum chromodynamics or QCD. Much of his research focused on the problem of color confinement in QCD, i.e. the observational fact that only color neutral particles are observed at low energies. This led him to the discovery that SU(N) gauge theories simplify in the large N limit, a fact which has proved important in the examination of the conjectured correspondence between string theories in an Anti-de Sitter space and conformal field theories in one lower dimension. By solving the theory in one space and one time dimension, 't Hooft was able to derive a formula for the masses of mesons.
He also studied the role of so-called instanton contributions in QCD. His calculation showed that these contributions lead to an interaction between light quarks at low energies not present in the normal theory. Studying instanton solutions of Yang–Mills theories, 't Hooft discovered that spontaneously breaking a theory with SU(N) symmetry to a U(1) symmetry will lead to the existence of magnetic monopoles. These monopoles are called 't Hooft–Polyakov monopoles, after Alexander Polyakov, who independently obtained the same result.
As another piece in the color confinement puzzle 't Hooft introduced 't Hooft loops, which are the magnetic dual of Wilson loops. Using these operators he was able to classify different phases of QCD, which form the basis of the QCD phase diagram.
In 1986, he was finally able to show that instanton contributions solve the Adler–Bell–Jackiw anomaly, the topic of his master's thesis.
Quantum gravity and black holes
When Veltman and 't Hooft moved to CERN after 't Hooft obtained his PhD, Veltman's attention was drawn to the possibility of applying their dimensional regularization techniques to the problem of quantizing gravity. Although it was known that perturbative quantum gravity was not completely renormalizable, they felt important lessons were to be learned by studying the formal renormalization of the theory order by order. This work would be continued by Stanley Deser and another PhD student of Veltman, Peter van Nieuwenhuizen, who later found patterns in the renormalization counter terms, which led to the discovery of supergravity.
In the 1980s, 't Hooft's attention was drawn to the subject of gravity in 3 spacetime dimensions. Together with Deser and Jackiw he published an article in 1984 describing the dynamics of flat space where the only local degrees of freedom were propagating point defects. His attention returned to this model at various points in time, showing that Gott pairs would not cause causality violating timelike loops, and showing how the model could be quantized. More recently he proposed generalizing this piecewise flat model of gravity to 4 spacetime dimensions.
With Stephen Hawking's discovery of Hawking radiation of black holes, it appeared that the evaporation of these objects violated a fundamental property of quantum mechanics, unitarity. 't Hooft refused to accept this problem, known as the black hole information paradox, and assumed that this must be the result of the semi-classical treatment of Hawking, and that it should not appear in a full theory of quantum gravity. He proposed that it might be possible to study some of the properties of such a theory, by assuming that such a theory was unitary.
Using this approach he has argued that near a black hole, quantum fields could be described by a theory in a lower dimension. This led to the introduction of the holographic principle by him and Leonard Susskind.
Fundamental aspects of quantum mechanics
't Hooft has "deviating views on the physical interpretation of quantum theory". He believes that there could be a deterministic explanation underlying quantum mechanics. Using a speculative model he has argued that such a theory could avoid the usual Bell inequality arguments that would disallow such a local hidden-variable theory. In 2016 he published a book-length exposition of his ideas which, according to 't Hooft, has encountered mixed reactions.
Popular publications
Academic publications
See also
Asymptotic freedom
Center vortex
Hierarchy problem
Pauli–Villars regularization
Slavnov–Taylor identities
Superdeterminism
Mars One (Gerard 't Hooft is a main supporter of the project)
References
External links
Gerard 't Hooft (homepage)
How To Become a Good Theoretical Physicist
including the Nobel Lecture A Confrontation with Infinity
Publications from Google Scholar
Publications on the arXiv
TVO.org video – Gerard 't Hooft lectures on Science Fiction and Reality. Lecture delivered at the Perimeter Institute in Waterloo, Ontario, Canada on May 7, 2008
1946 births
Living people
20th-century Dutch physicists
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Foreign members of the Russian Academy of Sciences
Nobel laureates in Physics
Dutch Nobel laureates
Utrecht University alumni
Academic staff of Utrecht University
Wolf Prize in Physics laureates
Commanders of the Order of the Netherlands Lion
Officers of the Legion of Honour
Lorentz Medal winners
Recipients of the Lomonosov Gold Medal
People from Den Helder
Scientists from Utrecht (city)
Members of the French Academy of Sciences
Institute for Advanced Study visiting scholars
Spinoza Prize winners
Dutch theoretical physicists
Mars One
People associated with CERN
21st-century Dutch physicists
Recipients of Franklin Medal
Life-years lost

The life-years lost or years of lost life (YLL) is a unit to measure the number of expected years of human life lost following an unexpected event, such as death by illness, crime or war.
Life-years lost is a flexible measure that has been used to quantify the overall mortality effects of non-communicable diseases, drug misuse and suicide, epidemics (for example the COVID-19 pandemic), wars, and natural disasters such as earthquakes. Life-years lost are based on both the number of deaths and the age of those who died: the measure estimates the number of years that those who died would have lived had they not met an untimely death. Higher YLL values can be due to a larger number of deaths, to deaths occurring at younger ages, or to some combination of the two.
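The arithmetic behind the measure can be sketched in a few lines of Python. The residual life-expectancy table below is purely illustrative (real YLL calculations use standard actuarial life tables), and the function name is ours:

```python
# Sketch of a years-of-life-lost (YLL) computation: each death contributes
# the expected remaining years of life at the age of death.
# Illustrative residual life-expectancy table (age -> expected years left).
RESIDUAL_LIFE_EXPECTANCY = {25: 55.0, 45: 36.0, 65: 19.0, 85: 6.0}

def years_of_life_lost(ages_at_death):
    """Total expected years lost over a list of ages at death."""
    return sum(RESIDUAL_LIFE_EXPECTANCY[age] for age in ages_at_death)

# Several deaths at older ages can still yield a lower YLL than a
# single death at a young age:
assert years_of_life_lost([65, 65]) < years_of_life_lost([25])
```

This makes the point in the text concrete: YLL depends jointly on how many died and how young they were.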
See also
Quality-adjusted life year
Years of potential life lost
References
Epidemiology
Health economics
Life expectancy
Translation surface

In mathematics, a translation surface is a surface obtained by identifying the sides of a polygon in the Euclidean plane by translations. An equivalent definition is a Riemann surface together with a holomorphic 1-form.
These surfaces arise in dynamical systems where they can be used to model billiards, and in Teichmüller theory. A particularly interesting subclass is that of Veech surfaces (named after William A. Veech) which are the most symmetric ones.
Definitions
Geometric definition
A translation surface is the space obtained by identifying pairwise by translations the sides of a collection of plane polygons.
Here is a more formal definition. Let P_1, ..., P_m be a collection of (not necessarily convex) polygons in the Euclidean plane and suppose that for every side s_i of any P_k there is a side s_j of some P_l with s_j = s_i + v_i for some nonzero vector v_i (and so that v_j = −v_i). Consider the space obtained by identifying all s_i with their corresponding s_j through the map x ↦ x + v_i.
The canonical way to construct such a surface is as follows: start with vectors and a permutation on , and form the broken lines and starting at an arbitrarily chosen point. In the case where these two lines form a polygon (i.e. they do not intersect outside of their endpoints) there is a natural side-pairing.
The quotient space is a closed surface. It has a flat metric outside the set Σ of images of the vertices. At a point of Σ the sum of the angles of the polygons around the vertices which map to it is a positive multiple of 2π, and the metric is singular unless the angle is exactly 2π.
Analytic definition
Let be a translation surface as defined above and the set of singular points. Identifying the Euclidean plane with the complex plane one gets coordinates charts on with values in . Moreover, the changes of charts are holomorphic maps, more precisely maps of the form for some . This gives the structure of a Riemann surface, which extends to the entire surface by Riemann's theorem on removable singularities. In addition, the differential where is any chart defined above, does not depend on the chart. Thus these differentials defined on chart domains glue together to give a well-defined holomorphic 1-form on . The vertices of the polygon where the cone angles are not equal to are zeroes of (a cone angle of corresponds to a zero of order ).
In the other direction, given a pair where is a compact Riemann surface and a holomorphic 1-form one can construct a polygon by using the complex numbers where are disjoint paths between the zeroes of which form an integral basis for the relative cohomology.
Examples
The simplest example of a translation surface is obtained by gluing the opposite sides of a parallelogram. It is a flat torus with no singularities.
If P is a regular 4g-gon then the translation surface obtained by gluing opposite sides is of genus g with a single singular point, with angle (4g − 2)π.
If is obtained by putting side to side a collection of copies of the unit square then any translation surface obtained from is called a square-tiled surface. The map from the surface to the flat torus obtained by identifying all squares is a branched covering with branch points the singularities (the cone angle at a singularity is proportional to the degree of branching).
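A square-tiled surface is commonly encoded by two permutations recording each square's right-hand and upper neighbour; the singular points then correspond to cycles of their commutator, which yields the genus via the Euler characteristic. A minimal sketch (the permutation encoding and the L-shaped example are standard; the helper names are ours):

```python
# A square-tiled surface (origami) on squares {0,...,n-1}: r[i] is the
# square glued to the right of square i, and u[i] the square glued on top.
# Vertices of the glued surface correspond to cycles of the commutator
# r∘u∘r⁻¹∘u⁻¹ (a cycle of length k gives a cone angle 2πk).

def compose(p, q):                       # (p∘q)(i) = p[q[i]]
    return [p[q[i]] for i in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

def cycles(p):
    seen, out = set(), []
    for i in range(len(p)):
        if i not in seen:
            c, j = [], i
            while j not in seen:
                seen.add(j)
                c.append(j)
                j = p[j]
            out.append(c)
    return out

def genus(r, u):
    n = len(r)                           # n squares: n faces, 2n edges
    comm = compose(compose(r, u), compose(inverse(r), inverse(u)))
    v = len(cycles(comm))                # vertices of the glued surface
    euler = v - 2 * n + n                # V - E + F = 2 - 2g
    return 1 - euler // 2

assert genus([0], [0]) == 1              # the square torus
assert genus([1, 0, 2], [2, 1, 0]) == 2  # 3-square L-shape, stratum H(2)
```

The 3-square L-shaped example has a single vertex cycle of length 3, i.e. one cone point of angle 6π, consistent with a genus-2 surface carrying a single zero of order 2.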
Riemann–Roch and Gauss–Bonnet
Suppose that the surface X is a closed Riemann surface of genus g and that ω is a nonzero holomorphic 1-form on X, with zeroes of order k_1, ..., k_n. Then the Riemann–Roch theorem implies that
k_1 + ... + k_n = 2g − 2.
If the translation surface is represented by a polygon then triangulating it and summing angles over all vertices allows one to recover the formula above (using the relation between cone angles and order of zeroes), in the same manner as in the proof of the Gauss–Bonnet formula for hyperbolic surfaces or the proof of Euler's formula from Girard's theorem.
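A quick consistency check of this degree count against the regular-polygon example from the Examples section (the formula is the standard one; the octagon numbers simply restate that example):

```latex
% Degree of the zero divisor of a holomorphic 1-form on a genus-g surface:
\sum_{i=1}^{n} k_i \;=\; 2g - 2 .
% Regular octagon surface: g = 2, one cone point of angle 6\pi,
% i.e. a single zero with k_1 = 2 = 2\cdot 2 - 2 .
```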
Translation surfaces as foliated surfaces
If is a translation surface there is a natural measured foliation on . If it is obtained from a polygon it is just the image of vertical lines, and the measure of an arc is just the euclidean length of the horizontal segment homotopic to the arc. The foliation is also obtained by the level lines of the imaginary part of a (local) primitive for and the measure is obtained by integrating the real part.
Moduli spaces
Strata
Let be the set of translation surfaces of genus (where two such are considered the same if there exists a holomorphic diffeomorphism such that ). Let be the moduli space of Riemann surfaces of genus ; there is a natural map mapping a translation surface to the underlying Riemann surface. This turns into a locally trivial fiber bundle over the moduli space.
To a compact translation surface there is associated the data where are the orders of the zeroes of . If is any integer partition of then the stratum is the subset of of translation surfaces which have a holomorphic form whose zeroes match the partition.
The stratum is naturally a complex orbifold of complex dimension (note that is the moduli space of tori, which is well-known to be an orbifold; in higher genus, the failure to be a manifold is even more dramatic). Local coordinates are given by
where and is as above a symplectic basis of this space.
Masur-Veech volumes
The stratum admits a -action and thus a real and complex projectivization . The real projectivization admits a natural section if we define it as the space of translation surfaces of area 1.
The existence of the above period coordinates allows one to endow the stratum with an integral affine structure and thus a natural volume form . We also get a volume form on by disintegration of . The Masur–Veech volume is the total volume of for . This volume was proved to be finite independently by William A. Veech and Howard Masur.
In the 1990s Maxim Kontsevich and Anton Zorich evaluated these volumes numerically by counting the lattice points of . They observed that should be of the form times a rational number. From this observation they expected the existence of a formula expressing the volumes in terms of intersection numbers on moduli spaces of curves.
Alex Eskin and Andrei Okounkov gave the first algorithm to compute these volumes. They showed that the generating series of these numbers are q-expansions of computable quasi-modular forms. Using this algorithm they could confirm the numerical observation of Kontsevich and Zorich.
More recently Chen, Möller, Sauvaget, and Don Zagier showed that the volumes can be computed as intersection numbers on an algebraic compactification of . It is still an open problem to extend this formula to strata of half-translation surfaces.
The SL2(R)-action
If X is a translation surface obtained by identifying the faces of a polygon P, and g ∈ SL(2,R), then the translation surface g·X is the one associated to the polygon g(P). This defines a continuous action of SL(2,R) on the moduli space which preserves the strata. This action descends to an action on the locus of area-1 surfaces that is ergodic with respect to the Masur–Veech measure.
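This linear action on the defining polygon is easy to demonstrate concretely: a matrix acts on edge vectors, and since it has determinant 1 the area of the surface is unchanged. A small sketch (helper names are ours) using the Teichmüller geodesic-flow matrix diag(e^t, e^−t):

```python
# Apply a 2x2 matrix to the vertices of a polygon and check that a
# determinant-1 matrix preserves the area (shoelace formula).
import math

def act(matrix, polygon):
    (a, b), (c, d) = matrix
    return [(a * x + b * y, c * x + d * y) for (x, y) in polygon]

def area(polygon):
    # Shoelace formula for a simple polygon given by its vertex list.
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2)
                         in zip(polygon, polygon[1:] + polygon[:1])))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
t = 0.7
g_t = ((math.exp(t), 0.0), (0.0, math.exp(-t)))   # det(g_t) = 1
assert abs(area(act(g_t, square)) - area(square)) < 1e-12
```

The same check passes for any matrix of determinant 1 (e.g. shears), which is why the action preserves the area-1 locus of each stratum.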
Half-translation surfaces
Definitions
A half-translation surface is defined similarly to a translation surface, but allowing the gluing maps to have a nontrivial linear part which is a half turn. Formally, a half-translation surface is defined geometrically by taking a collection of polygons in the Euclidean plane and identifying faces by maps of the form x ↦ ±x + v (a "half-translation"). Note that a face can be identified with itself. The geometric structure obtained in this way is a flat metric outside of a finite number of singular points with cone angles positive multiples of π.
As in the case of translation surfaces there is an analytic interpretation: a half-translation surface can be interpreted as a pair (X, φ) where X is a Riemann surface and φ a quadratic differential on X. To pass from the geometric picture to the analytic picture one simply takes the quadratic differential defined locally by (dz)² (which is invariant under half-translations), and for the other direction one takes the Riemannian metric induced by φ, which is smooth and flat outside of the zeros of φ.
Relation with Teichmüller geometry
If is a Riemann surface then the vector space of quadratic differentials on is naturally identified with the tangent space to Teichmüller space at any point above . This can be proven by analytic means using the Bers embedding. Half-translation surfaces can be used to give a more geometric interpretation of this: if are two points in Teichmüller space then by Teichmüller's mapping theorem there exists two polygons whose faces can be identified by half-translations to give flat surfaces with underlying Riemann surfaces isomorphic to respectively, and an affine map of the plane sending to which has the smallest distortion among the quasiconformal mappings in its isotopy class, and which is isotopic to .
Everything is determined uniquely up to scaling if we ask that be of the form , where , for some ; we denote by the Riemann surface obtained from the polygon . Now the path in Teichmüller space joins to , and differentiating it at gives a vector in the tangent space; since was arbitrary we obtain a bijection.
In fact, the paths used in this construction are Teichmüller geodesics. An interesting fact is that while the geodesic ray associated to a flat surface corresponds to a measured foliation, and thus the directions in tangent space are identified with the Thurston boundary, the Teichmüller geodesic ray associated to a flat surface does not always converge to the corresponding point on the boundary, though almost all such rays do so.
Veech surfaces
The Veech group
If X is a translation surface, its Veech group is the Fuchsian group which is the image in PSL(2,R) of the subgroup of transformations g ∈ SL(2,R) such that g·X is isomorphic (as a translation surface) to X. Equivalently, the Veech group is the group of derivatives of affine diffeomorphisms of X (where affine is defined locally outside the singularities, with respect to the affine structure induced by the translation structure). Veech groups have the following properties:
They are discrete subgroups of PSL(2,R);
They are never cocompact.
Veech groups can be either finitely generated or not.
Veech surfaces
A Veech surface is by definition a translation surface whose Veech group is a lattice in PSL(2,R); equivalently, its action on the hyperbolic plane admits a fundamental domain of finite volume. Since it is not cocompact it must then contain parabolic elements.
Examples of Veech surfaces are the square-tiled surfaces, whose Veech groups are commensurable to the modular group . The square can be replaced by any parallelogram (the translation surfaces obtained are exactly those obtained as ramified covers of a flat torus). In fact the Veech group is arithmetic (which amounts to it being commensurable to the modular group) if and only if the surface is tiled by parallelograms.
There exist Veech surfaces whose Veech group is not arithmetic, for example the surface obtained from two regular pentagons glued along an edge: in this case the Veech group is a non-arithmetic Hecke triangle group. On the other hand, there are still some arithmetic constraints on the Veech group of a Veech surface: for example its trace field is a number field that is totally real.
Geodesic flow on translation surfaces
Geodesics
A geodesic in a translation surface (or a half-translation surface) is a parametrised curve which is, outside of the singular points, locally the image of a straight line in Euclidean space parametrised by arclength. If a geodesic arrives at a singularity it is required to stop there. Thus a maximal geodesic is a curve defined on a closed interval, which is the whole real line if it does not meet any singular point. A geodesic is closed or periodic if its image is compact, in which case it is either a circle if it does not meet any singularity, or an arc between two (possibly equal) singularities. In the latter case the geodesic is called a saddle connection.
If (or in the case of a half-translation surface) then the geodesics with direction theta are well-defined on : they are those curves which satisfy (or in the case of a half-translation surface ). The geodesic flow on with direction is the flow on where is the geodesic starting at with direction if is not singular.
Dynamical properties
On a flat torus the geodesic flow in a given direction has the property that it is either periodic or ergodic. In general this is not true: there may be directions in which the flow is minimal (meaning every orbit is dense in the surface) but not ergodic. On the other hand, on a compact translation surface the flow retains from the simplest case of the flat torus the property that it is ergodic in almost every direction.
Another natural question is to establish asymptotic estimates for the number of closed geodesics or saddle connections of a given length. On a flat torus there are no saddle connections and the number of closed geodesics of length at most L grows quadratically in L. In general one can only obtain bounds: if X is a compact translation surface of genus g then there exist constants c₁, c₂ (depending only on the genus) such that the number N(L) both of closed geodesics and of saddle connections of length at most L satisfies
c₁·L² ≤ N(L) ≤ c₂·L².
Restricting to probabilistic results, it is possible to get better estimates: given a genus g, a partition α of 2g − 2 and a connected component C of the stratum H(α), there exist constants c such that for almost every surface in C the asymptotic equivalent
N(L) ∼ c·L²
holds as L → ∞.
The constants are called Siegel–Veech constants. Using the ergodicity of the -action on , it was shown that these constants can explicitly be computed as ratios of certain Masur-Veech volumes.
Veech dichotomy
The geodesic flow on a Veech surface is much better behaved than in general. This is expressed via the following result, called the Veech dichotomy:
Let X be a Veech surface and θ a direction. Then either all trajectories defined in the direction θ are periodic, or the flow in the direction θ is ergodic.
Relation with billiards
If P is a polygon in the Euclidean plane and θ a direction, there is a continuous dynamical system called a billiard. The trajectory of a point inside the polygon is defined as follows: as long as it does not touch the boundary it proceeds in a straight line at unit speed; when it touches the interior of an edge it bounces back (i.e. its direction is reflected across the edge, so that the angle of incidence equals the angle of reflection), and when it touches a vertex it stops.
This dynamical system is equivalent to the geodesic flow on a flat surface: just double the polygon along the edges and put a flat metric everywhere but at the vertices, which become singular points with cone angle twice the angle of the polygon at the corresponding vertex. This surface is not a translation surface or a half-translation surface, but in some cases it is related to one. Namely, if all angles of the polygon P are rational multiples of π, there is a ramified cover of this surface which is a translation surface, which can be constructed from a union of copies of P. The dynamics of the billiard flow can then be studied through the geodesic flow on the translation surface.
For example, the billiard in a square is related in this way to the billiard on the flat torus constructed from four copies of the square; the billiard in an equilateral triangle gives rise to the flat torus constructed from a hexagon. The billiard in an "L" shape constructed from squares is related to the geodesic flow on a square-tiled surface; the billiard in the triangle with angles is related to the Veech surface constructed from two regular pentagons described above.
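For the square, this unfolding trick can be run directly in code: the four reflected copies of the square form a torus of side 2, so a billiard orbit is a straight line whose coordinates are folded back into [0,1] by a reflection map. A sketch (function names are ours):

```python
# Billiard in the unit square via unfolding: the orbit is a straight
# line in the plane, folded back into [0,1] coordinate-by-coordinate.

def fold(p):
    p %= 2.0                         # position on the doubled (torus) square
    return 2.0 - p if p > 1.0 else p  # reflect the second copy back

def billiard_position(start, direction, t):
    """Position at time t of a unit-speed billiard orbit in [0,1]^2."""
    return tuple(fold(s + d * t) for s, d in zip(start, direction))

# Horizontal orbit from the centre: after time 1 it has bounced off the
# right-hand wall and returned to the centre.
assert billiard_position((0.5, 0.5), (1.0, 0.0), 1.0) == (0.5, 0.5)
```

The reflection never has to be computed explicitly: bounces off the walls become straight-line motion on the unfolded torus, which is exactly the equivalence described above.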
Relation with interval exchange transformations
Let X be a translation surface and θ a direction, and let φ_t be the geodesic flow on X with direction θ. Let I be a geodesic segment in the direction orthogonal to θ, and define the first recurrence, or Poincaré, map σ : I → I as follows: σ(x) is equal to φ_t(x) where t > 0 is minimal with φ_t(x) ∈ I. Then this map is an interval exchange transformation and it can be used to study the dynamics of the geodesic flow.
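An interval exchange transformation itself is simple to implement: cut [0,1) into pieces of prescribed lengths and reorder them. A sketch (naming conventions are ours; with two intervals the map reduces to a circle rotation):

```python
def interval_exchange(lengths, perm):
    """Return the IET on [0,1) cutting it into pieces of the given
    lengths; perm[i] is the position of piece i after reordering."""
    n = len(lengths)
    starts = [sum(lengths[:i]) for i in range(n)]        # left endpoints
    new_starts, pos = {}, 0.0
    for i in sorted(range(n), key=lambda i: perm[i]):    # rebuild in new order
        new_starts[i] = pos
        pos += lengths[i]
    def T(x):
        i = max(k for k in range(n) if starts[k] <= x)   # piece containing x
        return new_starts[i] + (x - starts[i])           # translate the piece
    return T

# Two intervals swapped: T is the rotation x -> x + 0.7 (mod 1).
T = interval_exchange([0.3, 0.7], [1, 0])
assert abs(T(0.1) - 0.8) < 1e-12 and abs(T(0.5) - 0.2) < 1e-12
```

With three or more intervals such maps are exactly the first-return maps of geodesic flows described above, and in general they are not rotations.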
Notes
References
Surfaces
Dynamical systems
Phosphoinositide phospholipase C

Phosphoinositide phospholipase C (PLC, EC 3.1.4.11, triphosphoinositide phosphodiesterase, phosphoinositidase C, 1-phosphatidylinositol-4,5-bisphosphate phosphodiesterase, monophosphatidylinositol phosphodiesterase, phosphatidylinositol phospholipase C, PI-PLC, 1-phosphatidyl-D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase; systematic name 1-phosphatidyl-1D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase) is a family of eukaryotic intracellular enzymes that play an important role in signal transduction processes. These enzymes belong to a larger superfamily of Phospholipase C. Other families of phospholipase C enzymes have been identified in bacteria and trypanosomes. Phospholipases C are phosphodiesterases.
Phospholipase Cs participate in phosphatidylinositol 4,5-bisphosphate (PIP2) metabolism and lipid signaling pathways in a calcium-dependent manner. At present, the family consists of six sub-families comprising a total of 13 separate isoforms that differ in their mode of activation, expression levels, catalytic regulation, cellular localization, membrane binding avidity and tissue distribution. All are capable of catalyzing the hydrolysis of PIP2 into two important second messenger molecules, which go on to alter cell responses such as proliferation, differentiation, apoptosis, cytoskeleton remodeling, vesicular trafficking, ion channel conductance, endocrine function and neurotransmission.
Reaction and catalytic mechanism
All family members are capable of catalyzing the hydrolysis of PIP2, a phosphatidylinositol at the inner leaflet of the plasma membrane into the two second messengers, inositol trisphosphate (IP3) and diacylglycerol (DAG).
The chemical reaction may be expressed as:
1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate + H2O → 1D-myo-inositol 1,4,5-trisphosphate + diacylglycerol
PLCs catalyze the reaction in two sequential steps. The first reaction is a phosphotransferase step that involves an intramolecular attack between the hydroxyl group at the 2' position on the inositol ring and the adjacent phosphate group resulting in a cyclic IP3 intermediate. At this point, DAG is generated. However, in the second phosphodiesterase step, the cyclic intermediate is held within the active site long enough to be attacked by a molecule of water, resulting in a final acyclic IP3 product. It should be mentioned that bacterial forms of the enzyme, which contain only the catalytic lipase domain, produce cyclic intermediates exclusively, whereas the mammalian isoforms generate predominantly the acyclic product. However, it is possible to alter experimental conditions (e.g., temperature, pH) in vitro such that some mammalian isoforms will alter the degree to which they produce mixtures of cyclic/acyclic products along with DAG. This catalytic process is tightly regulated by reversible phosphorylation of different phosphoinositides and their affinity for different regulatory proteins.
Cell location
Phosphoinositide phospholipase C performs its catalytic function at the plasma membrane where the substrate PIP2 is present. This membrane docking is mediated mostly by lipid-binding domains (e.g. PH domain and C2 domain) that display affinity for different phospholipid components of the plasma membrane. It is important to note that research has also discovered that, in addition to the plasma membrane, phosphoinositide phospholipase C also exists within other sub-cellular regions such as the cytoplasm and nucleus of the cell. At present, it is unclear exactly what the definitive roles for these enzymes in these cellular compartments are, particularly the nucleus.
Function
Phospholipase C performs a catalytic mechanism, depleting PIP2 and generating inositol trisphosphate (IP3) and diacylglycerol (DAG).
Depletion of PIP2 inactivates numerous effector molecules in the plasma membrane, most notably PIP2 dependent channels and transporters responsible for setting the cell's membrane potential.
The hydrolytic products also go on to modulate the activity of downstream proteins important for cellular signaling. IP3 is soluble, and diffuses through the cytoplasm and interacts with IP3 receptors on the endoplasmic reticulum, causing the release of calcium and raising the level of intracellular calcium.
Further reading: Function of calcium in humans
DAG remains within the inner leaflet of the plasma membrane due to its hydrophobic character, where it recruits protein kinase C (PKC), which becomes activated in conjunction with binding calcium ions. This results in a host of cellular responses through stimulation of calcium-sensitive proteins such as Calmodulin.
Further reading: Function of protein kinase C
Domain structure
In terms of domain organization, all family members possess homologous X and Y catalytic domains in the form of a distorted Triose Phosphate Isomerase (TIM) barrel with a highly disordered, charged, and flexible intervening linker region. Likewise, all isoforms possess four EF hand domains, and a single C2 domain that flank the X and Y catalytic core. An N-terminal PH domain is present in every family except for the sperm-specific ζ isoform.
SH2 (phosphotyrosine binding) and SH3 (proline-rich-binding) domains are found only in the γ form (specifically within the linker region), and only the ε form contains both guanine nucleotide exchange factor (GEF) and RA (Ras Associating) domains. The β subfamily is distinguished from the others by the presence of a long C-terminal extension immediately downstream of the C2 domain, which is required for activation by Gαq subunits, and which plays a role in plasma membrane binding and nuclear localization.
Isoenzymes and activation
The phospholipase C family consists of 13 isoenzymes split between six subfamilies, PLC-δ (1,3 & 4), -β(1-4), -γ(1,2), -ε, -ζ, and the recently discovered -η(1,2) isoform. Depending on the specific subfamily in question, activation can be highly variable. Activation by either Gαq or Gβγ G-protein subunits (making it part of a G protein-coupled receptor signal transduction pathway) or by transmembrane receptors with intrinsic or associated tyrosine kinase activity has been reported. In addition, members of the Ras superfamily of small GTPases (namely the Ras and Rho subfamilies) have also been implicated. It should also be mentioned that all forms of phospholipase C require calcium for activation, many of them possessing multiple calcium contact sites in the catalytic region. The only isoform that is known to be inactive at basal intracellular calcium levels is the δ subfamily of enzymes suggesting that they function as calcium amplifiers that become activated downstream of other PLC family members.
PLC-β
PLC-β(1-4) (120-155kDa) are activated by Gαq subunits through their C2 domain and long C-terminal extension. Gβγ subunits are known to activate the β2 and β3 isozymes only; however, this occurs through the PH domain and/or through interactions with the catalytic domain. The exact mechanism still requires further investigation. The PH domain of β2 and β3 plays a dual role, much like PLC-δ1, by binding to the plasma membrane, as well as being a site of interaction for the catalytic activator. However, PLC-β binds to the lipid surface independent of PIP2 with all isozymes preferring phosphoinositol-3-phosphate or neutral membranes.
Members of the Rho GTPase family (e.g., Rac1, Rac2, Rac3, and cdc42) have been implicated in their activation by binding to an alternate site on the N-terminal PH domain followed by subsequent recruitment to the plasma membrane. A crystal structure of Rac1 bound to the PH domain of PLCβ2 has been solved. Like PLC-δ1, many PLC-β isoforms (in particular, PLC-β1) have been found to take up residence in the nuclear compartment. A basic amino acid region within the enzyme's long C-terminal tail appears to function as a Nuclear Localization Signal for import into the nucleus. PLC-β1 seems to play unspecified roles in cellular proliferation and differentiation.
PLC-γ
PLC-γ (120-155kDa) is activated by receptor and non-receptor tyrosine kinases due to the presence of two SH2 and a single SH3 domain situated between a split PH domain within the linker region. Although this particular isoform does not contain classic nuclear export or localization sequences, it has been found within the nucleus of certain cell lines. There are two main isoforms of PLCγ expressed in human specimens, PLC-γ1 and PLC-γ2.
PLC-γ2
PLC-γ2 plays a major role in BCR signal transduction. Absence of this enzyme in knockout specimens severely inhibits the development of B cells because the same signaling pathways necessary for antigen mediated B cell activation are necessary for B cell development from CLPs.
In B cell signaling, PI 3-kinase is recruited to the BCR early in the signal transduction pathway. PI-3K phosphorylates PIP2 (Phosphatidylinositol 4,5-bisphosphate) into PIP3 (Phosphatidylinositol 3,4,5-trisphosphate). The increase in concentration of PIP3 recruits PLC-γ2 to the BCR complex which binds to BLNK on the BCR scaffold and membrane PIP3. PLC-γ2 is then phosphorylated by Syk on one site and Btk on two sites. PLC-γ2 then competes with PI-3K for PIP2 which it hydrolyzes into IP3 (inositol 1,4,5-trisphosphate), which ultimately raises intercellular calcium, and diacylglycerol (DAG), which activates portions of the PKC family. Because PLC-γ2 competes for PIP2 with the original signaling molecule PI3K, it serves as a negative feedback mechanism.
PLC-δ
The PLC-δ subfamily consists of three family members, δ1, 2, and 3. PLC-δ1 (85kDa) is the most well understood of the three. The enzyme is activated by high calcium levels generated by other PLC family members, and therefore functions as a calcium amplifier within the cell. Binding of its substrate PIP2 to the N-terminal PH domain is highly specific and functions to promote activation of the catalytic core. In addition, this specificity helps tether the enzyme tightly to the plasma membrane in order to access substrate through ionic interactions between the phosphate groups of PIP2 and charged residues in the PH domain. While the catalytic core does possess a weak affinity for PIP2, the C2 domain has been shown to mediate calcium-dependent phospholipid binding as well. In this model, the PH and C2 domains operate in concert as a "tether and fix" apparatus necessary for processive catalysis by the enzyme.
PLC-δ1 also possesses a classical leucine-rich nuclear export signal (NES) in its EF hand motif, as well as a Nuclear localization signal within its linker region. These two elements combined allow PLC-δ1 to actively translocate into and out of the nucleus. However, its function in the nucleus remains unclear.
The widely expressed PLC-δ1 isoform is the best-characterized phospholipase family member, as it was the first to have high-resolution X-ray crystal structures available for analysis. In terms of domain architecture, all of the enzymes are built upon a common PLC-δ backbone, wherein each family displays similarities, as well as obvious distinctions, that contribute to unique regulatory properties within the cell. Because it is the only family found expressed in lower eukaryotic organisms such as yeast and slime molds, it is considered the prototypical PLC isoform. The other family members more than likely evolved from PLC-δ, as their domain architecture and mechanism of activation were expanded. Although a full crystal structure has not been obtained, high-resolution X-ray crystallography has yielded the molecular structure of the N-terminal PH domain complexed with its product IP3, as well as the remainder of the enzyme with the PH domain ablated. These structures have provided researchers with the necessary information to begin speculating about other family members such as PLCβ2.
Other PLC families
PLC-ε (230–260 kDa) is activated by Ras and Rho GTPases.
PLC-ζ (75kDa) is thought to play an important role in vertebrate fertilization by producing intracellular calcium oscillations important for the start of embryonic development. However, the mechanism of activation still remains unclear. This isoform is also capable of entering the early-formed pronucleus after fertilization, which seems to coincide with the cessation of calcium mobilization. It, like PLC-δ1 and PLC-β, possesses nuclear export and localization sequences.
PLC-η has been implicated in neuronal functioning.
Human proteins in this family
PLCB1; PLCB2; PLCB3; PLCB4; PLCD1; PLCD3; PLCD4; PLCE1;
PLCG1; PLCG2; PLCH1; PLCH2; PLCL1; PLCL2; PLCZ1
See also
Clostridium perfringens alpha toxin
Lipid signaling
PH domain, found in some phospholipases C
Phospholipase
Zinc-dependent phospholipase C, a different family of phospholipase C
References
EC 3.1.4
Peripheral membrane proteins
Enzymes of known structure
Signal transduction
Protein families
Enzymes
Calcium enzymes
Hydrolases
Calcium signaling
Cell signaling
G protein-coupled receptors
Cell biology
de:Phospholipase C
es:Fosfolipasa C
fr:Phospholipase C
he:פוספוליפאז C
ru:Фосфолипаза C | Phosphoinositide phospholipase C | Chemistry,Biology | 3,186 |
8,960,053 | https://en.wikipedia.org/wiki/Declared%20Rare%20and%20Priority%20Flora%20List | The Declared Rare and Priority Flora List is the system by which Western Australia's conservation flora are given a priority. Developed by the Government of Western Australia's Department of Environment and Conservation, it was used extensively within the department, including the Western Australian Herbarium. The herbarium's journal, Nuytsia, which has published over a quarter of the state's conservation taxa, requires a conservation status to be included in all publications of new Western Australian taxa that appear to be rare or endangered.
The system defines six levels of priority taxa:
X: Threatened (Declared Rare Flora) – Presumed Extinct Taxa These are taxa that are thought to be extinct, either because they have not been collected for over 50 years despite thorough searching, or because all known wild populations have been destroyed. They have been declared as such in accordance with the Wildlife Conservation Act 1950, and are therefore afforded legislative protection under that act.
T: Threatened (Declared Rare Flora) – Extant Taxa These are taxa that have been thoroughly surveyed, and determined to be rare, in danger of extinction, or otherwise in need of special protection. They have been declared rare in accordance with the Wildlife Conservation Act 1950, and are therefore afforded legislative protection under that act. The code for this category was previously 'R'.
P1: Priority One – Poorly Known Taxa These are taxa that are known from only a few (generally less than five) populations, all of which are under immediate threat. They are candidates for declaration as rare flora, but are in need of further survey.
P2: Priority Two – Poorly Known Taxa These are taxa that are known from only a few (generally less than five) populations, some of which are not thought to be under immediate threat. They are candidates for declaration as rare flora, but are in need of further survey.
P3: Priority Three – Poorly Known Taxa These are taxa that are known from several populations, some of which are not thought to be under immediate threat. They are candidates for declaration as rare flora, but are in need of further survey.
P4: Priority Four – Rare Taxa These are taxa that have been adequately surveyed, and are rare but not known to be under threat.
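The six-level coding system described above amounts to a simple lookup table. The following sketch is illustrative only (the dictionary and function names are invented for the example, not an official API); it maps each code to its category name and accepts the legacy 'R' code mentioned for the extant threatened category:

```python
# Illustrative mapping of the Declared Rare and Priority Flora codes,
# as described in the list above. Structure and names are assumptions
# for this example, not an official data format.

PRIORITY_CODES = {
    "X": "Threatened (Declared Rare Flora) - Presumed Extinct Taxa",
    "T": "Threatened (Declared Rare Flora) - Extant Taxa",  # formerly coded 'R'
    "P1": "Priority One - Poorly Known Taxa",
    "P2": "Priority Two - Poorly Known Taxa",
    "P3": "Priority Three - Poorly Known Taxa",
    "P4": "Priority Four - Rare Taxa",
}

def category_name(code):
    """Return the category name for a code, treating the legacy 'R' as 'T'."""
    return PRIORITY_CODES.get("T" if code == "R" else code)

print(category_name("R"))  # Threatened (Declared Rare Flora) - Extant Taxa
```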
See also
Conservation status
Wildlife Conservation Act 1950, state legislation
Environment Protection and Biodiversity Conservation Act 1999, federal legislation
ROTAP (Rare or Threatened Australian Plants) coding system
References
"Nuytsia – WA's Journal of Systematic Botany". Department of Environment and Conservation, Government of Western Australia.
"Nuytsia – WA's Journal of Systematic Botany - Editorial Guidelines". Department of Environment and Conservation, Government of Western Australia.
External links
Threatened plants – Government of Western Australia
Nature conservation in Western Australia
Biota by conservation status
Botany in Australia | Declared Rare and Priority Flora List | Biology | 550 |
76,756,314 | https://en.wikipedia.org/wiki/IC%202628 | IC 2628 is a type SBa barred spiral galaxy with a ring located in Leo constellation. It is located 600 million light-years from the Solar System and has an approximate diameter of 135,000 light-years. IC 2628 was discovered on March 27, 1906, by Max Wolf and is classified as a ring galaxy due to its peculiar appearance. The galaxy has a surface brightness of magnitude 23.8 and located at right ascension (11:11:37.8) and declination (+12:07:21) respectively.
See also
List of ring galaxies
PGC 1000714
Hoag's Object
NGC 6028
References
2628
Barred spiral galaxies
Ring galaxies
Leo (constellation)
034038
2MASS objects
Astronomical objects discovered in 1906
Discoveries by Max Wolf
SDSS objects
52,799,912 | https://en.wikipedia.org/wiki/NGC%201869 | NGC 1869 (also known as ESO 85-SC55) is an open cluster in the Dorado constellation. It is located within the Large Magellanic Cloud. It was discovered by James Dunlop on September 24, 1826, using a telescope reflector with a nine-inch aperture. It is a large cluster of rich scattered stars. It is part of a triple association with NGC 1871 and NGC 1873. It has an apparent magnitude of 14.0.
References
Dorado
ESO objects
1869
Open clusters
Astronomical objects discovered in 1826
Large Magellanic Cloud | NGC 1869 | Astronomy | 113 |
665,492 | https://en.wikipedia.org/wiki/Toxiphobia | Toxiphobia in non-human animals is rejection of foods with tastes, odors, or appearances which are followed by illness resulted from toxins found in these foods. In humans, toxiphobia is the irrational fear of poisons and being poisoned.
Notable people
Kurt Gödel (1906–1978), Austrian-American scientist who, after the assassination of his close friend Moritz Schlick, developed an irrational fear of being poisoned, and only accepted food cooked by his wife.
Nicolae Ceaușescu (1918–1989), Romanian dictator, who ordered that the Palace of the Parliament be built without air conditioning for fear of being poisoned through its ventilation system.
See also
Foodborne illness
References
Phobias
Poisons | Toxiphobia | Environmental_science | 146 |
42,206,064 | https://en.wikipedia.org/wiki/Raffaello%20D%27Andrea | Raffaello D’Andrea (born August 13, 1967, in Pordenone, Italy) is a Canadian-Italian-Swiss engineer, artist, and entrepreneur. He is professor of dynamic systems and control at ETH Zurich. He is a co-founder of Kiva Systems (now operating as Amazon Robotics), and the founder of Verity, an innovator in autonomous drones. He was the faculty advisor and system architect of the Cornell Robot Soccer Team, four time world champions at the annual RoboCup competition. He is a new media artist, whose work includes The Table, the Robotic Chair, and Flight Assembled Architecture. In 2013, D’Andrea co-founded ROBO Global, which launched the world's first exchange traded fund focused entirely on the theme of robotics and AI. ROBO Global was acquired by VettaFi in 2023.
D'Andrea was a speaker at TED Global 2013 and spoke at TED 2016. In 2016, he received the IEEE Robotics and Automation Award, and in 2020 he was elected a member of the National Academy of Engineering for contributions to the design and implementation of distributed automation systems for commercial applications.
Life
Born in Pordenone, Italy, D’Andrea moved to Canada in 1976, where he graduated valedictorian from Anderson Collegiate in Whitby, Ontario. He received a Bachelor of Applied Science from the University of Toronto, graduating in Engineering Science (Major in Electrical and Computer Engineering) in 1991 and winning the Wilson Medal as the top graduating student that year. In 1997 he received a Ph.D. in Electrical Engineering from the California Institute of Technology, under the supervision of John Doyle and Richard Murray.
He joined the Cornell faculty in 1997. While on sabbatical in 2003, he co-founded Kiva Systems with Mick Mountz and Peter Wurman. He became Kiva Systems’ chief technical advisor in 2007 when he was appointed professor of dynamic systems and control at ETH Zurich. He founded Verity with Markus Waibel and Markus Hehn in 2014.
Work
Academic work
After receiving his PhD in 1997, he joined the Cornell faculty as an assistant professor, where he was a founding member of the Systems Engineering program, and where he established robot soccer — a competition featuring fully autonomous robots — as the flagship, multidisciplinary team project. In addition to pioneering the use of semi-definite programming for the design of distributed control systems, he went on to lead the Cornell Robot Soccer Team to four world championships at international RoboCup competitions in Sweden, Australia, Italy, and Japan. D'Andrea received the Presidential Early Career Award for complex interconnected systems research in 2002.
After being appointed professor at ETH Zurich in 2007, D’Andrea established a research program that combined his broad interests and cemented his hands-on teaching style. His team engages in cutting-edge research by designing and building creative experimental platforms that allow them to explore the fundamental principles of robotics, control, and automation. His creations include the Flying Machine Arena, where flying robots perform aerial acrobatics, juggle balls, balance poles, and cooperate to build structures; the Distributed Flight Array, a flying platform consisting of multiple autonomous single propeller vehicles that are able to drive, dock with their peers, and fly in a coordinated fashion; the Balancing Cube, a dynamic sculpture that can balance on any of its edges or corners; Blind Juggling Machines that can juggle balls without seeing them, and without catching them; and the Cubli, a cube that can jump up, balance, and walk.
Entrepreneurial work
D’Andrea co-founded Kiva Systems in 2003 with Mick Mountz and Peter Wurman. He became chief technical advisor when he was appointed professor of dynamic systems and control at ETH Zurich in 2007. At Kiva, he led the systems architecture, robot design, robot navigation and coordination, and control algorithms efforts.
D’Andrea founded Verity in 2014 with Markus Hehn and Markus Waibel. The stated purpose of the company is "to develop autonomous indoor drone systems and related technologies for commercial applications." The company partnered with Cirque du Soleil to create Sparked, a live interaction between humans and quadcopters and has provided autonomous drone shows for large concert tours like Metallica's WorldWired Tour, Drake (musician)'s Aubrey & the Three Migos Tour, Celine Dion's Courage World Tour, Justin Bieber's 2022 Justice World Tour, and the Australasian Dance Collective (ADC).
Since 2016, D'Andrea and Verity have been focused on delivering autonomous inventory drone systems for commercial warehouses to support inventory tracking and management, and other use cases. In 2023, IKEA announced the milestone of 100 Verity drones in use in its warehouses, and Maersk announced its use of the Verity system in its warehouses. In July 2023, Verity announced completion of a $43M Series B fundraising round that included Qualcomm Ventures.
Artistic work
D’Andrea and Canadian artist Max Dean unveiled their collaborative work The Table at the Venice Biennale in 2001. They orchestrate a scenario wherein a spectator, selected by the table, becomes a performer, who is now an object not only of the table's "attention", but also of the other viewers'. It is part of the permanent collection of the National Gallery of Canada (NGC).
The Robotic Chair was created by D’Andrea, Max Dean, and Canadian artist Matt Donovan. It is an ordinary looking chair that falls apart and re-assembles itself. It was first unveiled to the general public at IdeaCity in 2006. It is part of the permanent collection of the National Gallery of Canada (NGC).
D’Andrea and Swiss architects Gramazio & Kohler created Flight Assembled Architecture, the first architectural installation assembled by flying robots. It took place at the FRAC Centre Orléans in France in 2011–2012. The installation consists of 1,500 modules put into place by a multitude of quadrotor helicopters. Within the build, an architectural vision of a 600-metre high "vertical village" for 30,000 inhabitants unfolds as a model in 1:100 scale. It is in the permanent collection of the FRAC Centre.
Awards and honors
2020 National Academy of Engineering Member
2020 National Inventors Hall of Fame Inductee
2016 IEEE Robotics and Automation Award
2015 Engelberger Robotics Award
2008 IEEE/IFR Invention and Entrepreneurship Award
2002 Presidential Early Career Award
References
Electronics engineers
Living people
1967 births
Cornell University faculty
Recipients of the Presidential Early Career Award for Scientists and Engineers
Academic staff of ETH Zurich | Raffaello D'Andrea | Engineering | 1,328 |
30,071,992 | https://en.wikipedia.org/wiki/A.%20Aneesh | Aneesh Aneesh is a sociologist of globalization, labor, and technology. He is Executive Director of the School of Global Studies and Languages at the University of Oregon and a Professor of Global Studies and Sociology. Previously, he served as a professor of sociology and director of the Institute of World Affairs and the global studies program at the University of Wisconsin, Milwaukee. In the early 2000s, he taught in the science and technology program at Stanford University and formulated a theory of algocracy, distinguishing it from bureaucratic, market, and surveillance-based governance systems, pioneering the field of algorithmic governance in the social sciences. Author of Virtual Migration: The Programming of Globalization (Duke 2006) and Neutral Accent: How Language, Labor and Life Become Global (Duke 2015), Aneesh is currently completing a manuscript on the rise of what he calls modular citizenship.
Education
Aneesh studied Physics, Economics, and Philosophy at the University of Allahabad, earning a Bachelor's degree there in 1987. After pre-doctoral study in Philosophy at Jawaharlal Nehru University he came to the University of California, Irvine for a Master's degree in social relations in 1996, and completed a Ph.D. in Sociology at Rutgers University in 2001.
Books
Aneesh has written or edited the following books:
Neutral Accent: How Language, Labor and Life Become Global (2015)
The Long 1968: Revisions and New Perspectives (co-edited, 2012)
Beyond Globalization: Making New Worlds in Media, Art, and Social Practices (co-edited, 2011)
Virtual Migration: the Programming of Globalization (2006)
References
External links
Home page
Living people
Indian emigrants to the United States
University of Allahabad alumni
Rutgers University alumni
University of Wisconsin–Milwaukee faculty
Year of birth missing (living people)
Place of birth missing (living people)
Government by algorithm | A. Aneesh | Engineering | 373 |
2,954,685 | https://en.wikipedia.org/wiki/Povarov%20reaction | The Povarov reaction is an organic reaction described as a formal cycloaddition between an aromatic imine and an alkene. The imine in this organic reaction is a condensation reaction product from an aniline type compound and a benzaldehyde type compound. The alkene must be electron rich which means that functional groups attached to the alkene must be able to donate electrons. Such alkenes are enol ethers and enamines. The reaction product in the original Povarov reaction is a quinoline. Because the reactions can be carried out with the three components premixed in one reactor it is an example of a multi-component reaction.
Reaction mechanism
The reaction mechanism for the Povarov reaction to the quinoline is outlined in Scheme 1. In step one aniline and benzaldehyde react to the Schiff base in a condensation reaction. The Povarov reaction requires a Lewis acid such as boron trifluoride to activate the imine for an electrophilic addition of the activated alkene. This reaction step forms an oxonium ion which then reacts with the aromatic ring in a classical electrophilic aromatic substitution. Two additional elimination reactions create the quinoline ring structure.
The reaction is also classified as a subset of aza Diels-Alder reactions; however, it occurs by a step-wise rather than concerted mechanism.
Examples
The reaction depicted in Scheme 2 illustrates the Povarov reaction with an imine and an enamine in the presence of yttrium triflate as the Lewis acid. This reaction is regioselective because the iminium ion preferentially attacks the nitro ortho position and not the para position. The nitro group is a meta directing substituent but since this position is blocked, the most electron rich ring position is now ortho and not para. The reaction is also stereoselective because the enamine addition occurs with a diastereomeric preference for trans addition without formation of the cis isomer. This is in contrast to traditional Diels–Alder reactions, which are stereospecific based on the alkene geometry.
In 2013, Doyle and coworkers reported a Povarov-type, formal [4+2]-cycloaddition reaction between donor-acceptor cyclopropenes and imines (Scheme 3). In the first step, a dirhodium catalyst effects diazo decomposition from silyl enol ether diazo compound to yield a donor/acceptor cyclopropene. The donor/acceptor cyclopropene is then reacted with an aryl imine under scandium(III) triflate catalyzed conditions to yield cyclopropane-fused tetrahydroquinolines in good yields and diastereoselectivities. Treatment of these compounds with TBAF invokes a ring-expansion that provides the corresponding benzazepines.
Variations
One variation of the Povarov reaction is a four-component reaction. Whereas in the traditional Povarov reaction the intermediate carbocation gives an intramolecular reaction with the aryl group, this intermediate can also be terminated by an additional nucleophile such as an alcohol. Scheme 4 depicts this four-component reaction with the ethyl ester of glyoxylic acid, 3,4-dihydro-2H-pyran, aniline, and ethanol, with the Lewis acid scandium(III) triflate and molecular sieves.
References
See also
Doebner reaction
Doebner-Miller reaction
Grieco three-component condensation
Cycloadditions
Multiple component reactions
Quinoline forming reactions
Name reactions | Povarov reaction | Chemistry | 774 |
35,933,568 | https://en.wikipedia.org/wiki/Conspicuous%20expression | Conspicuous expression or performative consumption are terms used to describe the act of doing something for the primary purpose of having someone see you do it. This is based on the concepts of conspicuous consumption, conspicuous leisure, and the performative turn.
This is similar to conspicuous consumption except that it does not involve buying anything. Additionally, rather than showing off wealth, conspicuous expression is used to show off social status. In other words, it is doing something for others to witness so that they think you are "cool".
See also
Hipster (contemporary subculture)
Commodity fetishism
Embeddedness
References
Interpersonal relationships | Conspicuous expression | Biology | 123 |
2,828,266 | https://en.wikipedia.org/wiki/Elevator%20operator | An elevator operator (North American English), liftman (in Commonwealth English, usually lift attendant), or lift girl (in British English), is a person specifically employed to operate a manually operated elevator.
While largely considered an obsolete occupation, elevator operators continue to work in historic installations and fill modern-day niches.
Historic description
Being an effective elevator operator required many skills. Manual elevators were often controlled by a large lever. The elevator operator had to regulate the elevator's speed, which typically required a good sense of timing to consistently stop the elevator level with each floor. In addition to their training in operation and safety, department stores later combined the role of operator with greeter and tour guide, announcing product departments, floor by floor, and occasionally mentioning special offers.
Remaining examples
Buildings
With the advent of user-operated elevators such as those utilizing push buttons to select the desired floor, few elevator operators remain. A few older buildings still maintain working manually operated elevators and thus elevator operators may be employed to run them. In Dayton, Ohio, the Mendelson Liquidation Outlet operates out of an old Delco building that has an old passenger elevator run by an operator. The Fine Arts Building in Chicago; the Young–Quinlan Building in downtown Minneapolis, Minnesota; City Hall in Buffalo, New York; the Commodore Apartment Building in Louisville, Kentucky; City Hall in Asheville, North Carolina; and the Cyr Building in downtown Waterville, Maine are a few in the United States to employ elevator operators. In 2017, it was estimated that over 50 buildings in New York City used elevator operators, primarily in apartment buildings on the Upper East and West Sides of Manhattan, as well as some buildings in Brooklyn. The Stockholm Concert Hall, in Sweden, employs an elevator operator by necessity since there is an entrance to the elevator directly from street level, requiring an employee to be positioned in the elevator to inspect tickets.
In more modern buildings, elevator operators are still occasionally encountered. For example, they are commonly seen in Japanese department stores such as Sogo and Mitsukoshi in Japan and Taiwan, as well as high speed elevators in skyscrapers, as seen in Taipei 101, and at the Lincoln Center for the Performing Arts. Some monuments, such as the Space Needle in Seattle, the Eiffel Tower in Paris and the CN Tower in Toronto, employ elevator operators to operate specialized or high-speed elevators, discuss the monument (or the elevator technology) and help direct crowd traffic.
New York City Subway stations
There are a few elevator operators working in the New York City Subway system. They are located at five stations: 168th Street, 181st Street at St. Nicholas Avenue and at Fort Washington Avenue, 190th Street, and 191st Street in Washington Heights, upper Manhattan. In these stations, elevators serve as the sole or primary means of non-emergency access. The elevator attendants currently serve as a way to reassure passengers as the elevators are the only entrance to the platforms, and passengers often wait for the elevators with an attendant. The attendants at the five stations are primarily maintenance and cleaning workers who suffered injuries that made it hard for them to continue doing their original jobs.
History
The elevators were made automated during the 1970s, but the operators were retained, though they were reduced in quantity in 2003.
In 2004, the number of elevator attendants at the stations was reduced to one per station as a result of budget cuts by the Metropolitan Transportation Authority (MTA). The agency had intended to remove all the attendants, but kept one in each station after many riders protested. The change saved $1.2 million a year. In November 2007, the MTA proposed to eliminate the operators' positions, but on December 7, 2007, the MTA announced that it would not remove the remaining elevator operators due to pushback from elected officials and residents from the area. In October 2018, the MTA again proposed removing the elevator operators at the five stations, but this decision was reversed after dissent from the Transport Workers' Union.
San Francisco BART
As of 2022, elevator operators are currently employed in Market Street stations of the San Francisco Bay Area's Bay Area Rapid Transit rapid transit system to provide for passenger safety and elevator cleanliness amidst regional problems with homelessness and substance dependence.
Amusement parks
Theme parks and amusement parks often have observation towers, which employ elevator operators. An example is the Sky Tower at Six Flags Magic Mountain in Santa Clarita, California. While these rides may have modern or button-operated elevators that a patron is capable of using, they often employ ride operators for safety and crowd control purposes. Because many jurisdictions have stringent injury liability laws for amusement park operators and the fact that vandalism can be a big problem, some parks do not allow patrons to ride these rides without an employee present. Additionally, if there is a museum at the top of such a ride, the operator will usually give an introduction to the purpose and contents of the museum and other promotional messages about the park.
Construction sites
Manual elevator operators can be employed in the construction of multi-storied buildings, either using temporary exterior hoists or traditional elevators that are still being installed.
Elevator girls in Japan
Erebētā gāru ("elevator girl"), shortened to erega, describes the occupation of women who operate elevators in Japan. When the role became common in the 1920s, additional terms such as shokoki garu ("up and down controller girl"), hakojo ("box girl"), and erebeta no onna untensyu ("woman elevator driver") were also used to describe this role. However, erebeta girl remains the popular term for this occupation, a staple sight of urban Japan. Sporting tailored uniforms and robotic smiles, elevator girls are trained extensively in polite speaking and posture. In contrast with the salaryman of Japan, the elevator girl has been symbolic of women's roles in society literally and physically moving up and down as women entered the Japanese workforce. Today, few elevator girls remain in department stores, although those which retain them consider the elevator girl an effective marketing strategy. Elevator girls are an example of feminized occupations in the workplace.
History
Prior to 1929, elevator operators were men. In 1929, the Ueno Branch of Matsuzakaya department store hired women to operate the elevators in its new facilities. In the same year, Yomiuri Shinbun ran an article calling elevator operation the new occupation of Japanese women, commenting on the experiences of the first elevator girls. Although women in the United States had performed the same duty previously, in Japan the shift to female elevator operators was remarkable. At first, female elevator operators had to perform the same functional tasks as male operators, operating levers and closing elevator doors. As elevators became automated, the role shifted to greeting customers, advertising sales to customers, and making announcements.
Depiction
Elevator girls appear in numerous works of literature and film. A key storytelling tool using the elevator girl has been to juxtapose the reserved, controlled role of the elevator girl at work with the unknown, potentially scandalous role that the woman plays in her personal life. A pornographic film featuring Shoji Miyuki, Going Up: I am an Elevator Girl, played off this contrast, telling the story of a demure elevator girl who is secretly a nymphomaniac engaging in sexual activities in the elevator.
Popular anime series Crayon Shin Chan featured an elevator girl who became trapped in her elevator when the elevator broke.
The 2009 film Elevator Nightmare was advertised by comedienne Torii Miyuki watching the film in an elevator with three professional elevator girls.
Karl Greenfeld's 1995 expose of Japanese culture Speed Tribes: Days and Nights with Japan's Next Generation, featured a fictional story of an elevator girl who works the elevator by day and engages in drugs and risky sex by night.
Courtney Barnett wrote a song called Elevator Operator.
Notes
References
Operator
Service occupations
Obsolete occupations | Elevator operator | Engineering | 1,592 |
21,483 | https://en.wikipedia.org/wiki/Numeral%20%28linguistics%29 | In linguistics, a numeral in the broadest sense is a word or phrase that describes a numerical quantity. Some theories of grammar use the word "numeral" to refer to cardinal numbers that act as a determiner that specify the quantity of a noun, for example the "two" in "two hats". Some theories of grammar do not include determiners as a part of speech and consider "two" in this example to be an adjective. Some theories consider "numeral" to be a synonym for "number" and assign all numbers (including ordinal numbers like "first") to a part of speech called "numerals". Numerals in the broad sense can also be analyzed as a noun ("three is a small number"), as a pronoun ("the two went to town"), or for a small number of words as an adverb ("I rode the slide twice").
Numerals can express relationships like quantity (cardinal numbers), sequence (ordinal numbers), frequency (once, twice), and part (fraction).
Identifying numerals
Numerals may be attributive, as in two dogs, or pronominal, as in I saw two (of them).
Many words of different parts of speech indicate number or quantity. Such words are called quantifiers. Examples are words such as every, most, least, some, etc. Numerals are distinguished from other quantifiers by the fact that they designate a specific number. Examples are words such as five, ten, fifty, one hundred, etc. They may or may not be treated as a distinct part of speech; this may vary, not only with the language, but with the choice of word. For example, "dozen" serves the function of a noun, "first" serves the function of an adjective, and "twice" serves the function of an adverb. In Old Church Slavonic, the cardinal numbers 5 to 10 were feminine nouns; when quantifying a noun, that noun was declined in the genitive plural like other nouns that followed a noun of quantity (one would say the equivalent of "five of people"). In English grammar, the classification "numeral" (viewed as a part of speech) is reserved for those words which have distinct grammatical behavior: when a numeral modifies a noun, it may replace the article: the/some dogs played in the park → twelve dogs played in the park. (*dozen dogs played in the park is not grammatical, so "dozen" is not a numeral in this sense.) English numerals indicate cardinal numbers. However, not all words for cardinal numbers are necessarily numerals. For example, million is grammatically a noun, and must be preceded by an article or numeral itself.
Numerals may be simple, such as 'eleven', or compound, such as 'twenty-three'.
In linguistics, however, numerals are classified according to purpose: examples are ordinal numbers (first, second, third, etc.; from 'third' up, these are also used for fractions), multiplicative (adverbial) numbers (once, twice, and thrice), multipliers (single, double, and triple), and distributive numbers (singly, doubly, and triply). Georgian, Latin, and Romanian (see Romanian distributive numbers) have regular distributive numbers, such as Latin singuli "one-by-one", bini "in pairs, two-by-two", terni "three each", etc. In languages other than English, there may be other kinds of number words. For example, in Slavic languages there are collective numbers (monad, pair/dyad, triad) which describe sets, such as pair or dozen in English (see Russian numerals, Polish numerals).
Some languages have a very limited set of numerals, and in some cases they arguably do not have any numerals at all, but instead use more generic quantifiers, such as 'pair' or 'many'. However, by now most such languages have borrowed the numeral system or part of the numeral system of a national or colonial language, though in a few cases (such as Guarani), a numeral system has been invented internally rather than borrowed. Other languages had an indigenous system but borrowed a second set of numerals anyway. An example is Japanese, which uses either native or Chinese-derived numerals depending on what is being counted.
In many languages, such as Chinese, numerals require the use of numeral classifiers. Many sign languages, such as ASL, incorporate numerals.
Larger numerals
English has derived numerals for multiples of its base (fifty, sixty, etc.), and some languages have simplex numerals for these, or even for numbers between the multiples of its base. Balinese, for example, currently has a decimal system, with words for 10, 100, and 1000, but has additional simplex numerals for 25 (with a second word for 25 only found in a compound for 75), 35, 45, 50, 150, 175, 200 (with a second found in a compound for 1200), 400, 900, and 1600. In Hindustani, the numerals between 10 and 100 have developed to the extent that they need to be learned independently.
In many languages, numerals up to the base are a distinct part of speech, while the words for powers of the base belong to one of the other word classes. In English, these higher words are hundred 102, thousand 103, million 106, and higher powers of a thousand (short scale) or of a million (long scale—see names of large numbers). These words cannot modify a noun without being preceded by an article or numeral (*hundred dogs played in the park), and so are nouns.
In East Asia, the higher units are hundred, thousand, myriad 104, and powers of myriad. In the Indian subcontinent, they are hundred, thousand, lakh 105, crore 107, and so on. The Mesoamerican system, still used to some extent in Mayan languages, was based on powers of 20: bak’ 400 (202), pik 8000 (203), kalab 160,000 (204), etc.
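The thousand-based, myriad-based, and lakh/crore groupings described above differ only in how digits are chunked from the right. A minimal sketch (function name and interface are my own, not from the text):

```python
def group_digits(n: int, first: int, rest: int) -> str:
    """Group the decimal digits of a non-negative integer from the right:
    the lowest group has `first` digits, all higher groups have `rest`."""
    s = str(n)
    groups = [s[max(0, len(s) - first):]]
    i = len(s) - first
    while i > 0:
        groups.append(s[max(0, i - rest):i])
        i -= rest
    return ",".join(reversed(groups))

# Western thousand-based grouping (powers of 10^3):
print(group_digits(1234567890, 3, 3))  # 1,234,567,890
# East Asian myriad-based grouping (powers of 10^4):
print(group_digits(1234567890, 4, 4))  # 12,3456,7890
# Indian lakh/crore grouping (3 digits, then 2s):
print(group_digits(1234567890, 3, 2))  # 1,23,45,67,890
```

The same number reads as "1 billion 234 million ...", "12 yi 3456 wan ...", or "1 arab 23 crore 45 lakh ..." depending on the grouping.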
Numerals of cardinal numbers
The cardinal numbers have numerals. In the following tables, [and] indicates that the word and is used in some dialects (such as British English), and omitted in other dialects (such as American English).
This table demonstrates the standard English construction of some cardinal numbers. (See next table for names of larger cardinals.)
English names for powers of 10
This table compares the English names of cardinal numbers according to various American, British, and Continental European conventions. See English numerals or names of large numbers for more information on naming numbers.
There is no consistent and widely accepted way to extend cardinals beyond centillion (centilliard).
Myriad, Octad, and -yllion systems
The following table details the myriad, octad, Archimedes' notation (Ancient Greek), Chinese myriad, Chinese long scale, and -yllion names for powers of 10.
There is also a system of number notation proposed by Donald Knuth, called the -yllion system, in which a new word is invented for every 2ⁿ-th power of ten.
Fractional numerals
This is a table of English names for non-negative rational numbers less than or equal to 1. It also lists alternative names, but there is no widespread convention for the names of extremely small positive numbers.
Keep in mind that rational numbers like 0.12 can be represented in infinitely many ways, e.g. zero-point-one-two (0.12), twelve percent (12%), three twenty-fifths (3/25), nine seventy-fifths (9/75), six fiftieths (6/50), twelve hundredths (12/100), twenty-four two-hundredths (24/200), etc.
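The equivalence of these spellings can be checked directly with Python's standard `fractions` module, which normalises every fraction to lowest terms:

```python
from fractions import Fraction

# All of these name the same rational number 0.12:
forms = [
    Fraction(12, 100),   # twelve hundredths
    Fraction(3, 25),     # three twenty-fifths
    Fraction(9, 75),     # nine seventy-fifths
    Fraction(6, 50),     # six fiftieths
    Fraction(24, 200),   # twenty-four two-hundredths
]
assert all(f == Fraction("0.12") for f in forms)

# Normalisation collapses every spelling to one canonical value:
print(Fraction(9, 75))  # 3/25
```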
Other specific quantity terms
Various terms have arisen to describe commonly used measured quantities.
Unit: 1 (based on a single entity of counting or measurement of an object or item)
Pair: 2 (the base of the binary numeral system)
Leash: 3 (the base of the ternary numeral system)
Dozen: 12 (the base of the duodecimal numeral system)
Baker's dozen: 13 (based on a group of thirteen objects or items)
Score: 20 (the base of the vigesimal numeral system)
Shock: 60 (the base of the sexagesimal numeral system)
Gross: 144 (based on a group of 144 objects or items)
Great gross: 1,728 (based on a group of 1,728 objects or items)
Basis of counting system
Not all peoples use counting, at least not verbally. Specifically, there is not much need for counting among hunter-gatherers who do not engage in commerce. Many languages around the world have no numerals above two to four (if they are actually numerals at all, and not some other part of speech)—or at least did not before contact with the colonial societies—and speakers of these languages may have no tradition of using the numerals they did have for counting. Indeed, several languages from the Amazon have been independently reported to have no specific number words other than 'one'. These include Nadëb, pre-contact Mocoví and Pilagá, Culina and pre-contact Jarawara, Jabutí, Canela-Krahô, Botocudo (Krenák), Chiquitano, the Campa languages, Arabela, and Achuar. Some languages of Australia, such as Warlpiri, do not have words for quantities above two, and neither did many Khoisan languages at the time of European contact. Such languages do not have a word class of 'numeral'.
Most languages with both numerals and counting use base 8, 10, 12, or 20. Base 10 appears to come from counting one's fingers, base 20 from the fingers and toes, base 8 from counting the spaces between the fingers (attested in California), and base 12 from counting the knuckles (3 each for the four fingers).
No base
Many languages of Melanesia have (or once had) counting systems based on parts of the body which do not have a numeric base; there are (or were) no numerals, but rather nouns for relevant parts of the body—or simply pointing to the relevant spots—were used for quantities. For example, 1–4 may be the fingers, 5 'thumb', 6 'wrist', 7 'elbow', 8 'shoulder', etc., across the body and down the other arm, so that the opposite little finger represents a number between 17 (Torres Islands) to 23 (Eleman). For numbers beyond this, the torso, legs and toes may be used, or one might count back up the other arm and back down the first, depending on the people.
2: binary
Binary systems are based on the number 2, using zeros and ones. Due to its simplicity, only having two distinct digits, binary is commonly used in computing, with zero and one often corresponding to "off/on" respectively.
3: ternary
Ternary systems are based on the number 3, having practical usage in some analog logic, in baseball scoring and in self-similar mathematical structures.
4: quaternary
Quaternary systems are based on the number 4. Some Austronesian, Melanesian, Sulawesi, and Papua New Guinea ethnic groups count with the base number four, using the term asu or aso, the word for dog, as the ubiquitous village dog has four legs. Anthropologists argue that this is also based on early humans noting the shared human and animal body plan of two arms and two legs, as well as on the system's ease for simple arithmetic and counting. As an example of this ease, a realistic scenario could include a farmer returning from the market with fifty asu heads of pig (200), less 30 asu (120) of pig bartered for 10 asu (40) of goats, noting his new pig count total as twenty asu: 80 pigs remaining. The system has a correlation to the dozen counting system and is still in common use in these areas as a natural and easy method of simple arithmetic.
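The farmer's tally above is just multiplication by four. A sketch of the arithmetic (the function name is mine, for illustration only):

```python
ASU = 4  # one asu = four animals, after the four legs of the village dog

def asu_to_head(asu: int) -> int:
    """Convert a count given in asu to a head count of animals."""
    return asu * ASU

# The farmer's tally from the text:
pigs = asu_to_head(50)    # 50 asu of pig = 200 pigs
pigs -= asu_to_head(30)   # barter away 30 asu (120 pigs)...
goats = asu_to_head(10)   # ...for 10 asu (40) of goats
print(pigs // ASU, pigs)  # 20 asu -> 80 pigs remaining
```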
5: quinary
Quinary systems are based on the number 5. It is almost certain the quinary system developed from counting by fingers (five fingers per hand). An example are the Epi languages of Vanuatu, where 5 is luna 'hand', 10 lua-luna 'two hand', 15 tolu-luna 'three hand', etc. 11 is then lua-luna tai 'two-hand one', and 17 tolu-luna lua 'three-hand two'.
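The Epi construction quoted above is regular enough to generate. This sketch uses only the numeral roots the text actually gives (tai, lua, tolu, luna); the bounds and spelling conventions are assumptions for illustration:

```python
# Numeral roots quoted in the text; everything else is an assumption.
ROOTS = {1: "tai", 2: "lua", 3: "tolu"}

def epi_number(n: int) -> str:
    """Build a quinary (hand-based) number name for 5 <= n <= 19."""
    hands, rest = divmod(n, 5)
    # 5 itself is simply luna 'hand'; 10 and 15 prefix the hand count.
    hand_part = "luna" if hands == 1 else f"{ROOTS[hands]}-luna"
    return hand_part if rest == 0 else f"{hand_part} {ROOTS[rest]}"

print(epi_number(5))   # luna          'hand'
print(epi_number(10))  # lua-luna      'two hand'
print(epi_number(11))  # lua-luna tai  'two-hand one'
print(epi_number(17))  # tolu-luna lua 'three-hand two'
```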
5 is a common auxiliary base, or sub-base, where 6 is 'five and one', 7 'five and two', etc. Aztec was a vigesimal (base-20) system with sub-base 5.
6: senary
Senary systems are based on the number 6. The Morehead-Maro languages of Southern New Guinea are examples of the rare base 6 system with monomorphemic words running up to 66. Examples are Kanum and Kómnzo. The Sko languages on the North Coast of New Guinea follow a base-24 system with a sub-base of 6.
7: septenary
Septenary systems are based on the number 7. Septenary systems are very rare, as few natural objects consistently have seven distinctive features. Traditionally, it occurs in week-related timing. It has been suggested that the Palikúr language has a base-seven system, but this is dubious.
8: octal
Octal systems are based on the number 8. Examples can be found in the Yuki language of California and in the Pamean languages of Mexico, because the Yuki and Pame keep count by using the four spaces between their fingers rather than the fingers themselves.
9: nonary
Nonary systems are based on the number 9. It has been suggested that Nenets has a base-nine system.
10: decimal
Decimal systems are based on the number 10. A majority of traditional number systems are decimal. This dates back at least to the ancient Egyptians, who used a wholly decimal system. Anthropologists hypothesize this may be due to humans having five digits per hand, ten in total. There are many regional variations including:
Western system: based on thousands, with variants (see English numerals)
Indian system: crore, lakh (see Indian numbering system. Indian numerals)
East Asian system: based on ten-thousands (see below)
12: duodecimal
Duodecimal systems are based on the number 12.
These include:
Chepang language of Nepal,
Mahl language of Minicoy Island in India
Nigerian Middle Belt areas such as Janji, Kahugu and the Nimbia dialect of Gwandara.
Melanesia
reconstructed proto-Benue–Congo
Duodecimal numeric systems have some practical advantages over decimal. It is much easier to divide the base digit twelve (which is a highly composite number) by many important divisors in market and trade settings, such as the numbers 2, 3, 4 and 6.
Because of several measurements based on twelve, many Western languages have words for base-twelve units such as dozen, gross and great gross, which allow for rudimentary duodecimal nomenclature, such as "two gross six dozen" for 360. Ancient Romans used a decimal system for integers, but switched to duodecimal for fractions, and correspondingly Latin developed a rich vocabulary for duodecimal-based fractions (see Roman numerals). A notable fictional duodecimal system was that of J. R. R. Tolkien's Elvish languages, which used duodecimal as well as decimal.
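The "two gross six dozen" style of duodecimal nomenclature is a positional decomposition in base twelve. A minimal sketch (the function name is mine):

```python
def to_dozens(n: int) -> tuple[int, int, int]:
    """Decompose a count into (gross, dozen, units): n = 144a + 12b + c."""
    gross, rem = divmod(n, 144)
    dozen, units = divmod(rem, 12)
    return gross, dozen, units

# "two gross six dozen" for 360, as in the text:
print(to_dozens(360))   # (2, 6, 0)
# A great gross is a dozen gross:
print(to_dozens(1728))  # (12, 0, 0)
```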
16: hexadecimal
Hexadecimal systems are based on the number 16.
The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions.
South Asian monetary systems were base-16. One rupee in Pakistan and India was divided into 16 annay. A single anna was subdivided into four paisa or twelve pies (thus there were 64 paise or 192 pies in a rupee). The anna was demonetised as a currency unit when India decimalised its currency in 1957, followed by Pakistan in 1961.
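The pre-decimal rupee subdivisions stated above (16 annas to the rupee, 4 paise or 12 pies to the anna) can be checked with a short sketch; the helper name is mine, for illustration:

```python
# Pre-decimal South Asian money, per the text:
ANNAS_PER_RUPEE = 16
PAISE_PER_ANNA = 4
PIES_PER_ANNA = 12

paise_per_rupee = ANNAS_PER_RUPEE * PAISE_PER_ANNA  # 64 paise in a rupee
pies_per_rupee = ANNAS_PER_RUPEE * PIES_PER_ANNA    # 192 pies in a rupee

def rupees_to_pies(rupees: int, annas: int = 0, pies: int = 0) -> int:
    """Flatten a rupee/anna/pie amount to pies, the smallest unit."""
    return rupees * pies_per_rupee + annas * PIES_PER_ANNA + pies

print(paise_per_rupee, pies_per_rupee)  # 64 192
print(rupees_to_pies(1, 8, 6))          # 1*192 + 8*12 + 6 = 294
```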
20: vigesimal
Vigesimal systems are based on the number 20. Anthropologists are convinced the system originated from digit counting, as did bases five and ten, twenty being the number of human fingers and toes combined.
The system is in widespread use across the world. Some include the classical Mesoamerican cultures, still in use today in the modern indigenous languages of their descendants, namely the Nahuatl and Mayan languages (see Maya numerals). A modern national language which uses a full vigesimal system is Dzongkha in Bhutan.
Partial vigesimal systems are found in some European languages: Basque, Celtic languages, French (from Celtic), Danish, and Georgian. In these languages the systems are vigesimal up to 99, then decimal from 100 up. That is, 140 is 'one hundred two score', not *seven score, and there is no numeral for 400 (great score).
The term score originates from tally sticks, and is perhaps a remnant of Celtic vigesimal counting. It was widely used to learn the pre-decimal British currency in this idiom: "a dozen pence and a score of bob", referring to the 20 shillings in a pound. For Americans the term is most known from the opening of the Gettysburg Address: "Four score and seven years ago our fathers...".
24: quadrovigesimal
Quadrovigesimal systems are based on the number 24. The Sko languages have a base-24 system with a sub-base of 6.
32: duotrigesimal
Duotrigesimal systems are based on the number 32. The Ngiti ethnolinguistic group uses a base 32 numeral system.
60: sexagesimal
Sexagesimal systems are based on the number 60. Ekari has a base-60 system. Sumeria had a base-60 system with a decimal sub-base (with alternating cycles of 10 and 6), which was the origin of the numbering of modern degrees, minutes, and seconds.
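The modern degrees-minutes-seconds notation inherited from the Sumerian system is a two-level base-60 expansion of the fractional part. A minimal sketch:

```python
def to_dms(angle: float) -> tuple[int, int, float]:
    """Split decimal degrees into sexagesimal degrees, minutes, seconds."""
    degrees = int(angle)
    minutes_full = (angle - degrees) * 60
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60
    return degrees, minutes, round(seconds, 6)

print(to_dms(30.505))  # (30, 30, 18.0): 0.505 deg = 30 min 18 s
print(to_dms(45.0))    # (45, 0, 0.0)
```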
80: octogesimal
Octogesimal systems are based on the number 80. Supyire is said to have a base-80 system; it counts in twenties (with 5 and 10 as sub-bases) up to 80, then by eighties up to 400, and then by 400s (great scores).
799 [i.e. 400 + (4 × 80) + (3 × 20) + {10 + (5 + 4)}]
See also
Numerals in various languages
A database Numeral Systems of the World's Languages compiled by Eugene S.L. Chan of Hong Kong is hosted by the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The database currently contains data for about 4000 languages.
Proto-Indo-European numerals
English numerals
Indian numbering system
Polish numerals
Hindustani numerals
Proto-Semitic numerals
Hebrew numerals
Chinese numerals
Japanese numerals
Korean numerals
Vietnamese numerals
Australian Aboriginal enumeration
Balinese numerals
Dzongkha numerals
Finnish numerals
Javanese numerals
Yoruba numerals
Related topics
Long and short scales
Names of large numbers
Numeral system
Numeral prefix
Names of small numbers
Notes
Further reading
Crespo Cantalapiedra, I. (2023). La diversidad en las lenguas: los numerales. Online book (in Spanish).
Names | Numeral (linguistics) | Mathematics | 4,249 |
729,308 | https://en.wikipedia.org/wiki/Navigation%20light | A navigation light, also known as a running or position light, is a source of illumination on a watercraft, aircraft or spacecraft, meant to give information on the craft's position, heading, or status. Some navigation lights are colour-coded red and green to aid traffic control by identifying the craft's orientation. Their placement is mandated by international conventions or civil authorities such as the International Maritime Organization (IMO).
A common misconception is that marine or aircraft navigation lights indicate which of two approaching vessels has the "right of way" as in ground traffic; this is never true. However, the red and green colours are chosen to indicate which vessel has the duty to "give way" or "stand on" (obligation to hold course and speed). Consistent with the ground traffic convention, the rightmost of the two vehicles is usually given stand-on status and the leftmost must give way. Therefore a red light is used on the left (port) side to indicate "you must give way", and a green light on the right (starboard) side indicates "I will give way; you must stand on". In case of two power-driven vessels approaching head-on, both are required to give way.
Marine navigation
In 1838 the United States passed an act requiring steamboats running between sunset and sunrise to carry one or more signal lights; colour, visibility and location were not specified.
In 1846 the United Kingdom passed the Steam Navigation Act 1846 (9 & 10 Vict. c. 100) enabling the Lord High Admiral to publish regulations requiring all sea-going steam vessels to carry lights. The admiralty exercised these powers in 1848 and required steam vessels to display red and green sidelights as well as a white masthead light whilst under way and a single white light when at anchor.
In 1849 the U.S. Congress extended the light requirements to sailing vessels.
In 1889 the United States convened the first International Maritime Conference to consider regulations for preventing collisions. The resulting Washington Conference Rules were adopted by the U.S. in 1890 and became effective internationally in 1897. Within these rules was the requirement for steamships to carry a second mast head light.
The international 1948 Safety of Life at Sea Conference recommended a mandatory second masthead light solely for power-driven vessels over in length and a fixed sternlight for almost all vessels. The regulations have changed little since then.
The International Regulations for Preventing Collisions at Sea (COLREGs) established in 1972 stipulates the requirements for navigation lights required on a vessel.
Basic lighting
Watercraft navigation lights must permit other vessels to determine the type and relative angle of a vessel, and thus decide if there is a danger of collision. In general, sailing vessels are required to carry a green light that shines from dead ahead to two points (22.5°) abaft the beam on the starboard side (the right side from the perspective of someone on board facing forward), a red light from dead ahead to two points abaft the beam on the port side (left side) and a white light that shines from astern to two points abaft the beam on both sides. Power-driven vessels, in addition to these lights, must carry either one or two (depending on length) white masthead lights that shine from ahead to two points abaft the beam on both sides. If two masthead lights are carried then the aft one must be higher than the forward one.
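The sector arcs described above determine which lights an observer sees at a given relative bearing: since one point is 11.25°, each sidelight covers dead ahead to 112.5° abaft on its side, the sternlight fills the remaining 135°, and the masthead light covers the combined sidelight arc. A sketch (boundary bearings are handled simplistically; not a substitute for the COLREGs text):

```python
def visible_lights(bearing: float, power_driven: bool = True) -> set[str]:
    """Running lights visible from a relative bearing, measured
    clockwise from dead ahead (0°). Sidelights span 112.5° each side."""
    b = bearing % 360
    lights = set()
    if b <= 112.5:
        lights.add("green (starboard)")
    if b >= 247.5:
        lights.add("red (port)")
    if 112.5 < b < 247.5:
        lights.add("white sternlight")
    elif power_driven:
        lights.add("white masthead")
    return lights

print(visible_lights(45))   # green sidelight + masthead
print(visible_lights(180))  # sternlight only, seen from astern
print(visible_lights(300))  # red sidelight + masthead
```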
Small power-driven vessels (under ) may carry a single all-round white light in place of the two or three white lights carried by larger vessels, they must also carry red and green navigation lights. Vessels under with a maximum speed of less than are not required to carry navigation lights, but must be capable of showing a white light. Hovercraft at all times and some boats operating in crowded areas may also carry a yellow flashing beacon for added visibility during day or night.
Lights of special significance
In addition to red, white and green running lights, a combination of red, white and green mast lights placed on a mast higher than all the running lights, and viewable from all directions, may be used to indicate the type of craft or the service it is performing. See "User Guide" in external links.
Ships at anchor display one or two white anchor lights (depending on the vessel's length) that can be seen from all directions. If two lights are shown then the forward light is higher than the aft one.
Boats classed as "small" are not compelled to carry navigation lights and may make use of a hand-held flashlight.
Aviation navigation
Aircraft are fitted with external navigational lights similar in purpose to those required on watercraft. These are used to signal actions such as entering an active runway or starting up an engine. Historically, incandescent bulbs have been used to provide light; however, recently light-emitting diodes have been used.
Aircraft navigation lights follow the convention of marine vessels established a half-century earlier, with a red navigation light located on the left wingtip leading edge and a green light on the right wingtip leading edge. A white navigation light is as far aft as possible on the tail or each wing tip. High-intensity strobe lights are located on the aircraft to aid in collision avoidance. Anti-collision lights are flashing lights on the top and bottom of the fuselage, wingtips and tail tip. Their purpose is to alert others when something is happening that ground crew and other aircraft need to be aware of, such as running engines or entering active runways.
In civil aviation, pilots must keep navigation lights on from sunset to sunrise, even after engine shutdown when at the gate. High-intensity white strobe lights are part of the anti-collision light system, as well as the red flashing beacon.
All aircraft built after 11 March 1996 must have an anti-collision light system (strobe lights or rotating beacon) turned on for all flight activities in poor visibility. The system is also recommended in good visibility, when only the strobes and beacon are required; white (clear) lights may be used to increase conspicuity during the daytime. For example, just before pushback, the pilot must keep the beacon lights on to notify ground crews that the engines are about to start. These beacon lights stay on for the duration of the flight. While taxiing, the taxi lights are on. When coming onto the runway, the taxi lights go off and the landing lights and strobes go on. When passing 10,000 feet, the landing lights are no longer required, and the pilot can elect to turn them off. The same cycle in reverse order applies when landing. Landing lights are bright white, forward- and downward-facing lights on the front of an aircraft. Their purpose is to allow the pilot to see the landing area, and to allow ground crew to see the approaching aircraft.
Civilian commercial airliners also have other non-navigational lights. These include logo lights, which illuminate the company logo on the tail fin. These lights are optional to turn on, though most pilots switch them on at night to increase visibility from other aircraft. Modern airliners also have a wing light. These are positioned on the outer side just in front of the engine cowlings on the fuselage. These are not required to be on, but in some cases pilots turn these lights on for engine checks and also while passengers board the aircraft for better visibility of the ground near the aircraft. While seldom seen, the International Code of Signals allows for the exclusive use of flashing blue lights (60 to 100 flashes/minute), visible from as many directions as possible, by medical aircraft to signal their identity.
Spacecraft navigation
In 2011, ORBITEC developed the first light-emitting diode (LED) system for use as running lights on spacecraft. Currently, Cygnus spacecraft, which are uncrewed transport vessels designed for cargo transport to the International Space Station, utilize a navigational lighting system consisting of five flashing high power LED lights. The Cygnus displays a flashing red light on the port side of the vessel, a flashing green on the starboard side of the vessel, two flashing white lights on the top and one flashing yellow on the bottom side of the fuselage.
The SpaceX Dragon and Dragon 2 spacecraft also feature a flashing strobe along with red and green lights.
See also
Formation light
Landing lights
Notes
References
External links
Aerospace engineering
Aircraft external lights
Nautical terminology
Signalling lights | Navigation light | Engineering | 1,710 |
11,610,688 | https://en.wikipedia.org/wiki/Irish%20Transverse%20Mercator | Irish Transverse Mercator (ITM) is the geographic coordinate system for Ireland. It was implemented jointly by the Ordnance Survey Ireland (OSi) and the Ordnance Survey of Northern Ireland (OSNI) in 2001. The name is derived from the Transverse Mercator projection it uses and the fact that it is optimised for the island of Ireland.
History
The older Irish grid reference system required GPS measurements to be "translated" (using co-ordinate transformations). The more precise the GPS measurements were, the more the translation process introduced inaccuracies.
While the existing UTM co-ordinate system partly fulfilled the requirement for direct GPS compatibility it had some drawbacks, including varying levels of distortion across the island due to the central meridian being at the west coast of Ireland.
The new system needed to satisfy various criteria: GPS compatibility, map distortion for the whole island of Ireland had to be minimal, it was to be conformal and backward compatible with existing mapping. A customised Transverse Mercator projection was chosen.
ITM and the older, more established Irish Grid will (initially at least) be used in parallel. As a result, ITM coordinates had to be obviously different so users would not confuse the two. This was done by shifting the ITM false origin further into the Atlantic, thereby creating substantially different co-ordinate numbers for any given location.
While OSi and OSNI intend to supply map information in the older Irish Grid format into the future, the Irish Institution of Surveyors has recommended that ITM be adopted as soon as possible as the preferred official co-ordinate system for Ireland.
Examples
An ITM co-ordinate is generally given as a pair of two six-digit numbers (excluding any digits behind a decimal point which may be used in very precise surveying). The first number is always the easting and the second is the northing. The easting and northing are in metres from the false origin.
The ITM co-ordinate for the Spire of Dublin on O'Connell Street is:
715830, 734697
The first figure is the easting and means that the location is 715,830 metres east from the false origin (along the X axis). The second figure is the northing and puts the location 734,697 metres north of the false origin (along the Y axis).
The equivalent Irish Grid co-ordinate for the same location is:
315904, 234671 or O1590434671
The Spire of Dublin example provides a fix for a location that is accurate to 1 metre. With ITM it is possible to give a more accurate co-ordinate for a given location by using a decimal point after the initial six figure easting and northing.
The ITM co-ordinate for the passive GPS station at the OSi office is:
E 709885.081m, N 736167.699m
This ITM co-ordinate has three digits behind the decimal point which gives a fix for a location with millimetre accuracy. Also notice how the easting in this example is indicated with an “E” and likewise an “N” for the northing. The fact that the co-ordinate is in metres is indicated by the lowercase m.
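The "E ...m, N ...m" style shown above is straightforward to produce programmatically. A sketch (function name and interface are mine):

```python
def format_itm(easting: float, northing: float, precision: int = 3) -> str:
    """Format an ITM coordinate pair in the 'E ...m, N ...m' style.
    precision=3 gives millimetre accuracy, as in the OSi example."""
    return f"E {easting:.{precision}f}m, N {northing:.{precision}f}m"

# The OSi passive GPS station coordinate from the text:
print(format_itm(709885.081, 736167.699))
# E 709885.081m, N 736167.699m
```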
With ITM there is no provision for using myriad letters and truncated coordinates as there is with the Irish Grid.
Every co-ordinate must be given with at least a six-digit easting and northing from the false origin.
Comparison of ITM, Irish Grid and UTM
See also
ETRS89
GRS80
Spatial reference system
References
Ireland’s Surveying Infrastructure for the 21st Century by William Patrick Prendergast.
A New Coordinate System for Ireland-OSi
New Map Projections for Ireland-OSi
Coordinate Positioning Strategy-OSi
Map Projections
OSi Passive GPS station coordinates (If registration is required enter: ie@ie.ie for the email address and password for the password.)
External links
OSi
OSNI
Irish Institution of Surveyors
OSi: Migrating to ITM
Online converters
OSi Coordinate Converter Allows conversion between Irish Grid, ITM, UTM & ETRF89. (If registration is required enter: ie@ie.ie for the email address and password for the password.)
www.fieldenmaps.info Detailed converter: ITM, UTM, Irish Grid, War Office Irish Grid, Bonne Projection, Decimal/Deg. Min. Sec. Lat. Long. with multiple datums.
Ordnance Survey (UK) Coordinate Converter. Click on the ITM button to toggle between Irish Grid and ITM.
Geographic coordinate systems
Geography of Ireland
Land surveying systems
Maps from Ordnance Survey
Navigation
Surveying
Geodesy | Irish Transverse Mercator | Mathematics,Engineering | 988 |
2,673,834 | https://en.wikipedia.org/wiki/Numeronym | A numeronym is a word, usually an abbreviation, composed partially or wholly of numerals. The term can be used to describe several different number-based constructs, but it most commonly refers to a contraction in which all letters between the first and last of a word are replaced with the number of omitted letters (for example, "i18n" for "internationalization").
According to Anne H. Soukhanov, editor of the Microsoft Encarta College Dictionary, it originally referred to phonewords – words spelled by the letters of keys of a telephone pad.
A numeronym can also be called an alphanumeric acronym or alphanumeric abbreviation.
Types
Homophones
A number may be substituted into a word where its pronunciation matches that of the omitted letters. For example, "K9" is pronounced "kay-nine", which sounds like "canine" (relating to dogs).
Examples of numeronyms based on homophones include:
sk8r: skater
B4: before
l8r: later; L8R, also sometimes abbreviated as L8ER, is commonly used in chat rooms and other text based communications as a way of saying goodbye.
G2G: "good to go", "got to go", or "get together", also found as "GTG".
gr8: "great"
P2P: "pay-to-play" or "peer-to-peer"
F2P: "free-to-play"
T2UL/T2YL: "talk to you later", also found as "TTYL".
B2B: "business-to-business"
B2C: "business-to-consumer"
Numerical contractions
Alternatively, letters between the first and last letters of a word may be replaced by the number of letters omitted. For example, the word "internationalization" can be abbreviated by replacing the eighteen middle letters ("nternationalizatio") with "18", leaving "i18n". Sometimes the last letter is also counted and omitted. These word shortenings are sometimes called numerical contractions.
According to Tex Texin, the first numeronym of this kind was "S12n", the electronic mail account name given to Digital Equipment Corporation (DEC) employee Jan Scherpenhuizen by a system administrator because his surname was too long to be an account name. By 1985, colleagues who found Jan's name unpronounceable often referred to him verbally as "S12n" (ess-twelve-en). The use of such numeronyms became part of DEC corporate culture.
Examples of numerical contractions include:
g11n – globalization
i14y – interoperability
a11y – accessibility
m12n – modularization
p13n – personalization
s5n – shorten
l10n – localization
i18n – internationalization
a16z – Andreessen Horowitz
K8s – Kubernetes
o11y – observability (software)
c12s – communications
c14n – canonicalization
E15 – The Eyjafjallajökull volcano in Iceland
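The contraction rule behind all of these examples is the same: keep the first and last letters and replace everything between with the count of omitted letters. A minimal sketch:

```python
def numeronym(word: str) -> str:
    """Contract a word to first letter + count of middle letters + last letter."""
    if len(word) < 4:
        return word  # too short to usefully contract
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
print(numeronym("accessibility"))         # a11y
print(numeronym("Kubernetes"))            # K8s
```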
Purely numeric
Some numeronyms are composed entirely of numbers, such as "212" for "New Yorker", "4-1-1" for "information", "9-1-1" for "help", "101" for "basic introduction to a subject", and "420" for "Cannabis". Words of this type have existed for decades, including those in 10-code, which has been in use since before World War II. Chapter or title numbers of some jurisdictions' statutes have become numeronyms, for example 5150 and 187 from California's penal code. Largely because the production of many American movies and television programs are based in California, usage of these terms has spread beyond its original location and user population.
Examples of purely numeric words include:
64 – Tiananmen Square protests of 1989
69 – 69 (sex position)
143 – I love you
187 – slang for "murder", based on section 187 of the California Penal Code
520 – I love you (one of many numeronyms used in Chinese Internet Slang)
8:46 – The length of time associated with the murder of George Floyd (May 25, 2020 in Minneapolis).
2137 – Time of day at which John Paul II died
1312 – ACAB (All Cops Are Bastards)
Repeated letters
A number may also denote how many times the character before or after it is repeated. This is typically used to represent a name or phrase in which several consecutive words start with the same letter, as in W3 (World Wide Web) or W3C (World Wide Web Consortium).
SI prefixes
Numeronyms can also make use of SI prefixes, as are commonly used to abbreviate long numbers (e.g. "1k" for 1,000 or "1M" for 1,000,000).
Examples of numeronyms using SI prefixes include
Y2K problem – Year 2000 problem
Y2K38 problem – Year 2038 problem
C10k problem – Ten-thousand concurrent connections problem
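Expanding SI-prefix numeronyms like these back to plain counts is a small parsing exercise. A sketch (treating "k"/"K" case-insensitively, which matches the Y2K and C10k spellings above; the function is mine, for illustration):

```python
SI = {"k": 10**3, "m": 10**6, "g": 10**9}

def expand(numeronym: str) -> int:
    """Expand forms like '2K', '10k' or 'C10k' to the count they abbreviate."""
    digits = "".join(ch for ch in numeronym if ch.isdigit())
    scale = SI.get(numeronym[-1].lower(), 1)
    return int(digits) * scale

print(expand("Y2K"))   # 2000  (the year 2000)
print(expand("10k"))   # 10000
print(expand("C10k"))  # 10000 (ten thousand connections)
```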
See also
Leet or leetspeak
Nominal number
-onym
References
Abbreviations
Numbers | Numeronym | Mathematics | 1,096 |
14,261,424 | https://en.wikipedia.org/wiki/Trimethylarsine%20%28data%20page%29 | This page provides supplementary chemical data on Trimethylarsine.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Trimethylarsine (data page) | Chemistry | 84 |
53,398,219 | https://en.wikipedia.org/wiki/Fluent%20%28mathematics%29 | A fluent is a time-varying quantity or variable. The term was used by Isaac Newton in his early calculus to describe his form of a function. The concept was introduced by Newton in 1665 and detailed in his mathematical treatise, Method of Fluxions. Newton described any variable that changed its value as a fluent – for example, the velocity of a ball thrown in the air. The derivative of a fluent is known as a fluxion, the main focus of Newton's calculus. A fluent can be found from its corresponding fluxion through integration.
See also
Method of Fluxions
History of calculus
Leibniz–Newton calculus controversy
Derivative
Newton's notation
Fluxion
References
Mathematical analysis
Differential calculus
History of calculus | Fluent (mathematics) | Mathematics | 142 |