https://en.wikipedia.org/wiki/List%20of%20major%20power%20outages

This is a list of notable wide-scale power outages. To be included, an outage must conform to all of the following criteria:
The outage must not be planned by the service provider.
The outage must affect at least 1,000 people.
The outage must last at least one hour.
There must be at least 1,000,000 person-hours of disruption.
For example:
1,000 people affected for 1,000 hours (42 days) or more would be included; fewer than 1,000 people would not be, regardless of duration.
One million people affected for a minimum of one hour would be included; if the duration were less than one hour, it would not, regardless of number of people.
10,000 people affected for 100 hours, or 100,000 for 10 hours would be included.
Largest
Longest
This ranking multiplies the number of people affected by the duration in hours (person-hours of disruption); it does not reflect the nominal duration of the outages alone.
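The inclusion criteria above amount to a person-hours threshold combined with per-dimension minimums. As an illustration only (a hypothetical helper, not part of the article), the test can be sketched in Python:

```python
def qualifies(people_affected, duration_hours, planned=False):
    """Check an outage against the list's inclusion criteria:
    unplanned, at least 1,000 people, at least one hour, and
    at least 1,000,000 person-hours of disruption."""
    person_hours = people_affected * duration_hours
    return (not planned
            and people_affected >= 1_000
            and duration_hours >= 1
            and person_hours >= 1_000_000)

# The worked examples from the text:
print(qualifies(1_000, 1_000))    # 1,000 people for 1,000 hours -> True
print(qualifies(1_000_000, 1))    # one million people for one hour -> True
print(qualifies(10_000, 100))     # exactly 1,000,000 person-hours -> True
print(qualifies(1_000_000, 0.5))  # under one hour -> False
```

Note that 10,000 people for 100 hours sits exactly at the 1,000,000 person-hour boundary, which the text treats as included.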
Chronology
1960s
1965
November 9—United States and Canada—The Northeast blackout of 1965 affected portions of seven northeastern U.S. states and Ontario. Most radio and television stations within the area lost power or teletype communications, so people within the blackout area relied on broadcasts from other areas for information about the blackout.
1969
August 5—United States—A 50-mile (80 km) stretch of Florida's Gold Coast was hit with a general power failure after an explosion at the Cutler Ridge facility. The outage affected more than 2 million people, and created a vast traffic jam. Miami and Fort Lauderdale downtown areas were offline for almost two hours, with other areas dark for much longer periods.
1970s
1971
February 7—United States—That evening, power in parts of the New York City borough of Manhattan was lost for over four hours. The outage was originally thought to have been caused by an explosion at Con Ed's Waterside power facility at 40th Street and 1st Avenue in Manhattan, but the cause was later suspected to be a fault in a transformer at a substation on 13th Street. New York City television and FM radio stations that transmit from the Empire State Building were off the air. AM radio stations were largely unaffected, as most of their transmitters were located either in Northern New Jersey (e.g. WABC (AM)) or on High Island (e.g. WCBS (AM)) in the Bronx, which was not affected by the blackout; however, several Manhattan AM station studios were affected due to insufficient backup power. Several IND and IRT subway lines were affected, stranding passengers. At Grand Central Terminal, power in the building was lost, but power on the tracks was retained because they ran on direct current. The New York Daily News was also affected when the blackout halted operations at its printing facility.
1976
July 4—United States—A major power failure affected most of Utah and parts of Wyoming for 1.5 to 6 hours.
1977
May 10—Romania—A nationwide blackout lasting five hours caused US$1 billion in losses, more than the Vrancea earthquake of March 4. Subsequent investigations found it was caused by human error.
May 17—United States—Parts of South Florida were blacked out after a malfunctioning relay caused the Turkey Point Nuclear Generating Station in Miami to go offline.
July 13–14—United States—In New York City, 9 million people were affected by a power outage that began with a lightning strike on power lines. A second lightning strike brought down two more overhead lines and the last power connection between New York City and the northwest. The blackout lasted up to 26 hours in some areas and was accompanied by widespread looting.
September 20—Canada—A power outage covered almost the entire province of Quebec, affecting two million people, after a failure at the Montagnais Substation along a series of 735 kV transmission lines connecting to the Churchill Falls Generating Station in Labrador. Power was restored to scattered rural areas within an hour and service was brought back to parts of Montreal and Quebec City within two hours; it took several hours to fully restore power.
1978
December 19—France—A power line overload caused a four-hour outage across most of mainland France.
1980s
1981
January 8—United States—Prisoners on a work assignment burning trash and debris at the Utah State Prison in Draper accidentally caused a major power failure when something they were burning exploded, producing a fireball that shorted out the transmission lines above them. 1.5 million people lost power across almost all of Utah, as well as parts of southeastern Idaho and southwestern Wyoming.
1982
December 22—United States—A transmission tower near Tracy, California collapsed onto an adjacent tower bringing down two 500-kV lines and a pair of 230-kV lines that passed underneath the 500-kV right of way. Total loss of 12,530 MW affected approximately five million people on the west coast.
1983
December 27—Sweden—Two-thirds of the country's network was shut down when a single component in a switching station failed, causing a short circuit in a transformer. This affected about 4.5 million people in the more densely populated southern half.
1985
May 17—United States—Most of South Florida was blacked out after a brush fire in the Everglades damaged overhead transmission lines. Miami, Fort Lauderdale, West Palm Beach, and the Florida Keys lost power for about 3.5 hours. About 4.5 million people were affected.
1987
October 16—United Kingdom—The Great Storm of 1987 interrupted the High-Voltage Cross-Channel Link between the UK and France. The storm caused a domino effect of power outages throughout South East England.
1989
March 13—Canada—The March 1989 geomagnetic storm caused the Hydro-Québec power failure, which left six million people in Quebec without power for over nine hours.
October 17—United States—The 1989 Loma Prieta earthquake knocked out power to about 1.4 million customers in Northern California, mainly due to damaged electrical substations.
1990s
1991
July 7—United States and Canada—A powerful wind storm affected a large portion of central North America and caused power outages for about one million customers from Iowa to Ontario.
1992
August 24—United States—As Hurricane Andrew passed over the northern Florida Keys it downed 17 miles (27 km) of power lines, breaking the wooden poles they were strung on, along a path in four feet (122 cm) of water stretching from the Turkey Point Nuclear Plant southward to the upper Keys. The water was too shallow for large construction barge cranes to rebuild the power pylons, but too deep for land-based construction vehicles. As a result, the Upper and Middle Keys were largely without power for several months, as the Middle Keys Electric Co-op only had generating capacity for 10% of its demand. The power lines heading north to Miami were restored much more quickly, as they were strung along the side of US Highway 1 on dry land. Key West, which had been converting to sourcing 100% of its electricity from Turkey Point and was decommissioning an end-of-life oil-fired plant, suffered no storm damage that far south and was able to restore 75% generating capacity for the Lower Keys in one day.
1995
October 4—United States and Canada—Hurricane Opal, which killed at least 59 people, knocked out power to over two million customers across eastern and southern North America.
1996
July 2–3—United States, Canada and Mexico—Two million people lost power due to a transmission line overheating (the temperature was around 38 °C/100 °F) in Idaho and a 230-kV line between Montana and Idaho tripping. Some customers were without power for minutes, while others were without for hours.
August 10—United States and Mexico—the Western Intertie buckled under the high summer heat of the 1996 Western North America blackouts, causing a cascading power failure affecting nine western U.S. states and parts of Mexico. Four million people were affected. Power was out in some locations for four days.
November 19—United States—A severe ice storm affected the region around Spokane, Washington and Coeur d'Alene, Idaho causing power outages lasting up to two weeks.
1998
January—United States and Canada—The North American Ice Storm of 1998 caused prolonged blackouts in northeastern North America, particularly in Quebec, where many transmission towers were destroyed by ice. Over 3.5 million customers in total lost power during the event.
January 3—Philippines—A broken power line caused by a falling Meralco utility post in Laguna led to a power interruption that affected the entire island of Luzon for almost 7 hours.
February 19–March 27—New Zealand—The 1998 Auckland power crisis resulted in the entire Central Business District of Auckland being without power for several weeks, after a line failure caused a chain reaction leading to the failure of three other lines.
May 31—United States and Canada—A powerful wind storm caused a power outage for nearly two million customers across much of central North America.
September 7—United States—A series of widespread derechos in the Northeast (the Labor Day Derechos) caused a power outage for hundreds of thousands of customers for several days.
December 8—United States—In the San Francisco area, over 350,000 customers (buildings) or 940,000 people were affected by an outage caused when the Pacific Gas and Electric Company placed a San Mateo sub-station online at 8:17 am PST, while the station was still grounded following maintenance. This drew so much power from the transmission lines on the San Francisco Peninsula that 25 other sub-stations in the city automatically and immediately shut down. Power was not fully restored until almost 4:00 pm PST the same day. Economic costs were estimated in tens of millions of dollars.
1999
March 11—Brazil—The 1999 Southern Brazil blackout was a widespread power outage (the largest ever at the time) that involved São Paulo, Rio de Janeiro, Minas Gerais, Goiás, Mato Grosso, Mato Grosso do Sul and Rio Grande do Sul, affecting an estimated 75 to 97 million people. A chain reaction began when lightning struck an electricity substation in Bauru, São Paulo State, at 22:16, tripping most of the 440 kV circuits at the substation. With few routes for power to flow from the generating stations via the 440 kV system (a very important system for São Paulo, carrying electricity generated on the Paraná River), many generators shut down automatically because they had no load. The world's biggest power plant at the time, Itaipu, tried to pick up the load no longer supplied over the 440 kV system, but the 765 kV AC lines and the 600 kV DC lines connecting the plant to the rest of the system could not carry it and tripped as well. In Rio, the military police put 1,200 men on the streets to prevent looting. In São Paulo, traffic authorities announced they had closed the city's tunnels to prevent robberies. More than 60,000 people were on Rio's subway when the lights went out. Power began returning to some areas around midnight, roughly five hours after the failure.
July 5—United States and Canada—the Boundary Waters–Canadian derecho cut power to over 600,000 homes in Quebec with additional outages in New England and in the Upper Great Lakes region.
July 29—Taiwan—Transmission tower No. 326 collapsed due to a landslide, disconnecting around 8.46 million electricity consumers.
December 26–28—France—Cyclone Lothar and Martin left 3.4 million customers without electricity, and forced Électricité de France to acquire all the available portable power generators in Europe, with some even brought in from Canada. These storms brought a fourth of France's high-tension transmission lines down and 300 high-voltage transmission pylons were toppled. It was described as one of the greatest energy disruptions ever experienced by a modern developed country.
2000s
2000
May 9—Portugal—A major power outage left the entire southern half of the country, including Lisbon, without power for a few hours. The blackout occurred shortly after 10 pm local time; the apagão (Portuguese for "big blackout") suddenly plunged Lisbon into complete darkness. Stalled commuter trains and traffic light failures wreaked havoc in the streets. Security was immediately reinforced in the city, but no rise in criminal activity was registered. Energias de Portugal, the main electricity operator, later reported that the blackout was caused by the electrocution of a stork, which landed "in the wrong place at the wrong time".
United States—During the California electricity crisis of 2000–01, there were regular power failures due to energy shortages.
October 20—Philippines—A massive power outage affected most of Luzon including Metro Manila, caused by system failures in the transmission lines of the National Power Corporation in Pangasinan and Bulacan; electricity was fully restored 16 hours later.
2001
January 2—India—A fault in the transmission system in the state of Uttar Pradesh led to cascading failure throughout North India.
May 20—Iran—A problem at a power substation caused a major blackout. Outages were reported in Tehran and at least six provincial capitals: Isfahan, Shiraz, Tabriz, Kermanshah, Qazvin, and Hamadan.
2002
January 30—United States—A major ice storm hit Kansas City, Missouri, knocking trees into power lines and blowing up transformers throughout the city. The outage affected more than 270,000 people.
March 12—Indonesia—A power failure affected 13 million people in South Sumatra and Lampung.
April 30—United States—Nearly all of JEA's 355,000 customers in Jacksonville, Florida, lost power.
July 13—Azerbaijan—Baku and nearly the entirety of the country experienced a blackout due to unknown causes.
2003
July 22—United States—A severe wind storm disrupted power to over 300,000 customers in the Memphis, Tennessee, metropolitan area.
August 14–16—United States and Canada—The Northeast blackout of 2003, a wide-area power failure in the northeastern US and central Canada, affected over 55 million people; power was restored to most areas within two days.
September 2—Malaysia—The 2003 southern Malaysia blackout resulted when a power failure affected five states (out of 13), including the capital Kuala Lumpur, for five hours, starting at 10:00 am local time.
September 23—Denmark and Sweden—A power failure affected five million people in east Denmark and southern Sweden.
September 28—Italy—The 2003 Italy blackout affected the entire country except Sardinia, cutting service to more than 56 million people.
2004
July 12—Greece—Two power plants in Lavrio and Megalopolis shut down due to malfunction within 12 hours of each other during a period of high demand due to a heat wave. That led to a cascading failure causing the collapse of the entire Southern (Power) System, affecting several million people in southern Greece.
2005
Malaysia—The 2005 Malaysia electricity blackout caused electricity to fail in many states in Peninsular Malaysia, including Perak, Penang, Kedah, and Perlis, due to a fault in the main transmission grid near Serendah, Selangor.
January—Brazil—A cyber attack disrupted power service in three cities north of Rio de Janeiro, affecting tens of thousands of people.
May 25—Russia—The 2005 Moscow power blackout affected more than two million people in central Russia. It resulted from a cascading failure of the power grid that began with a transformer failure. Some lines of the Moscow Metro lost power, stranding people in trains; power was largely restored by the following day.
August 29—United States—Hurricane Katrina caused widespread power outages throughout Louisiana, Mississippi, Alabama, Florida, Kentucky, and Tennessee. Exact totals are difficult to define, especially in Louisiana parishes which became unoccupied for months. Power was also disrupted to 1.3 million customers when it passed over Florida several days earlier. In total, around 2.6 million people across the US were left without power as a result of the storm.
2006
August 1—Canada—In the Laurentians of Québec, a large number (146,000, at its peak in the evening) of households were left without electricity for a whole day, and some for up to a week, due to intense thunderstorms that rolled through southern Quebec, including the greater Montreal area. Over 450,000 customers in total were affected.
August 2—Canada—Nearly a quarter million customers of Hydro One lost power after severe thunderstorms that included tornadoes and damaging wind ripped through southern and eastern Ontario.
August 14—Japan—A floating crane hit and broke a transmission line across the Edo River, interrupting power to 1,391,000 customers in the Tokyo Metropolitan Area, including Tokyo, Yokohama, and part of Kawasaki and Ichikawa. Power was restored to all but 15,000 customers within an hour. Full restoration was completed four hours and 42 minutes after the start of the incident.
November 4—Europe—That night, over 15 million households across the main parts of Germany, France, Italy, Belgium, Spain, and Portugal were left without power in the cascading failure known as the 2006 European blackout. Power grids of several other nations (the Netherlands, Poland, Switzerland, the Czech Republic, Greece, and Morocco) experienced minor local outages. The root cause was an overload triggered by the German electricity company E.ON switching off an electricity line over the river Ems to allow the cruise ship Norwegian Pearl to pass through safely. The impact of this disconnection on the security of the network had not been properly assessed, and the European transmission grid split into three independent parts for a period of two hours. The imbalance between generation and demand in each section resulted in the power outages for consumers.
December 14—United States and Canada—The Hanukkah Eve windstorm of 2006 caused widespread damage to the power grid throughout Washington and into parts of Oregon, British Columbia, and Idaho; in some cases, blackouts in the affected area lasted longer than a week.
2007
January 16—Australia—Power was cut to 200,000 people in Victoria when bushfires caused the state's electricity connection to the national grid to shut down.
April 26—Colombia—A nationwide blackout struck at approximately 10:15 am local time, caused by an undetermined technical failure at a substation in the capital, Bogotá. Power returned to most parts of the country after several hours.
July 23—Spain—Barcelona suffered a near-total blackout. Several areas remained without electricity for more than 78 hours due to a massive electrical substation chain failure.
September 26–27—Brazil—A cyberattack caused major disruptions affecting more than three million people in dozens of cities in Espírito Santo.
December 2—Canada—A winter storm damaged transmission systems, resulting in a blackout over much of Eastern Newfoundland and Labrador affecting close to 100,000 customers. About 7,500 customers on the Bonavista Peninsula were without service for almost a week.
December 8–12—United States—A series of ice events cut power to over one million homes and businesses across the Great Plains, including large portions of Oklahoma, Kansas, and Nebraska.
December 12—Netherlands—A Royal Netherlands Air Force AH-64 Apache attack helicopter on a routine training mission crashed into high-voltage power lines, causing a blackout affecting over 50,000 households in the Tielerwaard and Bommelerwaard region. Power was restored after three days.
2008
February 20—Indonesia—Coal supplies to some power plants in Java were stopped, as ships could not dock at ports due to large waves. This resulted in an electricity deficit of about 1,000 megawatts, and the power supply was shut off in several areas to protect the aging infrastructure. This affected the capital, Jakarta.
February 26—United States—A failed switch and fire at an electrical substation outside Miami triggered widespread blackouts in parts of Florida affecting four million people. The nuclear reactors at Turkey Point power plant were shut down on the day. The failure disrupted power to customers in 35 southern Florida counties and spread into the northern Florida peninsula. The affected region ultimately ranged from Miami to Tampa on the state's west coast and Brevard County on the east coast.
April 2—Australia—Around 420,000 households were left without power in Melbourne and in other parts of Victoria after the state was hit by winds of up to 130 km/h.
April 8—Poland—From around 3:30 am, around 400,000 people were left without power in the city of Szczecin and its surroundings (up to 100 km away). Most power was restored within a day. The cause was wet, heavy snow that stuck to the power cables and broke them; one major transmission pylon collapsed in the aftermath.
May 20—Tanzania—The entire island of Zanzibar suffered a complete shutdown of power. It happened at around 10:00 pm local time, and was caused by a rupture of the undersea cable from mainland Tanzania. Power was restored after one month, on June 18.
September 13—United States—Hurricane Ike landed in Galveston, Texas and left over two million customers without power in the Greater Houston area. Power to one million homes was restored by day 6 and to two million homes by day 16.
December 11—United States—Rare winter snowfall in Southern Louisiana caused some 10,000 outages due to the accumulation of snow on transmission lines. Later that night, an ice storm hit Massachusetts and New Hampshire, causing one million people to lose power.
December 12—United States—A large ice storm in the Northeast collapsed power lines from Maine to Pennsylvania due to ice buildup on wires and trees and branches falling on power lines. At the peak of the outages, about 1.5 million people were without power. It took about two weeks to restore power to all locations.
December 26—United States—Power was lost for about 12 hours on the entire island of Oahu, Hawaii, starting at about 6:45 pm, where President-elect Barack Obama and his family were vacationing. It occurred due to lightning strikes on power lines, which caused HECO's system to trip.
2009
January 23—France—A severe windstorm knocked out power to 1.2 million customers.
January 27—United States—An ice storm hit Kentucky and southern Indiana, knocking out power to about 769,000 customers. As of February 15, about 12,000 were still without power from this storm.
January 27–31—Australia—Hundreds of thousands of homes in Victoria, including Melbourne, suffered power failures as a result of a record heat wave. An estimated 500,000 residents of Melbourne were without power on the evening of January 30. The outage affected much of central Melbourne: train and tram services were cancelled, Crown Casino was evacuated, traffic lights failed, people were rescued from lifts, and patrons of the Victorian Arts Centre were evacuated and shows cancelled. The outage occurred only an hour after the National Electricity Market Management Company (NEMMCO) issued a statement saying load shedding was ending and power had been restored. Authorities said there had been a major electricity failure in the city's west, caused by the three-day heat wave; an explosion at South Morang is believed to have contributed to the problems along three transmission lines supplying Victoria's west, and Victorian power supplier SP AusNet shed 1,000 megawatts.
March 30—United Kingdom—A major power cut hit homes and businesses in Glasgow and parts of western Scotland. The affected areas included the west end of Glasgow, Bearsden, Clydebank, Helensburgh, Dumbarton and as far afield as Lochgilphead and Oban; Arran was also affected by the outage. The power cut occurred at 4:20 pm and power was slowly restored between 5:20 and 6:30 pm.
April 15—Kazakhstan and Kyrgyzstan—A little before 9:00 pm, a severe power cut blacked out up to 80% of Almaty and northern parts of Kyrgyzstan, affecting a few million people for several hours. Power was not restored until after midnight local time.
July 20—United Kingdom—Power was cut to around 100,000 homes in the areas of South East London and North Kent, after vandals deliberately caused a fire near a cable installation, which caused failure of a 132 kV cable and four circuit boards. Due to the nature of the cable, it was impossible to re-route supplies around other cables without overloading them. As a result, power supplies were cut to about half of the homes for approximately four days, while other homes were given three-hour allocations of power followed by six hours "off". Over 70 mobile generators were brought in from around the country to help restore power in what was the largest deployment in London's history.
October 30—New Zealand—At around 8:00 am local time, power was cut to the whole of Northland and most of the northern half of Auckland, affecting 280,000 customers (14.5% of the country). A forklift carrying a shipping container accidentally hit one of the Ōtāhuhu to Henderson 220 kV circuits while the other circuit was out for maintenance, leaving the region supplied by four low capacity 110 kV circuits. Power was restored to the entire region around 11:00 am.
November 10–11—Brazil and Paraguay—Starting at 10:13 pm Brasília official time, the 2009 Brazil and Paraguay blackout was caused by the failure of transmission lines from Itaipu Dam, the world's second-largest hydroelectric dam, affecting over 80 million customers. The failure was caused by a major thunderstorm which affected a key transmission line to southeastern Brazil, causing all 20 turbines at the plant to shut down due to the abrupt fall of power demand. Four of Brazil's most densely populated states entirely lost power (including São Paulo and Rio de Janeiro), with 14 more states partly affected. The entire country of Paraguay experienced the power failure. It took about seven hours for the system to fully recover. This is regarded as one of the largest blackouts in history.
2010s
2010
January 30—Australia—Two separate transmission lines were hit by lightning, blacking out Darwin, Northern Territory and the nearby cities of Katherine and Palmerston starting at about 6:00 am. Power was restored to all areas by 4:30 pm.
February—United States—A pair of blizzards hit the Northeast on February 5–6 and again just a few days later on February 9–10. Among the hardest hit areas was the Baltimore–Washington corridor, with well over 200,000 people impacted at the height of the outages and about two-thirds of those without power for periods lasting from half a day to several days. Other urban areas, such as Pittsburgh, were also affected.
March 14—Chile—The March 2010 Chile blackout left roughly 15 million people, about 90% of the country's population, without power when a major transformer failed in southern Chile. Power began to be restored within a few hours, and almost all of the country had power by the following day. The outage was not directly related to damage from the earthquake that hit the country the previous month.
March 14—United States—A severe windstorm disrupted power to hundreds of thousands of customers primarily in southwestern Connecticut as well as parts of Westchester County, Long Island, and New Jersey as a result of a severe wind and rain storm. The outage lasted as long as six days for some customers in the hardest hit communities. Many public school districts were closed for up to five days the following week.
March 30—United Kingdom—About 30,000 homes in Northern Ireland were hit by a power cut caused by winter weather conditions. Omagh, Enniskillen, Dungannon, Derry, Coleraine, and Ballymena were affected.
June 27—United Kingdom—Portsmouth, England suffered a massive blackout when a substation caught fire.
July 15—United States—76,000 people in Oakland and Wayne counties in southeastern Michigan lost power at approximately 7:00 pm during heavy storms.
July 25—United States—An estimated 250,000 Pepco customers lost power in the Washington, D.C., area due to severe storms that swept through the area.
September 1–21—Iceland—The country experienced a massive power outage.
2011
February 2—United States—In Texas, forced outages at two major coal-fired power plants and high electricity demand due to cold weather caused rolling blackouts, affecting up to 3.2 million people.
February 3—Australia—Cyclone Yasi hit communities in North Queensland with winds reaching 300 km/h (186 mph), causing widespread damage across many communities; 170,000 homes lost electricity.
February 4—Brazil—At least eight states in the Northeast Region—Alagoas, Bahia, Ceará, Paraíba, Pernambuco, Piauí, Rio Grande do Norte, and Sergipe—suffered a major blackout from around midnight to 4:00 am. It is estimated that 53 million people were affected. Major cities such as Salvador, Recife, and Fortaleza were completely out of power.
February 22—New Zealand—At 12:51, a 6.3-magnitude earthquake struck Christchurch. Over 80 percent of the city (approximately 160,000 customers) lost power. Most power was restored within five days, though some central areas were still without power as late as May 1.
April 27—United States—One of the United States's most devastating tornado outbreaks disrupted power to most of northern Alabama; some 311 high-tension electrical transmission towers were destroyed by multiple, violent tornadoes. The Browns Ferry Nuclear Plant, the largest in Alabama and the second largest nuclear plant in the US, was also disconnected by the tornadoes, leading the operators to shut down all three reactors following the event.
June 30—India—Chennai suffered a major power outage that affected many parts of the city for more than 15 hours.
July 11—United States—The Chicago area was hit by a large derecho which disrupted power to over 850,000.
July 11—Cyprus—A half-week power outage affected all cities on the Greek part of the island. The outage was caused by an explosion next to the Vassilikos power plant.
July 23—Canada—the failure of a glass insulator caused an outage of most of Northern Saskatchewan for about four hours.
August 27–28—United States—Hurricane Irene knocked out power to over five million customers.
September 8–9—United States and Mexico—the 2011 Southwest blackout affected parts of Southern California and Arizona, as well as parts of northwestern Mexico. The failure initiated after maintenance of a 500 kV line brought it offline, and subsequent weaknesses in operations planning and lack of real-time situational awareness at multiple power stations led to cascading outages. Power restoration was generally effective, though hampered by communication issues, with full restoration taking from 6 to 12 hours depending on location. Over five million people were affected.
September 16—South Korea—The country experienced a widespread blackout due to hot weather.
September 24—Chile—Nine million people in the north and central region were affected by the 2011 Chile blackout which lasted for at least two hours.
October—United States—A snowstorm along the East Coast caused over two million power outages. Some residents of Connecticut and western Massachusetts were without electricity for over 11 days.
2012
January 14—Turkey—A 380 kV transformer failure at the Bursa natural-gas-fired combined-cycle power plant was blamed for voltage deviations in the interconnected power grid that resulted in a blackout; another failure, at the 154 kV Babaeski substation, caused a blackout in Thrace. Six cities and more than 20 million people were affected by the Marmara blackout of 2012. The blackout disrupted metro and tram operation in Istanbul, and gas heating systems did not work during the outage. The problem was resolved by importing electricity from Bulgaria into Thrace and feeding lines in Istanbul from the Ambarlı natural gas plant; power was back in all cities by the evening.
April 4—Cyprus—A blackout hit every city on the island after the Dhekelia Power Station failed from 4:42 to 9:20 am.
April—United States—PG&E customers in Oakland, California, and surrounding areas in Alameda County suffered a heat-related power outage.
June 29—United States—A line of thunderstorms with hurricane-force winds swept from Iowa to the Mid-Atlantic coast and disrupted power to more than 3.8 million people in Indiana, Ohio, West Virginia, Pennsylvania, Maryland, New Jersey, Virginia, Delaware, North Carolina, Kentucky, and Washington, D.C.
July 30—India—Due to a massive breakdown in the northern grid, a major power failure affected seven northern states, including Delhi, Punjab, Haryana, Himachal Pradesh, Uttar Pradesh, Jammu and Kashmir, and Rajasthan. It was the prelude to the outage of the following day.
July 31—India—the 2012 India blackout left half the country without electricity supply. This affected hundreds of trains, hundreds of thousands of households and other establishments as the grid that connects generating stations with customers collapsed for the second time in two days.
October 29–30—United States—Hurricane Sandy brought high winds and coastal flooding to a large portion of the eastern United States, leaving an estimated 8 million customers without power. The storm, which came ashore near Atlantic City, New Jersey, as a Category 1 hurricane, ultimately left scores of homes and businesses without power in New Jersey (2.7 million), New York (2.2 million), Pennsylvania (1.2 million), Connecticut (620,000), Massachusetts (400,000), Maryland (290,000), West Virginia (268,000), Ohio (250,000), and New Hampshire (210,000). Power outages were also reported in a number of other states, including Virginia, Maine, Rhode Island, Vermont, and the District of Columbia.
2013
January 26–February 5—Australia—Ex-Tropical Cyclone Oswald caused the loss of power to over 250,000 customers in South East Queensland. Power was gradually restored over about 10 days.
February 8–9—United States—Some 650,000 homes and businesses in the northeast lost power as the result of a powerful nor'easter that brought hurricane-force wind gusts and more than two feet (60 cm) of snow to New England.
March 22—United Kingdom—200,000 homes in the Greater Belfast area lost power as the result of a fault with the high-voltage transmission network during a snow storm.
March 28—Trinidad and Tobago—A nationwide blackout occurred, reportedly caused by low gas pressure, around 12:37 am AST. The outage stemmed from two causes: a problem with the gas supply from Phoenix Park Gas Processors Ltd, which affected Trinidad, and a subsequent problem at the Cove power plant, which affected Tobago. T&TEC was able to restart the generators at Cove soon after, restoring power to the island from as early as 1 am; the final customer came back on at approximately 3 am. In Trinidad, T&TEC said restoration started at approximately 4:45 am, as there was some delay in restarting the generators at the PowerGen plant in Point Lisas. By around 11 am, approximately 90% of customers in Trinidad had their electricity supply restored.
April 1—Poland—100,000 people suffered power outages due to heavy snowfall, which also hampered operations at Warsaw Airport.
May 5—Philippines—40–50% of Luzon suffered power outages after several transmission lines tripped out, resulting in the isolation of Santa Rita, San Lorenzo, Calaca, Ilijan and Pagbilao Power Plants.
May 21—Thailand—A power failure affected fourteen provinces (out of 76) for four hours, starting at 7:00 pm local time.
May 22—Vietnam and Cambodia—2013 Southern Vietnam and Cambodia blackout. A truck driver moving a tree in Binh Duong let it strike a 500 kV line in the national power grid, causing an outage in 22 provinces and cities in the southern part of Vietnam; it took eight hours before power was fully restored.
September 24—Turkey—The Thrace region lost electric power. According to TREDAŞ (the power distribution utility of the Thrace region), a failure in the substation of the Hamitabat gas-fueled combined cycle power plant in Lüleburgaz in Kırklareli Province caused a power outage in the TEİAŞ 154 kV interconnected power transmission grid. Affected places included Tekirdağ, Edirne, and Kırklareli provinces and Silivri in Istanbul Province. About 1.5 million people were affected. Power was fully restored after two hours (by 00:24 on 25 September).
December 22—Canada—The December 2013 North American storm complex, covering an area from Ontario to as far east as the Maritime provinces, caused power failures. According to reports, as many as 300,000 customers in Toronto lost power; later reports placed the peak number in Ontario without power at 600,000. The storm also caused widespread power outages in mid-Michigan, where as many as 500,000 customers lost power, with restoration efforts expected to continue through December 29.
2014
February 27—Philippines—Parts of Mindanao suffered power outages for six hours.
July 15—Philippines—60% of the power grid in Luzon was lost due to Typhoon Rammasun (Glenda), which devastated the southern part of the island where many power plants are located, such as the geothermal plant in Bicol and the coal plant in Batangas.
July 21—United Kingdom—A major power outage left London, Essex, Kent, and surrounding areas with no power for about half an hour. The cause was revealed to be schoolchildren who set fire to books near power lines in Havering, East London.
August 12—Malta—A nationwide power outage lasted for almost six hours. Power was lost across Malta island and Gozo at 7:50 pm and restored to most areas by 1:30 am. Due to problems with emergency generators, Malta International Airport had to close the runway and several flights were diverted to Catania and Palermo. The cause was a damaged cable which caused an explosion at the electricity distribution centre and automatic shutdown of both power stations. A previous nationwide power cut occurred on January 9, caused by a fault at the Delimara Power Station.
September 4—Egypt—A major blackout struck Cairo and other cities at 6 am, continuing for hours, bringing some key services to a halt. The power outage cost the strategic facilities of the Suez Canal an estimated LE100 million, as naval traffic and industrial activity came to a halt along the vital waterway. Some television channels were halted for nearly two hours due to the outage.
October 5—New Zealand—At 2:15 am, a cable trench fire at Transpower's Penrose substation in Auckland disconnected supply to Vector's local distribution network. Over 85,000 customers in Auckland's central-eastern suburbs lost electricity for over 12 hours. 50% of customers were reconnected by evening and 75% by the following morning.
November 1—Bangladesh—A nationwide power outage lasted for almost 10 hours. Power was lost at around 11:30 am and restored to most areas by 11:00 pm.
November 21—South Africa—Rolling blackouts were implemented nationwide and continued for the duration of the weekend. This followed similar outages earlier in the same month, all of which were triggered as a result of a collapsed coal silo at Eskom's Majuba Power Station, during a period when the state power company was already experiencing severe supply strain on the national grid due to technical difficulties affecting some of its other major turbines.
2015
January 26—Pakistan—80% of the country (some 140 million people) was without power due to a technical fault at a power station in Sindh. (2015 Pakistan blackout)
February 11—Kuwait—A technical problem in one of the main power grids caused most of the country to lose power.
March 27—Netherlands—A technical problem in one of the main power grids in North Holland caused 1 million households to lose power for at least an hour.
March 31—Turkey—Due to technical problems, over 90% of the country (about 70 million people) lost power. Unaffected regions were Van and Hakkari provinces which were fed by electricity from Iran.
August 29—Canada—A powerful wind storm disrupted power to 710,000 customers (nearly 50% of BC Hydro customers) on Vancouver Island and Vancouver's Lower Mainland. 705,000 customers had power restored within 72 hours of the storm. This was BC Hydro's single largest outage.
November 17—United States—A powerful wind storm that downed power lines left more than 161,000 customers without electricity in Spokane County, Washington, and in neighboring counties. It exceeded the ice storm that had occurred 19 years earlier, almost to the day.
November 21—Crimea—A power outage left 1.2 million people in Crimea with reduced or no power following explosions at transmission towers in Ukraine.
December 23—Ukraine—The December 2015 Ukraine power grid cyberattack left 230 thousand people without power for 1–6 hours.
2016
June 7—Kenya—A nationwide blackout which lasted for over 4 hours was caused by a monkey entering a power station. Only about 10 million citizens were affected by the outage as the World Bank estimates that only 23% of the population have access to electricity.
September 1—United States—Hurricane Hermine swept across the Florida Panhandle, directly affecting the state capital, Tallahassee. Hermine disrupted power for more than 350,000 people in Florida and southern Georgia, many of whom were without power for a week.
September 21—Puerto Rico—A full power system collapse occurred on the island, affecting its 3.5 million inhabitants. The outage, popularly referred to as the "Apagón" (Spanish for "blackout"), has been labeled the largest in Puerto Rico not caused by an atmospheric event. The outage occurred after two transmission lines, carrying power at up to 230 kV, failed.
September 28—Australia—The 2016 South Australian blackout affected the entire state of South Australia (1.7 million people) after two tornadoes destroyed three critical elements of infrastructure and the power system shut itself down protectively.
2017
March 8—United States—A severe winter windstorm interrupted power for about 1 million customers in Michigan. About 730,000 were still without power the next day.
July 1—Central America—Countries in the region suffered a 6-hour power outage affecting millions.
July 8—United States—An explosion at a Northridge power plant caused a widespread power outage in the San Fernando Valley, Los Angeles.
July 27—United States—A crew working on the replacement for the Herbert C. Bonner Bridge in the Outer Banks of North Carolina, severed a power cable and caused a blackout on the Outer Banks islands which affected more than 7,000 people during the peak of tourist season. The outage lasted eight days.
August 15—Taiwan—A massive power cut affected millions of households.
August 26—Uruguay—Half the population endured a 4-hour outage. No cause was reported besides bad weather.
September 20—Puerto Rico—Hurricane Maria knocked out power to the entire island. Restoration efforts involved rebuilding significant parts of the already dilapidated power grid. Only 55% of residents had power back after three months, and by August 2018, electricity had finally been restored to the entire island.
October 30—United States and Canada—A combination of the remnants of tropical storm Philippe and an extratropical system resulted in approximately 1.8 million power outages in New England. The storm was particularly bad in Midcoast Maine, where roads became impassable for almost a week, leading many schools to close for five to six days. Many people in the worst-hit areas did not get their power back for over ten days. In Canada, Hydro-Québec reported 200,000 customers losing power because of damage from strong winds produced by the storm.
December 7–10—United States—Winter Storm Benji came through the southeast US states, causing over 900,000 customers to lose power.
2018
January 10, January 21, and February 27—Sudan—The entire country suffered a complete power outage on those days.
March 2—United States—A Nor'easter struck the East Coast, leaving over two million people without power.
March 21—Brazil—A power outage struck large swathes of the country, affecting tens of millions of people, especially in the northern and northeastern regions. The blackout was due to the failure of a transmission line near the Belo Monte hydroelectric station.
April 12—Puerto Rico—870,000 customers lost power when a tree fell on a major power line near Cayey while workers were clearing vegetation. A week later, on April 18, power was lost to all of Puerto Rico when an excavator repairing 2017 damage from Hurricane Maria hit a line connecting two major power plants. After a request by Governor Ricardo Rosselló, the government electricity monopoly, PREPA, terminated its relationship with D. Grimm, the subcontractor responsible for both incidents.
July 3—Azerbaijan—From around 00:20 until around 8:00, nearly the whole country, except Nakhchivan (which had its own independent station), Nagorno-Karabakh, and other areas controlled by Armenian forces, had a major power outage. The cause was unexpectedly high temperatures, which could not be handled by the Mingachevir Electric Station (the country's main electricity supplier).
September 6—Japan—The 2018 Hokkaido Eastern Iburi earthquake knocked out power to about 2.95 million households in Hokkaido, mainly due to damage to the coal-fired thermal power station at Atsuma, according to a Japan Federation of Electric Power Companies report.
September 21—Canada—A severe thunderstorm, with wind gusts up to 260 km/h, hit the Ottawa/Gatineau region. The storm caused large-scale damage to the power infrastructure, with 80 poles broken and one transformer station damaged, causing power outages for about 172,000 customers that lasted from a few hours to several days.
October 10—United States—Hurricane Michael hit the U.S. Gulf Coast, causing thousands of customers in the Florida Panhandle, especially Panama City and Port St. Joe, to lose power for up to 10 days.
October 15—Venezuela—A fire in the La Arenosa electrical station in Carabobo caused a massive blackout that affected 16 states for between one and three hours, although some reported that it lasted 18 hours in some zones. The electrical energy minister, Luis Motta Domínguez, attributed the fire to an explosion.
November 15—Indonesia—A power outage struck South Sulawesi, West Sulawesi, and parts of Central Sulawesi, leaving an estimated total of nine million people without electrical supply. The blackout was due to a disturbance on the Makale–Palopo transmission line.
December 4—Canada—Transmission line failures in south Saskatchewan caused widespread outages to 175,000 to 200,000 SaskPower customers for several hours. The outage was caused by significant frost collection on grid equipment.
December 20—Canada—A windstorm caused outages to 600,000 BC Hydro customers across the Lower Mainland, Vancouver Island and Gulf Islands. The storm damaged 300 power poles and 170 transformers. Power was fully restored December 31. Winds reached speeds of 100 km/h.
2019
February 24—United States—A wind storm in the Lower Great Lakes caused by a very tight pressure gradient around a low pressure system caused hundreds of thousands of power outages, with Ohio and Pennsylvania being the hardest hit. Restoration efforts lasted for up to five days.
March 7—Venezuela—The first in a series of recurring, nationwide blackouts. The first large outage was partially resolved by March 14, but smaller outages persisted in some regions for days afterwards, and a second multi-day outage began on March 25. In March, the country was without power for at least 10 days overall. The blackouts stemmed from the failure of Simón Bolívar Hydroelectric Plant (Guri Dam) in the state of Bolívar, and left most of the country in darkness. By March 12, power began returning to some parts of the country, but Caracas remained only partially powered and western Venezuela remained dark. Government officials claimed the blackout was "an act of sabotage," while experts attributed the failure to aging infrastructure and insufficient maintenance. At least 43 deaths were attributed to the initial wave of blackouts. The last reported blackout occurred on July 22, but was resolved the following day.
June 9—United States—350,000 people in Dallas County, Texas, lost power after a severe thunderstorm downed hundreds of trees across the area. 200,000 remained without power on the evening of June 10, and 16,000 remained without power on the afternoon of June 12. 41% of traffic signals in the city of Dallas were affected; 496 were temporarily inoperable and 168 reverted to flashing red signals.
June 16—Argentina, Uruguay, and Paraguay—The entirety of those countries suffered a massive power outage, leaving an estimated 48 million people without electricity. The cause was an operational error.
July 19–20—United States—Severe thunderstorms, tornadoes, and floods caused damage throughout Wisconsin and disrupted power to more than 277,000 customers. Governor Tony Evers declared a statewide state of emergency, with preliminary estimates of damage and cleanup costs of US$5.3 million. Some affected customers were still without power a week later.
July 19—United States—Storms and high winds in Michigan caused loss of power to roughly 600,000 to 800,000 customers and left many still without power for six days, the second highest number of storm related outages in Michigan power company DTE Energy Co.'s history.
July 22—United States—Over 300,000 people went without power in New Jersey following a storm.
August 4—Indonesia—More than 100 million people were affected by a massive blackout across most of Java, particularly Banten, Jakarta, West Java, parts of Central Java, and the Special Region of Yogyakarta. The blackout began as early as 11:50 local time, when Jakarta MRT authorities detected the loss of electrical supply, rendering its trains inoperable and requiring stranded passengers to be evacuated. Jakarta LRT and KRL Commuterline also suffered from the blackout, leaving TransJakarta the only mass transit system still in operation. Go-Jek and Grab had major problems due to the lack of internet services, and most traffic lights stopped functioning, causing congestion. The initial blackout lasted around nine hours, with power restored to most of the affected areas by 21:00 local time, though some areas went almost 20 hours without power in total. PLN (Indonesia's state electricity company) initially stated that the outage was due to disruptions at a number of plants in Java, but later said the cause was a disruption on the Ungaran–Pemalang high-voltage power line.
August 9—United Kingdom—A major power blackout hit parts of England and Wales, affecting over a million customers and causing widespread travel disruption. Power cuts were reported in north Oxfordshire, the Midlands, Wales, London, the North, and the South East. Oxfordshire County Council issued a statement saying traffic lights had failed in parts of the county, and one resident said the failure had caused 'gridlock' in Banbury. Train services were affected across South East England, with trains delayed and canceled when departing from London stations after a 'surge' on the grid cut off the controls to many railway signals. The blackout was blamed on a lightning strike and the subsequent failure of energy providers Hornsea Wind Farm, Npower, and UK Power Networks to remain operational; they were later fined £10.5m.
September 1—United States, Canada and the Bahamas—Hurricane Dorian damaged transmission systems and caused extensive lengthy power outages along the Atlantic seaboard of North America.
September 29—Spain—A power cut struck the entire island of Tenerife, affecting almost one million people and prompting dozens of emergency callouts, most of them for people trapped in elevators.
November 1—United States and Canada—A major storm left nearly 2,000,000 people without power. In some areas of eastern Ontario and most of southern Québec, 964,000 people were affected. The same storm also cut power to over 800,000 customers in 14 US states between Thursday, October 31 and Saturday, November 2, with 420,000 still without power after three days. On November 2, 600,000 Canadian homes had been reconnected, though over 200,000 still remained disconnected. Many flooded areas—like Sherbrooke—were left without power even longer.
November 3—France—Around 140,000 people were without power for over nine hours in the Pyrénées-Atlantiques region.
November 18—Barbados—Approximately 130,000 people on the island lost power for over 13 hours starting at 7:29 am. The outage continued into the following day.
November 25—Australia—A wind storm ripped through Sydney, leaving 76,000 homes without power, with 24,000 still in the dark on Wednesday, November 27.
2020s
2020
January 19—Indonesia—A power outage struck Central and South Kalimantan, leaving an estimated total of 6.8 million people without electrical supply due to a thunderstorm.
April 12–14—United States—A tornado outbreak moved across the US from Texas to Maine, causing 4.3 million customers to lose power, and affecting an estimated 9.3 million people. The system created 140 confirmed tornadoes including three rated EF4.
August 3–5—United States—Hurricane Isaias pushed through the US from South Carolina to New England then up into Canada, causing 6.4 million customers to lose power, affecting an estimated 13.8 million people. There were around 1.65 million customers affected in New Jersey and 1.19 million customers in New York.
August 10—United States—a derecho moved through the US across Nebraska, Iowa, Illinois, and Indiana, leaving more than 1 million customers without electricity access; including 250,000 customers in the Chicago area.
October 12—India—Mumbai suffered one of its worst blackouts in decades as technical glitches caused its power-transmission network to shut down, leaving millions of people without power for hours.
October 26–28—United States—An ice storm, bringing snow from New Mexico into Oklahoma and northern Texas, left over 400,000 people without power in Oklahoma for multiple days, with over 40,000 still without power 10 days after. Oklahoma Gas & Electric called it "the worst storm in our company's history". The storm was especially damaging because leaves were still on the trees and other tall vegetation, causing large limbs to break and fall onto power lines and city streets due to the extra weight.
2021
January 9—Pakistan—A power outage struck almost the entire country, leaving around 200 million people without electrical supply. The blackout was due to a frequency drop resulting from a "fault" at Guddu at 11:41 p.m.
January 10—United States—Outages related to snowfall were experienced across eastern Texas, affecting over 100,000 customers.
January 19—Philippines—The entire Western Mindanao area experienced a total blackout starting at 8:30 am PST. According to NGCP, the incident has been blamed on maliciously planted vegetation.
February 14–15 and 17–18—United States—A first and second winter storm and associated cold wave caused over five million inhabitants to lose power across the US, with Texas alone having over 4.3 million customers without power.
May 21—Jordan—A blackout left the entire population of 10 million people without electricity for three hours.
May 25—Australia—Almost 400,000 customers in Queensland lost power at around 2pm AEST after a fire in a turbine at a power station in Central Queensland. Power was gradually restored over the following hours into the evening.
May 27—Indonesia—At 13:39 local time, blackouts hit the East, Central, and South Kalimantan provinces. The cause was a disruption in the Tengkawang–Embalut 150 kV transmission network, which caused 29 substations to experience blackouts. More than half of Kalimantan's population was affected.
June 10—Puerto Rico—A fire at a transformer substation in Monacillos, interrupted power to 400,000 customers.
August 11—United States—Severe thunderstorms knocked out power to more than 830,000 people in Michigan.
China—Persistent blackouts in the second half of the year reduced factory output, hitting millions of factories and homes in more than half the nation and impacting 44% of industrial activity. Factories were cut off from power for 3 to 7 days at a time. Sales of candles "skyrocketed".
October 29—Australia—A severe windstorm hit Melbourne, knocking out power to more than 520,000 customers. Victorian energy minister Lily D'Ambrosio said that this was the largest number of customers without power in the state's history.
2022
January 25—Kazakhstan, Kyrgyzstan, and Uzbekistan—The Kyrgyz capital Bishkek, Uzbek capital Tashkent, and Kazakhstan's economic center Almaty were hit by a severe power outage caused by a grid stressed by summer drought and a recent boom in cryptocurrency mining.
May 21—Canada—A derecho ravaged parts of Ontario and Quebec. At its peak, it caused roughly 1.1 million customers to lose power, and thousands were still without power a week later.
June 13–14—United States—The Great Lakes Derecho pounded portions of Indiana, West Virginia, and Ohio with wind gusts up to 95 miles per hour; more than 600,000 households lost power across these three states. The situation was worsened by a following heat wave, causing high demand on electrical infrastructure which resulted in additional outages. Some of those affected by this storm did not get power restored for nearly a week.
October 4—Bangladesh—140 million people lost power. It started shortly after 2:00 p.m. Government spokesman Shamim Ahsan alleged it was caused by a technical malfunction. The country was unable to import enough fuel for all its power stations to use, so some of them were shut down.
October 22–present—Ukraine and Moldova—Air and missile attacks by Russian forces as part of its invasion of Ukraine have devastated much of the country's infrastructure and left millions without electricity. Outages also affected Moldova, which is heavily interconnected with the Ukrainian grid.
November 4—United States—190,000 customers (more than half of the county's 365,000 households) in Snohomish County, Washington, were without power for several days after a windstorm swept through the area with wind speeds of up to 80 mph.
December 3—United States—40,000 customers in Moore County, North Carolina, were left without power after gunfire at two substations.
2023
January 23—Pakistan—220 million people were without power from 7:34 am local time, due to power frequency fluctuations in the south of the country, until 5:15 am the next day.
March 1—Argentina—Parts of Buenos Aires and the provinces of Buenos Aires, Santa Fe, Neuquén, Córdoba, and Mendoza experienced blackouts. Millions of people were affected for at least two hours during a heatwave as traffic lights went out of order and Buenos Aires Metro stations underwent total darkness. The outage was believed to have been caused by a fire in a field near high-tension lines connected to the Atucha Nuclear Power Plant.
March 14—United States—Nearly 300,000 people in the San Francisco Bay Area, California were without power after strong wind storms.
April 5—Canada—Over 1.1 million customers lost power due to a major ice storm that coated Montreal in ice; hundreds of thousands remained without power well into the weekend.
April 9—South Africa—Over 2.5 million people were left without power in the capital Pretoria after vandalized high-voltage 132 kV power lines caused seven pylons to collapse onto the N4 highway.
May 8—Botswana—The entire country was plunged into darkness after midnight following a "grid disturbance" at the Morupule A and B power plants, as well as the transmission power line that connects the country to South Africa. Load-shedding was implemented across the country while repairs were conducted.
September 14—Nigeria—A "total system collapse" resulted in widespread blackouts that affected the entire country.
September 20—Tunisia—A "sudden breakdown" at the Radès power station in the southern suburbs of Tunis caused a nationwide power outage that lasted for up to four hours.
December 9—Sri Lanka—2023 Sri Lankan blackouts.
2024
January 2—Philippines—The entire Western Visayas region experienced a massive blackout starting at 2:19 pm PST. The cause of the outage was being investigated by the NGCP.
June 4–5—Indonesia—Most of Sumatra experienced a blackout caused by a disturbance at a transmission substation located on a high-voltage overhead line in South Sumatra. Around 600,000 customers in West Sumatra alone were affected by the blackout.
June 19—Ecuador—A nationwide blackout occurred following the failure of a transmission line.
June 20—New Zealand—A falling transmission tower caused a power outage that affected most of Northland Region.
June 21—Balkans—A massive power outage left nearly all of Montenegro, as well as the coastal areas of Croatia and parts of Albania and Bosnia and Herzegovina, without electricity. Bosnia said the outage was caused by problems in a distribution line, while Albania attributed it to high temperatures.
July 8–10—United States—Nearly 3 million people in Texas lost power after Hurricane Beryl moved through the state. More than 1.6 million people remained without power two days after the hurricane struck the state.
August 17–present—Lebanon—2024 Lebanon blackout.
August 30—Venezuela—A major power outage was reported nationwide, which the government of President Nicolas Maduro blamed on sabotage by the political opposition.
September 27–October 11—United States—Hurricane Helene caused power outages in the Central Savannah River Area and across an area ranging from Florida to New England.
October 9–present—United States—Hurricane Milton caused power outages in the state of Florida.
October 18–present—Cuba—A nationwide power outage followed the unexpected shutdown of the Antonio Guiteras thermoelectric power plant.
November 19–26—United States—Over 600,000 people lost power when a bomb cyclone hit Washington State.
December 31—Puerto Rico—A massive blackout left most of the territory without electricity.
2025
January 8–present—United States—Wildfires left much of Los Angeles without power because of emergency power shutoffs.
Hanover bars

Hanover bars, in one of the PAL television video formats, are an undesirable visual artifact in the reception of a television image. The name refers to the city of Hannover, in which the PAL system developer Telefunken Fernseh und Rundfunk GmbH was located.
The PAL system encodes color as YUV. The U (corresponding to B-Y) and V (corresponding to R-Y) signals carry the color information for a picture, with the phase of the V signal reversed (i.e. shifted through 180 degrees) on alternate lines (hence the name PAL, or Phase Alternating Line). This is done so that minor phase errors picked up in transmission cancel out during reception. If gross errors occur, however, the phase error produces hue shifts of opposite sign on successive lines, and these complementary errors between the U and V signals become visible as horizontal stripes.
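The effect can be illustrated numerically. The sketch below (not from the article; the phasor model and the 10-degree error are illustrative assumptions) treats one line's chroma as a complex phasor U + jV and applies a fixed channel phase error: the V-switch converts it into equal and opposite hue errors on successive lines, which is the line-to-line striping seen as Hanover bars.

```python
import cmath
import math

def decoded_chroma(u, v, line, phase_error_deg):
    """Model one line of PAL chroma as a complex phasor U + jV.

    V is negated on alternate lines before transmission (the PAL
    V-switch); the channel rotates the phasor by a fixed phase error;
    the receiver then undoes the V-switch.
    """
    sign = -1 if line % 2 else 1
    sent = complex(u, sign * v)                          # apply V-switch
    received = sent * cmath.exp(1j * math.radians(phase_error_deg))
    return complex(received.real, sign * received.imag)  # undo V-switch

u, v = 0.3, 0.4                     # some true (U, V) color
true_hue = math.degrees(cmath.phase(complex(u, v)))
even = decoded_chroma(u, v, 0, 10.0)
odd = decoded_chroma(u, v, 1, 10.0)

# A 10-degree channel error appears as a +10 degree hue shift on even
# lines and -10 degrees on odd lines: the alternating stripes.
print(round(math.degrees(cmath.phase(even)) - true_hue, 6))  # 10.0
print(round(math.degrees(cmath.phase(odd)) - true_hue, 6))   # -10.0
```

A decoder that averages adjacent lines exploits exactly this symmetry, which is what the delay-line designs described below do.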
Later PAL systems added measures to ensure that Hanover bars do not occur, such as the swinging burst used for color synchronization. Other PAL systems may handle this problem differently.
Suppression of Hanover bars
To suppress Hanover bars, PAL color decoders use a delay line that repeats the chroma information from each previous line and blends it with the current line. This causes phase errors to cancel out, at the cost of vertical color resolution, and in early designs, also a loss of color saturation proportional to the phase error.
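As a rough numerical check of this cancellation (an illustrative sketch, not decoder code; the phasor model and the 10-degree error are assumptions), averaging two successive lines of the same color recovers the hue exactly while shrinking the saturation by the cosine of the phase error:

```python
import cmath
import math

def decoded_chroma(u, v, line, phase_error_deg):
    """One line of PAL chroma as a phasor U + jV: V is negated on
    alternate lines (the PAL V-switch), the channel adds a fixed phase
    rotation, and the receiver undoes the V-switch."""
    sign = -1 if line % 2 else 1
    sent = complex(u, sign * v)
    received = sent * cmath.exp(1j * math.radians(phase_error_deg))
    return complex(received.real, sign * received.imag)

u, v, err = 0.3, 0.4, 10.0
true_phasor = complex(u, v)

# Delay-line decoder: blend the chroma of two successive lines.
blended = (decoded_chroma(u, v, 0, err) + decoded_chroma(u, v, 1, err)) / 2

# Hue (phase) is recovered exactly; saturation (magnitude) drops by a
# factor of cos(phase error), the loss seen in early designs.
print(round(abs(cmath.phase(blended) - cmath.phase(true_phasor)), 9))  # 0.0
print(round(abs(blended) / abs(true_phasor), 6))  # 0.984808 (= cos 10 deg)
```

The trade-off is visible in the numbers: the stripes vanish, but a large phase error desaturates the picture, and vertical color resolution is halved because each displayed line mixes two transmitted lines.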
References
See also
Dot crawl
PAL
PAL-S
Television technology
Rubik's Magic, like the Rubik's Cube, is a mechanical puzzle invented by Ernő Rubik and first manufactured by Matchbox in the mid-1980s.
The puzzle consists of eight black square tiles (changed to red squares with goldish rings in 1997) arranged in a 2 × 4 rectangle; diagonal grooves on the tiles hold wires that connect them, allowing them to be folded onto each other and unfolded again in two perpendicular directions (assuming that no other connections restrict the movement) in a manner similar to a Jacob's ladder toy. The front side of the puzzle shows, in the initial state, three separate, rainbow-colored rings; the back side consists of a scrambled picture of three interconnected rings. The goal of the game is to fold the puzzle into a heart-like shape and unscramble the picture on the back side, thus interconnecting the rings.
Numerous ways to accomplish this exist, and experienced players can transform the puzzle from its initial into the solved state in less than 2 seconds. Other challenges for Rubik's Magic include reproducing given shapes (which are often three-dimensional), sometimes with certain tiles required to be in certain positions and/or orientations.
History
Rubik's Magic was first manufactured by Matchbox in 1986. Professor Rubik holds both a Hungarian patent (HU 1211/85, issued 19 March 1985) and a US patent (US 4,685,680, issued 11 August 1987) on the mechanism of Rubik's Magic.
In 1987, Rubik's Magic: Master Edition was published by Matchbox; it consisted of 12 silver tiles arranged in a 2 × 6 rectangle, showing 5 interlinked rings that had to be unlinked by transforming the puzzle into a shape reminiscent of a W. Around the same time, Matchbox also produced Rubik's Magic Create the Cube, a "Level Two" version of Rubik's Magic, in which the puzzle is solved when folded into a cube with a base of two tiles, and the tile colors match at the corners of the cube. It did not have as wide a release, and is rare to find.
In 1996, the original version of Rubik's Magic was re-released by Oddzon, this time with yellow rings on a red background; other versions (for example, a variant of the original with silver tiles instead of black ones) were also produced, and there also was a strategy game based on Rubik's Magic. An unlicensed 2 × 8 version was also produced, with spheres printed on its tiles instead of rings. Custom versions as large as 2 × 12 have been built using kits available from Oddzon.
Details
It can be seen that the total number of 2 × 4 rectangles that can possibly be created using Rubik's Magic is only thirty-two; these can be created from eight distinct chains. The easiest way to classify chains is by the means of the middle tile of the puzzle's finished form (the only tile that has segments of all three rings) and the tile next to it featuring a yellow/orange ring segment (the indicator tile).
Every chain either has the middle tile on the outside (O) or the inside (I) of the chain; if it is arranged so that the indicator tile is to the right of the middle tile, then the position of the ring segment on the indicator tile can either be the upper left (UL), upper right (UR), lower left (LL), or lower right (LR) corner. The position and orientation of the remaining tiles are then determined by the middle and indicator tiles, and eight distinct chains (OUL to ILR) are obtained, although the naming convention is not standardized.
Similarly, the 2 × 4 rectangle forms can be categorized. Each of these forms has exactly one chain associated with it, and each chain yields four different rectangle forms, depending on the position of the folding edge relative to the middle tile. By appending to the chain's name the number of tiles (0, 1, 2, or 3) to the right of the middle tile before the folding edge, a categorization of the rectangle forms is obtained. The starting position, for example, is rectangle form OUR2.
A later version is rainbow-colored with silver rings; a variant rule for this version is to match both the silver rings and the color squares, which makes the puzzle more complicated.
A similar classification can also be obtained for the heart-shaped forms of the puzzle, of which 64 exist.
Analysis
One question when analyzing Rubik's Magic concerns its state space: What is the set of configurations that can be reached from the initial state? This question is harder to answer than for Rubik's Cube, because the set of operations on Rubik's Magic does not form a mathematical group.
The basic operation (move) consists of transferring a hinge between two tiles T1 and T2, from one pair of edges (E11 of T1 and E21 on T2) to another pair E12 and E22.
Here, edges E11 and E12 are adjacent on tile T1, and so are edges E21 and E22 on tile T2, but in opposite order. See the figure below for an example, where E11 is the East edge of the yellow tile, E21 is the West edge of the red tile, and both E12 and E22 are the North edges.
In order to carry out such a move, the hinge being moved cannot cross another hinge. Thus, the two hinges on a tile can take up one of five relative positions (see figure below). The positions are encoded as a number in the range from -2 to +2, called the wrap. The difference between wrap -2 and wrap +2 is the order of the neighboring tiles (which one is on top). The total wrap of a configuration is calculated as the alternating sum of the wraps of the individual tiles in the chain.
The total wrap is invariant under a move. Thus, one can calculate the number of theoretically possible shapes of the chain (disregarding the patterns on the individual tiles) as 1351.
Furthermore, the other tiles in the chain will have to move through space appropriately to allow the folding and unfolding needed to carry out a move. This limits the practically reachable number of configurations further. That number also depends on how much stretching of the wires is tolerated.
Records
The world record for a single solve of the Magic is 0.69 seconds, set by Yuxuan Wang, who also holds the record for an average of five solves, 0.76 seconds, set at the Beijing Summer Open 2011 competition. Because the World Cube Association stopped recognizing Rubik's Magic as an official event in 2012, Yuxuan Wang holds the permanent world record for this puzzle.
Top 5 Magic singles
Top 5 solvers by average of 5 solves
Rubik's Magic: Master Edition
Rubik's Magic: Master Edition (most commonly known as Master Magic) was manufactured by Matchbox in 1987. It is a modification from the Rubik's Magic, with 12 tiles instead of the original's 8. The puzzle has 12 panels interconnected with nylon wires in a 2 × 6 rectangular shape, measuring approximately 4.25 inches (10.5 cm) by 13 inches (32 cm). The goal of the game is the same as for Rubik's Magic, which is to fold the puzzle from a 2 × 6 rectangular shape into a W-like shape with a certain tile arrangement. Initially, the front side shows a set of 5 linked rings. Once solved, the puzzle takes the shape of the letter W, and shows 5 unlinked rings on the back side of the previous initial state.
As a puzzle, the Master Edition is actually simpler than the original Rubik's Magic. With more hinges, the player can work on one part, mostly ignoring the other parts. The minimal solution involves 16 quarter-turn moves. There are multiple solutions. The puzzle was an official World Cube Association (WCA) event from 2003 to 2012.
Top 5 singles
Top 5 solvers by average of 5 solves
Reviews
Jeux & Stratégie #42
1986 Games 100
See also
Pocket Cube
Rubik's Cube
Rubik's Revenge
Professor's Cube
V-Cube 6
V-Cube 7
V-Cube 8
Combination puzzles
Mechanical puzzles
Jacob's ladder (toy)
References
External links
Pictures of Rubik's Magic in various configurations
Detailed description and analysis
List of all 1351 theoretically possible shapes (Legend: = stands for wrap -2; - stands for wrap -1; 0 stands for wrap 0; + stands for wrap +1; # stands for wrap +2)
Categorising folding plate puzzles (plus tips)
New themes and different (solving-wise) mechanical types of folding plate puzzles
Mechanical puzzles
Combination puzzles
Hungarian inventions
1985 works
1985 introductions
1980s toys
In information theory and statistics, negentropy is used as a measure of distance to normality. The concept and phrase "negative entropy" was introduced by Erwin Schrödinger in his 1944 popular-science book What is Life? Later, French physicist Léon Brillouin shortened the phrase to néguentropie (negentropy). In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common.
In a note to What is Life? Schrödinger explained his use of this phrase.
Information theory
In information theory and statistics, negentropy is used as a measure of distance to normality. Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian.
Negentropy is defined as

J(x) = S(φ_x) − S(x)

where S(φ_x) is the differential entropy of the Gaussian density φ_x with the same mean and variance as p_x, and S(x) is the differential entropy of p_x:

S(x) = −∫ p_x(u) log p_x(u) du
Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.
The negentropy of a distribution is equal to the Kullback–Leibler divergence between p_x and a Gaussian distribution with the same mean and variance as p_x. In particular, it is always nonnegative.
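As a small worked example (my own illustration; the function name is hypothetical), the negentropy of a uniform distribution can be computed in closed form from the two differential entropies, and it is the same for every width, reflecting the invariance under invertible linear changes of coordinates:

```python
import math

def negentropy_uniform(a):
    """Negentropy J of Uniform(0, a), in nats: the differential entropy
    of the Gaussian with matching mean and variance minus the uniform's
    own differential entropy."""
    var = a * a / 12.0                                   # variance of Uniform(0, a)
    h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * var)
    h_uniform = math.log(a)                              # differential entropy of Uniform(0, a)
    return h_gauss - h_uniform                           # = 0.5 * log(pi*e/6)
```

The result, ½ ln(πe/6) ≈ 0.176 nats, is independent of the width a and strictly positive, as it must be for any non-Gaussian distribution.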
Correlation between statistical negentropy and Gibbs' free energy
There is a physical quantity closely linked to free energy (free enthalpy), with a unit of entropy and isomorphic to negentropy as known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called capacity for entropy. This quantity is the amount of entropy that may be increased without changing the internal energy or increasing the volume. In other words, it is the difference between the maximum possible entropy, under assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process (the two quantities differ only in sign) and later by Planck for the isothermal-isobaric process. More recently, the Massieu–Planck thermodynamic potential, known also as free entropy, has been shown to play an important role in the so-called entropic formulation of statistical mechanics, applied among others in molecular biology and thermodynamic non-equilibrium processes.
J = S_max − S = −Φ = −k ln Z

where:
S is entropy
J is negentropy (Gibbs's "capacity for entropy")
Φ is the Massieu potential
Z is the partition function
k is the Boltzmann constant
In particular, mathematically the negentropy (the negative entropy function, in physics interpreted as free entropy) is the convex conjugate of LogSumExp (in physics interpreted as the free energy).
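As a sketch of that duality (standard convex analysis, with the negative entropy restricted to the probability simplex; not taken from this article):

```latex
f(p) = \sum_i p_i \log p_i
\quad \text{on } \Big\{ p : p_i \ge 0,\ \textstyle\sum_i p_i = 1 \Big\},
\qquad
f^{*}(x) = \sup_{p} \big( \langle x, p \rangle - f(p) \big)
         = \log \sum_i e^{x_i}.
```

The supremum is attained at the Gibbs distribution p_i ∝ e^{x_i}, which is how the free energy arises in the entropic formulation mentioned above.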
Brillouin's negentropy principle of information
In 1953, Léon Brillouin derived a general equation stating that changing an information bit value requires at least kT ln 2 of energy. This is the same energy as the work Leó Szilárd's engine produces in the idealized case. In his book, he further explored this problem, concluding that any cause of this bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
See also
Exergy
Free entropy
Entropy in thermodynamics and information theory
Notes
Entropy and information
Statistical deviation and dispersion
Thermodynamic entropy
In mathematics, a line bundle expresses the concept of a line that varies from point to point of a space. For example, a curve in the plane having a tangent line at each point determines a varying line: the tangent bundle is a way of organising these. More formally, in algebraic topology and differential topology, a line bundle is defined as a vector bundle of rank 1.
Line bundles are specified by choosing a one-dimensional vector space for each point of the space in a continuous manner. In topological applications, this vector space is usually real or complex. The two cases display fundamentally different behavior because of the different topological properties of real and complex vector spaces: If the origin is removed from the real line, then the result is the set of 1×1 invertible real matrices, which is homotopy-equivalent to a discrete two-point space by contracting the positive and negative reals each to a point; whereas removing the origin from the complex plane yields the 1×1 invertible complex matrices, which have the homotopy type of a circle.
From the perspective of homotopy theory, a real line bundle therefore behaves much the same as a fiber bundle with a two-point fiber, that is, like a double cover. A special case of this is the orientable double cover of a differentiable manifold, where the corresponding line bundle is the determinant bundle of the tangent bundle (see below). The Möbius strip corresponds to a double cover of the circle (the θ → 2θ mapping) and by changing the fiber, can also be viewed as having a two-point fiber, the unit interval as a fiber, or the real line.
Complex line bundles are closely related to circle bundles. There are some celebrated ones, for example the Hopf fibrations of spheres to spheres.
In algebraic geometry, an invertible sheaf (i.e., locally free sheaf of rank one) is often called a line bundle.
Every line bundle arises from a divisor under the following conditions:
(I) If X is a reduced and irreducible scheme, then every line bundle comes from a divisor.
(II) If X is a projective scheme then the same statement holds.
The tautological bundle on projective space
One of the most important line bundles in algebraic geometry is the tautological line bundle on projective space. The projectivization P(V) of a vector space V over a field k is defined to be the quotient of V ∖ {0} by the action of the multiplicative group k×. Each point of P(V) therefore corresponds to a copy of k×, and these copies of k× can be assembled into a k×-bundle over P(V). But k× differs from k only by a single point, and by adjoining that point to each fiber, we get a line bundle on P(V). This line bundle is called the tautological line bundle. This line bundle is sometimes denoted O(−1) since it corresponds to the dual of the Serre twisting sheaf O(1).
Maps to projective space
Suppose that X is a space and that L is a line bundle on X. A global section of L is a function s : X → L such that if p : L → X is the natural projection, then p ∘ s = idX. In a small neighborhood U in X in which L is trivial, the total space of the line bundle is the product of U and the underlying field k, and the section s restricts to a function U → k. However, the values of s depend on the choice of trivialization, and so they are determined only up to multiplication by a nowhere-vanishing function.
Global sections determine maps to projective spaces in the following way: choosing n + 1 not-all-zero points in a fiber of L chooses a fiber of the tautological line bundle on Pn, so choosing n + 1 non-simultaneously vanishing global sections of L determines a map from X into projective space Pn. This map sends the fibers of L to the fibers of the dual of the tautological bundle. More specifically, suppose that s0, ..., sn are global sections of L. In a small neighborhood U in X, these sections determine k-valued functions on U whose values depend on the choice of trivialization. However, they are determined up to simultaneous multiplication by a non-zero function, so their ratios are well-defined. That is, over a point x, the values s0(x), ..., sn(x) are not well-defined because a change in trivialization will multiply them each by a non-zero constant λ. But it will multiply them by the same constant λ, so the homogeneous coordinates [s0(x) : ... : sn(x)] are well-defined as long as the sections do not simultaneously vanish at x. Therefore, if the sections never simultaneously vanish, they determine a map from X to Pn, and the pullback of the dual of the tautological bundle under this map is L. In this way, projective space acquires a universal property.
The universal way to determine a map to projective space is to map to the projectivization of the vector space of all sections of . In the topological case, there is a non-vanishing section at every point which can be constructed using a bump function which vanishes outside a small neighborhood of the point. Because of this, the resulting map is defined everywhere. However, the codomain is usually far, far too big to be useful. The opposite is true in the algebraic and holomorphic settings. Here the space of global sections is often finite dimensional, but there may not be any non-vanishing global sections at a given point. (As in the case when this procedure constructs a Lefschetz pencil.) In fact, it is possible for a bundle to have no non-zero global sections at all; this is the case for the tautological line bundle. When the line bundle is sufficiently ample this construction verifies the Kodaira embedding theorem.
Determinant bundles
In general if V is a vector bundle on a space X, with constant fibre dimension n, the n-th exterior power of V taken fibre-by-fibre is a line bundle, called the determinant line bundle. This construction is in particular applied to the cotangent bundle of a smooth manifold. The resulting determinant bundle is responsible for the phenomenon of tensor densities, in the sense that for an orientable manifold it has a nonvanishing global section, and its tensor powers with any real exponent may be defined and used to 'twist' any vector bundle by tensor product.
The same construction (taking the top exterior power) applies to a finitely generated projective module M over a Noetherian domain, and the resulting invertible module is called the determinant module of M.
Characteristic classes, universal bundles and classifying spaces
The first Stiefel–Whitney class classifies smooth real line bundles; in particular, the collection of (equivalence classes of) real line bundles over a space X are in correspondence with elements of the first cohomology H¹(X; Z/2Z); this correspondence is in fact an isomorphism of abelian groups (the group operations being tensor product of line bundles and the usual addition on cohomology). Analogously, the first Chern class classifies smooth complex line bundles on a space, and the group of line bundles is isomorphic to the second cohomology group H²(X; Z) with integer coefficients. However, bundles can have equivalent smooth structures (and thus the same first Chern class) but different holomorphic structures. The Chern class statements are easily proven using the exponential sequence of sheaves on the manifold.
One can more generally view the classification problem from a homotopy-theoretic point of view. There is a universal bundle for real line bundles, and a universal bundle for complex line bundles. According to general theory about classifying spaces, the heuristic is to look for contractible spaces on which there are free group actions of the respective groups C2 and S1. Those spaces can serve as the universal principal bundles, and the quotients by the actions as the classifying spaces BG. In these cases we can find those explicitly, in the infinite-dimensional analogues of real and complex projective space.
Therefore the classifying space BC2 is of the homotopy type of RP∞, the real projective space given by an infinite sequence of homogeneous coordinates. It carries the universal real line bundle; in terms of homotopy theory that means that any real line bundle L on a CW complex X determines a classifying map from X to RP∞, making L isomorphic to the pullback of the universal bundle. This classifying map can be used to define the Stiefel–Whitney class of L, in the first cohomology of X with Z/2Z coefficients, from a standard class on RP∞.
In an analogous way, the complex projective space CP∞ carries a universal complex line bundle. In this case classifying maps give rise to the first Chern class of L, in H²(X) (integral cohomology).
There is a further, analogous theory with quaternionic (real dimension four) line bundles. This gives rise to one of the Pontryagin classes, in real four-dimensional cohomology.
In this way foundational cases for the theory of characteristic classes depend only on line bundles. According to a general splitting principle this can determine the rest of the theory (if not explicitly).
There are theories of holomorphic line bundles on complex manifolds, and invertible sheaves in algebraic geometry, that work out a line bundle theory in those areas.
See also
I-bundle
Ample line bundle
Line field
Notes
References
Michael Murray, Line Bundles, 2002 (PDF web link)
Robin Hartshorne. Algebraic geometry. AMS Bookstore, 1975.
Differential topology
Algebraic topology
Homotopy theory
Vector bundles
The MIT General Circulation Model (MITgcm) is a numerical computer code that solves the equations of motion governing the ocean or Earth's atmosphere using the finite volume method. It was developed at the Massachusetts Institute of Technology and was one of the first non-hydrostatic models of the ocean. It has an automatically generated adjoint that allows the model to be used for data assimilation. The MITgcm is written in the programming language Fortran.
History
See also
Physical oceanography
Global climate model
References
External links
The MITgcm home page
Department of Earth, Atmospheric and Planetary Science at MIT
The ECCO2 consortium
Physical oceanography
Numerical climate and weather models
Downs' process is an electrochemical method for the commercial preparation of metallic sodium, in which molten NaCl is electrolyzed in a special apparatus called the Downs cell. The Downs cell was invented in 1923 (patented: 1924) by the American chemist James Cloyd Downs (1885–1957).
Operation
The Downs cell uses a carbon anode and an iron cathode. The electrolyte is sodium chloride that has been heated to the liquid state. Although solid sodium chloride is a poor conductor of electricity, when molten the sodium and chloride ions are mobilized, which become charge carriers and allow conduction of electric current.
Some calcium chloride and/or chlorides of barium (BaCl2) and strontium (SrCl2), and, in some processes, sodium fluoride (NaF) are added to the electrolyte to reduce the temperature required to keep the electrolyte liquid. Sodium chloride (NaCl) melts at 801 °C (1074 K), but a salt mixture can be kept liquid at a temperature as low as 600 °C for a mixture containing, by weight, 33.2% NaCl and 66.8% CaCl2. If pure sodium chloride is used, a metallic sodium emulsion forms in the molten NaCl which is impossible to separate. Therefore, one option is to use a mixture of NaCl (42%) and CaCl2 (58%).
The anode reaction is:
2Cl− → Cl2 (g) + 2e−
The cathode reaction is:
2Na+ + 2e− → 2Na (l)
for an overall reaction of
2Na+ + 2Cl− → 2Na (l) + Cl2 (g)
The calcium does not enter into the reaction because its reduction potential of −2.87 volts is lower than that of sodium, which is −2.71 volts. Hence the sodium ions are reduced to metallic form in preference to those of calcium. If the electrolyte contained only calcium ions and no sodium, calcium metal would be produced as the cathode product (which indeed is how metallic calcium is produced).
Both the products of the electrolysis, sodium metal and chlorine gas, are less dense than the electrolyte and therefore float to the surface. Perforated iron baffles are arranged in the cell to direct the products into separate chambers without their ever coming into contact with each other.
Although theory predicts that a potential of a little over 4.07 volts should be sufficient to cause the reaction to go forward, in practice potentials of up to 8 volts are used. This is done in order to achieve useful current densities in the electrolyte despite its inherent electrical resistance. The overvoltage and consequent resistive heating contributes to the heat required to keep the electrolyte in a liquid state.
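As a rough illustration (numbers taken from the figures above, not from a plant datasheet), the theoretical minimum electrical energy per kilogram of sodium follows from Faraday's laws, since each Na+ ion takes one electron:

```python
F = 96485.0           # Faraday constant, C/mol
E_MIN = 4.07          # approximate decomposition voltage of molten NaCl, V
M_NA = 22.99e-3       # molar mass of sodium, kg/mol

energy_per_mol = F * E_MIN              # J per mol of Na (one electron per ion)
energy_per_kg = energy_per_mol / M_NA   # J per kg of sodium metal
kwh_per_kg = energy_per_kg / 3.6e6      # convert joules to kilowatt-hours
```

This gives roughly 4.7 kWh per kilogram of sodium at the theoretical minimum; at the roughly 8 volts actually applied, the electrical consumption is correspondingly about twice as large, with the excess appearing as resistive heating of the melt.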
The Downs' process also produces chlorine as a byproduct, although chlorine produced this way accounts for only a small fraction of chlorine produced industrially by other methods.
References
Chemical processes
Electrolytic cells
Metallurgical processes
A cured-in-place pipe (CIPP) is a trenchless rehabilitation method used to repair existing pipelines. It is a jointless, seamless pipe lining within an existing pipe. As one of the most widely used rehabilitation methods, CIPP has applications in sewer, water, gas, chemical and district heating pipelines ranging in diameter from 0.1 to 2.8 meters (2–110 inches).
The process of CIPP involves inserting and running a felt lining into a preexisting pipe that is the subject of repair. Resin within the liner is then exposed to a curing element to harden it and make it attach to the inner walls of the pipe. Once fully cured, the lining now acts as a new pipeline.
Process
Installation
A resin-impregnated felt tube made of polyester, fiberglass cloth, spread tow carbon fiber or another resin-impregnable substance is inserted or pulled through a damaged pipe, usually from an upstream access point such as a manhole or excavation. (It is possible to insert the liner from a downstream access point, but this is more risky.) CIPP is considered a trenchless technology, meaning little to no digging is typically required, which makes it potentially more cost-effective and less disruptive than traditional "dig and replace" pipe repair methods. The liner is inserted using water or air pressure, applied via pressure vessels, scaffolds or a "chip unit".
Curing
Cured-in-place pipes require that their resin be cured after installation to achieve full strength, by hot water or steam or, if a fiberglass tube is used, by UV light. As the resin cures, a tight-fitting, jointless and corrosion-resistant replacement pipe is formed. Service laterals, where present, can be reconnected from within the newly-formed larger-diameter pipe, by cutting replacement openings using robotically controlled cutting devices, then sealed using specially-designed CIPP materials referred to as 'top-hats'. The resins used are typically polyester for mainline lining and epoxy for lateral lines.
Since all resins shrink (epoxy resins shrink far less than poly and vinyl ester versions) and because it is impossible to bond to a sewer line with fats, oils, and grease present, an annular space is always created around the new CIPP liner, between it and the host pipe. Some spaces are large enough to require additional work to prevent water from moving along them and re-entering the waste stream, for example: insertion of hydrophilic material which swells to fill the void; lining of the entire connection and host pipe with continuous repair (YT repair) gaskets; and point repairs placed at the ends of the host pipe.
History
Conception
In 1971, Eric Wood implemented the first cured-in-place pipe technology in London, England. He called the CIPP process "Insituform", derived from the Latin for "form in place". Wood applied for U.S. patent no. 4009063 on January 29, 1975. The patent was granted February 22, 1977, and was commercialized by Insituform Technologies until it entered the public domain on February 22, 1994.
Implementation
The process began to be used in residential and commercial applications in Japan and Europe in the 1970s and for residential application in the United States in the 1980s.
Advantages
CIPP does not typically require excavation to rehabilitate a leaking or structurally unsound pipeline. (Depending upon design considerations an excavation may be made, but the liner is often installed through a manhole or other existing access point.) The cured liner has a smooth, jointless interior.
Disadvantages and limitations
Except for very common sizes, liners are not usually stocked and must be made specifically for each project. CIPP requires bypassing the existing pipeline while the liner is being installed, which may be inconvenient as, depending on diameter and system used (steam, water or UV), curing may take from one to 30 hours and must be carefully monitored, inspected, and tested. Obstructions in the existing pipeline, such as protruding laterals, must be removed prior to installation. CIPP is not always cheaper than similar methods such as shotcrete, thermoformed pipe, close-fit pipe, spiral-wound pipe and sliplining. The CIPP process may release chemical agents into the surrounding environment. The most common liner material, a non-woven felted fabric, does not go around bends well without wrinkling, nor does it maintain roundness around corners. Once a line is repaired with the CIPP method, it can no longer be cleaned using cables or snakes; instead, high-pressure water blasting (hydrojetting) must be used.
Quality assurance and quality control
Testing of CIPP installations is required to confirm that the materials used comply with the site and engineering requirements. Since ground and ambient installation conditions as well as crew skills can affect the success or failure of a cure cycle, testing is performed by 3rd party laboratories in normal cases and should be requested by the owner.
Samples should be representative of the installation environment, since the liner is installed in the ground; wet sandbags should be placed around the restraint from which the test sample will be extracted. As with any specimen preparation for a materials test, it is important not to affect the material properties during the specimen preparation process. Research has shown that test specimen selection can have a significant effect on CIPP flexural testing results. A technical presentation at the CERIU INFRA 2012 Infrastructures Municipales Conference in Montreal outlined the results of a research project which examined the effects of test specimen preparation on measured flexural properties. Test specimens for ASTM D790 flexural testing must meet the dimensional tolerances of ASTM D790.
The North American CIPP industry has standardized around the standard ASTM F1216 which uses test specimens oriented parallel with the pipe axis, while Europe uses the standard EN ISO 11296–4 with test specimens oriented in the hoop direction. Research has shown that flexural testing results from the same liner material are usually lower when determined using EN ISO 11296-4 as compared to ASTM F1216.
Environmental, public health, and infrastructure incidents
Testing conducted by the Virginia Department of Transportation and university researchers from 2011 to 2013 showed that some CIPP installations can cause aquatic toxicity. A list of environmental, public health, and infrastructure incidents caused by CIPP installations as of 2013 was published by the Journal of Environmental Engineering. In 2014, university researchers published a more detailed study in Environmental Science & Technology that examined CIPP condensate chemical and aquatic toxicity as well as chemical leaching from stormwater culvert CIPP installations in Alabama. This report described additional water and air environmental contamination incidents not previously reported elsewhere.
In 2017, CALTRANS-backed university researchers examined water impacts caused by CIPPs used for stormwater culvert repairs.
In April 2018, a study funded by six state transportation agencies (1) compiled and reviewed CIPP-related surface water contamination incidents from publicly reported data; (2) analyzed CIPP water quality impacts; (3) evaluated current construction practices for CIPP installations as reported by US state transportation agencies; and (4) reviewed current standards, textbooks, and guideline documents. In 2019, another study funded by these agencies identified actions to reduce chemical release from ultraviolet light (UV) CIPP manufacturing sites.
With proper engineering design specifications, contractor installation procedures, and construction oversight, many of these problems can likely be prevented.
Worker and public safety concerns
On July 26, 2017, Purdue University researchers published a peer-reviewed study in the American Chemical Society's journal Environmental Science & Technology Letters about material emissions collected and analyzed from steam cured CIPP installations in Indiana and California. To further make the study accessible to the public and CIPP worker community, the study authors established a website and made their publication open-access, freely available for download. Purdue University professors also commented on their study and called for changes to the process to better protect workers, the public, and environment from harm.
On August 25, 2017, the National Association of Sewer Service Companies, Incorporated (NASSCO), a 501(c)(6) nonprofit dedicated to "improving the success rate of everyone involved in the pipeline rehabilitation industry through education, technical resources, and industry advocacy", posted a document on its website raising several concerns and unanswered questions regarding the study and its messaging. NASSCO then sent a letter to the researchers, who responded.
On September 22, 2017, NASSCO announced it would fund and coordinate an assessment of previous data and studies, and an additional study and analysis of possible risks related to the CIPP installation and curing process. Later that month, NASSCO posted a request for proposals for a “review of recent publication(s) that propose the presence of organic chemicals and other available literature relating to emissions associated with the CIPP installation process, and a scope of services for additional sampling and analysis of emissions during the field installation of CIPP using the steam cure process.” The request specified that the project would review studies conducted by the Virginia Department of Transportation, California Department of Transportation, and Purdue University.
At the federal level, on September 26, 2017, the US Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) published a Science Blog contribution regarding inhalation and dermal exposure risks associated with sanitary sewer, storm sewer, and drinking water pipe repairs. At the state level, the California Department of Public Health issued a notice that same month to municipalities and health officials about CIPP installations. One of several statements in this document was that "municipalities, engineers, and contractors should not tell residents the exposures are safe."
On October 5, 2017, the National Environmental Health Association sponsored a webinar about the hazards for workers and residents associated with cured-in-place pipe repair. Several questions about the webinar and the study have been raised, and feedback has been noted by industry members.
On October 25, 2017, a 22-year-old CIPP worker died at a sanitary sewer worksite in Streamwood, Illinois. The U.S. Occupational Safety and Health Administration (OSHA) completed its investigation in April 2018 and issued the company a penalty. Chemical exposure was a contributing factor in the worker fatality.
In 2018, NASSCO funded a study on chemical emissions from six CIPP installations; the study was completed in 2020. A few locations and worker tasks were identified as being of potential chemical-exposure concern, and worksite recommendations were provided.
In 2019 and 2021, the U.S. National Institute for Occupational Safety and Health published safety evaluations of UV-, steam- and hot-water-cured CIPP worksites. A UV CIPP company was the first to engage NIOSH; study results indicated several worker chemical exposure conditions that exceeded recommended limits, and the agency recommended several actions to reduce worker exposures. Two years later, NIOSH published results of a steam and hot-water CIPP worksite study, with similar findings and recommendations.
In 2020, the Florida Department of Health issued their own factsheet about CIPP to municipalities and health departments. The document explained the CIPP process, health concerns, chemicals used and created, how persons living nearby can protect themselves from exposure, and biomonitoring and blood testing considerations after exposure.
In 2022, researchers made several additional discoveries. In the Journal of Hazardous Materials, a study funded by the National Institute of Environmental Health Sciences and National Science Foundation revealed that CIPP pressure makes blowback from sinks and toilets in nearby buildings possible, and provided recommendations for emergency responders and health officials. Later that year, a study in the Journal of Cleaner Production revealed that modifying the initiator loading, an ingredient in thermally cured CIPP resins, could reduce the pollution potential of the process by 33-42%. The study also found that a non-styrene CIPP resin contained styrene due to handling at the resin processing facility. In October, researchers discovered that steam-based CIPP creates and emits nanoplastics into the air during plastic manufacture. Results of these investigations help to better understand the occupational safety, bystander safety, and environmental pollution risks associated with current practices, and to improve technology and practice to reduce undesirable consequences.
See also
Hobas
References
External links
New research on CIPP published by a scientific journal
Information on U.S. Patent no. 4009063
Related information on CIPP patents
Trenchless technology: Wayback Machine
How CIPP is installed
North American & European Test Methods - Impact on CIPP Flexural Properties
Water quality and aquatic toxicity impacts of CIPP sites from the ASCE Journal of Environmental Engineering: Whelton, A., Salehi, M., Tabor, M., Donaldson, B., and Estaba, J. (2013). ”Impact of Infrastructure Coating Materials on Storm-Water Quality: Review and Experimental Study.” J. Environ. Eng., 139(5), 746–756.
CIPP Worker and Public Safety Study - Chemical Air Emissions: Whelton, Sendesi S.M.T., Ra K., Conkling E.N., Boor B.E., Nuruddin M., Howarter J.A., Youngblood Y.P., Kobo L.M., Shannahan J.H., Jafvert C.T., Whelton A.J. (2017). ”Worksite Chemical Air Emissions and Worker Exposure during Sanitary Sewer and Stormwater Pipe Rehabilitation Using Cured-in-Place-Pipe (CIPP).” Environ. Sci. Technol. Letters, DOI: 10.1021/acs.estlett.7b00237
CIPP Worker and Public Safety Website
Piping
Trenchless technology

Lunar Explorers Society

The Lunar Explorers Society is an organisation dedicated to achieving permanent presence of humanity on the Moon. The Society is open to all people in the world with an interest in lunar exploration. It hopes to bring the best of humanity to the Moon, and to bring the benefits of the Moon to all people on Earth.
The last human mission to the Moon was in 1972, and though the first exploration efforts provided a huge scientific return, no further human exploration of the Moon has been done. However, several robotic lunar exploration missions have been conducted since the beginning of the 1990s. These missions fuelled the desire to return to the Moon among many lunar enthusiasts, and this was the background for the establishment of the Lunar Explorers Society in 2000.
The founding members saw the need for an organisation where the members could share knowledge, join forces and pursue their ultimate goal: To establish a permanent human presence on the Moon to the benefit of all people on Earth.
Objectives
To support the establishment of a permanent human presence on the Moon
To promote international cooperation between scientists working with Lunar exploration by providing a neutral platform for their discussions
To raise awareness of what could be achieved by returning to the Moon through educational and outreach activities
To promote the peaceful and fair use of the resources available on the Moon, to the benefit of mankind
The Young Lunar Explorers Award
The Young Lunar Explorers Award is presented to a candidate who has been instrumental in promoting lunar exploration among young lunar explorers once per year, at the annual International Conference on the Exploration and Utilisation of the Moon (ICEUM). The awards have been given to the following winners:
2008: The Google Lunar X Prize Foundation
2007: The Lunar Explorers Society
2005: The SSETI Express team
2004: International Space University (ISU)
History
The Lunar Explorers Society was founded 14 July 2000, during the 4th International Conference on the Exploration and Utilisation of the Moon (ICEUM4). There, the 156 ICEUM4 participants signed the founding declaration of the Lunar Explorers Society. The International Lunar Exploration Working Group (ILEWG) has supported the Lunar Explorers Society since the beginning.
External links
Lunar Explorers Society web site
Moon Society
Lunarpedia
Exploration of the Moon
Space advocacy organizations
Scientific organizations established in 2000

Low-carbon economy

A low-carbon economy (LCE) is an economy which absorbs as much greenhouse gas as it emits. Greenhouse gas (GHG) emissions due to human activity are the dominant cause of observed climate change since the mid-20th century. There are many proven approaches for moving to a low-carbon economy, such as encouraging renewable energy transition, energy conservation, and electrification of transportation (e.g. electric vehicles). One example is the zero-carbon city.
Shifting from high-carbon economies to low-carbon economies on a global scale could bring substantial benefits for all countries. It would also contribute to climate change mitigation.
Definition and terminology
There are many synonyms or similar terms in use for low-carbon economy which stress different aspects of the concept, for example: green economy, sustainable economy, carbon-neutral economy, low-emissions economy, climate-friendly economy, decarbonised economy.
The term carbon in low-carbon economy is short hand for all greenhouse gases.
The UK Office for National Statistics published the following definition in 2017: "The low carbon economy is defined as economic activities that deliver goods and services that generate significantly lower emissions of greenhouse gases; predominantly carbon dioxide."
Rationale and aims
GHG emissions due to human activity are the dominant cause of observed climate change since the mid-20th century. Continued emission of greenhouse gases will cause long-lasting changes around the world, increasing the likelihood of severe, pervasive, and irreversible effects for people and ecosystems.
Nations may seek to become low-carbon or decarbonised economies as a part of a national climate change mitigation strategy. A comprehensive strategy to mitigate climate change is through carbon neutrality.
Methods
Achieving a low-carbon economy involves reducing greenhouse gas emissions in all sectors that produce greenhouse gases, for example energy, transportation, industry, and agriculture. The literature often speaks of a transition from a high-carbon economy to a low-carbon economy. This transition should take place in a just manner (this is termed just transition).
There are many strategies and approaches for moving to a low-carbon economy, such as encouraging renewable energy transition, efficient energy use, energy conservation, electric vehicles, heat pumps, and climate-smart agriculture. This requires for example suitable energy policies, financial incentives (e.g. emissions trading, carbon tax), individual action on climate change, business action on climate change.
Actions taken by countries
On the international scene, the most prominent early step in the direction of a low-carbon economy was the signing of the Kyoto Protocol, which came into force in 2005, under which most industrialized countries committed to reduce their carbon emissions.
OECD countries could learn from each other and follow the examples of leading countries in particular sectors: Switzerland for the energy sector, the UK for industry, the Netherlands for transport, South Korea for agriculture, and Sweden for the building sector.
Co-benefits
The main benefit of a transition to low-carbon economies is that it would contribute towards climate change mitigation. Apart from that, other co-benefits can also be identified: Low-carbon economies present multiple benefits to ecosystem resilience, trade, employment, health, energy security, and industrial competitiveness.
During the green transition, workers in carbon-intensive industries are more likely to lose their jobs, and regions with higher shares of employment in carbon-intensive industries have more jobs at risk. Employment opportunities created by the green transition are associated with the use of renewable energy sources and with building activity for infrastructure improvements and renovations.
Low emission industrial development and resource efficiency can offer many opportunities to increase the competitiveness of economies and companies. According to the Low Emission Development Strategies Global Partnership (LEDS GP), there is often a clear business case for switching to lower emission technologies, with payback periods ranging largely from 0.5–5 years, leveraging financial investment.
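The payback periods cited can be reproduced with the usual undiscounted payback formula; the sketch below uses purely illustrative cost figures, not values from the LEDS GP analysis.

```python
def simple_payback_years(investment, annual_savings):
    """Simple (undiscounted) payback period: upfront cost / yearly savings."""
    return investment / annual_savings

# Illustrative retrofit: $50,000 upfront that saves $20,000/yr in energy costs
payback = simple_payback_years(50_000, 20_000)  # 2.5 years, within the 0.5-5 year range cited
```

A fuller appraisal would discount future savings, but for paybacks under five years the simple figure is a common first screen.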
Energy aspects
Low-carbon electricity
Nuclear power
As of 2021, the expansion of nuclear energy as a method of achieving a low-carbon economy has varying degrees of support. Agencies and organizations that believe decarbonization is not possible without some nuclear power expansion include the United Nations Economic Commission for Europe, the International Energy Agency (IEA), and the International Atomic Energy Agency. The IEA believes that widespread decarbonization must occur by 2040 in order to mitigate the adverse effects of climate change and that nuclear power must play a role.
Energy transition
Indices for comparison
The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources is expected to strengthen.
See also
References
Sources
Economics and climate change
Energy economics
Sustainable technologies
Alternative energy economics
Renewable energy economics

Sri Sri University

Sri Sri University is an Indian private university based in Cuttack, Odisha, established on 26 December 2009. The university became operational in 2012. At present, the university offers courses in management, architecture, humanities, agriculture, health and wellness, science, literature, osteopathy, and performing & fine arts.
Founder
Sri Sri Ravi Shankar is the founder of Sri Sri University. Popularly known as Sri Sri, he has been an advocate of India's ancient traditions, scientific knowledge and spiritual heritage.
History
On 22 February 2012, Sri Sri University was notified by the Higher Education Department of the Government of Odisha as a legal entity after clearing a High Power Committee (HPC) scrutiny of its infrastructure, academic, regulatory, financial and manpower preparedness. The notification has featured in an extraordinary issue of Government of Odisha Gazette.
Statutory approvals
The university has received approval from the Government of Odisha, the national University Grants Commission, All India Council for Technical Education, Indian Council of Agricultural Research and Council of Architecture to offer its degree programmes. As an independent university, Sri Sri University is also authorised to offer new courses, as per the Sri Sri University Act 2009.
Awards
Sri Sri University's focus on sustainable living, its emphasis on a synthesis of the ancient and the modern, and its innovation in the teaching-learning process have received recognition:
Environment Friendly Green Campus Award by District Administration, Cuttack, 2016
'Prakruti Mitra Award 2016' by Minister – Forest & Environment
'Sri Sri University as the Trend setting synthesizer of traditional and global outlook' by ASSOCHAM, 2017
Best Innovative University Award-Second National Education Summit and Educational Excellence Awards 2017
Nominated for Non-Violence Award by Non-Violence Project India Foundation, 2018
‘Green U Award 2019’ and ‘Inspiring Climate Educator Award 2019’ for bringing Nature into Higher Education at the National Green Mentors Conference, Ahmedabad.
Campus
The campus houses the admin block, academic block, computer lab, library, seminar halls, practise halls, amphitheatre, hostels, Vidya (skill training centre), dining hall and cafeteria. The academic block houses classrooms, tutorial centers and a student activity centre. The university offers sports facilities including a gym, basketball court, lawn tennis court, volleyball court and a cricket ground.
Student clubs
There are currently 16 clubs dedicated to different activities.
Vocational training
The university has started a vocational program in collaboration with Larsen & Toubro under its corporate social responsibility commitments. All students of this vocational program are employed by L&T in its domestic and international projects. Initially the university offered only bar-bending and masonry skill programs, but programs in driving, tailoring and other vocations have since been added.
Notable alumni
Raghavan Seetharaman achieved a doctorate by submitting a Research Thesis on 'Green Banking and Sustainability' at Sri Sri University. He has also received Pravasi Bharatiya Samman Awards-2017, the highest honour conferred on overseas Indians, from the President of India.
NIRF Rankings
In 2023, Sri Sri University was ranked 51 out of 312 institutions in India (2nd in Odisha) in the Innovation category of the National Institutional Ranking Framework.
IIRF Rankings
IIRF ranked Sri Sri University 11 out of 171 colleges in India overall in 2023, and 8 out of 161 in 2022. In architecture, IIRF ranked the university 37 out of 45 colleges in 2024 and 36 out of 44 in 2023.
References
External links
Sri Sri University
All India Council for Technical Education
Educational institutions established in 2009
Engineering colleges in Odisha
Education in Cuttack
2009 establishments in Orissa
Architecture schools in India
Architectural education

Vertical resistance

The term vertical resistance, commonly used in the context of plant selection, was first used in 1963 by James Edward Van der Plank to describe single-gene resistance. This contrasted with the term horizontal resistance, which was used to describe many-gene resistance.
In 1976, Raoul A. Robinson adapted the original definition of vertical resistance and argued that in vertical resistance there were individual genes for resistance in the host plant and also individual genes for parasitic ability in the parasite. This phenomenon is known as the gene-for-gene relationship, and it was the defining character of vertical resistance.
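The gene-for-gene relationship can be sketched as a simple matching rule: infection fails whenever the host carries a resistance gene at a locus where the pathogen still carries the matching gene for parasitic ability (avirulence). The locus numbering below is a hypothetical illustration.

```python
def infection_succeeds(host_r_loci, pathogen_avr_loci):
    """Gene-for-gene model: the plant resists (infection fails) whenever any
    host resistance gene R matches a pathogen avirulence gene Avr at the
    same locus; otherwise infection succeeds."""
    return not (set(host_r_loci) & set(pathogen_avr_loci))

# Hypothetical loci numbered 1, 2, ...: a host carrying R at locus 1
resists_race = not infection_succeeds({1}, {1, 2})  # race carries Avr1 -> recognized
escapes_host = infection_succeeds({1}, {2})         # race has lost Avr1 -> infects
```

This matching structure is why vertical resistance can be overcome by a single change in the parasite: dropping one avirulence gene defeats the corresponding resistance gene entirely.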
References
Phytopathology
Molecular biology

Horizontal resistance

In genetics, the term horizontal resistance was first used by J. E. Vanderplank to describe many-gene resistance, which is sometimes also called generalized resistance. This contrasts with the term vertical resistance which was used to describe single-gene resistance. Raoul A. Robinson further refined the definition of horizontal resistance. Unlike vertical resistance and parasitic ability, horizontal resistance and horizontal parasitic ability are entirely independent of each other in genetic terms.
In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Those with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance and their seed is stored and bred back up to sufficient volume for further testing. The hope is that in these remaining plants are multiple types of partial-resistance genes, and by crossbreeding this pool back on itself, multiple partial resistance genes will come together and provide resistance to a larger variety of pathogens.
Successive rounds of breeding for horizontal resistance proceed in a more traditional fashion, selecting plants for disease resistance as measured by yield. These plants are exposed to native regional pathogens, and given minimal assistance in fighting them.
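The recurrent-selection scheme described above can be sketched as a toy simulation: each plant carries several loci with partial-resistance alleles, the most resistant plants are kept each generation, and their pooled allele frequencies seed the next generation. Population size, locus count, and selection fraction are illustrative assumptions, not values from any breeding program.

```python
import random

def recurrent_selection(n_plants=200, n_loci=10, p0=0.1,
                        keep_frac=0.2, generations=8, seed=1):
    """Toy model of breeding for horizontal (polygenic) resistance: plants
    with the most partial-resistance alleles are kept and intermated each
    generation. Returns the mean allele count per plant over time."""
    rng = random.Random(seed)
    pop = [[1 if rng.random() < p0 else 0 for _ in range(n_loci)]
           for _ in range(n_plants)]
    history = []
    for _ in range(generations):
        history.append(sum(sum(p) for p in pop) / n_plants)
        pop.sort(key=sum, reverse=True)
        parents = pop[: max(2, int(keep_frac * n_plants))]
        # per-locus resistance-allele frequency among the selected parents
        freqs = [sum(p[i] for p in parents) / len(parents)
                 for i in range(n_loci)]
        # next generation drawn from the parents' allele frequencies
        pop = [[1 if rng.random() < freqs[i] else 0 for i in range(n_loci)]
               for _ in range(n_plants)]
    return history

trend = recurrent_selection()  # mean resistance rises across generations
```

The rising trend illustrates the hope stated above: repeated partial selection accumulates many small-effect resistance alleles rather than fixing a single major gene.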
References
Phytopathology
Molecular biology

Hill–Robertson effect

In population genetics, the Hill–Robertson effect, or Hill–Robertson interference, is a phenomenon first identified by Bill Hill and Alan Robertson in 1966. It provides an explanation as to why there may be an evolutionary advantage to genetic recombination.
Explanation
In a population of finite but effective size which is subject to natural selection, varying extents of linkage disequilibria (LD) will occur. These can be caused by genetic drift or by mutation, and they will tend to slow down the process of evolution by natural selection.
This is most easily seen by considering the case of disequilibria caused by mutation:
Consider a population of individuals whose genome has only two genes, a and b. If an advantageous mutant (A) of gene a arises in a given individual, that individual's genes will through natural selection become more frequent in the population over time. However, if a separate advantageous mutant (B) of gene b arises before A has gone to fixation, and happens to arise in an individual who does not carry A, then individuals carrying B and individuals carrying A will be in competition. If recombination is present, then individuals carrying both A and B (of genotype AB) will eventually arise. Provided there are no negative epistatic effects of carrying both, individuals of genotype AB will have a greater selective advantage than aB or Ab individuals, and AB will hence go to fixation. However, if there is no recombination, AB individuals can only occur if the latter mutation (B) happens to occur in an Ab individual. The chance of this happening depends on the frequency of new mutations, and on the size of the population, but is in general unlikely unless A is already fixed, or nearly fixed. Hence one should expect the time between the A mutation arising and the population becoming fixed for AB to be much longer in the absence of recombination. Hence recombination allows evolution to progress faster. [Note: This effect is often erroneously equated with "clonal interference", which happens when A and B mutations arise in different wild type (ab) individuals and describes the ensuing competition between Ab and aB lineages.]
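The verbal argument above can be illustrated with a minimal deterministic two-locus sketch of the repulsion scenario: A and B arise in different backgrounds, so the AB haplotype starts at frequency zero, and without recombination it can never be formed. The selection coefficient and starting frequencies are illustrative assumptions, and the deterministic recursion is a simplified stand-in for the finite-population argument.

```python
def two_locus_sweep(r, s=0.05, generations=500):
    """Deterministic haploid two-locus model. Beneficial mutants A and B
    start in repulsion (no AB haplotype at all). Each generation applies
    selection (multiplicative fitness, 1+s per beneficial allele) and then
    recombination at rate r. Returns the final frequency of AB."""
    ab, Ab, aB, AB = 0.8, 0.1, 0.1, 0.0
    for _ in range(generations):
        w_ab, w_Ab, w_aB, w_AB = 1.0, 1 + s, 1 + s, (1 + s) ** 2
        wbar = ab * w_ab + Ab * w_Ab + aB * w_aB + AB * w_AB
        ab, Ab, aB, AB = (ab * w_ab / wbar, Ab * w_Ab / wbar,
                          aB * w_aB / wbar, AB * w_AB / wbar)
        # recombination erodes linkage disequilibrium D = x_AB*x_ab - x_Ab*x_aB
        D = AB * ab - Ab * aB
        ab, AB = ab - r * D, AB - r * D
        Ab, aB = Ab + r * D, aB + r * D
    return AB

no_recomb = two_locus_sweep(r=0.0)    # AB is never formed: Ab and aB compete
free_recomb = two_locus_sweep(r=0.5)  # AB is generated and sweeps to high frequency
```

With r = 0 the AB frequency stays at exactly zero, while with free recombination it approaches fixation, which is the sense in which recombination lets evolution progress faster.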
There tends to be a correlation between the rate of recombination and the likelihood that the preferred haplotype (labelled AB in the example above) goes to fixation in a population.
Joe Felsenstein (1974) showed this effect to be mathematically identical to the Fisher–Muller model proposed by R. A. Fisher (1930) and H. J. Muller (1932), although the verbal arguments were substantially different. Although the Hill-Robertson effect is usually thought of as describing a disproportionate buildup of fitness-reducing (relative to fitness increasing) LD over time, these effects also have immediate consequences for mean population fitness.
See also
Clonal interference
Genetic hitchhiking
References
Genetics in the United Kingdom
Population genetics
Evolutionary biology

Dribbleware

Dribbleware, in the context of computer software, is a product for which patches are frequently released. The term usually has negative connotations, and can refer to software which has not been tested properly prior to release, or for which planned features could not be implemented.
Dribbleware is not necessarily due to poor programming; it can be indicative of a product whose development was rushed to meet a release date.
References
Software industry

Prompt gamma neutron activation analysis

Prompt-gamma neutron activation analysis (PGAA) is a very widely applicable technique for determining the presence and amount of many elements simultaneously in samples ranging in size from micrograms to many grams. It is a non-destructive method, and the chemical form and shape of the sample are relatively unimportant. Typical measurements take from a few minutes to several hours per sample.
The technique can be described as follows. The sample is continuously irradiated with a beam of neutrons. The constituent elements of the sample absorb some of these neutrons and emit prompt gamma rays which are measured with a gamma ray spectrometer. The energies of these gamma rays identify the neutron-capturing elements, while the intensities of the peaks at these energies reveal their concentrations. The amount of analyte element is given by the ratio of count rate of the characteristic peak in the sample to the rate in a known mass of the appropriate elemental standard irradiated under the same conditions.
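The comparator relation described above — analyte mass obtained from the ratio of characteristic-peak count rates against a standard of known mass — can be sketched directly. The count rates and standard mass below are illustrative numbers, and the calculation assumes identical irradiation and counting conditions for sample and standard.

```python
def analyte_mass(sample_count_rate, standard_count_rate, standard_mass_g):
    """Comparator quantification: analyte mass equals the standard's mass
    scaled by the ratio of the characteristic peak count rates, assuming
    both were irradiated and counted under the same conditions."""
    return standard_mass_g * sample_count_rate / standard_count_rate

# Illustrative: 1.25 counts/s in the sample's characteristic peak versus
# 5.0 counts/s from a 10 mg elemental standard -> 2.5 mg of analyte
mass_mg = analyte_mass(1.25, 5.0, 10.0)
```

In practice each element present contributes its own set of peaks, so one such ratio is evaluated per element against the appropriate standard.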
Typically, the sample will not acquire significant long-lived radioactivity, and the sample may be removed from the facility and used for other purposes. One of the typical applications of PGAA is an online belt elemental analyzer or bulk material analyzer used in cement, coal and mineral industries.
References
External links
https://www.nist.gov/manuscript-publication-search.cfm?pub_id=903948
Video on the PGAA-NIPS at the Budapest Neutron Centre
Analytical chemistry
Neutron

Eugene Guth

Eugene Guth (August 21, 1905 – July 5, 1990) was a Hungarian-American physicist who made contributions to polymer physics and to nuclear and solid state physics. He was awarded a Ph.D. in theoretical physics by the University of Vienna in 1928. He was then a postdoctoral research associate, supported by the Austrian–German Science Foundation, with Wolfgang Pauli at the Federal Institute of Technology (ETH) Zurich and with Werner Heisenberg at the University of Leipzig from 1930 to 1931. He was a professor at the University of Vienna (1932–1937) and at the University of Notre Dame from 1937 to 1955, and was at Oak Ridge National Laboratory from 1955 to 1971.
Discoveries
He is noted for several pioneering discoveries that advanced the field of polymer physics, which was recognised by the award of the Bingham Medal for rheology in 1965. These included the treatment of the flexible, randomly kinked molecule in Brownian motion of polymers; the explanation of the entropic origin of the elastic force; and the Kinetic Theory of Rubber Elasticity.
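The entropic origin of the elastic force can be illustrated with the textbook Gaussian (freely-jointed) chain result f = 3k_BT r / (N b²): the retractive force is linear in extension and, characteristically of rubber elasticity, proportional to absolute temperature. This is the standard kinetic-theory result rather than Guth's original notation, and the chain parameters below are illustrative.

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def gaussian_chain_force(extension_m, n_links, link_length_m, temperature_k):
    """Entropic retractive force of an ideal freely-jointed chain of
    N links of length b stretched to end-to-end distance r:
    f = 3 k_B T r / (N b^2), in newtons."""
    return 3.0 * KB * temperature_k * extension_m / (n_links * link_length_m ** 2)

# Illustrative chain: 1000 links of 0.5 nm, stretched 10 nm
f_300 = gaussian_chain_force(1e-8, 1000, 5e-10, 300.0)
f_360 = gaussian_chain_force(1e-8, 1000, 5e-10, 360.0)  # 20% hotter -> 20% stiffer
```

The proportionality to T is the signature of an entropic (rather than energetic) restoring force — the effect Guth helped explain for rubber networks.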
Aside from establishing the first polymer physics laboratory at an academic institution in America, Guth had an international reputation in physics and polymer science. In 1976, he delivered the first plenary lecture, "Birth and Rise of Polymer Science - Myth and Truth," before the International Symposium on Applied Polymer Science. Two years later, he received the University of Vienna's Distinguished Alumnus Award, and in 1979, he was awarded the Honor Cross of Science and Arts by President Rudolf Kirchschläger of the Republic of Austria. He remained interested in science throughout his life; his last article was published posthumously in 1991 in the Journal of Polymer Science Part B.
Legacy
A book, co-edited by his long-time friend and colleague Professor J. E. (Jim) Mark of the University of Cincinnati, was intended to celebrate Eugene Guth's 85th birthday but was subsequently published as a memorial. The book is entitled "Elastomeric Polymer Networks", Prentice Hall Publishers, 1992. The oval picture to the right is found in the inside preface to that collected-papers volume.
References
External links
Homepage of Dr. Eugene Guth
1905 births
1990 deaths
20th-century American physicists
American nuclear physicists
Polymer scientists and engineers
Rheologists
Oak Ridge National Laboratory people
University of Notre Dame faculty
Academic staff of the University of Vienna
Fellows of the American Physical Society
Austrian emigrants to the United States

Underwater tunnel

An underwater tunnel is a tunnel which is partly or wholly constructed under the sea or a river. They are often used where building a bridge or operating a ferry link is unviable, or to provide competition or relief for existing bridges or ferry links. While short tunnels are often road tunnels which may admit motorized traffic, unmotorized traffic or both, concerns with ventilation lead to the longest tunnels (such as the Channel Tunnel or the Seikan Tunnel) being electrified rail tunnels.
Types of tunnel
Various methods are used to construct underwater tunnels, including an immersed tube and a submerged floating tunnel. The immersed tube method involves steel tube segments that are positioned in a trench in the sea floor and joined together. The trench is then covered and the water pumped from the tunnel. Submerged floating tunnels use the law of buoyancy to remain submerged, with the tunnel attached to the sea bed by columns or tethers, or hung from pontoons on the surface.
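The buoyancy balance behind a submerged floating tunnel can be sketched numerically. The following Python snippet is illustrative only — the tube diameter and structural mass per metre are assumed figures, not taken from any real project — and uses Archimedes' principle to compute the net vertical force per metre of tunnel:

```python
import math

# Rough buoyancy check for a hypothetical submerged floating tunnel.
# All dimensions below are illustrative assumptions, not project data.
RHO_SEAWATER = 1025.0            # kg/m^3
G = 9.81                         # m/s^2

outer_diameter = 12.0            # m (assumed)
structure_mass_per_m = 60_000.0  # kg per metre of tunnel (assumed)

area = math.pi * (outer_diameter / 2) ** 2       # cross-section, m^2
buoyant_force_per_m = RHO_SEAWATER * G * area    # N/m (Archimedes)
weight_per_m = structure_mass_per_m * G          # N/m

net_per_m = buoyant_force_per_m - weight_per_m
print(f"net upward force: {net_per_m/1000:.0f} kN per metre")
# Positive net force: the tube floats and must be tethered to the sea
# bed; negative: it sinks and must be hung from surface pontoons.
```

With these assumed numbers the tube is net-buoyant, which is why the article describes tethers or columns anchoring such tunnels to the sea bed.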
Advantages
Compared with bridges
One advantage of a tunnel over a bridge is that it still allows shipping to pass. A low bridge would need an opening or swing section to allow shipping to pass, which can cause traffic congestion. Conversely, a higher bridge that does allow shipping may be unsightly and opposed by the public. Higher bridges can also be more expensive than lower ones. Bridges can also be closed during harsh weather such as high winds.
Tunneling makes excavated soil available that can be used to create new land (see land reclamation). This was done with the rock excavated for the Channel Tunnel, which was used to create Samphire Hoe.
Compared with ferry links
As with bridges, though more frequently, ferry links are closed during adverse weather. Strong winds or tidal limits may also affect the operation of a ferry crossing. Travelling through a tunnel is significantly quicker than travelling by ferry, as shown by the times for crossing the English Channel (75–90 minutes by ferry versus 21 minutes on the Eurostar). Ferries offer much lower frequency and capacity, and travel times tend to be longer with a ferry than a tunnel. Ferries also usually use fossil fuels, emitting greenhouse gases in the process, while most railway tunnels are electrified. In the Baltic Sea, one of the busiest areas for passenger ferries in the world, sea ice is a problem, causing seasonal disruption or requiring expensive ice-breaking ships. In the Øresund region, the construction of the bridge-tunnel has been cited as enhancing regional integration and giving an economic boom not possible with the previous ferry links. Similar arguments are used by proponents of the Helsinki-Tallinn tunnel in the Talsinki region. There are various issues with the safety of both tunnels and ferries: in the case of tunnels, fire is a particular hazard, with several fires having broken out in the Channel Tunnel. On the other hand, the free surface effect is a significant safety risk for RORO ferries, as seen in the sinking of MS Estonia. Tunnels which exclude dangerous, combustible freight and the fuel or lithium-ion batteries carried aboard motorcars can significantly reduce fire risk.
Disadvantages
Compared with bridges
Tunnels require far higher costs of security and construction than bridges. This may mean that over short distances bridges may be preferred rather than tunnels (for example Dartford Crossing). As stated earlier, bridges may not allow shipping to pass, so solutions such as the Øresund Bridge have been constructed.
Compared with ferry links
As with bridges, ferry links are far cheaper to construct than tunnels, but not to operate. Tunnels also lack the flexibility to be redeployed over different routes as transport demand changes over time: without the cost of a new vessel, the route a ferry serves can easily be changed. However, this flexibility can be a downside for customers who have come to rely on a ferry service only to see it abandoned; fixed infrastructure such as a bridge or tunnel represents a much more concrete commitment to sustained service.
List of notable examples
Proposed
Road
Rogfast tunnel in Norway – construction started in 2018; at 27 km long and 392 m deep, it will be the longest road tunnel and the deepest undersea tunnel in the world.
Karnaphuli Tunnel (Bangabandhu Sheikh Mujibur Rahman Tunnel) in Bangladesh – an underwater expressway tunnel under the Karnaphuli river in the port city of Chittagong.
Underwater Road Tunnel Salamina island-Perama - planned road tunnel in Attica, Greece. Currently at the second stage of the tender from which the concessionaire will be selected.
India-Sri Lanka Sea Tunnel (proposed)
Penang Undersea Tunnel in Malaysia – to open in 2025
Western Harbour Tunnel in Sydney, New South Wales, Australia – to open in 2028
Suðuroyartunnilin in the Faroe Islands – at least 25 km in length, it would connect the islands of Suðuroy and Skúgvoy to Sandoy, which is part of the fixed-link interconnected Faroese "mainland".
Rail
Bohai Strait tunnel in China between Dalian and Yantai (decided, construction to start 'as soon as possible'.)
Helsinki to Tallinn Tunnel under the Gulf of Finland (proposed)
Irish Sea Tunnel (suggested)
Rio de Janeiro Metro Bay Tunnel (Line 3 – Rio de Janeiro-Niterói) (proposed)
Fehmarn Belt Fixed Link between Denmark and Germany (decided, construction started in January 2021)
Mumbai–Ahmedabad high-speed rail corridor of India (decided, construction start November 2018)
Taiwan Strait Tunnel - if built, it would become the longest rail tunnel in the world. Engineering challenges and the unresolved political status of Taiwan make construction unlikely.
Strait of Gibraltar Tunnel - linking Gibraltar or the Spanish mainland to the African mainland. If built, it would most likely become the deepest tunnel ever constructed.
See also
Immersed tube tunnel
Intercontinental and transoceanic fixed links
Shark tunnel
References
Coastal construction | Underwater tunnel | [
"Engineering"
] | 1,186 | [
"Construction",
"Coastal construction"
] |
8,946,593 | https://en.wikipedia.org/wiki/CAFASP | CAFASP, or the Critical Assessment of Fully Automated Structure Prediction, is a large-scale blind experiment in protein structure prediction that studies the performance of automated structure prediction webservers in homology modeling, fold recognition, and ab initio prediction of protein tertiary structures based only on amino acid sequence. The experiment runs once every two years in parallel with CASP, which focuses on predictions that incorporate human intervention and expertise. Compared to related benchmarking techniques LiveBench and EVA, which run weekly against newly solved protein structures deposited in the Protein Data Bank, CAFASP generates much less data, but has the advantage of producing predictions that are directly comparable to those produced by human prediction experts. In recent years, CAFASP has been run essentially as an integrated part of CASP rather than as a separate experiment.
References
External links
Protein Structure Prediction Center
CAFASP4 (2004)
CAFASP5 (2006)
Bioinformatics
Protein methods | CAFASP | [
"Chemistry",
"Engineering",
"Biology"
] | 190 | [
"Biochemistry methods",
"Biological engineering",
"Bioinformatics stubs",
"Protein methods",
"Biotechnology stubs",
"Protein biochemistry",
"Biochemistry stubs",
"Bioinformatics"
] |
8,947,095 | https://en.wikipedia.org/wiki/Solar%20energetic%20particles | Solar energetic particles (SEP), formerly known as solar cosmic rays, are high-energy, charged particles originating in the solar atmosphere and solar wind. They consist of protons, electrons and heavy ions with energies ranging from a few tens of keV to many GeV. The exact processes involved in transferring energy to SEPs are a subject of ongoing study.
SEPs are relevant to the field of space weather, as they are responsible for SEP events and ground level enhancements.
History
SEPs were first detected indirectly, as ground level enhancements, by Scott Forbush in February and March 1942.
Solar particle events
SEPs are accelerated during solar particle events. These can originate either from a solar flare site or from shock waves associated with coronal mass ejections (CMEs). However, only about 1% of CMEs produce strong SEP events.
Two main mechanisms of acceleration are possible: diffusive shock acceleration (DSA, an example of first-order Fermi acceleration) or the shock-drift mechanism. SEPs can be accelerated to energies of several tens of MeV within 5–10 solar radii (5% of the Sun–Earth distance) and can reach Earth in a few minutes in extreme cases. This makes prediction and warning of SEP events quite challenging.
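As a rough order-of-magnitude check of these arrival times, the Python sketch below computes the straight-line 1 AU travel time for protons of a given kinetic energy using relativistic kinematics. This is a lower bound — real SEPs spiral along the curved interplanetary magnetic field, lengthening the path — and the energy values chosen are illustrative:

```python
import math

AU_M = 1.496e11          # Sun-Earth distance, m
C = 2.998e8              # speed of light, m/s
PROTON_REST_MEV = 938.3  # proton rest energy, MeV

def straight_line_travel_minutes(kinetic_energy_mev):
    """Travel time over 1 AU for a proton, ignoring the curved
    interplanetary field-line path (so a lower bound)."""
    gamma = 1.0 + kinetic_energy_mev / PROTON_REST_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return AU_M / (beta * C) / 60.0

for ke in (30.0, 1000.0):   # "tens of MeV" vs an extreme ~1 GeV case
    print(f"{ke:>6.0f} MeV proton: {straight_line_travel_minutes(ke):.1f} min")
```

A ~1 GeV proton arrives in under ten minutes — barely behind the light from the flare itself (8.3 minutes) — which is what makes advance warning of the most energetic events so difficult.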
In March 2021, NASA reported that scientists had located the source of several SEP events, potentially leading to improved predictions in the future.
Research
SEPs are of interest to scientists because they provide a good sample of solar material. Despite the nuclear fusion occurring in the core, the majority of solar material is representative of the material that formed the solar system. By studying the isotopic composition of SEPs, scientists can indirectly measure the material that formed the solar system.
See also
Solar wind
References
Reames D.V., Solar Energetic Particles, Springer, Berlin, (2017a) , doi: 10.1007/978-3-319-50871-9.
External links
Solar Energetic Particles (Rainer Schwenn)
NASA
The Isotopic Composition of Solar Energetic Particles
Solar phenomena | Solar energetic particles | [
"Physics"
] | 422 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
8,947,106 | https://en.wikipedia.org/wiki/Finite%20von%20Neumann%20algebra | In mathematics, a finite von Neumann algebra is a von Neumann algebra in which every isometry is a unitary. In other words, for an operator V in a finite von Neumann algebra, if V*V = I, then VV* = I. In terms of the comparison theory of projections, the identity operator is not (Murray-von Neumann) equivalent to any proper subprojection in the von Neumann algebra.
Properties
Let M denote a finite von Neumann algebra with center Z. One of the fundamental characterizing properties of finite von Neumann algebras is the existence of a center-valued trace. A von Neumann algebra M is finite if and only if there exists a normal positive bounded map τ : M → Z with the properties:
τ(AB) = τ(BA) for A, B ∈ M,
if A ≥ 0 and τ(A) = 0, then A = 0,
τ(A) = A for A ∈ Z,
τ(CA) = Cτ(A) for C ∈ Z and A ∈ M.
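In the simplest example, the full matrix algebra Cn × n is a finite von Neumann algebra whose center consists of scalar multiples of the identity, and its center-valued trace is the normalized trace times the identity, τ(A) = (tr A / n)I — a standard fact assumed here. The NumPy sketch below checks the listed properties numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def tau(a):
    # Center-valued trace on the factor M = C^{n x n}:
    # the normalized trace times the identity matrix.
    return (np.trace(a) / n) * np.eye(n)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
c = 2.5 - 1.0j
C = c * np.eye(n)                 # central elements are scalars c*I

# tau(AB) = tau(BA)
assert np.allclose(tau(A @ B), tau(B @ A))
# tau(C) = C for C in the center
assert np.allclose(tau(C), C)
# tau(CA) = C tau(A) for central C
assert np.allclose(tau(C @ A), C @ tau(A))
# faithfulness on positives: A*A >= 0 and tau(A*A) = 0 would force A = 0
P = A.conj().T @ A
assert np.trace(P).real > 0 or np.allclose(A, 0)
print("all center-valued trace properties verified")
```

This only illustrates the finite-dimensional factor case; in general the center Z can be larger and τ is genuinely center-valued rather than scalar.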
Examples
Finite-dimensional von Neumann algebras
The finite-dimensional von Neumann algebras can be characterized using Wedderburn's theory of semisimple algebras.
Let Cn × n be the n × n matrices with complex entries. A von Neumann algebra M is a self-adjoint subalgebra in Cn × n such that M contains the identity operator I in Cn × n.
Every such M as defined above is a semisimple algebra, i.e. it contains no nilpotent ideals. Suppose a nonzero element A lies in a nilpotent ideal of M. Since A* ∈ M by assumption, A*A, a positive semidefinite matrix, lies in that nilpotent ideal. This implies (A*A)^k = 0 for some k. Since A*A is positive semidefinite, it follows that A*A = 0, i.e. A = 0, a contradiction.
The center of a von Neumann algebra M will be denoted by Z(M). Since M is self-adjoint, Z(M) is itself a (commutative) von Neumann algebra. A von Neumann algebra N is called a factor if Z(N) is one-dimensional, that is, Z(N) consists of multiples of the identity I.
Theorem Every finite-dimensional von Neumann algebra M is a direct sum of m factors, where m is the dimension of Z(M).
Proof: By Wedderburn's theory of semisimple algebras, Z(M) contains a finite orthogonal set of idempotents (projections) {Pi} such that PiPj = 0 for i ≠ j, Σ Pi = I, and
Z(M) = ⊕i Z(M)Pi,
where each Z(M)Pi is a commutative simple algebra. Every complex simple algebra is isomorphic to the full matrix algebra Ck × k for some k. But Z(M)Pi is commutative, therefore one-dimensional.
The projections Pi "diagonalize" M in a natural way: every A ∈ M can be uniquely decomposed as A = Σ APi. Therefore, M = ⊕i MPi.
One can see that Z(MPi) = Z(M)Pi. So Z(MPi) is one-dimensional and each MPi is a factor. This proves the claim.
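The theorem can be illustrated with the smallest nontrivial case: the block-diagonal algebra C2 × 2 ⊕ C3 × 3 inside C5 × 5, whose center is spanned by the two block-identity projections. The NumPy sketch below (an illustrative example, not part of the original proof) verifies the properties of the projections Pi used above:

```python
import numpy as np

# M = block-diagonal algebra C^{2x2} (+) C^{3x3} inside C^{5x5}.
# Its center is spanned by the block-identity projections P1, P2,
# so dim Z(M) = 2 and M is a direct sum of exactly 2 factors.
P1 = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0, 1.0, 1.0])

# Orthogonal idempotents summing to the identity
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)
assert np.allclose(P1 @ P2, np.zeros((5, 5)))
assert np.allclose(P1 + P2, np.eye(5))

rng = np.random.default_rng(1)
def random_element():
    # a generic block-diagonal element of the algebra
    out = np.zeros((5, 5))
    out[:2, :2] = rng.normal(size=(2, 2))
    out[2:, 2:] = rng.normal(size=(3, 3))
    return out

A = random_element()
# P1, P2 commute with every element, hence lie in the center
assert np.allclose(P1 @ A, A @ P1) and np.allclose(P2 @ A, A @ P2)
# the projections "diagonalize" the algebra: A = A P1 + A P2
assert np.allclose(A, A @ P1 + A @ P2)
print("center projections verified; the algebra splits into 2 factors")
```

Here each summand A Pi lives in a full matrix block (a factor), matching the statement that the number of factors equals the dimension of the center.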
For general von Neumann algebras, the direct sum is replaced by the direct integral. The above is a special case of the central decomposition of von Neumann algebras.
Abelian von Neumann algebras
Type II1 factors
References
Linear algebra | Finite von Neumann algebra | [
"Mathematics"
] | 647 | [
"Linear algebra",
"Algebra"
] |
8,947,566 | https://en.wikipedia.org/wiki/DMAPI | Data Management API (DMAPI) is the interface defined in the X/Open document "Systems Management: Data Storage Management (XDSM) API" dated February 1997. XFS, IBM JFS, VxFS, AdvFS, StorNext and IBM Spectrum Scale file systems support DMAPI for Hierarchical Storage Management (HSM).
External links
Systems Management: Data Storage Management (XDSM) API
Overview of IBM Spectrum Scale Data Management API
Open Source XFS Source code with DMAPI Implementation and Test Suite
Data management
Open Group standards | DMAPI | [
"Technology"
] | 116 | [
"Data management",
"Data"
] |
8,948,377 | https://en.wikipedia.org/wiki/Spirit%20gum | Spirit gum is an adhesive, made mostly of SD Alcohol 35-A (the solvent, or "spirit") and resin (the adhesive, or "gum") originally consisting of mastix, used primarily for affixing costume prosthetics such as wigs, merkins, or false facial hair. It has been manufactured since at least the 1870s, and has long been a standard tool in theatrical performances where prosthetic makeup or affixed costuming is used. It was mentioned in the earliest known published theatre makeup manual: "How to make-up; a practical guide for amateurs by Haresfoot and Rouge" in 1877. At the end of the nineteenth century, spirit gum could be procured by performers at theatrical wig makers and it was removed with alcohol, cocoa butter or petroleum jelly.
References
Costume design
Adhesives | Spirit gum | [
"Physics",
"Engineering"
] | 174 | [
"Costume design",
"Materials stubs",
"Materials",
"Design",
"Matter"
] |
8,948,754 | https://en.wikipedia.org/wiki/Pyroptosis | Pyroptosis is a highly inflammatory form of lytic programmed cell death that occurs most frequently upon infection with intracellular pathogens and is likely to form part of the antimicrobial response. This process promotes the rapid clearance of various bacterial, viral, fungal and protozoan infections by removing intracellular replication niches and enhancing the host's defensive responses. Pyroptosis can take place in immune cells and is also reported to occur in keratinocytes and some epithelial cells.
The process is initiated by formation of a large supramolecular complex termed the inflammasome (also known as a pyroptosome) upon intracellular danger signals. The inflammasome activates a different set of caspases as compared to apoptosis, for example, caspase-1/4/5 in humans and caspase-11 in mice. These caspases contribute to the maturation and activation of the pro-inflammatory cytokines IL-1β and IL-18, as well as the pore-forming protein gasdermin D. Formation of pores causes cell membrane rupture and release of cytokines, as well as various damage-associated molecular pattern (DAMP) molecules such as HMGB-1, ATP and DNA, out of the cell. These molecules recruit more immune cells and further perpetuate the inflammatory cascade in the tissue.
However, in pathogenic chronic diseases, the inflammatory response does not eradicate the primary stimulus. A chronic form of inflammation ensues that ultimately contributes to tissue damage. Pyroptosis is associated with diseases including autoinflammatory, metabolic, and cardiovascular diseases, as well as cancer and neurodegeneration. Some examples of pyroptosis include the cell death induced in Salmonella-infected macrophages and abortively HIV-infected T helper cells.
Discovery
This type of inherently pro-inflammatory programmed cell death was named pyroptosis in 2001 by Molly Brennan and Dr. Brad T. Cookson, an associate professor of microbiology and laboratory medicine at the University of Washington. The Greek pyro refers to fire and ptosis means falling. The compound term of pyroptosis may be understood as "fiery falling", which describes the bursting of pro-inflammatory chemical signals from the dying cell. Pyroptosis has a distinct morphology and mechanism compared to those of other forms of cell death. It has been suggested that microbial infection was the main evolutionary pressure for this pathway. Inflammasome formation was initially thought to be required for the induction of pyroptosis, but in 2013, the caspase-11 dependent noncanonical pathway was discovered, suggesting lipopolysaccharides (LPS) can trigger pyroptosis and subsequent inflammatory responses independent of toll-like receptor 4 (TLR4). In 2015, gasdermin D (GSDMD) was identified as the effector of pyroptosis that forms pores in the cell membrane. In 2021, the high-resolution structure of the GSDMD pore was solved by cryo-electron microscopy (cryo-EM). Also in 2021, an additional molecule, NINJ1, was found to be required for the plasma membrane rupture during pyroptosis.
Morphological characteristics
Pyroptosis, as a form of programmed cell death, has many morphological differences as compared to apoptosis. Both pyroptosis and apoptosis undergo chromatin condensation, but during apoptosis, the nucleus breaks into multiple chromatin bodies; in pyroptosis, the nucleus remains intact. In a cell that undergoes pyroptosis, gasdermin pores are formed on the plasma membrane, resulting in water influx.
In terms of mechanism, pyroptosis is activated by inflammatory caspases, including caspase-1/4/5 in humans and caspase-11 in mice. Caspase-8 can act as an upstream regulator of inflammasome activation in context-dependent manners. Caspase-3 activation can take place in both apoptosis and pyroptosis.
Although both pyroptosis and necroptosis are triggered by membrane pore formation, pyroptosis is more controlled. Cells that undergo pyroptosis exhibit membrane blebbing and produce protrusions known as pyroptotic bodies, a process not found in necroptosis. Also, necroptosis works in a caspase-independent fashion. It is proposed that both pyroptosis and necroptosis may act as defence systems against pathogens when apoptotic pathways are blocked.
Mechanism
The innate immune system, by using germ-line encoded pattern recognition receptors (PRRs), can recognize a wide range of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) upon microbe infection. Classic examples of PRRs include toll-like receptors (TLRs) and NOD-like receptors (NLRs). Recognition of PAMPs and DAMPs triggers the formation of multi-protein complex inflammasomes, which then activates caspases to initiate pyroptosis. The inflammasome pathway may be canonical or noncanonical, with the former using caspase-1-activating inflammasomes and the latter using other caspases.
The canonical inflammasome pathway
In the canonical inflammasome pathway, PAMPs and DAMPs are recognised by certain endogenous PRRs. For example, NLR proteins NLRC4 can recognise flagellin and type III secretion system components. NLRP3 is activated by cellular events induced by different PAMPs and DAMPs stimuli. Some non-NLR proteins like absent in melanoma 2 (AIM2) and pyrin can also be activated and form inflammasomes. Also, non-inflammasome-forming PRRs such as TLRs, NOD1 and NOD2 also play important roles in pyroptosis. These receptors upregulate expression of inflammatory cytokines such as IFN α/β, tumour necrosis factor (TNF), IL-6 and IL-12 through NF-κB and MAPK-signaling pathways. In addition, pro-IL-1β and pro-IL-18 are released to be processed by cysteine-mediated caspase-1.
Canonical inflammasomes mostly contain three components: a sensor protein (PRRs), an adaptor (ASC) and an effector (caspase-1). Generally, inflammasome-forming NLR proteins share a similar structure, several leucine-rich repeat (LRR) domains, a central nucleotide-binding and oligomerization domain (NBD) and an N-terminal pyrin domain (PYD). NLRP3, for example, recruits ASC adaptor protein via PYD-PYD interaction. Both pro-caspase-1 and ASC contain a caspase activation and recruitment domain (CARD), and this homotypic CARD-CARD interaction enables autocatalytic cleavage and reassembly of procaspase-1 to form active caspase-1. Alternatively, NLRC4 can directly recruit pro-caspase-1, as it has a CARD instead of a PYD. In addition to their formation as a complex to induce pyroptosis, inflammasomes can also be integral components of larger cell death-inducing complexes called PANoptosomes to induce PANoptosis, another inflammatory form of cell death.
Activated caspase-1 is responsible for cleavage of pro-IL-1β and pro-IL-18. These cytokines, once processed, will be in their biologically active form ready to be released from the host cells. In addition, caspase-1 also cleaves the cytosolic gasdermin D (GSDMD). GSDMD can be cleaved to produce an N-terminal domain (GSDMD-N) and a C-terminal domain (GSDMD-C). GSDMD-N can oligomerize and form transmembrane pores that have an inner diameter of 10-14 nm. The pores allow secretion of IL-1β and IL-18 and various cytosolic content to extracellular space, and they also disrupt the cellular ionic gradient. The resulting increase in osmotic pressure causes an influx of water followed by cell swelling and bursting. Notably, GSDMD-N is autoinhibited by GSDMD C-terminal domain before cleavage to prevent cell lysis in normal conditions. Also, GSDMD-N can only insert itself into the inner membrane with specific lipid compositions, which limits its damage to neighbour cells. Downstream of GSDMD, NINJ1 is now thought to be required for the plasma membrane rupture during pyroptosis.
The noncanonical inflammasome pathway
The noncanonical inflammasome pathway is initiated by binding of lipopolysaccharide (LPS) of gram-negative bacteria directly onto caspase-4/5 in humans and caspase-11 in murines. Binding of LPS onto these caspases promotes their oligomerization and activation. These caspases can cleave GSDMD to release GSDMD-N and trigger pyroptosis. In addition, an influx of potassium ions upon membrane permeabilization triggers activation of NLRP3, which then leads to formation of NLRP3 inflammasome and activation of caspase-1. These processes facilitate the cleavage of GSDMD and promote the maturation and release of pro-inflammatory cytokines.
Caspase-3-dependent cell death pathway
An alternative pathway that links apoptosis and pyroptosis has been recently proposed. Caspase-3, an executioner caspase in apoptosis, can cleave gasdermin E (GSDME) to produce a N-terminal fragment and a C-terminal fragment in a way similar to GSDMD cleavage. When apoptotic cells are not scavenged by macrophages, GSDME expression is then upregulated by p53. GSDME is then activated by caspase-3 to form pores on the cell membrane. It has also been found that GSDME can permeabilise mitochondrial membranes to release cytochrome c, which further activates caspase-3 and accelerates GSDME cleavage. This positive feedback loop ensures that programmed cell death is carried forward.
Clinical relevance
Infection
Pyroptosis acts as a defence mechanism against infection by inducing pathological inflammation. The formation of inflammasomes and the activity of caspase-1 determine the balance between pathogen resolution and disease.
In a healthy cell, caspase-1 activation helps to fight infection caused by Salmonella and Shigella by introducing cell death to restrict pathogen growth. When the "danger" signal is sensed, the quiescent cells will be activated to undergo pyroptosis and produce inflammatory cytokines IL-1β and IL-18. IL-18 will stimulate IFNγ production and initiates the development of TH1 responses. (TH1 responses tend to release cytokines that direct an immediate removal of the pathogen.) The cell activation results in an increase in cytokine levels, which will augment the consequences of inflammation and this, in turn, contributes to the development of the adaptive response as infection progresses. The ultimate resolution will clear pathogens.
In contrast, persistent inflammation will produce excessive immune cells, which is detrimental. If the amplification cycles persist, metabolic disorder, autoinflammatory diseases and liver injury associated with chronic inflammation will occur.
Recently, pyroptosis and downstream pathways were identified as promising targets for treatment of severe COVID-19-associated diseases.
Cerebrovascular disease
Recent studies show that pyroptosis plays a role in the pathophysiology of intracerebral hemorrhage, and mitigating pyroptosis could be an intervention strategy to inhibit the inflammatory response after intracerebral hemorrhage.
Cancer
Pyroptosis, as an inflammation-associated programmed cell death, has wide implications in various cancer types. Principally, pyroptosis can kill cancer cells and inhibit tumour development in the presence of endogenous DAMPs. In some cases, GSDMD can be used as a prognostic marker for cancers. However, prolonged production of inflammatory bodies may facilitate the formation of microenvironments that favour tumour growth. Understanding the mechanisms of pyroptosis and identifying pyroptosis-associated molecules can be useful in treating different cancers.
In gastric cancer cells, presence of GSDMD can inhibit cyclin A2/CDK2 complexes, leading to cell cycle arrest and thus inhibit tumour development. Also, cellular concentration of GSDME increases when gastric cancer cells are treated with certain chemotherapy drugs. GSDME then activates caspase-3 and triggers pyroptotic cell death.
Cervical cancer can be caused by human papillomavirus (HPV) infection. AIM2 protein can recognise viral DNA in the cytoplasm and form the AIM2 inflammasome, which then triggers pyroptosis via the caspase-1-dependent canonical pathway. HPV infection causes the upregulation of sirtuin 1 protein, which disrupts RelB, the transcription factor for AIM2. Knockdown of sirtuin 1 upregulates AIM2 expression and triggers pyroptosis.
Metabolic disorder
The level of expression of NLRP3 inflammasome and caspase-1 has a direct relation to the severity of several metabolic syndromes, such as obesity and type II diabetes mellitus (T2DM). This is because the subsequent production level of IL-1β and IL-18, cytokines that impair the secretion of insulin, is affected by the activity of caspase-1. Glucose uptake level is then diminished, and the condition is known as insulin resistance. The condition is further accelerated by the IL-1β-induced destruction of pancreatic β cells.
Cryopyrinopathies
A mutation in the gene coding of inflammasomes leads to a group of autoinflammatory diseases called cryopyrinopathies. This group includes Muckle–Wells syndrome, cold autoinflammatory syndrome and chronic infantile neurologic cutaneous and articular syndrome, all showing symptoms of sudden fevers and localized inflammation. The mutated gene in such cases is the NLRP3, impeding the activation of inflammasome and resulting in an excessive production of IL-1β. This effect is known as "gain-of-function".
HIV and AIDS
Recent studies demonstrate that caspase-1-mediated pyroptosis drives CD4 T-cell depletion and inflammation by HIV, two signature events that propel HIV disease progression to AIDS. Although pyroptosis contributes to the host's ability to rapidly limit and clear infection by removing intracellular replication niches and enhancing defensive responses through the release of proinflammatory cytokines and endogenous danger signals, in pathogenic inflammation, such as that elicited by HIV-1, this beneficial response does not eradicate the primary stimulus. In fact, it appears to create a pathogenic vicious cycle in which dying CD4 T cells release inflammatory signals that attract more cells into the infected lymphoid tissues to die and to produce chronic inflammation and tissue injury.
It may be possible to break this pathogenic cycle with safe and effective caspase-1 inhibitors. These agents could form a new and exciting 'anti-AIDS' therapy for HIV-infected subjects in which the treatment targets the host instead of the virus. Of note, Caspase-1 deficient mice develop normally, arguing that inhibition of this protein would produce beneficial rather than harmful therapeutic effects in HIV patients.
References
Programmed cell death | Pyroptosis | [
"Chemistry",
"Biology"
] | 3,404 | [
"Senescence",
"Programmed cell death",
"Signal transduction"
] |
8,949,082 | https://en.wikipedia.org/wiki/Of%20the%20form | In mathematics, the phrase "of the form" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs.
Example of use
Here is a proof which should be appreciable with limited mathematical background:
Statement:
The product of any two even natural numbers is also even.
Proof:
Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even.
Note:
In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase of the form.
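The argument above can be spot-checked numerically. The Python snippet below verifies the statement for products of the first fifty even numbers — a finite check that illustrates, rather than proves, the claim:

```python
# Numerical spot-check: the product of any two even naturals is even,
# i.e. the product is itself of the form 2n.
def is_even(m):
    return m % 2 == 0

evens = [2 * n for n in range(1, 51)]   # even numbers of the form 2n
for a in evens:
    for b in evens:
        assert is_even(a * b)           # product is again of the form 2n
print("checked", len(evens) ** 2, "products: all even")
```

Note that the general proof in the article covers all even naturals at once, which no finite enumeration can do.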
References
External links
Mathematical proofs
Mathematical terminology | Of the form | [
"Mathematics"
] | 259 | [
"nan"
] |
8,949,194 | https://en.wikipedia.org/wiki/Stropharia%20rugosoannulata | Stropharia rugosoannulata, commonly known as the wine cap stropharia, "garden giant", burgundy mushroom, king stropharia, or wine-red stropharia, is a species of agaric mushroom in the family Strophariaceae native to Europe and North America.
Unlike many other members of the genus Stropharia, it is regarded as a choice edible and is commercially cultivated.
Description
The king stropharia can grow to high with a reddish-brown convex to flattening cap up to across; its size has led to another colloquial name, godzilla mushroom. The gills are initially pale, then grey, and finally dark purple-brown in colour. The firm flesh is white, as is the tall stem which bears a wrinkled ring. This is the origin of the specific epithet, which means "wrinkled-ringed".
Distribution and habitat
The species is found on wood chips across North America in summer and autumn. It is also found in Europe, and has been introduced to Australia and New Zealand.
Ecology
In Paul Stamets' book Mycelium Running, a study done by Christiane Pischl showed that the king stropharia makes an excellent garden companion to corn. The fungus also has a European history of being grown with corn.
A 2006 study, published in the journal Applied and Environmental Microbiology, found the king stropharia to have the ability to attack the nematode Panagrellus redivivus; the fungus produces unique spiny cells called acanthocytes which are able to immobilise and digest the nematodes.
Uses
Described as very tasty by some authors, the fungus is easily cultivated on a medium similar to that on which it grows naturally. Antonio Carluccio recommends sautéing them in butter or grilling them.
References
Further reading
Zadrazil, Frantisek and Joachim Schliemann: "Ein Beitrag zur Ökologie und Anbautechnik von Stropharia rugosoannulata (Farlow ex Murr.)" in: Der Champignon Nr.163, March 1975
Strophariaceae
Carnivorous fungi
Edible fungi
Fungi described in 1922
Fungi of New Zealand
Fungi of Europe
Fungi of North America
Fungi in cultivation
Fungal pest control agents
Fungus species | Stropharia rugosoannulata | [
"Biology"
] | 471 | [
"Fungi",
"Fungus species",
"Fungal pest control agents"
] |
8,949,285 | https://en.wikipedia.org/wiki/Ringwoodite | Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at high temperatures and pressures of the Earth's mantle between depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium silicate).
Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions.
Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep.
This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle.
Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the earth. At depths greater than about , other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle.
Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km.
Characteristics
Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about ; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth.
Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical properties of this mineral partly determine the properties of the mantle at those depths. The pressure range for stability of ringwoodite lies in the approximate range from 18 to 23 GPa.
Natural ringwoodite has been found in many shocked chondritic meteorites, in which the ringwoodite occurs as fine-grained polycrystalline aggregates.
Natural ringwoodite generally contains much more magnesium than iron and can form a gapless solid solution series from the pure magnesium endmember to the pure iron endmember. The latter, the iron-rich endmember of the γ-olivine solid solution series, γ-Fe2SiO4, was named ahrensite in honor of US mineral physicist Thomas J. Ahrens (1936–2010).
Geological occurrences
In meteorites, ringwoodite occurs in the veinlets of quenched shock-melt cutting the matrix and replacing olivine probably produced during shock metamorphism.
In Earth's interior, olivine occurs in the upper mantle at depths less than about 410 km, and ringwoodite is inferred to be present within the transition zone from about 520 to 660 km depth. Seismic discontinuities at about 410 km, 520 km, and 660 km depth have been attributed to phase changes involving olivine and its polymorphs.
The 520-km depth discontinuity is generally believed to be caused by the transition of the olivine polymorph wadsleyite (beta-phase) to ringwoodite (gamma-phase), while the 660-km discontinuity is attributed to the phase transformation of ringwoodite (gamma-phase) to silicate perovskite plus magnesiowüstite.
Ringwoodite in the lower half of the transition zone is inferred to play a pivotal role in mantle dynamics, and the plastic properties of ringwoodite are thought to be critical in determining flow of material in this part of the mantle. The ability of ringwoodite to incorporate hydroxide is important because of its effect on rheology.
Ringwoodite has been synthesized at conditions appropriate to the transition zone, containing up to 2.6 weight percent water.
Because the transition zone between the Earth's upper and lower mantle helps govern the scale of mass and heat transport throughout the Earth, the presence of water within this region, whether global or localized, may have a significant effect on mantle rheology and therefore mantle circulation. In subduction zones, the ringwoodite stability field hosts high levels of seismicity.
An "ultradeep" diamond (one that has risen from a great depth) found in Juína in western Brazil contained an inclusion of ringwoodite — at the time the only known sample of natural terrestrial origin — thus providing evidence of significant amounts of water as hydroxide in the Earth's mantle. The gemstone, about 5 mm long, was brought up by a diatreme eruption. The ringwoodite inclusion is too small to see with the naked eye. A second such diamond was later found.
The mantle reservoir could contain about three times more water, in the form of hydroxide contained within the wadsleyite and ringwoodite crystal structure, than the Earth's oceans combined.
Synthetic
For experiments, hydrous ringwoodite has been synthesized by mixing powders of forsterite (), brucite (), and silica () so as to give the desired final elemental composition. Putting this under 20 gigapascals of pressure at for three or four hours turns this into ringwoodite, which can then be cooled and depressurized.
Crystal structure
Ringwoodite has the spinel structure, in the isometric crystal system with space group Fd3m (no. 227). On an atomic scale, magnesium and silicon are in octahedral and tetrahedral coordination with oxygen, respectively. The Si-O and Mg-O bonds have mixed ionic and covalent character. The cubic unit cell parameter is 8.063 Å for pure Mg2SiO4 and 8.234 Å for pure Fe2SiO4.
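As a consistency check on the cell parameter, the ambient-pressure density of pure Mg2SiO4 ringwoodite can be estimated from a = 8.063 Å and the eight formula units of a cubic spinel cell; the result, roughly 3.57 g/cm3, sits below the densities computed for transition-zone pressures and temperatures, as expected. A sketch using standard atomic weights:

```python
# Ambient density of pure Mg2SiO4 ringwoodite from its cubic spinel cell
# (a = 8.063 Angstrom, Z = 8 formula units per cell, standard for spinels).
AVOGADRO = 6.02214076e23                       # 1/mol

# Molar mass of Mg2SiO4 in g/mol from standard atomic weights
m_molar = 2 * 24.305 + 28.0855 + 4 * 15.999

a_cm = 8.063e-8                                # cell edge in cm (1 Angstrom = 1e-8 cm)
cell_volume = a_cm ** 3                        # cm^3
z = 8                                          # formula units per cell

density = z * m_molar / (AVOGADRO * cell_volume)   # g/cm^3
print(f"ambient density of Mg2SiO4 ringwoodite ~ {density:.2f} g/cm^3")
```

The same arithmetic with the larger Fe2SiO4 cell (8.234 Å) and the heavier iron atoms gives the correspondingly higher ahrensite density.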
Chemical composition
Ringwoodite compositions range from pure Mg2SiO4 to Fe2SiO4 in synthesis experiments. Ringwoodite can incorporate up to 2.6 percent by weight H2O.
Physical properties
The physical properties of ringwoodite are affected by pressure and temperature. At the pressure and temperature condition of the Mantle Transition Zone, the calculated density value of ringwoodite is 3.90 g/cm3 for pure Mg2SiO4; 4.13 g/cm3 for (Mg0.91,Fe0.09)2SiO4 of pyrolitic mantle; and 4.85 g/cm3 for Fe2SiO4. It is an isotropic mineral with an index of refraction n = 1.768.
The colour of ringwoodite varies between meteorites, between different ringwoodite-bearing aggregates, and even within a single aggregate. The ringwoodite aggregates can show every shade of blue, purple, grey and green, or have no colour at all.
A closer look at coloured aggregates shows that the colour is not homogeneous, but seems to originate from something with a size similar to the ringwoodite crystallites. In synthetic samples, pure Mg ringwoodite is colourless, whereas samples containing more than one mole percent Fe2SiO4 are deep blue in colour. The colour is thought to be due to Fe2+–Fe3+ charge transfer.
References
Magnesium minerals
Iron minerals
Nesosilicates
Polymorphism (materials science)
Spinel group
Cubic minerals
Minerals in space group 227
High pressure science | Ringwoodite | ["Physics", "Materials_science", "Engineering"] | 1,591 | ["High pressure science", "Applied and interdisciplinary physics", "Materials science", "Polymorphism (materials science)"] |
8,950,098 | https://en.wikipedia.org/wiki/Soluforce | SoluForce is a type of Reinforced Thermoplastic Pipe (RTP, also known as flexible composite pipe or FCP).
Introduction
SoluForce is a brand name of Pipelife Nederland B.V. (part of Wienerberger AG), with its main offices and production facilities located in Enkhuizen, the Netherlands. It develops, manufactures and markets RTP, a flexible high-pressure pipe. It is supplied in coils of up to 400 m in length and has design pressure ratings from 36 to 450 bar. This type of pipe is typically used in the oil and gas industry for oil and gas flowlines, high-pressure water injection and water transportation lines. However, it is also used for applications outside of the oil and gas industry, including domestic gas, mining, and hydrogen applications.
This pipe installs faster than conventional steel pipe: speeds of up to 2000 m per day have been reached installing RTP at ground surface, with average speeds of approximately 1000 m per day for normal RTP installations. The pipe mainly benefits applications where steel fails due to corrosion and where installation time is an issue.
History
RTP was developed in the early 1990s by Wavin Repox, Akzo Nobel and by Tubes d'Aquitaine from France. They developed the first pipes reinforced with synthetic fibre to replace medium pressure steel pipes in response to growing demand for non-corrosive conduits for application in the onshore oil and gas industry, particularly from Shell in the Middle East. Because of its expertise in producing pipes, Pipelife Netherlands was involved in the project to develop long length RTP in 1998. The resulting system is marketed today under the name SoluForce.
SoluForce was the first ever RTP to be installed and used in the year 2000.
Properties
The SoluForce RTP has a three-layer pipe construction:
A HDPE liner pipe (different composition of material for low or high operating temperatures)
A reinforcement layer, typically Aramid (Twaron or Kevlar) or high strength steel wire
A white HDPE protective outer layer for UV, damage and abrasion protection
In some SoluForce pipe versions, an extra bonded aluminium layer is added to prevent light components and gasses from permeating.
SoluForce pipes are available in 4 and 6 inch versions. Depending on the reinforcement layer, SoluForce pipes have design pressures of up to 450 bar / 6527 psi.
Typical applications
Soluforce is used for the following applications:
Oil and/or gas flowlines
Oil field waste water disposal lines
Oil field injection lines
Offshore water injection risers
Offshore oil flowlines
High pressure Water injection lines
High pressure gas transport lines
Relining existing pipes
Although these kind of pipes have been developed for the oil and gas industry, they are also used for domestic gas, mining, and hydrogen applications.
Testing and qualification
Soluforce RTP is tested and acknowledged by the following organisations:
DNV Certification D-2615 - Soluforce System 4" and 5" with in-line couplings and end fittings
ASTM - WK11803
API - RP 15S (oil field service)
ISO/TS 18226:2006 (gas service)
DVGW VP 642 (German gas service)
NYSEARCH project by the Northeast Gas Association (USA)
See also
Plastic Pressure Pipe Systems
Pipeline transport
Reinforced Thermoplastic Pipes
Water injection (oil production)
Wienerberger
References
External links
Official website
JIP proposal 1999 from Newcastle University
Conference paper 23rd World Gas Conference
Bibliography
Piping
Pipeline transport
Petroleum production
Composite materials
Brand name materials | Soluforce | ["Physics", "Chemistry", "Engineering"] | 741 | ["Building engineering", "Chemical engineering", "Composite materials", "Materials", "Mechanical engineering", "Piping", "Matter"] |
8,950,361 | https://en.wikipedia.org/wiki/Triple%20modular%20redundancy | In computing, triple modular redundancy, sometimes called triple-mode redundancy, (TMR) is a fault-tolerant form of N-modular redundancy, in which three systems perform a process and that result is processed by a majority-voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault.
The TMR concept can be applied to many forms of redundancy, such as software redundancy in the form of N-version programming, and is commonly found in fault-tolerant computer systems.
Space satellite systems often use TMR, although satellite RAM usually uses Hamming error correction.
Some ECC memory uses triple modular redundancy hardware (rather than the more common Hamming code), because triple modular redundancy hardware is faster than Hamming error correction hardware. Called repetition code, some communication systems use N-modular redundancy as a simple form of forward error correction. For example, 5-modular redundancy communication systems (such as FlexRay) use the majority of 5 samples – if any 2 of the 5 results are erroneous, the other 3 results can correct and mask the fault.
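The repetition-code behaviour can be sketched in a few lines: each bit is sent N times and decoded by majority, so up to (N - 1) // 2 corrupted copies per bit are masked. This is an illustrative sketch, not the implementation of any particular system:

```python
def repetition_encode(bits, n=5):
    """Repeat each bit n times (an n-modular repetition code)."""
    return [b for bit in bits for b in [bit] * n]

def repetition_decode(stream, n=5):
    """Decode by majority vote over each group of n samples;
    up to (n - 1) // 2 corrupted samples per group are corrected."""
    out = []
    for i in range(0, len(stream), n):
        group = stream[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

message = [1, 0, 1, 1]
coded = repetition_encode(message)
# Corrupt 2 of the 5 copies of the first bit: still recoverable,
# since 3 of the 5 samples remain correct.
coded[0] ^= 1
coded[3] ^= 1
assert repetition_decode(coded) == message
```

A third error in the same group of five would flip the majority and produce a decoding failure, matching the 2-of-5 limit stated above.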
Modular redundancy is a basic concept, dating to antiquity, while the first use of TMR in a computer was the Czechoslovak computer SAPO, in the 1950s.
General case
The general case of TMR is called N-modular redundancy, in which any positive number of replications of the same action is used. The number is typically taken to be at least three, so that error correction by majority vote can take place; it is also usually taken to be odd, so that no ties may happen.
Majority logic gate
3-input majority gate
The 3-input majority gate output is 1 if two or more of the inputs of the majority gate are 1; output is 0 if two or more of the majority gate's inputs are 0. Thus, the majority gate is the carry output of a full adder, i.e., the majority gate is a voting machine.
The 3-input majority gate is described by the Boolean equation Q = AB + BC + CA: the output is 1 whenever at least two of the inputs A, B and C are 1.
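A 3-input majority gate is small enough to verify exhaustively against the rule stated above (output 1 when two or more inputs are 1). A minimal sketch:

```python
def majority3(a, b, c):
    """3-input majority gate: Q = AB + BC + CA, which is also the
    carry output of a full adder."""
    return (a & b) | (b & c) | (c & a)

# Exhaustive truth table: the output is 1 iff two or more inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority3(a, b, c) == (1 if a + b + c >= 2 else 0)
```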
In TMR, three identical logic circuits (logic gates) are used to compute the same set of specified Boolean function. If there are no circuit failures, the outputs of the three circuits are identical. But due to circuit failures, the outputs of the three circuits may be different.
TMR operation
Assuming the Boolean function computed by the three identical logic gates has value 1, then: (a) if no circuit has failed, all three circuits produce an output of value 1, and the majority gate output has value 1. (b) if one circuit fails and produces an output of 0, while the other two are working correctly and produce an output of 1, the majority gate output is 1, i.e., it still has the correct value. And similarly for the case when the Boolean function computed by the three identical circuits has value 0. Thus, the majority gate output is guaranteed to be correct as long as no more than one of the three identical logic circuits has failed.
For a TMR system with a single voter of reliability Rv (probability of working) and three components each of reliability Rm, the probability of the system being correct can be shown to be Rv(3Rm^2 - 2Rm^3).
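A quick numeric check of the standard TMR reliability expression, Rv(3Rm^2 - 2Rm^3) for a voter of reliability Rv and three components of reliability Rm, also shows when TMR is worthwhile. A sketch, not tied to any particular system:

```python
def tmr_reliability(r_m, r_v=1.0):
    """Probability a TMR system is correct: the voter works (r_v) and
    at least 2 of the 3 components, each of reliability r_m, agree."""
    return r_v * (3 * r_m**2 - 2 * r_m**3)

# With a perfect voter, TMR improves on a single component when r_m > 0.5.
assert tmr_reliability(0.9) > 0.9
assert tmr_reliability(0.4) < 0.4    # below 0.5, TMR actually hurts

# Minimum voter reliability for TMR to beat a single component:
# r_v * (3*r_m**2 - 2*r_m**3) > r_m  =>  r_v > 1 / (3*r_m - 2*r_m**2)
r_m = 0.9
min_voter = 1 / (3 * r_m - 2 * r_m**2)
print(f"for r_m = {r_m}, the voter must be more reliable than {min_voter:.3f}")
```

This is the calculation referred to later: because the voter is far simpler than the guarded logic, its reliability usually clears this threshold easily, which is why most TMR systems do not triplicate the voter.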
TMR systems should use data scrubbing – rewrite flip-flops periodically – in order to avoid accumulation of errors.
Voter
The majority gate itself could fail. This can be protected against by applying triple redundancy to the voters themselves.
In a few TMR systems, such as the Saturn Launch Vehicle Digital Computer and functional triple modular redundancy (FTMR) systems, the voters are also triplicated. Three voters are used – one for each copy of the next stage of TMR logic. In such systems there is no single point of failure.
Even though only using a single voter brings a single point of failure – a failed voter will bring down the entire system – most TMR systems do not use triplicated voters. This is because the majority gates are much less complex than the systems that they guard against, so they are much more reliable. By using the reliability calculations, it is possible to find the minimum reliability of the voter for TMR to be a win.
Chronometers
To use triple modular redundancy, a ship must have at least three chronometers; two chronometers provide dual modular redundancy, allowing a backup if one should cease to work, but not allowing any error correction if the two displayed different times, since in case of contradiction it would be impossible to know which one was wrong (the error detection obtained would be the same as having only one chronometer and checking it periodically). Three chronometers provide triple modular redundancy, allowing error correction if one of the three is wrong: the navigator takes the average of the two closest readings (a vote for average precision).
There is an old adage to this effect, stating: "Never go to sea with two chronometers; take one or three."
The point is that if two chronometers contradict, there is no way to know which one is correct. At one time this rule was an expensive one, as the cost of three sufficiently accurate chronometers exceeded the cost of many types of smaller merchant vessels.
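The three-chronometer procedure described above, discarding the outlier and averaging the two closest readings, can be sketched directly (the helper name and sample times are illustrative):

```python
from itertools import combinations

def best_estimate(readings):
    """Average the two closest of three chronometer readings,
    treating the remaining outlier as the faulty instrument."""
    assert len(readings) == 3
    a, b = min(combinations(readings, 2),
               key=lambda pair: abs(pair[0] - pair[1]))
    return (a + b) / 2

# Two chronometers agree to within a second; the third has drifted badly.
assert best_estimate([43200.0, 43201.0, 43950.0]) == 43200.5
```

As with any 2-of-3 vote, this corrects a single faulty instrument but is defeated if two chronometers drift together.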
Some vessels carried more than three chronometers – for example, HMS Beagle carried 22 chronometers. However, such a large number was usually only carried on ships undertaking survey work as was the case with the Beagle.
In the modern era, ships at sea use GNSS navigation receivers (with GPS, GLONASS & WAAS etc. support) – mostly running with WAAS or EGNOS support so as to provide accurate time (and location).
In popular culture
In Arthur C. Clarke's science fiction novel Rendezvous with Rama, the Ramans make heavy use of triple redundancy.
In the popular anime Neon Genesis Evangelion, the Magi are a set of three biological supercomputers that must agree with a 2/3 majority vote before delivering a decision.
In the film Minority Report, 3 "precogs" are used to predict impending homicides, using a triple modular redundancy. In the plot, this system fails, causing a false positive: an innocent man is wrongly accused of murder.
See also
Fault tolerant system
Lockstep (computing)
Segal's law
References
External links
Article about TMR with reference to TMR usage in avionics and industry
Johnson, J. M., & Wirthlin, M. J. (2010, February). Voter insertion algorithms for FPGA designs using triple modular redundancy. In Proceedings of the 18th annual ACM/SIGDA international symposium on Field programmable gate arrays (pp. 249–258). ACM.
Engineering concepts
Reliability engineering
Safety
Fault-tolerant computer systems
Error detection and correction | Triple modular redundancy | ["Technology", "Engineering"] | 1,465 | ["Systems engineering", "Reliability engineering", "Error detection and correction", "Computer systems", "Fault-tolerant computer systems"] |
8,950,551 | https://en.wikipedia.org/wiki/Operational%20calculus | Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation.
History
The idea of representing the processes of calculus, differentiation and integration, as operators
has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied.
This approach was further developed by Francois-Joseph Servois who developed convenient notations. Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester.
Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 and by Boole in 1859.
This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy.
Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, [Heaviside] developed the operational calculus now ascribed to his name.
At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians.
Operational calculus first found applications in electrical engineering problems, for
the calculation of transients in linear circuits after 1910, under the impulse of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush.
A rigorous mathematical justification of Heaviside's operational methods came only
after the work of Bromwich that related operational calculus with
Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition).
Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using
integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener).
A different approach to operational calculus was developed in the 1930s by Polish mathematician
Jan Mikusiński, using algebraic reasoning.
Norbert Wiener laid the foundations for operator theory in his review of the existential status of the operational calculus in 1926:
The brilliant work of Heaviside is purely heuristic, devoid of even the pretense to mathematical rigor. Its operators apply to electric voltages and currents, which may be discontinuous and certainly need not be analytic. For example, the favorite corpus vile on which he tries out his operators is a function which vanishes to the left of the origin and is 1 to the right. This excludes any direct application of the methods of Pincherle…
Although Heaviside’s developments have not been justified by the present state of the purely mathematical theory of operators, there is a great deal of what we may call experimental evidence of their validity, and they are very valuable to the electrical engineers. There are cases, however, where they lead to ambiguous or contradictory results.
Principle
The key element of the operational calculus is to consider differentiation as an operator p acting on functions. Linear differential equations can then be recast in the form of a "function" F(p) of the operator p acting on the unknown function equaling the known function. Here, F defines something that takes in the operator p and returns another operator F(p).
Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p and the unit function 1. The operator in its use probably is more mathematical than physical, the unit function more physical than mathematical. The operator p in the Heaviside calculus initially represents the time differentiator d/dt. Further, it is desired for this operator to bear the reciprocal relation with integration, such that 1/p denotes the operation of integration.
In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step function H(t), such that H(t) = 0 if t < 0, and H(t) = 1 if t > 0.
The simplest example of application of the operational calculus is to solve p y = H(t), which gives y = (1/p) H(t) = t H(t), the integral of the unit step.
From this example, one sees that 1/p represents integration. Furthermore, n iterated integrations are represented by 1/p^n, so that (1/p^n) H(t) = (t^n / n!) H(t).
Continuing to treat p as if it were a variable, p/(p - a) can be rewritten by using a geometric series expansion: p/(p - a) = 1/(1 - a/p) = 1 + a/p + a^2/p^2 + ..., so that (p/(p - a)) H(t) = (1 + a t + a^2 t^2/2! + ...) H(t) = e^(at) H(t).
Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on H(t).
Moreover, if the function 1/F(p) has a series expansion of the form 1/F(p) = a_0 + a_1/p + a_2/p^2 + ..., it is straightforward to find (1/F(p)) H(t) = (a_0 + a_1 t + a_2 t^2/2! + ...) H(t).
Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem.
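The series manipulations above can be checked numerically: expanding p/(p - a) in powers of 1/p and replacing each (1/p^n)H(t) by (t^n/n!)H(t) reproduces e^(at) for t > 0. A sketch under those standard Heaviside rules, with a truncated series:

```python
import math

def heaviside_exp(a, t, terms=40):
    """Apply p/(p - a) to the unit step via the expansion
    1/(1 - a/p) = sum_n (a/p)^n, using (1/p^n) H(t) = (t^n/n!) H(t)."""
    return sum((a * t) ** n / math.factorial(n) for n in range(terms))

# The operational result equals e^(at) for t > 0.
for a, t in [(1.0, 0.5), (-2.0, 1.0), (3.0, 0.25)]:
    assert abs(heaviside_exp(a, t) - math.exp(a * t)) < 1e-9
```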
Heaviside went further and defined fractional powers of p, thus establishing a connection between operational calculus and fractional calculus.
Using the Taylor expansion, one can also verify the Lagrange–Boole translation formula, e^(ap) f(t) = f(t + a), so the operational calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals.
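For a polynomial the translation formula can be verified exactly, since the Taylor series e^(ap) f(t) = sum_n (a^n/n!) f^(n)(t) terminates. A sketch with illustrative helper names, representing polynomials as low-to-high coefficient lists:

```python
import math

def poly_eval(coeffs, t):
    """Evaluate a polynomial given low-to-high coefficients."""
    return sum(c * t**k for k, c in enumerate(coeffs))

def poly_derivative(coeffs):
    """Differentiate a polynomial in coefficient form."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def shift_by_exp_ap(coeffs, a, t):
    """Apply e^(ap) = sum_n (a^n/n!) p^n term by term, with p = d/dt;
    the sum terminates because polynomial derivatives eventually vanish."""
    total, n = 0.0, 0
    while coeffs:
        total += a**n / math.factorial(n) * poly_eval(coeffs, t)
        coeffs = poly_derivative(coeffs)
        n += 1
    return total

# e^(ap) f(t) = f(t + a) for f(t) = 2 - t + 3t^2 + t^3
f = [2.0, -1.0, 3.0, 1.0]
a, t = 1.5, 0.7
assert abs(shift_by_exp_ap(f, a, t) - poly_eval(f, t + a)) < 1e-9
```

The same operator identity, applied to sampled functions, is what connects the calculus to finite-difference equations and delayed signals.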
See also
Calculus of finite differences
Umbral calculus
References
Further sources
During Heaviside's lifetime
— Some historical references on the precursor work up to Carmichael.
After Heaviside's death
External links
IV Lindell HEAVISIDE OPERATIONAL RULES APPLICABLE TO ELECTROMAGNETIC PROBLEMS
Ron Doerfler Heaviside's Calculus
Jack Crenshaw essay showing use of operators More On the Rosetta Stone
Linear operators
Electrical engineering
Differential equations | Operational calculus | ["Mathematics", "Engineering"] | 1,071 | ["Functions and mappings", "Mathematical objects", "Linear operators", "Equations", "Differential equations", "Mathematical relations", "Electrical engineering"] |
8,951,286 | https://en.wikipedia.org/wiki/Peter%20Hilton | Peter John Hilton (7 April 19236 November 2010) was a British mathematician, noted for his contributions to homotopy theory and for code-breaking during World War II.
Early life
He was born in Brondesbury, London, the son of Mortimer Jacob Hilton (1893–1959), a Jewish physician who was in general practice in Peckham, and his wife Elizabeth Amelia Freedman (1900–1984), and was brought up in Kilburn. The physiologist Sidney Montague Hilton (1921–2011) of the University of Birmingham Medical School was his elder brother.
Hilton was educated at St Paul's School, London. He went to The Queen's College, Oxford in 1940 to read mathematics, on an open scholarship, where the mathematics tutor was Ughtred Haslam-Jones.
Bletchley Park
A wartime undergraduate in wartime Oxford, on a shortened course, Hilton was obliged to train with the Royal Artillery, and faced scheduled conscription in summer 1942. After four terms, he took the advice of his tutor, and followed up a civil service recruitment contact. He had an interview for mathematicians with knowledge of German, and was offered a position in the Foreign Office without being told the nature of the work. The team was, in fact, recruiting on behalf of the Government Code and Cypher School. Aged 18, he arrived at the codebreaking station Bletchley Park on 12 January 1942.
Hilton worked with several of the Bletchley Park deciphering groups. He was initially assigned to Naval Enigma in Hut 8. Hilton commented on his experience working with Alan Turing, whom he knew well for the last 12 years of his life, in his "Reminiscences of Bletchley Park" from A Century of Mathematics in America. Hilton echoed similar thoughts in the Nova PBS documentary Decoding Nazi Secrets (UK Station X, Channel 4, 1999).
In late 1942, Hilton transferred to work on German teleprinter ciphers. A special section known as the "Testery" had been formed in July 1942 to work on one such cipher, codenamed "Tunny", and Hilton was one of the early members of the group. His role was to devise ways to deal with changes in Tunny, and to liaise with another section working on Tunny, the "Newmanry", which complemented the hand-methods of the Testery with specialised codebreaking machinery. Hilton has been counted as a member of the Newmanry, possibly on a part-time basis.
Recreational
A convivial pub drinker at Bletchley Park, Hilton also spent time with Turing working on chess problems and palindromes. He there constructed a 51-letter palindrome: "Doc note, I dissent. A fast never prevents a fatness. I diet on cod."
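Hilton's 51-letter palindrome can be checked mechanically: dropping spaces and punctuation leaves a letter sequence that reads the same in both directions. A quick check:

```python
phrase = "Doc note, I dissent. A fast never prevents a fatness. I diet on cod."
letters = [ch.lower() for ch in phrase if ch.isalpha()]

assert len(letters) == 51          # the 51 letters of the palindrome
assert letters == letters[::-1]    # reads the same forwards and backwards
```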
Mathematics
Hilton obtained his DPhil in 1949 from Oxford University under the supervision of John Henry Whitehead. His dissertation was "Calculation of the homotopy groups of -polyhedra". His principal research interests were in algebraic topology, homological algebra, categorical algebra and mathematics education. He published 15 books and over 600 articles in these areas, some jointly with colleagues. Hilton's theorem (1955) is on the homotopy groups of a wedge of spheres. It addresses an issue that comes up in the theory of "homotopy operations".
Turing, at the Victoria University of Manchester, in 1948 invited Hilton to see the Manchester Mark 1 machine. Around 1950, Hilton took a position at the university maths department. He was there in 1949, when Turing engaged in a discussion that introduced him to the word problem for groups. Hilton worked with Walter Lederman. Another colleague there was Hugh Dowker, who in 1951 drew his attention to the Serre spectral sequence.
In 1952, Hilton moved to DPMMS in Cambridge, England, where he ran a topology seminar attended by John Frank Adams, Michael Atiyah, David B. A. Epstein, Terry Wall and Christopher Zeeman. Via Hilton, Atiyah became aware of Jean-Pierre Serre's coherent sheaf proof of the Riemann–Roch theorem for curves, and found his first research direction in sheaf methods for ruled surfaces.
In 1955, Hilton started work with Beno Eckmann on what became known as Eckmann-Hilton duality for the homotopy category. Through Eckmann, he became editor of the Ergebnisse der Mathematik und ihrer Grenzgebiete, a position he held from 1964 to 1983.
Hilton returned to Manchester as Professor, in 1956. In 1958, he became the Mason Professor of Pure Mathematics at the University of Birmingham. He moved to the United States in 1962 to be Professor of Mathematics at Cornell University, a post he held until 1971. From 1971 to 1973, he held a joint appointment as Fellow of the Battelle Seattle Research Center and Professor of Mathematics at the University of Washington. On 1 September 1972, he was appointed Louis D. Beaumont University Professor at Case Western Reserve University; on 1 September 1973, he took up the appointment. In 1982, he was appointed Distinguished Professor of Mathematics at Binghamton University, becoming Emeritus in 2003. Latterly, he spent each spring semester as Distinguished Professor of Mathematics at the University of Central Florida.
Hilton is featured in the book Mathematical People.
Death and family
Peter Hilton died on 6 November 2010 in Binghamton, New York, at age 87. He left behind his wife, Margaret Mostyn (born 1925), whom he married in 1949, and their two sons, who were adopted. Margaret, a schoolteacher, had an acting career as Margaret Hilton in the US, in summer stock theatre. She also played television roles. She died in Seattle in 2020.
In popular culture
Hilton is portrayed by actor Matthew Beard in the 2014 film The Imitation Game, which tells the tale of Alan Turing and the cracking of Nazi Germany's Enigma code.
Academic positions
Lecturer at University of Cambridge, 1952–55
Senior Lecturer at University of Manchester, England, 1956–58
Mason Professor of Pure Mathematics, University of Birmingham, England, 1958–62
Visiting Professor at the Eidgenössische Technische Hochschule at Zürich, ETH Zurich, 1966–67, 1981–82, 1988–89
Visiting Professor at the Courant Institute of Mathematical Sciences, New York University, 1967–68
Visiting Professor at the Universitat Autònoma de Barcelona, Autonomous University of Barcelona, 1989
Professeur invité, University of Lausanne, in 1996
Honours
Silver Medal, University of Helsinki, 1975
Doctor of Humanities (hon. causa), N. University of Michigan, 1977
Corresponding Member, Brazilian Academy of Sciences, 1979
Doctor of Science (hon. causa), Memorial University of Newfoundland, 1983
Doctor of Science (hon. causa), Autonomous University of Barcelona, 1989
In August 1983, an international conference on algebraic topology was held, under the auspices of the Canadian Mathematical Society, to mark Hilton's 60th Birthday. Professor Hilton was presented with a Festschrift of papers dedicated to him (London Mathematical Society Lecture Notes, Volume 86, 1983). The American Mathematical Society has published the proceedings under the title ‘Conference on Algebraic Topology in Honor of Peter Hilton’
Hilton was selected in October 1992, to deliver the invited lecture at the ‘Georges de Rham’ day at the University of Lausanne.
An International Conference was held in Montreal in May 1993, to mark the 70th birthday of Hilton. The proceedings were published as The Hilton Symposium, CRM Proceedings and Lecture Notes, Volume 6, American Mathematical Society (1994), edited by Guido Mislin.
In 1994, Hilton was the Mahler Lecturer of the Australian Mathematical Society.
In the summers of 2001 and 2002, Hilton was Visiting Erskine Fellow at the University of Canterbury, Christchurch, New Zealand.
In winter term of 2005 Hilton received an appointment as Courtesy Faculty in the College of Arts and Sciences at University of South Florida.
Hilton's former PhD students
According to the Mathematics Genealogy Project site, Hilton supervised at least 27 doctoral students, including Paul Kainen at Cornell University.
Bibliography
Peter J. Hilton, An introduction to homotopy theory, Cambridge Tracts in Mathematics and Mathematical Physics, no. 43, Cambridge University Press, 1953.
Peter J. Hilton, Shaun Wylie, Homology theory: An introduction to algebraic topology, Cambridge University Press, New York, 1960.
Peter Hilton, Homotopy theory and duality, Gordon and Breach, New York-London-Paris, 1965
H.B. Griffiths and P.J. Hilton, "A Comprehensive Textbook of Classical Mathematics", Van Nostrand Reinhold, London, 1970,
Peter J. Hilton, Guido Mislin, Joe Roitberg, Localization of nilpotent groups and spaces, North-Holland Publishing Co., Amsterdam-Oxford, 1975.
Peter Hilton, Jean Pedersen, Build your own polyhedra. Second edition, Dale Seymour Publications, Palo Alto, 1994.
Peter Hilton, Derek Holton, Jean Pedersen, Mathematical reflections: In a room with many mirrors. Corrected edition, Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1996.
Peter J. Hilton, Urs Stammbach, A course in homological algebra. Second edition, Graduate Texts in Mathematics, vol 4, Springer-Verlag, New York, 1997.
Hans Walser, 99 Points of Intersection, translated by Peter Hilton and Jean Pedersen, MAA Spectrum, Mathematical Association of America, 2006.
Peter Hilton, Derek Holton, Jean Pedersen, Mathematical vistas: From a room with many windows, Undergraduate Texts in Mathematics, Springer-Verlag, New York, 2010.
Peter Hilton, Jean Pedersen, A mathematical tapestry: Demonstrating the beautiful unity of mathematics, Cambridge University Press, Cambridge, 2010.
References
External links
1923 births
2010 deaths
20th-century British mathematicians
21st-century British mathematicians
British topologists
People educated at St Paul's School, London
Alumni of the Queen's College, Oxford
Bletchley Park people
Academics of the Victoria University of Manchester
Academics of the University of Birmingham
State University of New York faculty
Cornell University faculty
Binghamton University faculty
Courant Institute of Mathematical Sciences faculty
Palindromists | Peter Hilton | [
"Physics"
] | 2,086 | [
"Palindromists",
"Symmetry",
"Palindromes"
] |
8,951,334 | https://en.wikipedia.org/wiki/Diver%27s%20pump | A diver's pump is a manually operated low pressure air compressor used to provide divers in standard diving dress with air while they are underwater.
Rotary
Rotary pumps are driven by a crankshaft that is rotated by handles on two flywheels attached to the ends of the shaft on each side of the pump. Rotary pumps were built with one, two or three cylinders, and are operated by a team of two men. Pistons attached to the crankshaft draw in air through the inlet valves and then pump it through the outlet valves to an air hose which delivers the air to the helmet of the diver. Cylinders, valves and outlet fittings for air are generally made from brass for corrosion resistance in the marine environment. Rotary operated pumps were manufactured with single or double action.
Flow of air through the helmet could be controlled by manually adjusting the back-pressure on the helmet exhaust valve, usually on the lower right side of the bonnet, and by manually adjusting the inlet supply valve on the airline, usually fastened to the front lower left of the corselet. Flow rate would also be affected by the surface delivery system and depth. Manual pumps would be operated at the speed necessary for sufficient air supply, which could be judged by delivery pressure and feedback from the diver. Many manual pumps had delivery pressure gauges calibrated in units of water depth - feet or metres of water column - which would provide the supervisor with a reasonable indication of diver depth. If the diver needed more air, the operators would have to crank faster.
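The gauge calibration described above is just hydrostatics: a delivery pressure P supports a water column of height h = P/(ρg). A minimal sketch of the conversion (the seawater density used here is an assumed nominal value, not taken from the text):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, assumed nominal seawater density
G = 9.81               # m/s^2, standard gravity

def gauge_pressure_to_depth(pressure_pa):
    """Water-column depth (m) indicated by a gauge pressure in pascals: h = P / (rho * g)."""
    return pressure_pa / (RHO_SEAWATER * G)

def depth_to_gauge_pressure(depth_m):
    """Gauge pressure (Pa) corresponding to a given seawater depth."""
    return RHO_SEAWATER * G * depth_m
```

On this assumption, a gauge reading equivalent to 10 m of seawater corresponds to roughly 100 kPa above atmospheric pressure, which is how the supervisor could read diver depth directly off the pump's delivery gauge.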
Lever
Lever pumps have one or two cylinders, which are operated by rocking a beam with handles attached to its ends which is pivoted at the centre for a two-cylinder pump, and at the end for a single cylinder pump. Vertical lever pumps with bell-crank operation were also made, usually for shallow water work. The piston rods are connected to the beam near the pivot. Upward movement of the pistons pulls the air into the cylinders through the inlet valves, and then downward movement pumps the air through the hose to the helmet of the diver in a single action pump. Cylinders, valves and outlet for air are usually made from brass for reliability.
Other components
The pump may be mounted in a cabinet for protection during transport and storage, and may be fitted with one or more pressure gauges.
Gallery
See also
References
Sources
External links
All about pumps in worlds largest virtual diving museum: Diving Heritage
Diving support equipment
Pumps
Gas technologies | Diver's pump | [
"Physics",
"Chemistry"
] | 489 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
8,951,436 | https://en.wikipedia.org/wiki/Type%204%20Ka-Tsu | The was a Japanese amphibious landing craft of World War II. The first prototype was completed in late 1943 and trials were conducted off Kure in March 1944.
History
Japan's combat experience in the Solomon Islands in 1942, which revealed the difficulty of resupplying Japanese forces in such situations, prompted the IJN to commence an amphibious tractor program in 1943. The result, the Ka-Tsu, was designed by Commander Hori Motoyoshi of the Kure Naval Yard.
Design
The Ka-Tsu's primary purpose was to transport cargo and/or troops ashore. It had light armored shielding with a maximum of 10 mm. Its engine compartment and electric final drives were hermetically sealed, as it was intended to be launched from a submarine. The twin drive propeller shafts were designed to retract "into their ducts" once the vehicle reached the beach.
The first prototype was completed in late 1943 and trials were conducted off Kure in March 1944. By the time development had been completed, it was proposed that the Ka-Tsu be used to attack US battleships anchored in atolls (such as Ulithi), which could not readily be attacked using conventional means. It was proposed that a Ka-Tsu armed with a pair of naval Type 93 torpedoes be dropped off by submarine away from the atoll, propel itself to the outer reef using its tracks, and then enter the lagoon on the inside of the reef. Tests were successfully carried out with a modified Ka-Tsu carrying two torpedoes on its deck, but the war ended before any such mission could be mounted and the Ka-Tsu deployed in combat. A total of 49 units were produced.
Gallery
See also
Blockade Runner
Type 3 submergence transport vehicle
Ha-101 class submarine
Notes
References
External links
Taki's Imperial Japanese Army Page - Akira Takizawa
World War II armoured fighting vehicles of Japan
Imperial Japanese Navy
Landing craft
Tractors
Amphibious vehicles of World War II
Tracked amphibious vehicles
Amphibious military vehicles
Military vehicles of Japan
Blockades of World War II
Mitsubishi
Military vehicles introduced from 1940 to 1944
Armoured personnel carriers of WWII
Tracked armoured personnel carriers | Type 4 Ka-Tsu | [
"Engineering"
] | 426 | [
"Engineering vehicles",
"Tractors"
] |
8,951,529 | https://en.wikipedia.org/wiki/Frontiers%20%281989%20TV%20series%29 | Frontiers is an eight-part BBC television series, and accompanying book, that explored the geographic boundaries between countries. Eight writers and journalists in a variety of countries investigated the economic, political, geographical and historical reasons that account for why people are divided. The series was aired in 1989, just a few months before the fall of the Berlin Wall, which was featured in one episode.
Episodes
"Natural Break": Frederic Raphael explored the Pyrenees, the frontier between France and Spain, which at the time was preparing to join the (then) European Economic Community.
"Gone Tomorrow": John Wells covered the Iron Curtain that split East and West Germans.
"Gold and the Gun": Nadine Gordimer visited the war-torn border area between Mozambique and her native South Africa.
"Night and Day": Richard Rodriguez showed how the rich North and poor South converged at the US/Mexican border.
"Long Division": Ronald Eyre looked at the people living on both sides of the border in Ireland that splits the Republic from Ulster.
"Big Brother's Bargain": Nigel Hamilton hiked up the boundary between Russia and Finland.
"Border Run": Jon Swain visited the Thai/Cambodian border where thousands of Cambodian refugees had been stranded for over ten years.
"Cyprus: Stranded in Time": Christopher Hitchens investigated the divided island of Cyprus.
Further reading
Frontiers, published in 1990 by BBC Books,
External links
1989 British television series debuts
1989 British television series endings
1980s British documentary television series
BBC television documentaries
Borders
British English-language television shows | Frontiers (1989 TV series) | [
"Physics"
] | 312 | [
"Spacetime",
"Borders",
"Space"
] |
8,951,767 | https://en.wikipedia.org/wiki/Reinforced%20thermoplastic%20pipe | Reinforced thermoplastic pipe (RTP) is a type of pipe reinforced using a high strength synthetic fibre such as glass, aramid or carbon. It was initially developed in the early 1990s by Wavin Repox, Akzo Nobel and by Tubes d'Aquitaine from France, who developed the first pipes reinforced with synthetic fibre to replace medium pressure steel pipes in response to growing demand for non-corrosive conduits for application in the onshore oil and gas industry, particularly in the Middle East. Typically, the materials used in the construction of the pipe might be Polyethylene (PE), Polyamide-11 or PVDF and may be reinforced with Aramid or Polyester fibre although other combinations are used. More recently the technology of producing such pipe, including the marketing, rests with a few key companies, where it is available in coils up to length. These pipes are available in pressure ratings from . Over the last few years this type of pipe has been acknowledged as a standard alternative solution to steel for oilfield flowline applications by certain oil companies and operators. An advantage of this pipe is also its very fast installation time compared to steel pipe when considering the welding time as average speeds up to /day have been reached installing RTP in ground surface.
Primarily, the pipe provides benefit to applications where steel may rupture due to corrosion and installation time is an issue.
Technology and history
The idea of synthetic fibre reinforced pipe has origins in the flexible hose and offshore industry where it has been frequently used for applications such as control lines in umbilicals and production flowlines for over 30 years. However, the commercialisation and realisation of a competitive product for the onshore oil industry came from a partnership between Teijin Aramid (supplier of aramid fibre Twaron) and Wavin Repox (manufacturer of reinforced thermoset pipes), where Bert Dalmolen initiated a project to develop such a pipe. He was later employed by Pipelife where a state of the art production line was developed to produce RTP. Pipelife also developed a pipe reinforced with steel wire to achieve even higher pressure ratings of over using steel reinforcement. Mr Chevrier (Tubes d'Aquitaine) also developed machinery that could produce such pipes, but was not successful in commercialising RTP.
See also
Pipeline transport
Plastic pipework
Plastic Pressure Pipe Systems
References
Notes
Bibliography
http://catalogue.bl.uk:80/F/CJYYHDQ2ECVFDKHH129FR4VSPYY4BKK89XIJJCATY7GIV1KIBD-12392?func=full-set-set&set_number=165133&set_entry=000001&format=999 PhD Thesis - Reinforced Thermoplastic Pipes, 1998, Dr Ben Chapman, BSc (Hons), PhD
External links
CheFEM Software: Permeation Analysis of Annulus Condition
Piping
Pipeline transport
Composite materials
Fibre-reinforced polymers | Reinforced thermoplastic pipe | [
"Physics",
"Chemistry",
"Engineering"
] | 642 | [
"Building engineering",
"Chemical engineering",
"Composite materials",
"Materials",
"Mechanical engineering",
"Piping",
"Matter"
] |
7,432,103 | https://en.wikipedia.org/wiki/2%2C4%2C6-Trinitroaniline | 2,4,6-Trinitroaniline, C6H4N4O6, abbreviated as TNA and also known as picramide, a nitrated amine. Materials in this group range from slight to strong oxidizing agents. If mixed with reducing agents, including hydrides, sulfides and nitrides, they may begin a vigorous reaction that culminates in a detonation. The aromatic nitro compounds may explode in the presence of a base such as sodium hydroxide or potassium hydroxide even in the presence of water or organic solvents. The explosive tendencies of aromatic nitro compounds are increased by the presence of multiple nitro groups. The appearance of trinitroaniline varies from yellow to orange to red depending on its purity and concentration.
Applications
In modern times, trinitroaniline is used only in the small warheads of some explosive devices such as mortars. In World War II it was used by the Imperial Japanese Navy as Type 97 bakuyaku (Model 1931 explosive) in some versions of gun projectiles instead of the less stable burster shimose (picric acid). It was also used in the Yokosuka MXY-7 Ohka, a kamikaze antishipping human-guided rocket aircraft.
Health and safety
Trinitroaniline is dangerously explosive and also hepatotoxic. Symptoms of exposure to this compound may include skin and eye irritation, headache, drowsiness, weakness, cyanosis, and respiratory distress.
See also
Aniline
Tetryl
References
Anilines
Nitrobenzene derivatives
Explosive chemicals | 2,4,6-Trinitroaniline | [
"Chemistry"
] | 330 | [
"Explosive chemicals"
] |
7,432,107 | https://en.wikipedia.org/wiki/Subgrain%20rotation%20recrystallization | In metallurgy, materials science and structural geology, subgrain rotation recrystallization is recognized as an important mechanism for dynamic recrystallisation. It involves the rotation of initially low-angle sub-grain boundaries until the mismatch between the crystal lattices across the boundary is sufficient for them to be regarded as grain boundaries. This mechanism has been recognized in many minerals (including quartz, calcite, olivine, pyroxenes, micas, feldspars, halite, garnets and zircons) and in metals (various magnesium, aluminium and nickel alloys).
Structure
In metals and minerals, grains are ordered structures in different crystal orientations. Subgrains are defined as grains that meet their neighbours at a misorientation of less than about 10–15 degrees, making the boundary a low-angle grain boundary (LAGB). Because grain-boundary energy depends on the number of dislocations in the boundary, there is a driving force for fewer high-angle grain boundaries (HAGBs) to form and grow in place of a larger number of LAGBs. The energetics of the transformation depend on the interfacial energy at the boundaries, the lattice geometry (atomic and planar spacing, structure [i.e. FCC/BCC/HCP] of the material), and the degrees of freedom of the grains involved (misorientation, inclination). The recrystallized material has less total grain boundary area, which means that failure via brittle fracture along the grain boundary is less probable.
Mechanism
Subgrain rotation recrystallization is a type of continuous dynamic recrystallization. Continuous dynamic recrystallization involves the evolution of low-angle grains into high-angle grains, increasing their degree of misorientation. One mechanism could be the migration and agglomeration of like-sign dislocations in the LAGB, followed by grain boundary shearing. The transformation occurs when the subgrain boundaries contain small precipitates, which pin them in place. As the subgrain boundaries absorb dislocations, the subgrains transform into grains by rotation, instead of growth. This process generally occurs at elevated temperatures, which allows dislocations to both glide and climb; at low temperatures, dislocation movement is more difficult and the grains are less mobile.
By contrast, discontinuous dynamic recrystallization involves nucleation and growth of new grains, where due to increased temperature and/or pressure, new grains grow at high angles compared to the surrounding grains.
Mechanical properties
Grain strength generally follows the Hall–Petch relation, which states that yield strength increases in proportion to the inverse square root of the grain size. A higher number of smaller subgrains therefore leads to a higher yield stress, and so some materials may be purposefully manufactured to have many subgrains; in such materials subgrain rotation recrystallization should be avoided.
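A minimal numerical illustration of the Hall–Petch relation, σ_y = σ₀ + k/√d; the constants below are rough, assumed values for a mild steel, given only for illustration:

```python
import math

def hall_petch_yield(d_m, sigma0_mpa=70.0, k_mpa_sqrt_m=0.74):
    """Yield stress (MPa) from the Hall-Petch relation: sigma_y = sigma0 + k / sqrt(d).

    sigma0 (friction stress) and k (strengthening coefficient) are material
    constants; the defaults are rough illustrative values for a mild steel.
    d_m is the mean grain diameter in metres.
    """
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(d_m)

# Refining the grain size from 100 um to 10 um raises the yield stress,
# which is why a structure of many fine (sub)grains strengthens the material.
coarse = hall_petch_yield(100e-6)  # ~144 MPa
fine = hall_petch_yield(10e-6)     # ~304 MPa
```

The relation makes the trade-off in the text concrete: finer grains mean higher strength, so a process that coarsens the grain structure lowers the yield stress.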
Precipitates may also form in grain boundaries. It has been observed that precipitates in subgrain boundaries grow in a more elongated shape parallel to the adjacent grains, whereas precipitates in HAGB are blockier. This difference in aspect ratio may provide different strengthening effects to the material; long plate-like precipitates in the LAGB may delaminate and cause brittle failure under stress. Subgrain rotation recrystallization reduces the number of LAGB, thus reducing the number of flat, long precipitates, and also reducing the number of available pathways for this brittle failure.
Experimental techniques
Different grains and their orientations can be observed using scanning electron microscope (SEM) techniques such as electron backscatter diffraction (EBSD) or polarized optical microscopy (POM). Samples are initially cold- or hot-rolled to introduce a high degree of dislocation density, and then deformed at different strain rates so that dynamic recrystallization occurs. The deformation may be in the form of compression, tension, or torsion. The grains elongate in the direction of applied stress and the misorientation angle of subgrain boundaries increases.
References
Metallurgy
Structural geology | Subgrain rotation recrystallization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 860 | [
"Metallurgy",
"Materials science",
"nan"
] |
7,432,187 | https://en.wikipedia.org/wiki/European%20Union%20Public%20Licence | The European Union Public Licence (EUPL) is a free software licence that was written and approved by the European Commission. The licence is available in 23 official languages of the European Union. All linguistic versions have the same validity. Its latest version, EUPL v1.2, was published in May 2017. Revised documentation for was issued in late2021.
Software has been licensed under the EUPL since the launch of the European Open Source Observatory and Repository (OSOR) in October 2008, now part of Joinup collaborative platform. Although private individuals can utilize the EUPL, its primary users to date have been governments, administrations, and local authorities.
History
EUPL was originally intended to be used for the distribution of software developed in the framework of the IDABC programme, but given its generic scope it is also suitable for use by any software developer. Its main goal is consistency with the copyright law of the Member States of the European Union, while retaining compatibility with popular free software licences such as the GNU General Public License. The first IDABC software packages mentioned are CIRCA groupware, IPM and the eLink G2G, G2C, G2B specification software.
Comparison to other open source/free software licences
EUPL is the first open source licence to be released by an international governing body. A goal of this licence is to create an open-source licence available in the 23 official languages of the European Union that conforms to the existing copyright laws of the Member States of the European Union.
The licence was developed with other open-source licences in mind and specifically authorizes covered works to be re-released under the following licences, when combined with their covered code in larger works:
Many other OSI-approved licences are compatible with the EUPL: Joinup publishes a general compatibility matrix between all OSI-approved licences and the EUPL.
An overview of the EUPL licence and on what makes it different has been published in OSS-Watch.
In 2020, the European Commission published its Joinup Licensing Assistant, which makes it possible to select and compare more than 50 licences, with access to their SPDX identifiers and full text.
Versions
EUPL v1.0 was approved on 9 January 2007.
EUPL v1.1 was approved by the European Commission on 9 January 2009. EUPL v1.1 is OSI certified as from March 2009.
EUPL v1.2 was published in May 2017. EUPL v1.2 is OSI certified in July 2017.
Version 1.2
The EUPL v1.2 was prepared from June 2013; its decision process started in 2016, and it was released on 19 May 2017. A principal objective of the EUPL v1.2 is to update the appendix of compatible licences to cover newer popular licences such as the GNU GPLv3 and AGPLv3.
According to the EUPL v1.1, the European Commission may publish other linguistic versions and/or new versions of the EUPL, insofar as this is required and reasonable, without reducing the scope of the rights granted by the Licence. Future upgrades will not apply automatically when software was expressly released "under the EUPL v1.1 only".
New provisions cover the Application service provider loophole of software distribution: Distribution and/or Communication (of software) includes providing on-line "access to its essential functionalities".
An important characteristic of the EUPL v1.2 is that, unlike the GPL, it is compatible with all other reciprocal licenses listed in the EUPL appendix. Compatibility means that after merging the covered code with code covered by a compatible license, the resulting (combined) derivative work can be distributed under the compatible license.
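This downstream-licensing rule can be sketched as a small lookup. The licence subset and the function below are hypothetical and purely illustrative; the authoritative list is the appendix of the EUPL v1.2 itself:

```python
# Illustrative subset only -- consult the EUPL v1.2 appendix for the real list.
EUPL_COMPATIBLE = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "MPL-2.0", "EPL-1.0", "OSL-3.0"}

def derivative_licence(other_licence):
    """Licence under which a derivative merging EUPL-1.2 code with code under
    `other_licence` may be distributed, per the compatibility clause.

    If the other code is also EUPL (or there is no other code), the combined
    work stays under the EUPL; if it is under a listed compatible licence,
    that licence prevails for the combined work; otherwise the combination
    is not covered by the compatibility clause.
    """
    if other_licence is None or other_licence == "EUPL-1.2":
        return "EUPL-1.2"
    if other_licence in EUPL_COMPATIBLE:
        # Where its provisions conflict with the EUPL, the compatible licence prevails.
        return other_licence
    raise ValueError("combination not covered by the EUPL-1.2 compatibility clause")
```

For example, merging EUPL-covered code with GPLv3-covered code yields a derivative distributable under the GPLv3, which is the one-way compatibility the text describes.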
Another characteristic of the EUPL is that it is interoperable, without any "viral effect" in case of static and dynamic linking. This currently depends on European and national law, according to the Computer Programs Directive (Directive 91/250 EEC or 2009/24). Recital 10 of this Directive defines interoperability and recital 15 states that for making two programs interoperable, the code needed can be copied, translated or adapted. For example, take program A (new original code just written) and program B (a program licensed by a third party), the developer/licensor of A, who is also a legitimate holder or recipient of B may reproduce in A the needed code from B (e.g. the APIs or the needed data structures from program B) without copyright infringement and without authorization from the copyright holder of B. The licensor of A can do and distribute this without being bonded by conditions or limitations imposed by a licence of program B. This must stay compatible with the normal use of program B and cannot prejudice the legitimate interest of the copyright holder of B.
Unlike the "articles", the directive "recitals" are not transposed as such in national laws. However, recitals are part of European law: they are serving for understanding the scope and rationale of the law, and will be used by the court for interpreting the law, as the case may be. While recitals in EU Directives and Regulations are not considered to have independent legal value, they can expand an ambiguous provision's scope. They cannot restrict an unambiguous provision's scope, but they can be used to determine the nature of a provision, or to adapt it to new circumstances.
Interoperability
It is important to distinguish between the various flavours of the “strong copyleft” concept. In the GPL/AGPL licensor vision, strong copyleft implies restrictions and conditions on interoperability (under the theory that linking other software with the covered code creates a combined derivative) and on compatibility (since no derivative could be licensed under another licence, which may create incompatibilities). The EUPL vision, which rests on EU law, is the contrary: linking creates no derivative, and when merging differently licensed source code is a necessity, the resulting derivative can be licensed under a compatible licence. For some of those compatible licences the copyleft is known to be “weaker” (e.g. the MPL), but this has no practical impact, because under the EUPL the compatible licence prevails when its provisions conflict with those of the EUPL. Since none of the compatible licences prohibits the strong reciprocity implemented by the EUPL (the obligation to publish and share the source code of derivatives, even when distributed through a network), the copyleft resulting from the EUPL can be considered strong. For this reason, the German lawyer Niklas Plutte created for the EUPL the new category of "interoperable copyleft licence".
Philosophy
In November 2023, a discussion paper, "The seven pillars of wisdom", published in the framework of the adoption of the Interoperable Europe Act, was proposed for discussion by the author of the EUPL-1.2; it explains the philosophy behind the EUPL text.
Member states policies
From 2010, EU member states have adopted or revised policies aimed at encouraging – when appropriate – the open source distribution of public sector applications. The EUPL is formally mentioned in some of these policies:
Malta
Spain
Estonia: Ministry of Economic Affairs and Communications, Department of State Information Systems. Information Society Yearbook 2009.
Slovakia
France: Décret n° 2021-1559 of 1 December 2021, amending the Code of Relations between the Public and the Administration, Article D323-2-1, et seq.
See also
Software using the European Union Public Licence
Comparison of free and open-source software licences
GPL linking exception
References
External links
Full English text of the licence (PDF)
Legal context and milestones of the elaboration of the EUPL (by Severine Dusollier) (PDF)
Article of professor Severine Dusollier with a particular reference to the EUPL (PDF)
"Speech of Neelie Kroes, Vice President of the European Commission", YouTube video
"The European Union can show off with its own, free, open source license", Linux magazine
EUPL - An overview (by Rowan Wilson)
The European Union Public Licence (by Patrice-Emmanuel Schmitz) - A legal analysis in the IFOSSLR (International Free and Open Source Software Law Review), Vol. 5 n°2 (2013)
Computer law
Copyleft
Copyright law of the European Union
Free content licenses
Free and open-source software licenses
Copyleft software licenses
Information technology organizations based in Europe | European Union Public Licence | [
"Technology"
] | 1,755 | [
"Computer law",
"Computing and society"
] |
7,433,307 | https://en.wikipedia.org/wiki/List%20of%20New%20York%20City%20housing%20cooperatives | A partial list of housing cooperatives in New York City.
Projects originally built as housing cooperatives
Alku and Alku Toinen, started in 1916 by Finnish immigrants
Hudson View Gardens (1923–25), Hudson Heights, real estate developer Charles Paterno, architect George Fred Pelham Jr.
United Workers Cooperative Colony (1927–1929), 339 + 385 units, on Allerton Avenue on the Bronx, sponsored by communist garment industry workers; known as "The Communist Coops"
Dunbar Apartments, built by John D. Rockefeller Jr. in 1928 as a housing cooperative to provide housing for African Americans. Bankrupt in 1936 and taken over by Rockefeller.
Sponsored by Amalgamated Clothing Workers of America, Architects Springsteen and Goldhammer, Herman Jessor
Amalgamated Housing Cooperative (1927, 1947–49, expansion 1952–55, 1968–70 Bronx, "The Amalgamated", 1,435 units; still operating as a co-operative
Amalgamated Dwellings (1930), in Cooperative Village, Lower East Side of Manhattan, New York City, 236 units
Hillman Housing Corporation (1947–1950), in Cooperative Village, 807 units
Under the Housing Development Fund Corporation
566 W. 159th Street, Washington Heights
1007-09 E. 174th Street, the Bronx
Lenox Court, East Harlem
Sponsored by the United Housing Foundation and International Ladies' Garment Workers' Union. Architects George W. Springsteen and Herman Jessor
East River Houses, (1956), in Cooperative Village, 1,672 units,
Seward Park Housing Corporation, in Cooperative Village, 1,728 units
Mutual Houses and Park Reservoir Housing Corporation (1955), Bronx affiliated with Amalgamated Housing
Penn South (1962), 2,820 units, Chelsea, Manhattan
Rochdale Village (1965), 5,860 units, central Queens
Amalgamated Warbasse Houses (1965), 2,585 units, Coney Island, Brooklyn
Amalgamated Towers (1969), 316 units (see "Amalgamated Housing Cooperative" above)
Co-op City (1968–1971), Baychester area of the Bronx 15,382 units
Twin Pines Village (Starrett City) (1975), 5,881 units, southern Brooklyn
Mitchell-Lama Housing Program
Morningside Gardens (1957), Morningside Heights
Southbridge Towers (1969), Lower Manhattan
Confucius Plaza (1975), Chinatown, Manhattan
Converted rental property
Castle Village (1939, 1985), real estate developer Charles Paterno, architect George Fred Pelham Jr.
See also
List of condominiums in the United States
References
Labor and housing in New York City
2004 Annual Report – Mitchell-Lama Housing Companies in New York State
DHCR-Supervised Developments Within New York City
DHCR-Supervised Developments Outside New York City
cooperatives
"Engineering"
] | 554 | [
"Architecture lists",
"Architecture"
] |
7,433,848 | https://en.wikipedia.org/wiki/Salicylate%20sensitivity | Salicylate sensitivity is any adverse effect that occurs when a usual amount of salicylate is ingested. People with salicylate intolerance are unable to consume a normal amount of salicylate without adverse effects.
Salicylate sensitivity differs from salicylism, which occurs when an individual takes an overdose of salicylates. Salicylate overdose can occur in people without salicylate sensitivity, and can be deadly if untreated. For more information, see aspirin poisoning.
Salicylates are derivatives of salicylic acid that occur naturally in plants and serve as a natural immune hormone and preservative, protecting the plants against diseases, insects, fungi, and harmful bacteria. Salicylates can also be found in many medications, perfumes and preservatives. Both natural and synthetic salicylates can cause health problems in anyone when consumed in large doses. But for those who are salicylate intolerant, even small doses of salicylate can cause adverse reactions.
Symptoms and signs
The most common symptoms of salicylate sensitivity are:
Intestinal inflammation or diarrhea
Itchy skin, hives or rashes
Asthma and other breathing difficulties
Polyps with asthma
Angioedema
Rhinitis, sinusitis, nasal polyps
Asthma and nasal polyps are also symptoms of aspirin-exacerbated respiratory disease (AERD, Samter's Triad), which is not believed to be caused by dietary salicylates.
Cause
Diagnosis
There is no laboratory test for salicylate sensitivity. Typically testing is done by an "elimination challenge," to see if symptoms improve, or "provocative challenge," which intends to induce a controlled reaction as a means of confirming diagnosis. During provocative challenge, the person is given incrementally higher doses of salicylates, usually aspirin, under medical supervision, until either symptoms appear or the likelihood of symptoms appearing is ruled out. This only pertains to short-term symptoms such as digestive, respiratory, and skin itching, rather than slower-developing symptoms such as nasal polyps.
Treatment
Salicylate sensitivity can be treated with the use of low-salicylate diets, such as the Feingold Diet. The Feingold Diet removes artificial colors and preservatives and salicylates, whereas the Failsafe Diet removes salicylates, as well as amines and glutamates. The range of foods that have no salicylate content is very limited, and consequently salicylate-free diets are very restricted.
Montelukast is one form of treatment used in aspirin-intolerant asthma.
Epidemiology
Salicylate sensitivity is noted to be more common in those who also have asthma; an estimated 2–22% of people with asthma also have the intolerance.
History
An important salicylate drug is aspirin, which has a long history. Aspirin intolerance was widely known by 1975, when the understanding began to emerge that it is an adverse drug reaction, not an allergy.
Terminology
Salicylate intolerance is a form of food intolerance or of drug intolerance.
Salicylate sensitivity is a pharmacological reaction, not a true IgE-mediated allergy. However, it is possible for aspirin to trigger non-allergic hypersensitivity reactions. About 5–10% of asthmatics have aspirin hypersensitivity, but dietary salicylates have been shown not to contribute to this. The reactions in AERD (Samter's triad) are due to inhibition of the COX-1 enzyme by aspirin, as well as other NSAIDs that are not salicylates. Dietary salicylates have not been shown to significantly affect COX-1.
AERD refers to NSAID sensitivity in conjunction with nasal polyps and asthma.
See also
Aspirin-exacerbated respiratory disease
NSAID hypersensitivity reactions
References
Further reading
External links
Metabolic disorders
Sensitivities
Aspirin | Salicylate sensitivity | [
"Chemistry"
] | 864 | [
"Metabolic disorders",
"Metabolism"
] |
7,434,970 | https://en.wikipedia.org/wiki/Hard%20engineering | Hard engineering involves the construction of hydraulic structures to protect coasts from erosion. Such structures include seawalls, gabions, breakwaters, groynes and tetrapods.
Effects
Hard engineering can cause unintended environmental consequences, such as new erosion and altered sedimentation patterns, that are detrimental to the immediate human and natural environment or along down-coast locations and habitats.
Seawalls and bulkheads may have multiple negative effects on nearshore ecosystems due to the way they reflect wave energy instead of dissipating it. Energy from reflected waves can cause a scouring effect on substrate below the structure, resulting in loss or displacement of sediment. Over time, this effect may lead to a decrease in the size of intertidal and nearshore habitats. This effect is also known as coastal squeeze. In addition, bulkheads and seawalls offer no filtering for surface runoff, which means that anthropogenic pollutants and chemicals in armored areas may enter coastal waters relatively quickly.
Hard engineering, also called shoreline armoring, comes with other ecological effects on top of habitat loss and increased surface runoff. Structures that are built between land and sea are usually made of material not native to shoreline ecosystems. For instance, most sea walls and interlocking coastal defense structures are made of concrete, which may lend itself as habitat for invasive species rather than native ones. These structures also impede shoreline access, blocking some or all species from accessing refuge on dry land. In these armored areas, nutrient exchange between tidal and riparian ecosystems is threatened or cut off entirely. These issues arise from hard engineered sea shores, and lead many to believe that living shoreline techniques are far more beneficial ecologically and in terms of long-term erosion control.
Examples
Examples of hard engineering include:
Groynes – Low walls constructed at right angles to retain sediments that might otherwise be removed due to longshore drift. These structures absorb or reduce the energy of the waves and cause materials to be deposited on the updrift side of the groyne facing the longshore drift.
Seawalls – Seawalls are constructed to protect coastlines against wave attack by absorbing wave energy. Most seawalls are made out of concrete or stone and are built parallel to the coast. They have been constructed in thousands of locations throughout the world.
Rip-rap/rock armour – Boulders piled up against the coast that absorb the energy of the waves
Gabions – wire cages filled with rocks to absorb wave energy
References
Water and the environment | Hard engineering | [
"Environmental_science"
] | 508 | [
"Hydrology",
"Hydrology stubs"
] |
7,435,035 | https://en.wikipedia.org/wiki/Soft%20engineering | Regarding the civil engineering of shorelines, soft engineering is a shoreline management practice that uses sustainable ecological principles to restore shoreline stabilization and protect riparian habitats. Soft Shoreline Engineering (SSE) uses the strategic placement of organic materials such as vegetation, stones, sand, debris, and other structural materials to reduce erosion, enhance shoreline aesthetic, soften the land-water interface, and lower costs of ecological restoration.
Soft Shoreline Engineering is distinguished from Hard Shoreline Engineering, which tends to use steel sheet piling or concrete breakwalls to fortify shorelines and protect against hazards. Generally, Hard Shoreline Engineering is used for navigational or industrial purposes. By contrast, Soft Shoreline Engineering emphasizes the application of ecological principles without compromising the engineered integrity of the shoreline. Its direct alternative is hard engineering.
Background
Hard shoreline engineering is the use of non-organic reinforcing materials, such as concrete, steel, and plastic, to fortify shorelines, stop erosion, and protect urban development from flooding. However, as shoreline development in coastal cities increased dramatically, the detrimental ecological effects became apparent. Hard shoreline engineering was designed to accommodate human development along the coast, focusing on increasing efficiency in the commercial, navigational, and industrial sectors of the economy. In 2003, the global population living within of an ocean was 3 billion and is expected to double by the year 2025. These developments came at a high cost: destroying biological communities, isolating riparian habitats, and altering the natural transport of sediment by disrupting wave action and long-shore currents. Many coastal regions began to see significant coastal degradation due to human development, with the Detroit River losing as much as 97% of its coastal wetland habitats. Singapore likewise documented the disappearance of the majority of its mangrove forests, coastal reefs, and mudflat regions between 1920 and 1990 due to shoreline development.
Towards the end of the 20th century, coastal engineering practices underwent a gradual transition towards incorporating the natural environment into planning considerations. In stark contrast to hard engineering, employed with the sole purpose of improving navigational, industrial, and commercial uses of the river, soft engineering takes a multi-faceted approach, developing shorelines for a multitude of benefits and incorporating consideration of fish and wildlife habitat. Tasked with the responsibility to construct and maintain federally authorized coastal civil works projects in the United States, the U.S. Army Corps of Engineers plays a major part in the development of the principles of coastal engineering as practiced within the U.S. In part due to degradation of coastline across the United States, the Corps has since updated its coastal management practices with an increased emphasis on computer-based modeling, project upkeep, and environmental restoration. However, soft and hard engineering are not mutually exclusive; a blend of the two management practices can be used to design waterfronts, especially for high-flow bodies of water.
Principles of Soft Shoreline Engineering
Imitate Nature - Imitating the characteristics of the natural environment is critical to the success of soft engineering efforts. Existing traits of a landscape provide telltale signs of the geomorphic forces at play. Trying to add vegetation to a barren area with high winds will not produce the intended results.
Gentle Slopes - Gentle slopes are most commonly found in the natural environment and are the most stable under the forces of gravity. Gradually inclined slopes along banks and shorelines allow for the dissipation of wave energy over a greater distance, reducing the force of erosion.
"Soft Armoring" - Soft armoring includes the use of materials such as live plants, shrubs, root wads, logs, vegetative mats, etc. These materials, which are alive, can adapt to changes in the environment and help maintain regular coastal processes by disrupting the natural shoreline in the least way possible. Soft armoring is also paramount to enhancing shoreline habitats and improving water quality.
Material Variety - A variety of textures and vegetation enhances aesthetic, diversifies the natural landscape, and maximizes biodiversity. Native plants and endangered or threatened species should be used whenever possible. The use of locally abundant and easily accessible natural resources also cuts development costs significantly.
Techniques
Planting
The most basic and fundamental form of soft shoreline engineering is adding native vegetation to degraded or damaged shoreline areas to bolster the structural integrity of the soil. The deep roots of the vegetation bind the soil together, strengthening the structural integrity of the soil and preventing it from cracking apart and crumbling into the body of water. An added layer of vegetation also protects embankments from corrosive forces such as rain and wind.
Rolled Erosion Control Products (RECP)
Rolled erosion control products are blankets or netting created with both natural and synthetic materials used to protect the surface of the ground from erosive forces and promote the growth of vegetation. RECPs are often used in locations highly susceptible to erosion, such as steep slopes, channels, and areas where natural vegetation is sparse. These products aid the growth of vegetation by protecting soil from raindrops, keeping seed in place, and maintaining moisture and temperature parameters consistent with plant growth. The typical composition of an RECP includes seed, fertilizer, degradable stakes, and a binding material. Although design varies by manufacturer, most RECPs are biodegradable or photodegradable and decompose after a given amount of time.
Coir Logs
Erosion control coir logs are natural fiber products designed to stabilize soil by supporting erosion prone areas such as river banks, slopes, hills, and streams. Coir is coconut fiber extracted from the outer husk of a coconut and used in products such as ropes, mats, and nets. Like RECPs, coir logs are natural and biodegradable, being composed primarily of densely packed coir fibers held together by a tubular coir twine netting. Coir fiber is strong and water resistant, making it a durable barrier against waves and river currents. Multiple sections of coir log can be joined together by twine to provide erosion control and prevention to vulnerable areas. Coir logs can also be vegetated and used to establish root systems of native plants along wetland edges.
Live Stakes and Fascines
Live stakes and fascines are made from tree or shrub species that thrive in moist soil conditions and can be used strategically to stabilize stream banks and shorelines. Live stakes are hardwood cuttings with the branches removed that, when planted in moist soil, will grow new plants from the stems of the cut branches. They can be used alone, implanted into pilot holes in the soil, or used as a device to secure other bioengineering materials such as rolled erosion control products and coir logs. Fascines are similar live branches strapped together and laid horizontally along streambank contours to impede or prevent the flow of water and curb erosion.
Brush Mattress
Brush mattresses, also known as live brush mats or brush matting, are a technique used to form immediate protective cover over a streambank. Brush mattresses are dense layers of live stakes, fascines, and branch cuttings held down with additional stakes to protect the embankment. The brush mattress is intended to eventually take root and enhance the conditions for the colonization of native plants. Along with aiding in the restoration of riparian habitats, this technique intercepts sediment flowing downstream and provides a number of benefits for fish and other aquatic species by offering physical protection from predators, regulating the water temperature, and shading the stream.
Live Cribwalls
Live cribwalls are structures that resemble a wooden log cabin built into a streambank and filled with natural materials such as soil, dormant wood cuttings, and rock. The live cribwall is able to fortify stream banks through the combination of the sturdy log structure and the root mass that sprouts from the wood cuttings and takes hold deep in the bank, armoring it against erosion. Although quite labor-intensive, cribwalls can last for decades and provide excellent aquatic habitat below the waterline. Cribwalls can prevent a stream from forming a split channel, but should not be used in streams with downcutting, as the base of the structure will be compromised.
Encapsulated Soil Lifts
Encapsulated soil lifts are a technique that "encapsulates" soil in a biodegradable blanket, with the wrapped layers arranged to create the desired stream bank slope. The layers of soil, or lifts, are used to stabilize the banks of moderate- to high-energy shorelines. Once constructed, the lifts are planted with the seeds of native flowers, shrubs, and grasses. In addition to reducing sediment erosion into the body of water, soil lifts protect water quality and the surrounding riparian habitats.
Vegetated Riprap
Vegetated riprap is a soft shoreline engineering alternative to conventional riprap for erosion protection. Conventional riprap is a form of rock armor, rubble, or concrete used to fortify shoreline structures against the forces of erosion. Vegetated riprap is a more economical form of shoreline protection that enhances fish and wildlife habitat while softening the appearance and improving the aesthetic of the embankment. Vegetated riprap incorporates native vegetation along with rocks to create live cuttings in the bank. This technique improves the natural habitat of aquatic species while armoring the banks and redirecting water flows.
Geo Bags
Geo bags or erosion control bags/tubes act as sediment removing filters, protecting against shoreline erosion by trapping sludge and sand particles and preventing them from leaving the coastal area. The bags are designed to allow the natural flow of water to filter in and out without inhibition, limiting disruption to the coastline. These geo bags or tubes are designed to look natural in the coastal environment, as opposed to concrete alternatives, and are built to endure the outdoors. Geo bag material is typically composed of geotextile fabric and can be designed for different specifications.
Best Management Practices
In order to incorporate principles of soft engineering into practice, shorelines must be redeveloped to achieve multiple objectives. For example, soft shoreline engineering has the ability to decrease costs, stabilize banks, enhance aesthetic value, protect riparian habitats, expand public access, and support a diversity of wildlife. To achieve the goal of multiple objectives for waterfront development and design, a multi-disciplinary team must be formed to integrate environmental, social, and economic principles.
The first step in implementing soft engineering is conducting a preliminary assessment of the site and determining whether soft engineering is applicable and practical. A typical assessment includes identifying the extent of the project area, evaluating existing uses, documenting amenities and characteristics such as habitats, species, public access, and development, and considering the impact of future desired use. If the team decides the site is fit for soft engineering, a process is designed to achieve the predetermined goals of the development. Standards and targets must then be created to measure project development and progress. Interdisciplinary partnerships must be established at an early stage in the process to ensure the incorporation of environmental, social, and economic values. Priorities and alternatives are established, with the team working together to decide on the best management practices to achieve maximum effectiveness. After best management practices have been determined and incorporated, project success is judged by whether objectives are met and preservation and conservation efforts are effective.
Case Studies
Greater Detroit American Heritage River Initiative
In 1998, the President of the United States created the American Heritage River Initiative to restore and revitalize rivers and waterfronts through the use of newly introduced soft engineering techniques. A report by Schneider found that 47.2% of the U.S. and Canadian shoreline of the Detroit River had been fortified with concrete or steel, in accordance with traditional hard engineering management practices. In 1999, a U.S.-Canada SSE conference developed the best management practices for SSE use, which were put into effect among the 38 SSE projects that took place in the Detroit River-western Lake Erie watershed. A total of $17.3 million was spent on these projects, which aimed to improve riparian and aquatic habitat, restore natural shoreline, and treat stormwater. The study found that the economic benefits of ecological restoration are profound and provide compelling evidence for further investigation and investment in shoreline rehabilitation. Researchers also found that SSE not only improved the natural habitat but, from a social perspective, helped reconnect people to nature, fostering a sense of human attachment to the success and health of these waterfronts.
Mississippi
Beginning with British colonial establishment in 1819, Mississippi's coastline has undergone an extensive history of decline through alteration and land reclamation. Hilton and Manning found that from 1922 to 1993 the area of mangroves, coral reefs, and intertidal mudflats decreased dramatically, with the proportion of natural coastline dropping from 96% to 40%. To combat these deleterious anthropogenic effects, Mississippi's government adopted a Master Plan in 2008 which incorporated the modification of shorelines in accordance with the ecological principles of soft engineering. A study of the success of ecological engineering in Singapore found that the most effective way to introduce ecological principles into shoreline design and preservation is to implement a top-down approach that coordinates and educates the multitude of agencies involved in coastal management. Mississippi's loss of natural coastline is just one example of the inevitable detriment of intensive human development, and soft engineering techniques provide an effective way to balance shoreline conservation and restoration with the urban development that is sure to continue.
References
See also
Hard engineering
Ecological Engineering
Erosion Control
Coastal engineering | Soft engineering | [
"Engineering"
] | 2,730 | [
"Coastal engineering",
"Civil engineering"
] |
7,435,450 | https://en.wikipedia.org/wiki/ABS%20Steels | ABS Steels are types of structural steel which are standardized by the American Bureau of Shipping for use in shipbuilding.
ABS steels include several ordinary-strength grades and two levels of higher-strength grades.
All of these steels have been engineered to be optimal long-lived shipbuilding steels. ABS does permit the use of other steels in shipbuilding, but discourages it, and requires more detailed engineering analysis.
Basic properties
All ABS steels are standard carbon steels. As with other grades of steel, they have a specific gravity of 7.8.
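A specific gravity of 7.8 corresponds to a density of about 7,800 kg/m³, which makes plate-weight estimates straightforward. A minimal sketch (the plate dimensions and function name are illustrative, not from any ABS specification):

```python
# Estimate the mass of a steel plate from its dimensions and density.
# Specific gravity 7.8 => density ~7800 kg/m^3 (relative to water at 1000 kg/m^3).

DENSITY_KG_M3 = 7.8 * 1000.0  # specific gravity times the density of water

def plate_mass_kg(length_m: float, width_m: float, thickness_m: float) -> float:
    """Mass of a rectangular plate in kilograms."""
    return length_m * width_m * thickness_m * DENSITY_KG_M3

# Example: a 2 m x 1 m x 10 mm plate
print(round(plate_mass_kg(2.0, 1.0, 0.010), 1))  # 156.0 (kg)
```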
Material Properties
Ordinary-Strength
Ordinary-strength ABS shipbuilding steel comes in a number of grades, A, B, D, E, DS, and CS. On certified steels, the plates are marked with the grade and a preceding "AB/", e.g. AB/A etc.
Yield point for all ordinary-strength ABS steels is specified as 34,000 psi (235 MPa), except for ABS A in thicknesses of greater than 1 inch (25 mm) which has yield strength of 32,000 psi (225 MPa), and cold flange rolled sections, which have yield strength of 30,000 psi (205 MPa).
Ultimate tensile strength of ordinary strength alloys is 58,000 - 71,000 psi (400-490 MPa), except for ABS A shapes and bars with 58,000 - 80,000 psi (400-550 MPa), and cold flanged sections with 55,000 - 65,000 psi (380-450 MPa).
The various grades have slightly differing alloy chemical ingredients, and differing fracture toughness.
Higher-Strength
Higher-strength ABS shipbuilding steel comes in six grades of two strengths, AH32, DH32, EH32, AH36, DH36, and EH36.
The 32 grades have yield strength of 45,500 psi (315 MPa), and ultimate tensile strength of 64,000 - 85,000 psi (440-590 MPa).
The 36 grades have yield strength of 51,000 psi (355 MPa), and ultimate tensile strength of 71,000 - 90,000 psi (490-620 MPa).
Per Steel Vessel Rules Part 2 Chapter 1 Section 3 Table 2 (pg 36).
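The minimum yield values quoted above can be collected into a small lookup for spec checks. A sketch using only the figures given in this section (the dictionary and function names are my own, and the lower minima for thick ABS A plate and cold-flanged sections are deliberately not modeled):

```python
# Minimum yield strengths (MPa) for ABS shipbuilding steel grades,
# taken from the values listed in the text above.

MIN_YIELD_MPA = {
    # ordinary strength (base case only)
    "A": 235, "B": 235, "D": 235, "E": 235, "DS": 235, "CS": 235,
    # higher strength
    "AH32": 315, "DH32": 315, "EH32": 315,
    "AH36": 355, "DH36": 355, "EH36": 355,
}

def meets_yield_spec(grade: str, measured_yield_mpa: float) -> bool:
    """True if a mill-test yield point is at least the grade minimum."""
    return measured_yield_mpa >= MIN_YIELD_MPA[grade]

print(meets_yield_spec("AH36", 360))  # True
print(meets_yield_spec("DH32", 300))  # False
```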
Forms
ABS steel is produced in a variety of different forms, including:
Plates
Bars
Pipes
Structural Shapes
References
Shipbuilding
Steels
Structural steel | ABS Steels | [
"Engineering"
] | 495 | [
"Structural engineering",
"Structural steel",
"Steels",
"Shipbuilding",
"Alloys",
"Marine engineering"
] |
7,435,692 | https://en.wikipedia.org/wiki/Punch%20list | A punch list is a document prepared during key milestones or near the end of a construction project listing works that do not conform to contract drawings and specifications that the general contractor must correct prior to final payment. The work may include incomplete or incorrect installations or incidental damage to existing finishes, material, and structures. The list is usually made by the owner, architect or designer, or general contractor while they tour and visually inspect the project.
In the United States construction industry, contract agreements are usually written to allow the owner to withhold (retain) the final payment to the general contractor as "retainage". The contractor is bound by the contract to complete a list of contract items, called a punch list, in order to receive final payment from the owner. The designer (typically a licensed professional architect or engineer) is usually also incorporated into the contract as the owner's design representative and agent, to verify that completed contract work has complied with the design.
In most contracts, the general conditions of the contract for construction require the contractor, when it believes this to be the case, to declare the construction project to have reached "substantial completion" and to request a "pre-final" inspection. According to the General Conditions (AIA A201 Section 9.8.2), the Contractor prepares and submits to the architect a comprehensive list of items to be completed or corrected. This snag list, as generated by the Contractor, is known as the punch list. Upon receipt of the contractor's list, the architect then inspects the work to determine whether it is "substantially complete."
Final payment to the contractor is only made when all of the items on the punch list have been confirmed to meet the project-design specifications required by the contract, or some other mutually agreed resolution for each item has been reached.
Examples of punch-list items include damaged building components (e.g. repair broken window, replace stained wallboard, repair cracked paving, etc.), or problems with the final installation of building materials or equipment (for example, install light fixture, connect faucet plumbing, install baseboard trim, reinstall peeling carpet, replace missing roof shingles, rehang misaligned exterior door, fire and pressure-test boiler, obtain elevator use permit, activate security system, and so on).
Under one hypothesis, the phrase takes its name from the historical process of punching a hole in the margin of the document, next to one of the items on the list. This indicated that the work was completed for that particular construction task. Two copies of the list were punched at the same time, in order to provide an identical record for the architect and contractor.
A rolling punch list is the most common approach to managing these tasks efficiently, thereby minimizing the likelihood of having to grapple with a large number of punch-list items at the end of a major project. A rolling punch list entails constantly verifying work status throughout the duration of the project, with a rigid closeout schedule assigned to each task. Finishing the project error-free requires planning, communication, and managing the punch list throughout the project.
Construction punch list software
Starting in 2013 when mobile software became popular on construction sites, many construction teams started using software to manage their punch lists. Today there are a variety of punch list applications, ranging from simple mobile apps to more comprehensive web and mobile platforms.
See also
References
Construction management
Construction documents | Punch list | [
"Engineering"
] | 696 | [
"Construction",
"Construction management"
] |
7,436,045 | https://en.wikipedia.org/wiki/6061%20aluminium%20alloy | 6061 aluminium alloy (Unified Numbering System (UNS) designation A96061) is a precipitation-hardened aluminium alloy, containing magnesium and silicon as its major alloying elements. Originally called "Alloy 61S", it was developed in 1935. It has good mechanical properties, exhibits good weldability, and is very commonly extruded (second in popularity only to 6063). It is one of the most common alloys of aluminium for general-purpose use.
It is commonly available in pre-tempered grades such as 6061-O (annealed), tempered grades such as 6061-T6 (solutionized and artificially aged) and 6061-T651 (solutionized, stress-relieved stretched and artificially aged).
Chemical composition
6061 Aluminium alloy composition by mass:
Properties
The mechanical properties of 6061 depend greatly on the temper, or heat treatment, of the material. Young's modulus is the same regardless of temper.
6061-O
Annealed 6061 (6061-O temper) has maximum ultimate tensile strength no more than , and maximum yield strength no more than or . The material has elongation (stretch before ultimate failure) of 10–18%. To obtain the annealed condition, the alloy is typically heat soaked at 415 °C for 2-3 hours.
6061-T4
T4 temper 6061 has an ultimate tensile strength of at least or and yield strength of at least . It has elongation of 10-16%.
6061-T6
T6 temper 6061 has been treated to provide the maximum precipitation hardening (and therefore maximum yield strength) for a 6061 aluminium alloy. It has an ultimate tensile strength of at least and yield strength of at least . More typical values are and , respectively. This can exceed the yield strength of certain types of stainless steel. In thicknesses of or less, it has elongation of 8% or more; in thicker sections, it has elongation of 10%. T651 temper has similar mechanical properties.
The typical value for thermal conductivity for 6061-T6 at is around 152 W/m K.
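Given a conductivity of about 152 W/(m·K), steady-state heat flow through a flat 6061-T6 section follows directly from Fourier's law, q = k·A·ΔT/L. A sketch with illustrative geometry (the conductivity value is from the text; the temperature condition is assumed to be near room temperature):

```python
# One-dimensional steady-state conduction through a 6061-T6 plate,
# using Fourier's law q = k * A * dT / L. Geometry and temperature
# difference below are illustrative.

K_6061_T6 = 152.0  # W/(m*K), value quoted in the text

def heat_flow_w(area_m2: float, thickness_m: float, delta_t_k: float) -> float:
    """Heat flow in watts through a flat section of the given geometry."""
    return K_6061_T6 * area_m2 * delta_t_k / thickness_m

# 0.1 m^2 plate, 5 mm thick, 20 K temperature difference
print(round(heat_flow_w(0.1, 0.005, 20.0)))  # 60800 (W)
```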
The fatigue limit under cyclic load is for 500,000,000 completely reversed cycles using a standard RR Moore test machine and specimen. Note that aluminium does not exhibit a well defined "knee" on its S-N curve, so there is some debate as to how many cycles equates to "infinite life". Also note the actual value of fatigue limit for an application can be dramatically affected by the conventional de-rating factors of loading, gradient, and surface finish.
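The de-rating mentioned above is commonly handled with multiplicative correction factors applied to the test-specimen fatigue limit (a Marin-equation-style approach). The sketch below is illustrative only; the baseline and factor values are placeholders, not 6061 test data:

```python
# Illustrative Marin-style de-rating of a baseline fatigue limit.
# Real factor values depend on loading type, stress gradient/size,
# and surface finish, as noted in the text; these are placeholders.

def derated_fatigue_limit(baseline_mpa: float,
                          k_load: float,
                          k_gradient: float,
                          k_surface: float) -> float:
    """Apply multiplicative correction factors to a test-specimen limit."""
    return baseline_mpa * k_load * k_gradient * k_surface

# e.g. axial loading, large section, machined surface (placeholder values)
print(derated_fatigue_limit(100.0, 0.85, 0.9, 0.8))
```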
Microstructure
Different aluminium heat treatments control the size and dispersion of precipitates in the material. Grain boundary sizes also change, but do not have as important an impact on strength as the precipitates. Grain sizes can change by orders of magnitude depending on processing and stress; grains can be as small as a few hundred nanometres, but are typically a few micrometres to hundreds of micrometres in diameter. Iron, manganese, and chromium secondary phases often form as inclusions in the material.
Grain sizes in aluminium alloys are heavily dependent upon the processing techniques and heat treatment. Different cross-sections of material which has been stressed can cause order of magnitude differences in grain size. Some specially processed aluminium alloys have grain diameters which are hundreds of nanometres, but most range from a few micrometres to hundreds of micrometres.
Uses
6061 is commonly used for the following:
construction of aircraft structures, such as wings and fuselages, more commonly in homebuilt aircraft than commercial or military aircraft. 2024 alloy is somewhat stronger, but 6061 is more easily worked and remains resistant to corrosion even when the surface is abraded. This is not the case for 2024, which is usually used with a thin Alclad coating for corrosion resistance.
yacht construction, including small utility boats.
automotive parts, such as the chassis of the Audi A8 and the Plymouth Prowler.
flashlights
Scuba tanks and other high pressure gas storage cylinders (post 1995)
6061-T6 is used for:
bicycle frames and components
middle to high-end recurve risers
many fly fishing reels.
the Pioneer plaque
the secondary chambers and baffle systems in firearm sound suppressors (primarily pistol suppressors for reduced weight and improved mechanical functionality), while the primary expansion chambers usually require 17-4PH or 303 stainless steel or titanium.
the upper and lower receivers of many non mil-spec AR-15 rifle variants.
many aluminium docks and gangways, welded into place.
material used in some ultra-high vacuum (UHV) chambers
many parts for remote controlled model aircraft, notably helicopter rotor components.
large amateur radio antennas.
fire department rescue ladders
Welding
6061 is highly weldable, for example using tungsten inert gas (TIG) or metal inert gas (MIG) welding. Typically, after welding, the properties near the weld are those of 6061-T4, a loss of strength of around 40%. The material can be re-heat-treated to restore near-T6 temper for the whole piece. After welding, the material can also naturally age and restore some of its strength; most strength is recovered in the first few days to a few weeks. Nevertheless, the Aluminum Design Manual (Aluminum Association) recommends that the design strength of the material adjacent to the weld be taken as 165 MPa (24,000 psi) without proper heat treatment after welding. Typical filler material is 4043 or 5356.
Extrusions
6061 is an alloy used in the production of extrusions: long structural shapes of constant cross-section produced by pushing metal through a shaped die.
Cold and Hot Stamping
6061 sheet in the T4 condition can be formed with limited ductility in the cold state. For deep-draw and complex shapes, and to avoid spring-back, an aluminium hot stamping process (Hot Form Quench) can be used, which forms a blank at an elevated temperature (~550 °C) in a cooled die, leaving the part in the W-temper condition before artificial aging to the T6 full-strength state.
Forgings
6061 is an alloy that is suitable for hot forging. The billet is heated through an induction furnace and forged using a closed die process. This particular alloy is suitable for open die forgings. Automotive parts, ATV parts, and industrial parts are just some of the uses as a forging. Aluminium 6061 can be forged into flat or round bars, rings, blocks, discs and blanks, hollows, and spindles. 6061 can be forged into special and custom shapes.
Castings
6061 is not an alloy that is traditionally cast due to its low silicon content affecting the fluidity in casting. It can be suitably cast using a specialized centrifugal casting method. Centrifugally cast 6061 is ideal for larger rings and sleeve applications that exceed the limitations of most wrought offerings.
Equivalent materials
6061 Aluminium Equivalent Table
Standards
Different forms and tempers of 6061 aluminium alloy are discussed in the following standards:
ASTM B209: Standard Specification for Aluminum and Aluminum-Alloy Sheet and Plate
ASTM B210: Standard Specification for Aluminum and Aluminum-Alloy Drawn Seamless Tubes
ASTM B211: Standard Specification for Aluminum and Aluminum-Alloy Bar, Rod, and Wire
ASTM B221: Standard Specification for Aluminum and Aluminum-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes
ASTM B308/308M: Standard Specification for Aluminum-Alloy 6061-T6 Standard Structural Profiles
ASTM B483: Standard Specification for Aluminum and Aluminum-Alloy Drawn Tube and Pipe for General Purpose Applications
ASTM B547: Standard Specification for Aluminum and Aluminum-Alloy Formed and Arc-Welded Round Tube
ISO 6361: Wrought Aluminium and Aluminium Alloy Sheets, Strips and Plates
References
Further reading
"Properties of Wrought Aluminum and Aluminum Alloys: 6061, Alclad 6061", Properties and Selection: Nonferrous Alloys and Special-Purpose Materials, Vol. 2, ASM Handbook, ASM International, 1990, pp. 102–103.
External links
Aluminum 6061 Properties
6061 Aluminum vs 5052 Aluminum
Aluminium alloy table
6061
Aerospace materials
Silicon alloys
Aluminium–silicon alloys
Aluminium–magnesium–silicon alloys | 6061 aluminium alloy | [
"Chemistry",
"Engineering"
] | 1,766 | [
"Aerospace materials",
"Aluminium alloys",
"Silicon alloys",
"Alloys",
"Aerospace engineering"
] |
7,436,090 | https://en.wikipedia.org/wiki/Regenerative%20capacitor%20memory | Regenerative capacitor memory is a type of computer memory that uses the electrical property of capacitance to store the bits of data. Because the stored charge slowly leaks away, these memories must be periodically regenerated (i.e. read and rewritten, also called refreshed) to prevent data loss.
Other types of computer memory exist that use the electrical property of capacitance to store the data, but do not require regeneration. Traditionally these have either been somewhat impractical (e.g., the Selectron tube) or are considered to be suitable only as read-only memory (e.g., EPROM, EEPROM/Flash memory) since writing data takes significantly longer than reading.
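The need for periodic regeneration can be illustrated with a toy leak model: stored charge decays roughly as exp(-t/τ), so a '1' must be rewritten before it falls below the read threshold. The time constant and threshold below are illustrative assumptions, not values for any real device.

```python
# Toy model of why regenerative capacitor memory needs refresh: stored
# charge on a leaky capacitor decays as exp(-t / tau), so a '1' must be
# rewritten before it drops below the read threshold.
import math

TAU = 50e-3       # leak time constant in seconds (assumed)
THRESHOLD = 0.5   # fraction of full charge still readable as '1' (assumed)

def max_refresh_interval(tau=TAU, threshold=THRESHOLD):
    """Longest interval before the charge decays to the read threshold."""
    # Solve exp(-t / tau) = threshold for t.
    return tau * math.log(1.0 / threshold)

print(round(max_refresh_interval() * 1e3, 1))  # ~34.7 ms
```

With these assumed constants, every stored bit would have to be read and rewritten at least every ~35 ms, which is why refresh is built into the memory's operating cycle.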
History
The first regenerative capacitor memory built was the rotating capacitor drum memory of the Atanasoff–Berry Computer (1942). Each of its two drums stored thirty 50-bit binary numbers (1500 bits each), rotated at 60 rpm and was regenerated every rotation (1 Hz refresh rate).
The first random access regenerative capacitor memory was the Williams tube (1947). As fitted to the first practical programmable digital computer, a single Williams tube held a total of 2560 bits, arranged in two 'pages'. One page was an array of thirty-two 40-bit binary numbers, the capacity of a basic Williams–Kilburn tube. The required refresh rate varied depending on the type of CRT used.
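The capacities quoted above follow from simple arithmetic on the stated word counts:

```python
# Capacities of the two early regenerative capacitor memories described
# above, computed from the quoted word counts.

# Atanasoff–Berry Computer: each drum held thirty 50-bit binary numbers.
abc_drum_bits = 30 * 50           # 1500 bits per drum

# Williams tube: two pages of thirty-two 40-bit binary numbers each.
williams_tube_bits = 2 * 32 * 40  # 2560 bits total

print(abc_drum_bits, williams_tube_bits)  # 1500 2560
```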
The modern DRAM (1966) is a regenerative capacitor memory.
Notes
References
Further reading
Computer memory
Capacitance | Regenerative capacitor memory | [
"Physics",
"Mathematics"
] | 338 | [
"Physical quantities",
"Quantity",
"Capacitance",
"Voltage",
"Wikipedia categories named after physical quantities"
] |
7,436,202 | https://en.wikipedia.org/wiki/7075%20aluminium%20alloy | 7075 aluminium alloy (AA7075) is an aluminium alloy with zinc as the primary alloying element. It has excellent mechanical properties and exhibits good ductility, high strength, toughness, and good resistance to fatigue. It is more susceptible to embrittlement than many other aluminium alloys because of microsegregation, but has significantly better corrosion resistance than the alloys from the 2000 series. It is one of the most commonly used aluminium alloys for highly stressed structural applications and has been extensively used in aircraft structural parts.
7075 aluminium alloy's composition roughly includes 5.6–6.1% zinc, 2.1–2.5% magnesium, 1.2–1.6% copper, and less than a half percent of silicon, iron, manganese, titanium, chromium, and other metals. It is produced in many tempers, some of which are 7075-O, 7075-T6, and 7075-T651.
7075 was first developed in secret by the Japanese company Sumitomo Metal in 1935, and was reverse-engineered by Alcoa in 1943 after examination of a captured Japanese aircraft. 7075 was standardized for aerospace use in 1945, and was used for airframe production by the Imperial Japanese Navy.
Basic properties
Aluminium 7075A has a density of 2.810 g/cm3.
Mechanical properties
The mechanical properties of 7075 depend greatly on the tempering of the material.
Aluminum 7075 has low formability at room temperature and is vulnerable to stress-corrosion cracking. Elevated-temperature forming techniques, such as retrogression forming and warm forming, have been shown to reduce springback and fracture.
7075-O
Un-heat-treated 7075 (7075-O temper) has a maximum tensile strength of no more than , and a maximum yield strength of no more than . The material has an elongation (stretch before ultimate failure) of 9–10%. As is the case for all 7075 aluminum alloys, 7075-O is highly corrosion-resistant combined with a generally acceptable strength profile.
7075-T6
T6 temper 7075 has an ultimate tensile strength of and yield strength of at least . It has a failure elongation of 5–11%.
The T6 temper is usually achieved by homogenizing the cast 7075 at 450 °C for several hours, quenching, and then ageing at 120 °C for 24 hours. This yields the peak strength of the 7075 alloys. The strength is derived mainly from finely dispersed eta and eta' precipitates both within grains and along grain boundaries.
7075-T651
T651 temper 7075 has an ultimate tensile strength of and yield strength of . It has a failure elongation of 3–9%. These properties can change depending on the form of material used. The thicker plates may exhibit lower strengths and elongation than the numbers listed above.
7075-T7
T7 temper has an ultimate tensile strength of and a yield strength of . It has a failure elongation of 13%. T7 temper is achieved by overaging (meaning aging past the peak hardness) the material. This is often accomplished by aging at 100–120 °C for several hours and then at 160–180 °C for 24 hours or more. The T7 temper produces a microstructure of mostly eta precipitates. In contrast to the T6 temper, these eta particles are much larger and prefer growth along the grain boundaries. This reduces the susceptibility to stress corrosion cracking. T7 temper is equivalent to T73 temper.
7075-RRA
The retrogression and reage (RRA) temper is a multistage heat treatment temper. Starting with a sheet in the T6 temper, it involves overaging past peak hardness (T6 temper) to near the T7 temper. A subsequent reaging at 120 °C for 24 hours returns the hardness and strength to or very nearly to T6 temper levels.
RRA treatments can be accomplished with many different procedures. The general guidelines are retrogressing between 180 and 240 °C for durations ranging from 15 min down to 10 s.
Equivalent materials
Uses
The world's first mass-production usage of the 7075 aluminum alloy was for the Mitsubishi A6M Zero fighter. The aircraft was known for its excellent maneuverability which was facilitated by the higher strength of 7075 compared to previous aluminum alloys.
7000 series alloys such as 7075 are often used in transport applications, including marine, automotive and aviation, due to their high specific strength. The same properties lead to its use in rock climbing equipment, bicycle components, inline-skating frames and hang glider airframes. Hobby-grade RC models commonly use 7075 and 6061 for chassis plates. 7075 is used in the manufacture of M16 rifles for the U.S. military as well as AR-15 style rifles for the civilian market; in particular, high-quality M16 rifle lower and upper receivers, as well as extension tubes, are typically made from 7075-T6 alloy. Desert Tactical Arms, SIG Sauer, and the French armament company PGM use it for their precision rifles. It is also commonly used in shafts for lacrosse sticks, such as the STX Sabre, and in camping knife and fork sets. It is a common material in competition yo-yos as well.
Another application for the 7075-series alloy has been in connecting rods used in drag racing engines. Aluminum rods do not have the fatigue life of forged steel rods, but have less mass than their steel counterparts, resulting in lower mechanical stress during periods in which an engine is operated under full-throttle, high-RPM conditions.
It has also been the standard material for crankcase guards on off-road motorcycles.
Due to its high strength, low density, thermal properties, and its ability to be highly polished, 7075 is widely used in mold tool manufacturing. This alloy has been further refined into other 7000 series alloys for this application, namely 7050 and 7020.
Aerospace applications
7075 was used in the Space Shuttle SRB nozzles and in the external tank's SRB beam in the intertank section. The forward and aft skirts, as well as the interstage, of the S-II, the second stage of the Saturn V, were made from 7075.
Applications
Aircraft fittings
Gears and shafts
Missile parts
Regulating valve parts
Worm gears
Aerospace/defense applications
Automotive
Bicycle Chainrings
Bicycle Gearboxes
Archery equipment
Trade names
7075 has been sold under various trade names including Zicral, Ergal, and Fortal Constructal. Some 7000 series alloys sold under brand names for making molds include Alumec 79, Alumec 89, Contal, Certal, Alumould, and Hokotol.
See also
Northwest Airlines Flight 421
References
Further reading
"Properties of Wrought Aluminum and Aluminum Alloys: 7075, Alclad 7075", Properties and Selection: Nonferrous Alloys and Special-Purpose Materials, Vol. 2, ASM Handbook, ASM International, 1990, pp. 115–116.
External links
Aluminum 7075 Properties
Aluminium–zinc alloys | 7075 aluminium alloy | [
"Chemistry"
] | 1,555 | [
"Alloys",
"Aluminium alloys"
] |
7,436,855 | https://en.wikipedia.org/wiki/2024%20aluminium%20alloy | 2024 aluminium alloy is an aluminium alloy, with copper as the primary alloying element. It is used in applications requiring a high strength-to-weight ratio, as well as good fatigue resistance. It is weldable only through friction welding, and has average machinability. Due to poor corrosion resistance, it is often clad with aluminium or Al-1Zn for protection, although this may reduce the fatigue strength. In older systems of terminology, 2XXX series alloys were known as duralumin, and this alloy was named 24ST.
2024 is commonly extruded, and also available in alclad sheet and plate forms. It is not commonly forged (the related 2014 aluminium alloy is, though).
Basic properties
Aluminium alloy 2024 has a density of 2.78 g/cm3 (0.1 lb/in3), electrical conductivity of 30% IACS, Young's modulus of 73 GPa (10.6 Msi) across all tempers, and begins to melt at .
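From the density and Young's modulus quoted above, one can sketch a quick derived figure of merit; the calculation below (specific modulus E/ρ, often cited for airframe alloys) is illustrative only.

```python
# Back-of-envelope specific stiffness of 2024 from the figures above.
density = 2780.0        # kg/m^3  (2.78 g/cm^3)
youngs_modulus = 73e9   # Pa      (73 GPa)

# Specific modulus E / rho, a common figure of merit for airframe alloys.
specific_modulus = youngs_modulus / density   # J/kg
print(round(specific_modulus / 1e6, 1))       # ~26.3 MJ/kg
```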
Chemical composition
The alloy composition of 2024 is:
Aluminium (90.7–94.7%)
Silicon no minimum, maximum 0.5% by weight
Iron no minimum, maximum 0.5%
Copper minimum 3.8%, maximum 4.9%
Manganese minimum 0.3%, maximum 0.9%
Magnesium minimum 1.2%, maximum 1.8%
Chromium no minimum, maximum 0.1%
Zinc no minimum, maximum 0.25%
Titanium no minimum, maximum 0.15%
Other elements no more than 0.05% each, 0.15% total
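The limits above lend themselves to a simple spec-check. The `within_spec` helper and the sample assay below are hypothetical, for illustration only; they encode only the per-element (min, max) ranges listed, not the "0.05% each / 0.15% total" rule for other elements.

```python
# Composition limits for 2024 from the list above, as (min, max) weight-%.
LIMITS = {
    "Si": (0.0, 0.5), "Fe": (0.0, 0.5), "Cu": (3.8, 4.9),
    "Mn": (0.3, 0.9), "Mg": (1.2, 1.8), "Cr": (0.0, 0.1),
    "Zn": (0.0, 0.25), "Ti": (0.0, 0.15),
}

def within_spec(assay):
    """Return the elements of `assay` that fall outside the 2024 limits."""
    return {el: pct for el, pct in assay.items()
            if el in LIMITS and not (LIMITS[el][0] <= pct <= LIMITS[el][1])}

# Hypothetical assay: copper slightly over the 4.9% maximum.
sample = {"Si": 0.2, "Fe": 0.3, "Cu": 5.0, "Mn": 0.6, "Mg": 1.5}
print(within_spec(sample))  # {'Cu': 5.0}
```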
Mechanical properties
The mechanical properties of 2024 depend greatly on the temper of the material.
2024-O
2024-O temper aluminium is not heat treated. It has an ultimate tensile strength of , and a maximum yield strength of no more than . The material has an elongation (stretch before ultimate failure) of 10–25%; this is the allowable range per the applicable AMS specifications.
2024-T3
T3 temper 2024 sheet has an ultimate tensile strength of and yield strength of at least . It has an elongation of 10–15%.
2024-T4
Solution treated at foundry and naturally aged.
2024-T5
Cooled from hot-working and artificially aged (at elevated temperature)
2024-T351
T351 temper 2024 plate has an ultimate tensile strength of and yield strength of . It has elongation of 20%.
Uses
Due to its high strength and fatigue resistance, 2024 is widely used in aircraft, especially wing and fuselage structures under tension. Additionally, since the material is susceptible to thermal shock, 2024 is used in the qualification of liquid penetrant tests outside of normal temperature ranges.
References
Further reading
"Properties of Wrought Aluminium and Aluminium Alloys: 2024, Alclad 2024", Properties and Selection: Nonferrous Alloys and Special-Purpose Materials, Vol 2, ASM Handbook, ASM International, 1990, pp. 70–71.
Aluminium alloy table
2024
Aluminium–copper alloys | 2024 aluminium alloy | [
"Chemistry"
] | 641 | [
"Alloys",
"Aluminium alloys"
] |
7,437,222 | https://en.wikipedia.org/wiki/Period%20%28algebraic%20geometry%29 | In mathematics, specifically algebraic geometry, a period or algebraic period is a complex number that can be expressed as an integral of an algebraic function over an algebraic domain. The periods are a class of numbers which includes, alongside the algebraic numbers, many well known mathematical constants such as the number π. Sums and products of periods remain periods, such that the periods form a ring.
Maxim Kontsevich and Don Zagier gave a survey of periods and introduced some conjectures about them.
Periods play an important role in the theory of differential equations and transcendental numbers as well as in open problems of modern arithmetical algebraic geometry. They also appear when computing the integrals that arise from Feynman diagrams, and there has been intensive work trying to understand the connections.
Definition
A number is a period if it can be expressed as an integral of the form
$$\int_{P(x_1,\ldots,x_n)\geq 0} Q(x_1,\ldots,x_n)\, dx_1 \cdots dx_n,$$
where $P$ is a polynomial and $Q$ a rational function on $\mathbb{R}^n$ with rational coefficients. A complex number is a period if its real and imaginary parts are periods.
An alternative definition allows the polynomial and the rational function to be algebraic functions; this looks more general, but is equivalent. The coefficients of the rational functions and polynomials can also be generalised to algebraic numbers, because irrational algebraic numbers are expressible in terms of areas of suitable domains.
In the other direction, can be restricted to be the constant function or , by replacing the integrand with an integral of over a region defined by a polynomial in additional variables.
In other words, a (nonnegative) period is the volume of a region in $\mathbb{R}^n$ defined by polynomial inequalities with rational coefficients.
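The classic example is π, which under this volume interpretation is the area of the unit disc, the region where x² + y² ≤ 1 (a polynomial inequality with rational coefficients). A minimal Monte Carlo sketch of that interpretation, illustrative only:

```python
# Monte Carlo sketch of pi as a period: pi is the volume (area) of the
# region x^2 + y^2 <= 1, a polynomial inequality with rational coefficients.
import random

random.seed(0)
n = 200_000
inside = 0
for _ in range(n):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    if x * x + y * y <= 1.0:
        inside += 1

area = 4.0 * inside / n   # the sampling square [-1, 1]^2 has area 4
print(area)               # close to 3.14159...
```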
Properties and motivation
The periods are intended to bridge the gap between the well-behaved algebraic numbers, which form a class too narrow to include many common mathematical constants, and the transcendental numbers, which are uncountable and, apart from very few specific examples, hard to describe. The latter are also not generally computable.
The ring of periods lies in between the field of algebraic numbers and the field of complex numbers and is countable. The periods themselves are all computable, and in particular definable. It is: $\overline{\mathbb{Q}} \subset \mathcal{P} \subset \mathbb{C}$.
Periods include some of those transcendental numbers that can be described in an algorithmic way and contain only a finite amount of information.
Numbers known to be periods
The following numbers are among the ones known to be periods:
Open questions
Many of the constants known to be periods are also given by integrals of transcendental functions. Kontsevich and Zagier note that there "seems to be no universal rule explaining why certain infinite sums or integrals of transcendental functions are periods".
Kontsevich and Zagier conjectured that, if a period is given by two different integrals, then each integral can be transformed into the other using only the linearity of integrals (in both the integrand and the domain), changes of variables, and the Newton–Leibniz formula
$$\int_a^b f'(x)\, dx = f(b) - f(a)$$
(or, more generally, the Stokes formula).
A useful property of algebraic numbers is that equality between two algebraic expressions can be determined algorithmically. The conjecture of Kontsevich and Zagier would imply that equality of periods is also decidable: inequality of computable reals is known to be recursively enumerable; and conversely, if two integrals agree, then an algorithm could confirm so by trying all possible ways to transform one of them into the other.
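The recursive-enumerability claim can be sketched concretely: represent each computable real by a function giving a rational approximation to any requested accuracy, and search for a precision at which the approximations are provably apart. If the reals differ, the search halts; if they are equal, it runs forever. The encoding and the `differ` helper below are illustrative assumptions, not a standard API.

```python
# Sketch of why inequality of computable reals is recursively enumerable.
# A computable real is represented (an assumption for this sketch) by a
# function approx(k) returning a rational within 2**-k of the number.
from fractions import Fraction

def differ(a, b, max_k=40):
    """Semi-decide a != b; halts with True once a witness precision is found.

    The true procedure would loop without bound; max_k only caps the demo.
    If |a(k) - b(k)| > 2 * 2**-k, the underlying reals must differ.
    """
    for k in range(max_k):
        eps = Fraction(1, 2 ** k)
        if abs(a(k) - b(k)) > 2 * eps:
            return True
    return None  # no witness found up to max_k (the reals may be equal)

def sqrt2(k):
    # Rational approximation of sqrt(2) within 2**-k (float-based;
    # adequate for the small k reached in this demo).
    return Fraction(round(2 ** 0.5 * 2 ** k), 2 ** k)

print(differ(sqrt2, lambda k: Fraction(141, 100)))  # True
```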
Further open questions consist of proving which known mathematical constants do not belong to the ring of periods. An example of a real number that is not a period is given by Chaitin's constant Ω; any other non-computable number also gives an example of a real number that is not a period. It is also possible to construct artificial examples of computable numbers which are not periods. However, no computable number has been proven not to be a period other than those artificially constructed for that purpose.
It is conjectured that 1/π, Euler's number e and the Euler–Mascheroni constant γ are not periods.
Kontsevich and Zagier expect these problems to be very hard and to remain open for a long time.
Extensions
The ring of periods can be widened to the ring of extended periods by adjoining the element 1/π.
Permitting the integrand to be the product of an algebraic function and the exponential of an algebraic function results in another extension: the exponential periods. They also form a ring and are countable. It is: $\mathcal{P} \subset \mathcal{EP} \subset \mathbb{C}$.
The following numbers are among the ones known to be exponential periods:
See also
Transcendental number theory
Mathematical constant
L-function
Jacobian variety
Gauss–Manin connection
Mixed motives (math)
Tannakian formalism
References
External links
PlanetMath: Period
Mathematical constants
Algebraic geometry
Integral calculus
Transcendental numbers | Period (algebraic geometry) | [
"Mathematics"
] | 969 | [
"Calculus",
"Mathematical objects",
"Fields of abstract algebra",
"nan",
"Algebraic geometry",
"Mathematical constants",
"Numbers",
"Integral calculus"
] |
7,437,427 | https://en.wikipedia.org/wiki/EudraVigilance | EudraVigilance (European Union Drug Regulating Authorities Pharmacovigilance) is the European data processing network and management system for reporting and evaluation of suspected adverse reactions to medicines or devices which have received marketing authorisation or are actively being studied in clinical trials in the European Economic Area (EEA). The European Medicines Agency (EMA) operates the system on behalf of the European Union (EU) medicines regulatory network.
The European EudraVigilance system deals with the:
Electronic exchange of Individual Case Safety Reports (ICSR, based on the ICH E2B specifications):
EudraVigilance Clinical Trial Module (EVCTM) for reporting Suspected Unexpected Serious Adverse Reactions (SUSARs).
EudraVigilance Post-Authorisation Module (EVPM) for post-authorisation ICSRs.
Early detection of possible safety signals from marketed drugs for human use.
Continuous monitoring and evaluation of potential safety issues in relation to reported adverse reactions.
Decision-making process, based on a broader knowledge of the adverse reaction profile of drugs.
EMA publishes data from EudraVigilance in the European database for suspected adverse drug reaction reports.
The EudraVigilance access policy governs the level of access different stakeholder groups have to adverse drug reactions reports.
See also
Clinical trial
Drug development
EudraCT
EudraGMP
EudraLex
EUDRANET
EudraPharm
European Clinical Research Infrastructures Network
European Medicines Agency
International Society of Pharmacovigilance
Medication
Pharmacovigilance
Serious adverse event
Uppsala Monitoring Centre
Yellow Card Scheme
References
External links
EudraVigilance
Drug safety
European clinical research
Government databases of the European Union
Health and the European Union
Medical databases
Pharmaceuticals policy
Pharmacovigilance databases
National agencies for drug regulation | EudraVigilance | [
"Chemistry"
] | 361 | [
"Pharmacovigilance databases",
"National agencies for drug regulation",
"Drug safety"
] |
7,437,933 | https://en.wikipedia.org/wiki/SUPARCO | The Space & Upper Atmosphere Research Commission, commonly referred to as SUPARCO, is the national space agency of Pakistan.
The agency, originally established in 1961 as a committee in Karachi, became an independent commission in 1981. Its initial objective was to learn rocketry and high altitude research from the United States, which ultimately led to the development of a national satellite program. This culminated in the successful launch of Pakistan's first satellite from China in 1990. The agency was also an early participant in the rocket development program launched by the Ministry of Defence of Pakistan.
The agency leads the National Space Program (NSP) and maintains the orbital operations of its satellites with support facilities throughout the country. The agency has faced significant criticism within Pakistan for failing to match the capabilities of its Indian and Chinese counterparts, despite having been established earlier than both.
History
Creation
Pakistan's early federal ministries initially avoided funding the space program and engineering education, in spite of the opportunities available from the United States. The Punjab University was the only university undertaking research in aeronautics in 1957, and only after the former Soviet Union launched its first satellite, Sputnik 1.
It was during the development of the Apollo program in 1961 that Abdus Salam saw an opportunity for Pakistan to start its space program with foreign funding from the United States. NASA, embarking on the Apollo program in competition with the Soviet space program, had realized the need for scientific data on the upper atmosphere and therefore invited India and Pakistan (both bordering the Indian Ocean) to join the studies and experimentation. Initially, engineers from the Pakistan Atomic Energy Commission (PAEC) were sent to the Wallops Flight Facility to learn rocketry from the United States, while Abdus Salam worked on getting the Ayub administration to approve the establishment of a commission.
A commission to study the upper atmosphere and rocketry was established in 1961 under Abdus Salam, with Ishrat Usmani as its chairman and nuclear engineers from the PAEC, Tariq Mustafa and Salim Mehmud, as its first members, through the "Space Sciences Research Wing" of the PAEC. The commission was the first in the Muslim world to begin studies towards establishing a space program, and on 16 September 1961 it was named the "Space and Upper Atmosphere Research Commission" to reflect its purpose and mission.
The commission, working under Abdus Salam, was tasked with learning rocket engineering, and contributions from France and the United States helped start this mission. The foundation of the agency made Pakistan the first South Asian country to start a space program. Furthermore, a flight test range was established at Sonmiani, west of Karachi, from where a program of sounding-rocket launches was conducted, based on the Nike-Ajax rockets and followed by the Judi-Dart program.
On 7 July 1962, the commission launched its first rocket, known as "Rehbar-I", which reached an altitude of in space. The United States publicly supported and hailed the program as the beginning of "a program of continuous cooperation in space research of mutual interest." Until 1972, the United States provided training on rocket engines at the Goddard Space Flight Center. Ground stations for satellite navigation were set up by the commission in Karachi and Lahore in 1973 and were visited by the Apollo 17 astronauts. In 1973, the Islamabad Ionospheric Station was established at Quaid-e-Azam University, and a Landsat ground station was established near Lahore.
Funding and Support
Following the cessation of U.S. funding for upper atmosphere research in 1972 and the prioritization of nuclear weapons programs after the 1971 war with India, SUPARCO experienced a significant decline in funding and support. Engineers with backgrounds in nuclear engineering were transferred to the Pakistan Atomic Energy Commission, and the Pakistan Air Force's support for the Rehbar program also ceased.
Inspired by India's launch of its first satellite in 1975, SUPARCO began lobbying for Pakistan's own satellite program. In 1981, the Commission was reorganized as an independent federal agency. During the same period, a communication satellite project called PakSAT was initiated. Collaborations with the Pakistan Radio Society (PRS) and the University of Surrey in England enabled SUPARCO to participate in satellite engineering projects, contributing to the development of UoSAT-1 and UO-11, which were launched in 1984.
Pakistan's communication satellite program also led to the expansion of a ground station in Lahore in 1983. SUPARCO began constructing its first satellite, Badr-1, in 1983, and it was eventually launched by China in 1990 after negotiations with the United States failed.
SUPARCO continued its collaboration with the United Kingdom, developing the Badr-B satellite in partnership with the British Rutherford Laboratory. However, due to orbital crises and funding constraints, Badr-B was not launched until 2001 by Russia.
Gen Zia's decision to delay the PakSAT project in 1984, citing a lack of funds, led to a significant setback for SUPARCO. This ultimately resulted in the loss of two orbital slots between 1993 and 1994. To secure a priority slot, the agency negotiated with Hughes Satellite Systems to acquire PakSAT-1, a geo-stationary satellite originally intended for Indonesia.
Functions
As per the National Space Policy of Pakistan approved in 2024, SUPARCO as the National Space Agency is mandated to carry out all activities related to outer space which includes but not limited to the following:
Policy Development: Formulate national space policies and legislation to comply with international obligations and establish guidelines for the space sector.
Program Management: Plan, manage, and execute the National Space Program, encompassing space science, technology, and applications.
Space Infrastructure: Design, develop, launch, and operate satellites, ground control infrastructure, space transportation systems, launch facilities, navigation systems, and tracking observatories.
National Register: Maintain a register of space objects launched by Pakistan and submit information to the United Nations.
Commercialization: Promote the commercial exploitation of space capabilities, technologies, and applications.
Private Sector Engagement: Encourage private sector involvement in space activities.
International Cooperation: Coordinate with international space organizations and agencies, and represent Pakistan in relevant forums.
Projects
Satellite Programs
Badr
PakSAT
The PakSAT program is the commission's national communication satellite program, conceived in 1979–80. The program is envisioned to consist of two geostationary communication satellites – one operating at 38°E and the other at 41°E.
The PakSAT program was originally designed to develop the television receive-only (TVRO) terminals for the receptions of news, entertainment, and educational channels from direct broadcasting satellite dishes.
Remote Sensing Satellites
CubeSats
Rocket Programs
Sounding Rockets
Since 1961, the commission supported and led early studies on solid-propellant rockets, which culminated in the development of the Rehbar-I. The Rehbar-I rocket was a derivative of the U.S. Nike-Cajun and continued in service until 1972.
Hatf-I & Abdali
In 1987, the military funded the commission's design study on rocket engines for Hatf-I, which was completed with the Khan Research Laboratories (KRL), the national defense laboratory of the Ministry of Defence. In 1995, the Commission designed the rocket engine for the Abdali project, which was completed in 2004.
The Commission also conducted studies on rocket engines for the Shaheen program.
Space Science and Astronomy
Every year, SUPARCO sponsors and organizes World Space Week (WSW) to promote the understanding of Earth science across the country. SUPARCO works with a number of universities and research institutions to engage in research in observational astronomy and astrophysics. The Institute of Space and Planetary Astrophysics (ISPA) of the Karachi University conducts key research and co-sponsors international-level research programs in astrophysics, in joint ventures with SUPARCO.
SUPARCO operates a national balloon launching facility in Karachi to conduct studies in atmospheric sciences and to determine the vertical profile of ozone up to 30–35 km. This balloon sounding facility has been used extensively for research towards a better understanding of meteorology and of how the ozone layer varies seasonally in the stratosphere and troposphere. The Ionospheric Station at Karachi operates an ionosonde observation facility; a balloon flight mission was carried out by the station on 16 January 2004, to an altitude of about 36 km, to measure the vertical profile of O3 trends. The maximum O3 partial pressure observed was 12.65 mPa at 27 km. One of the most notable missions of SUPARCO is its lunar program, which conducts observational studies of lunar phases and distributes its publications in the public domain.
The SUPARCO astrophysics program is an active scientific mission of SUPARCO, dedicated to the development of space science. The program's main objective is to conduct research studies for the advancement and better understanding of theoretical physics, astronomy, astrophysics, and the mathematics of space and time.
SUPARCO's Space Program 2040, launched in 2012, incorporates astronomy and astrophysics research into a single program focused on theoretical and observational studies. This program explores vast topics like quantum mechanics, deep space objects, dark matter and energy, supernovae, nebulae, and galaxies. Aligned with Pakistan's official space policy, it also aims to strengthen public understanding of physics and mathematics through educational initiatives like academic bulletins and public events celebrating astronomy milestones. This program fosters collaboration with international space agencies and builds upon public interest sparked by SUPARCO's celebration of the International Year of Astronomy in 2009 which was widely appreciated by the public.
Since its establishment, a total of nine important publications have been released under the auspices of this program, with the last volume issued in September 2012.
Space Weather Monitoring
SUPARCO has been actively involved in space weather monitoring for over five decades, utilizing a nationwide network of ground-based sensors. As Pakistan's reliance on satellites for critical services like communication, navigation, and earth observation increases, SUPARCO has established the Pakistan Space Weather Center, which employs an array of instruments to observe space weather phenomena in real time. Processed data, including HF communication products, are then disseminated to relevant national users.
SUPARCO initiated its Geomagnetic Field Monitoring Program in 1983 at the Sonmiani space facility, establishing a second observatory in Karachi in 2008. This program involves studying the Earth's magnetic field and its variations in the South Asian region using data collected from these observatories. The data is used to understand the Earth's magnetic environment and mitigate associated hazards.
SUPARCO regularly publishes a public domain bulletin of geomagnetic data, which includes research on the effects of solar flares and severe magnetic storms recorded by the observatories.
Satellite Navigation Program
SUPARCO has been a pivotal player in Pakistan's satellite navigation landscape. Its involvement extends to the entire spectrum of satellite navigation, encompassing the design and development of space, ground, and user segments.
It has established a Ground Based Augmentation System (GBAS) on a proof of concept basis to provide correction signals to authorized users. Additionally, SUPARCO has deployed a Space Based Augmentation System (SBAS) via PakSAT-MM1 to cater to the specific needs of aviation, marine, and land users who require high-integrity correction signals.
SUPARCO operates a Satellite Navigation Signal Monitoring facility that plays a crucial role in monitoring, archiving, and analyzing satellite navigation signals from various monitoring stations located across Pakistan. This facility fosters collaboration with national and international organizations involved in satellite navigation systems.
International COSPAS-SARSAT Programme
Pakistan participates in the International Cospas-Sarsat Programme, a multinational humanitarian programme for satellite-aided search and rescue. In 1990, the Government of Pakistan approved SUPARCO's participation in the Cospas programme as a ground segment provider and lead space station, in close coordination with the Soviet Union. Over the years, the mission control center has been equipped with advanced technology and is capable of distributing distress alert data to rescue coordination centres in the country.
Remote Sensing Program
SUPARCO has been a pioneer in introducing Remote Sensing/GIS and allied technologies in Pakistan, providing turnkey solutions and services to diverse users across various fields. These applications range from agriculture and forestry to disaster management, water resources, environmental monitoring, urban planning, and coastal and marine studies. SUPARCO's expertise extends to climate change and environmental degradation, utilizing ground-based observations and satellite data for now-casting and forecasting environmental indicators in the atmosphere, biosphere, cryosphere, and hydrosphere. To support these endeavors, SUPARCO operates specialized centers and facilities, including environmental laboratories, mobile laboratories, a mathematical modeling center, and a microgravity experiments facility.
Other Scientific programs
Scientific space research
Geographic Information Systems
Natural Resource Surveying
Environmental monitoring
Acquisition of data for atmospheric/meteorological studies
Development of the ground-based infrastructure for navigation and special information system
Development of research, test and production base of the space sector
Facilities
SUPARCO, headquartered in Islamabad, maintains a network of key technical and support facilities across Pakistan. These facilities are strategically located in major cities to support the organization's various research and development activities.
Ground Stations
Space Applications Centres
Research and Development Centres
Space Ports
Human resource development
International Cooperation
SUPARCO has consistently prioritized international cooperation as a central component of its space development program. The agency maintains active membership in several international organizations, institutes, scientific committees and United Nations bodies, including UNCOPUOS, UN-SPIDER, UN-ESCAP, COSPAR, IAF, ISPRS and APSCO. This has fostered collaboration in scientific and technical information exchange, data sharing, joint projects, technology transfer, training, and collaborative studies. SUPARCO has also entered into numerous bilateral and multilateral cooperation agreements and MoUs to facilitate space-related activities. Pakistan is a signatory to all five United Nations treaties governing the peaceful uses of outer space.
Inter-Islamic Network on Space Sciences and Technology
ISNET was established in 1987 by the Organization of Islamic Cooperation (OIC) Standing Committee on Scientific and Technological Cooperation (COMSTECH), and SUPARCO hosted the founding meeting of the network in Karachi. The secretariat of ISNET is located within SUPARCO, and the Chairman of SUPARCO is the ex-officio President of ISNET. ISNET has 17 member states and organises international conferences, seminars, workshops and training courses on space science, technology and satellite applications in its member states, in which scientists, engineers, researchers and other stakeholders participate.
China
In August 2006, the People's Republic of China signed an agreement with Pakistan to conduct joint research in space technology and committed to working with Pakistan to launch three Earth-weather satellites over the following five years. In May 2007, China (as a strategic partner) publicly signed an agreement with Pakistan to enhance cooperation in the areas of space science and technology. Pakistan–China bilateral cooperation in the space industry spans a broad spectrum, including climate science, clean energy technologies, atmospheric and Earth sciences, and marine sciences. On the occasion of the Chinese launch of PakSAT-1R, Pakistan's ambassador to China expressed Pakistan's desire for China to send the first officially designated Pakistani astronaut to space aboard a Chinese spacecraft. In cooperation with CNSA, Pakistan sent its first lunar orbiter mission, ICUBE-Q, along with Chang'e 6.
United Arab Emirates
In 2019, the commission reached out to the United Arab Emirates Space Agency to take part for the first time in the Global Space Congress, held in Abu Dhabi, where it held an exhibition on its satellite-related projects.
Turkey
In December 2006, Turkey expressed interest in forming a joint venture through the Asia-Pacific Space Cooperation Organization, of which Pakistan is a member. That year, the Turkish minister of science, accompanied by the Turkish ambassador to Pakistan, signed a memorandum of understanding (MoU) with Pakistan to form a joint venture in the development of satellite technology. Senior officials and representatives of the Scientific and Technological Research Council of Turkey and Turkish Aerospace Industries signed a separate accord with SUPARCO to enhance cooperation in the satellite development program.
Leadership History
See also
List of government space agencies
SUPARCO's spaceflight missions and tests
SUPARCO Space Programme 2040
Jinnah Antarctic Station
Notes
References
Sources
External links
CNBC Pakistan televised Interview with Salim Mehmud – Chairman SUPARCO - Available in Urdu language
Space and Upper Atmosphere Research Commission (SUPARCO)
FAS report on SUPARCO
Astronautix
Shaheen - PSLV
Space agencies
Space organizations
Science and technology in Pakistan
Government agencies established in 1961
International research institutes
Research institutes in Pakistan
Pakistan federal departments and agencies
Space research
Science and technology in Karachi
1961 establishments in Pakistan
Space technology research institutes

WF trac

The WF trac is a skidder made by Werner Forst & Industrietechnik Scharf. It was developed from the MB-trac in the 1990s.
Models
First series WF trac
WF trac 900 ()
WF trac 1100 ()
WF trac 1300 ()
WF trac 1500 ()
WF trac 1700 ()
Second and current series WF trac
The WF trac is now produced both in a 4x4 and a 6x6 version. Its maximum road speed is in the 4x4 and in the 6x6 variant.
WF trac 2040 4x4 (800 N·m torque, 4.8-litre 4-cylinder Mercedes-Benz engine)
WF trac 2040 6x6 (800 N·m torque, 4.8-litre 4-cylinder Mercedes-Benz engine)
WF trac 2460 4x4 (850 N·m torque, 7.2-litre 6-cylinder Mercedes-Benz engine)
WF trac 2460 6x6 (850 N·m torque, 7.2-litre 6-cylinder Mercedes-Benz engine)
References
External links
, official website
Tractors
Logging
German brands

MB-trac

MB-trac is a range of agricultural tractors developed and produced from 1973 until 1991 by Mercedes-Benz Group, formerly known as Daimler-Benz. It is based on the trac design principle for tractors and shares its drivetrain with the Unimog. Mercedes-Benz offered the MB-trac in light-duty, medium-duty, and heavy-duty versions in four different type series: 440, 441, 442, and 443. About 41,000 MB-tracs were made by former Daimler-Benz, before the manufacture was sold to Werner Forsttechnik in the early 1990s, who developed the WF trac skidder from the MB-trac.
MB-trac types and model family
Naming scheme
In total, four type series (440, 441, 442, and 443) and twenty-one different series-production types of MB-trac were made, with either ten or sixteen models, depending on whether the Turbo-designated models are considered separate from the non-Turbo-designated models. The naming scheme of the MB-trac is mostly homogeneous, with MB trac followed by a number roughly indicative of the engine's DIN-PS power output times ten. Models equipped with a turbocharger have a turbo suffix in their model name, and models with an additional intercooler have an intercooler suffix instead. The exceptions to this naming scheme are the MB-trac 65/70, which has a power-output figure indicative of the DIN-PS/SAE-hp power output, MB-trac 1500 and MB-trac 900, which have no turbo suffix despite being turbocharged, and the MB-trac turbo 900, which has a turbo prefix rather than a suffix. The MB-trac name is spelled MB-trac in technical documents, but MB trac on the tractors' bonnets.
Model overview
Light-duty (440 series)
Medium-duty (441 series)
Heavy-duty (442 and 443 series)
History
Former Daimler-Benz had been making the Unimog, a tractor-like implement carrier vehicle, since 1951. Mercedes-Benz customers however sought a dedicated agricultural and silvicultural tractor. Thus, Daimler-Benz, on 25 March 1968 issued a development order for an agricultural tractor that would share various components with the Unimog 403. Throughout the development phase, various prototypes were built; these were designated the AM 60. During the development phase, a version with a rotating steering-column and driver seat was considered, but originally not put forward.
Eventually, the 440 series MB-trac was presented at the 1972 DLG fair as the MB-trac 65/70. Series production started on 1 July 1973, alongside the Unimog. By 31 December 1973, 520 units of the MB-trac had been produced. The original colour was pebble-grey with red wheels. In March 1976, production of the heavy-duty 442 and 443 series MB-trac commenced; these share various components with the Unimog 425.
In 1987, Daimler-Benz entered into a strategic cooperation with Deutz, who at the time built the Deutz Intrac, a similar trac-like tractor. Joint development was carried out throughout the late 1980s by the Trac-Technik-Entwicklungsgesellschaft (TTE), owned by Daimler-Benz (40%) and Deutz (60%); however, due to cost concerns, the joint development was cancelled, and Daimler-Benz consequently decided to discontinue the MB-trac.
In 1990, the MB-trac lineup saw its last and most powerful addition, the MB-trac 1800 intercooler. It was based on a 1600 turbo, but had an additional intercooler installed. Production ended in 1991, and Daimler-Benz sold the manufacture to Werner Forsttechnik, who developed the WF trac from the MB-trac.
Technical description
The MB-trac comprises four different series which have distinct technical differences. The following description thus only gives a basic overview of the MB-trac.
In general, the MB-trac is a trac type of agricultural tractor which is characterised by four same-size wheels, a frame undercarriage for the engine and front axle, and steerable wheels rather than articulated steering. The cab is installed in the center of the vehicle. This design allows the MB-trac to mount implements in front, on the rear axle behind the cab, and behind the rear axle.
Despite sharing its drivetrain with the Unimog 403 or Unimog 425, the MB-trac is not a modified Unimog. It is distinctly different and much more suited for agricultural use, for example, due to its larger wheels. Unusual for an agricultural tractor, the MB-trac's front axle is coil sprung, which improves its on-road handling characteristics.
Light-duty MB-trac models were equipped with the Mercedes-Benz OM 314 or Mercedes-Benz OM 364 four-cylinder Diesel engines and use a gearbox with 16 forward and 8 reverse gears. The gears are shifted using a conventional H-shift lever which is fitted with a splitter device. A separate gear lever is used to engage forward or reverse, and the same lever is also used to shift into a high-speed forward range, which does not exist in reverse. Medium-duty models are technically similar to the light-duty models but feature the Mercedes-Benz OM 352 or OM 366 six-cylinder engines. The heavy-duty MB-trac models were equipped with the Mercedes-Benz OM 352 or Mercedes-Benz OM 366 six-cylinder Diesel engines, and have a dedicated reverse-drive feature with a reversible cab and gearbox.
Bibliography
Wener Schmeing, Hans-Jürgen Wischof: Traktoren der Daimler AG – Unbekannte Einblicke in Technik und Wirtschaft: vom Unimog zum MB-trac und warum es keinen Nachfolger gab, Vol. 2, DLG-Verlag, DLG-Verlag, 2011, ISBN 9783769007343
References
Tractors
Mercedes-Benz

Pulse labelling

Pulse labelling is a biochemistry technique for identifying the presence of a target molecule by labelling a sample with a radioactive compound. It is mainly used to identify the stage at which messenger RNA is produced in a cell.
References
Biochemistry detection methods

Colicin

A colicin is a type of bacteriocin produced by and toxic to some strains of Escherichia coli. Colicins are released into the environment to reduce competition from other bacterial strains. Colicins bind to outer membrane receptors, using them to translocate to the cytoplasm or cytoplasmic membrane, where they exert their cytotoxic effect, including depolarisation of the cytoplasmic membrane, DNase activity, RNase activity, or inhibition of murein synthesis.
Structure
Channel-forming colicins (colicins A, B, E1, Ia, Ib, and N) are transmembrane proteins that depolarize the cytoplasmic membrane, leading to dissipation of cellular energy. These colicins contain at least three domains: an N-terminal translocation domain responsible for movement across the outer membrane and periplasmic space (T domain); a central domain responsible for receptor recognition (R domain); and a C-terminal cytotoxic domain responsible for channel formation in the cytoplasmic membrane (C domain). R domain regulates the target and binds to the receptor on the sensitive cell. T domain is involved in translocation, co-opting the machinery of the target cell. The C domain is the 'killing' domain and may produce a pore in the target cell membrane, or act as a nuclease to chop up the DNA or RNA of the target cell.
Translocation
Most colicins are able to translocate the outer membrane by a two-receptor system, where one receptor is used for the initial binding and the second for translocation. The initial binding is to cell surface receptors such as the outer membrane proteins OmpF, FepA, BtuB, Cir and FhuA; colicins have been classified according to which receptors they bind to. The presence of specific periplasmic proteins, such as TolA, TolB, TolC, or TonB, are required for translocation across the membrane. Cloacin DF13 is a bacteriocin that inactivates ribosomes by hydrolysing 16S RNA in 30S ribosomes at a specific site.
Resistance
Because they target specific receptors and use specific translocation machinery, cells can make themselves resistant to the colicin by repressing or deleting the genes for these proteins. Such resistant cells may suffer the lack of a key nutrient (such as iron or a B vitamin), but benefit by not being killed. Colicins exhibit a '1-hit killing kinetic' which does not necessarily mean a single molecule is sufficient to kill, but certainly that it only takes a small number. In his 1969 Nobel Laureate speech, Salvador E. Luria speculated that colicins could only be this toxic by causing a domino effect that destabilized the cell membrane. He was not entirely correct, but pore-forming colicins do depolarize the membrane and thus eliminate the energy source for the cell. The colicins are highly effective toxins.
Genetic organisation
Virtually all colicins are carried on plasmids. The two general classes of colicinogenic plasmids are large, low-copy-number plasmids, and small, high-copy-number plasmids. The larger plasmids carry other genes, as well as the colicin operon. The colicin operons are generally organized with several major genes. These include a colicin structural gene, an immunity gene, and a bacteriocin release protein (BRP), or lysis, gene. The immunity gene is often produced constitutively, while the BRP is generally produced only as a read-through of the stop codon on the colicin structural gene. The colicin itself is repressed by the SOS response and may be regulated in other ways as well.
Retaining the colicin plasmid is very important for cells that live with their relatives, because if a cell loses the immunity gene, it quickly becomes subject to destruction by circulating colicin. At the same time, colicin is only released from a producing cell by the use of the lysis protein, which results in that cell's death. This suicidal production mechanism would appear to be very costly, except for the fact that it is regulated by the SOS response, which responds to significant DNA damage. In short, colicin production may only occur in terminally ill cells. The Kleanthous research group at the University of Oxford studies colicins extensively as a model system for characterising and investigating protein–protein interactions and recognition.
BACTIBASE is an open-access database for bacteriocins, including colicins.
References
External links
Molecular mechanisms of colicin evolution pdf
The newly characterized colicin Y provides evidence of positive selection in pore-former colicin diversification
Colicin OPM database
3D interactive pages about colicins
Transport Classification Database listing for colicin
Protein Data Bank colicin listing
Bacteriocins
Toxicology
Peripheral membrane proteins
Escherichia coli

Q-gamma function

In q-analog theory, the q-gamma function, or basic gamma function, is a generalization of the ordinary gamma function closely related to the double gamma function. It was introduced by F. H. Jackson. It is given by

$$\Gamma_q(x) = (1-q)^{1-x} \prod_{n=0}^{\infty} \frac{1-q^{n+1}}{1-q^{n+x}} = (1-q)^{1-x} \frac{(q;q)_\infty}{(q^x;q)_\infty}$$

when $|q| < 1$, and

$$\Gamma_q(x) = \frac{(q^{-1};q^{-1})_\infty}{(q^{-x};q^{-x})_\infty} (q-1)^{1-x} q^{\binom{x}{2}}$$

if $|q| > 1$. Here $(\cdot\,;q)_\infty$ is the infinite q-Pochhammer symbol. The q-gamma function satisfies the functional equation

$$\Gamma_q(x+1) = \frac{1-q^x}{1-q}\,\Gamma_q(x) = [x]_q\,\Gamma_q(x).$$

In addition, the q-gamma function satisfies the q-analog of the Bohr–Mollerup theorem, which was found by Richard Askey.

For non-negative integers $n$,

$$\Gamma_q(n+1) = [n]_q!$$

where $[n]_q!$ is the q-factorial function. Thus the q-gamma function can be considered as an extension of the q-factorial function to the real numbers.

The relation to the ordinary gamma function is made explicit in the limit

$$\lim_{q \to 1^-} \Gamma_q(x) = \Gamma(x).$$
There is a simple proof of this limit by Gosper. See the appendix of ().
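For 0 < q < 1 the defining infinite product converges rapidly, so the definition, the functional equation, and the q-factorial identity can all be checked numerically. The following Python sketch is a plain truncation of the infinite product (the helper names `q_gamma` and `q_factorial` are illustrative, not from any particular library):

```python
import math

def q_gamma(x, q, terms=500):
    """Gamma_q(x) for 0 < q < 1, via a truncated form of the defining
    infinite product: (1-q)^(1-x) * prod_{n>=0} (1-q^(n+1))/(1-q^(n+x))."""
    prod = 1.0
    for n in range(terms):
        prod *= (1.0 - q ** (n + 1)) / (1.0 - q ** (n + x))
    return (1.0 - q) ** (1.0 - x) * prod

def q_factorial(n, q):
    """[n]_q! = prod_{k=1}^{n} (1 - q^k) / (1 - q)."""
    result = 1.0
    for k in range(1, n + 1):
        result *= (1.0 - q ** k) / (1.0 - q)
    return result

q = 0.5

# Functional equation: Gamma_q(x+1) = (1 - q^x)/(1 - q) * Gamma_q(x)
x = 2.3
assert math.isclose(q_gamma(x + 1, q),
                    (1 - q ** x) / (1 - q) * q_gamma(x, q), rel_tol=1e-9)

# At non-negative integers the q-gamma function reduces to the q-factorial:
# Gamma_q(n+1) = [n]_q!
for n in range(6):
    assert math.isclose(q_gamma(n + 1, q), q_factorial(n, q), rel_tol=1e-9)
```

The truncation is harmless here: the neglected factors differ from 1 by roughly q raised to the truncation index, which underflows long before 500 terms for q = 0.5.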
Transformation properties
The q-gamma function satisfies the q-analog of the Gauss multiplication formula:
Integral representation
The q-gamma function has the following integral representation:
Stirling formula
Moak obtained the following q-analogue of the Stirling formula:
where , denotes the Heaviside step function, stands for the Bernoulli number, is the dilogarithm, and is a polynomial of degree satisfying
Raabe-type formulas
Due to I. Mező, the q-analogue of the Raabe formula exists, at least if we use the q-gamma function when . With this restriction,
El Bachraoui considered the case and proved that
Special values
The following special values are known.
These are the analogues of the classical formula .
Moreover, the following analogues of the familiar identity hold true:
Matrix version
Let be a complex square matrix and positive-definite matrix. Then a -gamma matrix function can be defined by -integral:
where is the q-exponential function.
Other q-gamma functions
For other q-gamma functions, see Yamasaki 2006.
Numerical computation
An iterative algorithm to compute the q-gamma function was proposed by Gabutti and Allasia.
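The Gabutti–Allasia algorithm itself is not reproduced here; as a simple sanity check, though, a direct truncation of the defining product (valid for 0 < q < 1) visibly approaches the ordinary gamma function as q tends to 1 from below:

```python
import math

def q_gamma(x, q, terms=20000):
    """Truncated product form of the q-gamma function, valid for 0 < q < 1.
    Near q = 1 the product converges slowly, hence the large term count."""
    prod = 1.0
    for n in range(terms):
        prod *= (1.0 - q ** (n + 1)) / (1.0 - q ** (n + x))
    return (1.0 - q) ** (1.0 - x) * prod

# As q -> 1^-, Gamma_q(x) converges to Gamma(x).
for q in (0.9, 0.99, 0.999):
    print(q, q_gamma(2.5, q), math.gamma(2.5))
```

This brute-force truncation is only a demonstration of the limit; a production routine would use an iterative or asymptotic scheme rather than tens of thousands of product factors.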
Further reading
References
Gamma and related functions
Q-analogs

Ramlösa hälsobrunn

Ramlösa hälsobrunn ('Ramlösa health well') is a mineral spa located in Ramlösa in southeastern Helsingborg, Sweden, founded on 17 June 1707 by Johan Jacob Döbelius. The well was built around a chalybeate (iron-containing) spring, which Döbelius investigated in 1701, 1705 and 1706. During the 18th century, the reputation of the well increased and it was visited by guests from both Sweden and Denmark.
Ramlösa well had its heyday in the early 19th century, when several members of the royal family and members of the nobility visited the facility regularly. In the late 1890s, a new and more mineral-rich alkaline spring was found, which was the start of bottled water in the country and the modern Ramlösa company. At the same time, the original activity of the mineral spa declined, and it was finally closed in 1973. Following threats of development in the 1970s, several of the buildings were protected as listed buildings and almost the entire park was classified as a protected area within the listed buildings designation.
History
Discovery and establishment
Rumors about the chalybeate springs in Ramlösa and their curative effect had existed for a long time. During the Scanian War of 1675–1679, Charles XI's soldiers were said to have come there to recuperate. In Sven Lagerbring's description of the life of the field marshal Count Ascheberg, it is stated that in 1677, soldiers who were plagued by field sickness (an outdated term for infectious gastrointestinal ailments among troops) recovered after quenching their thirst at the spring at Ramlösa. At the end of the war, in 1679, the Swedish headquarters were located in Western Ramlösa. The governorate physician and professor of medicine at Lund University, Johan Jacob Döbelius, drew attention to the well water in Ramlösa at the beginning of the 18th century. Although he had been warned by the inhabitants of the village about robbers in the forest where the spring was located, he examined the water several times: in 1701, 1705 and 1706. The water had the same composition as the water in several Swedish wells containing naturally carbonated water with high iron content: iron in the form of carbonated iron oxide. After this, Döbelius wrote about how the water's curative effect counteracted several ailments:
Döbelius turned to the then-governor of Scania, Magnus Stenbock, who gave permission and assistance to clear the area around the well. On 17 June 1707, on the 25th birthday of Charles XII, the well was inaugurated. A thousand people had gathered at the well to drink its water, but when Döbelius pointed out that they had to stay at the well for a longer period of time, the number of patients was reduced to about 40. In 1708, Döbelius published a book whose title translates as 'Description of the invention of the Ramlösa health well, its location, nature, effect and proper use', in which he tells of patients for whom the mineral water had an effect. The year after the opening, several patients returned and word of the well spread, especially among the gentry, thanks to Döbelius' dual role as a prominent physician and businessman. The well developed into a clinic and hospital where many sick people went to drink the water. The treatment likely consisted of drinking 15 to 17 glasses of the Ramlösa water every morning to cleanse the body. In addition, a lively entertainment life developed, with people from various social classes gathering to socialize in the spring's surroundings.
After Döbelius, medical doctor Hans Roslin took over the well in 1713, but when he was appointed provincial physician in Kristianstad, he lost contact with the facility and in 1727 the city physician in Malmö, Kilian Stobæus, was instead appointed as well doctor. It was largely thanks to Stobæus that Ramlösa's reputation as a health resort grew, largely because of his good contacts, including in Denmark, but also because of his popularity as a doctor; although Stobæus was sickly and had a limp, he always cared for his patients. Stobæus had several disciples, among them Carl Linnaeus, and he remained as a well doctor until his death in 1742, although his followers took care of the patients in the later years.
Linnaeus was to return to Ramlösa, together with his student Olof Söderberg, during his trip to Scania in 1749. During his visit to Helsingborg, he stayed with the mayor, Petter Pihl the Younger, at his residence , and then travelled from there to Ramlösa mineral spa. About the well Linnaeus noted:
Around the middle of the 18th century, a long timber-framed well house was built with only five guest rooms. Most of the guests at Ramlösa well instead rented accommodation from locals. The same year Carl Linnaeus visited Ramlösa, 1749, architect and court superintendent Carl Hårleman also visited the well. Hårleman described the well's guests, who during his stay consisted of retired soldiers and ordinary peasants as well as diplomats and civil servants. Hårleman had a runestone erected to commemorate his stay at the well. The stone was placed on the hill opposite the well house and the runic inscription reads:
Before Hårleman's visit, the area around the spring had not been maintained in any particular way, but had retained its natural form. In his memoirs, Gustaf Adolf Reuterholm describes how his mother, on a visit in the 1730s, had found the mineral spa divine in its primitive appearance, describing it as "made only by nature, but so marvelous that art could not have better established it". Hårleman became one of the initiators of the rehabilitation of the spring. The area was enclosed, new trees were planted and new roads and paths were laid out.
Company establishment
Ramlösa well flourished under the stewardship of from 1760 until his death in 1796. During this time, the number of visitors increased considerably and attempts were made to purchase land adjacent to the well so that a new hotel could be built there with room for 200 guests. However, none of the neighboring farms were willing to sell, as they would then lose income from the well's guests who rented accommodation for the summer. To capitalize on the well's popularity, a limited company was set up to raise capital for the maintenance of the facilities. The first owners consisted mainly of Scanian landowners, but also of doctors and merchants. The first board of directors consisted of County Governor , Major General , Secretary of State , merchants Carl Magnus Tönningh and , then-well doctor and Baron (friherre) Rutger Macklean. In the same year, a request was sent to the king, Gustav IV Adolf, to acquire new land in this way. The king not only granted the purchase, but also had a plan drawn up for the well's expansion, and von Rosen thus succeeded, through the state, in buying half a manor () in Köpinge for the well. In 1801, the well received its first royal privileges, which stipulated that it was entitled to charge for the water from the well's guests. The fee for the well house was one riksdaler. Wealthier people had to pay two riksdaler and the common people 24 shillings. The priest, bell-keeper and well-keeper were paid at discretion. However, the poor could still drink from the water free of charge. With the help of the new share capital of 16,000 riksdaler, it was possible to expand the facility; for example, the well house was rebuilt, a new guesthouse – now the doctor's villa – was added, new stables were built, and four new buildings for guest rooms were constructed. A bathhouse was built on the beach between Helsingborg and Råå, where guests were taken by horse-drawn carriage, but it burned down in 1811. The bathhouse was located on the town's land. 
It was 28 ells long and 15 ells wide and had been donated to the well by one of the members of the well board, Tönningh. In addition, several landowning families built villas around the park, including the Trolle, , Hamilton and Wedderkopp families.
The guests of the well stayed in Ramlösa or Köpinge and also in Helsingborg. Drinking from the well was accompanied by loud music. There was dancing every Sunday and Thursday between 4:00 and 7:00 p.m. There was also a visit to the Comedie House in Helsingborg. Twice a week collections were made for the poor and medicines were also given to them for free. There was no hospital, so the poor were accommodated in barns and lodges in Ramlösa and Köpinge. In 1802 a special dwelling house was built for the poor. The number of guests had increased over the years: in 1796: 74, in 1797: 55, in 1798: 81, in 1800: 106 and in 1801: 110. If the poor were also included, the number of guests at the well was almost 200.
von Platen and von Dannfelt
Due to the many new buildings, the capital soon ran out, which meant that the court marshal's proposal to lease Ramlösa Well for 50 years was gladly accepted. Together with the well doctor Eberhard Zacharias Munck af Rosenschöld, von Platen was responsible for the well's new glory days: von Platen through his investments in and expansions of the facility, and Munck af Rosenschöld through his contacts, building up the reputation of the mineral spa as a social haven. In 1807, for example, von Platen had the first hotel built within the grounds of the well – a long and wide two-storey timber-frame building with a large salon, two halls and four large guest rooms on the ground floor, and 20 guest rooms in total. He also built new stables for a total of 44 horses and ten carriages and made improvements to the park. Five barrels of the 70 barrels of land to the west of the park owned by the well were set aside for gardens and the planting of 100 fruit trees. Despite doctor Munck af Rosenschöld's many disagreements with King Charles XIV John, as a liberal member of parliament, many members of the royal family visited the park regularly. Crown Prince Oscar (later Oscar I) in particular was a frequent visitor. The royal splendor attracted Scanian society, as well as much of the capital's upper class, to balls and other events. The period was depicted by the publicist as follows:
In 1824, von Platen transferred the well to Colonel von Dannfelt for 18,000 riksdaler, and soon Dannfelt also owned the majority of the well's shares. Dannfelt built on what von Platen had started; for example, he rebuilt the bathhouse at Öresund, and a new hospital was built on the eastern side of the area with room for 80 patients. The park was further refurbished in the romantic English style, with walkways designed to attract visitors. During this time, entertainment was lively and several balls and concerts were held at the well; guests could rent horses for excursions, and there were opportunities for card and billiard games. In 1824, Charles XIV John visited the park and the possibility of building a special residence for the royal family was discussed when they visited the well. But when the plans were presented in Stockholm the following year, the king had changed his mind and instead chose to donate a sum of 8,000 riksdaler to the park, to be used to subsidize stays for those who could not afford a stay at the well themselves. This donation laid the foundations for what was to become the well hospital, which was completed in 1835. It became one of the most lavish buildings in the park and housed 46 hospital beds. In 1828, Queen Désirée visited the well and in her honor Dannfelt had a "Queen's Garden" planted in front of the house she lived in, now known as Villa Desideria.
Frequent changes of ownership and the new company
In 1840, the well suffered a major setback when both the popular Doctor af Rosenschöld and the owner Dannfelt died within a year. There followed a period of many changes of ownership and great confusion. Dannfelt's relatives, who had inherited the majority of the shares, transferred the lease to Lieutenant Påhlman in 1842 and then sold it to the manager Olof Westergren. Westergren died in 1849 and the unclear ownership of the well led to fears that the well would be split up, so twelve farmers got together and bought the well the same year. However, ten of the farmers withdrew the following year, which meant that the well was then owned by the gardener Carl Hultberg and estate owner Jöns Pålsson. Pålsson's father bought the well in 1853, but died shortly afterwards. His heirs planned to divide up the surrounding land and sell it off, and at the same time the 50-year lease from 1805 was about to expire, so in 1855 a new company was formed by Scanian businessmen, headed by Ryttmästare (cavalry captain) Rudolf Tornérhjelm, who took over the well.
The new company began an intensive period of refurbishment and new construction, with a new bathhouse built at the spring, the old well house demolished and a new hall completed in 1862, paths built and trees felled to open up the park. In 1865 the park's communications were improved with the completion of the railway to Landskrona, along which Ramlösa had its own station, located where the present Ramlösa Station is. In addition, a few years later a number of horse-drawn buses began to run between Helsingborg and the well. In 1875, train connections were further improved when the was built and the track was laid just south of the well park. A second station was established for Ramlösa, Ramlösa Well station.
With the help of the financier , Doctor Wallis took over the well in 1876, both as owner and manager. Wallis refurbished the aging facilities, for example building a bathhouse by the sound, containing both a restaurant and a music pavilion, which he connected to the park with a horse-drawn railway – Sweden's first when it opened in 1877. New villas were built, increasing the number of rooms from 100 to 170. In 1879, the old hotel burned down and was replaced a few years later by a new and larger one, which is largely the hotel that remains today. Wallis did much to restore the reputation of the well after the somewhat chaotic period, and it was to his credit that the Ramlösa mineral spa survived the end of the 19th century.
In 1882, the well's owner changed again. This time it was a consortium of Swedish and Danish investors. They transformed the well park into a more leisure-oriented park, ('the Mountain'), containing a miniature amusement park, a dance floor, shooting range, saloon and more. The amusement area was set up on the hill north of the spring where flowers and trees were planted. All these investments attracted large numbers of visitors, who from 1891 could reach the well by means of a new narrow-gauge railway: the Decauville, or "Little Ghost", as it was also called. On the other hand, faithful well visitors found it difficult to recognize the new surroundings and the well doctors did not stay particularly long each season because of the disorder. In 1896 in Helsingborg opened, taking a large proportion of the amusement park visitors, and by the end of the 19th century the traditional well business had returned to normal.
New well and decline of the spa
At the end of the 19th century, drilling for coal and clay was carried out in the area around the park, but large amounts of water were encountered instead, which caused the drilling to be stopped. At the same time, Doctor Claus, the manager of the well at the time, had problems with limescale in his steam boilers. At the suggestion of the managing director, Baron Uggla, they tried to replace the stream water with water from the borehole, and to their great delight the limescale disappeared. This led to the water being analyzed; the results showed that it was very pure and rich in minerals, on a par with the best German health water, Apollinaris, from Bad Neuenahr. However, it was not until the beginning of the 20th century that a bottling plant was set up. The first factory was built in 1912 and was located next to the bathhouse in the park. Sales were slow at first, but in 1914 they picked up and increased by 50 percent. At the same time, rumors began to circulate that the original well was dying, as indeed it appeared to be, and the management tried to cover this up. On closer inspection, however, it was found that the pipes had burst again, and the problem was solved by installing new pipes. At the same time, further drilling was carried out and another spring was found, with better water quality than the old one.
In the 1920s, the income from the water-bottling plant was greater than that from the traditional well operation. The reason for this was that society now had better health care facilities, which increasingly competed with the mineral spas. In the 1930s, sales of bottled water continued to increase. The market had now expanded abroad and, in addition to Denmark, where Ramlösa had long been known, the water was exported to Finland, Great Britain, the Netherlands, France, Turkey, Syria, Mandatory Palestine, and Egypt. The old well had finally become a financial burden for the company and was kept mostly for tradition's sake. Sales declined during World War II, and several of the park buildings were instead used as refugee facilities for Jews fleeing Denmark during the expanded persecutions that began in 1943. On 9 April 1945, released prisoners from the German concentration camps arrived in Helsingborg in Folke Bernadotte's white buses, and many of them were received at Ramlösa well. From 1943 to 1945, over 13,000 refugees were received, 200 to 700 per day. When peace returned, the turnover of the water-bottling plant rose sharply; the plant had to be expanded twice and was finally moved to a completely new facility in the Ättekulla Industrial Area in 1973. The same year, the spa operations at the mineral spa were closed down for good; in that final year the well hospital had 66 patients.
Even before the closure of the well, plans had been made for the use of the park area and buildings. In 1967, AB Ramlösa Brunnsanläggning was established with the aim of creating a year-round well, where the old environment was to be preserved without any radical changes. However, two of the main owners, the construction companies Hermanssons Byggnads AB and JM Byggnads AB, were opposed and felt that the operation could not be maintained without extensive building projects to finance it. One of the plans was to demolish the old Great Hotel and replace it with a service building containing a gallery, library and conference facilities. They also wanted to demolish the bathhouse and build tennis courts. In addition, a major hospital pavilion, townhouses and a 13-storey hotel building were to be built. These plans met with strong protests, led by Hans Alfredson among others, and in 1973 the company instead donated the park to the city and was content to build a townhouse area outside the park.
Well managers 1707–1973
Medicinal properties
According to the well manager Pehr Unge, the water had proved effective against the following ailments: nausea and vomiting in the mornings, constipation, colic, phlegm and insomnia, chest pain and heartburn, flatulence, worms, melancholy, hysteria and convulsions, as well as menstrual disorders, gout, rheumatism, contractures, urinary tract disorders, paralysis and even skin conditions. He also cited a couple of chest diseases said to have been healed by the water.
Ramlösa Hälsobrunn
Today's Ramlösa Hälsobrunn has its roots in the old limited company founded in 1798, although the operations are now entirely different: the business consists only of bottling the water from the various wells of the old operation. The company was bought by the Norwegian conglomerate Orkla in 1995. AB Ramlösa Hälsobrunn ceased to exist as a separate company in 1999, and since 2001 the business has been part of Carlsberg Sverige AB, which is owned by the Danish brewery group Carlsberg Breweries. The plant bottles over 80 million liters of water annually, of which about 10% is exported to various countries.
Well park
Park design
The park is situated around a natural valley in the landscape, which divides the park into three parts: a southern part consisting of the mineral spa buildings and a number of villas; the valley floor, containing the springs and the well pavilion; and a northern part consisting of deciduous forest. The park has elements of both a French park, with long axes and sight lines, and an English park, with undulating lawns.
At the southern end there are two older official entrances, one to the west and one to the east. From the western entrance, where one can see a preserved wooden entrance corridor of unknown date, one is led through a long avenue, now known as Von Platen's allé. The trees in the avenue were formerly linden, but are now mostly horse chestnut, which creates a dense canopy over the road, along with elm, linden and maple. The avenue is bordered on both sides by English parkland interspersed with a number of residential buildings, up to the Great Hotel, where the park finally opens up. In front of the main entrance to the hotel, on its northern side, there is an open space paved with gravel, and to the north a shrub-fenced garden with fountains and flowers, laid out in the 1930s. To the north of the garden are the timber-frame houses Villa Linnéa and Villa Viola, and beyond them the old bath house. Further east from the hotel is a row of villas on the north side of the walkway: Villa Bellis, Villa Salvia and the Doctors' Villa; to the south is the Accountant's Villa. At the far end of the walkway is the well hospital, and just before it the walkway branches off to the south to lead to the second entrance gate. Previously, a poplar avenue led from here down to the spring in the valley; however, the avenue no longer exists.
The part of the park located in the valley, usually referred to as the Ramlösa Valley, has an open character with large grassy areas interspersed with a number of smaller trees and shrubs, where rhododendrons are abundant. This part has an intimate feel as it is bordered on both sides by steep slopes with compact greenery, giving the impression of being enclosed. In the eastern part is the Water Pavilion, built in 1919–1921, just south of the original iron spring, marked by the exposed rock wall, stained red by the ferruginous water. Various concerts and events are held here from time to time, especially on Well Day. At the centre of the open grassy area, towards the valley slope, is another spring, the old alkaline spring. This is enclosed by an artificial cave, now enclosed by a fence. The valley is home to a number of unique biotopes for lichens, fungi, insects, birds and plants.
From the Ramlösa Valley, the so-called Philosophical Path leads up to the second hill, which consists of a mixed deciduous forest that used to be dominated by oaks and was therefore called ('the Oak Hill'), but nowadays the proportion of beech has increased. Along the path are several initials carved into the trees, as this was a popular romantic walking route in the days of the mineral spa. At the beginning of the Philosophical Path stands Hårleman's runestone (1750), donated and designed by the 18th-century architect Carl Hårleman, where he gives thanks for a refreshing stay.
The park is home to a number of unusual trees, including Caucasian wingnut (Pterocarya fraxinifolia), Turkish hazel (Corylus colurna) and dawn redwood (Metasequoia glyptostroboides) around Villa Primula. There is also a large tulip tree (Liriodendron tulipifera) between the villas Bellis and Salvia, which offers a heavy flowering display in early summer, as well as London plane (Platanus × hispanica) and black locust (Robinia pseudoacacia) at Villa Malva. To the east of the Great Hotel grows a sweet chestnut (Castanea sativa).
Park buildings
There are 22 individual buildings in the park. To guard them against development threats such as those of the 1970s, 13 of the buildings were listed between 1973 and 1974; in practice, however, almost all of the buildings are protected, as the park forms a conservation area covered by the listed-building designation.
Historic buildings
Doctor's villa
Ramlösa Well Hotel
Bath house
Water Pavilion
Villa Bellis was built in 1807 and is a single-storey, yellow-painted wooden building with board-and-batten siding and a mansard roof. The corners of the building, window linings, verandas and balconies are painted white. The building was constructed as a semi-detached house, i.e. with accommodation for two families, and therefore has two entrances, both of which are located on the south long side of the building. The entrances are set in porches with glazed sides, on which balconies are placed, and between these there are two more balconies. Apart from the two balconies on the porches, the building is identical to Villa Salvia. The architect is unknown. The house became a listed building in 1973.
Villa Iris is a one-and-a-half-storey white-plastered building, probably built in the early 19th century. The building has a gabled roof with red eternit fiber cement tiles, and metal-clad chimneys and dormers. The south façade of the building has a frontispiece with exposed timber framing and a veranda with a balcony above. The gables of the building also have exposed timber framing and the west gable also has a veranda. The villa was built as a private residence and therefore its construction is not recorded in the well's records until it passed into its ownership in 1878. It has seven rooms downstairs and eight rooms upstairs and is now a private residence. The villa became a listed building in 1973.
Villa Linnéa was designed by architect Alfred Hellerström and built in 1896. The house is a one-and-a-half-storey angular building with a gabled roof and a façade of imitation timber framing with red patterned brickwork. The roof is covered with grey fiber cement tiles. Building details such as windows, balustrades and doors are painted in pastel yellow. The villa has balconies on both gables, and a south-facing veranda has been added at the corner. The building became a listed building in 1973. The design and background are the same as for the Pyrola, Veronika and Viola villas, although there are some individual differences.
Villa Malva was built around 1880 and is a wooden building clad in yellow painted horizontal panels. The house is on one and a half floors with a gable roof clad in red fiber cement tiles and the gable ends and the underside of the eaves clad in seam panelling with sawtooth frieze. The main entrance is located on the house's southern long side and is accentuated by a frontispiece, as well as a porch with a balcony above. Both gables have extended rooms with balconies on the roof. The building probably replaced a building on the same site that burned down in 1879 and consists of seven rooms on the ground floor and six rooms on the upper floor. The building became a listed building in 1973 and in 1977 the interior was converted to house three apartments.
Villa Pyrola was designed by architect Alfred Hellerström and built in 1896. It is identical to Villa Linnéa and was declared a listed building in 1973.
Villa Salvia was built in 1800 and is identical in design to Villa Bellis above, except that it has no balconies above the porches. The architect is also unknown and the villa became a listed building in 1973.
Villa Tora is the smallest villa in the park and was built sometime between 1878 and 1899. The building is single-storey with a strongly pitched gable roof clad in red fiber cement panels. The façades are divided into bays of white-painted timber profiles, with the spaces between them formed by yellow-painted standing chamfered panels at the level of the windows and varying horizontal and diagonal beadboard panels in the bays above and below the windows. The villa is adorned with rich joinery. The main entrance is located to the south under a raised dormer window. The building probably stood on another site in the past and may have served as a bathing pavilion or ticket office and waiting room for the horse-drawn railway.
Villa Veronika was designed by architect Alfred Hellerström and built in 1896. It is identical to Villa Linnéa, except that both gables have half-hipped roofs, and like it became a listed building in 1973.
Villa Viola was designed by architect Alfred Hellerström and built in 1896. It is identical to Villa Linnéa, except that it is mirrored and that both gables have half-hipped roofs, and like Villa Linnéa it became a listed building in 1973.
Other
The well hospital () consists of two buildings, the oldest of which was built in 1835 and designed by architect Fredrik Blom. It is a two-storey yellow-brick building with a hipped gable roof and white profiled bands, broken on the ground floor by the window cornices, which are also white. The upper-floor windows, however, have straight brick arches. The main entrance is located on the west façade and is surrounded by white pilasters with black bases, which support a profiled cornice with the inscription LASARETT. The eastern long side has a two-storey extension, added in 1928. In 1982 there were plans to declare the building a listed building, but due to its poor condition this never came about. The second building is the hospital annex, built in 1885. It is a two-storey building whose façades are clad in horizontal yellow-painted beadboard with white corner boards and bands. Enclosures, balconies and other building details are also white. The house is covered by a hipped gable roof with red tiles. The building was extended to the west in 1934.
The two-and-a-half-storey accountant's villa was built in 1829 in yellow plaster. The gable end is clad in chamfered paneling with sawtooth frieze, which, like the building's joinery, is painted brown. The windows, on the other hand, are painted white and are of the central mullion type with glazing bars, where those in the gable ends have white painted coverings. At the main entrance, on the north long side, the villa has an extended porch with an overhanging balcony with glazed sides, which in turn is covered by a small projected gabled roof. Above the balcony is a smaller paneled frontispiece, which has a similar counterpart on the opposite side of the house.
The office villa was built in 1932 and designed by architect . It is a single-storey building with a furnished attic and a half-hipped roof covered with red tiles. The roof is interrupted on both long sides by frontispieces and, on either side of these, by groups of three metal-clad dormers. In front of the southern frontispiece, a small extension protrudes, on which a balcony is placed. The façades are covered in grey paneling, while building details such as windows, doors and corner boards are painted white.
Villa Begonia was designed by and built in 1913. It is a two-storey building with a plastered façade, with corners articulated by white corner links and a tented roof of red tiles. On the east and west façades, the building has loggias at both corners, with separate openings with single staircases, leading out to the garden. The villa's four rooms could be rented by the well guests during the summer, but now the ground floor is used by a preschool.
Villa Desideria was built in 1801 for the Trolle family. It is a one-and-a-half-storey timber-frame house with brown painted oak and yellow brick in the bays. The building's long side façades have frontispieces to the north and south, while the gables have secondary porches to the east and west, added in 1899. The main entrance faces south and is located under a carved porch added between 1873 and 1879. The carpentry details are painted brown, except for the veranda balustrade and the windows and linings of the porch, which are painted white. The villa takes its name from Queen Desideria, the wife of Charles XIV John, who stayed in the villa during her stays at the spa. Previously, the house was called the Old Trolle House.
Villa Flora was built in 1920 and designed by architect Ola Andersson. It is a large, plastered two-and-a-half-storey building with a hipped gable roof and red tiles. Its southern façade has a projecting section with loggias on both floors. The site had previously been occupied by the New Trolle House from 1802, which was demolished in 1920. Villa Flora was built for renting out rooms during the summer well season and contained 20 rooms with 36 beds. In 1979, the building was renovated internally and converted into a multi-unit dwelling with eight apartments.
Villa Primula, also known as Villa Viktoria, was built in 1896 and is a one-and-a-half-storey angular building with yellow-painted tongue-and-groove panelling on the ground floor and pearl-painted panelling on the upper floor. The roof is gabled with tiles and has two frontispieces to the south, one larger and centrally located and one slightly smaller on its eastern side. Below the central frontispiece is a smaller extension supporting a balcony above. The extension is flanked on the ground floor by verandas on either side. To the east is a pentagonal staircase tower, now converted into alcoves. Building details such as corner boards, skirting, linings and other joinery are painted white. The villa was refurbished in 1976, when the paneling, joinery and windows were replaced.
See also
Mineral spring
Loka Brunn
References
Citations
Sources
Further reading
External links
Ramlösa.se – homepage for the Ramlösa park
Helsingborg
Springs of Sweden
Mineral water
18th century in Skåne County
Picryl chloride

Picryl chloride is an organic compound with the formula ClC6H2(NO2)3. It is a bright yellow solid that is highly explosive, as is typical for polynitro aromatics such as picric acid. Its detonation velocity is 7,200 m/s.
Reactions
The reactivity of picryl chloride is strongly influenced by the presence of three electron-withdrawing nitro groups. Consequently, picryl chloride is an electrophile, as illustrated by its reaction with sodium sulfite to give the sulfonate:
ClC6H2(NO2)3 + Na2SO3 → NaO3SC6H2(NO2)3 + NaCl
Picryl chloride is also a strong electron acceptor. It forms a 1:1 charge-transfer complex with hexamethylbenzene.
References
Explosive chemicals
Nitrobenzene derivatives
Syrian camel

The "Syrian camel" is an extinct, undescribed species of camel from Syria. It has been discovered in the Hummal area of the western Syrian Desert. Found to have existed around 100,000 years ago, the camel was up to tall at the shoulder, and tall overall. The first fossils were discovered late in 2005, and several more about a year later. The camelid was found together with Middle Paleolithic human remains.
See also
Megacamelus
Titanotylopus
Megatylopus
References
External links
Mammals of the Middle East
Nomina nuda
Pleistocene extinctions
Prehistoric camelids
Isolation amplifier

Isolation amplifiers are a form of differential amplifier that allow measurement of small signals in the presence of a high common mode voltage by providing electrical isolation and an electrical safety barrier. They protect data acquisition components from common mode voltages, which are potential differences between instrument ground and signal ground. Instruments that are applied in the presence of a common mode voltage without an isolation barrier allow ground currents to circulate, leading in the best case to a noisy representation of the signal under investigation. In the worst case, assuming that the magnitude of common mode voltage or current is sufficient, instrument destruction is likely. Isolation amplifiers are used in medical instruments to ensure isolation of a patient from power supply leakage current.
Amplifiers with an isolation barrier allow the front-end of the amplifier to float with respect to common mode voltage to the limit of the barrier's breakdown voltage, which is often 1,000 volts or more. This action protects the amplifier and the instrument connected to it, while still allowing a reasonably accurate measurement.
These amplifiers are also used for amplifying low-level signals in multi-channel applications. They can also eliminate measurement errors caused by ground loops. Amplifiers with internal transformers eliminate external isolated power supply. They are usually used as analogue interfaces between systems with separated grounds.
Isolation amplifiers may include isolated power supplies for both the input and output stages, or may use external power supplies on each isolated portion.
Concepts
Signal source components
All signal sources are a composite of two major components. The normal mode component (VNM) represents the signal of interest and is the voltage that is applied directly across the inputs of the amplifier. The common mode component (VCM) represents the difference in potential between the low side of the normal mode component and the ground of the amplifier that is used to measure the signal of interest (the normal mode voltage).
In many measurement situations the common mode component is irrelevantly low, but rarely zero. Common mode components of only a few millivolts are frequently encountered and largely and successfully ignored, especially when the normal mode component is orders of magnitude larger.
The first indicator that common mode voltage magnitude is competing with the normal mode component is a noisy reproduction of the latter at the amplifier's output. Such a situation does not usually define the need for an isolation amplifier, but rather a differential amplifier. Since the common mode component appears simultaneously and in phase on both amplifier inputs, the differential amplifier, within the limits of the amplifier's design, can reject it.
However, if the sum of the normal mode and common mode voltages exceeds either the differential amplifier's common mode range, or maximum range without damage then the need for an isolation amplifier is firmly established.
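The decomposition described above can be put into numbers with a small sketch (the helper function and all voltage values are invented for illustration; following the definition here, the common mode component is taken as the potential of the low-side input relative to amplifier ground):

```python
def decompose(v_plus: float, v_minus: float) -> tuple[float, float]:
    """Split the two input potentials (each measured against amplifier
    ground) into the normal-mode component across the inputs and the
    common-mode component between the low side and ground."""
    v_nm = v_plus - v_minus   # normal mode: the signal of interest
    v_cm = v_minus            # common mode: low side relative to ground
    return v_nm, v_cm

# A 10 mV sensor signal riding on a 5 V common-mode offset:
v_nm, v_cm = decompose(5.010, 5.000)
print(f"normal mode ~{v_nm:.3f} V, common mode {v_cm:.1f} V")
```

An ideal differential amplifier would report only the normal-mode component; how well a real one suppresses the common-mode component is captured by its CMRR.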
Operating principles
Isolation amplifiers are commercially available as hybrid integrated circuits made by several manufacturers. There are three methods of providing isolation.
A transformer-isolated amplifier relies on transformer coupling of a high-frequency carrier signal between input and output. Some models also include a transformer-isolated power supply, that may also be used to power external signal processing devices on the isolated side of the system. The bandwidth available depends on the model and may range from 2 to 20 kHz. The isolation amplifier contains a voltage-to-frequency converter connected through a transformer to a frequency-to-voltage converter. The isolation between input and output is provided by the insulation on the transformer windings.
An optically isolated amplifier modulates current through an LED optocoupler. The linearity is improved by using a second optocoupler within a feedback loop. Some devices provide up to 60 kHz bandwidth. Galvanic isolation is provided by the conversion of electric current to photonic flux through the space between the LED and the detector, regardless of the intervening medium.
A third strategy is to use small capacitors to couple a modulated high-frequency carrier; the capacitors can stand off large DC or power frequency AC voltages but provide coupling for the much higher frequency carrier signal. Some models on this principle can stand off 3.5 kilovolts and provide up to 70 kHz bandwidth.
Isolation amplifier usage
Isolation amplifiers are used to allow measurement of small signals in the presence of a high common mode voltage. The capacity of an isolation amplifier is a function of two key isolation amplifier specifications:
The amplifier’s isolation breakdown voltage, which defines the absolute maximum common mode voltage that it will tolerate without damage. Specifications of 1,000 volts and more are common.
The amplifier’s common mode rejection ratio (CMRR). The CMRR specification defines the degree to which the common mode voltage will disrupt the normal mode component measurement, and therefore affect measurement accuracy.
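The practical effect of the CMRR figure can be illustrated with the standard decibel (20·log10 voltage-ratio) definition; this worked example is not from the source:

```python
def common_mode_error(v_cm: float, cmrr_db: float) -> float:
    """Input-referred error voltage produced by a common-mode voltage
    v_cm in an amplifier with the given CMRR in decibels
    (standard 20*log10 voltage-ratio definition)."""
    return v_cm / 10 ** (cmrr_db / 20.0)

# A 100 V common-mode voltage against 120 dB of rejection leaves only
# 100 microvolts of input-referred error:
print(common_mode_error(100.0, 120.0))  # 0.0001
```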
The frequency of the common mode voltage can adversely affect performance. Higher-frequency common mode voltages create difficulty for many isolation amplifiers due to the parasitic capacitance of the isolation barrier. This capacitance presents a low impedance to higher-frequency signals, allowing the common mode voltage to couple across the barrier and interfere with measurements, or even damage the amplifier. However, most common mode voltages are a composite of line voltages, so frequencies generally remain in the 50 to 60 Hz region with some harmonic content, well within the rejection range of most isolation amplifiers.
Differential amplifiers
A non-isolated differential amplifier does not provide isolation between input and output circuits. They share a power supply and a DC path can exist between input and output. A non-isolated differential amplifier can only withstand common-mode voltages up to the power supply voltage.
Similar to the instrumentation amplifier, isolation amplifiers have fixed differential gain over a wide range of frequencies, high input impedance and low output impedance.
Amplifier selection guidelines
Instrumentation amplifiers can be classified into four broad categories, organized from least to most costly:
Single-ended. An unbalanced input, non-isolated. Suitable for measurements where common mode voltages are zero, or extremely small. Very inexpensive.
Differential. A balanced input, non-isolated. Suitable for measurements where the sum of common mode and normal mode voltages remains within the measurement range of the amplifier.
Single-ended, floating common. An isolated and quasi-balanced input (the floating common is typically connected to the (-) input of a differential amplifier). Suitable for off-ground measurements up to the breakdown voltage of the isolation barrier, and exhibits very good common mode rejection (100 dB typical).
Differential, floating common. An isolated and balanced input. Suitable for off-ground measurements up to the breakdown voltage of the isolation barrier, and exhibits superb common mode rejection (>120 dB).
For most industrial applications that require isolation, the single-ended floating design provides the best price/performance.
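The four categories above might be mapped onto a simple selection rule like the following sketch; the function, its thresholds and its return strings are illustrative only, not a published guideline:

```python
def pick_amplifier(v_cm: float, v_nm: float, amp_range: float) -> str:
    """Pick an amplifier class for a measurement with common-mode
    voltage v_cm and normal-mode voltage v_nm, given the maximum
    input range amp_range (volts) of a non-isolated amplifier."""
    if abs(v_cm) < 1e-3:
        return "single-ended"                   # essentially no common mode
    if abs(v_cm) + abs(v_nm) <= amp_range:
        return "differential"                   # within the amplifier's range
    return "single-ended, floating common"      # isolation barrier required

print(pick_amplifier(0.0, 1.0, 10.0))    # single-ended
print(pick_amplifier(5.0, 1.0, 10.0))    # differential
print(pick_amplifier(500.0, 1.0, 10.0))  # single-ended, floating common
```

A differential, floating-common amplifier would be preferred over the single-ended floating type when the best possible common mode rejection is required.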
There are also two broad classifications of isolation amplifiers that should be considered in tandem with the application:
Amplifiers providing input-to-output isolation without channel-to-channel isolation. This is a less expensive form of isolation that offers only one isolation barrier for a multi-channel instrument. Although the commons of each channel are isolated from power ground by the input-to-output isolation barrier, they are not isolated from each other. Therefore a common mode voltage on one will attempt to float all the others, sometimes with disastrous results. This form of isolation is suitable only when it is certain that there is only one common mode voltage that is equally applied to all channels.
Amplifiers providing both input-to-output and channel-to-channel isolation. This is the purest form of isolation, and the option that should be considered for nearly all applications. Multi-channel instruments that employ it are immune to inconsistent common mode voltages on any combination of channels within the limits of the amplifiers.
Typical application
Stacked voltage cell measurements
Stacked voltage cell measurements are common with the growing popularity of solar cells and fuel cells. In this application the technician wants to profile the performance of individual series-connected voltage cells, but the need for an isolated amplifier is often overlooked. Each voltage cell (the normal mode voltage) is removed from ground by an amount equal to the sum of the voltage cells below it (the common mode voltage). Unless the amplifiers used to measure individual cell voltages are allowed to float at a level equal to the common mode voltage, measurements are not likely to be accurate for any but the first cell in the string, where the common mode voltage is zero.
A non-isolated differential amplifier can be used but it will have a rated maximum common mode voltage that cannot be exceeded while maintaining accuracy.
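The arithmetic behind the common mode voltage in a cell stack is easy to sketch. In this illustrative Python fragment (the function name is made up for the example), each cell's common mode voltage is the running sum of the cells beneath it:

```python
def common_mode_voltages(cell_voltages):
    """Common mode voltage seen at each series-connected cell, listed
    from ground up: the sum of all cell voltages below that cell."""
    offsets = []
    total = 0.0
    for v in cell_voltages:
        offsets.append(total)  # this cell's (-) terminal sits here above ground
        total += v
    return offsets

# Four 0.5 V solar cells in series: the top cell floats 1.5 V off ground,
# so its amplifier must reject a 1.5 V common mode signal to read 0.5 V.
print(common_mode_voltages([0.5, 0.5, 0.5, 0.5]))  # [0.0, 0.5, 1.0, 1.5]
```

For long strings the common mode voltage quickly exceeds the rated maximum of a non-isolated differential amplifier, which is why the isolated types are preferred here.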
References
External links
Learn the importance of isolation
A guide to isolation amplifier selection
Electronic design
Electronic amplifiers | Isolation amplifier | [
"Technology",
"Engineering"
] | 1,720 | [
"Electronic design",
"Electronic engineering",
"Electronic amplifiers",
"Amplifiers",
"Design"
] |
14,766,388 | https://en.wikipedia.org/wiki/MORF4L1 | Mortality factor 4-like protein 1 is a protein that in humans is encoded by the MORF4L1 gene.
Interactions
MORF4L1 has been shown to interact with MYST1, Retinoblastoma protein and MRFAP1.
References
Further reading | MORF4L1 | [
"Chemistry"
] | 57 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,766,618 | https://en.wikipedia.org/wiki/HOXB5 | Homeobox protein Hox-B5 is a protein that in humans is encoded by the HOXB5 gene.
Function
This gene is a member of the Antp homeobox family and encodes a nuclear protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded protein functions as a sequence-specific transcription factor that is involved in lung and gut development. Increased expression of this gene is associated with a distinct biologic subset of acute myeloid leukemia (AML) and the occurrence of bronchopulmonary sequestration (BPS) and congenital cystic adenomatoid malformation (CCAM) tissue.
See also
Myeloid tissue
References
Further reading
External links
Transcription factors | HOXB5 | [
"Chemistry",
"Biology"
] | 164 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,766,648 | https://en.wikipedia.org/wiki/IGHM | Ig mu chain C region is a protein that in humans is encoded by the IGHM gene.
It is associated with agammaglobulinemia-1.
References
Further reading
Proteins
Antibodies | IGHM | [
"Chemistry"
] | 42 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
14,766,680 | https://en.wikipedia.org/wiki/INCENP | Inner centromere protein is a protein that in humans is encoded by the INCENP gene. It is a regulatory protein in the chromosome passenger complex (CPC). It is involved in regulation of the catalytic proteins Aurora B and Aurora C. It acts in association with two other proteins - Survivin and Borealin. These proteins form a tight three-helical bundle. The N-terminal domain of INCENP is the domain involved in formation of this three-helical bundle while its C-terminal domain is responsible for the interaction with Aurora B.
In mammalian cells, two broad groups of centromere-interacting proteins have been described: constitutively binding centromere proteins and 'passenger' (or transiently interacting) proteins. The constitutive proteins include CENPA (centromere protein A), CENPB, CENPC1, and CENPD.
The term 'passenger proteins' encompasses a broad collection of proteins that localize to the centromere during specific stages of the cell cycle. These include CENPE; MCAK; KID; cytoplasmic dynein (e.g., DYNC1H1); CliPs (e.g. CLIP1); and CENPF/mitosin (CENPF). The inner centromere proteins (INCENPs), the initial members of the passenger protein group, display a broad localization along chromosomes in the early stages of mitosis but gradually become concentrated at centromeres as the cell cycle progresses into mid-metaphase. During telophase, the proteins are located within the midbody in the intercellular bridge, where they are discarded after cytokinesis.
Interactions
INCENP has been shown to interact with H2AFZ, Survivin and CDCA8. The ARK binding region has been found to be necessary and sufficient for binding to aurora-related kinase. This interaction has been implicated in the coordination of chromosome segregation with cell division in yeast.
References
Further reading
Protein families | INCENP | [
"Biology"
] | 417 | [
"Protein families",
"Protein classification"
] |
14,766,741 | https://en.wikipedia.org/wiki/Model%20K%20%28calculator%29 | The Model K was an early 2-bit binary adder built in 1937 by Bell Labs scientist George Stibitz as a proof of concept, using scrap relays and metal strips from a tin can. The "K" in "Model K" came from "kitchen table", upon which he assembled it.
References
American inventions
Calculators
Digital electronics | Model K (calculator) | [
"Mathematics",
"Engineering"
] | 73 | [
"Calculators",
"Electronic engineering",
"Digital electronics"
] |
14,766,781 | https://en.wikipedia.org/wiki/HEY1 | Hairy/enhancer-of-split related with YRPW motif protein 1 is a protein that in humans is encoded by the HEY1 gene.
Function
This gene encodes a nuclear protein belonging to the hairy and enhancer of split-related (HESR) family of basic helix-loop-helix (bHLH)-type transcriptional repressors. Expression of this gene is induced by the Notch and c-Jun signal transduction pathways. Two similar and redundant genes in mouse are required for embryonic cardiovascular development, and are also implicated in neurogenesis and somitogenesis. Alternative splicing results in multiple transcript variants.
Role in disease
HEY1::NCOA2 fusion, which may arise via a small deletion del(8)(q13.3q21.1), is highly specific for the diagnosis of mesenchymal chondrosarcoma.
References
Further reading
External links
Transcription factors | HEY1 | [
"Chemistry",
"Biology"
] | 192 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,768,005 | https://en.wikipedia.org/wiki/Conservation%20psychology | Conservation psychology is the scientific study of the reciprocal relationships between humans and the rest of nature, with a particular focus on how to encourage conservation of the natural world. Rather than a specialty area within psychology itself, it is a growing field for scientists, researchers, and practitioners of all disciplines to come together and better understand the Earth and what can be done to preserve it. This network seeks to understand why humans hurt or help the environment and what can be done to change such behavior. The term "conservation psychology" refers to any fields of psychology that have understandable knowledge about the environment and the effects humans have on the natural world. Conservation psychologists use their abilities in "greening" psychology and make society ecologically sustainable. The science of conservation psychology is oriented toward environmental sustainability, which includes concerns like the conservation of resources, conservation of ecosystems, and quality of life issues for humans and other species.
One common issue is a lack of understanding of the distinction between conservation psychology and the more-established field of environmental psychology, which is the study of transactions between individuals and all their physical settings, including how people change both the built and the natural environments and how those environments change them. Environmental psychology began in the late 1960s (the first formal program with that name was established at the City University of New York in 1968), and is the term most commonly used around the world. Its definition as including human transactions with both the natural and built environments goes back to its beginnings, as exemplified in these quotes from three 1974 textbooks: "Environmental psychology is the study of the interrelationship between behavior and the built and natural environment" and "...the natural environment is studied as both a problem area, with respect to environmental degradation, and as a setting for certain recreational and psychological needs", and a third that included a chapter entitled The Natural Environment and Behavior.
Conservation psychology, proposed more recently in 2003 and mainly identified with a group of US academics with ties to zoos and environmental studies departments, began with a primary focus on the relations between humans and animals. The field was introduced in ecology, policy, and biology journals, and some have suggested that it should be expanded to try to understand why humans feel the need to help or hurt the environment, along with how to promote conservation efforts.
Pioneers in this field
Who is involved
Researchers from many fields, including philosophy, biology, sociology, and industrial-organizational, health, and consumer psychology, along with related disciplines such as environmental education and conservation biology, come together to put their knowledge into practice by educating others to work together and by encouraging a harmonious relationship between humans and their environment. These psychologists work with institutions such as zoos and aquariums. Zoos and aquariums may seem to be merely places of recreation, but they work hard to deliver positive messages and to educate the public about the homes and needs of the animals that live there. Rather than simply exhibiting the animals, they look for ways to teach visitors the consequences of their day-to-day actions for animals and the environment. Psychologists and sociologists have been visiting workshops and think tanks at zoos to evaluate whether the animals are being presented to the best effect while still conveying informative knowledge to the public.
Research to consider
What characterizes conservation psychology research is that in addition to descriptive and theoretical analyses, studies explore how to cause the kinds of changes that lessen the impact of human behavior on the natural environment, and that lead to more sustainable and harmonious relationships. Some of the research being done with respect to conservation estimates exactly how much land and water resources are being used by each human, along with projected future growth. Also important to consider is the partitioning of land for this future growth. Additionally, conservation efforts look at the positive and negative consequences for the biodiversity of plant and animal life after humans have used the land to their advantage. In addition to creating better conceptual models, more applied research is needed to: 1) identify the most promising strategies for fostering ways of caring about nature, 2) find ways to reframe debates and strategically communicate to the existing values that people have, 3) identify the most promising strategies for shifting the societal discourse about human–nature relationships, and 4) measure the success of these applications with respect to the conservation psychology mission. The ultimate success of conservation psychology will be based on whether its research results in programs and applications that make a difference with respect to environmental sustainability. Researchers need to be able to measure the effectiveness of these programs in terms of their impact on behavior formation or behavior change, using tools developed by conservation psychologists.
Present research and future planning
Conservation psychology research has broken down the four most important tenets of promoting positive conservation attitudes into "the four 'I's". These include: Information, Identity, Institutions, and Incentives. Research has been done in all four categories.
Information
Studies have shown that the way in which crises are presented is a key predictor for how people will react to them. When people hear that they personally can help to alleviate a crisis through their conservation efforts, just by simple actions with their personal energy use, they are more likely to conserve. However, if people are told that the other people around them are overusing energy, it increases selfish behavior and causes people to actually consume more. Other studies show that when people believe in the efficacy of collective action, awareness of the predicament climate change places on society can lead to pro-environmental behaviour. Furthermore, when adequate support is provided for climate related emotions to be reflected on and processed, this leads to an increase in resilience and community engagement.
Teaching people about the benefits of conservation, including easy ways to help conserve, is an effective way to inform about and promote more environmentally friendly behavior. Additionally, research has shown that making sure people understand more about the boundaries of land they can help preserve actually improves positive attitudes towards conservation. When people know more about local regions they can help protect, they will care more. Knowing more about the regions includes knowing the extent of the biodiversity in that region, and being sure that the ecosystem will remain healthy and protected. Cost analysis is another important factor. People do not want to take risks on valuable lands, which in places like California, could be worth billions.
Identity
In general, people like to fit in and identify with their peer social groups. Studies have shown people identify more intimately with close friends and family, which is why conservation campaigns try to directly address the most people. The "think of the children" argument for conservation follows this logic by offering a group everyone can relate to and feel close to. Studies have also shown that this need to fit in among peer social groups can be reinforced positively or negatively: giving positive feedback on energy bills for conserving in their homes encourages people to continue lower energy use. Examples of negative reinforcement include the use of negative press against companies infamous for heavy pollution.
Another interesting line of research looks at how people identify positively or negatively with certain issues. One relevant idea is the notion of "consistency attitudes". Studies have shown that people tend to take a good association they have, and then use this to make positive or negative links with other, related things. For example, if someone thinks it is a good idea to protect old Pacific forests, this will positively form a link to also want to protect smaller forests and even grasslands. This same line of thinking can cause someone who supports the protection of old Pacific forests to start thinking negatively about the creation of more logging roads. Other studies on consistency attitudes have shown that, with one particular issue, people like to align their preferences with each other. This has been shown repeatedly while looking at political ideologies and racial attitudes, and studies have shown that this can also include environmental issues. Finally, other studies have shown that how people identify an ecosystem geographically can affect their concern for it. For instance, when people think of saving the rainforests, they often think of this as a global problem and support it more readily. However, lesser known but still significant local ecosystems remain ignored and unprotected.
Institutions
Another approach that has been considered is the use of organized institutions and government as the leaders for promoting conservation. However, these leaders can only be effective if they are trusted. Studies from previous crises where conserving resources was extremely necessary showed that people were more likely to obey energy restrictions and follow certain leaders when they felt they could trust the people directing them. That is, people are more likely to obey restrictions when they believe that they are being encouraged to act a certain way out of necessity and that they are not being misled.
Incentives
Incentivizing conservation through rewards and fines is another approach. Studies have shown that people who identify more with their community need fewer incentives to conserve than those who do not identify strongly with their surrounding community. For corporations, monetary incentives have been shown to work for companies making some effort to render their buildings and practices more "green". Studies have also shown that doing something as simple as installing a water meter in homes has helped incentivize conservation by letting people track their consumption levels. Finally, studies have shown that when issuing fines, it is better to start with very small amounts and then raise them for repeated violations. If the fines are too high, the issue becomes purely economic, and people start to mistrust the authorities enforcing them.
Main concepts
Conservation psychology as a whole addresses four main concepts, all of which were discussed at the first U.S. Conservation Psychology conference. The first is the field's original core topic; the other three have a prior history in environmental psychology.
The first topic discussed was the connection between humans and animals. The Multi-Institutional Research Project (MIRP) works diligently on finding ways to develop a compassionate public stance towards animals. Many questions were assessed concerning ways to help develop caring attitudes toward animals and the earth. From these questions and answers, effective educational and interpretive programs were developed, along with means to review their progress.
The second concept that was discussed at the conference concerned connections of humans and places. A new language of conservation will be supported if there are abundant opportunities for meaningful interactions with the natural world in both urban and rural settings. Unfortunately, as biodiversity is lost, every generation has fewer chances to experience nature.
It is estimated that more than 80% of the world's population currently lives under light-polluted skies.
Light pollution, along with urbanization, adds to the rising alienation between people and nature, which has been defined as "the extinction of experience". This rising distance is affecting public support for conservation efforts by preventing people from connecting with, understanding, and developing ties to the natural environment as well as the related cultural heritage. For example, the Milky Way, which is a common theme in many founding myths, is currently hidden from one-third of the global population.
Miller (2005) contends that designing urban landscapes that encourage "meaningful interactions with the natural world" can help address this growing distance between humans and the skies. Giving individuals access to the Milky Way and letting them experience the natural cycles of sunshine and moonlight, to which they are genetically preadapted to synchronize their physiology and behavior, may be the most meaningful way to help them re-establish a connection with nature.
There were many questions asked concerning how humans in their everyday lives could be persuaded or educated well enough to make them want to join in programs or activities that help maintain biodiversity in their proximity. Local public and private organizations were asked to come together to help find ways to protect and manage local land, plants, and animals. Other discussions came to whether people on an individual or community level would voluntarily choose to become involved in maintaining and protecting their local biodiversity. These plus many other important questions were contemplated. Techniques in marketing are a key tool in helping people connect to their environment. If an identity could be connected from the environment to towns becoming more urbanized, maybe those living there would be more prone to keep it intact.
The third discussion covered how to produce people who act in environmentally friendly ways. Collectively, any activities that support sustainability, either by reducing harmful behaviors or by adopting helpful ones, can be called conservation behaviors. Achieving more sustainable relationships with nature will require that large numbers of people change their reproductive and consumptive behaviors. Any action, small or large, that helps the environment is a good beginning toward future generations who practice only environmentally friendly behavior. This may seem far-fetched, but educating those who do not know the repercussions of their actions could help achieve it. Approaches to encouraging a change in behavior were considered carefully. Many people do not want to change their way of life: their current materialistic lifestyles hurt the surrounding environment rather than help it, but would they willingly adopt a simpler one? Taking public transportation rather than driving, recycling, and turning off lights when they are not needed are all very simple actions, yet a nuisance to actually follow through with. Would restructuring the tax code help people want to change their attitudes? Every concept for reaching the goal of helping people act in an ecologically aware manner was discussed. Some empirical evidence shows that simply "being the change you want to see in the world" can influence others to behave in more environmentally friendly ways as well.
The fourth and final point at the first Conservation Psychology convention was the discussion of the values people attach to their environment. Understanding the human relationship to the natural world well enough to have a language to celebrate and defend that relationship is another research area for conservation psychology. According to the biophilia hypothesis, the human species evolved in the company of other life forms, and humans continue to rely physically, emotionally, and intellectually on the quality and richness of their affiliations with natural diversity. A healthy and diverse natural environment is considered an essential condition for human lives of satisfaction and fulfillment. Where do these values come from, and are they ingrained to the point that they cannot be changed? How can environmentally educated people convey value-based communication to a community, a nation, or even at a global level? National policy following this model is desired, but under strong political scrutiny it could be very challenging to achieve. Advocates for biodiversity and various programs came together to try to find methods of changing Americans' values concerning their environment, and different methods to express and measure them.
Connection of conservation in biology and psychology
Conservation biology was originally conceptualized as a crisis-oriented discipline, with the goal of providing principles and tools for preserving biodiversity. It is a branch of biology concerned with preserving genetic variation in plants and animals, and the scientific field evolved to study the complex problems surrounding habitat destruction and species protection. The objectives of conservation biologists are to understand how humans affect biodiversity and to provide potential solutions that benefit both humans and non-human species. It is understood in this field that related subfields of biology can readily contribute to a better understanding and conservation of biodiversity. Biological knowledge alone, however, is not sufficient to solve conservation problems, and the role of the social sciences in solving them has become increasingly important; combining conservation biology with other fields was thought to offer much to be gained. Psychology is defined as the scientific study of human thought, feeling, and behavior, and it is one of the fields whose concepts can be applied to conservation. It was always understood that psychology could offer much aid here; the field only had to be developed. Psychology can help by providing insight into moral reasoning and moral functioning, which lie at the heart of human–nature relationships.
See also
Biodiversity
Conservation movement
Conservation ethic
Ecopsychology
Environmental movement
Environmental psychology
Natural environment
Sustainability
References
Notes
Brook, Amara; Clayton, Susan. Can Psychology Help Save the World? A Model for Conservation Psychology. Analyses of Social Issues and Public Policy, Vol. 5, No. 1, 2005, pp. 87–102.
De Young, R. (2013). "Environmental Psychology Overview." In Ann H. Huffman & Stephanie Klein [Eds.] Green Organizations: Driving Change with IO Psychology. (Pp. 17-33). NY: Routledge.
Exploring the Potential of Conservation Psychology. Human Ecology Review, Vol 10. No. 2. 2003. pp. iii–iv.
Kahn, P.K., Jr. 1999. The human relationship with nature. Development and culture. Massachusetts Institute of Technology Press, Cambridge, Massachusetts.
Kellert, S.R. & Wilson E.O. (eds.). 1993. The Biophilia Hypothesis. Washington, DC: Island Press.
Mascia, M.B.; Brosius, J.P.; Dobson, T.A.; Forbes, B.C.; Horowitz, L.; McKean, M.A. & N.J. Turner. 2003. Conservation and the social sciences. Conservation Biology 17: 649–50.
Miller, J. 2006. Biodiversity conservation and the extinction of experience. Trends in Ecology & Evolution: in press.
Myers, Gene. Conservation Psychology. WWU. January 20, 2002.
Myers, D.G. 2003. Psychology, 7th Edition. New York: Worth Publishers.
Saunders, C.D. 2003. The Emerging Field of Conservation Psychology. Human Ecology Review, Vol. 10, No, 2. 137–49.
Soule, M.E. (1987). History of the Society for Conservation Biology: How and why we got here. Conservation Biology, 1, 4–5.
Werner, C.M. 1999. Psychological perspectives on sustainability. In E. Becker and T. Jahn (eds.), Sustainability and the Social Sciences: A Cross-Disciplinary Approach to Integrating Environmental Considerations into Theoretical Reorientation, 223–42. London: Zed Books.
Zelezny, L.C. & Schultz, P.W. (eds.). 2000. Promoting environmentalism. Journal of Social Issues 56, 3, 365–578.
Ecology
Environmental conservation
Environmental psychology
Environmental social science | Conservation psychology | [
"Biology",
"Environmental_science"
] | 3,682 | [
"Environmental social science",
"Ecology",
"Environmental psychology"
] |
14,768,031 | https://en.wikipedia.org/wiki/DNS%20hijacking | DNS hijacking, DNS poisoning, or DNS redirection is the practice of subverting the resolution of Domain Name System (DNS) queries. This can be achieved by malware that overrides a computer's TCP/IP configuration to point at a rogue DNS server under the control of an attacker, or through modifying the behaviour of a trusted DNS server so that it does not comply with internet standards.
These modifications may be made for malicious purposes such as phishing, for self-serving purposes by Internet service providers (ISPs), by the Great Firewall of China and public/router-based online DNS server providers to direct users' web traffic to the ISP's own web servers where advertisements can be served, statistics collected, or other purposes of the ISP; and by DNS service providers to block access to selected domains as a form of censorship.
Technical background
One of the functions of a DNS server is to translate a domain name into an IP address that applications need to connect to an Internet resource such as a website. This functionality is defined in various formal internet standards that define the protocol in considerable detail. DNS servers are implicitly trusted by internet-facing computers and users to correctly resolve names to the actual addresses that are registered by the owners of an internet domain.
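That implicit trust is visible even at the highest-level APIs. This minimal Python sketch simply asks the operating system's resolver, which in turn accepts whatever the configured DNS servers answer:

```python
import socket

def resolve(name):
    """Return the IP addresses the system's configured resolver reports
    for a host name. The trust described above is implicit: whatever
    address the DNS server answers with is accepted as authoritative."""
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

# "localhost" is answered locally; a public domain name would be
# answered by whichever DNS servers the operating system is configured
# to use, which is exactly the trust a hijacker exploits.
print(resolve("localhost"))
```

An application calling such an API has no independent way to know whether the returned address is the one registered by the domain's owner.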
Rogue DNS server
A rogue DNS server translates domain names of desirable websites (search engines, banks, brokers, etc.) into IP addresses of sites with unintended content, even malicious websites. Most users depend on DNS servers automatically assigned by their ISPs. A router's assigned DNS servers can also be altered through the remote exploitation of a vulnerability within the router's firmware. When users try to visit websites, they are instead sent to a bogus website. This attack is termed pharming. If the site they are redirected to is a malicious website, masquerading as a legitimate website, in order to fraudulently obtain sensitive information, it is called phishing.
Manipulation by ISPs
A number of consumer ISPs such as AT&T, Cablevision's Optimum Online, CenturyLink, Cox Communications, RCN, Rogers, Charter Communications (Spectrum), Plusnet, Verizon, Sprint, T-Mobile US, Virgin Media, Frontier Communications, Bell Sympatico, Deutsche Telekom AG, Optus, Mediacom, ONO, TalkTalk, Bigpond (Telstra), TTNET, Türksat, and all Indonesian consumer ISPs use or have used DNS hijacking for their own purposes, such as displaying advertisements or collecting statistics. Dutch ISPs XS4ALL and Ziggo use DNS hijacking by court order: they were ordered to block access to The Pirate Bay and display a warning page. All consumer ISPs in Indonesia perform DNS hijacking to comply with the national DNS law, which requires every Indonesian consumer ISP to hijack port 53 and redirect it to their own servers in order to block websites listed in TrustPositif by Kominfo under the Internet Sehat campaign. These practices violate the RFC standard for DNS (NXDOMAIN) responses, and can potentially open users to cross-site scripting attacks.
The concern with DNS hijacking involves this hijacking of the NXDOMAIN response. Internet and intranet applications rely on the NXDOMAIN response to describe the condition where the DNS has no entry for the specified host. If one were to query the invalid domain name (for example www.example.invalid), one should get an NXDOMAIN response – informing the application that the name is invalid and taking the appropriate action (for example, displaying an error or not attempting to connect to the server). However, if the domain name is queried on one of these non-compliant ISPs, one would always receive a fake IP address belonging to the ISP. In a web browser, this behavior can be annoying or offensive as connections to this IP address display the ISP redirect page of the provider, sometimes with advertising, instead of a proper error message. However, other applications that rely on the NXDOMAIN error will instead attempt to initiate connections to this spoofed IP address, potentially exposing sensitive information.
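The compliant and hijacked behaviors described above can be distinguished mechanically. The sketch below classifies a resolver's reply to a query for a name known not to exist; the RCODE values follow RFC 1035 (NXDOMAIN = 3, NOERROR = 0), while the redirect network is a hypothetical placeholder drawn from a documentation address block, not any real ISP's servers:

```python
import ipaddress

# Hypothetical redirect range for illustration only (TEST-NET-2
# documentation block), standing in for an ISP's ad-server subnet.
ISP_REDIRECT_NETS = [ipaddress.ip_network("198.51.100.0/24")]

def classify_response(rcode, answer_ips):
    """Classify a resolver's reply to a query for a nonexistent name.
    A compliant resolver returns NXDOMAIN (RCODE 3) with no answers; a
    hijacking resolver substitutes NOERROR (RCODE 0) plus the address
    of its own redirect web server."""
    if rcode == 3 and not answer_ips:
        return "compliant"
    for ip in answer_ips:
        if any(ipaddress.ip_address(ip) in net for net in ISP_REDIRECT_NETS):
            return "hijacked"
    return "suspicious"  # an answer for a nonexistent name, origin unknown

print(classify_response(3, []))                # compliant
print(classify_response(0, ["198.51.100.7"]))  # hijacked
```

Querying a randomly generated label under a reserved TLD such as .invalid is a common way to provoke the NXDOMAIN case deliberately.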
Examples of functionality that breaks when an ISP hijacks DNS:
Roaming laptops that are members of a Windows Server domain will falsely be led to believe that they are back on a corporate network because resources such as domain controllers, email servers and other infrastructure will appear to be available. Applications will therefore attempt to initiate connections to these corporate servers, but fail, resulting in degraded performance, unnecessary traffic on the Internet connection and timeouts.
Many small office and home networks do not have their own DNS server, relying instead on broadcast name resolution. Many versions of Microsoft Windows default to prioritizing DNS name resolution above NetBIOS name resolution broadcasts; therefore, when an ISP DNS server returns a (technically valid) IP address for the name of the desired computer on the LAN, the connecting computer uses this incorrect IP address and inevitably fails to connect to the desired computer on the LAN. Workarounds include using the correct IP address instead of the computer name, or changing the DhcpNodeType registry value to change name resolution service ordering.
Browsers such as Firefox lose their "Browse By Name" functionality, in which keywords typed in the address bar take users to the closest matching site.
The local DNS client built into modern operating systems will cache results of DNS searches for performance reasons. If a client switches between a home network and a VPN, false entries may remain cached, thereby creating a service outage on the VPN connection.
DNSBL anti-spam solutions rely on DNS; false DNS results therefore interfere with their operation.
Confidential user data might be leaked by applications that are tricked by the ISP into believing that the servers they wish to connect to are available.
User choice over which search engine to consult in the event of a URL being mistyped in a browser is removed as the ISP determines what search results are displayed to the user.
Computers configured to use a split tunnel with a VPN connection will stop working because intranet names that should not be resolved outside the tunnel over the public Internet will start resolving to fictitious addresses, instead of resolving correctly over the VPN tunnel on a private DNS server when an NXDOMAIN response is received from the Internet. For example, a mail client attempting to resolve the DNS A record for an internal mail server may receive a false DNS response that directs it to a paid-results web server, leaving messages queued for delivery for days while retransmission is attempted in vain.
It breaks Web Proxy Autodiscovery Protocol (WPAD) by leading web browsers to believe incorrectly that the ISP has a proxy server configured.
It breaks monitoring software. For example, a monitor that periodically contacts a server to determine its health will never see a failure unless it tries to verify the server's cryptographic key.
In some, but not most, cases the ISPs provide subscriber-configurable settings to disable hijacking of NXDOMAIN responses. Correctly implemented, such a setting reverts DNS to standard behavior. Other ISPs, however, instead use a web browser cookie to store the preference. In this case, the underlying behavior is not resolved: DNS queries continue to be redirected, while the ISP redirect page is replaced with a counterfeit DNS error page. Applications other than web browsers cannot be opted out of the scheme using cookies, as the opt-out targets only the HTTP protocol, even though the scheme is actually implemented in the protocol-neutral DNS.
Response
In the UK, the Information Commissioner's Office has acknowledged that the practice of involuntary DNS hijacking contravenes PECR and EC Directive 95/46 on Data Protection, which require explicit consent for the processing of communication traffic. In Germany, it was revealed in 2019 that Deutsche Telekom AG not only manipulated its DNS servers, but also transmitted network traffic (such as non-secure cookies when users did not use HTTPS) to a third-party company, because the web portal T-Online, to which users were redirected by the DNS manipulation, was no longer owned by Deutsche Telekom. After a user filed a criminal complaint, Deutsche Telekom stopped further DNS manipulations.
ICANN, the international body responsible for administering top-level domain names, has published a memorandum highlighting its concerns, and affirming:
Remedy
End users, dissatisfied with poor "opt-out" options like cookies, have responded to the controversy by finding ways to avoid spoofed NXDOMAIN responses. DNS software such as BIND and Dnsmasq offer options to filter results, and can be run from a gateway or router to protect an entire network. Google, among others, runs open DNS servers that currently do not return spoofed results, so a user could switch to Google Public DNS instead of their ISP's DNS servers if they are willing to use the service under Google's privacy policy and potentially be exposed to another method by which Google can track them. One limitation of this approach is that some providers block or rewrite outside DNS requests. OpenDNS, owned by Cisco, is a similar popular service that does not alter NXDOMAIN responses.
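Dnsmasq, for instance, has a `bogus-nxdomain` option (originally added in response to the Verisign Site Finder incident) that transforms any upstream reply containing a given IP address back into an NXDOMAIN reply for every client behind the router. The address below is from the RFC 5737 documentation range, standing in for an ISP's redirect server:

```
# /etc/dnsmasq.conf
# Any upstream answer containing this IP is rewritten to NXDOMAIN,
# restoring standard behavior for the whole LAN.
bogus-nxdomain=203.0.113.10
```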
In April 2016, Google launched a DNS-over-HTTPS service, which can overcome the limitations of the legacy DNS protocol. It performs remote DNSSEC checks and transfers the results in a secure HTTPS tunnel.
There are also application-level workarounds, such as the NoRedirect Firefox extension, that mitigate some of the behavior. An approach like that only fixes one application (in this example, Firefox) and does not address problems caused elsewhere. Website owners may be able to fool some hijackers by using certain DNS settings: for example, setting a TXT record of "unused" on their wildcard address (e.g. *.example.com). Alternatively, they can try setting the CNAME of the wildcard to "example.invalid", making use of the fact that ".invalid" is guaranteed not to exist per the RFC. The limitation of that approach is that it only prevents hijacking on those particular domains, but it may address some VPN security issues caused by DNS hijacking.
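As a sketch, the two zone-file settings described above might look like this in BIND master-file syntax (only one can be used at a time, since a CNAME cannot coexist with other records at the same name; the names are illustrative):

```
; Option 1: mark the wildcard so some hijackers skip it
*.example.com.  3600  IN  TXT    "unused"

; Option 2: point the wildcard at the reserved .invalid TLD,
; which is guaranteed never to resolve (RFC 2606)
; *.example.com.  3600  IN  CNAME  wildcard.example.invalid.
```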
See also
Captive portal
DNS cache poisoning
DNS rebinding
DNS spoofing
Domain hijacking
Dynamic Host Configuration Protocol
Pharming
Point-to-Point Protocol
Spoofing attack
TCP reset attack
Trojan.Win32.DNSChanger
References
Domain Name System
Internet fraud
Hacking (computer security)
Internet ethics
Internet privacy
Internet security | DNS hijacking | [
"Technology"
] | 2,273 | [
"Internet ethics",
"Ethics of science and technology"
] |
14,768,821 | https://en.wikipedia.org/wiki/KCNJ8 | Potassium inwardly-rectifying channel, subfamily J, member 8, also known as KCNJ8, is a human gene encoding the Kir6.1 protein. A mutation in KCNJ8 has been associated with cardiac arrest in the early repolarization syndrome.
Potassium channels are present in most mammalian cells, where they participate in a wide range of physiologic responses. Kir6.1 is an integral membrane protein and inward-rectifier type potassium channel. Kir6.1, which has a greater tendency to allow potassium to flow into a cell rather than out of a cell, is controlled by G-proteins.
See also
Inward-rectifier potassium ion channel
References
Further reading
External links
Ion channels | KCNJ8 | [
"Chemistry"
] | 154 | [
"Neurochemistry",
"Ion channels"
] |
14,768,838 | https://en.wikipedia.org/wiki/KCNMB1 | Calcium-activated potassium channel subunit beta-1 is a protein that in humans is encoded by the KCNMB1 gene.
Function
MaxiK channels are large conductance, voltage and calcium-sensitive potassium channels which are fundamental to the control of smooth muscle tone and neuronal excitability. MaxiK channels can be formed by 2 subunits: the pore-forming alpha subunit and the product of this gene, the modulatory beta subunit. Intracellular calcium regulates the physical association between the alpha and beta subunits. Beta subunits (beta 1-4) are highly tissue specific in their expression, with beta-1 being present predominantly on vascular smooth muscle. Endothelial cells are not known to express beta-1 subunits. Beta-1 is also known to be expressed in urinary bladder and in some regions of the brain. Association of the beta-1 subunit with the BK channel increases the apparent Ca2+ sensitivity of the channel and decreases voltage dependence.
See also
BK channel
Voltage-gated potassium channel
References
Further reading
Ion channels | KCNMB1 | [
"Chemistry"
] | 215 | [
"Neurochemistry",
"Ion channels"
] |
14,768,917 | https://en.wikipedia.org/wiki/MMP19 | Matrix metalloproteinase-19 (MMP-19) also known as matrix metalloproteinase RASI is an enzyme that in humans is encoded by the MMP19 gene.
Function
Proteins of the matrix metalloproteinase (MMP) family are involved in the breakdown of extracellular matrix in normal physiological processes, such as embryonic development, reproduction, and tissue remodeling, as well as in disease processes, such as arthritis and metastasis. Most MMP's are secreted as inactive proproteins which are activated when cleaved by extracellular proteinases. This protein is expressed in human epidermis and endothelial cells and it has a role in cellular proliferation, migration, angiogenesis and adhesion. Multiple transcript variants encoding distinct isoforms have been identified for this gene.
References
Further reading
External links
The MEROPS online database for peptidases and their inhibitors: M10.021
Matrix metalloproteinases
EC 3.4.24 | MMP19 | [
"Chemistry"
] | 212 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,768,987 | https://en.wikipedia.org/wiki/NFATC4 | Nuclear factor of activated T-cells, cytoplasmic 4 is a protein that in humans is encoded by the NFATC4 gene.
Function
The product of this gene is a member of the nuclear factors of activated T cells DNA-binding transcription complex. This complex consists of at least two components: a preexisting cytosolic component that translocates to the nucleus upon T cell receptor (TCR) stimulation and an inducible nuclear component. Other members of this family of nuclear factors of activated T cells also participate in the formation of this complex. The product of this gene plays a role in the inducible expression of cytokine genes in T cells, especially in the induction of the IL-2 and IL-4.
Interactions
NFATC4 has been shown to interact with CREB-binding protein.
See also
NFAT
References
Further reading
External links
Transcription factors
Human proteins | NFATC4 | [
"Chemistry",
"Biology"
] | 183 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,769,070 | https://en.wikipedia.org/wiki/Baxter%20Woods | Mayor Baxter Woods Park is a nature reserve and municipal forest in the Deering Center neighborhood of Portland, Maine, United States. The land which became Baxter Woods was owned by Congressman Francis Ormand Jonathan Smith. He died in 1876 and his estate sold the forest to canning magnate, land developer, and future Mayor James Phinney Baxter in 1882. When J. P. Baxter died in 1921, it had not been developed during the preceding building boom and was bequeathed to his son Percival P. Baxter. In April 1946, Percival Baxter donated the land to the City of Portland on the condition that it would "...forever be retained and used by [the] City in trust for the benefit of the people of Portland as a Municipal Forest and Park and for public recreation and educational purposes". On June 19, 1956, U.S. Senator Frederick Payne mentioned the land in a speech honoring Percival Baxter, calling the land a "beautiful nature sanctuary given by you in honor of your father..."
Covering , Baxter Woods is the largest undisturbed forested area in the city. The park is bordered by major roads Stevens Avenue to its east and Forest Avenue to its west. Its trail connects to Evergreen Cemetery and is also close to Baxter Boulevard.
References
External links
Portland Trails profile of Baxter Woods
History of Portland's forests on Portland, Maine, official web site
Old-growth forests
Parks in Portland, Maine
Buildings and structures completed in 1946
Parks established in the 1940s
Protected areas established in 1946
Baxter family | Baxter Woods | [
"Biology"
] | 304 | [
"Old-growth forests",
"Ecosystems"
] |
14,769,219 | https://en.wikipedia.org/wiki/C.%20N.%20Yang%20Institute%20for%20Theoretical%20Physics |
The C. N. Yang Institute of Theoretical Physics (YITP) is a research center at Stony Brook University. In 1965, it was the vision of then University President J.S. Toll and Physics Department chair T.A. Pond to create an institute for theoretical physics and invite the famous physicist Chen Ning Yang from the Institute for Advanced Study to serve as its director, with the Albert Einstein Professorship of Physics. While the center is often referred to as "YITP", this can be confusing, as YITP also stands for the Yukawa Institute for Theoretical Physics in Japan.
The active research areas of the institute include: quantum field theory, string theory, conformal field theory, mathematical physics and statistical mechanics. The YITP is situated on top of the Math Tower, home to the Department of Mathematics which is connected to the Department of Physics and the Simons Center for Geometry and Physics—therefore the physicists enjoy intimate interactions with the mathematicians. This close relationship dates back to the friendship of C.N. Yang and the mathematician James Harris Simons.
Founded in 1967, YITP celebrated its 50th anniversary in 2017. Over that span, the YITP has produced significant results in several areas; most notably, supergravity was discovered there in 1976 by Peter van Nieuwenhuizen, Daniel Z. Freedman, and Sergio Ferrara, who were all working at the institute at the time.
It houses two Breakthrough Prize in Fundamental Physics laureates: Peter van Nieuwenhuizen (2019) and Alexander Zamolodchikov (2024). Former director Chen Ning Yang is a Nobel Prize in Physics laureate (1957).
Directors
Chen Ning Yang - First director (1967-1999) and 1957 Nobel Laureate.
Peter van Nieuwenhuizen - Second director (1999-2002) and co-discoverer of supergravity.
George Sterman - Third director (2002-) and noted field theorist
Notable tenants
Luis Álvarez-Gaumé - String theory
Gerald E. Brown - Nuclear physics, theoretical astrophysics
Michael Creutz - Lattice gauge theory, computational physics
Michael Douglas - String theory
Ephraim Fischbach - Nuclear physics
Zohar Komargodski - Conformal field theory
Vladimir Korepin - Mathematical physics, quantum information
Barry M. McCoy - Statistical mechanics, conformal field theory
Nikita Nekrasov - Mathematical physics
Peter van Nieuwenhuizen - Field theory, string theory, co-discoverer of supergravity
Martin Roček - Mathematical physics, string theory
Warren Siegel - Field theory, string theory
George Sterman - Field theory, quantum chromodynamics
Alexander Zamolodchikov - Quantum field theory, statistical mechanics, conformal field theory
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
References
External links
YITP website
8th Simons Workshop in Mathematics and Physics
Yang Chen-Ning
Physics research institutes
Stony Brook University
Brookhaven, New York
Research institutes in New York (state)
1967 establishments in New York (state)
Theoretical physics institutes | C. N. Yang Institute for Theoretical Physics | [
"Physics"
] | 621 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
14,769,281 | https://en.wikipedia.org/wiki/TRIM27 | Zinc finger protein RFP is a protein that in humans is encoded by the TRIM27 gene.
This gene encodes a member of the tripartite motif (TRIM) family. The TRIM motif includes three zinc-binding domains, a RING, a B-box type 1 and a B-box type 2, and a coiled-coil region. This protein localizes to the nuclear matrix. It interacts with the enhancer of polycomb protein and represses gene transcription. It is also thought to be involved in the differentiation of male germ cells. Fusion of the N-terminus of this protein with the truncated C-terminus of the RET gene product has been shown to result in production of the ret transforming protein.
Interactions
TRIM27 has been shown to interact with PRAM1 and EIF3S6.
References
Further reading | TRIM27 | [
"Chemistry"
] | 171 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,769,433 | https://en.wikipedia.org/wiki/SKIL | Ski-like protein is a protein that in humans is encoded by the SKIL gene.
Interactions
SKIL interacts with SKI protein, Mothers against decapentaplegic homolog 3 and Mothers against decapentaplegic homolog 2.
Protein Family
SKIL belongs to the Ski/Sno/Dac family, shared by SKI protein, Dachshund, and SKIDA1. Members of the Ski/Sno/Dac family share a domain that is roughly 100 amino acids long.
References
Further reading
Proteins
Genes on human chromosome 3 | SKIL | [
"Chemistry"
] | 113 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,769,630 | https://en.wikipedia.org/wiki/LHX3 | LIM/homeobox protein Lhx3 is a protein that in humans is encoded by the LHX3 gene.
Function
LHX3 encodes a protein of a large protein family, members of which carry the LIM domain, a unique cysteine-rich zinc-binding domain. The encoded protein is a transcription factor that is required for pituitary development and motor neuron specification. Two transcript variants encoding distinct isoforms have been identified for this gene.
Clinical significance
Mutations in this gene have been associated with a syndrome of combined pituitary hormone deficiency and rigid cervical spine.
Interactions
LHX3 has been shown to interact with Ldb1.
References
Further reading
External links
Transcription factors | LHX3 | [
"Chemistry",
"Biology"
] | 144 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,769,657 | https://en.wikipedia.org/wiki/Cell%20division%20cycle%207-related%20protein%20kinase | Cell division cycle 7-related protein kinase is an enzyme that in humans is encoded by the CDC7 gene. The Cdc7 kinase is involved in regulation of the cell cycle at the point of chromosomal DNA replication. The gene CDC7 appears to be conserved throughout eukaryotic evolution; this means that most eukaryotic cells have the Cdc7 kinase protein.
Function
The product encoded by this gene is predominantly localized in the nucleus and is a cell division cycle protein with kinase activity.
The protein is a serine-threonine kinase that is activated by another protein, called Dbf4 in the yeast Saccharomyces cerevisiae or ASK in mammals. The Cdc7/Dbf4 complex adds a phosphate group to the minichromosome maintenance (MCM) protein complex, allowing for the initiation of DNA replication (as explained in the Replication section below).
Although expression levels of the protein appear to be constant throughout the cell cycle, the protein kinase activity appears to increase during S phase. It has been suggested that the protein is essential for initiation of DNA replication and that it plays a role in regulating cell cycle progression. Overexpression of this gene product may be associated with neoplastic transformation for some tumors. Additional transcript sizes have been detected, suggesting the presence of alternative splicing.
Cell cycle regulation
The gene, CDC7, is involved in the regulation of the cell cycle because of its product, the Cdc7 kinase. The protein is expressed at constant levels throughout the cell cycle, while the gene coding for the Dbf4 or ASK protein is regulated during the different phases of the cell cycle: the concentration of Dbf4 is higher at the G1/S transition than at the M/G1 transition, indicating that Dbf4 is expressed around the time of replication; right after replication is over, the protein levels drop. Because the two proteins, Cdc7 and Dbf4, must form a complex before activating the MCM complex, the regulation of one protein is sufficient for both.
It has been shown that CDC7 is important for replication, and there are several ways its expression can be altered that lead to problems. In mouse embryonic stem cells (ESCs), Cdc7 is needed for proliferation: without the CDC7 gene, DNA synthesis stops and the ESCs do not grow. With the loss of Cdc7 function in ESCs, the cell cycle arrests at the G2/M checkpoint. Recombinational repair (RR) is carried out at this point to try to fix the CDC7 gene so replication can occur: by copying and replacing the altered area with a very similar area on the sister homolog chromosome, the gene can be replicated as if nothing was ever wrong on the chromosome. However, when the cell enters this arrested state, levels of p53 may increase, and these increased levels of p53 may initiate cell death.
Replication
After chromatin undergoes changes in telophase of mitosis, the hexameric protein complex of MCM proteins 2-7 forms part of the pre-replication complex (pre-RC) by binding to the chromatin along with other aiding proteins (Cdc6 and Cdt1). Mitosis occurs during the M phase of the cell cycle and has a number of stages; telophase is the final stage of mitosis, when chromosome segregation is complete but the cell has not yet divided.
The Cdc7/Dbf4 kinase complex, along with another serine-threonine kinase, cyclin-dependent kinase (Cdk), phosphorylates the pre-RC which activates it at the G1/S transition. The Dbf4 tethers itself to part of the pre-RC, the origin recognition complex (ORC). Since Cdc7 is attached to the Dbf4 protein the entire complex is held in place during replication. This activation of MCM 2 leads to helicase activity of the MCM complex at the origin of replication. This is most likely due to the change in conformation allowing the remainder of replication machinery proteins to be loaded. DNA replication can begin after all the necessary proteins are in place.
Interactions
CDC7 has been shown to interact with:
DBF4,
MCM5,
MCM4,
MCM7,
ORC1L, and
ORC6L.
Ligands
Inhibitors
XL-413
References
Further reading
Cell cycle regulators
EC 2.7.11 | Cell division cycle 7-related protein kinase | [
"Chemistry"
] | 921 | [
"Cell cycle regulators",
"Signal transduction"
] |
14,769,749 | https://en.wikipedia.org/wiki/CDKN2D | Cyclin-dependent kinase 4 inhibitor D is an enzyme that in humans is encoded by the CDKN2D gene.
The protein encoded by this gene is a member of the INK4 family of cyclin-dependent kinase inhibitors. This protein has been shown to form a stable complex with CDK4 or CDK6 and to prevent the activation of these CDK kinases, thus functioning as a cell growth regulator that controls cell cycle G1 progression. The abundance of this gene's transcript was found to oscillate in a cell-cycle-dependent manner, with the lowest expression at mid G1 and maximal expression during S phase. This protein's negative regulation of the cell cycle has been shown to participate in repressing neuronal proliferation, as well as spermatogenesis. The expression of this gene and its protein product (p19) is observed in neurons with neurofibrillary tangles (NFTs), and it has been suggested as a marker for senescent neurons. Two alternatively spliced variants of this gene, which encode an identical protein, have been reported.
Note, this protein should not be confused with p19-ARF (mouse) or the human equivalent p14ARF, which are alternative products of the CDKN2A gene.
References
Further reading
External links
Cell cycle regulators | CDKN2D | [
"Chemistry"
] | 276 | [
"Cell cycle regulators",
"Signal transduction"
] |
14,769,835 | https://en.wikipedia.org/wiki/Tankyrase | Tankyrase, also known as tankyrase 1, is an enzyme that in humans is encoded by the TNKS gene. It inhibits the binding of TERF1 to telomeric DNA.
Tankyrase attracts substantial interest in cancer research through its interaction with AXIN1 and AXIN2, which are negative regulators of pro-oncogenic β-catenin signaling. Importantly, activity in the β-catenin destruction complex can be increased by tankyrase inhibitors and thus such inhibitors are a potential therapeutic option to reduce the growth of β-catenin-dependent cancers.
Description
Tankyrase-1 is a poly-ADP-ribosyltransferase involved in various processes such as the Wnt signaling pathway, telomere length maintenance and vesicle trafficking. It acts as an activator of the Wnt signaling pathway by mediating poly-ADP-ribosylation (PARylation) of AXIN1 and AXIN2, two key components of the beta-catenin destruction complex: poly-ADP-ribosylated target proteins are recognized by RNF146, which mediates their ubiquitination and subsequent degradation. It also mediates PARylation of BLZF1 and CASC3, followed by recruitment of RNF146 and subsequent ubiquitination, and mediates PARylation of TERF1, thereby contributing to the regulation of telomere length. It is involved in centrosome maturation during prometaphase by mediating PARylation of HEPACAM2/MIKI, may regulate vesicle trafficking and modulate the subcellular distribution of SLC2A4/GLUT4 vesicles, may be involved in spindle pole assembly through PARylation of NUMA1, and stimulates 26S proteasome activity.
Protein interactions
TNKS has been shown to interact with:
FNBP1,
MCL1,
TERF1, and
TNKS1BP1.
References
Further reading
Telomere-related proteins
Aging-related enzymes | Tankyrase | [
"Biology"
] | 428 | [
"Senescence",
"Aging-related enzymes"
] |
14,769,961 | https://en.wikipedia.org/wiki/BAG3 | BAG family molecular chaperone regulator 3 is a protein that in humans is encoded by the BAG3 gene. BAG3 is involved in chaperone-assisted selective autophagy.
Function
BAG proteins compete with Hip-1 for binding to the Hsc70/Hsp70 ATPase domain and promote substrate release. All the BAG proteins have an approximately 45-amino acid BAG domain near the C terminus but differ markedly in their N-terminal regions. The protein encoded by this gene contains a WW domain in the N-terminal region and a BAG domain in the C-terminal region. The BAG domains of BAG1, BAG2, and BAG3 interact specifically with the Hsc70 ATPase domain in vitro and in mammalian cells. All 3 proteins bind with high affinity to the ATPase domain of Hsc70 and inhibit its chaperone activity in a Hip-repressible manner.
Clinical significance
BAG gene has been implicated in age related neurodegenerative diseases such as Alzheimer's. It has been demonstrated that BAG1 and BAG3 regulate the proteasomal and lysosomal protein elimination pathways, respectively. It has also been shown to be a cause of familial dilated cardiomyopathy.
That BAG3 mutations are responsible for familial dilated cardiomyopathy is confirmed by another study describing 6 new molecular variants (2 missense and 4 premature stops). Moreover, the same publication reported that BAG3 polymorphisms are also associated with sporadic forms of the disease, together with the HSPB7 locus.
In muscle cells, BAG3 cooperates with the molecular chaperones Hsc70 and HspB8 to induce the degradation of mechanically damaged cytoskeleton components in lysosomes. This process is called chaperone-assisted selective autophagy and is essential for maintaining muscle activity in flies, mice and men.
BAG3 is able to stimulate the expression of cytoskeleton proteins in response to mechanical tension by activating the transcription regulators YAP1 and WWTR1. BAG3 balances protein synthesis and protein degradation under mechanical stress.
Interactions
PLCG1 has been shown to interact with:
FGFR1,
CD117,
CD31,
Cbl gene
CISH
Epidermal growth factor receptor,
Eukaryotic translation elongation factor 1 alpha 1,
FLT1,
GAB1,
GIT1,
Grb2,
HER2/neu,
IRS2,
ITK,
KHDRBS1,
Linker of activated T cells,
Lymphocyte cytosolic protein 2,
PDGFRA,
PLD2,
RHOA,
SOS1,
TUB,
TrkA,
TrkB,
VAV1, and
Wiskott-Aldrich syndrome protein.
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Myofibrillar Myopathy
Ageing
Aging-related genes
Aging-related proteins
Co-chaperones | BAG3 | [
"Biology"
] | 616 | [
"Senescence",
"Aging-related genes",
"Aging-related proteins"
] |
14,769,967 | https://en.wikipedia.org/wiki/CCL4L1 | C-C motif chemokine 4-like is a protein that in humans is encoded by the CCL4L1 gene.
Function
This gene is one of several cytokine genes clustered on the q-arm of chromosome 17. Cytokines are a family of secreted proteins involved in immunoregulatory and inflammatory processes. This protein is similar to CCL4 which inhibits HIV entry by binding to the cellular receptor CCR5. The copy number of this gene varies among individuals; most individuals have 1-5 copies in the diploid genome, although rare individuals do not contain this gene at all. The human genome reference assembly contains two copies of this gene. This record represents the more centromeric gene.
References
External links
Further reading | CCL4L1 | [
"Chemistry"
] | 155 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,770,014 | https://en.wikipedia.org/wiki/ARPC1B | Actin-related protein 2/3 complex subunit 1B is a protein that in humans is encoded by the ARPC1B gene.
Function
This gene encodes one of seven subunits of the human Arp2/3 protein complex. This subunit is a member of the SOP2 family of proteins and is most similar to the protein encoded by the gene ARPC1A. The similarity between these two proteins suggests that they both may function as the p41 subunit of the human Arp2/3 complex that facilitates branching of actin filaments in cells. Isoforms of the p41 subunit may adapt the functions of the complex to different cell types or developmental stages. Indeed, it has recently been shown that variants of the Arp2/3 complex differ in their ability to promote actin assembly, with complexes containing ARPC1B and ARPC5L being better at this than those containing ARPC1A and ARPC5. The differing functions of ARPC1A and ARPC1B are also evident in the recent discovery of patients with severe or total ARPC1B deficiency, who have platelet and immune system abnormalities yet survive, possibly due to a compensatory up-regulation of ARPC1A expression.
Interactions
ARPC1B has been shown to interact with PAK1.
References
Further reading
External links | ARPC1B | [
"Chemistry"
] | 272 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,770,257 | https://en.wikipedia.org/wiki/ADH5 | Alcohol dehydrogenase class-3 is an enzyme that in humans is encoded by the ADH5 gene.
This gene encodes glutathione-dependent formaldehyde dehydrogenase or the class III alcohol dehydrogenase chi subunit, which is a member of the alcohol dehydrogenase family. Members of this family metabolize a wide variety of substrates, including ethanol, retinol, other aliphatic alcohols, hydroxysteroids, and lipid peroxidation products. Class III alcohol dehydrogenase is a homodimer composed of 2 chi subunits. It has virtually no activity for ethanol oxidation, but exhibits high activity for oxidation of long-chain primary alcohols and for oxidation of S-hydroxymethyl-glutathione, a spontaneous adduct between formaldehyde and glutathione.
This enzyme is an important component of cellular metabolism for the elimination of formaldehyde, a potent irritant and sensitizing agent that causes lacrimation, rhinitis, pharyngitis, and contact dermatitis.
Clinical significance
Mutations of the ADH5 gene and ALDH2 gene cause AMED syndrome, an autosomal recessive digenic multisystem disorder characterized by global developmental delay with impaired intellectual development, short stature, growth impairment and early development of myelodysplastic syndrome and bone marrow failure. The syndrome was first described in 2020.
References
Further reading
External links | ADH5 | [
"Chemistry"
] | 307 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,770,265 | https://en.wikipedia.org/wiki/CKS1B | Cyclin-dependent kinases regulatory subunit 1 is a protein that in humans is encoded by the CKS1B gene.
Function
The CKS1B protein binds to the catalytic subunit of the cyclin-dependent kinases and is essential for their biological function. The CKS1B mRNA is found to be expressed in different patterns through the cell cycle in HeLa cells, which reflects a specialized role for the encoded protein.
CKS1B and CKS2 proteins have demonstrated principal roles in cell cycle regulation. Defined originally as suppressors of mutations in both fission and budding yeast Cdk1 genes, Cks molecules interact with Cdk1, Cdk2 and Cdk3. These Cdk-dependent enzyme complexes in cell cycle regulation frequently consist of a catalytic Cdk subunit bound to a Cks molecule and a regulatory cyclin subunit, such as a G1 cyclin; Cks controls Cdk function by directing cyclin-Cdk complex activity toward specific and significant substrates. Malfunctions of Cdk-dependent associations lead to defects in the entry of cells into mitosis.
Cks1 in the Cdk-independent pathway involves the recognition of substrates p27Kip1 and p21cip1 by directly associating with E3 SCFSkp2 when stimulated by certain mitogenic signals, such as TGF-β.
Clinical significance
Cks1-depleted breast cancer cells not only exhibit slowed G(1) progression, but also accumulate in G(2)-M due to blocked mitotic entry. Cdk1 expression, which is crucial for M phase entry, is drastically diminished by Cks1 depletion, and restoration of Cdk1 reduces G(2)-M accumulation in Cks1-depleted cells.
Interactions
CKS1B has been shown to interact with SKP2 and CDKN1B.
References
External links
Further reading
Cell cycle regulators | CKS1B | [
"Chemistry"
] | 401 | [
"Cell cycle regulators",
"Signal transduction"
] |
14,770,397 | https://en.wikipedia.org/wiki/Environmental%20Vulnerability%20Index | The Environmental Vulnerability Index (EVI) is a measurement devised by the South Pacific Applied Geoscience Commission (SOPAC), the United Nations Environment Program and others to characterize the relative severity of various types of environmental issues suffered by 243 enumerated individual nations and other geographies (such as Antarctica). The results of the EVI are used to focus on planned solutions to negative pressures on the environment, whilst promoting sustainability.
Development
The beginning stages of the Environmental Vulnerability Index (EVI) were developed to be appropriate for Small Island Developing States (SIDS); this initial concept was presented by the South Pacific Applied Geoscience Commission (SOPAC) on February 4, 1999. The ideas and plans for the EVI were developed further at an EVI Think Tank held from September 7–10, 1999 in Pacific Harbour, Fiji. Expanding the EVI to other SIDS was aided by a meeting of experts convened in Malta on November 29 – December 3, 1999 by SOPAC and the Foundation for International Studies (of the University of Malta's Islands and Small States Institute) with the support of the United Nations Environment Programme (UNEP).
During the second phase of development, the EVI was tested in five countries. A workshop, hosted by UNEP in Geneva, Switzerland on August 27–29, 2001, was held to expand the application of the EVI to a demonstrative set of countries from around the world. Continued work on the EVI led to the presentation of the first functional results with the demonstration EVI.
Calculation
Calculating an Environmental Vulnerability Index requires compiling relevant environmental vulnerability data for the 50 indicators; this data is then used to calculate each indicator. Because the indicators are heterogeneous, including variables whose responses are numerical or qualitative and measured on different scales (linear, non-linear, or with different ranges), they are mapped onto a 1–7 vulnerability scale. Where data is not available, no value is given for the indicator and the denominator of the average is adjusted down by one. Where an indicator is considered non-applicable in a country (such as volcanic eruptions in Tuvalu, which has no volcanoes), the lowest vulnerability score of 1 is attributed to that indicator. The vulnerability scores for each indicator are then accumulated either into categories or sub-indices and the average calculated. An overall average of all indicators is calculated to generate the country EVI. The EVI is accumulated into three sub-indices: Hazards, Resistance, and Damage.
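A minimal sketch of this aggregation rule in Python (the function name and input layout are assumptions for illustration; SOPAC's actual scaling tables map raw indicator values onto the 1–7 scale before this step):

```python
def evi(scores):
    """Country EVI from a list of per-indicator entries.

    Each entry is a 1-7 vulnerability score, None (data not available,
    so the indicator is dropped and the denominator shrinks by one), or
    'n/a' (indicator non-applicable, scored as lowest vulnerability, 1).
    """
    vals = [1 if s == 'n/a' else s for s in scores if s is not None]
    return sum(vals) / len(vals)   # denominator excludes missing data

# e.g. evi([7, 5, None, 'n/a', 3]) averages 7, 5, 1 and 3 over 4 indicators
```

The same averaging can be applied per category or per sub-index before the overall country average is taken.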
The 50 EVI indicators are also divided up in the issue categories for use as required: Climate change, Biodiversity, Water, Agriculture and fisheries, Human health aspects, Desertification, and Exposure to Natural Disasters.
Indicators
High Winds – Average annual excess winds over the last five years (summing speeds on days during which the maximum recorded wind speed is greater than 20% higher than the 30 year average maximum wind speed for that month) averaged over all reference climate stations.
Dry Periods – Average annual rainfall deficit (mm) over the past 5 years for all months with more than 20% lower rainfall than the 30 year monthly average, averaged over all reference climate stations.
Wet Periods – Average annual excess rainfall (mm) over the past 5 years for all months with more than 20% higher rainfall than the 30 year monthly average, averaged over all reference climate stations
Hot Periods – Average annual excess heat (degrees C) over the past 5 years for all days more than 5°C (9°F) hotter than the 30 year mean monthly maximum, averaged over all reference climate stations.
Cold Periods – Average annual heat deficit (degrees C) over the past 5 years for all days more than 5°C (9°F) cooler than the 30 year mean monthly minimum, averaged overall reference climate stations.
Sea Temperatures – Average annual deviation in Sea Surface Temperatures (SST) in the last 5 years in relation to the 30 year monthly means
Volcanoes – Cumulative volcano risk as the weighted number of volcanoes with the potential for eruption greater than or equal to a Volcanic Explosivity Index of 2 (VEI 2) within 100 km of the country land boundary (divided by the area of land).
Earthquakes – Cumulative earthquake energy within 100 km of country land boundaries measured as Local Magnitude (ML) ≥ 6.0 and occurring at a depth of fifteen kilometers or less (≤15 km) over 5 years (divided by land area).
Tsunamis – Number of tsunamis or storm surges with run-up greater than 2 meters above Mean High Water Spring tide (MHWS) per 1000 km coastline since 1900.
Slides – Number of slides recorded in the last 5 years (EMDAT definitions), divided by land area
Land Area – Total land area (km2)
Country Dispersion – Ratio of length of borders (land and maritime) to total land area.
Isolation – Distance to nearest continent (km)
Relief – Altitude range (highest point subtracted from the lowest point in country)
Lowlands – Percentage of land area less than or equal to 50m above sea level
Borders – Number of land and sea borders (including EEZ shared with other countries.)
Ecosystem Imbalance – Weighted average change in trophic level since fisheries began (for trophic level slice ≤3.35).
Environmental Openness – Average annual USD freight imports over the past 5 years by any means per km2 land area
Migrations – Number of known species that migrate outside the territorial area at any time during their life spans (including land and all aquatic species) / area of land
Endemics – Number of known endemic species per million square kilometer land area
Introductions – Number of introduced species per 1000 square kilometer of land area
Endangered Species – Number of endangered and vulnerable species per 1000 km2 land area (IUCN definitions)
Extinctions – Number of species known to have become extinct since 1900 per 1000 km2 land area (IUCN definitions)
Vegetation Cover – Percentage of natural and regrowth vegetation cover remaining (include forests, wetlands, prairies, tundra, desert and alpine associations).
Loss Of Cover – Net percentage change in natural vegetation cover over the last five years
Habitat fragmentation – Total length of all roads in a country divided by land area.
Degradation – Percent of land area that is either severely or very severely degraded (FAO/AGL Terrastat definitions)
Terrestrial Reserves – Percent of terrestrial land area legally set aside as no take reserves
Marine Reserves – Percentage of continental shelf legally designated as marine protected areas (MPAs).
Intensive Farming – Annual tonnage of intensively farmed animal products (includes aquaculture, pigs, poultry) produced over the last five years per square kilometer land area.
Fertilizers – Average annual intensity of fertilizer use over the total land area over the last 5 years.
Pesticides – Average annual pesticides used as kg/km2/year over total land area over last 5 years.
Biotechnology – Cumulative number of deliberate field trials of genetically modified organisms conducted in the country since 1986.
Productivity Over-fishing – Average ratio of productivity : fisheries catch over the last 5 years
Fishing Effort – Average annual number of fishers per kilometer of coastline over the last 5 years
Renewable Water – Average annual water usage as percentage of renewable water resources over the last 5 years
SO2 Emissions – Average annual SO2 emissions over the last 5 years.
Waste Production – Generated and imported toxic, hazardous and municipal wastes per square kilometer land area over the last 5 years
Waste Treatment – Mean annual percent of hazardous, toxic and municipal waste effectively managed and treated over the past 5 years.
Industry – Average annual use of electricity for industry over the last 5 years per square kilometer of land
Spills – Total number of spills of oil and hazardous substances greater than 1000 liters on land, in rivers or within territorial waters per million km maritime coast during the last five years
Mining – Average annual mining production (include all surface and subsurface mining and quarrying) per km2 of land area over the past 5 years.
Sanitation – Density of population without access to safe sanitation (WHO definitions)
Vehicles – Number of vehicles per square kilometer of land area (most recent data)
Population – Total human population density (number per km2 land area)
Population Growth – Annual human population growth rate over the last 5 years
Tourists – Average annual number of international tourists per km2 land over the past 5 years.
Coastal Settlements – Density of people living in coastal settlements, i.e. with a city center within 100 km of any maritime or lake coast.
Environmental Agreements – Number of environmental treaties in force in a country.
Conflicts – Average number of conflict years per decade within the country over the past 50 years.
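Most of the climate indicators above share the same shape: sum the excess (or deficit) in months that deviate more than 20% from the 30-year monthly mean, then average over the 5-year window and over all reference stations. The following sketch implements the Wet Periods indicator (function name and data layout are assumptions for illustration):

```python
def wet_periods(monthly_rain, monthly_30yr_mean, years=5):
    """Average annual excess rainfall (mm), averaged over all stations.

    monthly_rain: {station: list of monthly rainfall over `years` years}
    monthly_30yr_mean: {station: list of 12 long-term monthly means}
    """
    station_annual_excess = []
    for station, rain in monthly_rain.items():
        means = monthly_30yr_mean[station]
        excess = 0.0
        for i, r in enumerate(rain):
            m = means[i % 12]        # long-term mean for this calendar month
            if r > 1.2 * m:          # month is >20% wetter than the 30-year mean
                excess += r - m      # accumulate the excess rainfall
        station_annual_excess.append(excess / years)
    # indicator value: average over all reference climate stations
    return sum(station_annual_excess) / len(station_annual_excess)
```

The Dry, Hot and Cold Periods indicators follow the same pattern with the comparison and threshold changed accordingly.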
List
See also
Biotic index, a simple measurement of stream pollution and its effects on the biology of the stream.
Climate Vulnerability Monitor (CVM)
Environmental Performance Index (EPI)
Environmental Sustainability Index (ESI)
References
Further reading
Kaly U, Pratt C and Mitchell J (2004) Environmental Vulnerability Index (EVI) 2004 SOPAC.
Barnett J, Lambert S and Fry I (2008) "The Hazards of Indicators: Insights from the Environmental Vulnerability Index" Annals of the Association of American Geographers, 98 (1).
External links
Environmental science
Vulnerability
Vulnerability | Environmental Vulnerability Index | [
"Environmental_science"
] | 1,907 | [
"nan"
] |
14,770,410 | https://en.wikipedia.org/wiki/EN2%20%28gene%29 | Homeobox protein engrailed-2 is a protein that in humans is encoded by the EN2 gene. It is a member of the engrailed gene family.
Function
Homeobox-containing genes are thought to have a role in controlling development. In Drosophila, the 'engrailed' (en) gene plays an important role during development in segmentation, where it is required for the formation of posterior compartments. Different mutations in the mouse homologs, En1 and En2, produced different developmental defects that are frequently lethal. The human engrailed homologs 1 and 2 encode homeodomain-containing proteins and have been implicated in the control of pattern formation during development of the central nervous system.
Description
The Engrailed-2 gene encodes the Engrailed-2 homeobox transcription factor. The signaling molecule fibroblast growth factor 8 (FGF8) controls expression of the En2 gene: the isthmus organizer expresses varying concentrations of FGF8 that influence the En2 transcription factor. En2 is involved in patterning the midbrain of the central nervous system during embryonic development. Specifically, it is required for proper positioning of folia in the developing hemispheres, and it continues to regulate foliation throughout nervous system development. En2 patterns cerebellar foliation along the mediolateral axis. Several birth defects can arise from inadequate or abnormal En2 expression. Scientists use mouse models to study the effects of En2 knockout alleles on development: when the En2 gene is knocked out, vermis foliation patterning is severely altered. Along with decreased complexity of cerebellar foliation, mutations in the En2 gene result in a depleted vermis or an overly simplified foliation pattern. The Engrailed genes are essential to proper neural circuit development.
In cancer diagnosis
A method for diagnosing prostate cancer by detection of EN2 in urine has been developed. The results of a clinical trial of 288 men suggest that EN2 could be a marker for prostate cancer which might prove more reliable than current methods that use prostate-specific antigen (PSA). If effective, a urine test is considered easier and less embarrassing for the patient than blood tests or rectal examinations and, therefore, less likely to discourage early diagnosis. At the time of the report, it was not clear whether or not the EN2 test could distinguish between aggressive tumours that would require intervention and relatively benign ones that would not.
Licensing and marketing
The EN2 test for prostate cancer has been licensed to Zeus Scientific, as they reported in March 2013. In that announcement they said they expected the test to be submitted to the US-FDA in a year, and available worldwide in 2 years.
Negative results
However, an independent study published in 2020 questioned the value of EN2 as a urinary marker for prostate cancer. In a comparison of 90 prostate cancer patients and 30 healthy subjects, the results showed that EN2 as a prostate cancer biomarker adds no value to the current use of PSA in clinical practice. Despite announcing a new clinical trial in 2018, the developers of the urinary EN2 test at the University of Surrey never registered such a trial at ClinicalTrials.gov or published any results from it. Also, Randox Ltd, the diagnostic company that was to commercialize the urinary EN2 test, no longer offers it in its product portfolio.
References
Further reading
External links
Transcription factors
Prostate cancer | EN2 (gene) | [
"Chemistry",
"Biology"
] | 723 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,770,430 | https://en.wikipedia.org/wiki/EPHA7 | Ephrin type-A receptor 7 is a protein that in humans is encoded by the EPHA7 gene.
This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands.
References
Further reading
Tyrosine kinase receptors | EPHA7 | [
"Chemistry"
] | 151 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,770,924 | https://en.wikipedia.org/wiki/HOXB4 | Homeobox protein Hox-B4 is a protein that in humans is encoded by the HOXB4 gene.
Function
This gene is a member of the Antp homeobox family and encodes a nuclear protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded protein functions as a sequence-specific transcription factor that is involved in development. Intracellular or ectopic expression of this protein expands hematopoietic stem and progenitor cells in vivo and in vitro, making it a potential candidate for therapeutic stem cell expansion.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXB4 | [
"Chemistry",
"Biology"
] | 142 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,770,935 | https://en.wikipedia.org/wiki/HOXC8 | Homeobox protein Hox-C8 is a protein that in humans is encoded by the HOXC8 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, which are located on different chromosomes and consist of 9 to 11 genes arranged in tandem. This gene is one of several homeobox HOXC genes located in a cluster on chromosome 12. The product of this gene may play a role in the regulation of cartilage differentiation. It could also be involved in chondrodysplasias or other cartilage disorders. HOXC8 was found to have activity in promoting nerve growth and its expression is dysregulated in patients with neurofibromatosis type 1.
See also
Homeobox
Interactions
HOXC8 has been shown to interact with Mothers against decapentaplegic homolog 6 and Mothers against decapentaplegic homolog 1.
References
Further reading
External links
Transcription factors | HOXC8 | [
"Chemistry",
"Biology"
] | 250 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,770,965 | https://en.wikipedia.org/wiki/HOXD13 | Homeobox protein Hox-D13 is a protein that in humans is encoded by the HOXD13 gene. This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms.
Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, located on different chromosomes, consisting of 9–11 genes arranged in tandem. HOXD13 is the first of several HOXD genes located in a cluster on chromosome 2. Deletions that remove the entire HOXD gene cluster or the 5' end of this cluster have been associated with severe limb and genital abnormalities. The product of the mouse Hoxd13 gene plays a role in axial skeleton development and forelimb morphogenesis.
Changes in the expression of the Hoxd13 gene in early lobe-finned fish may have also contributed to the evolution of the tetrapod limb. Experiments investigating the impact of 5′ Hoxd overexpression in zebrafish embryos observed modified development of distal fin structures, resulting in increased proliferation, distal expansion of cartilage tissue and fin fold reduction. A number of similar studies conducted with a range of animals, including catsharks and marsupials, lend further credibility to the role of the Hoxd13 gene in the fin-to-limb transition.
Clinical significance
Mutations in HOXD13 can cause several types of autosomal dominant syndactyly and brachydactyly, including brachydactyly type D ("club thumb"), brachydactyly type E, syndactyly type 5 and synpolydactyly type 1.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXD13 | [
"Chemistry",
"Biology"
] | 390 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,039 | https://en.wikipedia.org/wiki/ISL1 | Insulin gene enhancer protein ISL-1 is a protein that in humans is encoded by the ISL1 gene.
Function
This gene encodes a transcription factor containing two N-terminal LIM domains and one C-terminal homeodomain. The encoded protein plays an important role in the embryogenesis of pancreatic islets of Langerhans. In mouse embryos, a deficiency of this gene results in failure of motor neuron differentiation in the neural tube.
Interactions
ISL1 has been shown to interact with Estrogen receptor alpha.
Role in cardiac development
ISL1 is a marker for cardiac progenitors of the secondary heart field (SHF) which includes the right ventricle and the outflow tract. The biological function of ISL1 is demonstrated through ISL1 mutant mice and chick embryos that have altered cell proliferation, survival, and migration of cardiogenic precursors and severe cardiac defects. More recently it has been defined as a marker for a cardiac progenitor cell lineage that is capable of differentiating into all 3 major cell types of the heart: cardiomyocytes, smooth muscle and endothelial cell lineages. Research has shown that ISL1 promotes differentiation of cardiac cells and a depletion of ISL1 can respecify the cell fate of nascent cardiomyocytes, such as from ventricular to an atrial identity.
The validity of ISL1 as a marker for cardiac progenitor cells has been questioned, since some groups have found no evidence that ISL1 cells serve as cardiac progenitors. Furthermore, ISL1 is not restricted to second heart field progenitors in the developing heart, but also labels the cardiac neural crest. These findings support work from the Vilquin group in 2011, which concluded that ISL1 can represent cells from both neural crest and cardiomyocyte lineages. While it has been demonstrated by multiple groups that ISL1-positive cells can indeed differentiate into all 3 major cell types of the heart, their clinical relevance has been seriously questioned.
References
Further reading
External links
Transcription factors | ISL1 | [
"Chemistry",
"Biology"
] | 426 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,070 | https://en.wikipedia.org/wiki/KCNA3 | Potassium voltage-gated channel, shaker-related subfamily, member 3, also known as KCNA3 or Kv1.3, is a protein that in humans is encoded by the KCNA3 gene.
Potassium channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes – shaker, shaw, shab, and shal – have been identified in Drosophila, and each has been shown to have human homolog(s).
This gene encodes a member of the potassium channel, voltage-gated, shaker-related subfamily. This member contains six membrane-spanning domains with a shaker-type repeat in the fourth segment. It belongs to the delayed rectifier class, members of which allow nerve cells to efficiently repolarize following an action potential. It plays an essential role in T cell proliferation and activation. This gene appears to be intronless and is clustered together with KCNA2 and KCNA10 genes on chromosome 1.
Function
KCNA3 encodes the voltage-gated Kv1.3 channel, which is expressed in T and B lymphocytes. All human T cells express roughly 300 Kv1.3 channels per cell along with 10-20 calcium-activated KCa3.1 channels. Upon activation, naive and central memory T cells increase expression of the KCa3.1 channel to approximately 500 channels per cell, while effector-memory T cells increase expression of the Kv1.3 channel. Among human B cells, naive and early memory B cells express small numbers of Kv1.3 and KCa3.1 channels when they are quiescent, and augment KCa3.1 expression after activation. In contrast, class-switched memory B cells express high numbers of Kv1.3 channels per cell (about 1500/cell) and this number increases after activation.
Kv1.3 is physically coupled through a series of adaptor proteins to the T-cell receptor signaling complex and it traffics to the immunological synapse during antigen presentation. However, blockade of the channel does not prevent immune synapse formation. Kv1.3 and KCa3.1 regulate membrane potential and calcium signaling of T cells. Calcium entry through the CRAC channel is promoted by potassium efflux through the Kv1.3 and KCa3.1 potassium channels.
Blockade of Kv1.3 channels in effector-memory T cells suppresses calcium signaling, cytokine production (interferon-gamma, interleukin 2) and cell proliferation. In vivo, Kv1.3 blockers paralyze effector-memory T cells at the sites of inflammation and prevent their reactivation in inflamed tissues. In contrast, Kv1.3 blockers do not affect the homing to and motility within lymph nodes of naive and central memory T cells, most likely because these cells express the KCa3.1 channel and are, therefore, protected from the effect of Kv1.3 blockade.
Kv1.3 has been reported to be expressed in the inner mitochondrial membrane in lymphocytes. The apoptotic protein Bax has been suggested to insert into the outer mitochondrial membrane and occlude the pore of Kv1.3 via a lysine residue. Thus, Kv1.3 modulation may be one of many mechanisms that contribute to apoptosis.
Clinical significance
Autoimmune
In patients with multiple sclerosis (MS), disease-associated myelin-specific T cells from the blood are predominantly co-stimulation-independent effector-memory T cells that express high numbers of Kv1.3 channels. T cells in postmortem MS brain lesions are also predominantly effector-memory T cells that express high levels of the Kv1.3 channel. In children with type-1 diabetes mellitus, the disease-associated insulin- and GAD65-specific T cells isolated from the blood are effector-memory T cells that express high numbers of Kv1.3 channels, and the same is true of T cells from the synovial joint fluid of patients with rheumatoid arthritis. T cells with other antigen specificities in these patients were naive or central memory T cells that upregulate the KCa3.1 channel upon activation. Consequently, it should be possible to selectively suppress effector-memory T cells with a Kv1.3-specific blocker and thereby ameliorate many autoimmune diseases without compromising the protective immune response. In proof-of-concept studies, Kv1.3 blockers have prevented and treated disease in rat models of multiple sclerosis, type-1 diabetes mellitus, rheumatoid arthritis, contact dermatitis, and delayed-type hypersensitivity.
At therapeutic concentrations, the blockers did not cause any clinically evident toxicity in rodents and did not compromise the protective immune response to acute influenza viral infection or acute Chlamydia bacterial infection. Many groups are developing Kv1.3 blockers for the treatment of autoimmune diseases.
Metabolic
Kv1.3 is also considered a therapeutic target for the treatment of obesity, for enhancing peripheral insulin sensitivity in patients with type-2 diabetes mellitus, and for preventing bone resorption in periodontal disease. A genetic variation in the Kv1.3 promoter region is associated with low insulin sensitivity and impaired glucose tolerance.
Neurodegeneration
Kv1.3 channels have been found to be highly expressed by activated and plaque-associated microglia in human Alzheimer's disease (AD) post-mortem brains as well as in mouse models of AD pathology. Patch-clamp recordings and flow cytometric studies performed on acutely isolated mouse microglia have confirmed upregulation of Kv1.3 channels with disease progression in mouse AD models. The Kv1.3 channel gene has also been found to be a regulator of pro-inflammatory microglial responses. Selective blockade of Kv1.3 channels by the small molecule Pap1 as well as a peptide sea anemone toxin-based peptide ShK-223 have been found to limit amyloid beta plaque burden in mouse AD models, potentially via augmented clearance by microglia.
Blockers
Kv1.3 is blocked by several peptides from venomous creatures including scorpions (ADWX1, OSK1, margatoxin, kaliotoxin, charybdotoxin, noxiustoxin, anuroctoxin, OdK2) and sea anemone (ShK, ShK-F6CA, ShK-186, ShK-192, BgK), and by small molecule compounds (e.g., PAP-1, Psora-4, correolide, benzamides, CP339818, progesterone and the anti-lepromatous drug clofazimine). The Kv1.3 blocker clofazimine has been reported to be effective in the treatment of chronic graft-versus-host disease, cutaneous lupus, and pustular psoriasis in humans. Furthermore, clofazimine in combination with the antibiotics clarithromycin and rifabutin induced remission for about 2 years in patients with Crohn's disease, but the effect was temporary; the effect was thought to be due to anti-mycobacterial activity, but could well have been an immunomodulatory effect by clofazimine.
See also
Voltage-gated potassium channel
References
External links
Ion channels | KCNA3 | [
"Chemistry"
] | 1,631 | [
"Neurochemistry",
"Ion channels"
] |
14,771,100 | https://en.wikipedia.org/wiki/KIF5B | Kinesin family member 5B (KIF5B) is a protein that in humans is encoded by the KIF5B gene. It is part of the kinesin family of motor proteins.
Interactions
KIF5B has been shown to interact with:
KLC1,
KLC2,
SNAP-25,
SNAP23, and
YWHAH.
References
Further reading
External links
Human proteins
Motor proteins | KIF5B | [
"Chemistry"
] | 86 | [
"Molecular machines",
"Motor proteins"
] |
14,771,212 | https://en.wikipedia.org/wiki/CLIC4 | Chloride intracellular channel 4, also known as CLIC4,p644H1,HuH1, is a eukaryotic gene.
Chloride channels are a diverse group of proteins that regulate fundamental cellular processes including stabilization of cell membrane potential, transepithelial transport, maintenance of intracellular pH, and regulation of cell volume. Chloride intracellular channel 4 (CLIC4) protein, encoded by the clic4 gene, is a member of the p64 family; the gene is expressed in many tissues. These channels are implicated in angiogenesis, pulmonary hypertension, cancer, and cardioprotection from ischemia-reperfusion injury. They exhibit an intracellular vesicular pattern in PANC-1 cells (pancreatic cancer cells).
Binding partners
CLIC4 binds to dynamin I, α-tubulin, β-actin, creatine kinase and two 14-3-3 isoforms.
See also
Chloride channel
References
Further reading
External links
Ion channels | CLIC4 | [
"Chemistry"
] | 208 | [
"Neurochemistry",
"Ion channels"
] |
14,771,355 | https://en.wikipedia.org/wiki/ASCL1 | Achaete-scute homolog 1 is a protein that in humans is encoded by the ASCL1 gene. Because it was discovered subsequent to studies on its homolog in Drosophila, the Achaete-scute complex, it was originally named MASH-1 for mammalian achaete scute homolog-1.
Function
This gene encodes a member of the basic helix-loop-helix (BHLH) family of transcription factors. The protein activates transcription by binding to the E box (5'-CANNTG-3'). Dimerization with other BHLH proteins is required for efficient DNA binding. This protein plays a role in the neuronal commitment and differentiation and in the generation of olfactory and autonomic neurons. It is highly expressed in medullary thyroid cancer and small cell lung cancer and may be a useful marker for these cancers. The presence of a CAG repeat in the gene suggests that it may also play a role in tumor formation.
Role in neuronal commitment
Development of the vertebrate nervous system begins when the neural tube forms in the early embryo. The neural tube eventually gives rise to the entire nervous system, but first neuroblasts must differentiate from the neuroepithelium of the tube. The neuroblasts are the cells that undergo mitotic division and produce neurons. Asc is central to the differentiation of the neuroblasts and to the lateral inhibition mechanism, which inherently creates a safety net in the event of damage to or death of these critical cells.
Differentiation of the neuroblast begins when the cells of the neural tube express Asc and thus upregulate the expression of Delta, a protein essential to the lateral inhibition pathway of neuronal commitment. Delta can diffuse to neighboring cells and bind to the Notch receptor, a large transmembrane protein which upon activation undergoes proteolytic cleavage to release the intracellular domain (Notch-ICD). The Notch-ICD is then free to travel to the nucleus and form a complex with Suppressor of Hairless (SuH) and Mastermind. This complex acts as transcription regulator of Asc and accomplishes two important tasks. First, it prevents the expression of factors required for differentiation of the cell into a neuroblast. Secondly, it inhibits the neighboring cell's production of Delta. Therefore, the future neuroblast will be the cell that has the greatest Asc activation in the vicinity and consequently the greatest Delta production that will inhibit the differentiation of neighboring cells. The select group of neuroblasts that then differentiate in the neural tube are thus replaceable because the neuroblast's ability to suppress differentiation of neighboring cells depends on its own ability to produce Asc.
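The mutual Delta–Notch feedback described above can be illustrated with a toy two-cell simulation in the style of a Collier lateral-inhibition model (all parameters, initial values and the repression function here are illustrative assumptions, not measured quantities): the cell that starts with slightly more Delta, i.e. greater Asc activation, drives up its neighbour's Notch activity, repressing the neighbour's Delta and thereby keeping its own Notch low.

```python
def simulate(steps=200, dt=0.1):
    """Two coupled cells; returns final Delta levels [cell0, cell1]."""
    delta = [0.51, 0.49]   # cell 0 starts with slightly more Delta (Asc-driven)
    notch = [0.0, 0.0]
    for _ in range(steps):
        nxt_d, nxt_n = [], []
        for i in (0, 1):
            j = 1 - i
            # Notch activity relaxes toward the neighbour's Delta level
            nxt_n.append(notch[i] + dt * (delta[j] - notch[i]))
            # the cell's own Notch activity represses its Delta production
            nxt_d.append(delta[i] + dt * (1.0 / (1.0 + 10.0 * notch[i] ** 2) - delta[i]))
        delta, notch = nxt_d, nxt_n
    return delta

d = simulate()   # cell 0's initial edge is amplified: it keeps high Delta
```

Because the homogeneous state is unstable under sufficiently steep repression, the small initial asymmetry grows rather than averaging out, which is the essence of the safety-net property: if the winning cell dies, a neighbour is released from inhibition and can take its place.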
This process of neuroblast differentiation via Asc is common to all animals. Although this mechanism was initially studied in Drosophila, homologs to all proteins in the pathway have been found in vertebrates that have the same bHLH structure.
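The lateral inhibition mechanism described above can be sketched as a small dynamical system. The following is a minimal, illustrative simulation, not part of the article, based on the standard two-equation Collier et al. (1996) Delta–Notch model: each cell's Notch activity rises with its neighbors' Delta, while its own Notch represses its Delta. The cell count and parameter values (A, B, V, DT) are arbitrary assumptions chosen so that an alternating pattern of committed and inhibited cells emerges from near-uniform initial conditions.

```python
# Hypothetical sketch of Delta-Notch lateral inhibition on a ring of cells
# (Collier et al. 1996 model; parameters here are illustrative choices).
import random

N_CELLS = 10                     # cells arranged in a ring
A, B, V, DT = 0.01, 100.0, 1.0, 0.1

def f(x):
    # Notch activation by neighboring Delta (increasing, saturating)
    return x**2 / (A + x**2)

def g(x):
    # Delta production repressed by the cell's own Notch (decreasing)
    return 1.0 / (1.0 + B * x**2)

random.seed(0)
# Start near-uniform, with small random perturbations (the "noise"
# that lateral inhibition amplifies into a pattern).
notch = [0.5 + 0.01 * random.random() for _ in range(N_CELLS)]
delta = [0.5 + 0.01 * random.random() for _ in range(N_CELLS)]

for _ in range(5000):            # forward-Euler integration
    mean_nb = [(delta[i - 1] + delta[(i + 1) % N_CELLS]) / 2
               for i in range(N_CELLS)]
    notch = [n + DT * (f(m) - n) for n, m in zip(notch, mean_nb)]
    delta = [d + DT * V * (g(n) - d) for d, n in zip(delta, notch)]

# High-Delta cells are the "winners" that commit (analogous to
# high-Asc neuroblasts); their neighbors are forced to low Delta.
pattern = ["high" if d > 0.5 else "low" for d in delta]
print(pattern)
```

Running the sketch shows the initially near-uniform field splitting into high- and low-Delta cells, mirroring how a cell with slightly greater Asc/Delta activity suppresses its neighbors' commitment.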
Autonomic nervous system development
In addition to its important role in neuroblast formation, Asc also functions to mediate autonomic nervous system (ANS) formation. Asc was initially suspected to play a role in the ANS when ASCL1 was found expressed in cells surrounding the dorsal aorta, in the adrenal glands and in the developing sympathetic chain during a specific stage of development. Subsequent studies of mice genetically altered to be MASH-1 deficient revealed defective development of both sympathetic and parasympathetic ganglia, the two constituents of the ANS.
Interactions
ASCL1 has been shown to interact with Myocyte-specific enhancer factor 2A.
References
Further reading
External links
Transcription factors | ASCL1 | [
"Chemistry",
"Biology"
] | 766 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,402 | https://en.wikipedia.org/wiki/MAF%20%28gene%29 | Transcription factor Maf, also known as proto-oncogene c-Maf or V-maf musculoaponeurotic fibrosarcoma oncogene homolog, is a transcription factor that in humans is encoded by the MAF gene.
Types
One type, MafA, also known as RIPE3b1, promotes pancreatic development, as well as insulin gene transcription.
Interactions
MAF has been shown to interact with:
CREBBP
EP300
MYB
SOX9.
References
Further reading
External links
Transcription factors | MAF (gene) | [
"Chemistry",
"Biology"
] | 116 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,485 | https://en.wikipedia.org/wiki/AFF1 | AF4/FMR2 family member 1 is a protein that in humans is encoded by the AFF1 gene. A record for a separate gene, PBM1, at the same location has since been withdrawn, and PBM1 is now considered an alias of AFF1. The gene was previously known as AF4 (ALL1-fused gene from chromosome 4).
The gene is a member of the AF4/FMR2 (AFF) family, a group of nuclear transcriptional activators which encourage RNA elongation. It is a component of the super elongation complex. It is recognized as a proto-oncogene: chromosomal translocations associated with leukemia can fuse this gene with others like KMT2A, producing an uncontrolled activator protein.
References
External links
Further reading | AFF1 | [
"Chemistry"
] | 166 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,771,558 | https://en.wikipedia.org/wiki/PAX7 | Paired box protein Pax-7 is a protein that in humans is encoded by the PAX7 gene.
Function
Pax-7 plays a role in neural crest development and gastrulation, and it is an important factor in the expression of neural crest markers such as Slug, Sox9, Sox10 and HNK-1. PAX7 is expressed in the palatal shelf of the maxilla, Meckel's cartilage, mesencephalon, nasal cavity, nasal epithelium, nasal capsule and pons.
Pax7 is a transcription factor that plays a role in myogenesis by regulating the proliferation of muscle precursor cells. It can bind DNA as a heterodimer with PAX3. It also interacts with PAXBP1, an interaction that (by similarity) links PAX7 to a WDR5-containing histone methyltransferase complex, and with DAXX.
PAX7 functions as a marker for a rare subset of spermatogonial stem cells, specifically a subset of Asingle spermatogonia. These PAX7+ spermatogonia are rare in the adult testis but are much more prevalent in newborns, making up 28% of germ cells in the neonatal testis. Unlike PAX7+ muscle satellite cells, PAX7+ spermatogonia rapidly proliferate and are not quiescent. PAX7+ spermatogonia are able to give rise to all stages of spermatogenesis and produce motile sperm. However, PAX7 is not required for spermatogenesis, as mice without PAX7+ spermatogonia show no deficits in fertility.
PAX7 may also function in the recovery of spermatogenesis. Unlike other spermatogonia, PAX7+ spermatogonia are resistant to radiation and chemotherapy. The surviving PAX7+ spermatogonia are able to increase in number following these therapies and differentiate into the other forms of spermatogonia that did not survive. Additionally, mice lacking PAX7 had delayed recovery of spermatogenesis following exposure to busulfan when compared to control mice.
Clinical significance
Pax proteins play critical roles during fetal development and cancer growth. The specific function of paired box gene 7 is unknown but is speculated to involve tumor suppression, since fusion of this gene with a forkhead-domain family member has been associated with alveolar rhabdomyosarcoma. Alternative splicing of this gene produces two known products, but the biological significance of the variants is unknown. Animal studies show that mutant mice have malformations of the maxilla and nose.
See also
Pax genes
References
Further reading
External links
Transcription factors | PAX7 | [
"Chemistry",
"Biology"
] | 558 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,771,568 | https://en.wikipedia.org/wiki/PAX9 | Paired box gene 9, also known as PAX9, is a protein which in humans is encoded by the PAX9 gene. It is also found in other mammals.
Expression and function
This gene is a member of the paired box (PAX) family of transcription factors. During mouse embryogenesis, Pax9 expression starts from embryonic day 8.5 and becomes more evident by E9.5; at this stage its expression is restricted to the pharyngeal endoderm. Later on, Pax9 is also expressed in the axial skeleton. Pax9 is required for craniofacial, tooth and limb development, and may play a more general role in the development of stratified squamous epithelia as well as various organs and skeletal elements. PAX9 plays a role in the absence of wisdom teeth in some human populations (possibly along with the less well studied AXIN2 and MSX1).
Clinical significance
This gene was found to be amplified in lung cancer. The amplification covers three tissue-development genes: TTF1, NKX2-8, and PAX9. It appears that certain lung cancer cells select for DNA copy-number amplification and increased RNA/protein expression of these three coamplified genes for functional advantages.
Oligodontia
Oligodontia is a genetic disorder caused by mutation of the PAX9 gene. The disorder results in the congenital absence of six or more permanent teeth, excluding the third molars. Also known as selective tooth agenesis (STHAG), it is the most common disorder of the human dentition, affecting a little less than one fourth of the population. The PAX9 gene, found on chromosome 14, encodes a group of transcription factors that play an important role in early tooth development. In humans, a frameshift mutation in the paired domain of PAX9 was discovered in those affected by oligodontia. There are multiple possible mechanisms by which such mutations may cause the disorder. A recent study involving a missense mutation of PAX9 suggests that loss of function due to the absence of the DNA-binding domain is one mechanism that causes oligodontia. Those who carry a PAX9 mutation and develop the disorder continue to have a normal life expectancy. Along with PAX9 mutations, MSX1 gene mutations have also been shown to affect dental development in fetuses.
Interactions
PAX9 has been shown to interact with JARID1B.
References
Further reading
External links
Transcription factors | PAX9 | [
"Chemistry",
"Biology"
] | 519 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |