Georges Reeb
https://en.wikipedia.org/wiki/Georges%20Reeb

Georges Henri Reeb (12 November 1920 – 6 November 1993) was a French mathematician. He worked in differential topology, differential geometry, differential equations, topological dynamical systems theory, and non-standard analysis.
Biography
Reeb was born in Saverne, Bas-Rhin, Alsace, to Theobald Reeb and Caroline Engel. He started studying mathematics at University of Strasbourg, but in 1939 the entire university was evacuated to Clermont-Ferrand due to the German occupation of France.
After the war, he completed his studies and in 1948 he defended his PhD thesis, entitled Propriétés topologiques des variétés feuilletées [Topological properties of foliated manifolds] and supervised by Charles Ehresmann.
In 1952 Reeb was appointed professor at Université Joseph Fourier in Grenoble and in 1954 he visited the Institute for Advanced Study. From 1963 he worked at Université Louis Pasteur in Strasbourg.
There, in 1965, together with Jean Leray and Pierre Lelong, he created the series of meetings Rencontres entre Mathématiciens et Physiciens Théoriciens. In 1966 Reeb and Jean Frenkel founded the Institut de Recherche Mathématique Avancée, the first university laboratory associated with the Centre National de la Recherche Scientifique; Reeb directed it from 1967 to 1972.
In 1967 he was President of the Société Mathématique de France and in 1971 he was awarded the .
In 1991 Reeb received an honorary doctorate from Albert-Ludwigs-Universität Freiburg and from Université de Neuchâtel. He died in 1993 in Strasbourg when he was 72 years old.
Research
Reeb was the founder of the topological theory of foliations, geometric structures on smooth manifolds that partition them into smaller pieces, the leaves. In particular, he described what is now called the Reeb foliation, a foliation of the 3-sphere whose leaves are all diffeomorphic to the plane ℝ², except one, which is a 2-torus.
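For concreteness, here is one standard construction of the Reeb foliation, sketched in LaTeX notation (conventions for the defining function vary between authors):

\[
D^2 \times S^1 \;=\; \bigl(D^2 \times \mathbb{R}\bigr)\big/\bigl((x,t)\sim(x,\,t+1)\bigr),
\qquad
L_c \;=\; \bigl\{(x,t) : t = e^{\,1/(1-\lVert x\rVert^2)} + c \bigr\},\quad c\in\mathbb{R}.
\]

The leaves L_c descend to planes spiraling toward the boundary of the solid torus, and the boundary torus T² is itself a leaf. Gluing two such foliated solid tori along their boundaries (exchanging meridian and longitude) foliates the 3-sphere, with T² as the unique compact leaf.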
One of his first significant results, the Reeb stability theorem, describes the local structure of a foliation around a compact leaf with finite holonomy group.
His work on foliations also had applications in Morse theory. In particular, the Reeb sphere theorem says that a compact manifold admitting a smooth function with exactly two critical points is homeomorphic to a sphere. In turn, in 1956 this was used to prove that the Milnor spheres, although not diffeomorphic to the standard sphere, are homeomorphic to the sphere S⁷.
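In symbols, a standard formulation of the theorem (included here for reference) reads:

\[
\textbf{Theorem (Reeb).}\quad
\text{If } M \text{ is a compact smooth } n\text{-manifold and } f\colon M \to \mathbb{R}
\text{ is smooth with exactly two critical points, then } M \text{ is homeomorphic to } S^n.
\]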
Other important geometric concepts named after him include the Reeb graph and the Reeb vector field associated to a contact form.
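For reference, the Reeb vector field has a concise standard characterization: on a contact manifold (M, α), it is the unique vector field R_α satisfying

\[
\iota_{R_\alpha}\, d\alpha = 0
\qquad\text{and}\qquad
\alpha(R_\alpha) = 1 .
\]

By Cartan's formula these two conditions give \(\mathcal{L}_{R_\alpha}\alpha = 0\), so the flow of R_α preserves the contact form, which is what makes it natural in contact dynamics.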
Towards the end of his career, Reeb became a supporter of the theory of non-standard analysis developed by Abraham Robinson, coining the slogan "The naïve integers don't fill up ℕ" and working on its applications to dynamical systems.
Selected works
Books
with Wu Wen-Tsün: Sur les espaces fibrés et les variétés feuilletées, 1952
with A. Fuchs: Statistiques commentées, 1967
with J. Klein: Formules commentées de mathématiques: Programme P.C., 1971
Feuilletages: résultats anciens et nouveaux (Painlevé, Hector et Martinet), 1974
Articles
with André Haefliger:
See also
Séminaire Nicolas Bourbaki
Séminaire Nicolas Bourbaki (1950–1959)
References
1920 births
1993 deaths
People from Saverne
Academic staff of the University of Strasbourg
20th-century French mathematicians
Topologists
Geometers
Nuova Accademia di Belle Arti
https://en.wikipedia.org/wiki/Nuova%20Accademia%20di%20Belle%20Arti

The Nuova Accademia di Belle Arti ("New Academy of Fine Arts"), also known as NABA, is a private academy of fine art in Milan, in Lombardy in northern Italy. It has approximately 3,000 students, some of whom are from abroad; it participates in the Erasmus Programme.
History
NABA was founded in Milan in 1980.
In 1994 the Nuova Accademia received one of the forty "Ambrogino" certificates of civic merit awarded each year by the Comune of Milan.
In 2008 NABA began hosting a "node" of the Planetary-Collegium research platform of the University of Plymouth.
NABA was bought by Bastogi Spa of Milan in 2002. In December 2009 Bastogi sold it to Laureate Education of Baltimore, Maryland, for €22 million. In 2017, Laureate Education sold it to Galileo Global Education as part of a $263 million deal that also included Domus Academy.
The school is listed by the Ministero dell'Istruzione, dell'Università e della Ricerca, the Italian ministry of education, as a "legally recognised academy" in the AFAM classification of schools of music, art and dance that are considered equivalent to a traditional university.
References
Art schools in Italy
Fashion schools
Design schools in Italy
Communication design
Graphic design schools
Education in Milan
Higher education in Italy
Educational institutions established in 1980
1980 establishments in Italy
Soldier Field
https://en.wikipedia.org/wiki/Soldier%20Field

Soldier Field is a multi-purpose stadium on the Near South Side of Chicago, Illinois, United States. Opened in 1924 and reconstructed in 2003, the stadium has served as the home of the Chicago Bears of the National Football League (NFL) since 1971, as well as of Chicago Fire FC of Major League Soccer (MLS) from 1998 to 2006 and again since 2020. It also regularly hosts stadium concerts and other large-crowd events. With a football capacity of 62,500, it is the smallest stadium in the NFL. Soldier Field is also the oldest stadium used in the NFL and the third-oldest used in MLS.
The stadium's interior was rebuilt as part of a major renovation project in 2002, which modernized the facility but lowered its seating capacity, eventually causing it to be delisted as a National Historic Landmark in 2006. Soldier Field has served as the home venue for a number of other sports teams in its history, including the Chicago Cardinals of the NFL and University of Notre Dame football. It hosted the 1994 FIFA World Cup, the 1999 FIFA Women's World Cup, and multiple CONCACAF Gold Cup championships. In 1968, it hosted the inaugural World Games of the Special Olympics, as well as its second World Games in 1970. Other historic events have included large rallies with speeches, including by Amelia Earhart, Franklin D. Roosevelt, and Martin Luther King Jr.
History
On December 3, 1919, Chicago-based architectural firm Holabird & Roche was chosen to design the stadium, and ground was broken on August 11, 1922. The stadium cost $13 million to construct, a large sum for a sporting venue at that time (in comparison, the Los Angeles Memorial Coliseum had cost less than US$1 million in 1923 dollars). On October 9, 1924, the 53rd anniversary of the Great Chicago Fire, the stadium was officially dedicated as "Grant Park Stadium", although it had hosted a few events before then, including a field day for Chicago police officers on September 6, and the stadium's first football game, between Louisville Male High School and Austin Community Academy High School, on October 4. On November 22, the stadium hosted its first college football game, in which Notre Dame defeated Northwestern University 13–6.
On November 11, 1925, the stadium's name was changed to Soldier Field, in dedication to U.S. soldiers who had died in combat during World War I. Its formal rededication as Soldier Field was held during the 29th annual playing of the Army–Navy Game on November 27, 1926. Several months earlier, in June 1926, the stadium hosted several events during the 28th International Eucharistic Congress, the first held in the United States. During the Century of Progress World's Fair in 1933, it served as the main stage.
The stadium's design is in the Neoclassical style, with Doric columns rising above the East and West entrances. In its earliest configuration, Soldier Field was capable of seating 74,280 spectators, and was in the shape of a U. Additional seating could be added along the interior field, upper promenades, and on the large, open field and terrace beyond the north endzone, bringing the seating capacity to over 100,000.
Chicago Bears move in
Before they moved into the stadium, the Chicago Bears had played occasional charity games at Soldier Field, including against their former crosstown rivals, the Chicago Cardinals. The Cardinals also used the stadium as their home field for their final season in the city in 1959.
In 1971, the Bears moved into Soldier Field full-time, originally with a three-year commitment. The team had previously played home games at Wrigley Field, the home stadium of the Chicago Cubs of Major League Baseball (MLB), but was forced to move to a larger venue by post-AFL–NFL merger policies requiring that stadiums seat at least 50,000 spectators and provide lighting for potential night games. The Bears had initially intended to build a stadium in Arlington Heights, but the property did not fit the league's specifications.
On September 19, 1971, the Bears played their first home game at Soldier Field, in which they defeated the Pittsburgh Steelers 17–15. In 1978, the Bears and the Chicago Park District agreed to a 20-year lease and renovation of the stadium; both parties pooled their resources for the renovation. The playing surface was AstroTurf from 1971 until 1987, and was replaced with natural grass in 1988. On February 27, 1987, Soldier Field was designated a National Historic Landmark.
Replacement talks
In 1989, Soldier Field's future was in jeopardy after a proposal was created for a "McDome", which was intended to be a domed stadium for the Bears, but was rejected by the Illinois Legislature in 1990. Because of this, Bears president Michael McCaskey considered relocation as a possible factor for a new stadium. The Bears had also purchased options in Hoffman Estates, Elk Grove Village and Aurora. In 1995, McCaskey announced that he and Northwest Indiana developers agreed to construction of an entertainment complex called "Planet Park", which would also include a new stadium. However, the plan was rejected by the Lake County Council, and in 1998, then-Chicago mayor Richard M. Daley proposed that the Bears share Comiskey Park with the Chicago White Sox.
Renovations
Beginning in 1978, the plank seating was replaced by individual seats with backs and armrests. In 1982, a new press box, as well as 60 skyboxes, were added to the stadium, boosting its capacity to 66,030. In 1988, 56 more skyboxes were added, increasing capacity to 66,946. Capacity was slightly increased to 66,950 in 1992, then slightly reduced to 66,944 by 1994. During the renovation, seating capacity was reduced to 55,701 by building a grandstand in the open end of the U shape. This moved the field closer to both ends in order to bring fans closer to the action, at the expense of seating capacity. The front row 50-yard line seats were closer to the sidelines than at any other NFL stadium until MetLife Stadium opened in 2010.
2002–03 renovation and landmark delisting
In 2001, the Chicago Park District, which owns the structure, faced substantial criticism when it announced plans to alter the stadium with a design by Benjamin T. Wood and Carlos Zapata of Wood + Zapata in Boston. The stadium grounds were reconfigured by local architecture firm Lohan Associates, led by architect Dirk Lohan, grandson of Ludwig Mies van der Rohe. The stadium's interior would be demolished and reconstructed while the exterior would be preserved, in an example of facadism. A similar endeavor was undertaken with Leipzig's Red Bull Arena, where a modern stadium was built within the preserved exterior of the original Zentralstadion. Fans and radio hosts, such as WSCR's Mike North, criticized the small seating capacity of the new venue, and others criticized the Park District's lack of care for the field surface after the first seasonal freeze and its refusal to consider a new-generation artificial surface, leaving the Bears to play on dead grass.
On January 19, 2002, the night of the Bears' playoff loss to the Philadelphia Eagles, demolition began as tailgate fires still burned in trash cans in the parking lots. The removal of 24,000 stadium seats in 36 hours by Archer Seating Clearinghouse, a speed record that has never been exceeded since, was the first step in building the new Soldier Field. Nostalgic Bears fans recalling the team's glory seasons (especially 1985), as well as some retired players, picked up their seats in the South parking lot. The foremen on the job were Grant Wedding, who had installed the seats himself in 1979, and Mark Wretschko, an executive for the factory that made the new seats. While Soldier Field underwent renovation, the Bears spent the 2002 NFL season playing their home games at Memorial Stadium at the University of Illinois. On September 29, 2003, the Bears played their first game at the renovated Soldier Field, in which they were defeated by the Green Bay Packers, 38–23. The renovation cost a total of $632 million; taxpayers were responsible for $432 million, while the Chicago Bears and the NFL contributed $200 million.
Several writers and columnists attacked the Soldier Field renovation project as an aesthetic, political and financial nightmare. The project received mixed reviews within the architecture community, with criticism from civic and preservation groups. Prominent architect and native Chicagoan Stanley Tigerman called it "a fiasco". Chicago Tribune architecture critic Blair Kamin dubbed it the "Eyesore on the Lake Shore", while others called it the "Monstrosity on the Midway" or "Mistake by the Lake". The renovation was described by some as looking as if "a spaceship landed on the stadium". Lohan responded:
I would never say that Soldier Field is an architectural landmark. Nobody has copied it; nobody has learned from it. People like it for nostalgic reasons. They remember the games and parades and tractor pulls and veterans' affairs they've seen there over the years. I wouldn't do this if it were the Parthenon. But this isn't the Parthenon.
Proponents of the renovation argued it was badly needed because of aging and cramped facilities. The New York Times named the renovated Soldier Field one of the five best new buildings of 2003. Soldier Field was given an award in design excellence by the American Institute of Architects in 2004.
On September 23, 2004, as a result of the renovation, a 10-member federal advisory committee unanimously recommended that Soldier Field be delisted as a National Historic Landmark. The recommendation to delist was prepared by Carol Ahlgren, an architectural historian at the National Park Service's Midwest Regional Office in Omaha, Nebraska, who was quoted in Preservation Online stating, "if we had let this stand, I believe it would have lowered the standard of National Historic Landmarks throughout the country. ... If we want to keep the integrity of the program, let alone the landmarks, we really had no other recourse." The stadium lost the landmark designation on February 17, 2006.
Subsequent developments
In May 2012, Soldier Field became the first NFL stadium to achieve LEED certification, a program that recognizes environmentally sustainable buildings.
On July 9, 2019, the Chicago Fire of Major League Soccer (MLS) announced an agreement with the Village of Bridgeview to release the team from their lease with SeatGeek Stadium, where they had played since 2006. As a result, the Fire returned to Soldier Field for the 2020 MLS season.
On June 17, 2021, the Chicago Bears submitted a bid for the Arlington Park Racetrack property, making a move from Soldier Field to a new venue more possible. On September 29, the Bears and Churchill Downs Incorporated announced that they had reached an agreement for the property.
On September 5, 2022, the Kentucky bluegrass was replaced with Bermuda grass after poor field conditions were noted in an August 13 preseason game.
Public transportation
The closest Chicago 'L' station to Soldier Field is the Roosevelt station on the Orange, Green and Red lines. The Chicago Transit Authority also operates the #128 Soldier Field Express bus route to the stadium from Ogilvie Transportation Center and Union Station. There are also two Metra stations close by: the Museum Campus/11th Street station on the Metra Electric Line, which also is used by South Shore Line trains, and 18th Street, which is only served by the Metra Electric Line. Pace also provides access from the Northwest, West and Southwest suburbs to the stadium with four express routes from Schaumburg, Lombard, Bolingbrook, Burr Ridge, Palos Heights and Oak Lawn.
Facility contracts
The pouring rights of non-alcoholic beverages at Soldier Field were held by The Coca-Cola Company from at least 1992 until 2012, when the Bears signed a contract with Dr Pepper Snapple Group (later Keurig Dr Pepper), making it the only stadium in the NFL then (with Cleveland Browns Stadium striking a similar deal in 2018) to have such rights held by the company. With the 2003 renovation, the Bears gained power in striking sponsorship deals at Soldier Field; the Miller Brewing Company was given the pouring rights of alcoholic beverages, while Delaware North Sportservice was named the food and beverage service provider. Aramark took over service operations at the stadium when the latter contract expired in 2013.
Events
American football
Single events
The stadium hosted its first football game on October 4, 1924, between Louisville Male High School and Chicago's Austin Community Academy High School; Louisville's team won 26–0.
Over 100,000 spectators attended the 1926 Army–Navy Game. It would decide the national championship, as Navy entered undefeated and Army had lost only to Notre Dame. The game lived up to its hype, and even though it ended in a 21–21 tie, Navy was awarded the national championship.
The all-time collegiate attendance record of more than 123,000 was set on November 26, 1927, as Notre Dame beat the USC Trojans 7–6. That record stood until 2016, when more than 150,000 attended a game between the Virginia Tech Hokies and Tennessee Volunteers at Bristol Motor Speedway.
Austin defeated Leo to win the 1937 Chicago Prep Bowl, another contender for the highest attendance ever at the stadium (estimated at over 120,000 spectators). The Chicago Prep Bowl is held at Soldier Field yearly on the day after Thanksgiving, and the game is older than the IHSA state championship tournament, which has been held since the 1960s.
The stadium was host to 41 College All-Star Games, an exhibition between the previous year's NFL champion (or, in its final years, Super Bowl champion) and a team of collegiate all-star players prior to their reporting to their new professional teams' training camps. The series was discontinued after the 1976 NFL season; the final game in 1976 was halted in the third quarter when a torrential thunderstorm broke out, and play was never resumed.
The University of Notre Dame has hosted two games at Soldier Field, as part of their Shamrock Series. The first was in 2012, against the University of Miami, with another, against the University of Wisconsin-Madison, following in 2021.
NFL playoffs
1985 NFC Divisional Playoff: New York Giants 0, Chicago Bears 21. The last home playoff game was in 1963, when the team played in Wrigley Field.
1985 NFC Championship Game: Los Angeles Rams 0, Chicago Bears 24. This was the first NFC Championship held here.
1986 NFC Divisional Playoff: Washington 27, Chicago Bears 13.
1987 NFC Divisional Playoff: Washington 21, Chicago 17.
1988 NFC Divisional Playoff: Philadelphia Eagles 12, Chicago Bears 20. This game is best remembered as the Fog Bowl, where a dense fog covered the stadium, reducing visibility to 15–20 yards.
1988 NFC Championship Game San Francisco 49ers 28, Bears 3. The 49ers would then go on to win Super Bowl XXIII.
1990 NFC Wild Card: New Orleans Saints 6, Chicago Bears 16.
1991 NFC Wild Card: Dallas Cowboys 17, Chicago Bears 13.
2001 NFC Divisional Playoff: Philadelphia Eagles 33, Chicago Bears 19. This was also the last home game before the renovations took place in 2002.
2005 NFC Divisional Playoff: Carolina Panthers 29, Chicago Bears 21. First playoff game post-renovations.
2006 NFC Divisional Playoff: Seattle Seahawks 24, Chicago Bears 27 (OT).
2006 NFC Championship Game: New Orleans Saints 14, Bears 39. The win granted the team their second trip to the Super Bowl (their first in 21 years), where they lost to the Colts 29–17 in rainy Miami.
2010 NFC Divisional Playoff: Seattle Seahawks 24, Chicago Bears 35.
2010 NFC Championship Game: Green Bay Packers 21, Bears 14. The Bears were defeated by the eventual Super Bowl XLV champions.
2018 NFC Wild Card: Philadelphia Eagles 16, Chicago Bears 15. This game is known for its "Double Doink" field goal.
College football
The Northern Illinois Huskies play select games at Soldier Field, all of which have featured them hosting a team from the Big Ten Conference. Northern Illinois University (NIU) is located in DeKalb, west of Chicago on Interstate 88.
On September 1, 2007, NIU faced the University of Iowa in the first Division I College Football game at Soldier Field since the 2002 renovations. The Hawkeyes defeated the Huskies 16–3.
On September 17, 2011, the Huskies returned to play the Wisconsin Badgers in a game that was called "Soldier Field Showdown II". The eventual Big Ten champion Badgers topped NIU 49–7.
On September 1, 2012, NIU hosted the Iowa Hawkeyes in a season opener that was called "Soldier Field Showdown III". The Hawkeyes narrowly defeated the Huskies 18–17.
Notre Dame Fighting Irish football used the stadium as home field for the 1929 season while Notre Dame Stadium was being constructed. The school has used Soldier Field for single games on occasion both prior to and since the 1929 season, and boasts an undefeated 10–0–2 record there. At Soldier Field, Notre Dame has played Northwestern four times, USC and Wisconsin twice, and Army, Drake, Great Lakes Naval Base, Navy, and Miami once each.
Motorsport
Beginning in the 1940s and through the late 1960s (except during World War II), motorsport races were regularly held on a short track at the stadium. In 1956 and 1957, NASCAR held races at the stadium, including a NASCAR Cup race.
The early-to-mid 1980s saw the US Hot Rod Association host truck and tractor sled-pull competitions and monster truck exhibitions at the stadium. The engines of some of the vehicles would echo off the skyscrapers in downtown Chicago as they made their pulls. Damage to the stadium turf at several of these events led the USHRA to move them to the Rosemont Horizon (known today as Allstate Arena).
Ice hockey
On February 7, 2013, the stadium hosted a high school hockey game between St. Rita High School from the city's Southwest side and Fenwick High School from suburban Oak Park.
The Notre Dame Fighting Irish and Miami RedHawks played a doubleheader on February 17, 2013, with the Wisconsin Badgers and Minnesota Golden Gophers in the Hockey City Classic, the first outdoor hockey game in the history of the stadium. A Chicago Gay Hockey Association intra-squad game was held in affiliation with the Hockey City Classic.
On March 1, 2014, the Chicago Blackhawks played against the Pittsburgh Penguins as part of the NHL Stadium Series. The Blackhawks defeated the Penguins 5–1 before a sold-out crowd of 62,921. The team also held its 2015 Stanley Cup Championship celebration at the stadium instead of Grant Park, where other city championships have typically been held, due to recent rains.
On February 7, 2015, Soldier Field hosted another edition of the Hockey City Classic. The event had been delayed due to unusually warm weather and complications with the quality of the ice. The 2015 edition featured a match between Miami University and Western Michigan, followed by a match between the Big Ten's Michigan and Michigan State. On February 5, the organizers of the Hockey City Classic held the Unite on the Ice event benefiting St. Jude Children's Research Hospital. The event was centered upon a celebrity hockey game with former NHL and AHL players, as well as a public free skate at Soldier Field. Participants in the celebrity game included Éric Dazé, Jamal Mayers and Gino Cavallini. Denis Savard was in attendance, serving as an honorary coach during the game. On February 15, 2015, Soldier Field hosted another Chicago Gay Hockey Association intra-league match in association with the Hockey City Classic.
Soccer
1994 FIFA World Cup
1999 FIFA Women's World Cup
CONCACAF Gold Cups
2007 CONCACAF Gold Cup
2009 CONCACAF Gold Cup
2011 CONCACAF Gold Cup
2013 CONCACAF Gold Cup
2015 CONCACAF Gold Cup
2019 CONCACAF Gold Cup
2023 CONCACAF Gold Cup
Copa América Centenario
Single events
Over 15,000 spectators attended the first leg of the 1928 National Challenge Cup (now known as the Lamar Hunt U.S. Open Cup) between soccer teams Bricklayers and Masons F.C. of Chicago and New York Nationals of New York City. The match ended in a 1–1 tie, and New York won the second leg 3–0 in New York City.
Numerous Men's and Women's National Team friendly matches.
Liverpool vs Olympiacos in the 2014 International Champions Cup with Liverpool winning 1–0.
Manchester United vs. Paris Saint-Germain in the 2015 International Champions Cup with PSG winning 2–0.
Bayern Munich vs. Milan in the 2016 International Champions Cup with the game resulting in a 3–3 draw and Milan winning the penalty shootout 5–3.
Site of the 2017 MLS All-Star Game, played on August 2, 2017, between Real Madrid and a group of all-stars representing Major League Soccer.
Manchester City vs. Borussia Dortmund in the 2018 International Champions Cup with Borussia Dortmund winning 1–0.
Venue for the 2019 CONCACAF Gold Cup Final, with Mexico defeating the United States 1–0.
Special Olympics
The first Special Olympics games were held at Soldier Field on July 20, 1968. The games involved over 1,000 people with intellectual disabilities from 26 U.S. states and Canada competing in track and field and swimming. In 1970, the second international games occurred, when Special Olympics returned to Soldier Field.
Rugby union
On November 1, 2014, the stadium hosted its first international rugby union test match between the United States Eagles and New Zealand All Blacks as part of the 2014 end-of-year rugby union tests. Over half of the 61,500 tickets were sold within two days. The All Blacks beat the Eagles 74–6. The stadium hosted its second international rugby union match on September 5, 2015, with the United States hosting Australia as part of the 2015 Rugby World Cup warm-up matches shortly before both teams were due to travel to England for the 2015 Rugby World Cup. The Eagles were defeated 47–10. On November 5, 2016, Ireland beat New Zealand 40–29 at Soldier Field as part of the 2016 end-of-year rugby union internationals – the very first time Ireland had beaten the All Blacks in a test match in 111 years of play.
Concerts
Other events
June 21–23, 1926: the 28th International Eucharistic Congress held three days of outdoor day and evening events.
September 22, 1927: The Long Count Fight, the second heavyweight championship bout between Jack Dempsey and Gene Tunney, was held at Soldier Field.
June 24, 1932: a war show celebrating the bicentennial of George Washington's birth featured Amelia Earhart.
May 27, 1933: Soldier Field held the opening ceremonies of the Century of Progress World's Fair. Postmaster General and DNC-Chairman James Farley facilitated the opening ceremony.
October 28, 1944: U.S. President Franklin D. Roosevelt made an appearance at Soldier Field, which was the only Midwestern speaking appearance he made in his last re-election campaign. This appearance was attended by over 150,000 (with at least as many people attempting to attend who were unable to gain admission).
April 25, 1951: Douglas MacArthur, US General during World War II, addressed a crowd of 50,000 at Soldier Field in his first visit to the United States in 14 years.
June 21, 1964: the Chicago Freedom Movement, led by Martin Luther King Jr., held a rally here. As many as 75,000 came to hear Reverend King, Reverend Theodore Hesburgh (president of the University of Notre Dame), Archbishop Arthur M. Brazier, and Minister Edgar Chandler, among others.
July 10, 1966: the Chicago Freedom Movement held a second rally here. As many as 60,000 people came to hear Dr. King, as well as Mahalia Jackson, Stevie Wonder and Peter, Paul and Mary.
1974: The Chicago Fire of the World Football League (WFL) played here before folding in 1975.
October 13, 1983: David D. Meilahn made the first-ever commercial cell phone call on a Motorola DynaTAC from his Mercedes-Benz 380SL at Soldier Field. This is considered a major turning point in communications. The call was to Bob Barnett, the former president of Ameritech Mobile Communications, who then placed a call on a DynaTAC from a Chrysler convertible to the grandson of Alexander Graham Bell, who was in Germany.
The stadium was listed on the National Register of Historic Places beginning in 1984. Its National Historic Landmark status was removed in 2006.
In the summer of 2006, the stadium hosted the opening ceremony of the Gay Games.
In 2012, United States President Barack Obama held the 2012 Chicago summit, a summit of the North Atlantic Treaty Organization (NATO), at McCormick Place and Soldier Field.
When the field and nearby Shedd Aquarium had to close to visitors due to the COVID-19 pandemic, Soldier Field became the exercise grounds for the aquarium's penguins.
In popular culture
In the Marvel Comics event Siege, Soldier Field is inadvertently destroyed mid-game by Thor's friend Volstagg when he is tricked into fighting the U-Foes through Loki and Norman Osborn's manipulations of events. The stadium is later seen being rebuilt by the heroes after Steve Rogers is appointed head of U.S. Security, following the aforementioned event.
The 1977 documentary film Powers of Ten focuses on two people having a picnic on the east side of Soldier Field.
The stadium appears in the 2006 Clint Eastwood–directed movie Flags of Our Fathers, when the survivors of the Iwo Jima flag-raising reenact it for a patriotic rally.
The opening match of the 1994 World Cup at Soldier Field was one of the five events covered in the ESPN 30 for 30 documentary June 17, 1994.
Soldier Field features (much changed) in August 4017 AD in the short story "From the Highlands" in David Weber's anthology collection Changer of Worlds. It appears to have gone through multiple renovations, rebuilds, and even periods of being built over, until nothing but the open space of the original remained.
In the 13th episode of Chicago Fire's fourth season, Soldier Field is featured in one of the firehouse's calls, a terrorist hoax. The stadium appears again in the 21st episode of the fifth season as the site of a high-angle rescue call. It is featured again in the eighth season, as members of Firehouse 51 respond to help victims of a deadly infection, and it is also featured and referenced in the fifteenth episode of season 9 as the preferred location for a medal ceremony for firefighter Randy McHolland (Mouch).
In both the book and TV series, Daisy Jones & the Six, the eponymous group plays their final concert at Soldier Field on July 11, 1977.
Gallery
See also
List of events at Soldier Field
Lists of stadiums
Notes
References
Further reading
External links
Central Chicago
Sports venues in Chicago
American football venues in Chicago
Athletics (track and field) venues in Chicago
Boxing venues in Chicago
Buildings and structures on the National Register of Historic Places in Chicago
Chicago Bears stadiums
Chicago Blitz stadiums
Chicago Cardinals stadiums
Chicago Circle Chikas football
Chicago Fire FC
Defunct athletics (track and field) venues in the United States
DePaul Blue Demons football
1999 FIFA Women's World Cup stadiums
Major League Soccer stadiums
Former National Historic Landmarks of the United States
Ice hockey venues in Chicago
Leadership in Energy and Environmental Design certified buildings
Motorsport venues in Illinois
NASCAR tracks
National Football League venues
North American Soccer League (1968–1984) stadiums
Notre Dame Fighting Irish football venues
Pan American Games opening ceremony stadiums
Pan American Games athletics venues
Projects by Holabird & Root
Rebuilt buildings and structures in Illinois
Rugby union stadiums in Chicago
Soccer venues in Chicago
Softball venues in Chicago
Sports venues completed in 1924
Sports venues on the National Register of Historic Places in Illinois
Tennis venues in Chicago
Tourist attractions in Chicago
United States Football League venues
World Football League venues
1924 establishments in Illinois
Sports venues in Chicagoland
Old-growth forest
https://en.wikipedia.org/wiki/Old-growth%20forest

An old-growth forest or primary forest is a forest that has developed over a long period of time without disturbance. Due to this, old-growth forests exhibit unique ecological features. The Food and Agriculture Organization of the United Nations defines primary forests as naturally regenerated forests of native tree species where there are no clearly visible indications of human activity and the ecological processes are not significantly disturbed. One-third (34 percent) of the world's forests are primary forests. Old-growth features include diverse tree-related structures that provide diverse wildlife habitats, increasing the biodiversity of the forested ecosystem. Virgin or first-growth forests are old-growth forests that have never been logged. The concept of diverse tree structure includes multi-layered canopies and canopy gaps, greatly varying tree heights and diameters, and diverse tree species and classes and sizes of woody debris. As of 2020, the world had 1.11 billion hectares of primary forest remaining. Combined, three countries (Brazil, Canada, and Russia) host more than half (61 percent) of the world's primary forest. The area of primary forest has decreased by 81 million hectares since 1990, but the rate of loss more than halved in 2010–2020 compared with the previous decade.
Old-growth forests are valuable for economic reasons and for the ecosystem services they provide. This can be a point of contention when some in the logging industry desire to harvest valuable timber from the forests, destroying the forests in the process, to generate short-term profits, while environmentalists seek to preserve the forests in their pristine state for benefits such as water purification, flood control, weather stability, maintenance of biodiversity, and nutrient cycling. Moreover, old-growth forests are more efficient at sequestering carbon than newly planted forests and fast-growing timber plantations, thus preserving the forests is vital to climate change mitigation.
Characteristics
Old-growth forests tend to have large trees and standing dead trees, multilayered canopies with gaps that result from the deaths of individual trees, and coarse woody debris on the forest floor. The trees of old-growth forests develop distinctive attributes not seen in younger trees, such as more complex structures and deeply fissured bark that can harbor rare lichens and mosses.
A forest regenerated after a severe disturbance, such as wildfire, insect infestation, or harvesting, is often called second-growth or 'regeneration' until enough time passes for the effects of the disturbance to be no longer evident. Depending on the forest, this may take from a century to several millennia. Hardwood forests of the eastern United States can develop old-growth characteristics in 150–500 years. In British Columbia, Canada, old growth is defined as 120 to 140 years of age in the interior of the province where fire is a frequent and natural occurrence. In British Columbia's coastal rainforests, old growth is defined as trees more than 250 years, with some trees reaching more than 1,000 years of age. In Australia, eucalypt trees rarely exceed 350 years of age due to frequent fire disturbance.
Forest types have very different development patterns, natural disturbances and appearances. A Douglas-fir stand may grow for centuries without disturbance while an old-growth ponderosa pine forest requires frequent surface fires to reduce the shade-tolerant species and regenerate the canopy species. In the boreal forest of Canada, catastrophic disturbances like wildfires minimize opportunities for major accumulations of dead and downed woody material and other structural legacies associated with old growth conditions. Typical characteristics of old-growth forest include the presence of older trees, minimal signs of human disturbance, mixed-age stands, presence of canopy openings due to tree falls, pit-and-mound topography, down wood in various stages of decay, standing snags (dead trees), multilayered canopies, intact soils, a healthy fungal ecosystem, and presence of indicator species.
Biodiversity
Old-growth forests are often biologically diverse, and home to many rare species, threatened species, and endangered species of plants and animals, such as the northern spotted owl, marbled murrelet and fisher, making them ecologically significant. Levels of biodiversity may be higher or lower in old-growth forests compared to that in second-growth forests, depending on specific circumstances, environmental variables, and geographic variables. Logging in old-growth forests is a contentious issue in many parts of the world. Excessive logging reduces biodiversity, affecting not only the old-growth forest itself, but also indigenous species that rely upon old-growth forest habitat.
Mixed age
Some forests in the old-growth stage have a mix of tree ages, due to a distinct regeneration pattern for this stage. New trees regenerate at different times from each other, because each of them has a different spatial location relative to the main canopy, hence each one receives a different amount of light. The mixed age of the forest is an important criterion in ensuring that the forest is a relatively stable ecosystem in the long term. A climax stand that is uniformly aged becomes senescent and degrades within a relatively short time to result in a new cycle of forest succession. Thus, uniformly aged stands are less stable ecosystems. Boreal forests are more uniformly aged, as they are normally subject to frequent stand-replacing wildfires.
Canopy openings
Forest canopy gaps are essential in creating and maintaining mixed-age stands. Also, some herbaceous plants only become established in canopy openings, but persist beneath an understory. Openings are a result of tree death due to small impact disturbances such as wind, low-intensity fires, and tree diseases.
Old-growth forests are unique, usually having multiple horizontal layers of vegetation representing a variety of tree species, age classes, and sizes, as well as "pit and mound" soil shape with well-established fungal nets. As old-growth forest is structurally diverse, it provides higher-diversity habitat than forests in other stages. Thus, sometimes higher biological diversity can be sustained in old-growth forests, or at least a biodiversity that is different from other forest stages.
Topography
The characteristic topography of much old-growth forest consists of pits and mounds. Mounds are caused by decaying fallen trees, and pits (tree throws) by the roots pulled out of the ground when trees fall due to natural causes, including being pushed over by animals. Pits expose humus-poor, mineral-rich soil and often collect moisture and fallen leaves, forming a thick organic layer that is able to nurture certain types of organisms. Mounds provide a place free of leaf inundation and saturation, where other types of organisms thrive.
Standing snags
Standing snags provide food sources and habitat for many types of organisms. In particular, many species of dead-wood predators, such as woodpeckers, must have standing snags available for feeding. In North America, the spotted owl is well known for needing standing snags for nesting habitat.
Decaying ground layer
Fallen timber, or coarse woody debris, contributes carbon-rich organic matter directly to the soil, providing a substrate for mosses, fungi, and seedlings, and creating microhabitats by creating relief on the forest floor. In some ecosystems such as the temperate rain forest of the North American Pacific coast, fallen timber may become nurse logs, providing a substrate for seedling trees.
Soil
Intact soils harbor many life forms that rely on them. Intact soils generally have very well-defined horizons, or soil profiles. Different organisms may need certain well-defined soil horizons to live, while many trees need well-structured soils free of disturbance to thrive. Some herbaceous plants in northern hardwood forests must have thick duff layers (which are part of the soil profile). Fungal ecosystems are essential for efficient in-situ recycling of nutrients back into the entire ecosystem.
Definitions
Ecological definitions
Stand age definition
Stand age can also be used to categorize a forest as old growth. For any given geographical area, the average time from disturbance until a forest reaches the old-growth stage can be determined. This method is useful because it allows quick and objective determination of forest stage. However, this definition does not provide an explanation of forest function; it simply gives a useful number to measure. As a result, some forests may be excluded from the old-growth category even though they have old-growth attributes, just because they are too young, while older forests lacking some old-growth attributes may be categorized as old growth just because of their age. The idea of using age is also problematic because human activities influence forests in varied ways: for example, old growth returns sooner after 30% of a stand's trees are logged than after 80% are removed. On the other hand, depending on the species logged, the forest that regrows after a 30% harvest may contain proportionately fewer hardwoods than one logged at 80%, in which light competition from less important tree species does not inhibit the regrowth of vital hardwoods.
Forest dynamics definition
From a forest dynamics perspective, old-growth forest is the stage that follows the understory reinitiation stage. Those stages are:
Stand-replacing: Disturbance hits the forest and kills most of the living trees.
Stand-initiation: A population of new trees becomes established.
Stem-exclusion: Trees grow higher and enlarge their canopy, thus competing for the light with neighbors; light competition mortality kills slow-growing trees and reduces forest density, which allows surviving trees to increase in size. Eventually, the canopies of neighboring trees touch each other and drastically lower the amount of light that reaches lower layers. Due to that, the understory dies and only very shade-tolerant species survive.
Understory reinitiation: Trees die from low-level mortality, such as windthrow and diseases. Individual canopy gaps start to appear and more light can reach the forest floor. Hence, shade-tolerant species can establish in the understory.
Old-growth: Main canopy trees become older and more of them die, creating even more gaps. Since the gaps appear at different times, the understory trees are at different growth stages. Furthermore, the amount of light that reaches each understory tree depends on its position relative to the gap, so each understory tree grows at a different rate. The differences in establishment timing and in growth rate create a population of understory trees that is variable in size. Eventually, some understory trees grow to become as tall as the main canopy trees, thereby filling the gap. This perpetuation process is typical of the old-growth stage. It does not mean, however, that the forest will be old-growth forever; generally, three futures are possible for an old-growth stand: 1) it is hit by a disturbance and most of the trees die; 2) conditions become unfavorable for new trees to regenerate, in which case the old trees die and smaller plants create woodland; or 3) the regenerating understory trees are of different species from the main canopy trees, in which case the forest switches back to the stem-exclusion stage, but with shade-tolerant species. Alternatively, a forest in the old-growth stage can remain stable for centuries, although the length of this stage depends on the forest's tree composition and the climate of the area; for example, frequent natural fires do not allow boreal forests to be as old as the coastal forests of western North America. These stage transitions are sketched schematically at the end of this subsection.
Importantly, when a stand switches from one tree community to another, it does not necessarily pass through an old-growth stage in between. Some tree species have a relatively open canopy, which allows more shade-tolerant tree species to establish below even before the understory reinitiation stage. The shade-tolerant trees eventually outcompete the main canopy trees during the stem-exclusion stage. Therefore, the dominant tree species will change, but the forest will still be in the stem-exclusion stage until the shade-tolerant species reach the old-growth stage.
Tree species succession may change tree species' composition once the old-growth stage has been achieved. For example, an old boreal forest may contain some large aspen trees, which may die and be replaced by smaller balsam fir or black spruce. Consequently, the forest will switch back to understory reinitiation stage. Using the stand dynamics definition, old-growth can be easily evaluated using structural attributes. However, in some forest ecosystems, this can lead to decisions regarding the preservation of unique stands or attributes that will disappear over the next few decades because of natural succession processes. Consequently, using stand dynamics to define old-growth forests is more accurate in forests where the species that constitute old-growth have long lifespans and succession is slow.
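Purely as an illustration of the stage transitions described above, the following minimal Python sketch models stand development as a toy state machine. The stage names follow the list above, but the transition rules and all identifiers are hypothetical simplifications, not an established forestry model or library.

# Illustrative sketch only: a toy state machine for the stand-development
# stages described above. All names and transition rules are hypothetical
# simplifications of the succession narrative in the text.

STAGES = [
    "stand-replacing",          # disturbance kills most living trees
    "stand-initiation",         # new trees become established
    "stem-exclusion",           # light competition thins the stand
    "understory-reinitiation",  # gaps appear; shade-tolerant recruits
    "old-growth",               # gap-phase dynamics perpetuate the stand
]

def next_stage(stage: str, disturbed: bool) -> str:
    """Advance one step; a stand-replacing disturbance resets succession."""
    if disturbed:
        return "stand-replacing"
    if stage == "old-growth":
        # Old growth can persist for centuries; it may also cycle back to
        # stem-exclusion if the regenerating understory differs from the
        # canopy species (see text), a case this toy model ignores.
        return "old-growth"
    return STAGES[STAGES.index(stage) + 1]

# Example: a stand reaching old growth, then reset by a wildfire.
stage = "stand-initiation"
for disturbed in (False, False, False, True):
    stage = next_stage(stage, disturbed)
    print(stage)
# stem-exclusion, understory-reinitiation, old-growth, stand-replacing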
Social and cultural definitions
Common cultural definitions and common denominators regarding what comprises old-growth forest, and the variables that define, constitute and embody old-growth forests include:
The forest habitat possesses relatively mature, old trees;
The tree species present have long continuity on the same site;
The forest itself is a remnant natural area that has not been subjected to significant disturbance by mankind, altering the appearance of the landscape and its ecosystems, has not been subjected to logging (or other types of development such as road networks or housing), and has inherently progressed per natural tendencies.
Additionally, in mountainous, temperate landscapes (such as Western North America), and specifically in areas of high-quality soil and a moist, relatively mild climate, some old-growth trees have attained notable height and girth (DBH: diameter at breast height), accompanied by notable biodiversity in terms of the species supported. Therefore, for most people, the physical size of the trees is the most recognized hallmark of old-growth forests, even though the ecologically productive areas that support such large trees often comprise only a very small portion of the total area that has been mapped as old-growth forest. (In high-altitude, harsh climates, trees grow very slowly and thus remain at a small size. Such trees also qualify as old growth in terms of how they are mapped, but are rarely recognized by the general public as such.)
The debate over old-growth definitions has been inextricably linked with a complex range of social perceptions about wilderness preservation, biodiversity, aesthetics, and spirituality, as well as economic or industrial values.
Economic definitions
In logging terms, old-growth stands are past the economic optimum for harvesting, usually reached between 80 and 150 years of age, depending on the species. Old-growth forests were often given harvesting priority because they had the most commercially valuable timber, they were considered to be at greater risk of deterioration through root rot or insect infestation, and they occupied land that could be used for more productive second-growth stands. In some regions, old growth is not the most commercially viable timber; in British Columbia, Canada, harvesting in the coastal region is moving to younger second-growth stands.
Other definitions
A 2001 scientific symposium in Canada found that defining old growth in a scientifically meaningful yet policy-relevant manner presents some basic difficulties, especially if a simple, unambiguous, and rigorous scientific definition is sought. Symposium participants identified some attributes of late-successional, temperate-zone, old-growth forest types that could be considered in developing an index of "old-growthness" and for defining old-growth forests (a toy scoring sketch follows the list):
Structural features:
Uneven or multi-aged stand structure, or several identifiable age cohorts
Average age of dominant species approaching half the maximum longevity for species (about 150+ years for most shade-tolerant trees)
Some old trees at close to their maximum longevity (ages of 300+ years)
Presence of standing dead and dying trees in various stages of decay
Fallen, coarse woody debris
Natural regeneration of dominant tree species within canopy gaps or on decaying logs
Compositional features:
Long-lived, shade-tolerant tree species associations (e.g., sugar maple, American beech, yellow birch, red spruce, eastern hemlock, white pine)
Process features:
Characterized by small-scale disturbances creating gaps in the forest canopy
A long natural rotation for catastrophic or stand-replacing disturbance (e.g., a period greater than the maximum longevity of the dominant tree species)
Minimal evidence of human disturbance
Final stages of stand development before a relatively steady state is reached
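As a toy illustration only, the attributes above could be combined into a simple score. The following Python sketch does so with entirely hypothetical weights and identifiers; it is not an established or validated index of old-growthness, merely a sketch of how such an index might be structured.

# Toy illustration only: scoring a stand against the symposium's
# "old-growthness" attributes. The attribute keys paraphrase the list
# above; the weights and all identifiers are hypothetical.

ATTRIBUTE_WEIGHTS = {
    "multi_aged_structure": 2,           # uneven or multi-aged stand
    "dominants_past_half_longevity": 2,  # dominant species ~150+ years
    "trees_over_300_years": 1,           # some trees near maximum longevity
    "standing_dead_and_dying": 1,        # snags in various stages of decay
    "coarse_woody_debris": 1,            # fallen dead wood
    "gap_regeneration": 1,               # regeneration in canopy gaps
    "shade_tolerant_associations": 1,    # long-lived, shade-tolerant species
    "minimal_human_disturbance": 1,      # little evidence of human activity
}

def old_growthness(stand: dict) -> float:
    """Return a 0..1 score from boolean stand attributes (toy weights)."""
    total = sum(ATTRIBUTE_WEIGHTS.values())
    score = sum(w for key, w in ATTRIBUTE_WEIGHTS.items() if stand.get(key))
    return score / total

# Example: a hypothetical stand with most, but not all, attributes present.
stand = {key: True for key in ATTRIBUTE_WEIGHTS}
stand["trees_over_300_years"] = False
print(f"old-growthness index: {old_growthness(stand):.2f}")  # 0.90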
Importance
Old-growth forests often contain rich communities of plants and animals within the habitat due to the long period of forest stability. These varied and sometimes rare species may depend on the unique environmental conditions created by these forests.
Old-growth forests serve as a reservoir for species, which cannot thrive or easily regenerate in younger forests, so they can be used as a baseline for research.
Plant species that are native to old-growth forests may someday prove to be invaluable towards curing various human ailments, as has been realized in numerous plants in tropical rainforests.
Old-growth forests also store large amounts of carbon above and below the ground (either as humus, or in wet soils as peat). They collectively represent a very significant store of carbon. Destruction of these forests releases this carbon as greenhouse gases, and may increase the risk of global climate change. Although old-growth forests serve as a global carbon dioxide sink, they are not protected by international treaties, because it is generally thought that aging forests cease to accumulate carbon. However, in forests between 15 and 800 years of age, net ecosystem productivity (the net carbon balance of the forest including soils) is usually positive; old-growth forests accumulate carbon for centuries and contain large quantities of it.
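The net carbon balance referred to here is conventionally expressed as net ecosystem productivity (NEP), whose standard definition is:

\[
\mathrm{NEP} \;=\; \mathrm{GPP} \;-\; R_{\mathrm{eco}},
\]

where GPP is gross primary productivity (total photosynthetic carbon uptake) and R_eco is total ecosystem respiration, autotrophic plus heterotrophic. A forest accumulates carbon in any period in which NEP > 0, which is the sense in which forests between 15 and 800 years of age remain carbon sinks.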
Ecosystem services
Old-growth forests provide ecosystem services that may be far more important to society than their use as a source of raw materials. These services include making breathable air, making pure water, carbon storage, regeneration of nutrients, maintenance of soils, pest control by insectivorous bats and insects, micro- and macro-climate control, and the storage of a wide variety of genes.
Climatic impacts
The effects of old-growth forests in relation to global warming have been addressed in various studies and journals.
The Intergovernmental Panel on Climate Change said in its 2007 report: "In the long term, a sustainable forest management strategy aimed at maintaining or increasing forest carbon stocks, while producing an annual sustained yield of timber, fibre, or energy from the forest, will generate the largest sustained mitigation benefit."
Old-growth forests are often perceived to be in equilibrium or in a state of decay. However, evidence from analysis of carbon stored above ground and in the soil has shown old-growth forests are more productive at storing carbon than younger forests. Forest harvesting has little or no effect on the amount of carbon stored in the soil, but other research suggests older forests that have trees of many ages, multiple layers, and little disturbance have the highest capacities for carbon storage. As trees grow, they remove carbon from the atmosphere, and protecting these pools of carbon prevents emissions into the atmosphere. Proponents of harvesting the forest argue the carbon stored in wood is available for use as biomass energy (displacing fossil fuel use), although using biomass as a fuel produces air pollution in the form of carbon monoxide, nitrogen oxides, volatile organic compounds, particulates, and other pollutants, in some cases at levels above those from traditional fuel sources such as coal or natural gas.
Each forest has a different potential to store carbon. For example, this potential is particularly high in the Pacific Northwest where forests are relatively productive, trees live a long time, decomposition is relatively slow, and fires are infrequent. The differences between forests must, therefore, be taken into consideration when determining how they should be managed to store carbon. A 2019 study projected that old-growth forests in Southeast Asia, the majority of which are in Indonesia and Malaysia, are able to sequester carbon or be a net emitter of greenhouse gases based on deforestation scenarios over the subsequent decades.
Old-growth forests have the potential to impact climate change, but climate change is also impacting old-growth forests. As the effects of global warming grow more substantial, the ability of old-growth forests to sequester carbon is affected. Climate change showed an impact on the mortality of some dominant tree species, as observed in the Korean pine. Climate change also showed an effect on the composition of species when forests were surveyed over a 10- and 20-year period, which may disrupt the overall productivity of the forest.
Logging
According to the World Resources Institute, as of January 2009, only 21% of the original old-growth forests that once existed on Earth are remaining. An estimated one-half of Western Europe's forests were cleared before the Middle Ages, and 90% of the old-growth forests that existed in the contiguous United States in the 1600s have been cleared.
The large trees in old-growth forests are economically valuable, and have been subject to aggressive logging throughout the world. This has led to many conflicts between logging companies and environmental groups. From certain forestry perspectives, fully maintaining an old-growth forest is seen as extremely economically unproductive, since timber can only be collected from fallen trees, and as potentially damaging to nearby managed groves by creating environments conducive to root rot. From those perspectives, it may be more productive to cut the old growth down and replace the forest with a younger one.
The island of Tasmania, just off the southeast coast of Australia, has the largest amount of temperate old-growth rainforest reserves in Australia with around 1,239,000 hectares in total. While the local Regional Forest Agreement (RFA) was originally designed to protect much of this natural wealth, many of the RFA old-growth forests protected in Tasmania consist of trees of little use to the timber industry. RFA old-growth and high conservation value forests that contain species highly desirable to the forestry industry have been poorly preserved. Only 22% of Tasmania's original tall-eucalypt forests managed by Forestry Tasmania have been reserved. Ten thousand hectares of tall-eucalypt RFA old-growth forest have been lost since 1996, predominantly as a result of industrial logging operations. In 2006, about 61,000 hectares of tall-eucalypt RFA old-growth forests remained unprotected. Recent logging attempts in the Upper Florentine Valley have sparked a series of protests and media attention over the arrests that have taken place in this area. Additionally, Gunns Limited, the primary forestry contractor in Tasmania, has been under recent criticism by political and environmental groups over its practice of woodchipping timber harvested from old-growth forests.
Management
Increased understanding of forest dynamics in the late 20th century led the scientific community to identify a need to inventory, understand, manage, and conserve representative examples of old-growth forests with their associated characteristics and values. Literature around old growth and its management is inconclusive about the best way to characterize the true essence of an old-growth stand.
A better understanding of natural systems has resulted in new ideas about forest management, such as managed natural disturbances, which should be designed to achieve the landscape patterns and habitat conditions normally maintained in nature. This coarse filter approach to biodiversity conservation recognizes ecological processes and provides for a dynamic distribution of old growth across the landscape.
All seral stages (young, medium, and old) support forest biodiversity. Plants and animals rely on different forest ecosystem stages to meet their habitat needs.
In Australia, the Regional Forest Agreement (RFA) attempted to prevent the clearfelling of defined "old-growth forests". This led to struggles over what constitutes "old growth". For example, in Western Australia, the timber industry tried to limit the area of old growth in the karri forests of the Southern Forests Region; this led to the creation of the Western Australian Forests Alliance, the splitting of the Liberal Government of Western Australia and the election of the Gallop Labor Government. Old-growth forests in this region have now been placed inside national parks. A small proportion of old-growth forests also exist in South-West Australia and are protected by federal laws from logging, which has not occurred there for more than 20 years.
In British Columbia, Canada, old-growth forests must be maintained in each of the province's ecological units to meet biodiversity needs.
In the United States, since 2001 around a quarter of federal forests have been protected from logging. In December 2023, the Biden administration introduced a rule under which logging is strongly limited in old-growth forests but permitted in "mature forests", representing a compromise between the logging industry and environmental activists.
Locations of remaining tracts
In 2006, Greenpeace identified that the world's remaining intact forest landscapes are distributed among the continents as follows:
35% in South America: The Amazon rainforest is mainly located in Brazil, which clears a larger area of forest annually than any other country in the world.
28% in North America, where large areas of ancient forest are harvested every year. Many of the fragmented forests of southern Canada and the United States lack adequate animal travel corridors and functioning ecosystems for large mammals. Most of the remaining old-growth forests in the contiguous United States and Alaska are on public land.
19% in northern Asia, home to the largest boreal forest in the world
8% in Africa, which has lost most of its intact forest landscapes in the last 30 years. The timber industry and local governments are responsible for destroying huge areas of intact forest landscapes and continue to be the single largest threat to these areas.
7% in South Asia Pacific, where the Paradise Forests are being destroyed faster than any other forest on Earth. Much of the region's large, intact forest landscape has already been cut down: 72% in Indonesia and 60% in Papua New Guinea.
Less than 3% in Europe, where large areas of intact forest landscape are cleared every year and the last areas of the region's intact forest landscapes in European Russia are shrinking rapidly. In the United Kingdom, they are known as ancient woodlands.
See also
Notes
References
Sources
Further reading
Provincial Old Growth regulations of British Columbia, Canada
Old-Growth Forest Definitions from U.S. Regional Ecosystem Office
Collection of Google map links of clear cuts in or around old growth
Managing for Biodiversity in Young Forests – U.S. Geological Survey Biological Science Report (pdf)
The State of British Columbia’s Forests Third Edition
BC Journal of Ecosystems Old growth definitions and management: A literature review
Natural Resources Canada Old-growth boreal forests: unraveling the mysteries
External links
Our disappearing forests
Rainforest Action Network
Ancient Forest Exploration & Research
Natural Resources Canada 2003
Old Growth Forest Definitions for Ontario
Submissions to XII World Forest Congress 2003
Minnesota Department of Natural Resources
Archangel Ancient Tree Archive | Old Growth Trees
Forest conservation
Forestry and the environment
Sustainable forest management
Types of formally designated forests | Old-growth forest | Biology | 5,425 |
60,573,633 | https://en.wikipedia.org/wiki/Stem%20Cells%20and%20Development | Stem Cells and Development is a biweekly peer-reviewed scientific journal covering cell biology, with a specific focus on biomedical applications of stem cells. It was established in 1992 as the Journal of Hematotherapy, and was renamed the Journal of Hematotherapy & Stem Cell Research in 1999. The journal obtained its current name in 2004. It is published by Mary Ann Liebert, Inc. and the editor-in-chief is Graham C. Parker (Wayne State University School of Medicine). According to the Journal Citation Reports, the journal has a 2018 impact factor of 3.147.
References
External links
Academic journals established in 1992
Biweekly journals
Stem cell research
Molecular and cellular biology journals
Mary Ann Liebert academic journals
English-language journals | Stem Cells and Development | Chemistry,Biology | 150 |
23,038,858 | https://en.wikipedia.org/wiki/Arithmetic%20variety | In mathematics, an arithmetic variety is the quotient space of a Hermitian symmetric space by an arithmetic subgroup of the associated algebraic Lie group.
Kazhdan's theorem
Kazhdan's theorem says the following: if X is an arithmetic variety, then its conjugate under any automorphism of the field of complex numbers is again an arithmetic variety.
References
Further reading
See also
Arithmetic Chow groups
Arithmetic of abelian varieties
Abelian variety
Arithmetic geometry | Arithmetic variety | Mathematics | 65 |
46,665,459 | https://en.wikipedia.org/wiki/Gerard%20J.%20Milburn | Gerard James Milburn (born 1958) is an Australian theoretical quantum physicist notable for his work on quantum feedback control, quantum measurements, quantum information, open quantum systems, and Linear optical quantum computing (aka the Knill, Laflamme and Milburn scheme).
Education
Milburn received his BSc (Hons) in Physics from Griffith University in 1980. He completed his PhD in physics under Daniel Frank Walls at the University of Waikato in 1982, with a thesis entitled Squeezed States and Quantum Nondemolition Measurements.
Career and Research
Following his PhD, Milburn did postdoctoral research in the Department of Mathematics at Imperial College London in 1983. Later, in 1984, he was awarded a Royal Society Fellowship to work in the Quantum Optics group of Peter Knight, at Imperial.
In 1985 he returned to Australia and was appointed lecturer at The Australian National University. In 1988 Milburn took up an appointment as Reader in Theoretical Physics at The University of Queensland. In 1994 he was appointed as Professor of Physics and in 1996 became Head of Department of Physics at The University of Queensland. From 2000 to 2010 he was Deputy Director of the Australian Research Council Centre of Excellence for Quantum Computer Technology. From 2003 to 2013 he was an Australian Research Council Federation Fellow at the University of Queensland.
He was the Chair of the Scientific Advisory Committee of the Institute for Quantum Computing and served on the scientific advisory committee for the Perimeter Institute for Theoretical Physics from 2007 to 2010.
From 2011 to 2017 he was the Director and Chief Investigator of the Australian Research Council Centre of Excellence for Engineered Quantum Systems.
Honors and awards
His awards include the Moyal Medal for Mathematical Physics (awarded 2001) and Boas medal, (awarded in 2003). He is a fellow of the Australian Academy of Science (1999), a Fellow of the American Physical Society (2005), and elected a Fellow of the Royal Society in 2017.
References
1958 births
Living people
Scientists from Brisbane
Australian physicists
Academic staff of the University of Queensland
University of Auckland alumni
University of Waikato alumni
Quantum physicists
Fellows of the Australian Academy of Science
Fellows of the American Physical Society
Fellows of the Royal Society | Gerard J. Milburn | Physics | 426 |
22,519,222 | https://en.wikipedia.org/wiki/Tantalum%20boride | Tantalum borides are compounds of tantalum and boron most remarkable for their extreme hardness.
Properties
The Vickers hardness of TaB and TaB2 films and crystals is ~30 GPa. Those materials are stable to oxidation below 700 °C and to acid corrosion.
TaB2 has the same hexagonal structure as most diborides (AlB2, MgB2, etc.). These borides crystallize in the following space groups: TaB (orthorhombic, thallium(I) iodide type, Cmcm), Ta5B6 (Cmmm), Ta3B4 (Immm), and TaB2 (hexagonal, aluminium diboride type, P6/mmm).
Preparation
Single crystals of TaB, Ta5B6, Ta3B4 or TaB2 (about 1 cm diameter, 6 cm length) can be produced by the floating zone method.
Tantalum boride films can be deposited from a gas mixture of TaCl5-BCl3-H2-Ar in the temperature range 540–800 °C. TaB2 (single-phase) is deposited at a source gas flow ratio (BCl3/TaCl5) of six and a temperature above 600 °C. TaB (single-phase) is deposited at BCl3/TaCl5 = 2–4 and T = 600–700 °C.
Nanocrystals of TaB2 were successfully synthesized by the reduction of Ta2O5 with NaBH4 using a molar ratio M:B of 1:4 at 700–900 °C for 30 min under argon flow.
Ta2O5 + 6.5 NaBH4 → 2 TaB2 + 4 Na(g,l) + 2.5 NaBO2 + 13 H2(g)
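As a quick check, the equation balances element by element; a minimal tally in Python (the per-species element counts below are transcribed by hand from the equation above):

# Left side: Ta2O5 + 6.5 NaBH4
left = {"Ta": 2, "O": 5, "Na": 6.5, "B": 6.5, "H": 6.5 * 4}
# Right side: 2 TaB2 + 4 Na + 2.5 NaBO2 + 13 H2
right = {"Ta": 2, "B": 2 * 2 + 2.5, "Na": 4 + 2.5, "O": 2.5 * 2, "H": 13 * 2}
for element in left:
    assert abs(left[element] - right[element]) < 1e-9  # each element balances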
References
Tantalum compounds
Borides
Superhard materials | Tantalum boride | Physics | 383 |
793,058 | https://en.wikipedia.org/wiki/Frank%20Schlesinger | Frank Schlesinger (May 11, 1871 – July 10, 1943) was an American astronomer. His work concentrated on using photographic plates rather than direct visual studies for astronomical research.
Biography
Schlesinger was born in New York City and attended public schools there. He graduated from the College of the City of New York in 1890. He then worked as a surveyor, becoming a special student in astronomy at Columbia in 1894. In 1896, he received a fellowship which enabled him to study full-time, and he received a PhD in 1898. After his graduation, he spent the summer at Yerkes Observatory as a volunteer assisting director George Ellery Hale.
He was an observer in charge of the International Latitude Observatory, Ukiah, California, in 1898. From 1899 to 1903, he was an astronomer at Yerkes, where he pioneered the use of photographic methods to determine stellar parallaxes. He was director of Allegheny Observatory from 1903 to 1920 and Yale University Observatory from 1920 to 1941.
At Yale he worked extensively with Ida Barney. He compiled and published the Yale Bright Star Catalogue. The first publication of the results of this work started in 1925 (Transactions of the Yale University Observatory, v. 4) and the work concluded in the 1980s. He made major contributions to astrometry. He was elected to the American Philosophical Society (1912), the National Academy of Sciences (1916) and the American Academy of Arts and Sciences and served as president of the American Astronomical Society (1919–1922), and the International Astronomical Union (1932–1935).
Asked how to say his name, he told The Literary Digest "The name is so difficult for those who do not speak German that I am usually called sles'in-jer, to rhyme with messenger. It is, of course, of German origin and means 'a native of Schlesien' or Silesia. In that language the pronunciation is shlayzinger, to rhyme with singer."
Awards and honors
Valz Prize of the French Academy of Sciences (1926)
Gold Medal of the Royal Astronomical Society (1927)
Bruce Medal (1929)
The crater Schlesinger on the Moon is named after him, as is the asteroid 1770 Schlesinger.
Family
He married Eva Hirsch in 1900 while in Ukiah. They had one child, Frank Wagner Schlesinger, who later directed planetariums in Philadelphia and Chicago. His wife died in 1928, and in 1929 he married Mrs. Katherine Bell (Rawling) Wilcox.
Published works
Notes
References
Further reading
External links
1871 births
1943 deaths
American astronomers
Recipients of the Gold Medal of the Royal Astronomical Society
Columbia University alumni
Members of the United States National Academy of Sciences
Presidents of the International Astronomical Union | Frank Schlesinger | Astronomy | 549 |
61,248,076 | https://en.wikipedia.org/wiki/C55H70MgN4O6 | {{DISPLAYTITLE:C55H70MgN4O6}}
The molecular formula C55H70MgN4O6 (molar mass: 907.49 g/mol, exact mass: 906.5146 u) may refer to:
Chlorophyll b
Chlorophyll f
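The quoted molar mass can be reproduced from the standard (rounded) IUPAC atomic weights; a minimal sketch in Python:

# Molar mass of C55H70MgN4O6 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "Mg": 24.305, "N": 14.007, "O": 15.999}
formula = {"C": 55, "H": 70, "Mg": 1, "N": 4, "O": 6}
molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 907.49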
Molecular formulas | C55H70MgN4O6 | Physics,Chemistry | 75 |
2,935,251 | https://en.wikipedia.org/wiki/Energy%20industry | The energy industry is the totality of all of the industries involved in the production and sale of energy, including fuel extraction, manufacturing, refining and distribution. Modern society consumes large amounts of fuel, and the energy industry is a crucial part of the infrastructure and maintenance of society in almost all countries.
In particular, the energy industry comprises:
the fossil fuel industries, which include petroleum industries (oil companies, petroleum refiners, fuel transport and end-user sales at gas stations), coal industries (extraction and processing), and the natural gas industries (natural gas extraction, and coal gas manufacture, as well as distribution and sales);
the electrical power industry, including electricity generation, electric power distribution and sales;
the nuclear power industry;
the renewable energy industry, comprising alternative energy and sustainable energy companies, including those involved in hydroelectric power, wind power, and solar power generation, and the manufacture, distribution and sale of alternative fuels; and,
traditional energy industry based on the collection and distribution of firewood, the use of which, for cooking and heating, is particularly common in poorer countries.
The increased dependence during the 20th century on carbon-emitting energy sources, such as fossil fuels, and carbon-emitting renewables, such as biomass, means that the energy industry has frequently contributed to pollution and environmental impacts on the economy. Until recently, fossil fuels were the primary source of energy generation in most parts of the world and are a significant contributor to global warming and pollution. Many economies are investing in renewable and sustainable energy to limit global warming and reduce air pollution.
History
The use of energy has been key to the development of human society, helping it to control and adapt to the environment. Managing the use of energy is unavoidable in any functional society. In the industrialized world, the development of energy resources has become essential for agriculture, transportation, waste collection, information technology, and communications, all of which are prerequisites of a developed society. The increasing use of energy since the Industrial Revolution has also brought with it a number of serious problems, some of which, such as global warming, present potentially grave risks to the world.
In some industries, the word energy is used as a synonym for energy resources, which refer to substances like fuels, petroleum products, and electricity in general. This is because a significant portion of the energy contained in these resources can easily be extracted to serve a useful purpose. After a useful process has taken place, the total energy is conserved. Still, the resource itself is not conserved since a process usually transforms the energy into unusable forms (such as unnecessary or excess heat).
Ever since humanity discovered the various energy resources available in nature, it has been inventing devices, known as machines, that make life more comfortable by using energy resources. Thus, although primitive man knew the utility of fire to cook food, the invention of devices like gas burners and microwave ovens provided additional ways to utilize energy. The trend is the same in any other field of social activity, be it the construction of social infrastructure, the manufacturing of fabrics (for example, textiles) for covering, porting, printing, and decorating, air conditioning, the communication of information, or the moving of people and goods (automobiles).
Economics
Production and consumption of energy resources is very important to the global economy. All economic activity requires energy resources, whether to manufacture goods, provide transportation, or run computers and other machines.
Widespread demand for energy may encourage competing energy utilities and the formation of retail energy markets, which can include an "Energy Marketing and Customer Service" (EMACS) sub-sector.
The energy sector accounts for 4.6% of outstanding leveraged loans, compared with 3.1% a decade ago, while energy bonds make up 15.7% of the $1.3 trillion junk bond market, up from 4.3% over the same period.
Management
Since the cost of energy has become a significant factor in the performance of societies' economies, the management of energy resources has become crucial. Energy management involves utilizing the available energy resources more effectively, that is, with minimum incremental costs. Simple management techniques can often save energy expenditures without incorporating fresh technology. Energy management is most often the practice of using energy more efficiently by eliminating energy wastage or balancing justifiable energy demand with appropriate energy supply. The process couples energy awareness with energy conservation.
Classifications
Government
The United Nations developed the International Standard Industrial Classification, which is a list of economic and social classifications. There is no distinct classification for an energy industry, because the classification system is based on activities, products, and expenditures according to purpose.
Countries in North America use the North American Industry Classification System (NAICS). The NAICS sectors #21 and #22 (mining and utilities) might roughly define the energy industry in North America. This classification is used by the U.S. Securities and Exchange Commission.
Financial market
The Global Industry Classification Standard used by Morgan Stanley define the energy industry as comprising companies primarily working with oil, gas, coal and consumable fuels, excluding companies working with certain industrial gases.
Environmental impact
Government encouragement in the form of subsidies and tax incentives for energy-conservation efforts has increasingly fostered the view of conservation as a major function of the energy industry: saving an amount of energy provides economic benefits almost identical to generating that same amount of energy. This is compounded by the fact that the economics of delivering energy tend to be priced for capacity as opposed to average usage. One of the purposes of a smart grid infrastructure is to smooth out demand so that capacity and demand curves align more closely.
Some parts of the energy industry generate considerable pollution, including toxic and greenhouse gases from fuel combustion, nuclear waste from the generation of nuclear power, and oil spillages as a result of petroleum extraction. Government regulations to internalize these externalities form an increasing part of doing business, and the trading of carbon credits and pollution credits on the free market may also result in energy-saving and pollution-control measures becoming even more important to energy providers.
Consumption of energy resources (e.g., turning on a light) requires resources and has an effect on the environment. Many electric power plants burn coal, oil or natural gas in order to generate electricity for energy needs. While burning these fossil fuels produces a readily available and instantaneous supply of electricity, it also generates air pollutants including carbon dioxide (CO2), sulfur dioxide and trioxide (SOx) and nitrogen oxides (NOx). Carbon dioxide is an important greenhouse gas, known to be responsible, along with methane, nitrous oxide, and fluorinated gases, for the rapid increase in global warming since the Industrial Revolution. Global temperature records from the 20th century are significantly higher than temperatures from thousands of years ago, as reconstructed from ice cores in Arctic regions. Burning fossil fuels for electricity generation also releases trace metals such as beryllium, cadmium, chromium, copper, manganese, mercury, nickel, and silver into the environment, which also act as pollutants.
The large-scale use of renewable energy technologies would "greatly mitigate or eliminate a wide range of environmental and human health impacts of energy use". Renewable energy technologies include biofuels, solar heating and cooling, hydroelectric power, solar power, and wind power. Energy conservation and the efficient use of energy would also help.
In addition, it is argued that there is also the potential to develop a more efficient energy sector. This can be done by:
Fuel switching in the power sector from coal to natural gas;
Power plant optimisation and other measures to improve the efficiency of existing CCGT power plants;
Combined heat and power (CHP), from micro-scale residential to large-scale industrial;
Waste heat recovery
Best available technology (BAT) offers supply-side efficiency levels far higher than global averages. The relative benefits of gas compared to coal are influenced by the development of increasingly efficient energy production methods. According to an impact assessment carried out for the European Commission, newly built coal-fired plants have reached efficiency rates of 46–49%, compared with coal plants built before the 1990s (32–40%). At the same time, gas can reach 58–59% efficiency levels with the best available technology, while combined heat and power can offer efficiency rates of 80–90%.
Politics
Since energy now plays an essential role in industrial societies, the ownership and control of energy resources play an increasing role in politics. At the national level, governments seek to influence the sharing (distribution) of energy resources among various sections of society through pricing mechanisms, or even to determine who owns resources within their borders. They may also seek to influence the use of energy by individuals and businesses in an attempt to tackle environmental issues.
The most recent international political controversy regarding energy resources is in the context of the Iraq Wars. Some political analysts maintain that the hidden reason for both the 1991 and 2003 wars can be traced to strategic control of international energy resources. Others counter this analysis with economic figures: according to the latter group of analysts, the U.S. has spent about $336 billion in Iraq, compared with a budget of roughly $25 billion per year for the entirety of U.S. oil import dependence.
Policy
Energy policy is the manner in which a given entity (often governmental) has decided to address issues of energy development including energy production, distribution and consumption. The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques.
Security
Energy security is the intersection of national security and the availability of natural resources for energy consumption. Access to cheap energy has become essential to the functioning of modern economies. However, the uneven distribution of energy supplies among countries has led to significant vulnerabilities. Threats to energy security include the political instability of several energy-producing countries, the manipulation of energy supplies, competition over energy sources, attacks on supply infrastructure, as well as accidents, natural disasters, the funding of foreign dictators, rising terrorism, and dominant countries' reliance on foreign oil supplies. The limited supplies, uneven distribution, and rising costs of fossil fuels, such as oil and gas, create a need to change to more sustainable energy sources in the foreseeable future. Given the extent of current U.S. dependence on oil and the approaching limits of oil production, economies and societies will begin to feel the decline of the resource they depend upon. Energy security has become one of the leading issues in the world today, as oil and other resources have become vital to the world's people. With oil production rates decreasing and the production peak nearing, countries have moved to protect the resources that remain. Advances in renewable resources such as geothermal, solar power, wind power, and hydroelectric power have reduced the pressure on companies that produce the world's oil. Although these are not the only current and possible options as oil depletes, the most critical issue is protecting these vital resources from future threats. These new resources will become more valuable as the price of exporting and importing oil increases due to rising demand.
Development
Producing energy to sustain human needs is an essential social activity, and a great deal of effort goes into it. While most such effort is directed toward increasing the production of electricity and oil, newer ways of producing usable energy from the available energy resources are being explored. One such effort is to explore means of producing hydrogen fuel from water. Though hydrogen use is environmentally friendly, its production requires energy, and existing technologies for making it are not very efficient. Research is underway to explore enzymatic decomposition of biomass.
Other forms of conventional energy resources are also being used in new ways. Coal gasification and liquefaction are recent technologies that are becoming attractive after the realization that oil reserves, at present consumption rates, may be rather short lived. See alternative fuels.
Energy is the subject of significant research activities globally. For example, the UK Energy Research Centre is the focal point for UK energy research while the European Union has many technology programmes as well as a platform for engaging social science and humanities within energy research.
Transportation
All societies require materials and food to be transported over distances, generally against some force of friction. Since application of force over distance requires the presence of a source of usable energy, such sources are of great worth in society.
While energy resources are an essential ingredient for all modes of transportation in society, the transportation of energy resources is becoming equally important. Energy resources are frequently located far from the place where they are consumed. Therefore, their transportation is always in question. Some energy resources like liquid or gaseous fuels are transported using tankers or pipelines, while electricity transportation invariably requires a network of grid cables. The transportation of energy, whether by tanker, pipeline, or transmission line, poses challenges for scientists and engineers, policy makers, and economists to make it more risk-free and efficient.
Crisis
Economic and political instability can lead to an energy crisis. Notable oil crises are the 1973 oil crisis and the 1979 oil crisis. The advent of peak oil, the point in time when the maximum rate of global petroleum extraction is reached, will likely precipitate another energy crisis.
Mergers and acquisitions
Between 1985 and 2018, there were around 69,932 deals in the energy sector, with an overall value of about US$9,578 billion. The most active year was 2010, with about 3,761 deals. In terms of value, 2007 was the strongest year (US$684 billion), followed by a steep decline of 55.8% through 2009.
See also
Alternative energy
Climate lawsuit
Energy accounting
Energy quality
Energy system – the interpretation of the energy sector in system terms
Energy transformation
Economics of climate change
Hydrogen economy
List of books about the energy industry
List of countries by energy consumption per capita
List of energy resources
List of largest energy companies
Stranded asset
World energy consumption
Worldwide energy supply
References
Further reading
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
Fouquet, Roger, and Peter J.G. Pearson. "Seven Centuries of Energy Services: The Price and Use of Light in the United Kingdom (1300-2000)". Energy Journal 27.1 (2006).
Gales, Ben, et al. "North versus South: Energy transition and energy intensity in Europe over 200 years". European Review of Economic History 11.2 (2007): 219-253.
Nye, David E. Consuming power: A social history of American energies (MIT Press, 1999)
Pratt, Joseph A. Exxon: Transforming Energy, 1973-2005 (2013) 600pp
Stern, David I. "The role of energy in economic growth". Annals of the New York Academy of Sciences 1219.1 (2011): 26-51.
Warr, Benjamin, et al. "Energy use and economic development: A comparative analysis of useful work supply in Austria, Japan, the United Kingdom and the US during 100 years of economic growth". Ecological Economics 69.10 (2010): 1904-1917.
Industries (economics)
Energy development
Energy economics | Energy industry | Environmental_science | 3,150 |
4,454,234 | https://en.wikipedia.org/wiki/Gilbert%20Stork | Gilbert Stork (December 31, 1921 – October 21, 2017) was a Belgian-American organic chemist. For a quarter of a century he was the Eugene Higgins Professor of Chemistry Emeritus at Columbia University. He is known for making significant contributions to the total synthesis of natural products, including a lifelong fascination with the synthesis of quinine. In so doing he also made a number of contributions to mechanistic understanding of reactions, and performed pioneering work on enamine chemistry, leading to development of the Stork enamine alkylation.
He is believed to have been responsible for the first planned stereocontrolled synthesis, as well as the first synthesis of a natural product with high stereoselectivity.
Stork was also an accomplished mentor of young chemists and many of his students have gone on to make significant contributions in their own right.
Early life
Gilbert Stork was born in the Ixelles municipality of Brussels, Belgium on December 31, 1921. The oldest of three children, his middle brother, Michel, died in infancy, but he remained close to his younger sister Monique his whole life. His family had Jewish origins, although Gilbert himself did not recall them being religiously active. The family moved to Nice when Gilbert was about 14 (c. 1935) and remained there until 1939. During this period, Gilbert completed his lycée studies, distinguishing himself in French literature and writing. Characterizing himself during those years as "not terribly self-confident," and uncertain whether he could find employment in a profession he enjoyed, Gilbert considered applying for a colonial civil service job in French Indochina. However, the outbreak of World War II that year led the family to flee to New York, where his father's older brother, Sylvain, had already emigrated.
Education
Gilbert studied for a Bachelor of Science at the University of Florida from 1940 to 1942. He then moved to the University of Wisconsin–Madison for his PhD, which he obtained in 1945 under the supervision of Samuel M. McElvain. While at Wisconsin he met Carl Djerassi, with whom he would go on to form a lasting friendship.
Career
1946 Harvard University: Instructor; 1948 Assistant Professor
1953 Columbia University: Associate Professor; 1955 Professor; 1967–1993 Eugene Higgins Professor; 1993 Professor Emeritus
Elected to
U.S. National Academy of Sciences, 1961
American Academy of Arts and Sciences, 1962
Foreign Member of the French Academy of Sciences, 1989
American Philosophical Society, 1995
The Royal Society, UK 1999
Incidents
The explosive steak
During his time at the University of Wisconsin, Stork kept a steak on his windowsill in the winter in order to keep it refrigerated. The steak began to degrade, and to dispose of it Stork put it in a hot acid bath used to clean glassware, which contained nitric and sulfuric acids. He was then concerned he would produce nitroglycerin, due to the glycerin in the steak and the presence of the nitric and sulfuric acids. However, because of the high temperature of the bath, the oxidation of the glycerin was much faster than its nitration, preventing the formation of the explosive.
Awarded Honorary Fellowship or membership
Chemists' Club of New York, 1974
Pharmaceutical Society of Japan, 1973
Chemical Society of Japan, 2002
Royal Society of Chemistry, UK, 1983
Chairman Organic Division of the American Chemical Society, 1966–1967
Awards
Professor Stork received a number of awards and honors including the following:
1957 Award in Pure Chemistry of the American Chemical Society
1959 Guggenheim Foundation Fellow
1961 Baekeland Medal, North Jersey ACS
1962 Harrison Howe Award
1966 Edward Curtis Franklin Memorial Award, Stanford University
1967 ACS Award for Creative Work in Synthetic Organic Chemistry
1971 Synthetic Organic Chemical Manufacturers Association Gold Medal
1973 Nebraska Award
1978 Roussel Prize, Paris
1980 Nichols Medal, New York ACS, Arthur C. Cope Award, ACS
1982 Edgar Fahs Smith Award, Philadelphia ACS
1982 Willard Gibbs Medal, Chicago ACS
1982 National Academy of Sciences Award in Chemical Sciences
1982 National Medal of Science from Ronald Reagan; Linus Pauling Award
1985 Tetrahedron Prize
1986 Remsen Award, Maryland ACS
1986 Cliff S. Hamilton Award
1987 Monie A Ferst Award and Medal, Georgia Tech.
1991 Roger Adams Award
1992 George Kenner Award, Liverpool
1992 Robert Robinson Lectureship, University of Manchester
1992 Chemical Pioneer Award, American Institute of Chemists
1993 Welch Award in Chemistry, Robert A. Welch Foundation
1994 Allan R. Day Award, Philadelphia Organic Chemists Club
1995 Wolf Prize, Israel
2002 Sir Derek Barton Gold medal, Royal Society of Chemistry
2005 Herbert C. Brown Award, American Chemical Society
Stork also held honorary doctorates from Lawrence University, the University of Wisconsin–Madison, the University of Paris, the University of Rochester, and Columbia University.
The inaugural Gilbert Stork Lecture was held in his honor in 2014 at his alma mater, the University of Wisconsin–Madison. Lecture series named for Gilbert Stork are also held at other institutions, including Columbia University and the University of Pennsylvania, as a result of his endowments.
He was fêted for his sense of humor and colorful personality by historian of chemistry Jeffrey I. Seeman who published a collection of "Storkisms".
References
External links
Finding aid to the Gilbert Stork papers at Columbia University Rare Book & Manuscript Library
1921 births
2017 deaths
Belgian emigrants to the United States
Belgian Jews
American people of Belgian-Jewish descent
American chemists
Columbia University faculty
Harvard University faculty
Jewish American scientists
Jewish chemists
National Medal of Science laureates
Foreign members of the Royal Society
Belgian chemists
University of Florida alumni
University of Wisconsin–Madison alumni
Wolf Prize in Chemistry laureates
Members of the French Academy of Sciences
Members of the United States National Academy of Sciences
Organic chemists
21st-century American Jews | Gilbert Stork | Chemistry | 1,182 |
1,148,195 | https://en.wikipedia.org/wiki/Broadacre%20City | Broadacre City was an urban or suburban development concept proposed by Frank Lloyd Wright throughout most of his lifetime. He presented the idea in his book The Disappearing City in 1932. A few years later he unveiled a very detailed twelve-by-twelve-foot (3.7 × 3.7 m) scale model representing a hypothetical four-square-mile (10 km2) community. The model was crafted by the student interns who worked for him at Taliesin, and financed by
Edgar Kaufmann. It was initially displayed at an Industrial Arts Exposition in the Forum at the Rockefeller Center starting on April 15, 1935. After the New York exposition, Kaufmann arranged to have the model displayed in Pittsburgh at an exposition titled "New Homes for Old", sponsored by the Federal Housing Administration. The exposition opened on June 18 on the 11th floor of Kaufmann's store. The model is now on display at the Museum of Modern Art. Wright went on to refine the concept in later books and in articles until his death in 1959.
Many of the building models in the concept were completely new designs by Wright, while others were refinements of older ones, some of which had rarely been seen.
Broadacre City was the antithesis of a city and the apotheosis of the newly born suburbia, shaped through Wright's particular vision. It was both a planning statement and a socio-political scheme, inspired by Henry George, by which each U.S. family would be given a plot of land from the federal lands reserves, and a Wright-conceived community would be built anew from this. In a sense it was the exact opposite of transit-oriented development. There is a train station and a few office and apartment buildings in Broadacre City, but the apartment dwellers are expected to be a small minority. All important transport is done by automobile, and the pedestrian can exist safely only within the confines of the plots where most of the population dwells.
In his book Urban Planning Theory since 1945, Nigel Taylor considers the planning methodology of this type of city to be Blueprint planning, which came under heavy criticism in the late 1950s by many critics such as Jane Jacobs, in her 1961 book The Death and Life of Great American Cities.
Similar models
Some of the earlier garden city ideas of the landscape architect Frederick Law Olmsted and the urban planner Ebenezer Howard had much in common with Broadacre City, save for the absence of the automobile, born much later.
More recently, the development of the edge city is like an unplanned, incomplete version of Broadacre city.
The R. W. Lindholm Service Station in Cloquet, Minnesota, shows some of Wright's ideas for Broadacre City.
See also
List of planned cities
List of Frank Lloyd Wright works
References
Further reading
Krohe, James Jr. Return to Broadacre City. Illinois Issues April 2000, 27. Also in digital form on the Web.
Pimlott, Mark. "Frank Lloyd Wright & Broadacre City". In M. Pimlott's Without and within: Essays on territory and the interior, Rotterdam, Episode Publishers, 2007
Illustrations
Photograph of Broadacre City model
Plan of Broadacre City model
Planned communities in the United States
Frank Lloyd Wright buildings
Architecture related to utopias | Broadacre City | Engineering | 679 |
1,215,724 | https://en.wikipedia.org/wiki/Sculptor%20Dwarf%20Irregular%20Galaxy | The Sculptor Dwarf Irregular Galaxy (SDIG) is an irregular galaxy in the constellation Sculptor. It is a member of the NGC 7793 subgroup of the Sculptor Group.
Nearby galaxies and galaxy group information
The Sculptor Dwarf Irregular Galaxy and the dwarf galaxy UGCA 442 are both companions of the spiral galaxy NGC 7793. These galaxies all lie within the Sculptor Group, a weakly bound, filament-like group of galaxies located near the Local Group.
See also
Sculptor Dwarf Galaxy – a dwarf spheroidal or elliptical galaxy, also in Sculptor, but significantly closer; a satellite of the Milky Way.
References
External links
Dwarf irregular galaxies
Sculptor Group
Sculptor (constellation)
60,067,324 | https://en.wikipedia.org/wiki/Honor%206%20Plus | The Honor 6 Plus is a flagship Android smartphone produced by Honor, when it was still a sub brand of Huawei.
Specifications
The phone has 3 GB of RAM and 16 GB or 32 GB of internal storage, and connects via Bluetooth 4.0, Wi-Fi 802.11 a/b/g/n, and 2G/3G/4G LTE networks. It was released in December 2014.
The Honor 6 Plus has a 5.5-inch IPS LCD display and, after the latest update, runs the Android 6 OS. It uses the HiSilicon Kirin 925 (28 nm) chipset.
References
https://www.hihonor.com/global/products/smartphone/honor6-plus/
https://www.techradar.com/reviews/phones/mobile-phones/honor-6-plus-1279376/review
https://www.phonearena.com/phones/Honor-6-Plus_id9010
https://www.gsmarena.com/honor_6_plus-6777.php
Android (operating system) devices
Mobile phones introduced in 2014
Huawei smartphones
Discontinued flagship smartphones
Mobile phones with infrared transmitter
Huawei Honor | Honor 6 Plus | Technology | 256 |
51,190,584 | https://en.wikipedia.org/wiki/Social%20media%20in%20the%20financial%20services%20sector | Social media in the financial services sector refers to the use of social media by the financial services sector to promote and distribute financial services. Social media is used in various aspects of the financial industry including customer service, marketing, and product development. It has enabled financial institutions to extend their reach through direct and real-time communication with customers, fostering more personal connections. It also allows individuals to talk to other individuals creating lending and trading via social groups as well as developing new financial services by fintech startup companies.
In terms of marketing, social media is utilized by both traditional financial companies and disruptive fintech companies, such as peer-to-peer (P2P) lenders. The financial industry has used information technology since the 1960s, and social media fits in with this ongoing development. Larger, traditional financial firms have integrated social media into their marketing strategies.
Companies in the financial sector are subject to strict regulations that cover how they use social media. In the United States, the Financial Industry Regulatory Authority (FINRA) is a key regulator that sets rules for how financial firms can interact with consumers. This includes making sure social media posts follow financial advertising rules, such as being fair and balanced and not providing misleading information, and that financial advice is not provided by unqualified personnel, such as influencers.
History
In 2003, at the beginning of social media development, MySpace was founded as a “social networking service.” It allowed people to create a profile, connect with other people, and post videos, pictures, and songs. As MySpace grew in popularity, it attracted interest from companies wishing to promote their brands on the social platform. They were joined by Facebook and in 2010 by Instagram. Financial service firms were initially slow to adapt to promotion via social media but soon joined other big firms after they saw the success other industries had in engaging with younger people.
Uses
Branding
While companies are able to connect with more people remotely through providing online financial services, their branding strategy has shifted from customized to standardized. Prior to the advent of this technology, most banks used customized branding, targeting only customers in their regions. However, businesses can now use technology to operate beyond their geographic location and maintain a consistent image across multiple countries with standardized branding. By extending a consistent brand reputation across a wider geographic area, financial services companies can take advantage of economies of scale in advertising cost, lower administrative complexity, easier entry into new markets, and improved cross-border learning within the company.
Customer engagement
Many argue that online banking has made customers feel more distant from their banks due to the lack of human-to-human interaction. Instead of going to a local branch and interacting with a teller, customers can now do most of their banking online, even through mobile devices. Social media has provided a way for companies to once again connect with their customers on a personal level. The financial services sector uses social media platforms to create the value that was once found physically in local branches. For example, through their Facebook page, a bank may post a snapshot of one of their employees with a brief blurb about his/her job duties and values. This strategy replicates the human-to-human interaction a customer would receive at a local branch and humanizes larger financial institutions.
Lending
Social media is a core marketing channel for online peer-to-peer lending as well as small business lenders. Since these companies operate exclusively online, it makes sense for them to market online through social media channels. They are able to grow and find new lenders and buyers by utilizing social networks.
Trading
Social trading is an alternative way of analyzing financial data by looking at what other traders are doing and comparing, copying and discussing their techniques and strategies. Prior to the advent of social trading, investors and traders relied on fundamental or technical analysis to form their investment decisions. With social trading, investors and traders can integrate social indicators from the trading data feeds of other traders into their investment decision process.
Investors also use platforms like Reddit, Signal messaging, or WeChat to create social communities to discuss investments and finance. In some cases they join together around meme stocks to move financial markets, as in the 2021 GameStop short squeeze. They can also use social groups to launch and promote new products such as cryptocurrencies.
Investing applications like WeBull incorporate a forum-style messaging system on each stock that is available for trading. Financial brokers such as Fidelity Investments, Interactive Brokers, and E-Trade have moved to incorporate community features in their investment apps.
Regulations
The use of social media by investors and financial services professionals for business purposes is subject to regulatory oversight; in the United States this is done primarily by the Financial Industry Regulatory Authority (FINRA). FINRA's rules are designed to protect investors from misleading information in all communications, and this also applies to social media. This includes making sure social media posts follow financial advertising rules, such as being fair and balanced and not providing misleading information, and that advice is not provided by unqualified personnel, such as influencers or bank staff acting in a personal capacity. Financial firms have to maintain books and records of all interactions with customers, and this includes social media.
New products and services
Social media has created entirely new products for the financial services sector, revolutionizing existing offerings and fostering new industries through the merging of social technology and financial services. Fintech startups use social media to promote new products and get them established.
Many developing nations have used social media to leapfrog traditional financial technology. For example, WeChat Pay, which developed from the Chinese social media platform WeChat, became a major payment system in China in just a few years. In 2015, according to consulting firm Accenture, 390 million people in China had registered to use mobile banking, a figure larger than the population of the United States. Although not as popular in the U.S., the most prominent American fintech company of this kind, Venmo, similarly blends technology and financial services together on a social platform.
Other financial technology companies that have used social media to develop or promote financial products include:
Lending Club - One of the first peer-to-peer lending businesses
OnDeck Capital - A US online-only lending business
Funding Circle - A UK based online lending company
Wise - A global online money transfer company
Kabbage - A US online unsecured loan company, later acquired by American Express
Avant - A US online unsecured loan company
Zopa - A UK online neobank providing peer-to-peer lending
Risks
Reputational damage
Due to the real-time nature of social media, financial services companies can be exposed to reputational risk. Any negative experience a customer has can easily be shared online and become a viral phenomenon; such comments can have a detrimental effect on the company's stock price and reputation. Positive experiences can also be shared online, but they are much less likely to go viral.
Scams
The nature of social media makes it easy to contact individuals without being seen by the wider community, which allows scammers to target them. Examples include romance scams such as the pig butchering scam, in which an individual is tricked into transferring funds or assets to the scammer over social media, making it hard for law enforcement to track the scammer or recover the funds.
Customer privacy
Customer privacy is important for the financial services industry. It is critical that customer information, such as bank account numbers and other personal details, is kept private. However, this information can be leaked by customers themselves: for example, a customer unhappy with a bank's service may tweet at the bank expressing their frustrations and include their name and account number.
See also
Cryptocurrency and crime
Influencer marketing
Marketing communications
Online advertising
Social media in the fashion industry
References
Social media
Financial services | Social media in the financial services sector | Technology | 1,570 |
4,101,529 | https://en.wikipedia.org/wiki/Rocker%20box | A rocker box (also known as a cradle or a big box) is a gold mining implement for separating alluvial placer gold from sand and gravel which was used in placer mining in the 19th century. It consists of a high-sided box, which is open on one end and on top, and was placed on rockers.
The inside bottom of the box is lined with riffles and usually a carpet (called Miner's Moss), similar to a sluice box. On top of the box is a classifier sieve (usually with half-inch or quarter-inch openings) which screens out larger pieces of rock and other material, allowing only finer sand and gravel through. Between the sieve and the lower sluice section is a baffle, which acts as another trap for fine gold and also ensures that the aggregate material being processed is evenly distributed before it enters the sluice section. It sits at an angle and points towards the closed back of the box. Traditionally, the baffle consisted of a flexible apron made of canvas or a similar material, which had a sag of about an inch and a half in the center to act as a collection pocket for fine gold. Later rockers (including most modern ones) dispensed with the flexible apron and used a pair of solid wood or metal baffle boards. These are sometimes covered with carpet to trap fine gold. The entire device sits on rockers at a slight gradient, which allows it to be rocked side to side.
Today, the rocker box is not used as extensively as the sluice, but still is an effective method of recovering gold in areas where there is not enough available water to operate a sluice effectively. Like a sluice box, the rocker box has riffles and a carpet in it to trap gold. It was designed to be used in areas with less water than a sluice box. The mineral processing involves pouring water out of a small cup and then rocking the small sluice box like a cradle, thus the name rocker box or cradle.
Rocker boxes must be manipulated carefully to prevent losing the gold. Although big and difficult to move, the rocker can process twice the amount of gravel, and therefore recover more gold in one day, than an ordinary gold mining pan. The rocker, like the pan, is used extensively in small-scale placer work, in sampling, and for washing sluice concentrates and material cleaned by hand from bedrock in other placer operations. One to three cubic yards, bank measure, can be dug and washed in a rocker per man-shift, depending upon the distance the gravel or water has to be carried, the character of the gravel, and the size of the rocker.
Rockers are usually homemade and display a variety of designs. A favorite design consists essentially of a combination washing box and screen, a canvas or carpet apron under the screen, a short sluice with two or more riffles, and rockers under the sluice. The bottom of the washing box consists of sheet metal with holes about half an inch in diameter punched in it, or a half-inch mesh screen can be used. Typical published dimensions are satisfactory, but variations are possible. The bottom of the rocker should be made of a single wide, smooth board, which will greatly facilitate cleanups. The materials for building a rocker cost only a few dollars, depending mainly on the source of lumber.
Notes
References
Further reading
Recreational Gold Panning
Gold Ankauf (in German)
Fossicking
Gold mining
Mining equipment | Rocker box | Engineering | 728 |
79,673 | https://en.wikipedia.org/wiki/Diff | In computing, the utility diff is a data comparison tool that computes and displays the differences between the contents of files. Unlike edit distance notions used for other purposes, diff is line-oriented rather than character-oriented, but it is like Levenshtein distance in that it tries to determine the smallest set of deletions and insertions to create one file from the other. The utility displays the changes in one of several standard formats, such that both humans or computers can parse the changes, and use them for patching.
Typically, diff is used to show the changes between two versions of the same file. Modern implementations also support binary files. The output is called a "diff", or a patch, since the output can be applied with the Unix program patch. The output of similar file comparison utilities is also called a "diff"; like the use of the word "grep" for describing the act of searching, the word diff became a generic term for calculating data difference and the results thereof. The POSIX standard specifies the behavior of the "diff" and "patch" utilities and their file formats.
History
diff was developed in the early 1970s on the Unix operating system, which was emerging from Bell Labs in Murray Hill, New Jersey. It was part of the 5th Edition of Unix released in 1974 and was written by Douglas McIlroy and James Hunt. This research was published in a 1976 paper co-written with James W. Hunt, who developed an initial prototype of diff. The algorithm this paper described became known as the Hunt–Szymanski algorithm.
McIlroy's work was preceded and influenced by Steve Johnson's comparison program on GECOS and by a comparison program written by Mike Lesk, which also originated on Unix and, like diff, produced line-by-line changes and even used angle brackets (">" and "<") for presenting line insertions and deletions in its output. The heuristics used in these early applications were, however, deemed unreliable. The potential usefulness of a diff tool provoked McIlroy into researching and designing a more robust tool that could be used in a variety of tasks, but perform well within the processing and size limitations of the PDP-11's hardware. His approach to the problem resulted from collaboration with individuals at Bell Labs including Alfred Aho, Elliot Pinson, Jeffrey Ullman, and Harold S. Stone.
In the context of Unix, the use of the ed line editor provided diff with the natural ability to create machine-usable "edit scripts". These edit scripts, when saved to a file, can, along with the original file, be reconstituted by ed into the modified file in its entirety. This greatly reduced the secondary storage necessary to maintain multiple versions of a file. McIlroy considered writing a post-processor for diff where a variety of output formats could be designed and implemented, but he found it more frugal and simpler to have diff be responsible for generating the syntax and reverse-order input accepted by the ed command.
In 1984, Larry Wall created a separate utility, patch, releasing its source code on the mod.sources and net.sources newsgroups. This program modifies files using output from diff and has the ability to match context.
X/Open Portability Guide issue 2 of 1987 includes diff. Context mode was added in POSIX.1-2001 (issue 6). Unified mode was added in POSIX.1-2008 (issue 7).
In diff's early years, common uses included comparing changes in the source of software code and markup for technical documents, verifying program debugging output, comparing filesystem listings and analyzing computer assembly code. The output targeted for ed was motivated by the compression it provided for a sequence of modifications made to a file. The Source Code Control System (SCCS) and its ability to archive revisions emerged in the late 1970s as a consequence of storing edit scripts from diff.
Algorithm
The operation of diff is based on solving the longest common subsequence problem.
In this problem, given two sequences of items:
a b c d f g h j q z
a b c d e f g i j k r x y z
and we want to find a longest sequence of items that is present in both original sequences in the same order. That is, we want to find a new sequence which can be obtained from the first original sequence by deleting some items, and from the second original sequence by deleting other items. We also want this sequence to be as long as possible. In this case it is
a b c d f g j z
From a longest common subsequence it is only a small step to get diff-like output: if an item is absent in the subsequence but present in the first original sequence, it must have been deleted (as indicated by the '-' marks, below). If it is absent in the subsequence but present in the second original sequence, it must have been inserted (as indicated by the '+' marks).
e h i q k r x y
+ - + - + + + +
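The following is a minimal Python sketch of this idea, not the algorithm actual diff implementations use (more efficient algorithms are discussed under Implementations below): it computes one longest common subsequence by dynamic programming and then emits '-', '+' and unchanged items around it. All names are illustrative.

def lcs(a, b):
    # Dynamic-programming table of LCS lengths for all prefix pairs.
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                table[i + 1][j + 1] = table[i][j] + 1
            else:
                table[i + 1][j + 1] = max(table[i][j + 1], table[i + 1][j])
    # Walk back through the table to recover one longest common subsequence.
    seq, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            seq.append(a[i - 1])
            i, j = i - 1, j - 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return seq[::-1]

def simple_diff(a, b):
    # Items missing from the subsequence were deleted from a ('-')
    # or inserted from b ('+'); everything else is unchanged.
    out, i, j = [], 0, 0
    for item in lcs(a, b):
        while a[i] != item:
            out.append('- ' + a[i])
            i += 1
        while b[j] != item:
            out.append('+ ' + b[j])
            j += 1
        out.append('  ' + item)
        i += 1
        j += 1
    out.extend('- ' + x for x in a[i:])
    out.extend('+ ' + x for x in b[j:])
    return out

print('\n'.join(simple_diff('abcdfghjqz', 'abcdefgijkrxyz')))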
Usage
The diff command is invoked from the command line, passing it the names of two files: diff original new. The output of the command represents the changes required to transform the original file into the new file.
If original and new are directories, then will be run on each file that exists in both directories. An option, -r, will recursively descend any matching subdirectories to compare files between directories.
All of the examples in the article use the following two files, original and new:
original:
This part of the
document has stayed the
same from version to
version. It shouldn't
be shown if it doesn't
change. Otherwise, that
would not be helping to
compress the size of the
changes.

This paragraph contains
text that is outdated.
It will be deleted in the
near future.

It is important to spell
check this dokument. On
the other hand, a
misspelled word isn't
the end of the world.
Nothing in the rest of
this paragraph needs to
be changed. Things can
be added after it.
new:
This is an important
notice! It should
therefore be located at
the beginning of this
document!

This part of the
document has stayed the
same from version to
version. It shouldn't
be shown if it doesn't
change. Otherwise, that
would not be helping to
compress the size of the
changes.

It is important to spell
check this document. On
the other hand, a
misspelled word isn't
the end of the world.
Nothing in the rest of
this paragraph needs to
be changed. Things can
be added after it.

This paragraph contains
important new additions
to this document.
The command diff original new produces the following normal diff output:
0a1,6
> This is an important
> notice! It should
> therefore be located at
> the beginning of this
> document!
>
11,15d16
< This paragraph contains
< text that is outdated.
< It will be deleted in the
< near future.
<
17c18
< check this dokument. On
---
> check this document. On
24a26,29
>
> This paragraph contains
> important new additions
> to this document.
Note: Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting.
In this traditional output format, a stands for added, d for deleted and c for changed. Line numbers of the original file appear before a/d/c and those of the new file appear after. The less-than and greater-than signs (at the beginning of lines that are added, deleted or changed) indicate which file the lines appear in. Addition lines are added to the original file to appear in the new file. Deletion lines are deleted from the original file to be missing in the new file.
By default, lines common to both files are not shown. Lines that have moved are shown as added at their new location and as deleted from their old location. However, some diff tools highlight moved lines.
Output variations
Edit script
An ed script can still be generated by modern versions of diff with the -e option. The resulting edit script for this example is as follows:
24a

This paragraph contains
important new additions
to this document.
.
17c
check this document. On
.
11,15d
0a
This is an important
notice! It should
therefore be located at
the beginning of this
document!

.
In order to transform the content of file original into the content of file new using ed, we should append two lines to this diff file, one line containing a w (write) command, and one containing a q (quit) command. Here we gave the diff file the name mydiff and the transformation will then happen when we run ed - original < mydiff.
Context format
The Berkeley distribution of Unix made a point of adding the context format (-c) and the ability to recurse on filesystem directory structures (-r), adding those features in 2.8 BSD, released in July 1981. The context format of diff introduced at Berkeley helped with distributing patches for source code that may have been changed minimally.
In the context format, any changed lines are shown alongside unchanged lines before and after. The inclusion of any number of unchanged lines provides a context to the patch. The context consists of lines that have not changed between the two files and serve as a reference to locate the lines' place in a modified file and find the intended location for a change to be applied regardless of whether the line numbers still correspond. The context format introduces greater readability for humans and reliability when applying the patch, and an output which is accepted as input to the patch program. This intelligent behavior is not possible with the traditional diff output.
The number of unchanged lines shown above and below a change hunk can be defined by the user, even zero, but three lines is typically the default. If the context of unchanged lines in a hunk overlap with an adjacent hunk, then diff will avoid duplicating the unchanged lines and merge the hunks into a single hunk.
A "" represents a change between lines that correspond in the two files, whereas a "" represents the addition of a line, and a "" the removal of a line. A blank space represents an unchanged line. At the beginning of the patch is the file information, including the full path and a time stamp delimited by a tab character. At the beginning of each hunk are the line numbers that apply for the corresponding change in the files. A number range appearing between sets of three asterisks applies to the original file, while sets of three dashes apply to the new file. The hunk ranges specify the starting and ending line numbers in the respective file.
The command diff -c original new produces the following output:
*** /path/to/original timestamp
--- /path/to/new timestamp
***************
*** 1,3 ****
--- 1,9 ----
+ This is an important
+ notice! It should
+ therefore be located at
+ the beginning of this
+ document!
+
  This part of the
  document has stayed the
  same from version to
***************
*** 8,20 ****
  compress the size of the
  changes.

- This paragraph contains
- text that is outdated.
- It will be deleted in the
- near future.
-
  It is important to spell
! check this dokument. On
  the other hand, a
  misspelled word isn't
  the end of the world.
--- 14,21 ----
  compress the size of the
  changes.

  It is important to spell
! check this document. On
  the other hand, a
  misspelled word isn't
  the end of the world.
***************
*** 22,24 ****
--- 23,29 ----
  this paragraph needs to
  be changed. Things can
  be added after it.
+
+ This paragraph contains
+ important new additions
+ to this document.
Note: Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting.
Unified format
The unified format (or unidiff) inherits the technical improvements made by the context format, but produces a smaller diff with old and new text presented immediately adjacent. Unified format is usually invoked using the "-u" command-line option. This output is often used as input to the patch program. Many projects specifically request that "diffs" be submitted in the unified format, making unified diff format the most common format for exchange between software developers.
Unified context diffs were originally developed by Wayne Davison in August 1990 (in unidiff which appeared in Volume 14 of comp.sources.misc). Richard Stallman added unified diff support to the GNU Project's diff utility one month later, and the feature debuted in GNU diff 1.15, released in January 1991. GNU diff has since generalized the context format to allow arbitrary formatting of diffs.
The format starts with the same two-line header as the context format, except that the original file is preceded by "---" and the new file is preceded by "+++". Following this are one or more change hunks that contain the line differences in the file. The unchanged, contextual lines are preceded by a space character, addition lines are preceded by a plus sign, and deletion lines are preceded by a minus sign.
A hunk begins with range information and is immediately followed with the line additions, line deletions, and any number of the contextual lines. The range information is surrounded by double at signs, and combines onto a single line what appears on two lines in the context format (above). The format of the range information line is as follows:
@@ -l,s +l,s @@ optional section heading
The hunk range information contains two hunk ranges. The range for the hunk of the original file is preceded by a minus symbol, and the range for the new file is preceded by a plus symbol. Each hunk range is of the format l,s where l is the starting line number and s is the number of lines the change hunk applies to for each respective file. In many versions of GNU diff, each range can omit the comma and trailing value s, in which case s defaults to 1. Note that the only really interesting value is the l line number of the first range; all the other values can be computed from the diff.
The hunk range for the original should be the sum of all contextual and deletion (including changed) hunk lines. The hunk range for the new file should be a sum of all contextual and addition (including changed) hunk lines. If hunk size information does not correspond with the number of lines in the hunk, then the diff could be considered invalid and be rejected.
Optionally, the hunk range can be followed by the heading of the section or function that the hunk is part of. This is mainly useful to make the diff easier to read. When creating a diff with GNU diff, the heading is identified by regular expression matching.
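As an illustration of the range rules just described, here is a minimal Python sketch that parses a hunk range line, including an omitted s (defaulting to 1) and the optional section heading; the regular expression and function name are illustrative rather than taken from any particular implementation.

import re

# One hunk range line, e.g. "@@ -8,13 +14,8 @@" or "@@ -1 +1,2 @@ main()".
HUNK_HEADER = re.compile(r'^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@(.*)$')

def parse_hunk_header(line):
    m = HUNK_HEADER.match(line)
    if m is None:
        raise ValueError('not a hunk header: %r' % line)
    old_start = int(m.group(1))
    old_len = int(m.group(2)) if m.group(2) else 1   # omitted s defaults to 1
    new_start = int(m.group(3))
    new_len = int(m.group(4)) if m.group(4) else 1
    heading = m.group(5).strip()                     # optional section heading
    return old_start, old_len, new_start, new_len, heading

print(parse_hunk_header('@@ -8,13 +14,8 @@'))     # (8, 13, 14, 8, '')
print(parse_hunk_header('@@ -1 +1,2 @@ main()'))  # (1, 1, 1, 2, 'main()')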
If a line is modified, it is represented as a deletion and addition. Since the hunks of the original and new file appear in the same hunk, such changes would appear adjacent to one another.
An occurrence of this in the example below is:
-check this dokument. On
+check this document. On
The command diff -u original new produces the following output:
--- /path/to/original timestamp
+++ /path/to/new timestamp
@@ -1,3 +1,9 @@
+This is an important
+notice! It should
+therefore be located at
+the beginning of this
+document!
+
 This part of the
 document has stayed the
 same from version to
@@ -8,13 +14,8 @@
 compress the size of the
 changes.

-This paragraph contains
-text that is outdated.
-It will be deleted in the
-near future.
-
 It is important to spell
-check this dokument. On
+check this document. On
 the other hand, a
 misspelled word isn't
 the end of the world.
@@ -22,3 +23,7 @@
 this paragraph needs to
 be changed. Things can
 be added after it.
+
+This paragraph contains
+important new additions
+to this document.
Note: Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting.
Note that to successfully separate the file names from the timestamps, the delimiter between them is a tab character. This is invisible on screen and can be lost when diffs are copy/pasted from console/terminal screens.
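For comparison, Python's standard difflib module can produce unified-format output directly (using its own SequenceMatcher algorithm rather than diff's); a brief sketch, assuming the two example files above exist on disk as original and new:

import difflib

with open('original') as f:
    old_lines = f.readlines()
with open('new') as f:
    new_lines = f.readlines()

# n=3 reproduces the default three lines of context used by diff -u.
for line in difflib.unified_diff(old_lines, new_lines,
                                 fromfile='/path/to/original',
                                 tofile='/path/to/new', n=3):
    print(line, end='')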
Extensions
There are some modifications and extensions to the diff formats that are used and understood by certain programs and in certain contexts. For example, some revision control systems—such as Subversion—specify a version number, "working copy", or any other comment instead of or in addition to a timestamp in the diff's header section.
Some tools allow diffs for several different files to be merged into one, using a header for each modified file that may look something like this:
Index: path/to/file.cpp
The special case of files that do not end in a newline is not handled. Neither the unidiff utility nor the POSIX diff standard define a way to handle such files. (Indeed, such files are not "text" files by strict POSIX definitions.) GNU diff and git produce "\ No newline at end of file" (or a translated version) as a diagnostic, but this behavior is not portable. GNU patch does not seem to handle this case, while git-apply does.
The patch program does not necessarily recognize implementation-specific diff output. GNU patch is, however, known to recognize git patches and act a little differently.
Implementations and related programs
Changes since 1975 include improvements to the core algorithm, the addition of useful features to the command, and the design of new output formats. The basic algorithm is described in the papers An O(ND) Difference Algorithm and its Variations by Eugene W. Myers
and in A File Comparison Program by Webb Miller and Myers.
The algorithm was independently discovered and described in Algorithms for Approximate String Matching, by Esko Ukkonen.
The first editions of the diff program were designed for line comparisons of text files expecting the newline character to delimit lines. By the 1980s, support for binary files resulted in a shift in the application's design and implementation.
GNU diff and diff3 are included in the diffutils package with other diff and patch related utilities.
Formatters and front-ends
Postprocessors sdiff and diffmk render side-by-side diff listings and apply change marks to printed documents, respectively. Both were developed elsewhere in Bell Labs in or before 1981.
Diff3 compares one file against two other files by reconciling two diffs. It was originally conceived by Paul Jensen to reconcile changes made by two people editing a common source. It is also used by revision control systems, e.g. RCS, for merging.
Emacs has Ediff for showing the changes a patch would provide in a user interface that combines interactive editing and merging capabilities for patch files.
Vim provides vimdiff to compare from two to eight files, with differences highlighted in color. While historically invoking the diff program, modern vim uses git's fork of xdiff library (LibXDiff) code, providing improved speed and functionality.
GNU Wdiff is a front end to diff that shows the words or phrases that changed in a text document of written language even in the presence of word-wrapping or different column widths.
colordiff is a Perl wrapper for 'diff' and produces the same output but with colorization for added and deleted bits. diff-so-fancy and diff-highlight are newer analogues. "delta" is a Rust rewrite that highlights changes and the underlying code at the same time.
Patchutils contains tools that combine, rearrange, compare and fix context diffs and unified diffs.
Algorithmic derivatives
Utilities that compare source files by their syntactic structure have been built mostly as research tools for some programming languages; some are available as commercial tools. In addition, free tools that perform syntax-aware diff include:
C++: zograscope, AST-based.
HTML: Daisydiff, html-differ.
XML: xmldiffpatch by Microsoft and xmldiffmerge for IBM.
JavaScript: astii (AST-based).
Multi-language: Pretty Diff (format code and then diff)
spiff is a variant of diff that ignores differences in floating point calculations with roundoff errors and whitespace, both of which are generally irrelevant to source code comparison. Bellcore wrote the original version. An HPUX port is the most current public release. spiff does not support binary files. spiff outputs to the standard output in standard diff format and accepts inputs in the C, Bourne shell, Fortran, Modula-2 and Lisp programming languages.
LibXDiff is an LGPL library that provides an interface to many algorithms from 1998. An improved Myers algorithm with Rabin fingerprint was originally implemented (as of the final release of 2008), but git and libgit2's fork has since expanded the repository with many of its own. One algorithm called "histogram" is generally regarded as much better than the original Myers algorithm, both in speed and quality. This is the modern version of LibXDiff used by Vim.
See also
Comparison of file comparison tools
Delta encoding
Difference operator
Edit distance
Levenshtein distance
History of software configuration management
Longest common subsequence problem
Microsoft File Compare
Microsoft WinDiff
Revision control
Software configuration management
Other free file comparison tools
cmp
comm
tkdiff
WinMerge (Microsoft Windows)
meld
Pretty Diff
References
Further reading
A technique for isolating differences between files
A generic implementation of the Myers SES/LCS algorithm with the Hirschberg linear space refinement (C source code)
External links
JavaScript Implementation
1974 software
Free file comparison tools
Formal languages
Pattern matching
Data differencing
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands | Diff | Mathematics,Technology | 4,704 |
1,708,182 | https://en.wikipedia.org/wiki/Designer%20baby | A designer baby is a baby whose genetic makeup has been selected or altered, often to exclude a particular gene or to remove genes associated with disease. This process usually involves analysing a wide range of human embryos to identify genes associated with particular diseases and characteristics, and selecting embryos that have the desired genetic makeup; a process known as preimplantation genetic diagnosis. Screening for single genes is commonly practiced, and polygenic screening is offered by a few companies. Other methods by which a baby's genetic information can be altered involve directly editing the genome before birth, which is not routinely performed and only one instance of this is known to have occurred as of 2019, where Chinese twins Lulu and Nana were edited as embryos, causing widespread criticism.
Genetically altered embryos can be achieved by introducing the desired genetic material into the embryo itself, or into the sperm and/or egg cells of the parents; either by delivering the desired genes directly into the cell or using gene-editing technology. This process is known as germline engineering and performing this on embryos that will be brought to term is typically prohibited by law. Editing embryos in this manner means that the genetic changes can be carried down to future generations, and since the technology concerns editing the genes of an unborn baby, it is considered controversial and is subject to ethical debate. While some scientists condone the use of this technology to treat disease, concerns have been raised that this could be translated into using the technology for cosmetic purposes and enhancement of human traits.
Pre-implantation genetic diagnosis
Pre-implantation genetic diagnosis (PGD or PIGD) is a procedure in which embryos are screened prior to implantation. The technique is used alongside in vitro fertilisation (IVF) to obtain embryos for evaluation of the genome – alternatively, ovocytes can be screened prior to fertilisation. The technique was first used in 1989.
PGD is used primarily to select embryos for implantation in the case of possible genetic defects, allowing identification of mutated or disease-related alleles and selection against them. It is especially useful in embryos from parents where one or both carry a heritable disease. PGD can also be used to select for embryos of a certain sex, most commonly when a disease is more strongly associated with one sex than the other (as is the case for X-linked disorders which are more common in males, such as haemophilia). Infants born with traits selected following PGD are sometimes considered to be designer babies.
One application of PGD is the selection of 'saviour siblings', children who are born to provide a transplant (of an organ or group of cells) to a sibling with a usually life-threatening disease. Saviour siblings are conceived through IVF and then screened using PGD to analyze genetic similarity to the child needing a transplant, to reduce the risk of rejection.
Process
Embryos for PGD are obtained from IVF procedures in which the oocyte is artificially fertilised by sperm. Oocytes from the woman are harvested following controlled ovarian hyperstimulation (COH), which involves fertility treatments to induce production of multiple oocytes. After harvesting the oocytes, they are fertilised in vitro, either during incubation with multiple sperm cells in culture, or via intracytoplasmic sperm injection (ICSI), where sperm is directly injected into the oocyte. The resulting embryos are usually cultured for 3–6 days, allowing them to reach the blastomere or blastocyst stage.
Once embryos reach the desired stage of development, cells are biopsied and genetically screened. The screening procedure varies based on the nature of the disorder being investigated.
Polymerase chain reaction (PCR) is a process in which DNA sequences are amplified to produce many more copies of the same segment, allowing screening of large samples and identification of specific genes. The process is often used when screening for monogenic disorders, such as cystic fibrosis.
Another screening technique, fluorescent in situ hybridisation (FISH) uses fluorescent probes which specifically bind to highly complementary sequences on chromosomes, which can then be identified using fluorescence microscopy. FISH is often used when screening for chromosomal abnormalities such as aneuploidy, making it a useful tool when screening for disorders such as Down syndrome.
Following the screening, embryos with the desired trait (or lacking an undesired trait such as a mutation) are transferred into the mother's uterus, then allowed to develop naturally.
Regulation
PGD regulation is determined by individual countries' governments, with some prohibiting its use entirely, including in Austria, China, and Ireland.
In many countries, PGD is permitted under very stringent conditions for medical use only, as is the case in France, Switzerland, Italy and the United Kingdom. Whilst PGD in Italy and Switzerland is only permitted under certain circumstances, there is no clear set of specifications under which PGD can be carried out, and selection of embryos based on sex is not permitted. In France and the UK, regulations are much more detailed, with dedicated agencies setting out framework for PGD. Selection based on sex is permitted under certain circumstances, and genetic disorders for which PGD is permitted are detailed by the countries' respective agencies.
In contrast, United States federal law does not regulate PGD, with no dedicated agencies specifying the regulatory framework by which healthcare professionals must abide. Elective sex selection is permitted, accounting for around 9% of all PGD cases in the U.S., as is selection for desired conditions such as deafness or dwarfism.
Pre-implantation Genetic Testing
Based on the specific analysis conducted:
PGT-M (Preimplantation Genetic Testing for monogenic diseases): It is used to detect hereditary diseases caused by the mutation or alteration of the DNA sequence of a single gene.
PGT-A (Preimplantation Genetic Testing for aneuploidy): It is used to diagnose numerical abnormalities (aneuploidies).
Human germline engineering
Human germline engineering is a process in which the human genome is edited within a germ cell, such as a sperm cell or oocyte (causing heritable changes), or in the zygote or embryo following fertilization. Germline engineering results in changes in the genome being incorporated into every cell in the body of the offspring (or of the individual following embryonic germline engineering). This process differs from somatic cell engineering, which does not result in heritable changes. Most human germline editing is performed on individual cells and non-viable embryos, which are destroyed at a very early stage of development. In November 2018, however, a Chinese scientist, He Jiankui, announced that he had created the first human germline genetically edited babies.
Genetic engineering relies on a knowledge of human genetic information, made possible by research such as the Human Genome Project, which identified the position and function of all the genes in the human genome. As of 2019, high-throughput sequencing methods allow genome sequencing to be conducted very rapidly, making the technology widely available to researchers.
Germline modification is typically accomplished through techniques which incorporate a new gene into the genome of the embryo or germ cell in a specific location. This can be achieved by introducing the desired DNA directly to the cell for it to be incorporated, or by replacing a gene with one of interest. These techniques can also be used to remove or disrupt unwanted genes, such as ones containing mutated sequences.
Whilst germline engineering has mostly been performed in mammals and other animals, research on human cells in vitro is becoming more common. Most commonly used in human cells are germline gene therapy and the engineered nuclease system CRISPR/Cas9.
Germline gene modification
Gene therapy is the delivery of a nucleic acid (usually DNA or RNA) into a cell as a pharmaceutical agent to treat disease. Most commonly it is carried out using a vector, which transports the nucleic acid (usually DNA encoding a therapeutic gene) into the target cell. A vector can transduce a desired copy of a gene into a specific location to be expressed as required. Alternatively, a transgene can be inserted to deliberately disrupt an unwanted or mutated gene, preventing transcription and translation of the faulty gene products to avoid a disease phenotype.
Gene therapy in patients is typically carried out on somatic cells in order to treat conditions such as some leukaemias and vascular diseases.
Human germline gene therapy, in contrast, is restricted to in vitro experiments in some countries, whilst others prohibit it entirely, including Australia, Canada, Germany and Switzerland.
Whilst the National Institutes of Health in the US does not currently allow in utero germline gene transfer clinical trials, in vitro trials are permitted. The NIH guidelines state that further studies are required regarding the safety of gene transfer protocols before in utero research is considered, requiring current studies to provide demonstrable efficacy of the techniques in the laboratory. Research of this sort is currently using non-viable embryos to investigate the efficacy of germline gene therapy in treatment of disorders such as inherited mitochondrial diseases.
Gene transfer to cells is usually by vector delivery. Vectors are typically divided into two classes – viral and non-viral.
Viral vectors
Viruses infect cells by transducing their genetic material into a host's cell, using the host's cellular machinery to generate viral proteins needed for replication and proliferation. By modifying viruses and loading them with the therapeutic DNA or RNA of interest, it is possible to use these as a vector to provide delivery of the desired gene into the cell.
Retroviruses are some of the most commonly used viral vectors, as they not only introduce their genetic material into the host cell, but also copy it into the host's genome. In the context of gene therapy, this allows permanent integration of the gene of interest into the patient's own DNA, providing longer lasting effects.
Viral vectors work efficiently and are mostly safe but present with some complications, contributing to the stringency of regulation on gene therapy. Despite partial inactivation of viral vectors in gene therapy research, they can still be immunogenic and elicit an immune response. This can impede viral delivery of the gene of interest, as well as cause complications for the patient themselves when used clinically, especially in those who already have a serious genetic illness. Another difficulty is the possibility that some viruses will randomly integrate their nucleic acids into the genome, which can interrupt gene function and generate new mutations.
This is a significant concern when considering germline gene therapy, due to the potential to generate new mutations in the embryo or offspring.
Non-viral vectors
Non-viral methods of nucleic acid transfection involve injecting a naked DNA plasmid into the cell for incorporation into the genome. This method used to be relatively ineffective, with a low frequency of integration; however, efficiency has since greatly improved, using methods to enhance the delivery of the gene of interest into cells. Furthermore, non-viral vectors are simple to produce on a large scale and are not highly immunogenic.
Some non-viral methods are detailed below:
Electroporation is a technique in which high voltage pulses are used to carry DNA into the target cell across the membrane. The method is believed to function due to the formation of pores across the membrane, but although these are temporary, electroporation results in a high rate of cell death which has limited its use. An improved version of this technology, electron-avalanche transfection, has since been developed, which involves shorter (microsecond) high voltage pulses which result in more effective DNA integration and less cellular damage.
The gene gun is a physical method of DNA transfection, where a DNA plasmid is loaded onto a particle of heavy metal (usually gold) and loaded onto the 'gun'. The device generates a force to penetrate the cell membrane, allowing the DNA to enter whilst retaining the metal particle.
Oligonucleotides are used as chemical vectors for gene therapy, often used to disrupt mutated DNA sequences to prevent their expression. Disruption in this way can be achieved by introduction of small RNA molecules, called siRNA, which signal cellular machinery to cleave the unwanted mRNA sequences to prevent their transcription. Another method utilises double-stranded oligonucleotides, which bind transcription factors required for transcription of the target gene. By competitively binding these transcription factors, the oligonucleotides can prevent the gene's expression.
ZFNs
Zinc-finger nucleases (ZFNs) are enzymes generated by fusing a zinc finger DNA-binding domain to a DNA-cleavage domain. A zinc-finger array recognizes between 9 and 18 bases of sequence, so by mixing and matching these modules, researchers can target virtually any sequence they wish to alter within complex genomes. A ZFN is a macromolecular complex formed by monomers in which each subunit contains a zinc finger domain and a FokI endonuclease domain. The FokI domains must dimerize to be active, which narrows the target area by requiring two nearby DNA-binding events.
The resulting cleavage event enables most genome-editing technologies to work. After a break is created, the cell seeks to repair it.
One repair pathway is non-homologous end joining (NHEJ), in which the cell polishes the two broken ends of DNA and seals them back together, often producing a frameshift.
An alternative pathway is homology-directed repair, in which the cell fixes the damage using a copy of the sequence as a backup template. By supplying their own template, researchers can have the system insert a desired sequence instead.
The success of using ZFNs in gene therapy depends on the insertion of genes to the chromosomal target area without causing damage to the cell. Custom ZFNs offer an option in human cells for gene correction.
TALENs
Another method, called TALENs, targets single nucleotides. TALEN stands for transcription activator-like effector nuclease. TALENs are made by fusing a TAL effector DNA-binding domain to a DNA-cleavage domain, and their specificity comes from how the TAL modules are arranged. TALENs are "built from arrays of 33-35 amino acid modules…by assembling those arrays…researchers can target any sequence they like". Each module recognizes a single nucleotide, determined by the two variable amino acids it contains, known as the repeat variable diresidue (RVD). This relationship between the amino acids and the target bases enables researchers to engineer a specific DNA-binding domain. The TALEN enzymes are designed to remove specific parts of the DNA strands and replace the section, which enables edits to be made. TALENs can be used to edit genomes using non-homologous end joining (NHEJ) and homology-directed repair.
CRISPR/Cas9
The CRISPR/Cas9 system (CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats, Cas9 – CRISPR-associated protein 9) is a genome editing technology based on the bacterial antiviral CRISPR/Cas system. The bacterial system has evolved to recognize viral nucleic acid sequences and cut these sequences upon recognition, damaging infecting viruses. The gene editing technology uses a simplified version of this process, manipulating the components of the bacterial system to allow location-specific gene editing.
The CRISPR/Cas9 system broadly consists of two major components – the Cas9 nuclease and a guide RNA (gRNA). The gRNA contains a Cas-binding sequence and a ~20 nucleotide spacer sequence, which is specific and complementary to the target sequence on the DNA of interest. Editing specificity can therefore be changed by modifying this spacer sequence.
Upon system delivery to a cell, Cas9 and the gRNA bind, forming a ribonucleoprotein complex. This causes a conformational change in Cas9, allowing it to cleave DNA if the gRNA spacer sequence binds with sufficient homology to a particular sequence in the host genome. When the gRNA binds to the target sequence, Cas9 cleaves the locus, causing a double-strand break (DSB).
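To illustrate how the ~20-nucleotide spacer determines where cleavage can occur, the following minimal Python sketch scans a DNA string for candidate target sites, assuming the common SpCas9 requirement that the protospacer be followed immediately by an NGG PAM; the function name and example sequence are made up for illustration.

import re

def find_cas9_sites(dna, spacer_len=20):
    # Report every (position, spacer, PAM) where a candidate protospacer
    # is followed immediately by an NGG PAM on the given strand.
    sites = []
    for i in range(len(dna) - spacer_len - 2):
        pam = dna[i + spacer_len:i + spacer_len + 3]
        if re.fullmatch('[ACGT]GG', pam):
            sites.append((i, dna[i:i + spacer_len], pam))
    return sites

example = 'ATGCTAGCTAGGATCGATCGATCGATTAGCGGCTAGCTAGCTAACGG'  # made-up sequence
for pos, spacer, pam in find_cas9_sites(example):
    print(pos, spacer, pam)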
The resulting DSB can be repaired by one of two mechanisms –
Non-Homologous End Joining (NHEJ) - an efficient but error-prone mechanism, which often introduces insertions and deletions (indels) at the DSB site. This means it is often used in knockout experiments to disrupt genes and introduce loss of function mutations.
Homology Directed Repair (HDR) - a less efficient but high-fidelity process which is used to introduce precise modifications into the target sequence. The process requires adding a DNA repair template including a desired sequence, which the cell's machinery uses to repair the DSB, incorporating the sequence of interest into the genome.
Since NHEJ is more efficient than HDR, most DSBs will be repaired via NHEJ, introducing gene knockouts. To increase frequency of HDR, inhibiting genes associated with NHEJ and performing the process in particular cell cycle phases (primarily S and G2) appear effective.
CRISPR/Cas9 is an effective way of manipulating the genome in vivo in animals as well as in human cells in vitro, but some issues with the efficiency of delivery and editing mean that it is not considered safe for use in viable human embryos or the body's germ cells. As well as the higher efficiency of NHEJ making inadvertent knockouts likely, CRISPR can introduce DSBs to unintended parts of the genome, called off-target effects. These arise due to the spacer sequence of the gRNA conferring sufficient sequence homology to random loci in the genome, which can introduce random mutations throughout. If performed in germline cells, mutations could be introduced to all the cells of a developing embryo.
There are developments aimed at preventing the unintended consequences of gene editing, known as off-target effects. There is a race to develop new gene-editing technologies that prevent off-target effects from occurring, including biased off-target detection and anti-CRISPR proteins. For biased off-target detection, there are several tools to predict the locations where off-target effects may take place, built on two main models: alignment-based models, in which gRNA sequences are aligned against genome sequences and the off-target locations are then predicted, and scoring-based models, in which each piece of gRNA is scored for its off-target effects according to its positioning.
Regulation on CRISPR use
In 2015, the International Summit on Human Gene Editing was held in Washington D.C., hosted by scientists from China, the UK and the U.S. The summit concluded that genome editing of somatic cells using CRISPR and other genome editing tools would be allowed to proceed under FDA regulations, but human germline engineering would not be pursued.
In February 2016, scientists at the Francis Crick Institute in London were given a license permitting them to edit human embryos using CRISPR to investigate early development. Regulations were imposed to prevent the researchers from implanting the embryos and to ensure experiments were stopped and embryos destroyed after seven days.
In November 2018, Chinese scientist He Jiankui announced that he had performed the first germline engineering on viable human embryos, which have since been brought to term. The research claims received significant criticism, and Chinese authorities suspended He's research activity. Following the event, scientists and government bodies have called for more stringent regulations to be imposed on the use of CRISPR technology in embryos, with some calling for a global moratorium on germline genetic engineering. Chinese authorities have announced stricter controls will be imposed, with Communist Party general secretary Xi Jinping and government premier Li Keqiang calling for new gene-editing legislation to be introduced.
As of January 2020, germline genetic alterations are prohibited in 24 countries by law and also in 9 other countries by their guidelines. The Council of Europe's Convention on Human Rights and Biomedicine, also known as the Oviedo Convention, has stated in its article 13 "Interventions on the human genome" as follows: "An intervention seeking to modify the human genome may only be undertaken for preventive, diagnostic or therapeutic purposes and only if its aim is not to introduce any modification in the genome of any descendants". Nonetheless, wide public debate has emerged, targeting the fact that the Oviedo Convention Article 13 should be revisited and renewed, especially due to the fact that it was constructed in 1997 and may be out of date, given recent technological advancements in the field of genetic engineering.
Lulu and Nana controversy
The Lulu and Nana controversy refers to the two Chinese twin girls born in November 2018, who had been genetically modified as embryos by the Chinese scientist He Jiankui. The twins are believed to be the first genetically modified babies. The girls' parents had participated in a clinical project run by He, which involved IVF, PGD and genome editing procedures in an attempt to edit the gene CCR5. CCR5 encodes a protein used by HIV to enter host cells, so by introducing a specific mutation into the gene, known as CCR5 Δ32, He claimed that the process would confer innate resistance to HIV.
The project run by He recruited couples wanting children where the man was HIV-positive and the woman uninfected. During the project, He performed IVF with sperm and eggs from the couples and then introduced the CCR5 Δ32 mutation into the genomes of the embryos using CRISPR/Cas9. He then used PGD on the edited embryos during which he sequenced biopsied cells to identify whether the mutation had been successfully introduced. He reported some mosaicism in the embryos, whereby the mutation had integrated into some cells but not all, suggesting the offspring would not be entirely protected against HIV. He claimed that during the PGD and throughout the pregnancy, fetal DNA was sequenced to check for off-target errors introduced by the CRISPR/Cas9 technology, however the NIH released a statement in which they announced "the possibility of damaging off-target effects has not been satisfactorily explored". The girls were born in early November 2018, and were reported by He to be healthy.
His research was conducted in secret until November 2018, when documents were posted on the Chinese clinical trials registry and MIT Technology Review published a story about the project. Following this, He was interviewed by the Associated Press and presented his work on 27 November at the Second International Human Genome Editing Summit which was held in Hong Kong.
Although the information available about this experiment is relatively limited, it is deemed that the scientist violated many ethical, social and moral norms, as well as China's guidelines and regulations, which prohibited germ-line genetic modifications in human embryos, in conducting this trial. From a technological point of view, the CRISPR/Cas9 technique is one of the most precise and least expensive methods of gene modification to this day, although a number of limitations still keep the technique from being labelled safe and efficient. During the First International Summit on Human Gene Editing in 2015 the participants agreed that a halt must be set on germline genetic alterations in clinical settings unless and until: "(1) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (2) there is broad societal consensus about the appropriateness of the proposed application". However, during the second International Summit in 2018 the topic was once again brought up by stating: "Progress over the last three years and the discussions at the current summit, however, suggest that it is time to define a rigorous, responsible translational pathway toward such trials". Suggesting that the ethical and legal aspects should indeed be revisited, G. Daley, a representative of the summit's management and Dean of Harvard Medical School, depicted Dr. He's experiment as "a wrong turn on the right path".
The experiment was met with widespread criticism and was very controversial, globally as well as in China. Several bioethicists, researchers and medical professionals have released statements condemning the research, including Nobel laureate David Baltimore who deemed the work "irresponsible" and one pioneer of the CRISPR/Cas9 technology, biochemist Jennifer Doudna at University of California, Berkeley. The director of the NIH, Francis S. Collins stated that the "medical necessity for inactivation of CCR5 in these infants is utterly unconvincing" and condemned He Jiankui and his research team for 'irresponsible work'. Other scientists, including geneticist George Church of Harvard University suggested gene editing for disease resistance was "justifiable" but expressed reservations regarding the conduct of He's work.
The Safe Genes program by DARPA has the goal to protect soldiers against gene editing war tactics. They receive information from ethical experts to better predict and understand future and current potential gene editing issues.
The World Health Organization has launched a global registry to track research on human genome editing, after a call to halt all work on genome editing.
The Chinese Academy of Medical Sciences responded to the controversy in the journal Lancet, condemning He for violating ethical guidelines documented by the government and emphasising that germline engineering should not be performed for reproductive purposes. The academy ensured they would "issue further operational, technical and ethical guidelines as soon as possible" to impose tighter regulation on human embryo editing.
Ethical considerations
Editing embryos, germ cells and the generation of designer babies is the subject of ethical debate, as a result of the implications in modifying genomic information in a heritable manner. This includes arguments over unbalanced gender selection and gamete selection.
Despite regulations set by individual countries' governing bodies, the absence of a standardized regulatory framework leads to frequent discourse in discussion of germline engineering among scientists, ethicists and the general public. Arthur Caplan, the head of the Division of Bioethics at New York University suggests that establishing an international group to set guidelines for the topic would greatly benefit global discussion and proposes instating "religious and ethics and legal leaders" to impose well-informed regulations.
In many countries, editing embryos and germline modification for reproductive use is illegal. As of 2017, the U.S. restricts the use of germline modification and the procedure is under heavy regulation by the FDA and NIH. The American National Academy of Sciences and National Academy of Medicine indicated they would provide qualified support for human germline editing "for serious conditions under stringent oversight", should safety and efficiency issues be addressed. In 2019, the World Health Organization called human germline genome editing "irresponsible".
Since genetic modification poses risk to any organism, researchers and medical professionals must give the prospect of germline engineering careful consideration. The main ethical concern is that these types of treatments will produce a change that can be passed down to future generations and therefore any error, known or unknown, will also be passed down and will affect the offspring. Theologian Ronald Green of Dartmouth College has raised concern that this could result in a decrease in genetic diversity and the accidental introduction of new diseases in the future.
When considering support for research into germline engineering, ethicists have often suggested that it can be considered unethical not to consider a technology that could improve the lives of children who would be born with congenital disorders. Geneticist George Church claims that he does not expect germline engineering to increase societal disadvantage, and recommends lowering costs and improving education surrounding the topic to dispel these views. He emphasizes that allowing germline engineering in children who would otherwise be born with congenital defects could save around 5% of babies from living with potentially avoidable diseases. Jackie Leach Scully, professor of social and bioethics at Newcastle University, acknowledges that the prospect of designer babies could leave those living with diseases and unable to afford the technology feeling marginalized and without medical support. However, Professor Leach Scully also suggests that germline editing provides the option for parents "to try and secure what they think is the best start in life" and does not believe it should be ruled out. Similarly, Nick Bostrom, an Oxford philosopher known for his work on the risks of artificial intelligence, proposed that "super-enhanced" individuals could "change the world through their creativity and discoveries, and through innovations that everyone else would use".
Many bioethicists emphasize that germline engineering is usually considered in the best interest of a child, and that associated research should therefore be supported. Dr James Hughes, a bioethicist at Trinity College, Connecticut, suggests that the decision may not differ greatly from others made by parents which are well accepted – choosing with whom to have a child and using contraception to denote when a child is conceived. Julian Savulescu, a bioethicist and philosopher at Oxford University, believes parents "should allow selection for non‐disease genes even if this maintains or increases social inequality", coining the term procreative beneficence to describe the idea that the children "expected to have the best life" should be selected. The Nuffield Council on Bioethics said in 2017 that there was "no reason to rule out" changing the DNA of a human embryo if performed in the child's interest, but stressed that this was only provided that it did not contribute to societal inequality. Furthermore, the Nuffield Council in 2018 detailed applications which would preserve equality and benefit humanity, such as elimination of hereditary disorders and adjusting to a warmer climate. Philosopher and Director of Bioethics at non-profit Invincible Wellbeing David Pearce argues that "the question [of designer babies] comes down to an analysis of risk-reward ratios - and our basic ethical values, themselves shaped by our evolutionary past." According to Pearce, "it's worth recalling that each act of old-fashioned sexual reproduction is itself an untested genetic experiment", often compromising a child's wellbeing and pro-social capacities even if the child grows in a healthy environment. Pearce thinks that as technology matures, more people may find it unacceptable to rely on the "genetic roulette of natural selection".
Conversely, several concerns have been raised regarding the possibility of generating designer babies, especially concerning the inefficiencies currently presented by the technologies. Green stated that although the technology was "unavoidably in our future", he foresaw "serious errors and health problems as unknown genetic side effects in 'edited' children" arise. Furthermore, Green warned against the possibility that "the well-to-do" could more easily access the technologies "..that make them even better off". This concern regarding germline editing exacerbating a societal and financial divide is shared amongst other researches, with the chair of the Nuffield Bioethics Council Professor Karen Yeung stressing that if funding of the procedures "were to exacerbate social injustice, in our view that would not be an ethical approach".
Social and religious worries also arise over the possibility of editing human embryos. In a survey conducted by the Pew Research Centre, it was found that only a third of the Americans surveyed who identified as strongly Christian approved of germline editing. Catholic leaders are in the middle ground. This stance is because, according to Catholicism, a baby is a gift from God, and Catholics believe that people are created to be perfect in God's eyes. Thus, altering the genetic makeup of an infant is unnatural. In 1984, Pope John Paul II addressed that genetic manipulation in aiming to heal diseases is acceptable in the Church. He stated that it "will be considered in principle as desirable provided that it tends to the real promotion of the personal well-being of man, without harming his integrity or worsening his life conditions". However, it is unacceptable if designer babies are used to create a super/superior race including cloning humans. The Catholic Church rejects human cloning even if its purpose is to produce organs for therapeutic usage. The Vatican has stated that "The fundamental values connected with the techniques of artificial human procreation are two: the life of the human being called into existence and the special nature of the transmission of human life in marriage". According to them, it violates the dignity of the individual and is morally illicit.
A survey conducted by the Mayo Clinic in the Midwestern United States in 2017 found that most of the participants were against the creation of designer babies, with some noting its eugenic undertones. The participants also felt that gene editing may have unintended consequences that could manifest later in life for those who undergo it. Some that took the survey worried that gene editing may lead to a decrease in the genetic diversity of the population in societies. The survey also noted how the participants were worried about the potential socioeconomic effects designer babies may exacerbate. The authors of the survey noted that the results showed a greater need for interaction between the public and the scientific community concerning the possible implications and the recommended regulation of gene editing, as it was unclear to them how much those that participated knew about gene editing and its effects prior to taking the survey.
In Islam, the positive attitude towards genetic engineering is based on the general principle that Islam aims at facilitating human life. However, the negative view comes from the process used to create a designer baby, which oftentimes involves the destruction of some embryos. Muslims believe that the "embryo already has a soul" at conception; thus, the destruction of embryos is against the teaching of the Qur'an, Hadith, and Shari'ah law, which teach a responsibility to protect human life. The procedure would be viewed as "acting like God/Allah". As for the idea that parents could choose the gender of their child, Islamic teaching holds that humans have no decision in choosing gender and that "gender selection is only up to God".
Since 2020, there have been discussions about American studies that applied the CRISPR/Cas9 technique modified with HDR (homology-directed repair) to embryos that were never implanted; the conclusion from the results was that gene editing technologies are currently not mature enough for real-world use and that more studies generating safe results over a longer period of time are needed.
An article in the journal Bioscience Reports discussed how health in terms of genetics is not straightforward, and argued that once the technology is mature enough for real-world use, operations involving gene editing should be deliberated extensively, with all of the potential effects assessed on a case-by-case basis to prevent undesired effects on the subject or patient being operated on.
Social aspects also raise concern, as highlighted by Josephine Quintavelle, director of Comment on Reproductive Ethics at Queen Mary University of London, who states that selecting children's traits is "turning parenthood into an unhealthy model of self-gratification rather than a relationship".
One major worry among scientists, including Marcy Darnovsky at the Center for Genetics and Society in California, is that permitting germline engineering for correction of disease phenotypes is likely to lead to its use for cosmetic purposes and enhancement. Meanwhile, Henry Greely, a bioethicist at Stanford University in California, states that "almost everything you can accomplish by gene editing, you can accomplish by embryo selection", suggesting the risks undertaken by germline engineering may not be necessary. Alongside this, Greely emphasizes that the beliefs that genetic engineering will lead to enhancement are unfounded, and that claims that we will enhance intelligence and personality are far off – "we just don't know enough and are unlikely to for a long time – or maybe for ever".
See also
Biohappiness
Directed evolution (transhumanism)
Epidemiology of genetic disorder
Eugenics
New eugenics
Genetically modified organism
Human enhancement
Human genetic engineering
Human germline engineering
Lulu and Nana (Gene edited babies in China 2018)
Moral enhancement
Reprogenetics
Transhumanism
References
Further reading
A non-fiction account of Strongin's pioneering use of IVF and PGD to have a healthy child whose cord blood could save the life of her son Henry
1989 introductions
2018 introductions
Bioethics
Fertility medicine
Genetic engineering
Genome editing
Human reproduction
Transhumanism | Designer baby | Chemistry,Technology,Engineering,Biology | 7,477 |
2,902,695 | https://en.wikipedia.org/wiki/39%20Arietis | 39 Arietis (abbreviated 39 Ari), officially named Lilii Borea, is a star in the northern constellation of Aries. It is visible to the naked eye with an apparent visual magnitude of +4.5. The distance to this star, as determined from an annual parallax shift of 19.01 mas, is approximately 172 light-years.
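As a check on that figure (this is the standard parallax relation, not a claim specific to this source): the distance in parsecs is the reciprocal of the parallax in arcseconds, so d = 1/p = 1/0.01901″ ≈ 52.6 pc, and 52.6 pc × 3.26 ly/pc ≈ 172 light-years.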
This star was formerly located in the obsolete constellation Musca Borealis.
Nomenclature
39 Arietis is the star's Flamsteed designation.
This star was described as Lilii Borea by Nicolas-Louis de Lacaille in 1757,
as a star of the now-defunct constellation of Lilium (the Lily). The words are simply the Latin phrase Līliī Boreā 'in the north of Lilium'. Līliī Austrīnā 'in the south of Lilium' was 41 Arietis.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Lilii Borea for this star on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, 胃宿 (Wèi Xiù), meaning Stomach, refers to an asterism consisting of 39 Arietis, 35 Arietis and 41 Arietis. Consequently, the Chinese name for 39 Arietis itself is 胃宿三 (Wèi Xiù sān, the Third Star of Stomach).
Properties
39 Arietis is a giant star with a stellar classification of K1.5 III. It is currently at an evolutionary stage known as the red clump, indicating that it is generating energy through the fusion of helium at its core. It has 1.6 times the mass of the Sun, but its outer envelope has expanded to around 10 times the Sun's radius. It shines with 49 times the luminosity of the Sun. This energy is being radiated into outer space from its outer atmosphere at an effective temperature of 4,768 K, giving it the cool orange-hued glow of a K-type star.
See also
Aries (Chinese astronomy)
References
External links
Aladin previewer
Aladin sky atlas
HR 824
K-type giants
Horizontal-branch stars
Aries (constellation)
BD+28 0462
Arietis, 39
017361
013061
0824 | 39 Arietis | Astronomy | 477 |
641,655 | https://en.wikipedia.org/wiki/Human%20interface%20device | A human interface device (HID) is a type of computer device usually used by humans that takes input from or provides output to humans.
The term "HID" most commonly refers to the USB HID specification. The term was coined by Mike Van Flandern of Microsoft when he proposed that the USB committee create a Human Input Device class working group. The working group was renamed as the Human Interface Device class at the suggestion of Tom Schmidt of DEC because the proposed standard supported bi-directional communication.
HID standard
The HID standard was adopted primarily to enable innovation in PC input devices and to simplify the process of installing such devices. Prior to the introduction of the HID concept, devices usually conformed to strictly defined protocols for mice, keyboards and joysticks; for example, the standard mouse protocol at the time supported relative X- and Y-axis data and binary input for up to two buttons, with no legacy support. All hardware innovations necessitated either overloading the use of data in an existing protocol or the creation of custom device drivers and the evangelization of a new protocol to developers. By contrast, all HID-defined devices deliver self-describing packages that may contain any number of data types and formats. A single HID driver on a computer parses data and enables dynamic association of data I/O with application functionality, which has enabled rapid innovation and development, and prolific diversification of new human-interface devices.
A working committee with representatives from several prominent companies developed the HID standard. The list of participants appears in the "Device Class Definition for Human Interface Devices (HID)"
document. The concept of a self-describing extensible protocol initially came from Mike Van Flandern and Manolito Adan while working on a project named "Raptor" at Microsoft, and independently from Steve McGowan, who worked on a "SIM" project that defined a device protocol for the VFX1 VR Headset and its peripherals based on ACCESS.bus while at Forte Technologies. SIM was also self-describing and extensible, however it was more focused on SIMulation devices used for VR and motion capture. After comparing notes at a Consumer Game Developer Conference, Steve and Mike agreed to collaborate on a new standard for the emerging Universal Serial Bus (USB).
Prior to HID (c.1995), proprietary drivers needed to be installed for almost every device attached to a PC. This meant that device vendors needed to track OS releases, to regularly offer updated drivers for their devices, and to develop drivers for each OS that they wanted to support. Also, at the time any novel device, e.g., a joystick designed for flight simulators with extra buttons or D-pads, required software support not only in the driver, but in each game that supported it to enable the new controls. This meant that the device developers had the additional responsibility of enabling each game that they wanted to support. The ability for a HID device to describe itself via a Report Descriptor decoupled hardware device developers from game developers. The Report Descriptor concept also meant that OS vendors could write a HID driver (parser) that could accommodate almost any HID device a vendor could dream up, without the vendor needing to write or maintain a driver for every OS that they wanted to support.
So the HID class decoupled device vendors from game and OS vendors, enabling device vendors to innovate faster, and reducing their development costs (e.g., no drivers or game developer support). The HID Usage Table document defines thousands of controls that can be presented by HID devices. Game vendors can query the OS's HID parser to identify the set of controls that are presented by a device, then map those controls to features in their game. Since its original release, the HID Usage Table (HUT) document has had hundreds of new uses added to it.
The HID protocol has its limitations, but all modern mainstream operating systems will recognize standard USB HID devices, such as keyboards and mice, without needing a specialized driver. Its versatility is demonstrated by the fact that, although it has not been updated in over 22 years, it is still supported by every PC, tablet and cell phone in production today. USB, and hence HID, devices can be hot-plugged, so when one is installed, a message saying that "A 'HID-compliant device' has been recognized" generally appears on screen. In comparison, this message did not usually appear for devices connected via the PS/2 6-pin DIN connectors which preceded USB. PS/2 did not typically support plug-and-play, which means that connecting a PS/2 keyboard or mouse with the computer powered on does not always work and may pose a hazard to the computer's motherboard. Likewise, the PS/2 standard did not support the HID protocol. The USB human interface device class describes a USB HID.
The HID protocol (Report Descriptor and Report mechanism) has been implemented on many buses other than USB, including Bluetooth and I2C.
There are also a number of extensions to HID defined in "HID Integrated Usage Table Documents", including uninterruptible power supplies, video monitor controls, point of sale devices, arcade and gaming (slot machines) devices.
Report Descriptor
The Report Descriptor exposes the messages that are generated or accepted by a HID device. Each message is referred to as a 'Report'. Reports can define bits/controls in a device that can be read or written, or generated periodically to keep a host updated on the current status of the device. For instance, a mouse typically generates a Report 200 times a second to inform the host of any movement or button presses. Report Descriptors are 'bit orientated', meaning that controls can present between 1 and 32 bits of information. Each control defined in a Report Descriptor has an ID and defines its size and position in its Report. A Report Descriptor can define many Reports, each reporting a different set, or combination of information.
For example, a basic mouse defines a 3-byte Report where the least significant (0) bit of the report is the left button, the next (1) bit is the middle button, and the third (2) bit is the right button. To allow the mouse's 8-bit X and Y position coordinates to conveniently land on byte boundaries, a 5-bit 'pad' is defined. Then the X coordinate is defined as an 8-bit relative value (i.e., the number of 'mickeys' since the previous report) that resides in bit positions 8 through 15, and the Y coordinate is defined as an 8-bit relative value that resides in bit positions 16 through 23, resulting in a 3-byte data packet that is presented to the host.
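The packing just described can be illustrated by decoding such a report in software. The following Python sketch is illustrative only (the function name and sample bytes are invented), and it follows the button ordering used above (bit 0 left, bit 1 middle, bit 2 right):

def parse_mouse_report(report: bytes):
    buttons = report[0]
    left = bool(buttons & 0x01)    # bit 0: left button
    middle = bool(buttons & 0x02)  # bit 1: middle button
    right = bool(buttons & 0x04)   # bit 2: right button
    # Bits 3-7 are the 5-bit pad; X and Y are 8-bit signed relative values.
    dx = int.from_bytes(report[1:2], "little", signed=True)
    dy = int.from_bytes(report[2:3], "little", signed=True)
    return left, middle, right, dx, dy

print(parse_mouse_report(bytes([0b00000101, 0xFF, 0x02])))
# -> (True, False, True, -1, 2): left and right pressed, X moved -1, Y moved +2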
A Report Descriptor is extremely versatile, allowing a device to specify the resolution, range, and many other characteristics of each control that it presents. Being designed at a time when a mouse or keyboard controller was lucky to have 1 KB of ROM for all its code and data, the Report Descriptor syntax has many features that allow its size to be minimized; e.g., selected control parameters can persist across multiple control definitions, only needing to be redeclared if their value changes. The reports generated by a basic mouse can be described in 50 bytes, and those of a 104-key keyboard in 65 bytes.
Physical Descriptor
A little known or understood feature of HID is the Physical Descriptor. The Physical Descriptor is used to define the parts of the human body that interact with the individual controls defined in the Report Descriptor. When controlling a game, the index finger and thumb are usually used to invoke repetitive actions. Because these fingers are considered to have the fastest 'twitch' response, they are typically used for pulling the trigger of a gun, or activating an often-used game function. The Physical Descriptor allows a device vendor to identify which finger rests on each control and to prioritize the set of controls that can be reached by an individual finger. This feature enables a game vendor to intelligently present the best default button mapping, even for a device that didn't exist when the game was developed. The Physical Descriptor also enables full body motion capture information to be presented by a HID device, i.e., the angle, orientation, and relative or absolute position of any joint in the human body. And through the Report Descriptor, the motion capture data can be presented at whatever resolution the device can support.
Components of the HID protocol
In the HID protocol, there are 2 entities: the "host" and the "device". The device is the entity that directly interacts with a human, such as a keyboard or mouse. The host communicates with the device and receives input data from the device on actions performed by the human. Output data flows from the host to the device and then to the human. The most common example of a host is a PC but some cell phones and PDAs also can be hosts.
The HID protocol makes implementation of devices very simple. Devices define their data packets and then present a "HID descriptor" to the host. The HID descriptor is a hard coded array of bytes that describes the device's data packets. This includes: how many packets the device supports, the size of the packets, and the purpose of each byte and bit in the packet. For example, a keyboard with a calculator program button can tell the host that the button's pressed/released state is stored as the 2nd bit in the 6th byte in data packet number 4 (note: these locations are only illustrative and are device-specific). The device typically stores the HID descriptor in ROM and does not need to intrinsically understand or parse the HID descriptor. Some mouse and keyboard hardware in the market today is implemented using only an 8-bit CPU.
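As an illustration of such a hard-coded array of bytes, the sketch below spells out a report descriptor matching the three-button mouse described earlier. It is modeled on the well-known mouse example in the HID specification's appendix; the constant name is invented here, and real devices vary in the exact items they include:

MOUSE_REPORT_DESCRIPTOR = bytes([
    0x05, 0x01,  # Usage Page (Generic Desktop)
    0x09, 0x02,  # Usage (Mouse)
    0xA1, 0x01,  # Collection (Application)
    0x09, 0x01,  #   Usage (Pointer)
    0xA1, 0x00,  #   Collection (Physical)
    0x05, 0x09,  #     Usage Page (Buttons)
    0x19, 0x01,  #     Usage Minimum (Button 1)
    0x29, 0x03,  #     Usage Maximum (Button 3)
    0x15, 0x00,  #     Logical Minimum (0)
    0x25, 0x01,  #     Logical Maximum (1)
    0x95, 0x03,  #     Report Count (3)  - three 1-bit button fields
    0x75, 0x01,  #     Report Size (1)
    0x81, 0x02,  #     Input (Data, Variable, Absolute)
    0x95, 0x01,  #     Report Count (1)  - one 5-bit padding field
    0x75, 0x05,  #     Report Size (5)
    0x81, 0x01,  #     Input (Constant)
    0x05, 0x01,  #     Usage Page (Generic Desktop)
    0x09, 0x30,  #     Usage (X)
    0x09, 0x31,  #     Usage (Y)
    0x15, 0x81,  #     Logical Minimum (-127)
    0x25, 0x7F,  #     Logical Maximum (127)
    0x75, 0x08,  #     Report Size (8)
    0x95, 0x02,  #     Report Count (2)  - relative X, then relative Y
    0x81, 0x06,  #     Input (Data, Variable, Relative)
    0xC0,        #   End Collection
    0xC0,        # End Collection
])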
The host is expected to be a more complex entity than the device. The host needs to retrieve the HID descriptor from the device and parse it before it can fully communicate with the device. Parsing the HID descriptor can be complicated. Multiple operating systems were known to have shipped bugs in the device drivers responsible for parsing the HID descriptors years after the device drivers were originally released to the public. However, this complexity is the reason why rapid innovation with HID devices is possible.
The above mechanism describes what is known as HID "report protocol". Because it was understood that not all hosts would be capable of parsing HID descriptors, HID also defines "boot protocol". In boot protocol, only specific devices are supported with only specific features because fixed data packet formats are used. The HID descriptor is not used in this mode so innovation is limited. However, the benefit is that minimal functionality is still possible on hosts that otherwise would be unable to support HID. The only devices supported in boot protocol are
Keyboard – Any of the first 256 key codes ("Usages") defined in the HID Usage Tables, Usage Page 7 can be reported by a keyboard using the boot protocol, but most systems only handle a subset of these keys. Most systems support all 104 keys on the IBM AT-101 layout, plus the three extra keys designed for Microsoft Windows 95 (the left and right Windows key, and the Menu key). Many systems also support additional keys on basic western European 105-, Korean 106-, Brazilian ABNT 107- and Japanese DOS/V 109-key layouts. Buttons, knobs and keys that are not reported on Usage Page 7 are not available. For example, a particular US keyboard's QWERTY keys will function but the Calculator and Logoff keys will not because they are defined on Usage Page 12 and cannot be reported in boot protocol.
Mouse – Only the X-axis, Y-axis, and the first 3 buttons will be available. Any additional features on the mouse will not function.
One common usage of boot mode is during the first moments of a computer's boot up sequence. Directly configuring a computer's BIOS is often done using only boot mode.
Sometimes a message will appear informing the user that the device has installed the correct driver and is now usable.
Definition of a device
According to the HID specification, a device is described, during report mode, as a set of controls or group of controls.
Controls are identified by a field containing the data and another containing a usage tag.
Each usage tag is described in the specification as the constructor's suggested use of the data described in report mode.
The HID Descriptor Tool is a Windows app that can be used to generate all the descriptors associated with a HID device (see link below). It performs syntax checking, and can generate C, C header and binary files for the HID descriptors. Its text-based Usage Table definition files can also be easily extended to define proprietary Usages (control types) or Usage Tables (the set of Usages associated with a device or feature).
Other protocols using HID
Since HID's original definition over USB, HID is now also used in other computer communication buses. This enables HID devices that traditionally were only found on USB to also be used on alternative buses. This is done since existing support for USB HID devices can typically be adapted much faster than having to invent an entirely new protocol to support mouse, touchpad, keyboards, and the like. Known buses that use HID are:
Bluetooth HID – Used for mouse and keyboards that are connected via Bluetooth
Serial HID – Used in Microsoft's Windows Media Center PC remote control receivers.
Zigbee input device – Zigbee (RF4CE) supports HID devices through the Zigbee input device profile.
HID over I²C – Used for embedded devices in Microsoft Windows 8
HID over SPI – Developed by Microsoft for faster, lower latency fixed-device communications
HOGP (HID over GATT) – Used for HID devices connected using Bluetooth Low Energy technology
See also
Human interface guidelines
Human–computer interaction
USB human interface device class
Graphical user interface builder
Linux on the desktop
Peripheral
Tangible user interface
References
External links
HID developers forum, USB.org
HID Device Class Definition 1.11 Specification, USB.org
HID Usage Tables 1.4 Specification, USB.org
HID Integrated Usage Table Documents, USB.org
HID Descriptor Tool, USB.org
Human–computer interaction | Human interface device | Engineering | 2,933 |
33,625,523 | https://en.wikipedia.org/wiki/Pavonia%20%C3%97%20gledhillii | Pavonia × gledhillii is an evergreen flowering plant in the mallow family, Malvaceae.
Etymology
The generic name honours the Spanish botanist José Antonio Pavón Jiménez (1754–1844). The epithet gledhillii comes from Dr. David Gledhill, curator of the University of Bristol Botanic Garden in 1989.
Description
Pavonia × gledhillii is a 19th-century hybrid of Pavonia makoyana E. Morren and Pavonia multiflora A. Juss., and is often incorrectly referred to as Pavonia multiflora.
This subshrub is intermediate between its two parent species in almost all respects, but it has nine to ten equal broad bracts and sub-entire leaf margins. It can reach a height of about . The unusual flowers are purple-grey, enclosed within a bright red calyx. The flowering period is late summer.
Gallery
References
M. Cheek – A New Name for a South American Pavonia (Malvaceae) – Kew Bulletin – Vol. 44, No. 1, 1989
External links
Hibisceae
Hybrid plants | Pavonia × gledhillii | Biology | 228 |
54,458,210 | https://en.wikipedia.org/wiki/Princeton%20%28electronics%20company%29 | is a Japanese company headquartered in Tokyo, Japan, that offers computer hardware and electronics products.
Overview
Princeton Technology Ltd. was originally established in 1995.
The company is a fabless manufacturer: it designs products that are then produced by contract manufacturers in Taiwan and China. The company offers flash memory products (SD cards, USB flash drives), DRAM, LCDs, LED displays, hard disk drives, NAS and other electronic products. Princeton products are sold mostly in Japan, but can also be found online on websites such as Amazon. The business type and scope are the same as those of Green House, Elecom and Buffalo, also based in Japan. In 2014, the company name was changed from Princeton Technology Ltd. to Princeton Ltd.
As a computer hardware supplier, Princeton has supplied various flash memory and DRAM products to major electronics companies in Japan, such as Sony, Panasonic and Toshiba.
Princeton is also the official agent of Cisco, Polycom, Edgewater Networks, Proware Technology, Drobo, and others, and has introduced several cloud collaboration systems and SAN systems in Japan. For example, the company has provided IT solutions for education systems by installing Cisco and Edgewater Networks cloud collaboration products, and SAN systems by installing Princeton, Proware Technology and Drobo NAS products.
See also
List of companies of Japan
References
External links
Official Website
Computer companies established in 1995
Computer hardware companies
Computer memory companies
Computer peripheral companies
Computer storage companies
Electronics companies of Japan
Japanese brands
Japanese companies established in 1995 | Princeton (electronics company) | Technology | 301 |
44,234,093 | https://en.wikipedia.org/wiki/Lichen%20stromatolite | Lichen stromatolites are laminar calcretes that are proposed as being formed by a sequence of repetitions of induration followed by lichen colonization. Endolithic lichens inhabit areas between grains of rock, chemically and physically weathering that rock, leaving a rind, which is then indurated (hardened), then recolonized.
See also
Stromatolite
References
Lichenology
Carbonates | Lichen stromatolite | Biology | 92 |
3,566,883 | https://en.wikipedia.org/wiki/Particle%20physics%20and%20representation%20theory | There is a natural connection between particle physics and representation theory, as first noted in the 1930s by Eugene Wigner. It links the properties of elementary particles to the structure of Lie groups and Lie algebras. According to this connection, the different quantum states of an elementary particle give rise to an irreducible representation of the Poincaré group. Moreover, the properties of the various particles, including their spectra, can be related to representations of Lie algebras, corresponding to "approximate symmetries" of the universe.
General picture
Symmetries of a quantum system
In quantum mechanics, any particular one-particle state is represented as a vector in a Hilbert space H. To help understand what types of particles can exist, it is important to classify the possibilities for H allowed by symmetries, and their properties. Let H be a Hilbert space describing a particular quantum system and let G be a group of symmetries of the quantum system. In a relativistic quantum system, for example, G might be the Poincaré group, while for the hydrogen atom, G might be the rotation group SO(3). The particle state is more precisely characterized by the associated projective Hilbert space P(H), also called ray space, since two vectors that differ by a nonzero scalar factor correspond to the same physical quantum state represented by a ray in Hilbert space, which is an equivalence class in H and, under the natural projection map H → P(H), an element of P(H).
By definition of a symmetry of a quantum system, there is a group action on P(H). For each g in G, there is a corresponding transformation V(g) of P(H). More specifically, if g is some symmetry of the system (say, rotation about the x-axis by 12°), then the corresponding transformation V(g) of P(H) is a map on ray space. For example, when rotating a stationary (zero momentum) spin-5 particle about its center, g is a rotation in 3D space (an element of SO(3)), while V(g) is an operator whose domain and range are each the space of possible quantum states of this particle, in this example the projective space P(H) associated with an 11-dimensional complex Hilbert space H.
Each map V(g) preserves, by definition of symmetry, the ray product on P(H) induced by the inner product on H; according to Wigner's theorem, this transformation of P(H) comes from a unitary or anti-unitary transformation U(g) of H. Note, however, that the U(g) associated to a given V(g) is not unique, but only unique up to a phase factor. The composition of the operators U(g) should, therefore, reflect the composition law in G, but only up to a phase factor:

$U(g_1) U(g_2) = e^{i\theta(g_1, g_2)} U(g_1 g_2),$

where $\theta(g_1, g_2)$ will depend on $g_1$ and $g_2$. Thus, the map sending g to U(g) is a projective unitary representation of G, or possibly a mixture of unitary and anti-unitary, if G is disconnected. In practice, anti-unitary operators are always associated with time-reversal symmetry.
Ordinary versus projective representations
It is important physically that in general U does not have to be an ordinary representation of G; it may not be possible to choose the phase factors in the definition of U(g) to eliminate the phase factors in their composition law. An electron, for example, is a spin-one-half particle; its Hilbert space consists of wave functions on R^3 with values in a two-dimensional spinor space. The action of SO(3) on the spinor space is only projective: it does not come from an ordinary representation of SO(3). There is, however, an associated ordinary representation of SU(2), the universal cover of SO(3), on spinor space.
For many interesting classes of groups G, Bargmann's theorem tells us that every projective unitary representation of G comes from an ordinary representation of the universal cover of G. Actually, if H is finite dimensional, then regardless of the group G, every projective unitary representation of G comes from an ordinary unitary representation of the universal cover. If H is infinite dimensional, then to obtain the desired conclusion, some algebraic assumptions must be made on G (see below). In this setting the result is a theorem of Bargmann. Fortunately, in the crucial case of the Poincaré group, Bargmann's theorem applies. (See Wigner's classification of the representations of the universal cover of the Poincaré group.)
The requirement referred to above is that the Lie algebra of G does not admit a nontrivial one-dimensional central extension. This is the case if and only if the second cohomology group of the Lie algebra is trivial. In this case, it may still be true that the group admits a central extension by a discrete group. But extensions of G by discrete groups are covers of G. For instance, the universal cover G̃ is related to G through the quotient G ≅ G̃/Γ, where the central subgroup Γ lies in the center of G̃ and is isomorphic to the fundamental group of the covered group.
Thus, in favorable cases, the quantum system will carry a unitary representation of the universal cover G̃ of the symmetry group G. This is desirable because H is much easier to work with than the non-vector space P(H). If the representations of G̃ can be classified, much more information about the possibilities and properties of H is available.
The Heisenberg case
An example in which Bargmann's theorem does not apply comes from a quantum particle moving in R^n. The group of translational symmetries of the associated phase space, R^{2n}, is the commutative group R^{2n}. In the usual quantum mechanical picture, the symmetry is not implemented by a unitary representation of R^{2n}. After all, in the quantum setting, translations in position space and translations in momentum space do not commute. This failure to commute reflects the failure of the position and momentum operators—which are the infinitesimal generators of translations in momentum space and position space, respectively—to commute. Nevertheless, translations in position space and translations in momentum space do commute up to a phase factor. Thus, we have a well-defined projective representation of R^{2n}, but it does not come from an ordinary representation of R^{2n}, even though R^{2n} is simply connected.
In this case, to obtain an ordinary representation, one has to pass to the Heisenberg group, which is a nontrivial one-dimensional central extension of R^{2n}.
Poincaré group
The group of translations and Lorentz transformations form the Poincaré group, and this group should be a symmetry of a relativistic quantum system (neglecting general relativity effects, or in other words, in flat spacetime). Representations of the Poincaré group are in many cases characterized by a nonnegative mass and a half-integer spin (see Wigner's classification); this can be thought of as the reason that particles have quantized spin. (Note that there are in fact other possible representations, such as tachyons, infraparticles, etc., which in some cases do not have quantized spin or fixed mass.)
Other symmetries
While the spacetime symmetries in the Poincaré group are particularly easy to visualize and believe, there are also other types of symmetries, called internal symmetries. One example is color SU(3), an exact symmetry corresponding to the continuous interchange of the three quark colors.
Lie algebras versus Lie groups
Many (but not all) symmetries or approximate symmetries form Lie groups. Rather than study the representation theory of these Lie groups, it is often preferable to study the closely related representation theory of the corresponding Lie algebras, which are usually simpler to compute.
Now, representations of the Lie algebra correspond to representations of the universal cover of the original group. In the finite-dimensional case—and the infinite-dimensional case, provided that Bargmann's theorem applies—irreducible projective representations of the original group correspond to ordinary unitary representations of the universal cover. In those cases, computing at the Lie algebra level is appropriate. This is the case, notably, for studying the irreducible projective representations of the rotation group SO(3). These are in one-to-one correspondence with the ordinary representations of the universal cover SU(2) of SO(3). The representations of SU(2) are then in one-to-one correspondence with the representations of its Lie algebra su(2), which is isomorphic to the Lie algebra so(3) of SO(3).
Thus, to summarize, the irreducible projective representations of SO(3) are in one-to-one correspondence with the irreducible ordinary representations of its Lie algebra so(3). The two-dimensional "spin 1/2" representation of the Lie algebra so(3), for example, does not correspond to an ordinary (single-valued) representation of the group SO(3). (This fact is the origin of statements to the effect that "if you rotate the wave function of an electron by 360 degrees, you get the negative of the original wave function.") Nevertheless, the spin 1/2 representation does give rise to a well-defined projective representation of SO(3), which is all that is required physically.
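The 360-degree statement above can be checked numerically. This is an illustrative sketch (not from the source), using SciPy's matrix exponential on the spin-1/2 generator Sz = sigma_z / 2:

import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
R_full_turn = expm(-1j * 2 * np.pi * (sigma_z / 2))  # rotation by 2*pi about z
print(np.round(R_full_turn.real, 6))  # -> [[-1, 0], [0, -1]]: minus the identity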
Approximate symmetries
Although the above symmetries are believed to be exact, other symmetries are only approximate.
Hypothetical example
As an example of what an approximate symmetry means, suppose an experimentalist lived inside an infinite ferromagnet, with magnetization in some particular direction. The experimentalist in this situation would find not one but two distinct types of electrons: one with spin along the direction of the magnetization, with a slightly lower energy (and consequently, a lower mass), and one with spin anti-aligned, with a higher mass. Our usual SO(3) rotational symmetry, which ordinarily connects the spin-up electron with the spin-down electron, has in this hypothetical case become only an approximate symmetry, relating different types of particles to each other.
General definition
In general, an approximate symmetry arises when there are very strong interactions that obey that symmetry, along with weaker interactions that do not. In the electron example above, the two "types" of electrons behave identically under the strong and weak forces, but differently under the electromagnetic force.
Example: isospin symmetry
An example from the real world is isospin symmetry, an SU(2) group corresponding to the similarity between up quarks and down quarks. This is an approximate symmetry: while up and down quarks are identical in how they interact under the strong force, they have different masses and different electroweak interactions. Mathematically, there is an abstract two-dimensional vector space spanned by the up-quark and down-quark states,

$u \mapsto \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad d \mapsto \begin{pmatrix} 0 \\ 1 \end{pmatrix},$

and the laws of physics are approximately invariant under applying a determinant-1 unitary transformation to this space:

$\begin{pmatrix} u \\ d \end{pmatrix} \mapsto A \begin{pmatrix} u \\ d \end{pmatrix}, \qquad A \in \mathrm{SU}(2).$

For example, the transformation $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ would turn all up quarks in the universe into down quarks and vice versa. Some examples help clarify the possible effects of these transformations:
When these unitary transformations are applied to a proton, it can be transformed into a neutron, or into a superposition of a proton and neutron, but not into any other particles. Therefore, the transformations move the proton around a two-dimensional space of quantum states. The proton and neutron are called an "isospin doublet", mathematically analogous to how a spin-½ particle behaves under ordinary rotation.
When these unitary transformations are applied to any of the three pions (, , and ), it can change any of the pions into any other, but not into any non-pion particle. Therefore, the transformations move the pions around a three-dimensional space of quantum states. The pions are called an "isospin triplet", mathematically analogous to how a spin-1 particle behaves under ordinary rotation.
These transformations have no effect at all on an electron, because it contains neither up nor down quarks. The electron is called an isospin singlet, mathematically analogous to how a spin-0 particle behaves under ordinary rotation.
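A concrete numerical sketch of the first example above (the names and the particular rotation are chosen for illustration): an isospin rotation is a determinant-1 unitary of the form exp(-i θ/2 σ) built from the Pauli matrices, and a rotation by 180 degrees about the isospin y-axis carries the proton into the neutron:

import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0, -1j], [1j, 0]])
U = expm(-1j * (np.pi / 2) * sigma_y)  # isospin rotation by 180 degrees

proton = np.array([1.0, 0.0])          # the "up" member of the nucleon doublet
print(np.round(U @ proton, 3))         # -> [0, 1] (up to rounding): a neutron
print(np.round(np.linalg.det(U), 3))   # -> 1: determinant 1, so U is in SU(2)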
In general, particles form isospin multiplets, which correspond to irreducible representations of the Lie algebra su(2). Particles in an isospin multiplet have very similar but not identical masses, because the up and down quarks are very similar but not identical.
Example: flavour symmetry
Isospin symmetry can be generalized to flavour symmetry, an SU(3) group corresponding to the similarity between up quarks, down quarks, and strange quarks. This is, again, an approximate symmetry, violated by quark mass differences and electroweak interactions—in fact, it is a poorer approximation than isospin, because of the strange quark's noticeably higher mass.
Nevertheless, particles can indeed be neatly divided into groups that form irreducible representations of the Lie algebra su(3), as first noted by Murray Gell-Mann and independently by Yuval Ne'eman.
See also
Charge (physics)
Representation theory:
Of Lie algebras
Of Lie groups
Projective representation
Special unitary group
Notes
References
Coleman, Sidney (1985) Aspects of Symmetry: Selected Erice Lectures of Sidney Coleman. Cambridge Univ. Press. .
Georgi, Howard (1999) Lie Algebras in Particle Physics. Reading, Massachusetts: Perseus Books. .
.
.
Sternberg, Shlomo (1994) Group Theory and Physics. Cambridge Univ. Press. . Especially pp. 148–150.
Especially appendices A and B to Chapter 2.
External links
Lie algebras
Representation theory of Lie groups
Conservation laws
Quantum field theory | Particle physics and representation theory | Physics | 2,684 |
8,812,149 | https://en.wikipedia.org/wiki/CXCL2 | Chemokine (C-X-C motif) ligand 2 (CXCL2) is a small cytokine belonging to the CXC chemokine family that is also called macrophage inflammatory protein 2-alpha (MIP2-alpha), growth-regulated protein beta (Gro-beta) and Gro oncogene-2 (Gro-2). CXCL2 is 90% identical in amino acid sequence to the related chemokine CXCL1. This chemokine is secreted by monocytes and macrophages and is chemotactic for polymorphonuclear leukocytes and hematopoietic stem cells. The gene for CXCL2 is located on human chromosome 4 in a cluster of other CXC chemokines. CXCL2 mobilizes cells by interacting with a cell surface chemokine receptor called CXCR2.
CXCL2, like related chemokines, is also a powerful neutrophil chemoattractant and is involved in many immune responses, including wound healing, cancer metastasis, and angiogenesis. A study published in 2013 tested the role of CXCL2, CXCL3, and CXCL1 in the migration of airway smooth muscle cells (ASMCs), which plays a significant role in asthma. The results of this study showed that CXCL2 and CXCL3 both help mediate normal and asthmatic ASMC migration, through different mechanisms.
Clinical development
CXCL2 in combination with the CXCR4 inhibitor plerixafor rapidly mobilizes hematopoietic stem cells into the peripheral blood.
This rapid peripheral blood stem cell mobilization regimen entered Phase 2 clinical trials in 2021 in development by Magenta Therapeutics as a new method to collect stem cells for bone marrow transplantation.
References
Cytokines | CXCL2 | Chemistry | 406 |
49,367,970 | https://en.wikipedia.org/wiki/Algebroid%20function | In mathematics, an algebroid function is a solution of an algebraic equation whose coefficients are analytic functions. So y(z) is an algebroid function if it satisfies

$A_0(z) y^d + A_1(z) y^{d-1} + \cdots + A_d(z) = 0,$

where the coefficients $A_k(z)$ are analytic. If this equation is irreducible, then the function is d-valued, and can be defined on a Riemann surface having d sheets. For example, y(z) = z^{1/2} satisfies y^2 - z = 0, so it is a 2-valued algebroid function, defined on the familiar two-sheeted Riemann surface of the square root.
References
Analytic functions
Equations | Algebroid function | Mathematics | 77 |
21,787,029 | https://en.wikipedia.org/wiki/Fuzzy%20pay-off%20method%20for%20real%20option%20valuation | The fuzzy pay-off method for real option valuation (FPOM or pay-off method) is a method for valuing real options, developed by Mikael Collan, Robert Fullér, and József Mezei; and published in 2009. It is based on the use of fuzzy logic and fuzzy numbers for the creation of the possible pay-off distribution of a project (real option). The structure of the method is similar to the probability theory based Datar–Mathews method for real option valuation, but the method is not based on probability theory and uses fuzzy numbers and possibility theory in framing the real option valuation problem.
Method
The Fuzzy pay-off method derives the real option value from a pay-off distribution that is created by using three or four cash-flow scenarios (most often created by an expert or a group of experts). The pay-off distribution is created simply by assigning each of the three cash-flow scenarios a corresponding definition with regards to a fuzzy number (triangular fuzzy number for three scenarios and a trapezoidal fuzzy number for four scenarios). This means that the pay-off distribution is created without any simulation whatsoever. This makes the procedure easy and transparent. The scenarios used are a minimum possible scenario (the lowest possible outcome), the maximum possible scenario (the highest possible outcome) and a best estimate (most likely to happen scenario) that is mapped as a fully possible scenario with a full degree of membership in the set of possible outcomes, or in the case of four scenarios used - two best estimate scenarios that are the upper and lower limit of the interval that is assigned a full degree of membership in the set of possible outcomes.
The main observations that lie behind the model for deriving the real option value are the following:
The fuzzy NPV of a project is (equal to) the pay-off distribution of a project value that is calculated with fuzzy numbers.
The mean value of the positive values of the fuzzy NPV is the "possibilistic" mean value of the positive fuzzy NPV values.
Real option value, ROV, calculated from the fuzzy NPV is the "possibilistic" mean value of the positive fuzzy NPV values multiplied with the positive area of the fuzzy NPV over the total area of the fuzzy NPV.
The real option formula can then be written simply as:

$ROV = \frac{A(\mathrm{Pos})}{A(\mathrm{Pos}) + A(\mathrm{Neg})} \times E[A^+],$

where A(Pos) is the area of the positive part of the fuzzy distribution, A(Neg) is the area of the negative part of the fuzzy distribution, and E[A+] is the mean value of the positive part of the distribution. It can be seen that when the distribution is totally positive, the real option value reduces to the expected (mean) value, E[A+].
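A numerical sketch of this formula for a triangular fuzzy NPV follows. The function name, the discretization, and the level-cut treatment of E[A+] are this sketch's own reading of the method (the original paper gives closed-form formulas case by case), and positive widths alpha and beta are assumed:

import numpy as np

def _trapz(y, x):
    # Trapezoidal rule; avoids np.trapz, which newer NumPy versions renamed.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def rov_triangular(a, alpha, beta, n=100_001):
    # Triangular fuzzy NPV with peak a, left width alpha, right width beta.
    lo, hi = a - alpha, a + beta
    x = np.linspace(lo, hi, n)
    mu = np.where(x <= a, (x - lo) / alpha, (hi - x) / beta)  # membership function
    a_pos = _trapz(np.where(x > 0, mu, 0.0), x)  # A(Pos)
    a_neg = _trapz(np.where(x < 0, mu, 0.0), x)  # A(Neg)
    # Possibilistic mean of the positive side via gamma-level cuts:
    # E = integral over gamma in [0, 1] of gamma * (a1(gamma) + a2(gamma)),
    # with the cut endpoints clipped at zero.
    g = np.linspace(0.0, 1.0, 1001)
    a1 = np.maximum(a - (1 - g) * alpha, 0.0)
    a2 = np.maximum(a + (1 - g) * beta, 0.0)
    e_pos = _trapz(g * (a1 + a2), g)
    return e_pos * a_pos / (a_pos + a_neg)

# Fully positive distribution: ROV reduces to the possibilistic mean a + (beta - alpha) / 6.
print(round(rov_triangular(a=10.0, alpha=5.0, beta=8.0), 3))  # -> 10.5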
As can be seen, the real option value can be derived directly from the fuzzy NPV, without simulation. At the same time, simulation is not an absolutely necessary step in the Datar–Mathews method, so the two methods are not very different in that respect. But what is totally different is that the Datar–Mathews method is based on probability theory and as such has a very different foundation from the pay-off method that is based on possibility theory: the way that the two models treat uncertainty is fundamentally different.
Use of the method
The pay-off method for real option valuation is very easy to use compared to the other real option valuation methods and it can be used with the most commonly used spreadsheet software without any add-ins. The method is useful in analyses for decision making regarding investments that have an uncertain future, and especially so if the underlying data is in the form of cash-flow scenarios. The method is less useful if optimal timing is the objective. The method is flexible and accommodates easily both one-stage investments and multi-stage investments (compound real options).
The method has been taken into use in some large international industrial companies for the valuation of research and development projects and portfolios. In these analyses triangular fuzzy numbers are used. Other uses of the method so far include, for example, R&D project valuation, IPR valuation, valuation of M&A targets and expected synergies, valuation and optimization of M&A strategies, valuation of area development (construction) projects, and valuation of large industrial real investments.
The use of the pay-off method is now taught within the larger framework of real options, for example at the Lappeenranta University of Technology and at the Tampere University of Technology in Finland.
References
External links
Real options
Fuzzy logic
Financial models | Fuzzy pay-off method for real option valuation | Engineering | 926 |
23,089,402 | https://en.wikipedia.org/wiki/Bismuth%20pentafluoride | Bismuth pentafluoride is an inorganic compound with the formula BiF5. It is a white solid that is highly reactive. The compound is of interest to researchers but not of particular value.
Structure
BiF5 is polymeric and consists of linear chains of trans-bridged corner sharing BiF6 octahedra. This is the same structure as α-UF5.
Preparation
BiF5 can be prepared by treating BiF3 with F2 at 500 °C.
BiF3 + F2 → BiF5
In an alternative synthesis, ClF3 is the fluorinating agent at 350 °C.
BiF3 + ClF3 → BiF5 + ClF
Reactions
Bismuth pentafluoride is the most reactive of the pnictogen pentafluorides and is an extremely strong fluorinating agent. It reacts vigorously with water to form ozone and oxygen difluoride, and with iodine or sulfur at room temperature. BiF5 fluorinates paraffin oil (hydrocarbons) to fluorocarbons above 50 °C and oxidises UF4 to UF6 at 150 °C. At 180 °C, bismuth pentafluoride fluorinates Br2 to BrF3 and Cl2 to ClF.
BiF5 also reacts with alkali metal fluorides, MF, to form hexafluorobismuthates, M[BiF6], containing the hexafluorobismuthate anion, [BiF6]−. Bismuth pentafluoride in hydrofluoric acid solvent also reacts with nickel fluoride to form the nickel salt of this anion, which can be incorporated into a complex with acetonitrile.
References
Bismuth compounds
Fluorides
Metal halides
Oxidizing agents
Fluorinating agents
Inorganic polymers
Coordination polymers | Bismuth pentafluoride | Chemistry | 395 |
28,847,792 | https://en.wikipedia.org/wiki/HMGN4 | High mobility group nucleosome-binding domain-containing protein 4 is a transcription factor that in humans is encoded by the HMGN4 gene.
Function
The protein encoded by this gene, a member of the HMGN protein family, is thought to reduce the compactness of the chromatin fiber in nucleosomes, thereby enhancing transcription from chromatin templates. Transcript variants utilizing alternative polyadenylation signals exist for this gene.
See also
High-mobility group
References
Further reading
Transcription factors | HMGN4 | Chemistry,Biology | 103 |
14,149,404 | https://en.wikipedia.org/wiki/RACGAP1 | Rac GTPase-activating protein 1 is an enzyme that in humans is encoded by the RACGAP1 gene.
Function
Rho GTPases control a variety of cellular processes. There are 3 subtypes of Rho GTPases in the Ras superfamily of small G proteins: RHO (see MIM 165370), RAC (see RAC1; MIM 602048), and CDC42 (MIM 116952). GTPase-activating proteins (GAPs) bind activated forms of Rho GTPases and stimulate GTP hydrolysis. Through this catalytic function, Rho GAPs negatively regulate Rho-mediated signals. GAPs may also serve as effector molecules and play a role in signaling downstream of Rho and other Ras-like GTPases.[supplied by OMIM]. Over-expression of RACGAP1 is observed in multiple human cancers, including breast cancer, gastric cancer and colorectal cancer. Evidence shows that RACGAP1 can modulate mitochondrial quality control by stimulating mitophagy and mitochondrial biogenesis in breast cancer. Knocking out RACGAP1 in vitro using CRISPR/Cas9 leads to cytokinesis failure.
Interactions
RACGAP1 has been shown to interact with ECT2, Rnd2 and SLC26A8.
During cytokinesis, RACGAP1 has been shown to interact with KIF23 to form the centralspindlin complex. This complex is essential for the formation of the central spindle. RACGAP1 also interacts with PRC1 to stabilize and maintain the central spindle as anaphase proceeds. RACGAP1 can also interact with ECT2 during anaphase of cytokinesis, loss of RACGAP1 leads to cytokinesis failure.
References
Further reading | RACGAP1 | Chemistry | 386 |
490,307 | https://en.wikipedia.org/wiki/121%20%28number%29 | 121 (one hundred [and] twenty-one) is the natural number following 120 and preceding 122.
In mathematics
One hundred [and] twenty-one is
a square (11 times 11)
the sum of the powers of 3 from 0 to 4 (1 + 3 + 9 + 27 + 81), so a repunit (11111) in ternary. Furthermore, 121 is the only square of the form 1 + p + p² + p³ + p⁴, where p is prime (3, in this case).
the sum of three consecutive prime numbers (37 + 41 + 43).
As 5! + 1 = 121 = 11², it provides a solution to Brocard's problem. There are only two other squares known to be of the form n! + 1 (25 = 4! + 1 and 5041 = 7! + 1). Another example of 121 being one of the few numbers supporting a conjecture is that Fermat conjectured that 4 and 121 are the only perfect squares of the form x³ − 4 (with x being 2 and 5, respectively).
It is also a star number, a centered tetrahedral number, and a centered octagonal number.
In decimal, it is a Smith number, since its digits add up to the same value as its factorization (11 × 11, which uses the same digits), and as a consequence of that it is a Friedman number (121 = 11²). But it cannot be expressed as the sum of any other number plus that number's digits, making 121 a self number.
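These digit properties are easy to verify mechanically; an illustrative check, using SymPy for the factorization:

from sympy import factorint

digit_sum = lambda n: sum(int(d) for d in str(n))
n = 121
factors = factorint(n)  # -> {11: 2}
print(digit_sum(n))  # -> 4
print(sum(digit_sum(p) * e for p, e in factors.items()))  # -> 4, matching: a Smith number
print(11 ** 2)  # -> 121, an expression built from its own digits: a Friedman number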
References
Integers | 121 (number) | Mathematics | 257 |
69,743,250 | https://en.wikipedia.org/wiki/Courant%E2%80%93Snyder%20parameters | In accelerator physics, the Courant–Snyder parameters (frequently referred to as Twiss parameters or CS parameters) are a set of quantities used to describe the distribution of positions and velocities of the particles in a beam. When the positions along a single dimension and velocities (or momenta) along that dimension of every particle in a beam are plotted on a phase space diagram, an ellipse enclosing the particles can be given by the equation:

$\gamma x^2 + 2 \alpha x x' + \beta x'^2 = \epsilon$

where x is the position axis and x' is the velocity axis. In this formulation, α, β, and γ are the Courant–Snyder parameters for the beam along the given axis, and ε is the emittance. Three sets of parameters can be calculated for a beam, one for each orthogonal direction, x, y, and z.
History
The use of these parameters to describe the phase space properties of particle beams was popularized in the accelerator physics community by Ernest Courant and Hartland Snyder in their 1953 paper, "Theory of the Alternating-Gradient Synchrotron". They are also widely referred to in accelerator physics literature as "Twiss parameters" after British astronomer Richard Q. Twiss, although it is unclear how his name became associated with the formulation.
Phase space area description
When simulating the motion of particles through an accelerator or beam transport line, it is often desirable to describe the overall properties of an ensemble of particles, rather than track the motion of each particle individually. By Liouville's theorem it can be shown that the density occupied by the beam on a position and momentum phase space plot is constant when the beam is only affected by conservative forces. The area occupied by the beam on this plot is known as the beam emittance, although a number of competing mathematical definitions of this property exist.
Coordinates
In accelerator physics, coordinate positions are usually defined with respect to an idealized reference particle, which follows the ideal design trajectory for the accelerator. The direction aligned with this trajectory is designated "z", (sometimes "s") and is also referred to as the longitudinal coordinate. Two transverse coordinate axes, x and y, are defined perpendicular to the z axis and to each other.
In addition to describing the positions of each particle relative to the reference particle along the x, y, and z axes, it is also necessary to consider the rate of change of each of these values. This is typically given as a rate of change with respect to the longitudinal coordinate (x' = dx/dz) rather than with respect to time. In most cases, x' and y' are both much less than 1, as particles will be moving along the beam path much faster than transverse to it. Given this assumption, it is possible to use the small angle approximation to express x' and y' as angles rather than simple ratios. As such, x' and y' are most commonly expressed in milliradians.
Ellipse equation
When an ellipse is drawn around the particle distribution in phase space, the equation for the ellipse is given as:

$\gamma x^2 + 2 \alpha x x' + \beta x'^2 = \frac{\mathrm{Area}}{\pi}$

"Area" here is an area in phase space, and has units of length × angle. Some sources define this area to be the beam emittance ε, while others use πε. It is also possible to define the area as that containing a specific fraction of the particles in a beam with a 2-dimensional Gaussian distribution.
The other three coefficients, α, β, and γ, are the CS parameters. As this ellipse is an instantaneous plot of the positions and velocities of the particles at one point in the accelerator, these values will vary with time. Since there are only two independent variables, x and x', and the emittance is constant, only two of the CS parameters are independent. The relationship between the three parameters is given by:

$\beta \gamma - \alpha^2 = 1$
Derivation for periodic systems
In addition to treating the CS parameters as an empirical description of a collection of particles in phase space, it is possible to derive them based on the equations of motion of particles in electromagnetic fields.
Equation of motion
In a strong focusing accelerator, transverse focusing is primarily provided by quadrupole magnets. The linear equation of motion for transverse motion parallel to an axis of the magnet is:

$x''(z) + k(z)\, x(z) = 0$

where k(z) is the focusing coefficient, which has units of length⁻², and is only nonzero in a quadrupole field. (Note that x is used throughout this explanation, but y could be equivalently used with a change of sign for k. The longitudinal coordinate, z, requires a somewhat different derivation.)
Assuming k(z) is periodic, for example, as in a circular accelerator, this is a differential equation with the same form as the Hill differential equation. The solution to this equation is a pseudo harmonic oscillator:

$x(z) = A(z) \cos(\phi(z) + \phi_0)$

where A(z) is the amplitude of oscillation, the "betatron phase" φ(z) is dependent on the value of k(z), and φ₀ is the initial phase. The amplitude is decomposed into a position-dependent part β(z) and an initial value ε, such that:

$A(z) = \sqrt{\epsilon\, \beta(z)}$

(It is important to remember that the prime continues to indicate a derivative with respect to position along the direction of travel, not time.)
Particle distributions
Given these equations of motion, taking the average values for particles in a beam yields expressions for the beam's second moments. These can be simplified with the following definitions:

$\beta = \frac{\langle x^2 \rangle}{\epsilon}, \qquad \gamma = \frac{\langle x'^2 \rangle}{\epsilon}, \qquad \alpha = -\frac{\langle x x' \rangle}{\epsilon}$

These are the CS parameters and emittance in another form. Combined with the relationship between the parameters, $\beta \gamma - \alpha^2 = 1$, this also leads to a definition of emittance for an arbitrary (not necessarily Gaussian) particle distribution:

$\epsilon = \sqrt{\langle x^2 \rangle \langle x'^2 \rangle - \langle x x' \rangle^2}$
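These statistical definitions translate directly into code. A sketch applying them to sampled particle coordinates (the arrays x and xp are hypothetical measurement data, with xp holding the angles x' = dx/dz):

import numpy as np

def cs_parameters(x, xp):
    x = x - x.mean()
    xp = xp - xp.mean()  # take moments about the beam centroid
    sxx, spp, sxp = np.mean(x * x), np.mean(xp * xp), np.mean(x * xp)
    eps = np.sqrt(sxx * spp - sxp ** 2)  # rms emittance
    beta, gamma, alpha = sxx / eps, spp / eps, -sxp / eps
    return eps, alpha, beta, gamma  # satisfies beta * gamma - alpha ** 2 = 1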
Properties
The advantage of describing a particle distribution parametrically using the CS parameters is that the evolution of the overall distribution can be calculated using matrix optics more easily than tracking each individual particle and then combining the locations at multiple points along the accelerator path. For example, if a particle distribution with parameters α₁, β₁, and γ₁ passes through an empty space of length L, the values α₂, β₂, and γ₂ at the end of that space are given by:

$\beta_2 = \beta_1 - 2 L \alpha_1 + L^2 \gamma_1$
$\alpha_2 = \alpha_1 - L \gamma_1$
$\gamma_2 = \gamma_1$
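A minimal sketch of this drift transport in Python (the function name is illustrative; the relations are the standard matrix-optics result quoted above):

def drift_transport(alpha1, beta1, gamma1, L):
    beta2 = beta1 - 2 * L * alpha1 + L ** 2 * gamma1
    alpha2 = alpha1 - L * gamma1
    gamma2 = gamma1  # gamma is unchanged in a drift
    return alpha2, beta2, gamma2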
See also
Beam emittance
Beta function (accelerator physics)
Ray transfer matrix analysis
References
Accelerator physics | Courant–Snyder parameters | Physics | 1,221 |
54,294,875 | https://en.wikipedia.org/wiki/FaceApp | FaceApp is a photo and video editing application for iOS and Android developed by FaceApp Technology Limited, a company based in Cyprus. The app generates highly realistic transformations of human faces in photographs by using neural networks based on artificial intelligence. The app can transform a face to make it smile, look younger, look older, or change gender.
Features
FaceApp was launched on iOS in January 2017 and on Android in February 2017. There are multiple options to manipulate the photo uploaded such as editor options of adding an impression, make-up, smiles, hair colors, hairstyles, glasses, age or beards. Filters, lens blur and backgrounds along with overlays, tattoos, and vignettes are also a part of the app.
The gender change transformations of FaceApp have attracted particular interest from the LGBT and transgender communities, due to their ability to realistically simulate the appearance of a person as the opposite gender.
Criticism
In 2019, FaceApp attracted criticism in both the press and on social media over the privacy of user data. Among the concerns raised were allegations that FaceApp stored users' photos on their servers, and that their terms of use allowed them to use users' likenesses and photos for commercial purposes. In response to questions, the company's founder, Yaroslav Goncharov, stated that user data and uploaded images were not being transferred to Russia but instead processed on servers running in the Google Cloud Platform and Amazon Web Services. According to Goncharov, user photos were only stored on servers to save bandwidth when applying multiple filters, and were deleted shortly after being uploaded. US senator Chuck Schumer expressed "serious concerns regarding both the protection of the data that is being aggregated as well as whether users are aware of who may have access to it" and called for an FBI investigation into the app.
A "hot" transformation was available in the app in 2017 supposedly making its users appear more physically attractive, but this was accused of racism for lightening the skin color of black people and making them look more European. The feature was briefly renamed "spark" before being removed. Founder and chief executive Yaroslav Goncharov apologised, describing the situation as "an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour" and announcing that a "complete fix" was being worked on. In August that year, FaceApp once again faced criticism when it featured "ethnicity filters" depicting "White", "Black", "Asian", and "Indian". The filters were immediately removed from the app.
See also
Face of the Future
Deepfake
References
External links
Official website
Android (operating system) software
2017 software
IOS software
Photo software
Proprietary cross-platform software
Social media
Deep learning software applications
Deepfakes | FaceApp | Technology | 566 |
15,685,404 | https://en.wikipedia.org/wiki/Catnip | Nepeta cataria, commonly known as catnip and catmint, is a species of the genus Nepeta in the mint family, native to southern and eastern Europe, the Middle East, and Central Asia. It is widely naturalized in northern Europe, New Zealand, and North America. The common name catmint can also refer to the genus as a whole.
The names catnip and catmint are derived from the intense attraction about two-thirds of cats have toward the plant. Catnip is also an ingredient in some herbal teas, and is valued for its sedative and relaxant properties.
Description
Nepeta cataria is a short-lived perennial that grows tall, usually with several stems. Each of its stems is square in cross section, as is typical of the mint family, and somewhat gray in color. It is a herbaceous plant that regrows from a taproot; however, it does not root deeply. Older plants tend to have more branches, with particularly healthy plants becoming mound-shaped.
The leaves appear white in color due to being covered in fine hairs, especially so on their lower side. They are attached in pairs to opposite sides of the stems. Leaf shapes vary from cordate and deltoid to ovate: shaped like a heart, a triangle, or an egg. They are attached by leaf stems and have a length of and wide. The edges of the leaves are coarsely crenate to serrate, having either a wavy, rounded edge or asymmetrical forward-pointing teeth like those of a saw.
The flowers are borne in loose groups in an inflorescence, with the lowest flowers more widely spaced and those at the end more tightly packed into a spike. The inflorescences are at the ends of branches, may be long, and have inconspicuous bracts. A single plant may produce several thousand flowers, but at any time less than 10% of them will be in full bloom. The flowers themselves are somewhat small and inconspicuous, but quite fragrant. They are bilaterally symmetrical and measure 10–12 mm long. The petals are off-white to pink and usually dotted with purple-pink spots. They are two-lipped, with the upper lip having two lobes and the lower one much wider with a scalloped edge.
The fruit is a nutlet that is nearly triquetrous (three-sided with sharp edges and concave sides) and overall shaped like an egg. The nutlets are approximately 1.7 mm by 1 mm, and each may contain between one and four seeds. They are dark reddish-brown in color with two white spots near the base.
Taxonomy
Nepeta cataria was one of the many species described by Linnaeus in 1753 in his landmark work Species Plantarum. He had previously described it in 1738, before the commencement of Linnaean taxonomy, under a phrase name meaning "Nepeta with flowers in a stalked, interrupted spike". Catnip is classified as part of the genus Nepeta in the Lamiaceae, commonly known as the mint family. It has no subspecies or varieties.
Synonyms
Nepeta cataria has botanical synonyms, 16 of which are at species rank. Only three are exactly equivalent to the current description of the species.
Names
The species name cataria means "of cats". It derives from the medieval Latin herba catti or herba cattaria used by medieval herbalists. The English common name catnip is first recorded in 1775 in the colony of Pennsylvania, but now has worldwide usage. The variant catnep was also coined in the United States around 1806, but never became common elsewhere and is now very rarely used.
The first usage of catmint was in about 1300, in the form kattesminte. It continues to be used for Nepeta cataria, though it is also applied to other species in the genus and to Nepeta as a whole. In medieval English the plant was also called cat-wort, but that name died out by about 1500 and is no longer used.
Another name with a medieval origin was nep, neps, or nepe. Originating about 1475, it was once more common, but is now a regional name for catnip used in East Anglia.
In medieval England it was known by various names in botanical manuscripts. It was called calamentum minus and nasturcium mureligi. It was also called nepeta or variants thereof, although other species and genera, such as the dead-nettles (Lamium), were also sometimes given this name. It was also sometimes called collocasia, but this name was more often applied to the horse-mints, especially Mentha longifolia.
Range and habitat
According to Plants of the World Online, the native range of catnip includes a large part of Eurasia. In Europe it is certainly native to the south around the Mediterranean and in the east, but sources disagree on its native status further north, in areas such as the Baltic countries, Germany, the Netherlands, and England. Around the Mediterranean it is identified as native in Portugal, Spain, France, Corsica, Italy, Switzerland, the former Yugoslavia, Albania, and Greece. In the east it is native to Bulgaria, Romania, Ukraine, Belarus, European Russia, and the Caucasus. It is generally agreed to be an introduced species in Scandinavia and Poland, and it may also grow in Ireland.
In Asia its range extends from Turkey into Syria, Lebanon, and Iraq. Eastward it continues through Iran and Pakistan to the western Himalayas, but no further into India. It is native to all of Central Asia, including Afghanistan, Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, and it also extends to western Siberia. Its native status in China is disputed, as it is in the Russian Far East, Nepal, Korea, and Japan.
In Africa it may grow in Morocco, but this report is doubtful. It also grows as an introduced species on the island of Java. In Australia it has been reported in the states of South Australia, New South Wales, Victoria, Queensland, and Tasmania. It grows on both the North and South Islands of New Zealand, where it was introduced in 1870.
In North America it grows in Canada from the island of Newfoundland to British Columbia, but not in Labrador or the three northern Canadian territories. In the United States it is present in 48 states, only absent from Florida and Hawaii.
In South America it grows in many parts of Argentina as well as in Colombia.
It grows in a variety of soils, from clay to sandy or even shallow and rocky ones. It requires good drainage and does not tolerate waterlogged conditions.
Uses
The plant terpenoid nepetalactone is the main chemical constituent of the essential oil of Nepeta cataria. Nepetalactone can be extracted from catnip by steam distillation.
Cultivation
Nepeta cataria is cultivated as an ornamental plant for use in gardens. It is also grown for its attractant qualities to house cats and butterflies.
The plant is drought-tolerant and deer-resistant. It can be a repellent for certain insects, including aphids and squash bugs. Catnip is best grown in full sunlight and grows as a loosely branching, low perennial.
The cultivar Nepeta cataria 'Citriodora', also known as lemon catmint, is known for the strong lemon-scent of its leaves.
Biological control
The iridoid that is deposited on cats who have rubbed themselves against the plants and scratched the surfaces of catnip and silver vine (Actinidia polygama) leaves repels mosquitoes. The compound iridodial, an iridoid extracted from catnip oil, has been found to attract lacewings that eat aphids and mites.
As an insect repellent
Nepetalactone is a mosquito and fly repellent. Oil isolated from catnip by steam distillation is a repellent against insects, in particular mosquitoes, cockroaches, and termites. Research suggests that, while a more effective spatial repellent than DEET, it is not as effective a repellent as SS220 or DEET when applied to human skin.
Effect of ingestion on humans
Catnip has a history of use in traditional medicine for a variety of ailments such as stomach cramps, indigestion, fevers, hives, and nervous conditions. The plant has been consumed as a tisane, juice, tincture, infusion, or poultice, and has also been smoked. However, its medicinal use has fallen out of favor with the development of modern medicine.
Effect on felines
Catnip contains the feline attractant nepetalactone. N. cataria (and some other species within the genus Nepeta) are known for their behavioral effects on the cat family, not only on domestic cats but also on other species. Several tests showed that leopards, cougars, servals, and lynxes often reacted strongly to catnip in a manner similar to domestic cats. Lions and tigers may react strongly as well, but they do not react consistently in the same fashion.
With domestic cats, N. cataria is used as a recreational substance for the enjoyment of pet cats, and catnip and catnip-laced products designed for use with domesticated cats are available to consumers. Common behaviors cats display when they sense the bruised leaves or stems of catnip are rubbing on the plant, rolling on the ground, pawing at it, licking it, and chewing it. Consuming much of the plant is followed by drooling, sleepiness, anxiety, leaping about, and purring. Some growl, meow, scratch, or bite at the hand holding it. The main response period after exposure is generally between 5 and 15 minutes, after which olfactory fatigue usually sets in. However, about one-third of cats are not affected by catnip. The behavior is hereditary.
Cats detect nepetalactone through their olfactory epithelium, not through their vomeronasal organ. At the olfactory epithelium, the nepetalactone binds to one or more olfactory receptors.
A 1962 pedigree analysis of 26 cats in a Siamese breeding colony suggested that the catnip response was caused by a Mendelian-dominant gene. A 2011 pedigree analysis of 210 cats in two breeding colonies (taking into account measurement error by repeated testing) showed no evidence for Mendelian patterns of inheritance but demonstrated heritability of the catnip response behavior, indicating a polygenic liability threshold model.
A study published in January 2021 suggests that felines are specifically attracted to the iridoids nepetalactone and nepetalactol, present in catnip and silver vine, respectively.
Cats younger than six months might not exhibit behavioral change to catnip. Up to a third of cats are genetically immune to catnip effects but may respond in a similar way to other plants such as valerian (Valeriana officinalis) root and leaves, silver vine or matatabi (Actinidia polygama), and Tatarian honeysuckle (Lonicera tatarica) wood.
See also
Notes
Citations
References
Books
Journals
Further reading
External links
USDA Plant Profile: Nepeta cataria (catmint)
cataria
Flora of Southwestern Europe
Flora of Southeastern Europe
Flora of Eastern Europe
Flora of West Siberia
Flora of Central Asia
Flora of the Caucasus
Flora of Western Asia
Flora of Pakistan
Flora of West Himalaya
Cat attractants
Perennial plants
Plant toxin insecticides
Plants described in 1753 | Catnip | Chemistry | 2,393 |
24,813,944 | https://en.wikipedia.org/wiki/The%20Journal%20of%20Supercomputing | The Journal of Supercomputing is an academic computer science journal concerned with theoretical and practical aspects of supercomputing. Tutorial and survey papers are also included.
References
Computer science journals
Supercomputing
Springer Science+Business Media academic journals
Academic journals established in 1987
Triannual journals | The Journal of Supercomputing | Technology | 60 |
1,004,008 | https://en.wikipedia.org/wiki/Dynamic%20knowledge%20repository | The dynamic knowledge repository (DKR) is a concept developed by Douglas C. Engelbart as a primary strategic focus for allowing humans to address complex problems. He has proposed that a DKR will enable us to develop a collective IQ greater than any individual's IQ. References and discussion of Engelbart's DKR concept are available at the Doug Engelbart Institute.
Definition
A knowledge repository is a computerized system that systematically captures, organizes and categorizes an organization's knowledge. The repository can be searched and data can be quickly retrieved.
Effective knowledge repositories include factual, conceptual, procedural, and meta-cognitive elements. Key features of knowledge repositories include communication forums.
A knowledge repository can take many forms to "contain" the knowledge it holds. A customer database is a knowledge repository of customer information and insights – or electronic explicit knowledge. A library is a knowledge repository of books – physical explicit knowledge. A community of experts is a knowledge repository of tacit knowledge or experience. The nature of the repository only changes to contain/manage the type of knowledge it holds. A repository (as opposed to an archive) is designed to get knowledge out. It should therefore have some rules of structure, classification, taxonomy, record management, etc., to facilitate user engagement.
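A toy illustration of the capture/organize/search cycle just described; this is a generic sketch in Python, not Engelbart's DKR design, and all class and field names are invented for the example.

class KnowledgeRepository:
    # Minimal capture / categorize / search store (illustrative only).
    def __init__(self):
        self.entries = []

    def capture(self, text, categories):
        # Systematically record an item together with its categories.
        self.entries.append({"text": text, "categories": set(categories)})

    def search(self, keyword=None, category=None):
        # Quickly retrieve entries matching a keyword and/or a category.
        return [e for e in self.entries
                if (keyword is None or keyword.lower() in e["text"].lower())
                and (category is None or category in e["categories"])]

repo = KnowledgeRepository()
repo.capture("Q3 customer churn analysis", ["customers", "reports"])
print(repo.search(keyword="churn"))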
References
Further reading
External links
Doug Engelbart Institute
Knowledge representation
Data management | Dynamic knowledge repository | Technology | 286 |
5,464,288 | https://en.wikipedia.org/wiki/Telluric%20contamination | Telluric contamination is contamination of the astronomical spectra by the Earth's atmosphere.
Interference with astronomical observations
Most astronomical observations are conducted by measuring photons (electromagnetic waves) which originate beyond the Earth's atmosphere. The molecules in the Earth's atmosphere, however, absorb and emit their own light, especially in the visible and near-IR portion of the spectrum, and any ground-based observation is subject to contamination from these telluric (earth-originating) sources. Water vapor and oxygen are two of the more important molecules in telluric contamination. Contamination by water vapor was particularly pronounced in the Mount Wilson solar Doppler measurements.
Many scientific telescopes have spectrographs, which measure photons as a function of wavelength or frequency, with typical resolution on the order of a nanometer in visible light. Spectroscopic observations can be used in myriad contexts, including measuring the chemical composition and physical properties of astronomical objects as well as measuring object velocities from the Doppler shift of spectral lines. Unless it is corrected for, telluric contamination can produce errors or reduce precision in such data.
Telluric contamination can also be important for photometric measurements.
Telluric correction
It is possible to correct for the effects of telluric contamination in an astronomical spectrum. This is done by preparing a telluric correction function, made by dividing a model spectrum of a star by an observation of an astronomical photometric standard star. This function can then be multiplied by an astronomical observation at each wavelength point.
While this method can restore the original shape of the spectrum, the regions affected can be prone to high levels of noise due to the low number of counts in that area of the spectrum.
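A minimal numerical sketch of the correction procedure described above, assuming NumPy and that the model spectrum, the observed standard-star spectrum, and the science observation are already sampled on a common wavelength grid; all array and function names are illustrative.

import numpy as np

def telluric_correct(model_standard, observed_standard, science_spectrum):
    # Correction function: model spectrum divided by the observed
    # standard star. Where the atmosphere absorbs, observed flux is
    # lower than the model, so the ratio rises above 1.
    correction = np.asarray(model_standard) / np.asarray(observed_standard)
    # Multiply the science observation by the correction at each
    # wavelength point to restore the original spectral shape.
    return np.asarray(science_spectrum) * correction

As the text notes, in deep telluric bands the observed standard-star counts are low, so the correction there amplifies noise.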
See also
Pollution and light pollution
Interferometry and astronomy
Spectroscopy and spectrograph
References
Further reading
Christopher S. Carter, Herschel B. Snodgrass, and Claia Bryja, "Telluric water vapor contamination of the Mount Wilson solar Doppler measurements". Solar Physics volume 139, pages 13–24 (1992).
Atmosphere of Earth
Measurement
Observational astronomy | Telluric contamination | Physics,Astronomy,Mathematics | 422 |
6,701,743 | https://en.wikipedia.org/wiki/Diffusion%20%28acoustics%29 | Diffusion, in architectural acoustics, is the spreading of sound energy evenly in a given environment. A perfectly diffusive sound space is one in which the reverberation time is the same at any listening position.
Most interior spaces are non-diffusive; the reverberation time is considerably different around the room. At low frequencies, they suffer from prominent resonances called room modes.
Diffusor
Diffusors (or diffusers) are used to treat sound aberrations, such as echoes, in rooms. They are an excellent alternative or complement to sound absorption because they do not remove sound energy, but can be used to effectively reduce distinct echoes and reflections while still leaving a live sounding space. Compared to a reflective surface, which will cause most of the energy to be reflected off at an angle equal to the angle of incidence, a diffusor will cause the sound energy to be radiated in many directions, hence leading to a more diffusive acoustic space. It is also important that a diffusor spreads reflections in time as well as spatially. Diffusors can aid sound diffusion, but this is not why they are used in many cases; they are more often used to remove coloration and echoes.
Diffusors come in many shapes and materials. The birth of modern diffusors was marked by Manfred R. Schroeder's invention of number-theoretic diffusors in the 1970s. He got the idea during a 1977 Göttingen lecture by André Weil, Gauss sums and quadratic residues, celebrating the 200th anniversary of the birth of Gauss.
Maximum length sequence diffusors
Maximum length sequence based diffusors are made of strips of material with two different depths. The placement of these strips follows an MLS. The width of the strips is smaller than or equal to a quarter of the wavelength at the frequency where the maximum scattering effect is desired. Ideally, small vertical walls are placed between lower strips, improving the scattering effect in the case of tangential sound incidence. The bandwidth of these devices is rather limited; at one octave above the design frequency, diffusor efficiency drops to that of a flat surface.
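In symbols, the quarter-wavelength rule above bounds the strip width w in terms of the speed of sound c and the design frequency f_0; the 1 kHz figure below is only an illustrative choice:

w \le \frac{\lambda_0}{4} = \frac{c}{4 f_0}, \qquad \text{e.g. } f_0 = 1\ \text{kHz},\ c \approx 343\ \text{m/s} \;\Rightarrow\; w \le \frac{343}{4000}\ \text{m} \approx 8.6\ \text{cm}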
Quadratic-residue diffusors
MLS based diffusors are superior to geometrical diffusors in many respects, but they have limited bandwidth. The goal was therefore to find a new surface geometry that would combine the excellent diffusion characteristics of MLS designs with wider bandwidth. A new design was discovered, called the quadratic-residue diffusor. Today the quadratic-residue diffusor or Schroeder diffusor is still widely used. Quadratic-residue diffusors can be designed to diffuse sound in either one or two directions.
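A minimal sketch of the standard well-depth calculation for a one-dimensional Schroeder diffusor, in which depths are proportional to the quadratic-residue sequence n² mod N for a prime N (see the Cox and D'Antonio title under Further reading); the prime and design frequency below are arbitrary examples.

def qrd_well_depths(N, design_freq, c=343.0):
    # Well depths in metres for an N-well quadratic-residue diffusor;
    # N should be an odd prime, design_freq in Hz, c the speed of sound.
    wavelength = c / design_freq
    sequence = [(n * n) % N for n in range(N)]  # quadratic-residue sequence
    return [s * wavelength / (2 * N) for s in sequence]

# Example: a 7-well diffusor designed for 500 Hz
print(qrd_well_depths(7, 500.0))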
Primitive-root diffusors
Primitive-root diffusors are based on a number-theoretic sequence built from primitive roots. Although they produce a notch in the scattering response, in practice the notch covers too narrow a bandwidth to be useful. In terms of performance, they are very similar to quadratic-residue diffusors.
Optimized diffusors
By using numerical optimisation, it is possible to improve on the number-theoretic designs, especially for diffusors with a small number of wells per period. The big advantage of optimisation, however, is that arbitrary shapes can be used, which blend better with architectural forms.
Two-dimensional ("hemispherical") diffusors
Like most diffusors, two-dimensional ("hemispherical") diffusors are designed to create "a big sound in a small room", but unlike one-dimensional designs they scatter sound in a hemispherical pattern. This is done by creating a grid whose cavities have wells of varying depth, according to the matrix addition of two quadratic sequences equal or proportionate to those of a regular diffusor. These diffusors are very helpful for controlling the direction of the diffusion, particularly in studios and control rooms.
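Building on the sketch above, the matrix addition just described can be illustrated by summing two quadratic sequences modulo N to form a hemispherical depth grid; this is a generic construction for illustration, not the design of any particular commercial diffusor.

def qrd_2d_depth_grid(N, design_freq, c=343.0):
    # Depth grid in metres for a two-dimensional quadratic-residue diffusor.
    wavelength = c / design_freq
    # Matrix addition of two quadratic sequences, taken modulo N.
    return [[((m * m + n * n) % N) * wavelength / (2 * N)
             for n in range(N)]
            for m in range(N)]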
See also
Sound baffle
References
Further reading
T. J. Cox and P. D'Antonio, Acoustic Absorbers and Diffusors - Theory, Design and Application Spon press.
M. R. Schroeder, Number Theory in Science and Communication, Springer-Verlag, 1984; see especially sections 15.8 and 26.6.
Acoustics
Audio effects
Sound | Diffusion (acoustics) | Physics | 906 |
22,533,786 | https://en.wikipedia.org/wiki/Whirlwind%20mill | A whirlwind mill is a beater mill for pulverising and micro-pulverising in process engineering.
Construction
Whirlwind mills essentially consist of a mill base, a mill cover and a rotor. The inner side of the cover is equipped with wear protection elements. The top of the rotor is equipped with precrushing tools, and its side is covered with numerous U-shaped grinding tools.
Function
The grinding stock is fed to the mill via an inlet box and is pre-crushed by the tools on top of the rotor. The precrushing tools also carry the product into the milling zone at the side of the rotor. There the grinding stock is fluidised in the air stream between rotor and stator, which is generated by the rotation and the U-shaped grinding tools. The special design of these tools creates massive air whirls in the grinding zone (this is where the name of the mill comes from). These air whirls cause the main grinding effect: the particles collide with each other in the whirlwinds. The final particle size can be adjusted by changing the clearance between rotor and stator, the air flow, and the rotor speed.
Applications
Whirlwind mills are basically used for pulverisation and micro-pulverisation of soft to medium hard products. In addition they can be used for cryogenic grinding, combined grinding/drying, combined drying/blending and defibration of organic substances (such as paper, cellulose, etc.).
Whirlwind mills can be found in different industries, such as the chemical, plastics, building materials, and food industries.
Sources and external links
Whirlwind mills: pictures and explanations
References
Industrial equipment | Whirlwind mill | Engineering | 347 |
67,313,547 | https://en.wikipedia.org/wiki/Gordana%20Todorov | Gordana Todorov (born July 24, 1949) is a mathematician working in noncommutative algebra, representation theory, Artin algebras, and cluster algebras. She is a professor of mathematics at Northeastern University.
Biography
Todorov earned her Ph.D. in 1978, at Brandeis University. Her dissertation, Almost Split Sequences in the Representation Theory of Certain Classes of Artin Algebras, was supervised by Maurice Auslander.
Todorov is married to mathematician Kiyoshi Igusa. The Igusa–Todorov functions and Igusa–Todorov endomorphism algebras are named for their joint work. Todorov is also the namesake of Todorov's theorem on preprojective partitions, and the Gentle–Todorov theorem on abelian categories.
References
External links
Home page
1949 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Brandeis University alumni
Northeastern University faculty
Algebraists
20th-century American women mathematicians
21st-century American women mathematicians | Gordana Todorov | Mathematics | 211 |
38,134,969 | https://en.wikipedia.org/wiki/Alberto%20Diaspro | Alberto Diaspro (born April 7, 1959, in Genoa, Italy) is an Italian scientist. He received his doctoral degree in electronic engineering from the university of Genoa, Italy, in 1983. He is full professor in applied physics at university of Genoa. He is research director of Nanoscopy Italian Institute of Technology. Alberto Diaspro is President of the Italian biophysical society SIBPA. In 2022 he got the Gregorio Weber Award for excellence in fluorescence.
References
External links
People
https://www.iit.it/people/alberto-diaspro
Alberto Diaspro - Top Italian Scientist in Physics
Alberto Diaspro elected President of the Società Italiana di Biofisica Pura e Applicata
ISS Honors Prof. Alberto Diaspro with Gregorio Weber Award | ISS
1959 births
Living people
Electronic engineering
Engineers from Genoa
20th-century Italian educators
University of Genoa alumni | Alberto Diaspro | Technology,Engineering | 191 |
25,476,351 | https://en.wikipedia.org/wiki/AN%20Ursae%20Majoris | AN Ursae Majoris is a binary star system in the northern circumpolar constellation of Ursa Major. It is a variable star, with AN Ursae Majoris being the variable star designation, and ranges in brightness from 14.90 down to 20.2. Even at its peak brightness though, the system is much too faint to be visible to the naked eye. Based on parallax measurements, the system is located roughly 1,050 light years away from the Sun.
This is a single-lined spectroscopic binary system in a close, circular orbit. The pair form an eclipsing binary that dims from magnitude 14.9 down to 20.2 once per orbit. This object, along with AM Herculis, defines a class of cataclysmic variables known as polars. The pair consists of a low-mass white dwarf with a strong magnetic field, interacting with a low-mass main sequence star that has filled its Roche lobe. Matter is energetically accreted from the main sequence star onto one or both magnetic poles of the white dwarf, producing emission lines in the spectrum.
References
White dwarfs
Polars (cataclysmic variable stars)
Eclipsing binaries
Ursa Major
Ursae Majoris, AN | AN Ursae Majoris | Astronomy | 280 |
72,398,642 | https://en.wikipedia.org/wiki/Abrothallus%20ertzii | Abrothallus ertzii is a species of lichenicolous fungus in the family Abrothallaceae. Found in Canada, it was formally described as a new species in 2015 by Ave Suija and Sergio Pérez-Ortega. The type specimen was collected near Dawson Falls in Wells Gray Provincial Park (British Columbia), where it was found growing on the thallus of the foliose lichen Lobaria pulmonaria, which itself was growing on the trunk of a Thuja plicata tree. It has also been collected in Quebec. The species epithet honours Damien Ertz, who collected the type. Abrothallus ertzii is distinguished from other Abrothallus fungi by its clavate (club-shaped) asci that contain eight two-celled ascospores; these readily split into part spores.
References
ertzii
Lichenicolous fungi
Fungi described in 2015
Fungi of Canada
Taxa named by Ave Suija
Fungus species | Abrothallus ertzii | Biology | 201 |
76,389,470 | https://en.wikipedia.org/wiki/Praseodymium%28III%29%20iodate | Praseodymium(III) iodate is an inorganic compound with the chemical formula Pr(IO3)3.
Preparation
Praseodymium(III) iodate can be obtained by reacting praseodymium(III) nitrate and potassium iodate in a hot aqueous solution:
Pr(NO3)3 + 3 KIO3 → Pr(IO3)3 + 3 KNO3
Properties
Praseodymium(III) iodate can be thermally decomposed as follows:
7 Pr(IO3)3 → Pr5(IO6)3 + Pr2O3 + 9 I2 + 21 O2
References
Praseodymium(III) compounds
Iodates | Praseodymium(III) iodate | Chemistry | 148 |
61,791,659 | https://en.wikipedia.org/wiki/Almadena%20Chtchelkanova | Almadena Yurevna Chtchelkanova is a Russian-American scientist. She is a program director in the Division of Computing and Communication Foundations at the National Science Foundation.
Education
Chtchelkanova completed a Ph.D. in physics at Moscow State University in 1988. In 1996, she earned an M.A. in the department of computer sciences at the University of Texas at Austin. Her master's thesis was titled The application of object-oriented analysis to sockets system calls library testing. James C. Browne was her advisor.
Career
She worked as a senior scientist for Strategic Analysis, Inc., which provided support to DARPA. She provided support and oversight of the Spintronics, Quantum Information Science and Technology (QuIST), and Molecular Observation and Imaging programs. She worked at the United States Naval Research Laboratory for four years in the laboratory for computational physics and fluid dynamics. Chtchelkanova joined the National Science Foundation in 2005. She is a program director in the Division of Computing and Communication Foundations and oversees programs involving high performance computing.
References
External links
United States National Science Foundation officials
University of Texas at Austin College of Natural Sciences alumni
Moscow State University alumni
20th-century Russian women scientists
21st-century American women scientists
Russian women computer scientists
Women physicists
Computational physicists
21st-century American physicists
20th-century Russian physicists
Year of birth missing (living people)
Living people | Almadena Chtchelkanova | Physics | 282 |
7,158,257 | https://en.wikipedia.org/wiki/Streaking%20%28microbiology%29 | In microbiology, streaking is a technique used to isolate a pure strain from a single species of microorganism, often bacteria. Samples can then be taken from the resulting colonies and a microbiological culture can be grown on a new plate so that the organism can be identified, studied, or tested.
The modern streak plate method has progressed from the efforts of Robert Koch and other microbiologists to obtain microbiological cultures of bacteria in order to study them. The dilution or isolation by streaking method was first developed in Koch's laboratory by his two assistants Friedrich Loeffler and Georg Theodor August Gaffky. This method involves the dilution of bacteria by systematically streaking them over the surface of the agar in a Petri dish to obtain isolated colonies, each of which then grows into a large quantity of cells. If the microorganisms growing on the agar surface are all genetically identical, the culture is considered a pure microbiological culture.
Technique
Streaking is a rapid and, ideally, simple process of isolation by dilution. The technique dilutes a comparatively large concentration of bacteria to a smaller concentration. The decreasing concentration of bacteria ensures that colonies are sufficiently spread apart to effect the separation of the different types of microbes. Streaking is done using a sterile tool, such as a cotton swab or, more commonly, an inoculation loop. Aseptic techniques are used to maintain microbiological cultures and to prevent contamination of the growth medium. There are many different methods for streaking a plate. Picking a technique is a matter of individual preference and can also depend on how many microbes the sample contains.
The three-phase streaking pattern, known as the T-streak, is recommended for beginners. The inoculation loop is first sterilized by passing it through a flame. When the loop is cool, it is dipped into an inoculum such as a broth or patient specimen containing many species of bacteria. The inoculation loop is then dragged across the surface of the agar back and forth in a zigzag motion until approximately 30% of the plate has been covered. The loop is then re-sterilized and the plate is turned 90 degrees. Starting in the previously streaked section, the loop is dragged through it two to three times, continuing the zigzag pattern. The procedure is then repeated once more, taking care not to touch the previously streaked sectors. Each time, the loop gathers fewer and fewer bacteria, until it gathers just single bacterial cells that can grow into a colony. The plate should show the heaviest growth in the first section. The second section will have less growth and a few isolated colonies, while the final section will have the least growth and many isolated colonies.
Growth medium
The sample is spread across one quadrant of a Petri dish containing a growth medium. Bacteria need different nutrients to grow, including water, a source of energy, sources of carbon, sulfur, nitrogen, and phosphorus, certain minerals, and other vitamins and growth factors. A very common type of medium used in microbiology labs is nutrient agar, based on a gelatinous substance derived from seaweed. Nutrient agar contains many ingredients with nutrients in amounts that are not precisely defined. On one hand this can make a medium selective, because bacteria are particular about their nutrients: if a required nutrient is absent, the bacteria may fail to grow or may die. On the other hand, nutrient agar is a complex medium, which is important because it allows for a wide range of microbial growth; its high nutrient content can support bacterial growth strongly. The choice of growth medium depends on which microorganism is being cultured, or selected for.
Incubation
Depending on the strain, the plate may then be incubated, usually for 24 to 36 hours, to allow the bacteria to reproduce. At the end of incubation there should be enough bacteria to form visible colonies in the areas touched by the inoculation loop. From these mixed colonies, single bacterial or fungal species can be identified based on their morphological (size, shape, colour) differences, and then sub-cultured to a new media plate to yield a pure culture for further analysis.
Automated equipment is used at an industrial level for streak plating solid media in order to achieve better sterilization, more consistent streaking, and reliably faster work. When streaking manually, it is important to avoid scratching the solid medium: subsequent streak lines will be damaged, and the non-uniform deposition of inoculum at damaged sites yields clustered growth of microbes which may extend into nearby streak lines.
Importance
Bacteria exist in water, soil, and food, on skin, and in the intestinal tract as normal flora. The assortment of microbes that exist in the environment and on human bodies is enormous. The human body hosts billions of bacteria which make up its normal flora, fighting against invading pathogens. Bacteria frequently occur in mixed populations; it is very rare to find a single occurring species of bacteria. To be able to study the cultural, morphological, and physiological characteristics of an individual species, it is vital that the bacteria be separated from the other species generally found in the environment. This matters when identifying a bacterium in a clinical sample: once the bacteria are streaked and isolated, the causative agent of a bacterial disease can be identified.
See also
Bacterial lawn
References
External links
Streaking agar plate method for getting isolated colonies (video).
Microbiology techniques
Bacteriology | Streaking (microbiology) | Chemistry,Biology | 1,161 |
14,168,085 | https://en.wikipedia.org/wiki/DNA%20ligase%204 | DNA ligase 4 also DNA ligase IV, is an enzyme that in humans is encoded by the LIG4 gene.
Function
DNA ligase 4 is an ATP-dependent DNA ligase that joins double-strand breaks during the non-homologous end joining pathway of double-strand break repair. It is also essential for V(D)J recombination. Lig4 forms a complex with XRCC4, and further interacts with the DNA-dependent protein kinase (DNA-PK) and XLF/Cernunnos, which are also required for NHEJ. The crystal structure of the Lig4/XRCC4 complex has been resolved. Defects in this gene are the cause of LIG4 syndrome. The yeast homolog of Lig4 is Dnl4.
LIG4 syndrome
In humans, deficiency of DNA ligase 4 results in a clinical condition known as LIG4 syndrome. This syndrome is characterized by cellular radiation sensitivity, growth retardation, developmental delay, microcephaly, facial dysmorphisms, increased disposition to leukemia, variable degrees of immunodeficiency and reduced number of blood cells.
Haematopoietic stem cell aging
Accumulation of DNA damage leading to stem cell exhaustion is regarded as an important aspect of aging. Deficiency of lig4 in pluripotent stem cells impairs non-homologous end joining (NHEJ) and results in accumulation of DNA double-strand breaks and enhanced apoptosis. Lig4 deficiency in the mouse causes a progressive loss of haematopoietic stem cells and bone marrow cellularity during aging. The sensitivity of haematopoietic stem cells to lig4 deficiency suggests that lig4-mediated NHEJ is a key determinant of the ability of stem cells to maintain themselves against physiological stress over time.
Interactions
LIG4 has been shown to interact with XRCC4 via its BRCT domain. This interaction stabilizes LIG4 protein in cells; cells that are deficient for XRCC4, such as XR-1 cells, have reduced levels of LIG4.
Mechanism
LIG4 is an ATP-dependent DNA ligase. LIG4 uses ATP to adenylate itself and then transfers the AMP group to the 5' phosphate of one DNA end. Nucleophilic attack by the 3' hydroxyl group of a second DNA end and release of AMP yield the ligation product. Adenylation of LIG4 is stimulated by XRCC4 and XLF.
References
Further reading
DNA repair | DNA ligase 4 | Biology | 530 |
8,457,775 | https://en.wikipedia.org/wiki/Skid%20plate | A skid plate is an abrasion-resistant material affixed to the underside of a vehicle or boat to prevent damage to the underside when contact is made with the ground.
Skid plates may be used on off-road vehicles, motorcycles and lowered vehicles to prevent damage to the underside. Fake skid plates are also added to vehicles for an off-road look.
A steel skid plate protects the engine and the gearbox. Its advantages include:
Increased resistance against any impact or debris found on the road.
It covers the front compartment of the car, so the engine is better protected against dust and dirt.
Longer lifespan compared to skid plates made of plastic or fibreglass.
It protects the frame, motor, and linkage on an off-road motorcycle.
See also
Rock sliders
Metal bumpers
References
Vehicle parts | Skid plate | Technology | 167 |
71,187,755 | https://en.wikipedia.org/wiki/Demiplane%20%28company%29 | Demiplane is a company that creates digital toolsets for playing tabletop role-playing games which can be used as an aid to playing in person or remotely online. The Demiplane platform's main services are game matchmaking, game hosting and licensed content via the Nexus digital toolset. Nexus provides access to digital rulebooks, adventures, and other supplements; it also provides digital tools like a character builder and character sheets. The platform was launched in 2020; early access to Nexus launched in 2021. In June 2024, the company was acquired by the virtual tabletop (VTT) company Roll20.
The company has also produced and broadcast several web series on their official Twitch and YouTube channels. This includes the ongoing actual play web series Children of Éarte created and run by Deborah Ann Woll which launched in March 2022.
History
Origins
In 2019, Demiplane was founded by Peter Romenesko and Travis Frederick with the platform launching officially in 2020. Romenesko and Frederick grew up in the Lake Geneva area playing tabletop games together "and eventually re-united to build Demiplane". Demiplane acts as a platform for various tabletop role-playing game tools such as game hosting and matchmaking, shared game journals, and digital compendiums for licensed games. The company has received funding from TitletownTech and uses TitletownTech's startup incubator office space.
In March 2021, Adam Bradford – founder of D&D Beyond – joined the company as the Chief Development Officer. From October to December 2021, Demiplane announced three partnerships for their new Nexus digital toolset: Pathfinder Nexus with Paizo, World of Darkness Nexus (for games such as Vampire: The Masquerade and Werewolf: the Apocalypse) with Paradox Interactive, and Free League Nexus with Free League Publishing. An early access version of Pathfinder Nexus, titled Pathfinder Primer, was launched at the time of the announcement. Nexus has been called the "equivalent to digital toolset D&D Beyond" for other role-playing games.
In April 2022, Demiplane announced that they will host the new Marvel tabletop role-playing game titled Marvel Multiverse Role-Playing Game NEXUS with the digital playtest rulebook and early access given to users who pre-order the game. In June 2022, early access for Vampire: The Masquerade Nexus was launched. This is the first Demiplane toolset to include digital/physical bundles for a roleplaying game. Demiplane also announced that they are developing Nexus support for the World of Darkness game Hunter: The Reckoning. In October 2022, Magpie Games announced that early access for Avatar Legends Nexus, Demiplane's digital toolset Avatar Legends: The Roleplaying Game, was launching that month.
In February 2023, Demiplane announced the upcoming 5E Nexus which will support third-party Dungeons & Dragons 5th Edition (D&D) publishers who use the D&D 5.1 System Reference Document; this announcement included "pre-launch" tools such as 5E group matchmaking and group creation. A rule compendium, a digital reader, and a character builder are scheduled to be released in waves over 2023. Bradford said the intent at the moment was not to partner with Wizards of the Coast on official D&D products. In September 2023, early access for 5E Nexus launched with third party sourcebooks such as Tal'Dorei Reborn by Darrington Press, Tome of Beasts 1 & 2 by Kobold Press and Grim Hollow: The Monster Grimoire by Ghostfire Games. In October 2023, digital character tools officially launched for Vampire: The Masquerade. In February 2024, full access to Alien RPG Nexus for Free League's ALIEN: The Roleplaying Game is scheduled to launch.
Acquisition by Roll20
In June 2024, it was announced that the virtual tabletop (VTT) company Roll20 had acquired Demiplane. Roll20 CEO Ankit Lal stated: "We want to make it as easy as possible for you to build your first character, to get into your first game, to try out playing TTRPGs. And we think the combination of the Roll20 VTT and the Demiplane character sheet ecosystem is going to do that". Christian Hoffer of ComicBook.com reported that this acquisition "won't have any immediate impact on users of either platform, but Demiplane CEO Peter Romenesko noted that the merged companies will look to close the difference between their two platforms very quickly". Roll20 stated that they "don’t plan on making any changes to the 5e Nexus on Demiplane".
J. R. Zambrano, for Bell of Lost Souls, commented that "it seems that an era of consolidation is on the way as players like WotC and Roll20 move to consolidate their powerbases". It was also announced that due to the merger between Roll20 and Demiplane, Adam Bradford would be leaving the company. Bradford then announced that he would join SmiteWorks, which operates the virtual tabletop Fantasy Grounds, as their new Chief Development Officer.
Toolsets
Demiplane's content and game management system is primarily browser-based, and is fully functional on both mobile and desktop browsers.
Game matchmaking
Demiplane provides a free matchmaking service for over 180 roleplaying games; it is modeled on the matchmaking services offered in online multiplayer video games. Users can select various attributes such as group size and game style/themes. These user selections go into an algorithm to match groups together which includes the option of chat discussion to review expectations with the Game Master. After games, players and Game Masters can rate each other; other users can see these ratings during future matchmaking.
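Demiplane has not published its matchmaking algorithm; the following is a purely generic illustration of attribute-based scoring of the kind described (style/theme tags, group size, peer ratings), with every field name invented for the example.

def match_score(player_prefs, group):
    # Toy similarity score between one player's preferences and a group;
    # none of these fields reflect Demiplane's actual data model.
    score = 0.0
    # Shared style/theme tags raise the score.
    score += len(set(player_prefs["themes"]) & set(group["themes"]))
    # Favour groups that still have an open seat.
    if len(group["members"]) < group["max_size"]:
        score += 1.0
    # Past peer ratings (0-5) nudge well-reviewed Game Masters upward.
    score += group["gm_rating"] / 5.0
    return score

# Rank open groups for one player, best match first:
# ranked = sorted(open_groups, key=lambda g: match_score(prefs, g), reverse=True)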
Game hosting
Demiplane's free game hosting service provides text and video chats for users in a game; this feature also provides breakout rooms for Game Masters to use to "reveal secrets to specific players". The platform provides tools for hosted games such as dice rolling, "shared and searchable journals", and task/inventory tracking. Demiplane has built in roleplaying game safety tools such as a raise hand button which anonymously flags to the Game Master that a player feels the game is going outside of pre-determined boundaries. In 2021, Frederick stated that Demiplane isn't intended as a virtual tabletop (VTT) platform or to compete with VTT companies such as Fantasy Grounds and Roll20. Instead, Frederick sees Demiplane as "sideways compatible" with VTT platforms as users can launch VTTs "from within Demiplane".
Game Masters also have the option to host paid game sessions on Demiplane by setting the cost per player and players have the option to provide tips. Demiplane facilitates payment – Game Masters receive 95% of the cost per player & tips; players are charged a 5.5% fee (based on the cost per player and on tips) by Demiplane.
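A worked example of the split described above, assuming the 95% payout and the 5.5% player fee both apply to the seat price plus tip, as the text states; exact rounding rules are not specified and are assumed here.

def paid_game_totals(cost_per_player, tip):
    base = cost_per_player + tip
    player_pays = base * 1.055   # player is charged a 5.5% fee
    gm_receives = base * 0.95    # Game Master keeps 95%
    return round(player_pays, 2), round(gm_receives, 2)

# A $10 seat with a $2 tip: the player pays $12.66 and the GM receives $11.40.
print(paid_game_totals(10.00, 2.00))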
Nexus
Customers can use a specific game's Nexus by purchasing access to licensed content in Demiplane's marketplace. Nexus provides access to a digital reader for role-playing game rulebooks, compendium content, and character sheets for licensed games. The compendium content is a digital version of the book (as HTML, not a PDF); it includes cross-links and tooltips for game rules mentioned in the text. Access to the book's options in the rest of Nexus allows purchased content to be used with the character builder and other tools. It can also provide additional features to hosted games.
Users can purchase access to games such as: Pathfinder Second Edition, the Marvel Multiverse Role-Playing Game, Vampire: The Masquerade 5th Edition, the ALIEN: The Roleplaying Game, Mutant: Year Zero, Avatar Legends: The Roleplaying Game, Hunter: The Reckoning, and Candela Obscura.
Following the purchase of Demiplane, Roll20 began to support cross-platform access so that content unlocked on one platform would be automatically unlocked on the other platform. , Paizo, Darrington Press, Kobold Press, and Renegade Game Studio have granted permission for cross-platform access to their products.
Streaming shows
The company has also produced and broadcast the following shows on their official Twitch and YouTube channels:
Heroes of the Planes (2021) – an actual play web series using the Fifth Edition of Dungeons & Dragons, led by Todd Kenreck as the Dungeon Master with the cast of players featuring Hope LaVelle, B. Dave Walters, Jennifer Kretchmer, Adam Bradford, Lauren Urban and Meagan Kenreck. It follows adventurers as they travel the multiverse. It ran for 37 episodes.
Demiplanar (2021) – B. Dave Walters hosts a web talk show where he interviews various people in the tabletop role-playing game industry. It ran for 11 episodes.
Strixhaven CHAOS (2022) – an actual play eight-part limited series using the Fifth Edition of Dungeons & Dragons, led by B. Dave Walters as the Dungeon Master with the cast of players featuring Hope LaVelle, Jennifer Kretchmer, Lauren Urban and Adam Bradford. The game is set in the Strixhaven campaign setting.
Children of Éarte (2022) – an ongoing actual play web series using the Fifth Edition of Dungeons & Dragons, led by Deborah Ann Woll as the Dungeon Master with the cast of players featuring Hope LaVelle, Alicia Marie, Adam Bradford, Lauren Urban, and Jennifer Kretchmer. The show has been described as a "fairy tale for grown ups". The show was nominated for the "Best Overlay Design (Actual Play Video)" and the "Outstanding Actual Play (Video)" awards at the 2023 New Jersey Web Festival; Woll was nominated for the "Best Game Master (Actual Play Video)" award and Urban was nominated for the "Best Player Character Performance" award for their work on the show.
Extinction Race (2022) – an actual play four-part limited series using the Mutant: Year Zero ruleset, led by Josh Simons as the Game Master with the cast of players featuring Aliza Pearl, Catie Osborn, Mellie Doucette, Michelle Nguyen Bradley and Omega Jones. The show corresponded with launch of the Mutant: Year Zero Nexus.
References
Browser-based game websites
Free-content websites
Internet properties established in 2019
Mobile content
Role-playing game software
Role-playing game websites | Demiplane (company) | Technology | 2,129 |
9,686,427 | https://en.wikipedia.org/wiki/Xenotropic%20murine%20leukemia%20virus%E2%80%93related%20virus | Xenotropic murine leukemia virus–related virus (XMRV) is a retrovirus which was first described in 2006 as an apparently novel human pathogen found in tissue samples from people with prostate cancer. Initial reports erroneously linked the virus to prostate cancer and later to chronic fatigue syndrome (CFS), leading to considerable interest in the scientific and patient communities, investigation of XMRV as a potential cause of multiple medical conditions, and public-health concerns about the safety of the donated blood supply.
Xenotropic viruses replicate or reproduce in cells other than those of the host species. Murine refers to the rodent family Muridae, which includes common household rats and mice.
Subsequent research established that XMRV was in fact a laboratory contaminant, rather than a novel pathogen, and had been generated unintentionally in the laboratory through genetic recombination between two mouse retroviruses during propagation of a prostate-cancer cell line in the mid-1990s. These findings raised serious questions concerning the findings of XMRV-related studies which purported to find connections between XMRV and human diseases. There is no evidence that XMRV infects humans, nor that XMRV is associated with or causes any human disease.
Classification and genome
XMRV is a murine leukemia virus (MLV) that formed through the recombination of the genomes of two parent MLVs known as preXMRV-1 and preXMRV-2. MLVs belong to the virus family Retroviridae and the genus gammaretrovirus and have a single-stranded RNA genome that replicates through a DNA intermediate. The name XMRV was given because the discoverers of the virus initially thought that it was a novel potential human pathogen that was related to but distinct from MLVs. The XMRV particle is approximately spherical and 80 to 100 nm in diameter. Several XMRV genomic sequences have been published to date. These sequences are almost identical, an unusual finding as retroviruses replicate their genomes with relatively low fidelity, leading to divergent viral sequences in a single host organism. In 2010 the results of phylogenetic analyses of XMRV and related murine retroviruses led a group of researchers to conclude that XMRV "might not be a genuine human pathogen". Xenotropic viruses (xenos Gr. foreign; tropos Gr. turning) were initially discovered in the New Zealand Black (NZB) mouse and later found to be present in many other mouse strains including wild mice.
Discovery
XMRV was discovered in the laboratories of Joseph DeRisi at the University of California, San Francisco, and Robert Silverman and Eric Klein of the Cleveland Clinic. Silverman had previously cloned and investigated the enzyme ribonuclease L (RNase L), part of the cell's natural defense against viruses. When activated, RNase L degrades cellular and viral RNA to halt viral replication. In 2002, the "hereditary prostate cancer 1" locus (HPC1) was mapped to the RNase L gene, implicating it in the development of prostate cancer. The cancer-associated "R462Q" mutation results in a glutamine instead of an arginine at position 462 of the RNase L enzyme, reducing its catalytic activity. A man with two copies of this mutation has twice the risk of prostate cancer; one copy raises the risk by 50%. Klein and Silverman hypothesized that "the putative linkage of RNase L alterations to HPC might reflect enhanced susceptibility to a viral agent" and conducted a viral screen of prostate cancer samples, leading to the discovery of XMRV.
Disease association studies
Prostate cancer
Detection of XMRV was reported in several articles. However, subsequent studies and retractions cast doubt on these findings.
Other conditions
In one study, XMRV was detected in a small percentage of patients with weakened immune systems, but other studies found no evidence of XMRV in immunosuppressed patients.
Controversy and origins
Concerns arose as multiple subsequent studies failed to replicate the positive findings of XMRV in the blood of patients with CFS, prostate cancer, and other illnesses.
Separate from these concerns, alarms were raised over the possibility that XMRV might be transmissible by blood transfusion since the virus was recovered from lymphocytes (white blood cells). XMRV is closely related to several known xenotropic mouse viruses which can recognize and enter cells of non-rodent species (including humans) by means of the cell surface xenotropic and polytropic retrovirus receptor 1 (XPR1). As a result, the AABB (formerly the American Association of Blood Banks) established a task force to determine the prevalence of XMRV in the United States' blood donation supply and the suitability of different detection methods.
In September 2011, the Scientific Research Working Group (SRWG) arm of the AABB task force released its findings that current assays could not reliably identify XMRV in human blood samples which had previously tested as XMRV/MLV positive; the only two labs which reported positive findings of XMRV in samples which were previously reported as positive (the WPI and NCI/Ruscetti labs) also reported positive findings in samples which were known XMRV negative.
Multiple contemporary studies concluded that XMRV was most likely a result of incidental recombination of mouse viruses during prostate cancer research in the 1990s. Positive findings of the virus were likely due to contamination rather than true presence of the virus in humans. A subsequent analysis also found that the primers used to detect and replicate traces of XMRV in PCR testing are, in fact, neither selective nor specific to XMRV and will actually react to various non-XMRV sequences naturally found in mammalian genomes. In the meantime, multiple other studies also failed to find any link between XMRV and CFS or prostate cancer. As a result, many of the key publications which did claim an association were voluntarily retracted. This included the initial study which had linked XMRV to CFS, which was retracted at Silverman's request; one of the co-authors, Judy Mikovits, was also accused of scientific misconduct.
References
External links
XMRV Fact Sheet, AABB.org (formerly American Association of Blood Banks) (Archived), Updated June 18, 2010
Center for Disease Control: XMRV Information Archived
Gammaretroviruses
Medical controversies
Unaccepted virus taxa | Xenotropic murine leukemia virus–related virus | Biology | 1,391 |
7,683,011 | https://en.wikipedia.org/wiki/Hyponitrous%20acid | Hyponitrous acid is a chemical compound with formula or HON=NOH. It is an isomer of nitramide, H2N−NO2; and a formal dimer of azanone, HNO.
Hyponitrous acid forms two series of salts, the hyponitrites containing the [ON=NO]2− anion, and the "acid hyponitrites" containing the [HON=NO]− anion.
Structure and properties
There are two possible structures of hyponitrous acid, trans and cis. trans-Hyponitrous acid forms white crystals that are explosive when dry. In aqueous solution, it is a weak acid (pKa1 = 7.21, pKa2 = 11.54), and decomposes to nitrous oxide and water with a half life of 16 days at 25 °C at pH 1–3:
H2N2O2 → H2O + N2O
Since this reaction is not reversible, N2O should not be considered as the anhydride of H2N2O2.
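A constant half-life implies first-order kinetics; under that assumption, the decomposition rate constant at 25 °C and pH 1–3 follows directly from the 16-day half-life quoted above:

k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{16\ \mathrm{d}} \approx 0.043\ \mathrm{d^{-1}}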
The cis acid is not known, but its sodium salt can be obtained.
Preparation
Hyponitrous acid (trans) can be prepared from silver(I) hyponitrite and anhydrous HCl in ether:
Ag2N2O2 + 2 HCl → H2N2O2 + 2 AgCl
Spectroscopic data indicate a trans configuration for the resulting acid.
It can also be synthesized from hydroxylamine and nitrous acid:
NH2OH + HNO2 → H2N2O2 + H2O
Biological aspects
In enzymology, a hyponitrite reductase is an enzyme that catalyzes the chemical reaction
H2N2O2 + 2 NADH + 2 H+ ⇌ 2 NH2OH + 2 NAD+
References
Pnictogen oxoacids
Nitrogen oxoacids
Oxidizing acids
Hyponitrites | Hyponitrous acid | Chemistry | 422 |
23,381,445 | https://en.wikipedia.org/wiki/Pawsey%20Medal | The Pawsey Medal is awarded annually by the Australian Academy of Science to recognize outstanding research in the physics by an Australian scientist early in their career (up to 10 years post-PhD).
This medal commemorates the work of the late Joseph L. Pawsey, FAA.
Winners
Source:
See also
List of physics awards
List of Australian Science and Technology Awards
Notes
External links
Pawsey Medal site of the Australian Academy of Science
Physics awards
Awards established in 1967
Australian Academy of Science Awards
Australian science and technology awards
Early career awards | Pawsey Medal | Technology | 104 |
32,629 | https://en.wikipedia.org/wiki/Video%20game%20console | A video game console is an electronic device that outputs a video signal or image to display a video game that can typically be played with a game controller. These may be home consoles, which are generally placed in a permanent location connected to a television or other display devices and controlled with a separate game controller, or handheld consoles, which include their own display unit and controller functions built into the unit and which can be played anywhere. Hybrid consoles combine elements of both home and handheld consoles.
Video game consoles are a specialized form of a home computer geared towards video game playing, designed with affordability and accessibility to the general public in mind, but lacking in raw computing power and customization. Simplicity is achieved in part through the use of game cartridges or other simplified methods of distribution, easing the effort of launching a game. However, this leads to ubiquitous proprietary formats that create competition for market share. More recent consoles have shown further confluence with home computers, making it easy for developers to release games on multiple platforms. Further, modern consoles can serve as replacements for media players with capabilities to play films and music from optical media or streaming media services.
Video game consoles are usually sold on a five–seven year cycle called a generation, with consoles made with similar technical capabilities or made around the same time period grouped into one generation. The industry has developed a razor and blades model: manufacturers often sell consoles at low prices, sometimes at a loss, while primarily making a profit from the licensing fees for each game sold. Planned obsolescence then draws consumers into buying the next console generation. While numerous manufacturers have come and gone in the history of the console market, there have always been two or three dominant leaders in the market, with the current market led by Sony (with their PlayStation brand), Microsoft (with their Xbox brand), and Nintendo (currently producing the Switch console). Previous console developers include Sega, Atari, Coleco, Mattel, NEC, SNK, Fujitsu, and 3DO.
History
The first video game consoles were produced in the early 1970s. Ralph H. Baer devised the concept of playing simple, spot-based games on a television screen in 1966, which later became the basis of the Magnavox Odyssey in 1972. Inspired by the table tennis game on the Odyssey, Nolan Bushnell, Ted Dabney, and Allan Alcorn at Atari, Inc. developed the first successful arcade game, Pong, and looked to develop that into a home version, which was released in 1975. The first consoles were capable of playing only a very limited number of games built into the hardware. Programmable consoles using swappable ROM cartridges were introduced with the Fairchild Channel F in 1976, though popularized with the Atari 2600 released in 1977.
Handheld consoles emerged from technology improvements in handheld electronic games as these shifted from mechanical to electronic/digital logic, and away from light-emitting diode (LED) indicators to liquid-crystal displays (LCD) that resembled video screens more closely. Early examples include the Microvision in 1979 and Game & Watch in 1980, and the concept was fully realized by the Game Boy in 1989.
Both home and handheld consoles have become more advanced following global changes in technology. These technological shifts include improved electronic and computer chip manufacturing to increase computational power at lower costs and size, the introduction of 3D graphics and hardware-based graphic processors for real-time rendering, digital communications such as the Internet, wireless networking and Bluetooth, and larger and denser media formats as well as digital distribution.
Following the same type of Moore's law progression, home consoles are grouped into generations, each lasting approximately five years. Consoles within each generation share similar specifications and features, such as processor word size. While no one grouping of consoles by generation is universally accepted, one breakdown of generations, showing representative consoles of each, is shown below.
Form factor
Home video game console
Home video game consoles are meant to be connected to a television or other type of monitor, with power supplied through an outlet. This requires the unit to be used in a fixed location, typically at home in one's living room. Separate game controllers, connected through wired or wireless connections, are used to provide input to the game. Early examples include the Atari 2600, the Nintendo Entertainment System, and the Sega Genesis; newer examples include the Wii U, the PlayStation 5, and the Xbox Series X.
Microconsole
A microconsole is a home video game console that is typically powered by low-cost computing hardware, making the console lower-priced compared to other home consoles on the market. The majority of microconsoles, with a few exceptions such as the PlayStation TV and OnLive Game System, are Android-based digital media players that are bundled with gamepads and marketed as gaming devices. Such microconsoles can be connected to the television to play video games downloaded from an application store such as Google Play.
Handheld game console
Handheld game consoles are devices that typically include a built-in screen and game controller in their case, and contain a rechargeable battery or battery compartment. This allows the unit to be carried around and played anywhere, in contrast to a home game console. Examples include the Game Boy, the PlayStation Portable, and the Nintendo 3DS.
Hybrid video game console
Hybrid video game consoles are devices that can be used either as a handheld or as a home console. They have either a wired connection or docking station that connects the console unit to a television screen and fixed power source, and the potential to use a separate controller. However, they can also be used as a handheld. While prior handhelds like the Sega Nomad and PlayStation Portable, or home consoles such as the Wii U, have had these features, some consider the Nintendo Switch to be the first true hybrid console.
Functionality
Most consoles are considered programmable consoles and have the means for the player to switch between different games. Traditionally, this has been done by switching a physical game cartridge or game card or by using optical media. It is now common to download games through digital distribution and store them on internal or external digital storage devices.
Dedicated console
Some consoles are considered dedicated consoles, in which games available for the console are "baked" onto the hardware, either by being programmed via the circuitry or set in the read-only flash memory of the console. Thus, the console's game library cannot be added to or changed directly by the user. The user can typically switch between games on dedicated consoles using hardware switches on the console, or through in-game menus. Dedicated consoles were common in the first generation of home consoles, such as the Magnavox Odyssey and the home console version of Pong, and more recently have been used for retro style consoles such as the NES Classic Edition and Sega Genesis Mini.
Dedicated consoles were very popular in the first generation, until they were gradually displaced by the programmable second-generation consoles that used ROM cartridges; cartridges in turn began to give way to optical media from the fourth generation onward.
Retro style console
Later in video game history, specialized consoles using modern computing components have been offered with multiple games built in. Most of these plug directly into one's television, and thus are often called plug-and-play consoles. Most are also considered dedicated consoles, since an average consumer generally cannot access the computing components, though tech-savvy consumers have often found ways to hack a console to install additional functionality, voiding the manufacturer's warranty. Plug-and-play consoles usually come with the console unit itself, one or more controllers, and the required components for power and video hookup. Many recent plug-and-play releases have been vehicles for distributing a number of retro games for a specific console platform. Examples include the Atari Flashback series, the NES Classic Edition, the Sega Genesis Mini, and handheld retro consoles such as the Nintendo Game & Watch color screen series.
Components
Console unit
Early console hardware was designed as customized printed circuit boards (PCBs), selecting existing integrated circuit chips that performed known functions, or programmable chips like erasable programmable read-only memory (EPROM) chips that could perform certain functions. Persistent computer memory was expensive, so dedicated consoles were generally limited to the use of processor registers for storing the state of a game, limiting the complexity of such titles. Pong, in both its arcade and home formats, had a handful of logic and calculation chips that used the current input of the players' paddles and registers storing the ball's position to update the game's state and send it to the display device. Even with the more advanced integrated circuits (ICs) of the time, designers were limited to what could be done through the electrical process rather than through programming as normally associated with video game development.
Improvements in console hardware followed with improvements in microprocessor technology and semiconductor device fabrication. Manufacturing processes have been able to reduce the feature size on chips (typically measured in nanometers), allowing more transistors and other components to fit on a chip while increasing circuit speeds and the potential frequency the chip can run at, as well as reducing thermal dissipation. Chips were able to be made on larger dies, further increasing the number of features and effective processing power. Random-access memory became more practical with the higher density of transistors per chip, but to address the correct blocks of memory, processors needed to be updated to use larger word sizes and allow for larger bandwidth in chip communications. All these improvements did increase the cost of manufacturing, but at a rate far less than the gains in overall processing power, which helped to make home computers and consoles inexpensive for the consumer, all related to Moore's law of technological improvements.
These improvements were evident in console marketing of the late 1980s and 1990s during the "bit wars", when console manufacturers focused on their console's processor word size as a selling point. Consoles since the 2000s are more similar to personal computers, building in memory, storage features, and networking capabilities to avoid the limitations of the past. The confluence with personal computers eased software development for both computer and console games, allowing developers to target both platforms. However, consoles differ from computers in that most of the hardware components are preselected and customized between the console manufacturer and hardware component provider to assure a consistent performance target for developers. Whereas personal computer motherboards are designed to allow consumers to add their desired selection of hardware components, the fixed set of hardware for consoles enables console manufacturers to optimize the size and design of the motherboard and hardware, often integrating key hardware components into the motherboard circuitry itself. Often, multiple components, such as the central processing unit and graphics processing unit, can be combined into a single chip, otherwise known as a system on a chip (SoC), which is a further reduction in size and cost. In addition, consoles tend to focus on components that give the unit high game performance, such as the CPU and GPU, and as a tradeoff to keep their prices in expected ranges, use less memory and storage space compared to typical personal computers.
In comparison to the early years of the industry, where most consoles were made directly by the company selling the console, many consoles of today are generally constructed through a value chain that includes component suppliers, such as AMD and NVidia for CPU and GPU functions, and contract manufacturers including electronics manufacturing services, factories which assemble those components into the final consoles such as Foxconn and Flextronics. Completed consoles are then usually tested, distributed, and repaired by the company itself. Microsoft and Nintendo both use this approach to their consoles, while Sony maintains all production in-house with the exception of their component suppliers.
Some of the common elements that can be found within console hardware include:
Motherboard
The primary PCB that all of the main chips, including the CPU, are mounted on.
Daughterboard
A secondary PCB that connects to the motherboard that would be used for additional functions. These may include components that can be easily replaced later without having to replace the full motherboard.
Central processing unit (CPU)
The main processing chip on the console that performs most of the computational workload.
The console's CPU is generally defined by its word size (such as 8-bit or 64-bit) and its clock speed or frequency in hertz. For some CPUs, the clock speed can vary in response to software needs. In general, larger word sizes and faster clock speeds indicate better performance, but other factors impact the actual speed.
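As a rough illustration of the relationship between these figures, the following sketch uses hypothetical word sizes and clock speeds (none drawn from a specific console) to show how word size bounds the directly addressable memory and how clock speed bounds raw instruction throughput; real CPUs use banking, caches, and pipelining that change these numbers considerably:

def addressable_bytes(word_bits):
    # Flat address space reachable with a pointer of this width.
    return 2 ** word_bits

def peak_instructions_per_second(clock_hz, instructions_per_cycle=1):
    # Upper bound only; real throughput depends on the architecture.
    return clock_hz * instructions_per_cycle

for bits, clock_hz in [(8, 1.8e6), (16, 7.6e6), (64, 3.8e9)]:
    print(f"{bits}-bit @ {clock_hz:.1e} Hz: "
          f"{addressable_bytes(bits):,} addressable bytes, "
          f"~{peak_instructions_per_second(clock_hz):.1e} instructions/s")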
Another distinguishing feature of a console's CPU is its instruction set architecture. The instruction set defines the low-level machine code sent to the CPU to achieve specific results on the chip. Differences between the instruction set architectures of consoles in a given generation can make software portability difficult. This had been used by manufacturers to keep software titles exclusive to their platform as one means to compete with others. Consoles prior to the sixth generation typically used chips that the hardware and software developers were most familiar with, but as personal computers stabilized on the x86 architecture, console manufacturers followed suit to make it easier to port games between computer and console.
Newer CPUs may also feature multiple processing cores, which are also identified in their specification. Multi-core CPUs allow for multithreading and parallel computing in modern games, such as one thread for managing the game's rendering engine, one for the game's physics engine, and another for evaluating the player's input.
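A minimal sketch of this thread-per-subsystem split, in Python for illustration only (real engines use native code, job systems, and careful synchronization between subsystems):

import threading

def run_input():      # poll controllers and queue player commands
    pass

def run_physics():    # advance the simulation by fixed timesteps
    pass

def run_renderer():   # draw the current game state each frame
    pass

threads = [threading.Thread(target=fn)
           for fn in (run_input, run_physics, run_renderer)]
for t in threads:
    t.start()
for t in threads:
    t.join()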
Graphical processing unit (GPU)
The processing unit that performs rendering of data from the CPU to the video output of the console.
In the earlier console generations, this was generally limited to simple graphic processing routines, such as bitmapped graphics and manipulation of sprites, all otherwise involving integer mathematics while minimizing the amount of memory needed to complete these routines, as memory was expensive. For example, the Atari 2600 used its own Television Interface Adaptor that handled video and audio, while the Nintendo Entertainment System used the Picture Processing Unit. For consoles, these GPUs were also designed to send the signal in the proper analog format for a cathode ray television, either NTSC (used in Japan and North America) or PAL (mostly used in Europe). These two formats differed in their refresh rates, 60 versus 50 hertz, so consoles and games manufactured for PAL markets ran the CPU and GPU at lower frequencies.
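The practical consequence of the two standards can be shown with a little arithmetic; the frame-time figures below follow directly from the refresh rates, while the note about slowdown assumes a game whose logic is tied to the frame rate:

# Frame time budget under each television standard.
for standard, refresh_hz in [("NTSC", 60), ("PAL", 50)]:
    print(f"{standard}: {refresh_hz} Hz -> {1000 / refresh_hz:.1f} ms per frame")
# A game loop tied to the frame rate runs about 17% slower on PAL
# unless the code is adjusted, which is why many PAL releases of
# NTSC-timed games played noticeably slower.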
The introduction of real-time polygonal 3D graphics rendering in the early 1990s—not just an innovation in video games for consoles but in arcade and personal computer games—led to the development of GPUs that were capable of performing the floating-point calculations needed for real-time 3D rendering. In contrast to the CPU, modern GPUs for consoles and computers, principally made by AMD and NVidia, are highly parallel computing devices with a number of compute units or streaming multiprocessors (AMD and NVidia terminology, respectively) within a single chip. Each compute unit or multiprocessor contains a scheduler, a number of subprocessing units, memory caches and buffers, and dispatching and collecting units, which also may be highly parallel in nature. Modern console GPUs can be run at a different frequency from the CPU, even at variable frequencies to increase processing power at the cost of higher energy draw. The performance of GPUs in consoles can be estimated through floating-point operations per second (FLOPS), more commonly expressed as teraflops (TFLOPS = 10^12 FLOPS). However, particularly for consoles, this is considered a rough number, as several other factors such as the CPU, memory bandwidth, and console architecture can impact the GPU's true performance.
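As a worked example of how such a figure is typically computed, the sketch below multiplies shader unit count, clock speed, and operations per cycle; the unit count and clock are hypothetical, and two operations per unit per cycle assumes fused multiply-add hardware:

def peak_tflops(shader_units, clock_ghz, flops_per_unit_per_cycle=2):
    # Theoretical peak = units x clock (Hz) x FLOPs per unit per cycle.
    return shader_units * clock_ghz * 1e9 * flops_per_unit_per_cycle / 1e12

# Hypothetical GPU: 2,304 shader units at 1.8 GHz.
print(f"{peak_tflops(2304, 1.8):.2f} TFLOPS")  # ~8.29 TFLOPS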
Coprocessors
Additional processors used to handle other dedicated functions on the console. Many early consoles featured an audio coprocessor, for example.
Northbridge
The processor unit that, outside of the CPU and GPU, typically manages the fastest processing elements on the computer. Typically this involves communication of data between the CPU, the GPU, and the on-board RAM, and subsequently sending and receiving information with the southbridge.
Southbridge
The counterpart of the northbridge, the southbridge is the processing unit that handles slower processing components of the console, typically those of input/output (I/O) with some internal storage and other connected devices like controllers.
BIOS
The console's BIOS (Basic Input/Output System) is the fundamental instruction set baked into a firmware chip on the console circuit board that the console uses when it is first turned on to direct operations. In older consoles, prior to the introduction of onboard storage, the BIOS effectively served as the console's operating system, while in modern consoles, the BIOS is used to direct loading of the console's operating system off internal memory.
Random-access memory (RAM)
Memory storage that is designed for fast reading and writing, often used in consoles to store large amounts of data about a game while it is being played to avoid reading from the slower game media. RAM typically does not retain its contents after the console is powered off. Besides the amount of RAM available, a key measurement of performance for consoles is the RAM's bandwidth: how fast, in bytes per second, the RAM can be written to and read from. Data must be transferred to and from the CPU and GPU quickly as needed, without requiring these chips to carry large memory caches themselves.
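To illustrate why bandwidth matters as much as capacity, the following back-of-the-envelope sketch estimates the memory traffic generated just by reading and writing one 4K frame buffer at 60 frames per second; the resolution, pixel size, and frame rate are illustrative assumptions, and real workloads move far more data than this:

width, height, bytes_per_pixel, fps = 3840, 2160, 4, 60
frame_bytes = width * height * bytes_per_pixel   # one RGBA frame
traffic_gb_per_s = frame_bytes * fps * 2 / 1e9   # x2: one read + one write
print(f"{traffic_gb_per_s:.1f} GB/s")            # ~4.0 GB/s before any game data moves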
Internal storage
Newer consoles have included internal storage devices, such as flash memory, hard disk drives (HDD) and solid-state drives (SSD), to save data persistently. Early applications of internal storage were for saving game states; more recently it is used to store the console's operating system, game patches and updates, games downloaded through the Internet, additional content for those games, and additional media such as purchased movies and music. Most consoles provide the means to manage the data on this storage while respecting the copyrights on the system. Newer consoles, such as the PlayStation 5 and Xbox Series X, use high-speed SSDs not only for storage but to augment the console's RAM, as the combination of their I/O speeds and the use of decompression routines built into the system software gives overall read speeds that approach those of the onboard RAM.
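A simple model of this effect multiplies the drive's raw throughput by the average compression ratio; both numbers below are assumptions for illustration, not the measured specification of any console:

def effective_read_gb_per_s(raw_gb_per_s, compression_ratio):
    # Compressed data is read at raw drive speed, then expands in memory
    # after hardware or system-software decompression.
    return raw_gb_per_s * compression_ratio

print(f"{effective_read_gb_per_s(5.5, 1.6):.1f} GB/s")  # hypothetical drive: 8.8 GB/s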
Power supply
Besides converting AC power from a wall socket to the DC power needed by the console electronics, the power supply also helps to regulate that power in cases of power surges. Some consoles' power supplies are built into the unit, so that the consumer plugs the unit directly into a wall socket, but more often, the console ships with an AC adapter, colloquially known as a "power brick", that converts the power outside of the unit. On handheld units, power comes either from a battery compartment, from a rechargeable battery pack built into the unit, or optionally from a direct connection to an AC adapter.
Cooling systems
More advanced computing systems generate heat, and require active cooling systems to keep the hardware at safe operating temperatures. Many newer consoles are designed with cooling fans, engineered cooling fins, internal layouts, and strategically-placed vents on the casing to assure good convective heat transfer for keeping the internal components cool.
Media reader
Since the introduction of game cartridges, nearly all consoles have a cartridge port/reader or an optical drive for game media. In the latter console generations, some console revisions have offered options without a media reader as a means to reduce the console's cost and letting the consumer rely on digital distribution for game acquisition, such as with the Xbox One S All-Digital Edition or the PlayStation 5 Digital Edition.
Case
All consoles are enclosed in a case to protect the electronics from damage and to constrain the air flow for cooling.
Input/output ports
Ports for connecting power, controllers, televisions or video monitors, external storage devices, Internet connectivity, and other features are placed in strategic locations on the console. Controller connections are typically offered on the front of the console, while power and most other connections are usually found on the back to keep cables out of the way.
Controllers
All game consoles require player input through a game controller to provide a method to move the player character in a specific direction and a variety of buttons to perform other in-game actions such as jumping or interacting with the game world. Though controllers have become more feature-rich over the years, they still provide less control over a game compared to personal computers or mobile gaming. The type of controller available to a game can fundamentally change the style of how a console game will or can be played. However, this has also inspired changes in game design to create games that accommodate the comparatively limited controls available on consoles.
Controllers have come in a variety of styles over the history of consoles. Some common types include:
Paddle
A unit with a single knob or dial and usually one or two buttons. Turning the knob typically allows one to move an on-screen object along one axis (such as the paddle in a table tennis game), while the buttons can have additional features.
Joystick
A unit that has a long handle that can pivot freely along multiple directions along with one or more buttons. The unit senses the direction that the joystick is pushed, allowing for simultaneous movement in two directions within a game.
Gamepad
A unit that contains a variety of buttons, triggers, and directional controls, either D-pads or analog sticks or both. These have become the most common type of controller since the third generation of console hardware, with designs becoming more detailed to give players a larger array of buttons and directional controls while maintaining ergonomic features.
Numerous other controller types exist, including those that support motion controls, touchscreen support on handhelds and some consoles, and specialized controllers for specific types of games, such as racing wheels for racing games, light guns for shooting games, and musical instrument controllers for rhythm games. Some newer consoles also include optional support for mouse and keyboard devices. Some older consoles, such as the 1988 Sega Genesis (known as the Mega Drive outside North America) and the 1993 3DO Interactive Multiplayer, supported optional mice made specifically for them, though the 3DO mouse, like the console itself, sold poorly, and the Genesis mouse had very limited game support. The Genesis also supported the optional Menacer, a wireless infrared light gun of a type that was at one point popular for games, as well as the BatterUP, a baseball bat-shaped controller.
A controller may be attached to the console through a wired connection (in some unique cases, such as the Famicom, the controller is hardwired to the console) or through a wireless connection. Controllers require power, provided either by the console via the wired connection, or from batteries or a rechargeable battery pack for wireless connections. On handheld units the controls are nominally built into the unit, though some newer handhelds allow separate wireless controllers to be used as well.
Game media
While the first game consoles were dedicated game systems, with the games programmed into the console's hardware, the Fairchild Channel F introduced the ability to store games in a form separate from the console's internal circuitry, thus allowing the consumer to purchase new games to play on the system. Since the Channel F, nearly all game consoles have featured the ability to purchase and swap games in some form, though those forms have changed with improvements in technology.
ROM cartridge or game cartridge
The read-only memory (ROM) cartridge was introduced with the Fairchild Channel F. A ROM cartridge consists of a printed circuit board (PCB) housed inside a plastic casing, with a connector allowing the device to interface with the console. The circuit board can contain a wide variety of components; at the minimum it holds the read-only memory with the game's software written on it. Later cartridges introduced additional components onto the circuit board, such as coprocessors like Nintendo's Super FX chip, to enhance the performance of the console. Some consoles, such as the TurboGrafx-16, used a smart-card-like technology to flatten the cartridge into a credit-card-sized system, which helped reduce production costs but limited the additional features that could be included in the circuitry. PCB-based cartridges waned with the introduction of optical media during the fifth generation of consoles. More recently, ROM cartridges have been based on high-density, low-cost flash memory, which allows for easier mass production of games. Sony used this approach for the PlayStation Vita, and Nintendo continues to use ROM cartridges for its 3DS and Switch products.
Optical media
Optical media, such as CD-ROM, DVD, and Blu-ray, became the primary format for retail distribution with the fifth generation. The CD-ROM format had gained popularity in the 1990s, in the midst of the fourth generation, and as a game medium, CD-ROMs were cheaper and faster to produce, offered much more storage space, and allowed for the potential of full-motion video. Several console manufacturers attempted to offer CD-ROM add-ons to fourth generation consoles, but these were nearly as expensive as the consoles themselves and did not fare well. Instead, the CD-ROM format became integrated into consoles of the fifth generation, with the DVD format present across most by the seventh generation and Blu-ray by the eighth. Console manufacturers have also used proprietary disc formats for copy protection, such as the Nintendo optical disc used on the GameCube, and Sony's Universal Media Disc on the PlayStation Portable.
Digital distribution
Since the seventh generation of consoles, most consoles include integrated connectivity to the Internet and both internal and external storage for the console, allowing players to acquire new games without game media. Nintendo, Sony, and Microsoft all offer an integrated storefront for consumers to purchase new games and download them to their console, retaining the consumers' purchases across different consoles, and offering sales and incentives at times.
Cloud gaming
As Internet access speeds improved throughout the eighth generation of consoles, cloud gaming had gained further attention as a media format. Instead of downloading games, the consumer plays them directly from a cloud gaming service with inputs performed on the local console sent through the Internet to the server with the rendered graphics and audio sent back. Latency in network transmission remains a core limitation for cloud gaming at the present time.
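A rough latency budget makes the constraint concrete. Every figure below is an assumption for illustration; actual numbers vary widely with network conditions and service architecture:

# Milliseconds added by each stage of a cloud gaming round trip.
stage_ms = {
    "controller input upload": 15,
    "server-side rendering":   16,
    "video encoding":           5,
    "video download":          15,
    "decode and display":      10,
}
total_ms = sum(stage_ms.values())
print(f"{total_ms} ms round trip vs ~17 ms for one local 60 Hz frame")  # 61 ms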
Magnetic media
While magnetic storage, such as tape drives and floppy disks, had been popular for software distribution with early personal computers in the 1980s and 1990s, this format did not see much use in console systems. There were some attempts, such as the Bally Astrocade and APF-M1000 using tape drives, as well as the Famicom Disk System for the Nintendo Famicom and the Nintendo 64DD for the Nintendo 64, but these had limited applications, as magnetic media was more fragile and volatile than game cartridges.
External storage
In addition to built-in internal storage, newer consoles often give the consumer the ability to use external storage media to save game data, downloaded games, or other media files from the console. Early iterations of external storage were achieved through the use of flash-based memory cards, first used by the Neo Geo but popularized by the PlayStation. Nintendo continues to support this approach to extend the storage capabilities of the 3DS and Switch, standardizing on the current SD card format. As consoles began incorporating USB ports, support for USB external hard drives was also added, such as with the Xbox 360.
Online services
With Internet-enabled consoles, console manufacturers offer both free and paid-subscription services that provide value-added services atop the basic functions of the console. Free services generally offer user identity services and access to a digital storefront, while paid services allow players to play online games, interact with other users through social networking, use cloud saves for supported games, and gain access to free titles on a rotating basis. Examples of such services include the Xbox network, PlayStation Network, and Nintendo Switch Online.
Console add-ons
Certain consoles saw various add-ons or accessories that were designed to attach to the existing console to extend its functionality. The best example of this was the various CD-ROM add-ons for consoles of the fourth generation, such as the TurboGrafx CD, Atari Jaguar CD, and Sega CD. Other examples of add-ons include the 32X for the Sega Genesis, intended to allow owners of the aging console to play newer games, though it had several technical faults, and the Game Boy Player for the GameCube, which allows it to play Game Boy games.
Accessories
Consumers can often purchase a range of accessories for consoles outside of the above categories. These can include:
Video camera
While these can be used with Internet-connected consoles like webcams for communication with other friends as they would be used on personal computers, video camera applications on consoles are more commonly used in augmented reality/mixed reality and motion sensing games. Devices like the EyeToy for PlayStation consoles and the Kinect for Xbox consoles were center-points for a range of games to support these devices on their respective systems.
Standard headsets
Headsets provide a combination of headphones and a microphone for chatting with other players without disturbing others nearby in the same room.
Virtual reality headsets
Some virtual reality (VR) headsets can operate independently of consoles or use personal computers for their main processing system. To date, the only direct VR support on consoles is the PlayStation VR, though support for VR on other consoles is planned by the other manufacturers.
Docking station
For handheld systems as well as hybrids such as the Nintendo Switch, the docking station makes it easy to insert a handheld to recharge its battery and, if supported, to connect the handheld to a television screen.
Game development
Console development kits
Console or game development kits are specialized hardware units that typically include the same components as the console along with additional chips and components that allow the unit to be connected to a computer or other monitoring device for debugging purposes. A console manufacturer will make the console's dev kit available to registered developers months ahead of the console's planned launch to give developers time to prepare their games for the new system. These initial kits will usually be offered under special confidentiality clauses to protect trade secrets of the console's design, and will be sold at a high cost to the developer as part of keeping this confidentiality. Newer consoles that share features in common with personal computers may no longer use specialized dev kits, though developers are still expected to register and purchase access to software development kits from the manufacturer. For example, any consumer Xbox One can be used for game development after paying a fee to Microsoft to register one's intent to do so.
Licensing
Since the release of the Nintendo Famicom / Nintendo Entertainment System, most video game console manufacturers have employed a strict licensing scheme that limits which games can be developed for the console. Developers and their publishers must pay a fee, typically based on a royalty per unit sold, back to the manufacturer. The cost varies by manufacturer but was estimated to be about per unit in 2012. With additional fees, such as branding rights, this has generally worked out to an industry-wide 30% royalty rate paid to the console manufacturer for every game sold. This is in addition to the cost of acquiring the dev kit to develop for the system.
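As a back-of-the-envelope illustration of how such a royalty divides a sale, the sketch below assumes a hypothetical retail price and the roughly 30% rate described above; it ignores the retailer's own cut and other distribution costs:

retail_price = 60.00      # assumed retail price of one game
platform_royalty = 0.30   # the ~30% industry-wide rate cited above
to_console_maker = retail_price * platform_royalty
remainder = retail_price - to_console_maker
print(f"console maker: ${to_console_maker:.2f}, "
      f"publisher/developer/retail side: ${remainder:.2f}")  # $18.00 / $42.00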
The licensing fee may be collected in a few different ways. In the case of Nintendo, the company generally has controlled the production of game cartridges, with their lockout chips, and optical media for its systems, and thus charges the developer or publisher for each copy it makes as an upfront fee. This also allows Nintendo to review a game's content prior to release and veto games it does not believe appropriate for its system. This strict control led to over 700 unlicensed games for the NES, and numerous others on other Nintendo cartridge-based systems, from developers that found ways to bypass the hardware lockout chips and sell games without paying any royalties to Nintendo, such as Atari through its subsidiary Tengen. This licensing approach was similarly used by most other cartridge-based console manufacturers using lockout chip technology.
With optical media, where the console manufacturer may not have direct control over the production of the media, the developer or publisher typically must establish a licensing agreement to gain access to the console's proprietary storage format for the media, as well as to use the console and manufacturer's logos and branding for the game's packaging, paid back through royalties on sales. In the transition to digital distribution, where the console manufacturer now runs digital storefronts for games, license fees apply to registering a game for distribution on the storefront, again granting access to the console's branding and logo, with the manufacturer taking its cut of each sale as its royalty. In both cases, this still gives console manufacturers the ability to review and reject games they believe unsuitable for the system and deny licensing rights.
With the rise of indie game development, the major console manufacturers have all developed entry level routes for these smaller developers to be able to publish onto consoles at far lower costs and reduced royalty rates. Programs like Microsoft's ID@Xbox give developers most of the needed tools for free after validating the small development size and needs of the team.
Similar licensing concepts apply for third-party accessory manufacturers.
Emulation and backward compatibility
Consoles, like most consumer electronic devices, have limited lifespans. There is great interest in preservation of older console hardware for archival and historical purposes, as games from older consoles, as well as arcade and personal computers, remain of interest. Computer programmers and hackers have developed emulators that can be run on personal computers or other consoles that simulate the hardware of older consoles that allow games from that console to be run. The development of software emulators of console hardware is established to be legal, but there are unanswered legal questions surrounding copyrights, including acquiring a console's firmware and copies of a game's ROM image, which laws such as the United States' Digital Millennium Copyright Act make illegal save for certain archival purposes. Even though emulation itself is legal, Nintendo is recognized to be highly protective of any attempts to emulate its systems and has taken early legal actions to shut down such projects.
To help support older games and console transitions, manufacturers started to support backward compatibility on consoles in the same family. Sony was the first to do this on a home console with the PlayStation 2 which was able to play original PlayStation content, and subsequently became a sought-after feature across many consoles that followed. Backward compatibility functionality has included direct support for previous console games on the newer consoles such as within the Xbox console family, the distribution of emulated games such as Nintendo's Virtual Console, or using cloud gaming services for these older games as with the PlayStation Now service.
Market
Distribution
Consoles may be shipped in a variety of configurations, but typically will include one base configuration that includes the console, one controller, and sometimes a pack-in game. Manufacturers may offer alternate stock keeping unit (SKU) options that include additional controllers and accessories or different pack-in games. Special console editions may feature unique cases or faceplates with art dedicated to a specific video game or series and are bundled with that game as a special incentive for its fans. Pack-in games are typically first-party games, often featuring the console's primary mascot characters.
The more recent console generations have also seen multiple versions of the same base console system either offered at launch or presented as a mid-generation refresh. In some cases, these simply replace some parts of the hardware with cheaper or more efficient parts, or otherwise streamline the console's design for production going forward; the PlayStation 3 underwent several such hardware refreshes during its lifetime due to technological improvements such as significant reduction of the process node size for the CPU and GPU. In these cases, the hardware revision model will be marked on packaging so that consumers can verify which version they are acquiring.
In other cases, the hardware changes create multiple lines within the same console family. The base console unit in all revisions shares fundamental hardware, but options like internal storage space and RAM size may differ. The systems with more storage and RAM are marketed as higher performance variants available at a higher cost, while the original unit remains as a budget option. For example, within the Xbox One family, Microsoft released the mid-generation Xbox One X as a higher performance console, the Xbox One S as the lower-cost base console, and a special Xbox One S All-Digital Edition revision that removed the optical drive on the basis that users could download all games digitally, offered at an even lower cost than the Xbox One S. In these cases, developers can often optimize games to work better on the higher-performance console with patches to the retail version of the game. In the case of the Nintendo 3DS, the New Nintendo 3DS featured upgraded memory and processors, with some new games that could run only on the upgraded units and not on an older base unit. There have also been a number of "slimmed-down" console options with reduced hardware components that significantly lowered the price of the console to the consumer, but either left certain features off the console, such as the Wii Mini, which lacked any online components compared to the Wii, or required the consumer to purchase additional accessories and wiring if they did not already own them, such as the New-Style NES, which was not bundled with the RF hardware required to connect to a television.
Pricing
Consoles when originally launched in the 1970s and 1980s were about , and with the introduction of the ROM cartridge, each game averaged about . Over time the launch price of base console units has generally risen to about , with the average game costing . Exceptionally, the period of transition from ROM cartridges to optical media in the early 1990s saw several consoles with high price points exceeding and going as high as . As a result, sales of these first optical media consoles were generally poor.
When adjusted for inflation, the price of consoles has generally followed a downward trend from the early generations to the current ones. This is typical for any computer technology, with the improvements in computing performance and capabilities outpacing the additional costs to achieve those gains. Further, within the United States, the price of consoles has generally remained consistent, falling within 0.8% to 1% of the median household income, based on United States Census data for the console's launch year.
Since the Nintendo Entertainment System, console pricing has stabilized on the razor and blades model, where consoles are sold at little to no profit for the manufacturer, which instead gains revenue from each game sold through console licensing fees and other value-added services around the console (such as Xbox Live). Console manufacturers have even been known to take losses on the sale of consoles at the start of a console's launch, with the expectation of recovering through revenue sharing and later price recovery on the console as they switch to less expensive components and manufacturing processes without changing the retail price. Consoles have generally been designed to have a five-year product lifetime, though manufacturers have considered their entries in the more recent generations to have longer lifetimes of seven to potentially ten years.
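The arithmetic of the model is straightforward; the per-console loss and per-game royalty below are illustrative assumptions, not any manufacturer's actual figures:

loss_per_console = 100    # assumed subsidy per unit sold at launch
royalty_per_game = 18     # e.g. 30% of an assumed $60 game
print(f"{loss_per_console / royalty_per_game:.1f} games per console "
      f"to break even")   # ~5.6 games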
Competition
The competition within the video game console market, as a subset of the video game industry, is an area of interest to economics given its relatively modern history, its rapid growth to rival that of the film industry, and its frequent changes compared to other sectors.
Effects of unregulated competition on the market were twice seen early in the industry. The industry had its first crash in 1977, after the release of the Magnavox Odyssey, Atari's home versions of Pong, and the Coleco Telstar led other third-party manufacturers, using inexpensive General Instrument processor chips, to make their own home consoles, which flooded the market. The video game crash of 1983 was fueled by multiple factors, including competition from lower-cost personal computers, but unregulated competition was also a factor, as numerous third-party game developers, attempting to follow on the success of Activision in developing third-party games for the Atari 2600 and Intellivision, flooded the market with poor quality games, making it difficult for even quality games to sell. Nintendo implemented a lockout chip, the Checking Integrated Circuit, on releasing the Nintendo Entertainment System in Western territories, as a means to control which games were published for the console. As part of their licensing agreements, Nintendo further prevented developers from releasing the same game on a different console for a period of two years. This served as one of the first means of securing console exclusivity for games that existed beyond the technical limitations of console development.
The Nintendo Entertainment System also brought the concept of a video game mascot as the representation of a console system as a means to sell and promote the unit; for the NES this was Mario. The use of mascots in businesses had been a tradition in Japan, and the idea had already proven successful in arcade games like Pac-Man. Mario was used to serve as an identity for the NES as a humor-filled, playful console. Mario caught on quickly when the NES released in the West, and when the next generation of consoles arrived, other manufacturers pushed their own mascots to the forefront of their marketing, most notably Sega with the use of Sonic the Hedgehog. The Nintendo and Sega rivalry that involved their mascots' flagship games served as part of the fourth console generation's "console wars". Since then, manufacturers have typically positioned their mascot and other first-party games as key titles in console bundles used to drive sales of consoles at launch or at key sales periods such as near Christmas.
Another type of competitive edge used by console manufacturers around the same time was the notion of "bits", the size of the word used by the main CPU. The TurboGrafx-16 was the first console to make its bit size a selling point, advertising itself as a "16-bit" console, though this only referred to part of its architecture, while its CPU was still an 8-bit unit. Despite this, manufacturers found consumers became fixated on the notion of bits as a console selling point, and over the fourth, fifth and sixth generations, these "bit wars" played heavily into console advertising. The use of bits waned as CPU architectures no longer needed to increase their word size and instead had other means to improve performance, such as multicore CPUs.
Generally, an increased number of consoles gives rise to more consumer options and better competition, but the exclusivity of titles made the choice of console an "all-or-nothing" decision for most consumers. Further, with the number of available consoles growing over the fifth and sixth generations, game developers became pressured to choose which systems to focus on, and ultimately narrowed their target choice of platforms to those that were the best-selling. This caused a contraction in the market, with major players like Sega leaving the hardware business after the Dreamcast but continuing in the software area. Effectively, each console generation was shown to have two or three dominant players.
Competition in the console market in the 2010s and 2020s is considered an oligopoly between three main manufacturers: Nintendo, Sony, and Microsoft. The three use a combination of first-party games exclusive to their console and exclusive agreements negotiated with third-party developers to have their games be exclusive for at least an initial period of time, to drive consumers to their console. They have also worked with CPU and GPU manufacturers to tune and customize computer hardware to make it more amenable and effective for video games, lowering the cost of the hardware needed for video game consoles. Finally, console manufacturers also work with retailers to help promote consoles, games, and accessories. While there is little margin between the manufacturer's suggested retail price and the retailer's cost on the console hardware itself, these arrangements with the manufacturers can secure better profits on sales of game and accessory bundles and premier product placement. These all form network effects, with each manufacturer seeking to maximize the size of their network of partners to increase their overall position in the competition.
Of the three, Microsoft and Sony, both with their own hardware manufacturing capabilities, pursue a leading-edge approach, each attempting to gain a first-mover advantage over the other in the adoption of new console technology. Nintendo is more reliant on its suppliers and thus, instead of trying to compete feature for feature with Microsoft and Sony, has taken a "blue ocean" strategy since the Nintendo DS and Wii.
See also
Game console sales
Unlockable game
Video game clone
References
Further reading
External links
American inventions
Bundled products or services
Console | Video game console | Technology | 9,052 |
341,658 | https://en.wikipedia.org/wiki/Apathy | Apathy, also referred to as indifference, is a lack of feeling, emotion, interest, or concern about something. It is a state of indifference, or the suppression of emotions such as concern, excitement, motivation, or passion. An apathetic individual has an absence of interest in or concern about emotional, social, spiritual, philosophical, virtual, or physical life and the world. Apathy can also be defined as a person's lack of goal orientation. Apathy falls in the less extreme spectrum of diminished motivation, with abulia in the middle and akinetic mutism being more extreme than both apathy and abulia.
The apathetic may lack a sense of purpose, worth, or meaning in their life. People with severe apathy tend to have a lower quality of life and are at a higher risk for mortality and early institutionalization. They may also exhibit insensibility or sluggishness. In positive psychology, apathy is described as a result of the individuals' feeling they do not possess the level of skill required to confront a challenge (i.e. "flow"). It may also be a result of perceiving no challenge at all (e.g., the challenge is irrelevant to them, or conversely, they have learned helplessness). Apathy is usually felt only in the short term, but sometimes it becomes a long-term or even lifelong state, often leading to deeper social and psychological issues.
Apathy should be distinguished from reduced affect display, which refers to reduced emotional expression but not necessarily reduced emotion.
Pathological apathy, characterized by extreme forms of apathy, is now known to occur in many different brain disorders, including neurodegenerative conditions often associated with dementia such as Alzheimer's disease, Parkinson's disease, and psychiatric disorders such as schizophrenia. Although many patients with pathological apathy also have depression, several studies have shown that the two syndromes are dissociable: apathy can occur independent of depression and vice versa.
Etymology
Although the word apathy was first used in 1594 and is derived from the Greek (apatheia), from (apathēs, "without feeling" from a- ("without, not") and pathos ("emotion")), it is important not to confuse the two terms. Also meaning "absence of passion," "apathy" or "insensibility" in Greek, the term apatheia was used by the Stoics to signify a (desirable) state of indifference toward events and things that lie outside one's control (that is, according to their philosophy, all things exterior, one being only responsible for one's own representations and judgments). In contrast to apathy, apatheia is considered a virtue, especially in Orthodox monasticism. In the Philokalia the word dispassion is used for apatheia, so as not to confuse it with apathy.
History and other views
Christians have historically condemned apathy as a deficiency of love and devotion to God and his works. This interpretation of apathy is also referred to as Sloth and is listed among the Seven Deadly Sins. Clemens Alexandrinus used the term to draw philosophers who aspired after virtue to gnostic Christianity.
The modern concept of apathy became more well known after World War I, when it was one of the various forms of "shell shock", now better known as post-traumatic stress disorder (PTSD). Soldiers who lived in the trenches amidst the bombing and machine gun fire, and who saw the battlefields strewn with dead and maimed comrades, developed a sense of disconnected numbness and indifference to normal social interaction when they returned from combat.
In 1950, US novelist John Dos Passos wrote: "Apathy is one of the characteristic responses of any living organism when it is subjected to stimuli too intense or too complicated to cope with. The cure for apathy is comprehension."
Social origin
There may be other factors contributing to a person's apathy.
Apathy has been socially viewed as worse than things such as hate or anger. Not caring whatsoever, in the eyes of some, is even worse than having distaste for something. Author Leo Buscaglia is quoted as saying "I have a very strong feeling that the opposite of love is not hate-it's apathy. It's not giving a damn." Helen Keller stated that apathy is the "worst of them all" when it comes to the various evils in the world. French social commentator and political thinker Charles de Montesquieu stated that "the tyranny of a prince in an oligarchy is not so dangerous to the public welfare as the apathy of a citizen in the democracy." As can be seen by these quotes and various others, the social implications of apathy are great. Many people believe that not caring at all can be worse for society than individuals who are overpowering or hateful.
In the school system
Apathy in students, especially those in high school, is a growing problem. It causes teachers to lower standards in an attempt to engage their students. Apathy in schools is most easily recognized by students being unmotivated or, quite commonly, motivated only by outside factors. For example, when asked about their motivation for doing well in school, fifty percent of students cited outside sources such as "college acceptance" or "good grades". By contrast, only fourteen percent cited "gaining an understanding of content knowledge or learning subject material" as their motivation to do well in school. Driven by these outside sources rather than a genuine desire for knowledge, students often do the minimum amount of work necessary to get by in their classes. This then leads to average grades and test scores but no real grasp of the material. Many students cited that "assignments/content was irrelevant or meaningless" as the cause of their apathetic attitudes toward their schooling, leading to teacher and parent frustration. Other causes of apathy in students include situations within their home life, media influences, peer influences, and school struggles and failures. Some of the signs of apathetic students include declining grades, skipping classes, routine illness, and behavioral changes both in school and at home. In order to combat this, teachers have to be aware that students have different motivation profiles; i.e., they are motivated by different factors or stimuli.
Bystander
Also known as the bystander effect, bystander apathy occurs when, during an emergency, bystanders do nothing to help but instead stand by and watch. Sometimes this can be caused by one bystander observing other bystanders and imitating their behavior. If other people are not acting in a way that makes the situation seem like an emergency that needs attention, other bystanders will often act in the same way. The diffusion of responsibility can also be to blame for bystander apathy: the more people that are around in an emergency situation, the more likely individuals are to think that someone else will help, so they do not need to. This theory was popularized by social psychologists in response to the 1964 Kitty Genovese murder. The murder took place in New York, and the victim, Genovese, was stabbed to death as bystanders reportedly stood by and did nothing to stop the attack or even call the police. Latané and Darley are the two psychologists who did research on this theory. They performed different experiments that placed people into situations where they had the opportunity to intervene or do nothing. The individuals in the experiments were either by themselves, with a stranger or strangers, with a friend, or with a confederate. The experiments ultimately led them to the conclusion that there are many social and situational factors behind whether a person will react in an emergency situation or simply remain apathetic to what is occurring.
Measurement
Several different questionnaires and clinical interview instruments have been used to measure pathological apathy or, more recently, apathy in healthy people.
Apathy Evaluation Scale
Developed by Robert Marin in 1991, the Apathy Evaluation Scale (AES) was the first method developed to measure apathy in clinical populations. The scale can be either self-reported or completed by another party. The three versions of the test include the self version, the informant version (completed by, for example, a family member), and the clinician version. The scale is based around questionnaires that ask about topics including interest, motivation, socialization, and how the individual spends their time. The individual or informant answers on a scale of "not at all", "slightly", "somewhat", or "a lot". Each item on the evaluation is phrased with positive or negative syntax and deals with cognition, behavior, and emotion. Each item is then scored and, based on the total score, the individual's level of apathy can be evaluated.
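As an illustration of Likert-style scoring of the general kind the AES uses, the sketch below maps the four response options to numbers and reverse-keys positively worded items; the items, keying, and interpretation here are invented for illustration and do not reproduce the published instrument:

scale = {"not at all": 1, "slightly": 2, "somewhat": 3, "a lot": 4}

# (response, positively_worded) pairs: positively worded items, where
# agreement indicates interest or motivation, are reverse-keyed so that
# a higher total always means more apathy on this toy scale.
answers = [("slightly", True), ("a lot", False), ("somewhat", True)]

total = sum((5 - scale[resp]) if positive else scale[resp]
            for resp, positive in answers)
print(f"toy apathy score: {total}")  # 3 + 4 + 2 = 9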
Apathy Motivation Index
The Apathy Motivation Index (AMI) was developed to measure different dimensions of apathy in healthy people. Factor analysis identified three distinct axes of apathy: behavioural, social and emotional. The AMI has since been used to examine apathy in patients with Parkinson's disease who, overall, showed evidence of behavioural and social apathy, but not emotional apathy. Patients with Alzheimer's disease, Parkinson's disease, subjective cognitive impairment and limbic encephalitis have also been assessed using the AMI, and their self-reports of apathy were compared with those of caregivers using the AMI caregiver scale.
Dimensional Apathy Scale
The Dimensional Apathy Scale (DAS) is a multidimensional apathy instrument for measuring subtypes of apathy in different clinical populations and healthy adults. It was developed using factor analysis, quantifying Executive apathy (lack of motivation for planning, organising and attention), Emotional apathy (emotional indifference, neutrality, flatness or blunting) and Initiation apathy (lack of motivation for self-generation of thought or action). There is a self-rated version of the DAS and an informant/carer-rated version. Further, a brief clinical version of the DAS has also been developed. The DAS has been validated for use in stroke, Huntington's disease, motor neurone disease, multiple sclerosis, dementia, Parkinson's disease and schizophrenia, and has been shown to differentiate profiles of apathy subtypes between these conditions.
Medical aspects
Depression
Mental health journalist and author John McManamy argues that although psychiatrists do not explicitly deal with the condition of apathy, it is a psychological problem for some depressed people, in which they get a sense that "nothing matters", the "lack of will to go on and the inability to care about the consequences". He describes depressed people who "...cannot seem to make [themselves] do anything", who "can't complete anything", and who do not "feel any excitement about seeing loved ones". He acknowledges that the Diagnostic and Statistical Manual of Mental Disorders does not discuss apathy.
In a Journal of Neuropsychiatry and Clinical Neurosciences article from 1991, Robert Marin, MD, claimed that pathological apathy occurs due to brain damage or neuropsychiatric illnesses such as Alzheimer's, Parkinson's, Huntington's disease, or stroke. Marin argues that apathy is a syndrome associated with many different brain disorders. This has now been shown to be the case across a range of neurological and psychiatric conditions.
A review article by Robert van Reekum, MD, et al. from the University of Toronto in the Journal of Neuropsychiatry (2005) claimed that an obvious relationship between depression and apathy exists in some populations. However, although many patients with depression also have apathy, several studies have shown that apathy can occur independently of depression, and vice versa.
Apathy can be associated with depression, a manifestation of negative symptoms in schizophrenia, or a symptom of various somatic and neurological disorders. Apathy and depression are sometimes viewed as the same thing, but they actually take different forms depending on a person's mental condition.
Alzheimer's disease
Depending upon how it has been measured, apathy affects 19–88% of individuals with Alzheimer's disease (a mean prevalence of 49% across different studies). It is a neuropsychiatric symptom associated with functional impairment. Brain imaging studies have demonstrated changes in the anterior cingulate cortex, orbitofrontal cortex, dorsolateral prefrontal cortex and ventral striatum in Alzheimer's patients with apathy. Cholinesterase inhibitors, used as the first line of treatment for the cognitive symptoms associated with dementia, have also shown some modest benefit for behavior disturbances such as apathy. The effects of donepezil, galantamine and rivastigmine have all been assessed but, overall, the findings have been inconsistent, and it is estimated that apathy in roughly 60% of Alzheimer's patients does not respond to treatment with these drugs. Methylphenidate, a dopamine and noradrenaline reuptake blocker, has received increasing interest for the treatment of apathy. Management of apathetic symptoms using methylphenidate has shown promise in randomized placebo-controlled trials of Alzheimer's patients. A phase III multi-centered randomized placebo-controlled trial of methylphenidate for the treatment of apathy has reported positive effects.
Parkinson's disease
Overall, ~40% of Parkinson's disease patients suffer from apathy, with prevalence rates varying from 16 to 62%, depending on the study. Apathy is increasingly recognized to be an important non-motor symptom in Parkinson's disease. It has a significant negative impact on quality of life. In some patients, apathy can be improved by dopaminergic medication. There is also some evidence for a positive effect of cholinesterase inhibitors such as Rivastigmine on apathy. Diminished sensitivity to reward may be a key component of the syndrome in Parkinson's disease.
Frontotemporal dementia
Pathological apathy is considered to be one of the diagnostic features of behavioural variant frontotemporal dementia, occurring in the majority of people with this condition. Both hypersensitivity to effort as well as blunting of sensitivity to reward may be components of behavioural apathy in frontotemporal dementia.
Anxiety
While apathy and anxiety may appear to be separate and different states of being, there are many ways that severe anxiety can cause apathy. First, the emotional fatigue that so often accompanies severe anxiety wears one's emotions out, leading to apathy. Second, the low serotonin levels associated with anxiety often lead to less passion and interest in the activities in one's life, which can be seen as apathy. Third, the negative thinking and distractions associated with anxiety can ultimately lead to a decrease in one's overall happiness, which can then lead to an apathetic outlook about one's life. Finally, the difficulty enjoying activities that individuals with anxiety often face can lead them to do these activities much less often and can give them a sense of apathy about their lives. Even behavioral apathy may be found in individuals with anxiety, in the form of not wanting to make efforts to treat their anxiety.
Other
Often, apathy is felt after witnessing horrific acts, such as the killing or maiming of people during a war, as in posttraumatic stress disorder. It is also known to be a distinct psychiatric syndrome that is associated with many conditions, more prominently recognized in the elderly, some of which are: CADASIL syndrome, depression, Alzheimer's disease, Chagas disease, Creutzfeldt–Jakob disease, dementia (including vascular dementia and frontotemporal dementia), Korsakoff's syndrome, excessive vitamin D, hypothyroidism, hyperthyroidism, general fatigue, Huntington's disease, Pick's disease, progressive supranuclear palsy (PSP), brain damage, schizophrenia, schizoid personality disorder, bipolar disorder, autism spectrum disorders, ADHD, and others. Some medications and the heavy use of drugs such as opiates may bring on apathy as a side effect.
See also
Acedia
Callous and unemotional traits
Compassion fatigue
Detachment (philosophy)
Political apathy
Reduced affect display
Notes
References
External links
The Roots of Apathy – Essay By David O. Solmitz
Apathy – McMan's Depression and Bipolar Web, by John McManamy
Problem behavior
Emotions
Narcissism
Psychological attitude
Disorders of diminished motivation
Symptoms or signs involving mood or affect | Apathy | Biology | 3,421 |
1,386,072 | https://en.wikipedia.org/wiki/Rectal%20tenesmus | Rectal tenesmus is a feeling of incomplete defecation. It is the sensation of inability or difficulty to empty the bowel at defecation, even if the bowel contents have already been evacuated. Tenesmus indicates the feeling of a residue, and is not always correlated with the actual presence of residual fecal matter in the rectum. It is frequently painful and may be accompanied by involuntary straining and other gastrointestinal symptoms. Tenesmus has both a nociceptive and a neuropathic component.
Often, rectal tenesmus is simply called tenesmus. The term rectal tenesmus is a retronym to distinguish defecation-related tenesmus from vesical tenesmus. Vesical tenesmus is a similar condition, experienced as a feeling of incomplete voiding despite the bladder being empty.
Tenesmus is closely related to obstructed defecation.
Considerations
Tenesmus is characterized by a sensation of needing to pass stool, accompanied by pain, cramping, and straining. Despite straining, little stool is passed. Tenesmus is generally associated with inflammatory diseases of the bowel, which may be caused by either infectious or noninfectious conditions. Conditions associated with tenesmus include:
Amebiasis
Chronic arsenic poisoning
Coeliac disease
Colorectal cancer
Anal melanoma
Cystocele
Cytomegalovirus (in immunocompromised patients)
Diverticular disease
Dysentery
Hemorrhoids, when prolapsed
Imperforate hymen
Inflammatory bowel disease
Irritable bowel syndrome
Ischemic colitis
Kidney stones, when a stone is lodged in the lower ureter
Pelvic organ prolapse
Radiation proctitis
Rectal gonorrhea
Rectal lymphogranuloma venereum
Rectal parasitic infection, particularly Trichuris trichiura (whipworm)
Rectocele
Shigellosis
Ulcerative colitis
Rectal tenesmus is also associated with the installation of either a reversible or non-reversible stoma, whether or not rectal disease is present. Patients who experience tenesmus as a result of stoma installation can experience its symptoms for as long as the stoma is in place, so long-term pain management may need to be considered.
Treatment
Pain relief is administered concomitantly to the treatment of the primary disease causing tenesmus.
See also
Constipation
Encopresis
Fecal incontinence
References
External links
Symptoms and signs: Digestive system and abdomen
Defecation | Rectal tenesmus | Biology | 552 |
28,866,991 | https://en.wikipedia.org/wiki/Thermomicroscopy | Thermomicroscopy, developed by the Austrian pharmacognosist Ludwig Kofler (1891-1951) and his wife Adelheid Kofler and continued by Maria Kuhnert-Brandstätter and Walter C. McCrone, is a method for observing the phases of solid drug substances.
References
Microscopy | Thermomicroscopy | Chemistry | 70 |
36,471,858 | https://en.wikipedia.org/wiki/Badger%20culling%20in%20the%20United%20Kingdom | Badger culling in the United Kingdom is permitted under licence, within a set area and timescale, as a way to reduce badger numbers in the hope of controlling the spread of bovine tuberculosis (bTB). Humans can catch bTB, but public health control measures, including milk pasteurisation and the BCG vaccine, mean it is not a significant risk to human health. The disease affects cattle and other farm animals, some species of wildlife including badgers and deer, and some domestic pets such as cats. Geographically, bTB has spread from isolated pockets in the late 1980s to cover large areas of the west and south-west of England and Wales in the 2010s. Supporters of culling believe this spread correlates with the lack of badger control.
A targeted cull in Pembrokeshire in Wales began in 2009, and was cancelled in 2012 after the Welsh Labour administration concluded that culling was ineffective. In October 2013, culling in England was tried in two pilot areas in west Gloucestershire and west Somerset. The main aim of these trials was to assess the humaneness of culling using "free shooting" (previous methods trapped the badgers in cages before shooting them). The trials were repeated in 2014 and 2015, and expanded to a larger area in later years.
Culling is intended to manage the cost of bTB both to farmers and to the taxpayer. DEFRA compensates farmers for culled cattle, paying between £82 (for a young calf) and £1,543 (for a breeding bull), with higher values for pedigree animals, ranging up to £5,267. Farmers bear other costs from a TB outbreak on their farm, and these are mandatory and uncompensated. After compensation, a TB outbreak costs the farmer a median £6,600.
As of 2024, the United Kingdom has culled 210,000 badgers at a cost of £58.8 million. In the same period, it culled 330,000 cattle. Bovine TB compensation paid to farmers costs the UK taxpayer around £150 million per annum.
Status of badgers
European badgers (Meles meles) are not an endangered species, but they are amongst the most legally protected wild animals in the UK, being shielded under the Protection of Badgers Act 1992, the Wildlife and Countryside Act 1981, and the Convention on the Conservation of European Wildlife and Natural Habitats.
Arguments for culling
Prior to the 2012/13 badger cull, the government's Department for Environment, Food and Rural Affairs (DEFRA) stated that badger control was needed because "...we still need to tackle TB in order to support high standards of animal health and welfare, to promote sustainable beef and dairy sectors, to meet EU legal and trade requirements and to reduce the cost and burden on farmers and taxpayers." The report listed these reasons for bTB control:
Protect the health of the public and maintain public confidence in the safety of products entering the food chain
Protect and promote the health and welfare of some animals
Meet UK international (in particular EU) and domestic legal commitments and maintain the UK’s reputation for safe and high quality food
Maintain productive and sustainable beef and dairy sectors in England, securing opportunities for international trade and minimising environmental impacts
Reduce the cost of bTB to farmers and taxpayers
Disease
Humans can be infected with the Mycobacterium bovis bacterium, which causes the disease "bovine TB" (bTB). Between 1994 and 2011, 570 cases of bovine TB were reported in humans. Most of these cases are thought to be in older people who could have been infected before milk pasteurisation became common in the UK.
One route of transmission to humans is drinking infected, unpasteurised milk (pasteurisation kills the bacterium). European badgers can be infected and transmit the disease to cattle, thereby posing a risk to the human food chain. Culling is used in parts of the UK to reduce the number of badgers and thereby reduce the incidence and spread of bTB that might infect humans.
bTB spreads through exhalations and excretions of infected individuals. Modern cattle housing, which has good ventilation, reduces transmission, but in older-style cattle housing or in badger setts, the disease can spread rapidly. Badgers range long distances at night, potentially spreading bTB widely. Badgers mark their territory with urine, which can contain a high proportion of bTB bacteria. According to the RSPCA, the infection rate among badgers is 4–6%.
bTB is mostly concentrated in the west and south-west of England and eastern Wales. It is thought to have re-emerged because of the 2001 foot-and-mouth disease outbreak, which led to thousands of cattle being slaughtered and farmers all over the UK having to buy new stock. Undetected bTB apparently remained in some of these replacement animals.
Action on eradicating bTB is a devolved issue. DEFRA works with the Devolved Administrations in Wales, Scotland and Northern Ireland for coherent and joined-up policies for the UK as a whole. The Chief Veterinary Officers and lead TB policy officials from each country meet on a monthly basis to discuss bTB issues through the UK bTB Liaison Group.
Cost
Culling is intended to manage the cost of bTB both to farmers and to the taxpayer. DEFRA compensates farmers for culled cattle, paying between £82 (for a young calf) and £1,543 (for a breeding bull), with higher values for pedigree animals, ranging up to £5,267. Farmers bear other costs from a TB outbreak on their farm, and these are mandatory and uncompensated. A TB outbreak costs the farmer a median £6,600.
As of 2024, the United Kingdom has culled 210,000 badgers at a cost of £58.8 million. In the same period, it culled 330,000 cattle. Bovine TB compensation paid to farmers costs the UK taxpayer around £150 million per annum.
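Taken together, the headline figures above imply a rough average cost per badger removed. This is a simple back-of-envelope division of the two quoted totals, assuming both cover the same culling programme, and it excludes policing and compensation costs:
\[
\frac{\pounds 58{,}800{,}000}{210{,}000\ \text{badgers}} \approx \pounds 280\ \text{per badger culled.}
\]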
Arguments against culling
The risk of humans contracting bTB from milk is extremely low if certain precautions are taken, and scientists have argued that badger culling is unnecessary. The low risk is accepted by DEFRA, which wrote in a report published in 2011: "The risk to public health is very low these days, largely thanks to milk pasteurisation and the TB surveillance and control programme in cattle".
Animal welfare groups such as the Badger Trust and the Royal Society for the Prevention of Cruelty to Animals are opposed to what they consider random slaughter of badgers — which have special legal protection in the UK — in return for what they describe as a relatively small impact on bTB.
Cattle and badgers are not the only carriers of bTB. The disease can infect and be transmitted by domestic animals such as cats and dogs, wildlife such as deer, and farm livestock such as horses and goats. Although the frequency of infection from other mammals is generally much less than in cattle and badgers, other species of wildlife have been shown to be possible carriers of bTB. In some areas of south-west England, deer, especially fallow deer due to their gregarious behaviour, have been implicated as a possible maintenance host for transmission of bTB. In some localised areas, the risk of transmission to cattle from fallow deer is arguably greater than from badgers. M. bovis was shown to be hosted and transmitted to humans by cats in March 2014, when Public Health England announced that two people in England had developed bTB infections after contact with a domestic cat. The two human cases were linked to 9 cases of bTB infection in cats in Berkshire and Hampshire during 2013. These are the first documented cases of cat-to-human TB transmission.
Research reported in 2016 indicates that bTB is not transmitted by direct contact between badgers and cattle, but through contaminated pasture and dung. This has important implications for farm practices such as the spreading of slurry. Using a GPS collar small enough to be worn by badgers, the researchers tracked more than 400 cattle when they were in the territories of 100 badgers. In 65,000 observations, only once did a badger get within 10 m (33 ft) of a cow; the badgers preferred to be 50 m away. Experts were quoted as saying expansion of the cull “flies in the face of scientific evidence” and that the cull is a “monstrous” waste of time and money.
Alternatives to culling
Under the Berne Convention on the Conservation of European Wildlife and Natural Habitats, the culling of badgers is only permitted as part of a bTB reduction strategy if no satisfactory alternative exists.
Widespread public support has arisen for an alternative to culling. In October 2012, MPs voted 147 in favour of a motion to stop the 2012/2013 badger cull and 28 against. The debate had been prompted by a petition on the government's e-petition website, which at the time had exceeded 150,000 signatories, and which had by June 2013 gathered around a quarter of a million signatories. By the time it closed on 7 September 2013, it had 303,929 signatures, breaking the record for the largest number of people ever to sign a government e-petition.
Vaccination
In July 2008, Hilary Benn, the then-Secretary of State for Environment, Food, and Rural Affairs, made a statement that highlighted actions other than culling, including allocating funding of £20M to the development of an effective TB injectable vaccine for cattle and badgers, and an oral badger vaccine.
In March 2010, DEFRA licensed a vaccine for badgers, called the Badger BCG. The vaccine is only effective on animals that do not already have the disease and it can only be delivered by injection. It is available on prescription, subject to a licence to trap badgers from Natural England, but only where injections are carried out by trained vaccinators. DEFRA funded a programme of vaccinations in 2010/11, and other organisations that have funded smaller vaccination programmes include the National Trust in Devon, the Gloucestershire Wildlife Trust, and a joint project by the National Farmers' Union and the Badger Trust.
However, in England the government views badger vaccination as only one part of a package of measures for controlling bTB, because it estimates the cost of vaccination to be around £2,250/km2/yr and notes that most landowners and farmers have little interest in paying this cost themselves.
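To put that per-area rate in context, a rough illustration: applied to a cull-pilot-sized area of about 150 km2 (the area size used in the RBCT and the 2013 pilots described below), the quoted rate would imply an annual bill of roughly
\[
\pounds 2{,}250\,/\text{km}^2/\text{yr} \times 150\ \text{km}^2 \approx \pounds 337{,}500\ \text{per area per year}.
\]
This is only an indicative calculation from figures quoted in this article, not an official DEFRA costing.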
In Wales, badger vaccination is carried out in preference to culling. Whilst a field trial into the vaccination of badgers is under way in the Republic of Ireland, as yet, neither culling nor vaccination is carried out in Northern Ireland, although the Northern Ireland Assembly has carried out a review into bTB that recently recommended an immediate investigation into the viability of culling and/or vaccination. In autumn 2009, Scotland was declared officially tuberculosis-free under EU rules, so there are no proposals to cull badgers there.
Vaccinating cattle is a recognised method of avoiding the killing of wildlife while reducing the prevalence, incidence, and spread of bTB in the cattle population; it could also reduce the severity of a herd infection regardless of whether the infection is introduced by wildlife or cattle. However, it has three problems:
As with all vaccines, a cattle vaccine does not guarantee that all vaccinated animals are fully protected from infection.
Current research suggests that revaccination is likely to be necessary on an annual basis to maintain a sufficient level of protection in individual animals.
The BCG vaccine can make cattle sensitive to the tuberculin skin test after vaccination, which means the animal may have a positive result, though it is not actually infected with M. bovis (a "false positive"). In parallel with developing the vaccine, DEFRA is developing a test to differentiate between infected and vaccinated animals (so-called "DIVA" test). This test is based on gamma interferon blood-test technology. The intention is that when necessary, it can be used alongside the tuberculin skin test to confirm whether a positive skin test is caused by infection or vaccination. This is critical because without this differentiation, the UK could not be declared officially free of bTB, which is required by a 1964 European Economic Community directive for international trade. Given that in 2014 there is still no bTB vaccine for cattle that does not interfere with the tuberculin tests, such vaccination is prohibited under EU law.
As of 2011, DEFRA has invested around £18 million in the development of cattle vaccines and associated diagnostic tools.
History
Before 1992
Many badgers in Europe were gassed during the 1960s and 1970s to control rabies.
M. bovis was discovered in 1882, but testing for the disease remained voluntary until compulsory testing was introduced in 1960. Herds that were attested TB-free were tested annually and received a premium of 1d per gallon for their milk; those not tested were able to carry on trading. A programme of test-and-slaughter began and was successful. Until the 1980s, badger culling in the UK was undertaken in the form of gassing. By 1960, eradicating bTB in the UK was thought possible, until 1971, when a new population of tuberculous badgers was located in Gloucestershire. Subsequent experiments showed that bTB can be spread from badgers to cattle, and some farmers tried to cull badgers on their land. Wildlife protection groups lobbied Parliament, which responded by passing the Badgers Act 1973 (c. 57), making it an offence to attempt to kill, take, or injure badgers, or interfere with their setts without a licence. These laws are now contained in the Protection of Badgers Act 1992.
Randomised Badger Culling Trials (1998–2008)
In 1997, an independent scientific body issued the Krebs Report. It concluded that the evidence on whether badger culling would help control the spread of bTB remained inconclusive, and proposed a series of trials.
The government then ordered an independently run series of trials, known as the Randomised Badger Culling Trials (RBCT). These trials, in which 11,000 badgers in selected areas were cage-trapped and killed, were conducted from 1998 to 2005, although they were briefly suspended due to the outbreak of foot-and-mouth in 2001. The incidence of bTB in and around 10 large (100 km2) areas in which annual badger culling occurred was compared with the incidence in and around 10 matched areas with no such culling.
In 2003, as a result of initial findings from the RBCT, the reactive component of the culling, in which badgers were culled in and around farms where bTB was present in cattle, was suspended. This was because the RBCT recorded a 27% increase in bTB outbreaks in these areas of the trial compared to areas in which no culling took place. The advisory group of the trials concluded that reactive culling could not be used to control bTB.
In December 2005, a preliminary analysis of the RBCT data showed that proactive culling, in which most badgers in a particular area were culled, reduced the incidence of bTB by 19% within the cull area, but it increased by 29% within 2 km outside the cull area. The report, therefore, warned of a "perturbation effect" in which culling leads to changes in badger behaviour thereby increasing infections within the badger colonies and the migration of infected badgers to previously uninfected areas. Whilst culling produced a decreased badger population locally, it disrupted the badgers’ territorial system, causing any surviving badgers to range more widely, which itself led to a substantial increase in the incidence of the disease, and its wider dispersal. It also reported that a culling policy "would incur costs that were between four and five times higher than the economic benefits gained" and "if the predicted detrimental effects in the surrounding areas are included, the overall benefits achieved would fall to approximately one-fortieth of the costs incurred". In summary, the report argued that it would be more cost effective to improve cattle control measures, with zoning and supervision of herds, than it would be to cull badgers.
In 2007, the final results of the trials, conducted by the Independent Scientific Group on Cattle TB, were submitted to David Miliband, the then Secretary of State for Environment, Food and Rural Affairs. The report stated that "badger culling can make no meaningful contribution to cattle TB control in Britain. Indeed, some policies under consideration are likely to make matters worse rather than better".
In October 2007, after considering the report and consulting other advisors, the government's then-chief scientific advisor, Professor Sir David King, produced a report of his own, which concluded that culling could indeed make a useful contribution to controlling bTB. This was criticised by scientists, most notably in the editorial of Nature, which implied King was being influenced by politics.
In July 2008, Hilary Benn, the then-Secretary of State for Environment, Food and Rural Affairs, refused to authorise a badger cull because of the practicalities and cost of a cull and the scale and length of time required to implement it, with no guarantee of success and the potential for making the disease worse. Benn went on to highlight other measures that would be taken, including allocating £20M to the development of an effective injectable TB vaccine for both cattle and badgers, and an oral badger vaccine.
Follow-up report
In 2010, a scientific report was published in which bTB incidence in cattle was monitored in and around RBCT areas after culling ended. The report showed that the benefits inside culled areas decreased over time and were no longer detectable three years after culling ceased. In areas adjoining the culled areas, a trend towards beneficial effects immediately after the end of culling was not statistically significant and had disappeared 18 months after the cull ceased. The report also stated that the financial costs of culling an idealized 150 km2 area would exceed the savings achieved through reduced bTB by factors of 2.0 to 3.5. The report concluded, "These results, combined with evaluation of alternative culling methods, suggest that badger culling is unlikely to contribute effectively to the control of cattle TB in Britain."
The Bovine TB Eradication Group for England (2008)
In November 2008, the Bovine TB Eradication Group for England was set up. This group included DEFRA officials, members from the veterinary profession and farming industry representatives. Based on research published up to February 2010, the Group concluded that the benefits of the cull were not sustained beyond the culling period and that culling was an ineffective method of controlling bTB in Britain.
Post-2010
After the 2010 general election, the new Welsh environment minister, John Griffiths, ordered a review of the scientific evidence in favour of and against a cull. The incoming DEFRA Secretary of State, Caroline Spelman, began her Bovine TB Eradication Programme for England, which she described as "a science-led cull of badgers in the worst-affected areas". The Badger Trust put it differently, saying, "badgers are to be used as target practice". Shadow Environment Secretary Mary Creagh said it was prompted by "short-term political calculation".
The Badger Trust brought court action against the government. On 12 July 2012, their case was dismissed in the High Court; the trust appealed unsuccessfully. Meanwhile, the Humane Society International pursued a parallel case through the European Courts, which was also unsuccessful. Rural Economy and Land Use Programme fellow Angela Cassidy has identified one of the major forces underlying the opposition to badger culls as originating in the historically positive fictional depictions of badgers in British literature. Cassidy further noted that modern negative depictions have recently seen a resurgence.
In August 2015, culling was announced to be rolled out in Dorset, with a target of 615 to 835 badgers being culled there, while also being continued in Gloucestershire and Somerset. Licences were granted to allow six weeks of continuous culling in the three counties until 31 January. In December 2015, Defra released documents confirming the badger cull had "met government targets" with 756 animals culled in Dorset, 432 in Gloucestershire and 279 in Somerset.
Wales (2009–2012)
In 2009, the Welsh Assembly authorised a nonselective badger cull in the Tuberculosis Eradication (Wales) Order 2009; the Badger Trust sought a judicial review of the decision, but their application was denied. The Badger Trust appealed in Badger Trust v Welsh Ministers [2010] EWCA Civ 807; the Court of Appeal ruled that the 2009 Order should be quashed. The Welsh Assembly replaced proposals for a cull in 2011 with a five-year vaccination programme following a review of the science.
The 2012–2013 cull (England)
As an attempt to reduce the economic costs of live cage-trapping followed by shooting used in the RBCT, the post-2010 culls in England also allowed for the first time, "free shooting", i.e. shooting free-roaming badgers with firearms. Licences to cull badgers under the Protection of Badgers Act 1992 are available from Natural England, which require applicants to show that they have the skills, training, and resources to cull in an efficient, humane, and effective way, and to provide a Badger Control Plan. This meant that farmers were allowed to shoot the badgers themselves, or to employ suitably qualified persons to do this. The actual killing of the badgers was funded by the farmers, whereas the monitoring and data analysis were funded by DEFRA.
Aims
A DEFRA statement, published in October 2012, stated, "The aim of this monitoring is to test the assumption that controlled shooting is a humane culling technique." The statement makes no indication that the cull would assess the effectiveness of reducing bTB in the trial areas.
A Badger Trust statement indicated the 2012/13 badger cull had these specific aims:
Determine whether badger cull targets for each pilot area can be met within six weeks with at least 70% of the badger population removed in each cull area
Determine whether shooting "free-running" badgers at night is a humane way of killing badgers.
Determine whether shooting at night is safe with reference to the general public, pets, and livestock
Again, the statement made no indication that the cull would assess the effectiveness of reducing bTB in the trial areas.
Concerns regarding free shooting
Permission to allow free shooting for the first time during the cull of 2012/13 raised several concerns.
One suggested method to avoid endangering the public would be for shooters to stand over setts and shoot badgers near the entrances, but a report to DEFRA by The Game Conservancy Trust (2006) indicated that a major problem with shooting near the sett is that wounded badgers are very likely to bolt underground, preventing a second shot to ensure the animal is killed. Under these conditions, the first shot must cause the badger to collapse on the spot, limiting the choice of target sites to the spine, neck, or head.
Colin Booty, the RSPCA's deputy head of wildlife, said: "Shooting badgers might be very different from shooting foxes, say, because their anatomy is very different. The badger has a very thick skull, thick skin, and a very thick layer of subcutaneous fat. It has a much more robust skeleton than the fox. Because of the short, squat body and the way its legs work, these legs often partly conceal the main killing zone. Free shooting carries a high risk of wounding."
Economic costs
In 2014, the policing costs in Gloucestershire were £1.7 million over the seven-week period (£1,800 per badger) and in Somerset, the cost of policing amounted to £739,000 for the period.
Government announcement
On 19 July 2011, Caroline Spelman, then the Secretary of State for Environment, Food and Rural Affairs, announced the government response to the consultation. It was proposed that a cull would be conducted within the framework of the new "Bovine TB Eradication Programme for England". In view of concerns in response to the initial consultation, a further consultation would determine whether a cull could be effectively enforced and monitored by Natural England under the Protection of Badgers Act 1992. The cull would initially be piloted in two areas, before being extended to other parts of the country.
Implementation
In December 2011, the government announced that it intended to go forward with trial badger culls in two 150 km2 areas. These would take place over a 6-week period with the aim of reducing the badger population by 70% in each area. Farmers and land owners would be licensed to control badgers by shooting and would bear the costs of any culls. The government was to bear the costs of licensing and monitoring the culls.
The government would monitor:
Actions taken under the licence
The impact on cattle herd breakdowns (becoming infected with bTB) within the areas culled or vaccinated
Humaneness of the culling methods
Impacts on the remaining badger population
In March 2012, the government appointed members to an independent panel of experts (IPE) to oversee the monitoring and evaluation of the pilot areas and report back. The panel’s role was to evaluate the effectiveness, humaneness, and safety of the controlled shooting method, not the effectiveness of badger culling to control bTB in cattle.
The cull was to begin in 2012 led by DEFRA. However, the Secretary of State for Environment, Owen Paterson, announced in a statement to Parliament on 23 October 2012 that a cull would be postponed until 2013 with a wide range of reasons given.
On 27 August 2013, a full culling programme began in two pilot areas, one mainly in West Somerset and the other mainly in West Gloucestershire with a part in Southeast Herefordshire, at an estimated cost of £7 million per trial area. Up to 5,094 badgers were to be shot. There were closed seasons during the cull, designed to prevent distress to animals or their dependent offspring.
Data collected
Shooters failed to kill the target of 70% of badgers in both trial areas during the initial 6-week cull. During this time, 850 badgers were killed in Somerset and 708 in Gloucestershire. Of the badgers culled in Gloucestershire, 543 were killed through free shooting, whilst 165 were cage-trapped and shot. In Somerset, 360 badgers were killed by free shooting and 490 by being cage-trapped then shot.
Because the target of 70% badgers to be culled had not been achieved, the cull period was extended. During the 3-week extension in Somerset, an extra 90 badgers were culled, taking the total across the whole cull period to 940, representing a 65% reduction in the estimated badger population. During the 5 week and 3 day extension in Gloucestershire, 213 further badgers were culled, giving an overall total of 921, representing a reduction of just under 40% in the estimated badger population.
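As an illustration of what those percentages imply, the pre-cull population estimates can be back-calculated from the figures quoted above. This is an indicative reconstruction only, since the underlying population estimates were themselves approximate:
\[
\frac{940}{0.65} \approx 1{,}450\ \text{badgers (Somerset)}, \qquad \frac{921}{0.40} \approx 2{,}300\ \text{badgers (Gloucestershire)}.
\]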
DEFRA and Natural England were unwilling to divulge what data would be collected and the methods of collection during the pilot culls. However, in a decision dated 6 August 2013 under UK freedom of information law, the Information Commissioner's Office found that DEFRA was wrong to apply the Environmental Information Regulations in defence of its refusal to disclose information about the pilot cull methods. DEFRA originally intended to sample 240 badgers killed during the pilot culls, but confirmed only 120 targeted badgers were to be collected for examination of humaneness, and that half of these badgers would have been shot while caged. Therefore, only 1.1% of badgers killed by free shooting were tested for humaneness of shooting. No badgers were to be tested for bTB.
Details of the ongoing pilot culls were not released whilst they were being conducted, and DEFRA declined to divulge how the success of the trials would be measured. As a result, scientists, the RSPCA, and other animal charities called for greater transparency over the pilot badger culls. Environment Secretary Owen Paterson confirmed that the purpose of the pilot culls was to assess whether farmer-led culls deploying controlled shooting of badgers is suitable to be rolled-out to up to 40 new areas over the next four years. Farming Minister David Heath admitted in correspondence with Lord Krebs that the cull would "not be able to statistically determine either the effectiveness (in terms of badgers removed) or humaneness of controlled shooting". Lord Krebs, who led the RBCT in the 1990s, said the two pilots "will not yield any useful information".
In explaining why the culling had missed the target, Environment Secretary Paterson famously commented, "the badgers moved the goalposts."
Effectiveness of the cull
Leaks reported by the BBC in February 2014 indicated that the expert panel found that less than half of all badgers were killed in both trial areas. It was also revealed that between 6.8 and 18% of badgers took more than five minutes to die; the standard originally set was that this should be less than 5%.
As culling was not selective, it was suggested that as many as six out of seven badgers killed could have been perfectly healthy and bTB free.
Scientific experts agree that large-scale, long-term culling in zones with "hard boundaries" could yield modest benefits. If "soft boundaries" allow badgers to escape, culling will also make things worse for farmers bordering the cull areas due to infected badgers dispersing: the so-called "perturbation" effect.
The Food and Environment Research Agency (FERA) concluded, "the form and duration of badger social perturbation is still poorly understood and significant changes to our assumption may alter the order of preference [of the proposed options]."
The DEFRA-commissioned FERA Report states: "Our modelling has shown that while the differences between the outcomes of strategies using culling and/or vaccinating badgers are quite modest (~15–40 CHBs prevented over 10 years), their risk profile is markedly different. Culling results in the known hazard of perturbation, leading to increased CHBs [Cattle Herd Breakdowns] in the periphery of the culling area. Culling also risks being ineffective or making the disease situation worse, if it is conducted partially (because of low compliance) or ineffectually (because of disruption or poor co-ordination) or it is stopped early (because of licensing issues). Vaccination carries no comparable risks or hazards."
The UK government stated that a sustained cull, conducted over a wide area in a co-ordinated and efficient manner, over a period of nine years, might achieve a 9–16% reduction in disease incidence, though many scientists and a coalition of animal-welfare and conservation groups, including the RSPCA, the Wildlife Trusts, and the RSPB, argue that a cull could risk local extermination of all badgers, and that a badger cull will not in any way solve the problem of bTB in cattle. The British Veterinary Association say that data collected from research in other countries suggest that the control of the disease in farms has only been successfully carried out by dealing with both cattle and wild reservoirs of infection. In the introduction to the Final Report on the RBCT, the chairman of the Independent Scientific Group, John Bourne, states: "Scientific findings indicate that the rising incidence of disease can be reversed, and geographical spread contained, by the rigid application of cattle-based control measures alone". In practice it is very difficult to quantify the contribution any wildlife reservoir has to the spread of bovine tuberculosis, since culling is usually carried out alongside cattle control measures (using an "all the tools in the tool box" approach):
"From Australian experience, government has learnt that elimination of a wildlife host (feral water buffalo) needs to be followed by a long and extensive programme of cattle testing, slaughter, movement control, and public awareness campaigns before bTB is eventually eradicated. And from New Zealand experience, population reduction of the wildlife host (possums) does not by itself reliably control bTB in cattle. In both Australia and New Zealand, government was dealing with feral reservoirs of bTB rather than indigenous wildlife species, as is the case with the badger in this country" Wilsmore, A.J. and Taylor, N. M. (2005).
Bourne has also argued that the planned cull is likely only to increase the incidence of bTB, and that emphasis should instead be much greater on cattle farming controls. He claims, "the cattle controls in operation at the moment are totally ineffective", partly because the tuberculin test used in cattle is not accurate, causing tests in herds to often show negative results even while still harbouring the disease. Referring to the group's final report, he further argues that whilst cattle can get tuberculosis from badgers, the true problem is the other way around: "Badger infections are following, not leading, TB infections in cattle". Overall, he says, the cull will only do more harm than good, because, "you just chase the badgers around, which makes TB worse".
The amount of money that has been spent so far on planning and preparing for each pilot cull and who exactly is paying for what, i.e. what taxpayers are paying for and what the farming industry is paying for, is unclear. Costs of the culls have not factored in socioeconomic costs, such as tourism and any potential boycotts of dairy products from the cull zones. Others opposed to the cull argue that for economic reasons, the government have chosen the most inhumane approach to disease eradication. Tony Dean, chairperson of the Gloucestershire Badger Group, warns that some badgers will not be killed outright: "You have got to be a good marksman to kill a badger outright, with one shot... Many of the badgers will be badly injured. They will go back underground after being shot, probably badly maimed. They will die a long, lingering death underground from lead poisoning, etc. We are going to have a lot of cubs left underground where their mothers have been shot above ground." He also suggests that domestic pets will be at risk in the cull areas, as some farmers will mistake black and white cats and dogs for badgers.
Many cull opponents cite vaccination of badgers and cattle as a better alternative to culling. In Wales, where a policy of vaccination was in its second year in 2013, Stephen James, the National Farmers Union Cymru's spokesperson on the matter, argues that the economics of badger culling are "ridiculous", saying the cost per badger was £620. "That's a very expensive way of trying to control this disease when we know full well, from experience from other countries, that there are cheaper ways of doing it...if you vaccinate in the clean areas, around the edges of the endemic areas, then there's a better chance of it working."
The Badger Trust national charity believes that vaccination will also be more likely to help eradicate the disease. Referring to further studies by the Animal Health and Veterinary Laboratories Agency and FERA, the group claims that vaccination reduces the risk of unvaccinated badger cubs testing tuberculosis positive, because "by the time cubs emerge and are available for vaccination, they might have already been exposed [and are therefore resistant] to TB". Steve Clark, a director of the group, has separately said that "vaccination also reduces the bacilli that are excreted by infected badgers. It doesn't cure them, but it reduces the possibility of any further infection...in the region of a 75% level of protection. The lifespan of a badger is about five years. So if you continue the vaccination project for five years, then the majority of animals that were there at the beginning will have died out and that vaccination programme is leading towards a clean and healthy badger population."
According to Dr Robbie McDonald, head of wildlife and emerging diseases at FERA (the lead wildlife scientist for DEFRA, responsible for research on badgers), the benefit of culling a population is outweighed by the detrimental effect on neighbouring populations of badgers. He is reported as saying that a huge number of badgers would have to be killed to make a difference, and that while it is cheap and easy to exterminate animals in the early days of a cull, it gets harder and more expensive as time goes on.
A DEFRA-funded statistical analysis of the 2013–2017 culls showed a 66% reduction in the incidence of bTB-affected farms in Gloucestershire and a 37% reduction in Somerset. After two years of culling in Dorset, no change in incidence was observed.
Proposed 2014/15 cull
On 3 April 2014, Owen Paterson decided to continue the culling trials in 2014, in the same areas of Gloucestershire and Somerset as the 2012/13 cull. On 20 May 2014, the Badger Trust applied for a judicial review of this policy in the High Court, claiming that Paterson unlawfully failed to put into place an independent expert panel to oversee the process.
In response to a Freedom of Information Act request submitted by the Humane Society International (HSI) UK, DEFRA said that for nearly a year, it had been conducting initial investigations into carbon monoxide gas dispersal in badger sett-like structures. No live badgers have been gassed. HSI expressed concerns about the extent to which gassing causes animal suffering.
The 2014/15 cull (England)
In September 2014, a second year of badger culling began in Gloucestershire and Somerset, as in 2013/2014. It had previously been stated that the cull would be extended to a further 10 areas.
The Badger Trust claimed at the High Court that this cull would take place without independent monitoring, but DEFRA denied this, saying experts from Natural England and the Animal Health Veterinary Laboratory Agency would be monitoring the cull.
In June 2015, the National Trust, one of the largest landowners in the UK, stated it would not be allowing badger cullers onto their land until the results of all 4 years of pilot trials were known.
Aims
The 2014/15 cull targets had been lowered to 316 badgers in Somerset and 615 in Gloucestershire. Overall, the aim was for a reduction of 70% in badger populations over the successive culls. This was to be achieved with an emphasis on trapping badgers in cages and shooting them at dawn, rather than "free shooting".
Protests
As in the 2013/14 cull, hundreds of protesters entered the culling areas, either to disturb the badgers, causing them to remain in their setts and avoid being trapped and/or shot, or to look for injured badgers. On 9 September 2014, two saboteurs in Gloucestershire found a badger trapped in a cage with cullers nearby. The police were called and the saboteurs pointed out that under government guidelines, trapped badgers should be released if a risk of interference from a third party existed. The saboteur organisation "Stop the Cull" said police "did the right thing" and freed the badger. Gloucestershire police confirmed the standoff, which it said was resolved peacefully, adding that the decision to release the badger was made by a contractor working for the cull operator.
Brian May, guitarist with the rock band Queen, is a critic of badger culling in the UK. He has called for the 2014/15 cull to be cancelled. "It's almost beyond belief that the government is blundering ahead with a second year of inept and barbaric badger killing," he said.
Organisations involved in protesting the cull include:
Team Badger: representing 25 different organisations
Gloucestershire Against the Badger Shooting
Policing
In the 2013/2014 cull, police from forces including Sussex, Warwickshire, Cornwall, and the Metropolitan Police were brought in to help with policing, but the police have said that in the 2014/2015 cull the focus will be on community policing with local officers on patrol: "It will be very focused on Gloucestershire officers dealing with local issues."
Post-2020
On 26 June 2024, Steve Reed (Shadow Secretary of State for Environment, Food and Rural Affairs) said the Labour Party would no longer be looking to end the badger cull immediately if they win the upcoming General Election. Instead, they would allow existing licences to run until they expire in 2026. This sparked criticism from environmental groups who have suggested this would need to be subject to judicial review, as the Labour party had labelled the badger cull as "ineffective" in combatting bTB in their manifesto, thereby potentially contradicting the Protection of Badgers Act 1992 by allowing the culling of a protected species without acknowledging any effectiveness in disease control.
References
Further reading
Department for Environment, Food and Rural Affairs: The Government's Policy on Bovine TB and badger control in England, published 14 December 2011, retrieved 16 July 2012.
Food and Environment Research Agency: Vaccination Q&A, retrieved 17 July 2012.
External links
The Protection of Badgers Act 1992
WildlifeOnline Badgers and Bovine TB
Agriculture in the United Kingdom
Animals in politics
Animal culling
Badgers
Epidemiology
Animals in the United Kingdom
Controversies in the United Kingdom
Animal welfare and rights in the United Kingdom | Badger culling in the United Kingdom | Environmental_science | 8,544 |
1,660,257 | https://en.wikipedia.org/wiki/Sodium%20fusion%20test | The sodium fusion test, or Lassaigne's test, is used in elemental analysis for the qualitative determination of the presence of foreign elements, namely halogens, nitrogen, and sulfur, in an organic compound. It was developed by J. L. Lassaigne.
The test involves heating the sample with sodium metal, "fusing" it with the sample. A variety of techniques has been described. The "fused" sample is plunged into water, and the qualitative tests are performed on the resultant solution for the respective possible constituents.
Theory
The halogen, nitrogen, and sulfur atoms that are covalently bonded to the organic compound are converted to various sodium salts during the fusion. Typically proposed reactions are:
Na + C + N → NaCN
Na + C + N + S → NaSCN
2Na + S → Na2S
Na + X → NaX
The fate of the hydrocarbon portion of the sample is disregarded.
The aqueous extract is called sodium fusion extract or Lassaigne's extract.
Test for nitrogen
The sodium fusion extract is made alkaline by adding NaOH. To this mixture, freshly prepared FeSO4 solution is added and boiled for some time and then cooled. A few drops of FeCl3 are added and a Prussian blue (bluish green) color forms due to formation of ferric ferrocyanide along with NaCl. This shows the presence of nitrogen in the organic compound.
6CN- + Fe^2+ -> [Fe(CN)6]^4-
3[Fe(CN)6]^4- + 4Fe^3+ + xH2O -> Fe4[Fe(CN)6]3·xH2O
Test for sulfur
Lead acetate test
The sodium fusion extract is acidified with acetic acid and lead acetate is added to it. A black precipitate of lead sulfide indicates the presence of sulfur.
Sodium nitroprusside test
Freshly prepared sodium nitroprusside solution is added to the sodium fusion extract, turning the solution deep violet due to formation of sodium thionitroprusside.
S^2- + [Fe(CN)5NO]^2- -> [Fe(CN)5NOS]^4-
If both nitrogen and sulfur are present in an organic compound, sodium thiocyanate is formed, which gives a blood-red color since there are no free cyanide ions.
Fe^3+ + SCN- -> [Fe(SCN)]^2+
Test for halogens
The sodium fusion extract is boiled with concentrated HNO3, followed by the addition of AgNO3 solution, which yields a white (AgCl) or yellow (AgBr or AgI) precipitate if a halogen is present.
NaX + AgNO3 -> AgX + NaNO3
Test for phosphorus
Sodium peroxide is added to the compound to oxidise phosphorus to sodium phosphate. It is boiled with concentrated HNO3 and then ammonium molybdate is added. A yellow precipitate (ammonium phosphomolybdate) indicates the presence of phosphorus.
Na3PO4 + 3HNO3 -> H3PO4 + 3NaNO3
H3PO4 + 21HNO3 + 12(NH4)2MoO4 -> (NH4)3[P(Mo3O10)4] + 21NH4NO3 + 12H2O
References
External links
Lab Manual from the University of West Indies, Mona
Chemical tests
Sodium
Elemental analysis | Sodium fusion test | Chemistry | 751 |
11,421,640 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20psi28S-3316 | In molecular biology, Small nucleolar RNA psi28S-3316 is a member of the H/ACA class of snoRNA. This family is responsible for guiding the modification of uridine 3316 in Drosophila 28S rRNA to pseudouridine.
References
External links
Small nuclear RNA | Small nucleolar RNA psi28S-3316 | Chemistry | 64 |
10,502,881 | https://en.wikipedia.org/wiki/Institution%20of%20Chemical%20Engineers | The Institution of Chemical Engineers (IChemE) is a global professional engineering institution with 30,000 members in 114 countries. It was founded in 1922 and awarded a Royal Charter in 1957.
The Institution has offices in Rugby, Melbourne, Wellington, New Zealand and Kuala Lumpur.
History
In 1881, George E. Davis proposed the formation of a Society of Chemical Engineers, but instead the Society of Chemical Industry (SCI) was formed.
The First World War required a huge increase in chemical production to meet the needs of the munitions and its supply industries, including a twenty-fold increase in explosives. This brought a number of chemical engineers into high positions within the Ministry of Munitions, notably K. B. Quinan, Frederic Nathan and Arthur Duckham.
The increased public perception of chemical engineers renewed interest in a society, and in 1918 John Hinchley, who was a Council Member of the SCI, petitioned it to form a Chemical Engineers Group (CEG), which was done, with him as chairman and 510 members. In 1920 this group voted to form a separate Institution of Chemical Engineers, which was achieved in 1922 with Hinchley as the Secretary, a role he held until his death. The inaugural meeting was held on 2 May 1922, at the Hotel Cecil, London.
Despite opposition from the Institute of Chemistry and the Institution of Civil Engineers, it was formally incorporated with the Board of Trade on 21 December 1922 as a company not for profit and limited by guarantee. The first Corporate meeting was held 14 March 1923 and the first Annual General Meeting on 8 June 1923: Arthur Duckham was confirmed as President, Hinchley as Secretary and Quinan as Vice-President. At this time it had about 200 members. Nathan was the second President in 1925.
The American Institute of Chemical Engineers, which had been founded in 1908, served as a useful model. While suggestions of amalgamation were made and there was friendly but limited contact, the two organisations developed independently.
In 1926 an official Seal of the Institution was produced by Edith Mary Hinchley, wife of John Hinchley.
The same year the Institution set the first examinations for Associate (i.e. professionally qualified) membership, bringing it into line with the Civil and Mechanical Institutions. In addition to four set examinations of three hours each, there was a 'Home Paper' requiring the candidate to gather information and data and design a chemical plant, accompanied by drawings and a written design proposal within a time limit of a month.
In 1938 the membership passed 1000.
In 1939 the first courses were recognised as granting exemption from the examinations for Associate Membership, those of the Manchester College of Technology and of the South Wales and Monmouthshire School of Mines. Others followed in subsequent years.
In 1942 Mrs Hilda Derrick (née Stroud) was the first female member, in the category Student, taking a correspondence course in chemical engineering during the war. She was active in promoting the Institution and profession to women.
In 1955 Canterbury University College, New Zealand, and University of Cape Town, South Africa, were the first overseas institutions to have their qualifications recognised.
On 8 April 1957 IChemE was granted a Royal Charter, changing it from a limited company to a body incorporated by Royal Charter, a professional institution like the Civil and Mechanical ones, with HRH Prince Philip, Duke of Edinburgh as patron, a role he continued for over 63 years.
In 1971, the membership grades were changed: Associate became Member and Member became Fellow.
In 1976 the Institution moved its Headquarters from London to Rugby.
Relations with other bodies
IChemE is licensed by the Engineering Council UK to assess candidates for inclusion on ECUK's Register of professional Engineers, giving the status of Chartered Engineer, Incorporated Engineer and Engineering Technician. It is licensed by the Science Council to grant the status of Chartered Scientist and Registered Science Technician. It is licensed by the Society for the Environment to grant the status of Chartered Environmentalist. It is a member of the European Federation of Chemical Engineering. It accredits chemical engineering degree courses in 25 countries worldwide.
In 2023, IChemE entered into a 'hydrogen alliance' with the American Institute of Chemical Engineers (AIChE). The collaboration aims to support industry's adoption of hydrogen as an energy carrier in the drive to net zero.
Function
IChemE's vision is to "engineer a sustainable world" and its mission is to "put chemical and process engineering at the heart of a sustainable future, to benefit members, society, and the environment." These aims will be achieved by working towards two strategic goals: "Supporting a vibrant and thriving profession" and "serving society by collaborating with others", which are underpinned by five strategic enablers.
Membership grades and post-nominals
IChemE has two main types of membership, qualified and non-qualified, with the technician member grade being available in both categories.
Qualified membership grades.
Fellow – A chemical engineering professional in a very senior position in industry and/or academia. This grade entitles the holder to the post-nominal FIChemE and is a chartered grade encompassing all the privileges of the Chartered Member grade.
Chartered Member – An internationally recognised level of professional and academic competence requiring at least 4 years of field experience and a bachelor's degree with honours. Entitles the holder to the post-nominal MIChemE and registration as one or a combination of Chartered Engineer (CEng), Chartered Scientist (CSci) and Chartered Environmentalist (CEnv). This also entitles the individual to register as a European Engineer with the pre-nominal Eur Ing.
Associate Member – This grade is for young professionals who are qualified in chemical and process engineering to bachelor's with honours level or higher. Typically this is the grade held by those working towards Chartered Member level or by graduates working in other fields. This grade entitles the holder to the post-nominal AMIChemE. It can also lead to the grade of Incorporated Engineer (IEng) for those with some field experience which falls short of the level required for Chartered Member grade.
Technician Member – Uses practical understanding to solve engineering problems and could have a qualification, an apprenticeship or years of experience. This grade can lead to the Eng Tech TIChemE post-nominal and now in conjunction with the Nuclear Institute the post-nominal Eng Tech TIChemE TNucI.
Non-qualified membership grades.
Associate Fellow – Senior professionals trained in other fields of a level comparable to Fellow in other professional bodies.
Affiliate – For people working in, with or with a general interest in the sector.
Student – For undergraduate chemical & process engineering students.
Activities
Medals
The Institution has been awarding medals for different areas of chemical engineering work since the first Moulton medals were issued in 1929. The medal was named after Lord Moulton, who helped develop chemical engineering during World War I when he took charge of explosive supplies. Today the Institution gives out eleven medals related to research and teaching, six medals in special interest groups, four medals relating to publications, two medals for services to the profession and two medals for contribution to the Institution.
Annual awards
The IChemE Global Awards take place in November in the UK. The awards are highly regarded throughout the process industries for recognising and rewarding chemical engineering excellence and innovation. The first awards took place at the National Motorcycle Museum in Birmingham on 23 March 1994.
There are 16 categories in total that applicants are invited to enter, including Business Start-Up, Industry Project, Process Safety, and Sustainability, offering a broad scope for entries.
The organisation also holds awards ceremonies in other locations across the globe. 2024 will see the return of the IChemE Malaysia Awards alongside the first-ever IChemE Australasia Awards.
Ashok Kumar Fellowship
The Ashok Kumar Fellowship is an opportunity for a graduate to spend three months working at the UK Parliamentary Office for Science and Technology (POST). The fellowship was jointly funded by IChemE and the Northeast of England Process Industry Cluster (NEPIC). However, NEPIC was unable to contribute in 2018 and the Fellowship was not offered in 2019. As of 2021 it is jointly funded by IChemE and the Materials Processing Institute (reflecting Kumar's employment with British Steel).
The Fellowship was set up in memory of Dr Ashok Kumar, the only serving chemical engineer in the Parliament of the United Kingdom at the time of his sudden death in 2010. Kumar was an IChemE Fellow who had been the Labour MP for Middlesbrough South and Cleveland East.
DiscoverChemEng
In 2023, the Institution launched DiscoverChemEng, an initiative focused on the development of a package of education outreach activities to help inspire future process and chemical engineers and raise awareness of the profession as a career option for young people. A range of resources have been created for IChemE volunteers and STEM ambassadors to use within schools and at careers fairs, alongside an Educator Network that informs volunteers of upcoming events in their local area.
ChemEng Evolution
In order to celebrate its centenary, in 2022 the Institution produced a website, ChemEng Evolution, with short articles on milestones in the history of chemical engineering and IChemE, hosting videos and webinars throughout the year.
Coat of arms
The coat of arms is a shield with two figures. On the left is a helmeted woman, Pallas Athene, the goddess of wisdom, and on the right a bearded man with a large hammer, Hephaestus, the god of technology and of fire. The shield itself shows a salamander as the symbol of chemistry, and a corn grinding mill as a symbol of continuous processes. Between these is a diagonal stripe in red and blue in steps to indicate the cascade nature of many chemical engineering processes. The shield is surmounted by a helmet on which is a dolphin, which in heraldry is associated with intellectual activity, and which also represents the importance of fluid mechanics. Just below the dolphin are two integral signs to illustrate the necessity of mathematics and in particular calculus.
The Latin motto is "Findendo Fingere Disco" or "I learn to make by separating".
Publications
Peer-reviewed journals
Chemical Engineering Research and Design
Process Safety and Environmental Protection
Food and Bioproducts Processing
Education for Chemical Engineers
Molecular Systems Design and Engineering (joint with the Royal Society of Chemistry)
Sustainable Production and Consumption
South African Journal of Chemical Engineering
Other periodicals
The Chemical Engineer
Loss Prevention Bulletin
Books
Conference Proceedings
Technical Guides
Safety Books
Forms of Contract
Past presidents
Notable members
Roland Clift Developer of Life cycle assessment and broadcaster on environmental issues
John Coulson (1910–1990) Co-writer of classic UK textbooks
M. B. Donald (1897–1978) Fourth Ramsay Professor of Chemical Engineering at University College London. Former honorary secretary and vice-president of IChemE; the Institution's Donald Medal is named after him.
Sir Arthur Duckham (1879–1932) First President of the IChemE
Ian Fells Noted energy expert and popular science broadcaster
Trevor Kletz (1922–2013) Noted safety expert
Ashok Kumar (1956–2010) UK Member of Parliament
Frank Lees (1931–1999) author of major safety encyclopaedia
Bodo Linnhoff His 1979 PhD thesis led to Pinch Technology, which has enabled companies to save large amounts of energy
K. B. Quinan (1878–1958) An American who, according to Lloyd George "did more than any other single individual to win the (First World) War"
Jack Richardson (1929–2011) Co-writer of classic UK textbooks
P. N. Rowe (1919–2014) Fifth Ramsay Professor of Chemical Engineering at University College London. He was president of the Institution between 1981 and 1982.
Meredith Thring (1915–2006) prolific inventor, futurologist and early proponent of sustainability
See also
Chartered engineer
Incorporated engineer
Royal Society of Chemistry
American Institute of Chemical Engineers (AIChE)
Chemical engineer
Chemical engineering
History of chemical engineering
List of chemical engineers
List of chemical engineering societies
Process engineering
Process design (chemical engineering)
Frank Morton Sports Day
Northeast of England Process Industry Cluster
References
External links
Institution of Chemical Engineers
Origins of the IChemE
Why not Chemical Engineering – schools' website
Official IChemE Twitter feed
ChemEng Evolution
Chemical engineering organizations
Chemical industry in the United Kingdom
ECUK Licensed Members
Engineering societies based in the United Kingdom
Organisations based in Warwickshire
Organizations established in 1922
Rugby, Warwickshire
Science and technology in Warwickshire
1922 establishments in the United Kingdom | Institution of Chemical Engineers | Chemistry,Engineering | 2,465 |
68,904,876 | https://en.wikipedia.org/wiki/Ovarian%20culture | Ovarian culture is an in-vitro process that allows for the investigation of the development, toxicology and pathology of the ovary. This technique can also be used to study possible applications of fertility treatments e.g. isolating oocytes from primordial ovarian follicles that could be used for fertilisation.
Culture methods using mouse ovarian tissue
There are several culture systems which can be employed to investigate ovarian and follicular growth and development.
Whole ovarian culture
The culture of intact ovaries supports the formation and development of primordial follicles. Ovaries are dissected from neonatal mouse pups and placed into ovarian culture medium containing Bovine Serum Albumin (BSA) dissolved in α-Minimal Essential Media (αMEM). The cultures are maintained in a 37°C, 5% CO2 incubator and then the ovaries are frozen or fixed to facilitate further study.
Follicle culture
Individual
This method of culturing supports the growth of individual follicles from the late pre-antral to the pre-ovulatory stage. This system allows follicle growth and hormone production to be studied. The ovaries of young mice (19–23 days) are removed and halved, and follicles are identified under a microscope. Late pre-antral follicles are identified as having a diameter of 180-200 μm and containing 2-3 layers of granulosa cells. Follicles are manually dissected and then examined for suitability to culture. Follicles are chosen for culture only if they are healthy (diameter of 190 ± 10 μm; translucent; without dark atretic areas; intact basal lamina). Wells containing follicle culture medium (α-Minimal Essential Media, recombinant human follicle stimulating hormone, ascorbic acid and adult female mouse serum) are overlaid with sterilised silicone oil, which prevents medium evaporation. A follicle is placed at the bottom of each well and maintained in a 37°C, 5% CO2 incubator, being moved into a well containing fresh medium for up to 6 days. If growth measurements are being taken visually, the distortion due to the oil layer must be accounted for. Follicles are frozen or fixed so further analysis can be performed.
Paired
By culturing two follicles in close proximity, follicle-follicle interactions can be examined. The follicles may grow together to form a two-follicle unit. The follicles are dissected from the ovaries as above, then placed in contact with each other in pairs, in a well with follicle culture medium and sterilised silicone oil. Follicles from different genetic sources can be co-cultured so that tissue origins can be differentiated within the co-culture. The medium is replaced every 2 days and after 6 days the culture is fixed or frozen for further processing.
Follicle-ovary co-culture
This method allows follicle-ovary interactions to be studied. The ovaries and follicles are dissected as above and then one follicle is placed in contact with one pole of a neonatal ovary on a plate. The follicle-ovary plate is cultured in follicle culture medium at 37 °C, 5% CO2 for up to 5 days. At this point the co-culture is frozen or fixed before further processing. To facilitate differentiation between tissue origins the ovary and the follicle should be from different genetic sources.
Uses of ovarian culture techniques
Toxicological studies
At present, research within the field of reproductive toxicology is principally carried out in vivo; however, new culture methods have been developed with the aim of allowing ovarian follicles to be grown in vitro. These new methods allow us to culture isolated ovarian follicles, embryos, ovaries (whole organ or only part of the tissue), and embryonic stem cells. Ovarian cultures are useful to research as they can allow us to replicate systematic follicle development, periodic ovulation, and follicle atresia in an environment with modulated culture conditions. The ability of in vitro ovarian cultures to detect damage to the ovary and its specialised structures of the follicles and oocytes allows for faster screening of potential developmental and/or reproductive toxicants. Therefore, ovarian culture systems have become increasingly widely used in reproductive biology and toxicology.
Culture of the whole ovary or ovarian fragments allows evaluation of various parameters in a controlled way and, therefore, has the potential for more complete reproductive toxicity studies. A big advantage of ovarian culture is the ability to evaluate the effect of drugs on the pool of primordial follicles that make up the ovarian reserve. However, this strategy is restricted with regard to the duration of culture time, as short periods may not be sufficient to ensure follicular development. Conversely, cells may be negatively affected by longer periods of culture.
Most in vitro toxicology studies use female mice and rat models. These species have been selected to assess the adverse effects of drugs on reproductive function and fertility, due to ease of handling and small size. Additionally, these species have been well characterised; anatomically, physiologically, and genetically. Their short life cycles make it convenient to assess gestation, breastfeeding, and puberty. The relevance of animal studies for toxicological risk assessment in heterogeneous human populations remains undetermined as it is unknown if the results obtained can be extrapolated to humans.
Fertility treatment
The use of in vitro maturation in ovarian culture would eliminate the risk of Ovarian Hyperstimulation Syndrome during IVF in patients with polycystic ovary syndrome (PCOS). For those without PCOS, in vitro maturation still has advantages, as the process is less intense because superovulation is not required. Principles of ovarian culture can be applied to women who are resistant to FSH or who have oestrogen-sensitive tumours. In comparison to IVF, cells used in in vitro maturation are harvested at a smaller size, immature and arrested at the Metaphase I stage of meiosis. Once in the lab they undergo maturation to Metaphase II.
Fertility preservation
Ovarian tissue can be harvested before ovarian damaging treatments and re-implanted at a later stage using cryopreservation. However, this method is associated with the recurrence of malignancy in those with ovarian cancer and leukaemia. In theory, ovarian tissue culture is a safer method to produce mature oocytes for fertilisation in these patients.
References
External links
In vitro fertilisation
Cell culture
Reproduction
Fertility medicine
Environmental toxicology | Ovarian culture | Biology,Environmental_science | 1,411 |
10,181,172 | https://en.wikipedia.org/wiki/Crawford%20burner | A Crawford burner is a device used to test burn rate (chemistry) of solid propellants. It is also known as a strand burner.
A Crawford burner consists of a small pressure vessel in which a thin bar (strand) of the propellant to be tested is mounted on a stand. The bar is coated externally so that burning is restricted to the exposed cross-sectional surface. The propellant is ignited at one end and burns to the other end. Wires are embedded in the propellant at known intervals of distance, so that when the burning surface reaches a wire it produces an electrical signal. These wires are connected to a chronometer, and the signal times recorded at the successive stations allow the burning rate to be measured.
The burning rate measured with a strand burner is typically 4 to 12% less than the actual burning rate observed in rockets. This is because the high temperature conditions in actual rockets are not simulated. The heat transfer characteristics are also different. Nevertheless, the strand burner experiment is easy to perform and repeat, and a qualitative picture of the burning rate is obtained. The temperature sensitivity of the burning rate is usually calculated from strand burner test data.
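The data reduction from such a test is simple distance-over-time arithmetic. The sketch below uses made-up wire positions and timings; the 8% correction factor and the temperature-sensitivity figures are purely illustrative choices, not values from any real test.

```python
import math

# Made-up break-wire data: positions (mm from the ignition end) and the
# chronometer times (s) at which each embedded wire signalled.
positions_mm = [20.0, 40.0, 60.0, 80.0]
times_s = [2.5, 5.0, 7.5, 10.0]

# Burning rate between successive wires: distance over elapsed time.
rates = [(x1 - x0) / (t1 - t0)
         for x0, x1, t0, t1 in zip(positions_mm, positions_mm[1:],
                                   times_s, times_s[1:])]
mean_rate = sum(rates) / len(rates)
print(f"mean strand burning rate: {mean_rate:.2f} mm/s")

# Strand rates read roughly 4-12% below motor values (see text); an 8%
# midpoint correction is an arbitrary illustrative choice.
print(f"corrected estimate: {mean_rate * 1.08:.2f} mm/s")

# Temperature sensitivity from strands fired at two initial temperatures
# (example values): sigma_p = d(ln r)/dT at constant pressure.
r1, T1 = 8.0, 294.0
r2, T2 = 8.5, 344.0
sigma_p = math.log(r2 / r1) / (T2 - T1)
print(f"sigma_p ~= {sigma_p:.5f} per kelvin")
```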
See also
Solid rocket
References
George P. Sutton and Oscar Biblarz. Rocket Propulsion Elements. John Wiley and Sons, Inc. .
External links
Crawford Burner System (Strand Burner)
Rocketry | Crawford burner | Engineering | 281 |
15,482,790 | https://en.wikipedia.org/wiki/Receivables%20turnover%20ratio | Receivable turnover ratio or debtor's turnover ratio is an accounting measure used to measure how effective a company is in extending credit as well as collecting debts. The receivables turnover ratio is an activity ratio, measuring how efficiently a firm uses its assets.
Formula:

$$\text{Receivable turnover ratio} = \frac{\text{Net credit sales}}{\text{Average accounts receivable}}$$
A high ratio implies either that a company operates on a cash basis or that its extension of credit and collection of accounts receivable are efficient. A low ratio, by contrast, implies that the company is not collecting its credit sales in a timely manner.
A good accounts receivable turnover depends on how quickly a business recovers its dues or, in simple terms how high or low the turnover ratio is. For instance, with a 30-day payment policy, if the customers take 46 days to pay back, the Accounts Receivable Turnover is low.
Relation ratios
Days' sales in receivables = 365 / Receivable turnover ratio
Average collection period = $\frac{\text{Average accounts receivable}}{\text{Net credit sales}} \times 365$
Average debtor collection period = $\frac{\text{Trade receivables}}{\text{Credit sales}} \times 365$ = Average collection period in days,
Average creditor payment period = $\frac{\text{Trade payables}}{\text{Credit purchases}} \times 365$ = Average payment period in days,
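A short worked example of these ratios, with hypothetical figures (Python used purely as a calculator):

```python
# Hypothetical figures illustrating the ratios defined above.
net_credit_sales = 730_000.0           # annual net credit sales
avg_accounts_receivable = 92_000.0     # (opening + closing) / 2

receivable_turnover = net_credit_sales / avg_accounts_receivable
days_sales_in_receivables = 365 / receivable_turnover
avg_collection_period = avg_accounts_receivable / net_credit_sales * 365

print(f"receivable turnover ratio:  {receivable_turnover:.2f}")        # 7.93
print(f"days' sales in receivables: {days_sales_in_receivables:.1f}")  # 46.0
print(f"average collection period:  {avg_collection_period:.1f}")      # 46.0
# Against a 30-day payment policy, a 46-day collection period marks the
# turnover as low, matching the example in the text.
```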
See also
Debtor collection period
Cash flow
Working capital
References
Financial ratios
Working capital management
Accounts receivable | Receivables turnover ratio | Mathematics | 237 |
34,627,868 | https://en.wikipedia.org/wiki/Tukey%20depth | In statistics and computational geometry, the Tukey depth is a measure of the depth of a point in a fixed set of points. The concept is named after its inventor, John Tukey. Given a set of n points in d-dimensional space, Tukey's depth of a point x is the smallest fraction (or number) of points in any closed halfspace that contains x.
Tukey's depth measures how extreme a point is with respect to a point cloud. It is used to define the bagplot, a bivariate generalization of the boxplot.
For example, for any extreme point of the convex hull there is always a (closed) halfspace that contains only that point, and hence its Tukey depth as a fraction is 1/n.
Definitions
Sample Tukey's depth of point x, or Tukey's depth of x with respect to the point cloud $\mathbb{X}_n = \{X_1, \ldots, X_n\}$, is defined as

$$D(x; \mathbb{X}_n) = \inf_{\|v\|=1} \frac{1}{n} \sum_{i=1}^n \mathbf{1}\left(\langle v, X_i \rangle \ge \langle v, x \rangle\right),$$

where $\mathbf{1}(\cdot)$ is the indicator function that equals 1 if its argument holds true or 0 otherwise.
Population Tukey's depth of x with respect to a distribution $P_X$ is

$$D(x; P_X) = \inf_{\|v\|=1} P\left(\langle v, X \rangle \ge \langle v, x \rangle\right),$$

where X is a random variable following distribution $P_X$.
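The sample definition can be evaluated directly. The NumPy sketch below scans a finite grid of directions, so in dimension d ≥ 2 it only upper-bounds the exact minimum over all halfspaces (the bound tightens as the grid is refined):

```python
# Brute-force sketch of sample Tukey depth in the plane.
import numpy as np

def tukey_depth_2d(x, points, n_directions=360):
    angles = np.linspace(0.0, np.pi, n_directions, endpoint=False)
    directions = np.column_stack([np.cos(angles), np.sin(angles)])
    proj_pts = points @ directions.T       # shape (n_points, n_directions)
    proj_x = x @ directions.T              # shape (n_directions,)
    frac_ge = (proj_pts >= proj_x).mean(axis=0)   # halfspace <v,y> >= <v,x>
    frac_le = (proj_pts <= proj_x).mean(axis=0)   # the opposite halfspace
    return float(np.minimum(frac_ge, frac_le).min())

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 2))
print(tukey_depth_2d(np.zeros(2), cloud))            # near the centre: ~0.5
print(tukey_depth_2d(np.array([4.0, 0.0]), cloud))   # extreme point: ~0
```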
Tukey mean and relation to centerpoint
A centerpoint c of a point set of size n is precisely a point of Tukey depth of at least n/(d + 1).
See also
Centerpoint (geometry)
References
Computational geometry | Tukey depth | Mathematics | 279 |
39,560,800 | https://en.wikipedia.org/wiki/Six%20operations | In mathematics, Grothendieck's six operations, named after Alexander Grothendieck, is a formalism in homological algebra, also known as the six-functor formalism. It originally sprang from the relations in étale cohomology that arise from a morphism of schemes . The basic insight was that many of the elementary facts relating cohomology on X and Y were formal consequences of a small number of axioms. These axioms hold in many cases completely unrelated to the original context, and therefore the formal consequences also hold. The six operations formalism has since been shown to apply to contexts such as D-modules on algebraic varieties, sheaves on locally compact topological spaces, and motives.
The operations
The operations are six functors. Usually these are functors between derived categories and so are actually left and right derived functors.
the direct image $f_*$
the inverse image $f^*$
the proper (or extraordinary) direct image $f_!$
the proper (or extraordinary) inverse image $f^!$
internal tensor product $\otimes$
internal Hom $\mathcal{H}om$
The functors $f^*$ and $f_*$ form an adjoint functor pair, as do $f_!$ and $f^!$. Similarly, internal tensor product is left adjoint to internal Hom.
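As an orienting special case (standard material, with the notation an assumption of this illustration rather than fixed by the text): for the unique morphism $f \colon X \to \mathrm{pt}$ to a point, the two direct images recover ordinary cohomology and cohomology with compact support,

$$R\Gamma(X, \mathcal{F}) \simeq Rf_*\mathcal{F}, \qquad R\Gamma_c(X, \mathcal{F}) \simeq Rf_!\mathcal{F},$$

and the adjunction between $Rf_!$ and $f^!$ then specialises to Verdier duality on $X$.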
Six operations in étale cohomology
Let $f \colon X \to Y$ be a morphism of schemes. The morphism $f$ induces several functors. Specifically, it gives adjoint functors $f^*$ and $f_*$ between the categories of sheaves on $X$ and $Y$, and it gives the functor $f_!$ of direct image with proper support. In the derived category, $Rf_!$ admits a right adjoint $f^!$. Finally, when working with abelian sheaves, there is a tensor product functor $\otimes$ and an internal Hom functor $\mathcal{H}om$, and these are adjoint. The six operations are the corresponding functors on the derived category: $Lf^*$, $Rf_*$, $Rf_!$, $f^!$, $\otimes^L$, and $R\mathcal{H}om$.
Suppose that we restrict ourselves to a category of $\ell$-adic torsion sheaves, where $\ell$ is coprime to the characteristic of $X$ and of $Y$. In SGA 4 III, Grothendieck and Artin proved that if $f$ is smooth of relative dimension $d$, then $f^!$ is isomorphic to $f^*(d)[2d]$, where $(d)$ denotes the $d$th inverse Tate twist and $[2d]$ denotes a shift in degree by $2d$. Furthermore, suppose that $f$ is separated and of finite type. If $g \colon Y' \to Y$ is another morphism of schemes, if $X'$ denotes the base change of $X$ by $g$, and if $f'$ and $g'$ denote the base changes of $f$ and $g$ by $g$ and $f$, respectively, then there exist natural isomorphisms:

$$g^* \circ Rf_! \cong Rf'_! \circ g'^*, \qquad g^! \circ Rf_* \cong Rf'_* \circ g'^!.$$
Again assuming that $f$ is separated and of finite type, for any objects $M$ in the derived category of $X$ and $N$ in the derived category of $Y$, there exist natural isomorphisms:

$$Rf_!M \otimes^L_Y N \cong Rf_!\left(M \otimes^L_X Lf^*N\right), \qquad R\mathcal{H}om_Y\!\left(Rf_!M, N\right) \cong Rf_*\,R\mathcal{H}om_X\!\left(M, f^!N\right).$$
If $i$ is a closed immersion of $Z$ into $S$ with complementary open immersion $j$, then there is a distinguished triangle in the derived category:

$$Rj_!j^* \to \operatorname{id} \to i_*i^* \to Rj_!j^*[1],$$

where the first two maps are the counit and unit, respectively, of the adjunctions. If $Z$ and $S$ are regular, then there is an isomorphism:

$$i^!\mathbf{1}_S \cong \mathbf{1}_Z(-c)[-2c],$$

where $c$ is the codimension of $Z$ in $S$, and where $\mathbf{1}_Z$ and $\mathbf{1}_S$ are the units of the tensor product operations (which vary depending on which category of $\ell$-adic torsion sheaves is under consideration).
If $S$ is regular and $f \colon X \to S$, and if $K$ is an invertible object in the derived category on $S$ with respect to $\otimes^L$, then define $D_X$ to be the functor $R\mathcal{H}om(-, f^!K)$. Then, for objects $M$ and $M'$ in the derived category on $X$, the canonical maps:

$$M \to D_X(D_X(M)), \qquad R\mathcal{H}om(M, M') \to D_X\!\left(M \otimes^L D_X(M')\right)$$
are isomorphisms. Finally, if $g \colon X \to Y$ is a morphism of $S$-schemes, and if $M$ and $N$ are objects in the derived categories of $X$ and $Y$, then there are natural isomorphisms:

$$D_X(g^*N) \cong g^!D_Y(N), \qquad D_X(g^!N) \cong g^*D_Y(N),$$
$$D_Y(Rg_!M) \cong Rg_*D_X(M), \qquad D_Y(Rg_*M) \cong Rg_!D_X(M).$$
See also
Coherent duality
Grothendieck local duality
Image functors for sheaves
Verdier duality
Change of rings
References
External links
What (if anything) unifies stable homotopy theory and Grothendieck's six functors formalism?
Sheaf theory
Homological algebra
Duality theories | Six operations | Mathematics | 793 |
46,349,043 | https://en.wikipedia.org/wiki/MWC%20480 | MWC 480 is a single star, about 500 light-years away in the constellation of Auriga. It is located in the Taurus-Auriga Star-Forming Region. The name refers to the Mount Wilson Catalog of B and A stars with bright hydrogen lines in their spectra. With an apparent magnitude of 7.62, it is too faint to be seen with the naked eye.
Properties
MWC 480 is a Herbig Ae/Be star, a class of young stars with spectral types of A or B that have not yet reached the main sequence. MWC 480 is about 7 million years old. It has about twice the mass of the Sun and an estimated radius of about 1.67 solar radii.
MWC 480 has X-ray emissions typical of a pre-main-sequence Herbig Ae/Be star but with an order of magnitude more photoelectric absorption. It has a gas-dust envelope and is surrounded by a protoplanetary disc that is about 11% the mass of the Sun. The disc is inclined about 37° towards the line of sight, on a position angle of about 148°. Astronomers using the ALMA (Atacama Large Millimeter/submillimeter Array) have found that the protoplanetary disc surrounding MWC 480 contains large amounts of methyl cyanide (CH3CN), a complex carbon-based molecule. Hydrogen cyanide (HCN) has also been detected in the disc. No signs of planet formation have yet been detected.
Planetary system
In 2021, imaging of the gas flows in the circumstellar disk suggested the presence of a shrouded Jupiter-mass planet about 245 AU from the star.
References
External links
Articles containing video clips
Auriga
Herbig Ae/Be stars
A-type main-sequence stars
023143
031648
BD+29 774
Hypothetical planetary systems
J04584626+2950370
MWC objects | MWC 480 | Astronomy | 406 |
51,847,330 | https://en.wikipedia.org/wiki/MP-2001 | MP-2001, also known as 2,3,4-trimethoxyestra-1,3,5(10)-trien-17β-ol or 2,4-dimethoxyestradiol 3-methyl ether, is a steroid and derivative of estradiol that was described in 1966 and is devoid of estrogenic activity but produces potent analgesic effects in animals. It was never marketed.
See also
2-Methoxyestradiol
4-Methoxyestradiol
References
Analgesics
Estranes
Ethers | MP-2001 | Chemistry | 119 |
62,374,018 | https://en.wikipedia.org/wiki/3-Methyl-PCPy | 3-Methyl-PCPy (3-Me-PCPy) is an arylcyclohexylamine derivative with an unusual spectrum of pharmacological effects, acting as both a potent NMDA antagonist and also a triple reuptake inhibitor which inhibits reuptake of all three monoamine neurotransmitters serotonin, dopamine and noradrenaline. It also acts as a high affinity sigma receptor ligand, selective for the σ2 subtype. It produces both stimulant and dissociative effects in animal behavioural studies.
Legal status
3-Methyl-PCPy is covered by drug analogue laws in various jurisdictions (UK, Germany, Japan, Australia etc.) as a generic arylcyclohexylamine derivative, and a structural isomer of phencyclidine.
See also
3-Methyl-PCP
BTCP
Deoxymethoxetamine
Ephenidine
MDPCP
References
Arylcyclohexylamines
NMDA receptor antagonists
Serotonin–norepinephrine–dopamine reuptake inhibitors
Dissociative drugs | 3-Methyl-PCPy | Chemistry | 236 |
1,832,184 | https://en.wikipedia.org/wiki/MacConkey%20agar | MacConkey agar is a selective and differential culture medium for bacteria. It is designed to selectively isolate gram-negative and enteric (normally found in the intestinal tract) bacteria and differentiate them based on lactose fermentation. Lactose fermenters turn red or pink on MacConkey agar, and nonfermenters do not change color. The media inhibits growth of gram-positive organisms with crystal violet and bile salts, allowing for the selection and isolation of gram-negative bacteria. The media detects lactose fermentation by enteric bacteria with the pH indicator neutral red.
Contents
It contains bile salts (to inhibit most gram-positive bacteria), crystal violet dye (which also inhibits certain gram-positive bacteria), and neutral red dye (which turns pink if the microbes are fermenting lactose).
Composition:
Peptone – 17 g
Proteose peptone – 3 g
Lactose – 10 g
Bile salts – 1.5 g
Sodium chloride – 5 g
Neutral red – 0.03 g
Crystal violet – 0.001 g
Agar – 13.5 g
Water – add to make 1 litre; adjust pH to 7.1 +/− 0.2
Sodium taurocholate
There are many variations of MacConkey agar depending on the need. If the spreading or swarming of Proteus species is not required, sodium chloride is omitted. Crystal violet at a concentration of 0.0001% (0.001 g per litre) is included when inhibition of gram-positive bacteria is required. MacConkey agar with sorbitol is used to isolate E. coli O157, an enteric pathogen.
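The per-litre composition above scales linearly with batch volume; the small helper below (purely illustrative, not from any published protocol) makes the arithmetic explicit:

```python
# Illustrative scaling of the per-litre MacConkey recipe given above.
GRAMS_PER_LITRE = {
    "peptone": 17.0,
    "proteose peptone": 3.0,
    "lactose": 10.0,
    "bile salts": 1.5,
    "sodium chloride": 5.0,
    "neutral red": 0.03,
    "crystal violet": 0.001,
    "agar": 13.5,
}

def batch(volume_litres):
    """Return ingredient masses (grams) for the requested batch volume."""
    return {name: g * volume_litres for name, g in GRAMS_PER_LITRE.items()}

for name, grams in batch(0.5).items():   # a 500 ml batch
    print(f"{name:18s} {grams:8.4f} g")
# Make up to volume with water and adjust pH to 7.1 +/- 0.2, as above.
```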
History
The medium was developed by Alfred Theodore MacConkey while working as a bacteriologist for the Royal Commission on Sewage Disposal.
Uses
Using neutral red pH indicator, the agar distinguishes those gram-negative bacteria that can ferment the sugar lactose (Lac+) from those that cannot (Lac-).
This medium is also known as an "indicator medium" and a "low selective medium". Presence of bile salts inhibits swarming by Proteus species.
Lac positive
By utilizing the lactose available in the medium, Lac+ bacteria such as Escherichia coli, Enterobacter and Klebsiella will produce acid, which lowers the pH of the agar below 6.8 and results in the appearance of pink colonies. The bile salts precipitate in the immediate neighborhood of the colony, causing the medium surrounding the colony to become hazy.
Lac negative
Organisms unable to ferment lactose will form normal-colored (i.e., un-dyed) colonies. The medium may also turn yellow. Examples of non-lactose fermenting bacteria include Salmonella, Proteus, and Shigella spp.
Slow
Some organisms ferment lactose slowly or weakly, and are sometimes put in their own category. These include Serratia and Citrobacter.
Mucoid colonies
Some organisms, especially Klebsiella and Enterobacter, produce mucoid colonies which appear very moist, sticky, and slimy. This phenomenon occurs because the organism produces a capsule, which is predominantly made from the lactose sugar in the agar.
Variant
A variant, sorbitol-MacConkey agar, (with the addition of additional selective agents) can assist in the isolation and differentiation of enterohemorrhagic E. coli serotype O157:H7, by the presence of colorless circular colonies that are non-sorbitol fermenting.
See also
R2a agar
MRS agar (culture medium designed to grow gram-positive bacteria and differentiate them for lactose fermentation).
References
Biochemistry detection reactions
Microbiological media
Cell culture media | MacConkey agar | Chemistry,Biology | 790 |
45,540,288 | https://en.wikipedia.org/wiki/2015%20Kerala%20meteoroid | The 2015 Kerala fireball was a meteor air burst that occurred over Kerala state in India on 27 February 2015.
Initial reports
The fireball, reportedly accompanied by a sonic boom, was noticed across the sky in parts of Thrissur, Ernakulam, Palakkad, Kozhikode and Malappuram districts of Kerala at around 22:00 IST (local time, UTC+5:30) for about 5 to 6 seconds.
Initial reports suggested that it may have been part of a rocket body used to launch Yaogan Weixing-26, a Chinese satellite launched in December 2014. Later, the Meteorology Department and Disaster Management Authority of Kerala rejected the theory, stating that if this were the case, the object should have been spotted by meteorology radars.
Impact sites
Meteorites (meteoroid debris) hit multiple places in Ernakulam district. Small fragments which are believed to be parts of the meteoroid were recovered from Valamboor, near Kolenchery, and Kuruppampady, near Perumbavoor.
A team of scientists from the State Emergency Operations Centre (SEOC) and the Geological Survey of India visited the impact sites and collected samples for analysis. A preliminary report indicated that the fragments' chemical composition consists of nickel and iron.
See also
Bolide
Impact event
Meteorite fall
References
2015 in India
Modern Earth impact events
Meteorites found in India
History of Kerala (1947–present)
2015 in outer space
February 2015 events in India
Disasters in Kerala
History of Ernakulam district
21st-century astronomical events | 2015 Kerala meteoroid | Astronomy | 319 |
74,420,348 | https://en.wikipedia.org/wiki/Generalized%20uncertainty%20principle | The Generalized Uncertainty Principle (GUP) represents a pivotal extension of the Heisenberg Uncertainty Principle, incorporating the effects of gravitational forces to refine the limits of measurement precision within quantum mechanics. Rooted in advanced theories of quantum gravity, including string theory and loop quantum gravity, the GUP introduces the concept of a minimal measurable length. This fundamental limit challenges the classical notion that positions can be measured with arbitrary precision, hinting at a discrete structure of spacetime at the Planck scale. The mathematical expression of the GUP is often formulated as:
$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)$$

In this equation, $\Delta x$ and $\Delta p$ denote the uncertainties in position and momentum, respectively. The term $\hbar$ represents the reduced Planck constant, while $\beta$ is a parameter that embodies the minimal length scale predicted by the GUP. The GUP is more than a theoretical curiosity; it signifies a cornerstone concept in the pursuit of unifying quantum mechanics with general relativity. It posits an absolute minimum uncertainty in the position of particles, approximated by the Planck length, underscoring its significance in the realms of quantum gravity and string theory where such minimal length scales are anticipated.
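A short consistency check (a standard textbook manipulation, not drawn from the specific works cited here) shows how the minimal length arises. Dividing the inequality by $\Delta p$ gives

$$\Delta x \;\ge\; \frac{\hbar}{2}\left(\frac{1}{\Delta p} + \beta\,\Delta p\right),$$

and the right-hand side is minimised at $\Delta p = 1/\sqrt{\beta}$, yielding the absolute minimum position uncertainty

$$\Delta x_{\min} = \hbar\sqrt{\beta},$$

so $\hbar\sqrt{\beta}$ plays the role of the minimal measurable length, typically taken to be of the order of the Planck length.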
Various quantum gravity theories, such as string theory, loop quantum gravity, and quantum geometry, propose a generalized version of the uncertainty principle (GUP), which suggests the presence of a minimum measurable length. Multiple forms of the GUP have been introduced in earlier research.
Observable consequences
The GUP's phenomenological and experimental implications have been examined across low- and high-energy contexts, encompassing atomic systems, quantum optical systems, gravitational bar detectors, gravitational decoherence, and macroscopic harmonic oscillators, and extending further to composite particles and astrophysical systems.
See also
Uncertainty principle
References
External links
Research papers on Generalized Uncertainty Principle
Quantum gravity
String theory
Unsolved problems in physics | Generalized uncertainty principle | Physics,Astronomy | 363 |
65,814,387 | https://en.wikipedia.org/wiki/NGC%202188 | NGC 2188 is a barred spiral galaxy in the constellation Columba. It is located at a distance of circa 25 million light years from Earth, which means that the galaxy, given its apparent dimensions is about 50.000 light years long. It was discovered by John Herschel on January 9, 1836.
NGC 2188 is a spiral galaxy seen edge-on from the viewpoint of Earth as the centre and spiral arms of the galaxy are tilted away from us, with only the very narrow outer edge of the galaxy's disc visible to us. The true shape of the galaxy was identified by studying the distribution of the stars in the inner central bulge and outer disc and by observing the stars' colours. The galaxy is close enough that its stars can be resolved. The brightest of them have an apparent magnitude of about 21.
When imaged in HI, the galaxy appears asymmetrical, possibly due to a recent interaction. The hydrogen gas is more abundant at one end of the galaxy and extends over 4 kpc away from the galactic plane. Other visible features are some filaments and a superbubble with a diameter of 15 arcseconds. The filaments have been associated with an HII region located in the galactic halo. The total hydrogen mass of the galaxy is estimated to be , while it is of low metallicity.
NGC 2188 has been found to have three smaller companions, HIPASS J0607-34, ESO364-029, and KK 55.
References
External links
Barred spiral galaxies
Dwarf spiral galaxies
Columba (constellation)
2188
18536
Magellanic spiral galaxies | NGC 2188 | Astronomy | 331 |
3,524,992 | https://en.wikipedia.org/wiki/Triangular%20function | A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window.
Definitions
The most common definition is as a piecewise function:
$$\operatorname{tri}(x) = \Lambda(x) = \begin{cases} 1 - |x|, & |x| < 1; \\ 0, & \text{otherwise.} \end{cases}$$
Equivalently, it may be defined as the convolution of two identical unit rectangular functions:
$$\operatorname{tri}(x) = \operatorname{rect}(x) * \operatorname{rect}(x) = \int_{-\infty}^{\infty} \operatorname{rect}(x - \tau)\,\operatorname{rect}(\tau)\,d\tau.$$
The triangular function can also be represented as the product of the rectangular and absolute value functions:
$$\operatorname{tri}(x) = \operatorname{rect}\!\left(\frac{x}{2}\right)\left(1 - |x|\right).$$
Note that some authors instead define the triangle function to have a base of width 1 instead of width 2:
$$\operatorname{tri}(2x) = \begin{cases} 1 - 2|x|, & |x| < \tfrac{1}{2}; \\ 0, & \text{otherwise.} \end{cases}$$
In its most general form a triangular function is any linear B-spline:
$$\operatorname{tri}_j(x) = \begin{cases} \dfrac{x - x_{j-1}}{x_j - x_{j-1}}, & x_{j-1} \le x < x_j; \\[4pt] \dfrac{x_{j+1} - x}{x_{j+1} - x_j}, & x_j \le x < x_{j+1}; \\ 0, & \text{otherwise.} \end{cases}$$
Whereas the definition at the top is a special case
$$\operatorname{tri}(x) = \operatorname{tri}_j(x),$$
where $x_{j-1} = -1$, $x_j = 0$, and $x_{j+1} = 1$.
A linear B-spline is the same as a continuous piecewise linear function $f$, and this general triangle function is useful to formally define $f$ as
$$f(x) = \sum_j y_j \, \operatorname{tri}_j(x),$$
where $x_j < x_{j+1}$ for all integer $j$.
The piecewise linear function passes through every point expressed as coordinates with ordered pair $(x_j, y_j)$, that is,
$$f(x_j) = y_j.$$
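The equivalence of the height-1, base-2 definitions is easy to check numerically; the NumPy sketch below compares the piecewise, product, and (discretised) convolution forms:

```python
# Cross-check of the piecewise, product, and convolution definitions.
import numpy as np

def rect(t):
    return (np.abs(t) < 0.5).astype(float)   # unit rectangular function

def tri_piecewise(t):
    return np.maximum(1.0 - np.abs(t), 0.0)

def tri_product(t):
    return rect(t / 2.0) * (1.0 - np.abs(t))

t = np.linspace(-1.9, 1.9, 401)
assert np.allclose(tri_piecewise(t), tri_product(t))

dt = 0.001
s = np.arange(-2.0, 2.0, dt)
conv = np.convolve(rect(s), rect(s), mode="same") * dt   # rect * rect
assert np.allclose(conv, tri_piecewise(s), atol=1e-2)    # grid-level error
print("all three definitions agree")
```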
Scaling
For any parameter $a \ne 0$:
$$\operatorname{tri}\!\left(\frac{t}{a}\right) = \begin{cases} 1 - \left|\dfrac{t}{a}\right|, & |t| < |a|; \\ 0, & \text{otherwise.} \end{cases}$$
Fourier transform
The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function:
$$\mathcal{F}\{\operatorname{tri}(t)\}(f) = \operatorname{sinc}^2(f),$$
where $\operatorname{sinc}(f) = \dfrac{\sin(\pi f)}{\pi f}$ is the normalized sinc function.
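A direct numeric check of this transform pair, using Riemann-sum integration (the sample frequencies are arbitrary; np.sinc is the normalized sinc):

```python
# Numeric check that the Fourier transform of tri equals sinc^2.
import numpy as np

tri = lambda t: np.maximum(1.0 - np.abs(t), 0.0)
dt = 1e-4
t = np.arange(-1.0, 1.0, dt)
for f in (0.0, 0.25, 1.3):
    ft = np.sum(tri(t) * np.exp(-2j * np.pi * f * t)) * dt  # Riemann sum
    print(f"f={f}: transform={ft.real:+.6f}, sinc^2={np.sinc(f)**2:+.6f}")
```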
See also
Källén function, also known as triangle function
Tent map
Triangular distribution
Triangle wave, a piecewise linear periodic function
Trigonometric functions
References
Special functions | Triangular function | Mathematics | 371 |
18,580,071 | https://en.wikipedia.org/wiki/Spurious%20trip%20level | Spurious trip level (STL) is defined as a discrete level for specifying the spurious trip requirements of safety functions to be allocated to safety systems. An STL of 1 means that this safety function has the highest level of spurious trips. The higher the STL level the lower the number of spurious trips caused by the safety system. There is no limit to the number of spurious trip levels.
Safety functions and systems are installed to protect people, the environment and for asset protection. A safety function should only activate when a dangerous situation occurs. A safety function that activates without the presence of a dangerous situation (e.g., due to an internal failure) causes economic loss. The spurious trip level concept represents the probability that safety function causes a spurious (unscheduled) trip.
The STL is a metric that is used to specify the performance level of a safety function in terms of the spurious trips it potentially causes. Typical safety systems that benefit from an STL level are defined in standards like IEC 61508, IEC 61511, IEC 62061, ISA S84, EN 50204 and so on. An STL provides end-users of safety functions with a measurable attribute that helps them define the desired availability of their safety functions. An STL can be specified for a complete safety loop or for individual devices.
For end-users there is always a potential conflict between the cost of safety solutions and the loss of profitability caused by spurious trips of these safety solutions. The STL concept helps the end-users to end this conflict in a way that safety solutions provide both the desired safety and the desired process availability.
STL determination
The spurious trip level represents asset loss due to an internal failure of the safety function. The more financial damage the safety function can cause due to a spurious trip the higher the STL level of the safety function should be. Each company needs to decide for themselves which level of financial loss they can or are willing to take. This actually depends on many different factors including the financial strength of the company, the insurance policy they have, the cost of process shutdown and startup, and so on. All these factors are unique to each company. The table below shows an example of how a company can calibrate its spurious trip levels.
STL levels
The STL level achieved by a safety function is determined by the probability of fail safe (PFS) of this safety function. The PFS value is determined by internal failures of the safety system that cause the safety function to be executed without a demand from the process. The table below demonstrates the PFS value and spurious trip reduction (TRV) values of each STL level.
STL vs SIL
Today standards only define the safety integrity level (SIL) for safety functions. Standards do not define STL levels because they do in first instance not represent safety but economic loss. Despite this the STL is also a safety attribute, specially for safety functions in the process, oil & gas, chemical and nuclear industry. In those industries an undesired shutdown of the process leads to dangerous situation as the plant needs to be started up again. Startup and shutdown of a process plant are considered the two most dangerous operational modes of the plant and should be limited to the absolute minimum.
In practice the STL and SIL concepts complement each other. Both factors are attributes of the same safety function. The STL level is determined by the average PFS value of the safety function. The SIL level is determined by the average probability of failure on demand (PFD) value of the safety function. The STL level expresses the probability of spurious trips by the safety function, i.e., the safety function is executed without a demand from the process. The SIL level expresses the probability that the safety function does not work upon demand from the process. Both parameters are important to end-users in order to achieve safety and asset protection.
In order to calculate the PFS or PFD value of a safety loop it is necessary to have a reliability model and reliability data for each component in the safety loop. The best reliability model to use is a Markov model (see Andrey Markov). Typical data required are listed below; a simplified calculation sketch follows the list:
Lambda safe detected
Lambda safe undetected
Lambda dangerous detected
Lambda dangerous undetected
Repair rate
Proof test coverage
Proof test interval
Common cause factors
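The sketch below is a deliberately simplified, non-Markov illustration of how such data feed PFS and PFD figures. The failure rates and the power-of-ten band boundaries are assumptions made for illustration: SIL bands follow the IEC 61508 pattern, while STL bands are set per company, as discussed above.

```python
# Simplified single-channel illustration (not a Markov model).
import math

lam_safe_detected = 2e-6         # safe detected failures per hour (example)
lam_safe_undetected = 5e-7       # safe undetected failures per hour (example)
lam_dangerous_undetected = 1e-7  # dangerous undetected failures per hour
proof_test_interval_h = 8760.0   # one year between proof tests

# Spurious trips are driven by safe failures: probability of at least
# one spurious trip over a proof-test interval.
lam_safe = lam_safe_detected + lam_safe_undetected
pfs = 1.0 - math.exp(-lam_safe * proof_test_interval_h)

# Demand unavailability driven by dangerous undetected failures
# (standard single-channel average approximation).
pfd_avg = lam_dangerous_undetected * proof_test_interval_h / 2.0

def band(p):
    """Level n such that 10**-(n+1) <= p < 10**-n (illustrative bands)."""
    return max(1, math.ceil(-math.log10(p)) - 1)

print(f"PFS    = {pfs:.2e} -> STL {band(pfs)} (assumed band boundaries)")
print(f"PFDavg = {pfd_avg:.2e} -> SIL {band(pfd_avg)} (IEC 61508-style bands)")
```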
See also
Safety integrity level
Notes
External links
Spurious Trip Level analysis and certification
IEC Functional safety zone
IEC What is functional safety?
Overview of IEC 61508
SIL and Functional Safety in a Nutshell - eBook introducing SIL and Functional Safety
Safety
Risk management
Safety engineering | Spurious trip level | Engineering | 952 |
19,541,813 | https://en.wikipedia.org/wiki/Network%20detector | Network detectors or network discovery software are computer programs that facilitate detection of wireless LANs using the 802.11b, 802.11a and 802.11g WLAN standards. Discovering networks may be done through active as well as passive scanning.
Active scanning
Active scanning is done by sending multiple probe requests and recording the probe responses. A probe response normally contains the BSSID and the WLAN SSID. If SSID broadcasting has been turned off and active scanning is the only type of scanning supported by the software, no networks will show up. An example of an active scanner is NetStumbler.
Passive scanning
Passive scanning is done not by active probing but by merely listening to any data sent out by the AP. Once a legitimate user connects to the AP, the AP will eventually send out the SSID in cleartext. By impersonating the AP through automatic alteration of the MAC address, the computer running the network discovery scanner can be given this SSID by legitimate users. Passive scanners include Kismet and essid jack (a program under AirJack).
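A minimal passive-scanning sketch using the Scapy library is shown below. It assumes root privileges and a wireless interface already placed in monitor mode (the name wlan0mon is an example), and the field handling is deliberately simplified:

```python
# Passive 802.11 scan: record SSID/BSSID pairs from beacon frames.
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

seen = {}

def handle(pkt):
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2                 # transmitter address
        # First information element of a beacon is normally the SSID.
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
        if bssid not in seen:
            seen[bssid] = ssid
            print(f"{bssid}  {ssid}")

# "wlan0mon" is an example monitor-mode interface name.
sniff(iface="wlan0mon", prn=handle, store=False)
```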
Notable programs
Notable programs include Network Stumbler, Kismet, Lumeta Corporation, Aerosol, AirMagnet, MacStumbler, Ministumbler, Mognet, NetChaser, perlskan, Wireless Security Auditor, Wlandump, PocketWarrior, pocketWinc, Prismstumbler, Sniff-em, AiroPeek, Airscanner, AP Scanner, AP Radar, Apsniff, BSD-Airtools, dstumbler, gtk-scanner, gWireless, iStumbler, KisMAC, Sniffer Wireless, THC-Scan, THC-Wardrive, WarGlue, WarKizniz, Wellenreiter, Wi-Scan and WiStumbler.
References
Hacking (computer security)
Wireless networking
Detectors | Network detector | Technology,Engineering | 390 |
43,110,993 | https://en.wikipedia.org/wiki/Penicillium%20arianeae | Penicillium arianeae is a fungus species of the genus of Penicillium which is named after Princess Ariane of the Netherlands.
See also
List of Penicillium species
References
Further reading
arianeae
Fungi described in 2013
Fungus species | Penicillium arianeae | Biology | 53 |
4,883,158 | https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20theorem | In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found.
The theorem also generalizes to the $q$-gamma function.
Statement of the theorem
For every $n \in \mathbb{N}_0$ there is no non-zero polynomial $P \in \mathbb{C}[X; Y_0, Y_1, \ldots, Y_n]$ such that
$$\forall x \in \mathbb{C} \setminus \mathbb{Z}_{\le 0} \colon \quad P\left(x; \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\right) = 0,$$
where $\Gamma$ is the gamma function.
For example, define $P \in \mathbb{C}[X; Y_0, Y_1, Y_2]$ by
$$P(X; Y_0, Y_1, Y_2) = X^2 Y_2 + X Y_1 + (X^2 - \nu^2) Y_0.$$
Then the equation
$$P\left(x; f(x), f'(x), f''(x)\right) = x^2 f''(x) + x f'(x) + (x^2 - \nu^2) f(x) = 0$$
is called an algebraic differential equation, which, in this case, has the solutions $f = J_\nu$ and $f = Y_\nu$, the Bessel functions of the first and second kind respectively. Hence, we say that $J_\nu$ and $Y_\nu$ are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. Hölder's Theorem simply states that the gamma function, $\Gamma$, is not differentially algebraic and is therefore transcendentally transcendental.
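A quick numeric spot-check of this example (not part of the original argument): mpmath's numeric differentiation is used, and the choices of $\nu$ and the sample points are arbitrary.

```python
# Check that y = J_nu solves x^2 y'' + x y' + (x^2 - nu^2) y = 0.
from mpmath import mp, besselj, diff

mp.dps = 30
nu = mp.mpf(2)
f = lambda t: besselj(nu, t)
for x in (mp.mpf("0.7"), mp.mpf(3), mp.mpf(11)):
    residual = x**2 * diff(f, x, 2) + x * diff(f, x, 1) + (x**2 - nu**2) * f(x)
    print(x, residual)   # ~0 up to numerical error
```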
Proof
Let $n \in \mathbb{N}_0$, and assume that a non-zero polynomial $P \in \mathbb{C}[X; Y_0, Y_1, \ldots, Y_n]$ exists such that
$$\forall x \in \mathbb{C} \setminus \mathbb{Z}_{\le 0} \colon \quad P\left(x; \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\right) = 0.$$
As a non-zero polynomial in $\mathbb{C}[X]$ can never give rise to the zero function on any non-empty open domain of $\mathbb{C}$ (by the fundamental theorem of algebra), we may suppose, without loss of generality, that $P$ contains a monomial term having a non-zero power of one of the indeterminates $Y_0, Y_1, \ldots, Y_n$.
Assume also that $P$ has the lowest possible overall degree with respect to the lexicographic ordering $Y_0 < Y_1 < \cdots < Y_n$. For example,
$$\deg\left(2 X^2 Y_0^3 Y_1 + X Y_0^2 Y_1\right) < \deg\left(X Y_0 Y_1^2\right),$$
because the highest power of $Y_1$ in any monomial term of the first polynomial is smaller than that of the second polynomial.
Next, observe that for all $x > 0$ we have:
$$P\left(x + 1;\, \Gamma(x + 1), \Gamma'(x + 1), \ldots, \Gamma^{(n)}(x + 1)\right) = P\left(x + 1;\, x\Gamma(x),\, x\Gamma'(x) + \Gamma(x),\, \ldots,\, x\Gamma^{(n)}(x) + n\Gamma^{(n-1)}(x)\right) = 0,$$
using the functional equation $\Gamma(x + 1) = x\Gamma(x)$ and its differentiated forms $\Gamma^{(k)}(x + 1) = x\Gamma^{(k)}(x) + k\Gamma^{(k-1)}(x)$.
If we define a second polynomial $Q$ by the transformation
$$Q(X; Y_0, Y_1, \ldots, Y_n) = P\left(X + 1;\, X Y_0,\, X Y_1 + Y_0,\, \ldots,\, X Y_n + n Y_{n-1}\right),$$
then we obtain the following algebraic differential equation for $\Gamma$:
$$Q\left(x; \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\right) = 0.$$
Furthermore, if $X^h Y_0^{h_0} Y_1^{h_1} \cdots Y_n^{h_n}$ is the highest-degree monomial term in $P$, then the highest-degree monomial term in $Q$ is
$$X^{h + h_0 + h_1 + \cdots + h_n} Y_0^{h_0} Y_1^{h_1} \cdots Y_n^{h_n}.$$
Consequently, the polynomial
$$Q - X^{h_0 + h_1 + \cdots + h_n} P$$
has a smaller overall degree than $P$, and as it clearly gives rise to an algebraic differential equation for $\Gamma$, it must be the zero polynomial by the minimality assumption on $P$. Hence, defining $R \in \mathbb{C}[X]$ by
$$R(X) = X^{h_0 + h_1 + \cdots + h_n},$$
we get
$$Q(X; Y_0, Y_1, \ldots, Y_n) = R(X) \cdot P(X; Y_0, Y_1, \ldots, Y_n).$$
Now, let $X = 0$ in this identity to obtain
$$Q(0; Y_0, Y_1, \ldots, Y_n) = P(1;\, 0,\, Y_0,\, 2Y_1,\, \ldots,\, n Y_{n-1}) = R(0) \cdot P(0; Y_0, \ldots, Y_n) = 0.$$
A change of variables then yields
$$P(1;\, 0,\, Y_1,\, Y_2,\, \ldots,\, Y_n) = 0,$$
and an application of mathematical induction (along with a change of variables at each induction step) to the earlier expression
$$Q(X; Y_0, Y_1, \ldots, Y_n) = R(X) \cdot P(X; Y_0, Y_1, \ldots, Y_n)$$
reveals that
$$\forall m \in \mathbb{N} \colon \quad P(m;\, 0,\, Y_1,\, Y_2,\, \ldots,\, Y_n) = 0.$$
This is possible only if $P$ is divisible by $Y_0$ (and since $\Gamma$ has no zeros, the cofactor would then yield an algebraic differential equation of smaller degree), which contradicts the minimality assumption on $P$. Therefore, no such $P$ exists, and so $\Gamma$ is not differentially algebraic. Q.E.D.
References
Gamma and related functions
Theorems in analysis | Hölder's theorem | Mathematics | 508 |
16,819,687 | https://en.wikipedia.org/wiki/Maternity%20clothing | Maternity clothing is worn by women as an adaptation to changes in body size during pregnancy. The evolution of maternity clothing began during the Middle Ages, and became fashionable as women became more selective about style and comfort in the types of maternity clothing they wore. Fashions were constantly changing over time, such as the high-waisted Empire silhouette style maternity dress that was fashionable at the turn of the 19th century, and the "wrapper" style dress of the Victorian era that a woman could simply wrap around herself and button up.
The commercial production of maternity clothing began at the start of the 20th century, and continued to evolve. During the 1990s in the U.S., relaxed laws such as the Family and Medical Leave Act, which was signed into law by President Bill Clinton, helped to protect the jobs of pregnant women, and served as a form of liberation that afforded women the freedom to wear fashionable maternity styles that emphasised their pregnancy.
History
Dresses did not follow a wearer's body shape until the Middle Ages. When western European dresses began to have seams, affluent pregnant women opened the seams to allow for growth. During the Baroque period (roughly 1600s through the 1700s) the Adrienne, a waistless pregnancy gown with many folds, was popular. At that time women wore men's waistcoats. Some styles had laced vents in the back that allowed the wearer to adjust the girth of the coat as needed. From the 1790s through the early 1820s a style well-suited for pregnancy, the Empire waist, was popular. The Empire, a style which has a fitted bodice ending just below the bust and a loosely gathered skirt, was made popular by Napoleon's first wife Empress Joséphine. Bibs could be added to permit breastfeeding. The 1960s saw a revival of the Empire waistline which lasted for a few years as a general fashion, but remained popular for many years as pregnancy wear.
Victorian era
During the Victorian era, women spent more time in pregnancy compared to the 21st century, giving birth to an average of eight children with five making it through infancy. Queen Victoria herself had nine. Pregnancy was considered a private matter not to be discussed in "polite" conversation. A garment called a "wrapper" worn by women at home before they dressed for the day was well-suited for pregnancy as well since it wrapped around and could be worn loosely or more form-fitting as needed. At that time women were used to wearing corsets and maternity corsets with laces for adjustment were available.
High-waisted gowns remained in style until about 1830 but when waists returned to normal levels maternity wardrobe planning became more difficult. In the 1840s and 1850s a fan pleated bodice was popular. Some styles were gathered at the shoulder with a drawstring at the waist which could be let out as needed. Separates were introduced in the 1860s with boxy jackets and gathered center bodice panels. Some outfits had linings intended to lace over the belly for support. Fashion magazines showed maternity clothing, though the word "pregnancy" or similar was never mentioned, instead wording such as "for the recently married lady", or "for the young matron" was used.
1900 to present
The first commercial ready-to-wear clothing for pregnant women was sold in the US by Lane Bryant in the 1900s. Lane Bryant offered shirtwaists with an adjustable drawstring waist, and dresses with an adjustable wrap-around front.
The next competitor, Page Boy, offered a patented skirt in 1937. By the 1930s, wrap-around skirts with a series of buttons were available, but the new Page Boy skirt was constructed with a window over the area of the expanding abdomen which allowed the hemline to remain stable rather than to hike up as the woman's abdomen increased in size. In later years when stretch fabric became available it was used to fill in the window. Their clothing, usually a slim skirt with a wide smock top, became fashionable during the 1950s after Lucille Ball popularized the style in the first TV episode to show a pregnant woman in 1952. Celebrities such as Jackie Kennedy and Elizabeth Taylor were later known for wearing Page Boy clothes.
Slacks with adjustable waists became widely available in the 1950s. An Aldens catalog from 1952 shows a pedal pusher and matching blouse outfit priced at $5.98. Designer blue jeans became available in the '80s.
Further developments in maternity clothing styles have meant that many maternity tops are also made to enable discreet nursing, extending the usable life of maternity clothes beyond just the period whilst pregnant.
Cultural trends
Maternity clothes around the world have been undergoing significant changes. In both Eastern and Western cultures, there is greater demand for fashionable maternity clothes. In Western cultures, the influence of celebrity culture means that pregnant women in the public eye are taking the lead in maternity fashion. One such example is Demi Moore's 1991 Vanity Fair cover, which was one of the first instances of a magazine cover depicting an expectant mother. As a result, pregnant women are no longer trying to hide or disguise their "baby bumps", instead choosing to wear garments which closely fit their new shape, often emphasising the bust and abdominal area. Fashion bloggers have caught on to the shift in perception and begun to regularly discuss new styles and fabrics designed with the pregnant form in mind. High-tech fabrics such as elastane are the material of choice for maternity wear in Western cultures as they allow garments to be form-fitting while allowing the abdominal area to expand as necessary.
Women in Eastern cultures, however, have maintained a much greater sense of modesty when it comes to maternity wear. In both the Islamic and Asian cultures, maternity wear is much less fitted, hemlines are longer and necklines higher. Modern Islamic maternity wear uses fabrics with brighter colours and bolder prints. Aside from cultural modesty, Chinese women have sometimes sought to hide their second pregnancy in less shapely clothes because Chinese policy has dictated that they can only have one child. In Chinese and Japanese cultures, there is a fear of radiation from devices such as computers and mobile phones, especially during pregnancy. Even though there is no evidence to support this (according to WHO), Asian maternity wear is often manufactured from "anti-radiation" fabrics.
Culturally in the US today, a few popular clothing brands have made everyday wear for pregnant women both fashionable and accessible. As the body is changing shape and therefore levels of comfort, most maternity clothing is made with Lycra and elastic for stretch and growth. For pants, the waistband is usually a thick layer of stretchy material that can be hidden by a shirt to give the pants a normal look. Depending on style and activity, tops often billow out to leave room for the belly and are made of varying cottons and elastic materials.
As more women are pregnant when they marry, some manufacturers of maternity clothing and bridal gowns have begun producing wedding dresses that fit pregnant women.
Military maternity uniforms
While women were integrated into the U.S. military in 1948, they were automatically discharged if they became pregnant. However, in the late 1970s it was decided that in order to keep women in an all-volunteer armed forces the military needed to change its policy regarding pregnancy. Following complaints that pregnant women dressed in civilian clothing undermined morale, between 1978 and 1980 the armed forces began to issue military maternity outfits. Writing about her experience working on the Armed Forces History Collections at the Smithsonian National Museum of American History, museum expert Bethanee Bemis wrote (in 2011):
"In the year since the beginning of the Military History Collections Inventory project, other members of the team and I have seen just about every type of military uniform we could conceive of. We have learned to identify branch, rank, even time period of different uniform pieces with relative ease, which is why we were surprised to come across a uniform unlike any we had seen before. It was a blue smock top paired with a white blouse and blue skirt, and it turned out to be a United States Air Force officer's maternity dress uniform."
Bemis wrote in 2011 that of the more than 6,000 military uniforms in their collection only three were maternity uniforms, an Air Force officer's uniform and two Navy Petty Officer Second Class uniforms. The Air Force dress uniform features a blue smock top paired with a white blouse and blue skirt, and the Navy uniforms include a blue coat and slacks with a white blouse for dress and a working uniform with dungaree pants and a chambray shirt. All three uniforms are from the 1980s.
Speaking at the International Women's Day celebration in March 2021, President Joe Biden spoke of the progress that the military has taken in recent years to better accommodate women in the military, including creating maternity flight suits and new better fitting body armor. Tucker Carlson, speaking on Fox News, showed a photo of an Air Force officer wearing an artificial pregnancy bump to demonstrate the design changes that made for the new Maternity Flight Duty Uniform and commented "So, we've got new hairstyles and maternity flight suits. Pregnant women are going to fight our wars. It's a mockery of the U.S. military." The Pentagon's top spokesman and senior generals rebuked Carlson's remarks. His comments also drew numerous social media responses from both male and female enlisted service members, some of whom noted that Tucker has never served. Army veteran Senator Tammy Duckworth, who lost both her legs during a deployment in Iraq and notably the first sitting senator to give birth while in office, responded to Carlson with a tweet referencing his brief appearance on Dancing with the Stars:
"While he was practicing his two-step, America's female warriors were hunting down Al Qaeda and proving the strength of America's women."
Legislative influences
Pregnancy fashions took a dramatic turn in the 1990s with the introduction of tight-fitted maternity wear intended to emphasize rather than hide a pregnant woman's baby bump. Not coincidentally, this shift occurred during a time of major changes for women in America. In 1993, the Family and Medical Leave Act was passed by President Bill Clinton. This act protected women's jobs during pregnancy, giving women more freedom to show off their pregnancies.
Until this act was passed, many women were fired as a result of their pregnancies. Following the passage of this legislation women had more job security and government-protected maternity leave. At the same time as these laws were being passed, maternity fashions changed drastically. Many magazine articles began to discuss stylish mothers-to-be wearing figure hugging clothing that emphasized their growing waistline.
Cost and economics
Historically maternity clothing has not generally been considered a potentially profitable area for most major clothing manufacturers due to a belief that many women would not purchase clothes intended for only a few months of wearing. Declining birth rates have also reduced sales. However, with wide media interest in celebrity pregnancies beginning in the late 1990s, the maternity wear market grew 10% between 1998 and 2003. It was also during this time that the term "pregnant chic" was developed in order for companies to market to pregnant women. One clothing source said the demand for maternity clothes was growing because "Nowadays women are working during pregnancy, and travelling, and going to the gym, so their clothing needs are greater and more diverse."
In 2015 it was reported that maternity clothes is a $2.4 billion market in the U.S. According to a Forbes analysis, in 2014 a pregnant woman spent around $480 on maternity wear. This represents approximately one-sixth of all clothing sales each year. The largest chains, belonging to Destination Maternity, control almost one-fifth of the American market. Other brands are sold through discount stores, department stores, and boutiques.
Maternity clothing is generally worn only during the second and third trimesters, and possibly for several weeks or months after the birth of the baby while a woman regains her pre-pregnancy size. If a woman expects to be pregnant only once or twice, buying maternity clothing that will be worn only for about six months, can be considered expensive. Women who cannot afford or don't want to spend large amounts of money on maternity clothing may choose to just wear either larger, looser clothing or buy secondhand maternity clothes via yard sales and also consignment clothing stores. Also, some products, such as button extenders or Ingrid & Isabel's Bellaband wrap, are intended to work with the woman's non-maternity clothing, to reduce the need for specialized clothing.
References
External links
Sizes in clothing
Clothing by function
Human pregnancy | Maternity clothing | Physics,Mathematics | 2,553 |
233,195 | https://en.wikipedia.org/wiki/Air%20compressor | An air compressor is a machine that takes ambient air from the surroundings and discharges it at a higher pressure. It is an application of a gas compressor and a pneumatic device that converts mechanical power (from an electric motor, diesel or gasoline engine, etc.) into potential energy stored in compressed air, which has many uses. A common application is to compress air into a storage tank, for immediate or later use. When the delivery pressure reaches its set upper limit, the compressor is shut off, or the excess air is released through an overpressure valve. The compressed air is stored in the tank until it is needed. The pressure energy provided by the compressed air can be used for a variety of applications such as pneumatic tools as it is released. When tank pressure reaches its lower limit, the air compressor turns on again and re-pressurizes the tank.
A compressor is different from a pump because it works on a gas, while pumps work on a liquid.
Classification
Power source
Internal combustion engine: Petrol, petrol without oil, diesel
Electric: AC, DC
Drive type
Direct drive
Belt drive
Compressors may be classified according to the pressure delivered:
Low-pressure air compressors, which have a discharge pressure of or less
Medium-pressure compressors which have a discharge pressure of
High-pressure air compressors, which have a discharge pressure above
There are numerous methods of air compression, divided into either positive-displacement or roto-dynamic types.
Single-stage reciprocating compressor
Multi-stage reciprocating compressor
Single stage rotary-screw compressor
Two-stage rotary screw compressor
Rotary vane pump
Scroll compressor
Centrifugal (roto-dynamic or turbo) compressor
Axial compressor, often used in jet engines.
Compressors may also be classified by lubrication type: oil-lubricated and oil-free. Oil-free (or oil-less) compressors use more technically developed designs that do not require oil for lubrication in the compression chamber. Oil-less air compressors are also lighter and more portable than oil-lubricated models, but require more maintenance. Oil-lubricated air compressors are the more traditional type: they require oil to lubricate the pump, which helps prolong the compressor's life. One of the benefits of oil-lubricated compressors is that they tend to be more durable and require less maintenance than oil-free compressors.
Positive displacement compressors
Positive-displacement compressors work by forcing air through a chamber whose volume is decreased to compress the air. Once the pressure is greater than the pressure outside the discharge valve, a port or valve opens and air is discharged into the outlet system from the compression chamber. Common types of positive displacement compressors are:
Piston-type air compressors, which compress air by pumping it through cylinders with reciprocating pistons. They use one-way valves to admit air into the cylinder on the induction stroke and to prevent it from leaving by the same route; on the compression stroke, air is pushed out of the cylinder through the exhaust valve to the high-pressure side, where another non-return valve prevents it from leaking back on the next induction stroke. Piston compressors can be single- or multi-stage, and may also have one or more sets of cylinders in parallel (at the same pressure). Multi-stage compressors provide greater efficiency than their single-stage counterparts for high compression ratios, and generally use interstage cooling to improve efficiency.
The capacities for both single-stage and two-stage compressors are generally specified in standard cubic feet per minute (SCFM) or litres per minute, together with a pressure in pounds per square inch (PSI) or bar. To a lesser extent, some compressors are rated in actual cubic feet per minute (ACFM). Still others are rated simply in cubic feet per minute (CFM). Using CFM alone to rate a compressor is ambiguous, because it states a flow rate without a pressure reference; a complete rating takes the form, e.g., 20 CFM at 60 PSI.
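Since SCFM and ACFM differ only by a correction to reference conditions, the relation can be sketched with the ideal gas law. The following is a minimal Python sketch, assuming a 14.7 psia / 68 °F reference (conventions differ between standards bodies); the function name and example numbers are illustrative, not a manufacturer's procedure.

```python
def acfm_to_scfm(acfm, p_actual_psia, t_actual_f,
                 p_std_psia=14.7, t_std_f=68.0):
    """Convert actual to standard cubic feet per minute via the ideal gas law.

    Reference conditions vary between standards (CAGI, ISO); the
    14.7 psia / 68 F values used here are one common convention.
    """
    t_actual_r = t_actual_f + 459.67   # convert to Rankine (absolute)
    t_std_r = t_std_f + 459.67
    return acfm * (p_actual_psia / p_std_psia) * (t_std_r / t_actual_r)

# Example: 20 ACFM of intake air at 13.0 psia (high altitude) and 100 F
print(round(acfm_to_scfm(20.0, 13.0, 100.0), 1))  # ~16.7 SCFM
```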
Single stage compressors usually fall into the fractional through 5 horsepower range. Two-stage compressors normally fall into the 5 through 30 horsepower range.
Rotary screw compressors provide positive-displacement compression by matching two helical screws that, when turned, guide air into a chamber, whose volume is decreased as the screws turn. Rotary screw compressors can be single-stage or two-stage.
Vane compressors use a slotted rotor with varied blade placement to guide air into a chamber and compress the volume. This type of compressor delivers a fixed volume of air at high pressures.
Roto-dynamic or turbo compressors
Roto-dynamic air compressors include centrifugal compressors, where rotating vanes impart kinetic energy to a gas and stationary passages convert velocity into a rise in pressure, and axial compressors, where rotor blades impart the kinetic energy and stator blades convert it to a rise in pressure.
Cooling
Due to adiabatic heating, air compressors require some method of disposing of waste heat. Generally this is some form of air- or water-cooling, although some (particularly rotary-type) compressors may be cooled by oil that is then in turn air- or water-cooled. Atmospheric conditions are also considered when cooling a compressor. The type of cooling is determined by factors such as inlet temperature, ambient temperature, the power of the compressor, and the area of application; no single type of compressor suits every application.
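The need for cooling can be illustrated by the temperature rise of an uncooled (adiabatic) compression. A minimal Python sketch, assuming ideal-gas behaviour and a heat capacity ratio of 1.4 for air; the function name and the 8:1 example ratio are illustrative:

```python
# Ideal-gas estimate of the adiabatic (no-cooling) discharge temperature,
# illustrating why compressors need heat removal. gamma = 1.4 for air.
def adiabatic_discharge_temp_k(t_in_k, pressure_ratio, gamma=1.4):
    return t_in_k * pressure_ratio ** ((gamma - 1.0) / gamma)

# Compressing 20 C (293 K) air 8:1 without any cooling:
print(round(adiabatic_discharge_temp_k(293.0, 8.0)))  # ~531 K (about 258 C)
```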
Applications
Air compressors have many uses, such as supplying clean high-pressure air to fill gas cylinders, supplying clean moderate-pressure air to a submerged surface-supplied diver, supplying moderate-pressure clean air for driving pneumatic HVAC control valves in office and school buildings, supplying a large amount of moderate-pressure air to power pneumatic tools such as jackhammers, filling high-pressure air tanks (HPA), inflating tires, and producing large volumes of moderate-pressure air for large-scale industrial processes (such as oxidation for petroleum coking or cement plant bag house purge systems).
Air compressors are also widely used in oil and gas, mining and drilling applications as the flushing medium, aerating muds in underbalanced drilling and in air pigging of pipelines.
Most air compressors either are reciprocating piston type, rotary vane or rotary screw. Centrifugal compressors are common in very large applications, while rotary screw, scroll, and reciprocating air compressors are favored for small and medium-sized applications.
Power source
Air compressors are designed to utilize a variety of power sources. While direct drive gasoline or diesel-engines and electric motors are among the most popular, air compressors that utilize vehicle engines, power-take-off, or hydraulic ports are also commonly used in mobile applications.
The power of a compressor is measured in HP (horsepower) and CFM (cubic feet per minute of intake air).
The volume of the pressure vessel and the stored pressure indicate the volume of compressed air (in reserve) available.
Gasoline and diesel-powered compressors are widely used in remote areas with problematic access to electricity. They are noisy and require ventilation for exhaust gases, particularly if the compressed air is to be used for a breathing air supply. Electric-powered compressors are widely used in production facilities, workshops, and garages with permanent access to electricity. Common workshop/garage compressors run on 110–120 volts or 230–240 volts. Compressor tank shapes include "pancake", "twin tank", "horizontal", and "vertical". Depending on size and purpose, compressors can be stationary or portable.
Maintenance
To ensure that all compressor types run efficiently and without leaks, it is necessary to perform routine maintenance. The cost of maintenance accounts for only about 8% of the life cycle cost of owning an air compressor.
Air compressor isentropic efficiency
According to CAGI air compressor performance verification data sheets, the higher the isentropic efficiency, the greater the energy savings. The isentropic efficiency of the best air compressors has reached 95%.
Approximately 70–80% of an air compressor's total lifetime cost is energy consumption, so using a high-efficiency air compressor is one of the principal energy-saving measures.
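Isentropic efficiency compares the ideal (isentropic) work of compression with the work actually expended. A minimal Python sketch, estimating it from inlet and outlet temperatures under ideal-gas assumptions; the function name and example figures are illustrative, not CAGI's measurement procedure:

```python
# Isentropic efficiency of a compressor stage from measured temperatures:
# the ratio of the ideal (isentropic) temperature rise to the actual rise.
def isentropic_efficiency(t_in_k, t_out_k, pressure_ratio, gamma=1.4):
    t_out_ideal = t_in_k * pressure_ratio ** ((gamma - 1.0) / gamma)
    return (t_out_ideal - t_in_k) / (t_out_k - t_in_k)

# 293 K inlet, 560 K measured outlet, 8:1 pressure ratio:
print(round(isentropic_efficiency(293.0, 560.0, 8.0), 3))  # ~0.89
```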
See also
Vacuum pump
Free-piston engine
Gas compressor
Pneumatics
Gas cylinder
"The Blue Air Compressor"
References
Gas compressors
Gases
Diving support equipment
Gas technologies
Industrial gases | Air compressor | Physics,Chemistry | 1,728 |
7,852,809 | https://en.wikipedia.org/wiki/Hamiltonian%20matrix | In mathematics, a Hamiltonian matrix is a $2n$-by-$2n$ matrix $A$ such that $JA$ is symmetric, where $J$ is the skew-symmetric matrix
$$J = \begin{bmatrix} 0_n & I_n \\ -I_n & 0_n \end{bmatrix}$$
and $I_n$ is the $n$-by-$n$ identity matrix. In other words, $A$ is Hamiltonian if and only if $(JA)^{\mathsf{T}} = JA$, where $(\cdot)^{\mathsf{T}}$ denotes the transpose.
(Not to be confused with Hamiltonian (quantum mechanics))
Properties
Suppose that the $2n$-by-$2n$ matrix $A$ is written as the block matrix
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
where $a$, $b$, $c$, and $d$ are $n$-by-$n$ matrices. Then the condition that $A$ be Hamiltonian is equivalent to requiring that the matrices $b$ and $c$ are symmetric, and that $a + d^{\mathsf{T}} = 0$. Another equivalent condition is that $A$ is of the form $A = JS$ with $S$ symmetric.
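A minimal NumPy sketch of this block condition, building a Hamiltonian matrix from blocks with $b$ and $c$ symmetric and $d = -a^{\mathsf{T}}$, then verifying that $JA$ is symmetric (the variable names are illustrative):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# Build a Hamiltonian matrix from blocks: b, c symmetric and d = -a^T.
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n)); b = b + b.T   # make b symmetric
c = rng.standard_normal((n, n)); c = c + c.T   # make c symmetric
A = np.block([[a, b], [c, -a.T]])

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# (J A)^T == J A, i.e. J A is symmetric, so A is Hamiltonian.
print(np.allclose((J @ A).T, J @ A))  # True
```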
It follows easily from the definition that the transpose of a Hamiltonian matrix is Hamiltonian. Furthermore, the sum (and any linear combination) of two Hamiltonian matrices is again Hamiltonian, as is their commutator. It follows that the space of all Hamiltonian matrices is a Lie algebra, denoted $\mathfrak{sp}(2n)$. The dimension of $\mathfrak{sp}(2n)$ is $2n^2 + n$. The corresponding Lie group is the symplectic group $\operatorname{Sp}(2n)$. This group consists of the symplectic matrices, those matrices $M$ which satisfy $M^{\mathsf{T}} J M = J$. Thus, the matrix exponential of a Hamiltonian matrix is symplectic. However, the logarithm of a symplectic matrix is not necessarily Hamiltonian because the exponential map from the Lie algebra to the group is not surjective.
The characteristic polynomial of a real Hamiltonian matrix is even. Thus, if a Hamiltonian matrix has $\lambda$ as an eigenvalue, then $-\lambda$, $\bar{\lambda}$ and $-\bar{\lambda}$ are also eigenvalues. It follows that the trace of a Hamiltonian matrix is zero.
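A small self-contained check of these spectral properties; the example matrix is an arbitrary illustrative choice:

```python
import numpy as np

# A 2x2 Hamiltonian example (n = 1): b = 3, c = 1 (trivially symmetric),
# d = -a. J*A is symmetric, the trace is zero, and the eigenvalues
# come in a pair {l, -l}.
A = np.array([[2.0, 3.0],
              [1.0, -2.0]])
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose((J @ A).T, J @ A))   # True: A is Hamiltonian
print(np.linalg.eigvals(A))            # approximately [ 2.6458, -2.6458 ]
```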
The square of a Hamiltonian matrix is skew-Hamiltonian (a matrix $W$ is skew-Hamiltonian if $(JW)^{\mathsf{T}} = -JW$). Conversely, every skew-Hamiltonian matrix arises as the square of a Hamiltonian matrix.
Extension to complex matrices
As for symplectic matrices, the definition for Hamiltonian matrices can be extended to complex matrices in two ways. One possibility is to say that a matrix $A$ is Hamiltonian if $(JA)^{\mathsf{T}} = JA$, as above. Another possibility is to use the condition $(JA)^* = JA$, where the superscript asterisk ($*$) denotes the conjugate transpose.
Hamiltonian operators
Let $V$ be a vector space, equipped with a symplectic form $\Omega$. A linear map $A\colon V \to V$ is called a Hamiltonian operator with respect to $\Omega$ if the form $x, y \mapsto \Omega(A(x), y)$ is symmetric. Equivalently, it should satisfy
$$\Omega(A(x), y) = -\Omega(x, A(y)) \quad \text{for all } x, y \in V.$$
Choose a basis $e_1, \ldots, e_{2n}$ in $V$, such that $\Omega$ is represented in this basis by the matrix $J$ above. A linear operator is Hamiltonian with respect to $\Omega$ if and only if its matrix in this basis is Hamiltonian.
References
Matrices | Hamiltonian matrix | Mathematics | 505 |
29,286,753 | https://en.wikipedia.org/wiki/M33%20X-7 | M33 X-7 is a black hole binary system in the Triangulum Galaxy. The system is made up of a stellar-mass black hole and a companion star. The black hole in M33 X-7 has an estimated mass of 15.65 times that of the Sun; it was formerly the largest known stellar black hole, though it has since been superseded among electromagnetically observed black holes by an increased mass estimate for Cygnus X-1, and by many of the binary black hole components detected by LIGO–Virgo–KAGRA (LVK). The total mass of the system is estimated to be around 85.7 solar masses, which would make it the most massive known black hole binary system. The black hole is consuming its partner, a 70 solar mass blue giant star.
Location
M33 X-7 lies within the Triangulum Galaxy, which is approximately 3 million light-years (ly) distant from the Milky Way in the constellation Triangulum. This makes M33 X-7 one of the most distant confirmed stellar-mass black holes known.
System
M33 X-7 orbits a companion star that eclipses the black hole every 3.45 days. The companion star also has an unusually large mass of about 70 solar masses, making it the most massive companion star in a binary system containing a black hole.
Observational data
The black hole was studied in combination by NASA's Chandra X-ray Observatory and the Gemini telescope on Mauna Kea, Hawaii.
The properties of the M33 X-7 binary system are difficult to explain using conventional models for the evolution of massive stars. The parent star for the black hole must have had a mass greater than the existing companion to have formed a black hole before the companion star. Such a massive star would have had a radius larger than the present separation between the stars, so the stars must have been brought closer while sharing a common outer atmosphere. This process typically results in a large amount of mass being lost from the system, so much that the parent star should not have been able to form a black hole.
In new models of the formation of the black hole, the star that will form the black hole is nearly 100 times the mass of the Sun, orbited by a second star with mass of about .
In such an orbit, the future black hole is able to start transferring mass while it is still fusing hydrogen into helium. As a result, it loses most of its hydrogen, becoming a Wolf–Rayet star, and sheds the rest of its envelope in the form of stellar wind, exposing its core. Its companion grows more massive in the process, becoming the more massive of the two stars.
Finally, the star collapses creating the black hole, and begins absorbing material from its companion, leading to X-ray emissions.
Future
Due to its mass, it is expected that the companion will also collapse into a black hole, creating a binary black hole system.
See also
List of most massive stars
References
Stars in the Triangulum Galaxy
Triangulum
Triangulum Galaxy
Stellar black holes
Extragalactic stars
O-type giants | M33 X-7 | Physics,Astronomy | 611 |
560,061 | https://en.wikipedia.org/wiki/Psilocybe%20semilanceata | Psilocybe semilanceata, commonly known as the liberty cap, is a species of fungus which produces the psychoactive compounds psilocybin, psilocin and baeocystin. It is both one of the most widely distributed psilocybin mushrooms in nature, and one of the most potent. The mushrooms have a distinctive conical to bell-shaped cap, up to in diameter, with a small nipple-like protrusion on the top. They are yellow to brown, covered with radial grooves when moist, and fade to a lighter color as they mature. Their stipes tend to be slender and long, and the same color or slightly lighter than the cap. The gill attachment to the stipe is adnexed (narrowly attached), and they are initially cream-colored before tinting purple to black as the spores mature. The spores are dark purplish-brown en masse, ellipsoid in shape, and measure 10.5–15 by 6.5–8.5 micrometres.
The mushroom grows in grassland habitats, especially wetter areas; however, unlike P. cubensis, it does not grow directly on dung. Rather, it is a saprobic species that feeds on decaying grass roots. It is widely distributed in the temperate areas of the Northern Hemisphere, particularly in Europe, and has been reported occasionally in temperate areas of the Southern Hemisphere as well. The earliest reliable history of P. semilanceata intoxication dates back to 1799 in London, and in the 1960s the mushroom was the first European species confirmed to contain psilocybin.
The possession or sale of psilocybin mushrooms is illegal in many countries.
Taxonomy and naming
The species was first described by Elias Magnus Fries as Agaricus semilanceatus in his 1838 work Epicrisis Systematis Mycologici. Paul Kummer transferred it to Psilocybe in 1871 when he raised many of Fries's sub-groupings of Agaricus to the level of genus. Panaeolus semilanceatus, named by Jakob Emanuel Lange in both 1936 and 1939 publications, is a synonym. According to the taxonomical database MycoBank, several taxa once considered varieties of P. semilanceata are synonymous with the species now known as Psilocybe strictipes: the caerulescens variety described by Pier Andrea Saccardo in 1887 (originally named Agaricus semilanceatus var. coerulescens by Mordecai Cubitt Cooke in 1881), the microspora variety described by Rolf Singer in 1969, and the obtusata variety described by Marcel Bon in 1985.
Several molecular studies published in the 2000s demonstrated that Psilocybe, as it was defined then, was polyphyletic. The studies supported the idea of dividing the genus into two clades, one consisting of the bluing, hallucinogenic species in the family Hymenogastraceae, and the other the non-bluing, non-hallucinogenic species in the family Strophariaceae. However, the generally accepted lectotype (a specimen later selected when the original author of a taxon name did not designate a type) of the genus as a whole was Psilocybe montana, which is a non-bluing, non-hallucinogenic species. If the non-bluing, non-hallucinogenic species in the study were to be segregated, it would have left the hallucinogenic clade without a valid name. To resolve this dilemma, several mycologists proposed in a 2005 publication to conserve the name Psilocybe, with P. semilanceata as the type. As they explained, conserving the name Psilocybe in this way would prevent nomenclatural changes to a well-known group of fungi, many species of which are "linked to archaeology, anthropology, religion, alternate life styles, forensic science, law enforcement, laws and regulation". Further, the name P. semilanceata had historically been accepted as the lectotype by many authors in the period 1938–68. The proposal to conserve the name Psilocybe, with P. semilanceata as the type was accepted unanimously by the Nomenclature Committee for Fungi in 2009.
The mushroom takes its common name from the Phrygian cap, also known as the "liberty cap", which it resembles; P. semilanceata shares its common name with P. pelliculosa, a species from which it is more or less indistinguishable in appearance. The Latin word for Phrygian cap is pileus, nowadays the technical name for what is commonly known as the "cap" of a fungal fruit body. In the 18th century, Phrygian caps were placed on Liberty poles, which resemble the stipe of the mushroom. The generic name is derived from Ancient Greek psilos (ψιλός) 'smooth, bare' and Byzantine Greek kubê (κύβη) 'head'. The specific epithet comes from Latin semi 'half, somewhat' and lanceata, from lanceolatus 'spear-shaped'.
Description
The cap of P. semilanceata is in diameter and tall. It varies in shape from sharply conical to bell-shaped, often with a prominent papilla (a nipple-shaped structure), and does not change shape considerably as it ages. The cap margin is initially rolled inward but unrolls to become straight or even curled upwards in maturity. The cap is hygrophanous, meaning it assumes different colors depending on its state of hydration. When it is moist, the cap is ochraceous to pale brown to dark chestnut brown, but darker in the center, often with a greenish-blue tinge. When moist, radial grooves (striations) can be seen on the cap that correspond to the positions of the gills underneath. When the cap is dry, it becomes much paler, a light yellow-brown color. Moist mushrooms have sticky surfaces that result from a thin gelatinous film called a pellicle. This film becomes apparent if a piece of the cap is broken by bending it back and peeling away the piece. When the cap dries from exposure to the sun, the film turns whitish and is no longer peelable.
On the underside of the mushroom's cap, there are between 15 and 27 individual narrow gills that are moderately crowded together, and they have a narrowly adnexed to almost free attachment to the stipe. Their color is initially pale brown, but becomes dark gray to purple-brown with a lighter edge as the spores mature. The slender yellowish-brown stipe is long by thick, and usually slightly thicker towards the base. The mushroom has a thin cobweb-like partial veil that does not last long before disappearing; sometimes, the partial veil leaves an annular zone on the stipe that may be darkened by spores. The flesh is thin and membrane-like, and roughly the same color as the surface tissue. It has a farinaceous (similar to freshly ground flour) odor and taste. All parts of the mushroom will stain a bluish color if handled or bruised, and it may naturally turn blue with age.
Microscopic characteristics
In deposit, the spores are a deep reddish purple-brown color. The use of an optical microscope can reveal further details: the spores are oblong when seen in side view, and oblong to oval in frontal view, with dimensions of 10.5–15 by 6.5–8.5 μm. The basidia (spore bearing cells of the hymenium), are 20–31 by 5–9 μm, four-spored, and have clamps at their bases; there are no basidia found on the sterile gill edge. The cheilocystidia (cystidia on the gill edge) measure 15–30 by 4–7 μm, and are flask-shaped with long thin necks that are 1–3.5 μm wide. P. semilanceata does not have pleurocystidia (cystidia on the gill face). The cap cuticle is up to 90 μm thick, and is made of a tissue layer called an ixocutis—a gelatinized layer of hyphae lying parallel to the cap surface. The hyphae comprising the ixocutis are cylindrical, hyaline, and 1–3.5 μm wide. Immediately under the cap cuticle is the subpellis, made of hyphae that are 4–12 μm wide with yellowish-brown encrusted walls. There are clamp connections present in the hyphae of all tissues.
Other forms
The anamorphic form of P. semilanceata is an asexual stage in the fungus's life cycle involved in the development of mitotic diaspores (conidia). In culture, grown in a petri dish, the fungus forms a white to pale orange cottony or felt-like mat of mycelia. The conidia formed are straight to curved, measuring 2.0–8.0 by 1.1–2.0 μm, and may contain one to several small intracellular droplets. Although little is known of the anamorphic stage of P. semilanceata beyond the confines of laboratory culture, in general, the morphology of the asexual structures may be used as classical characters in phylogenetic analyses to help understand the evolutionary relationships between related groups of fungi.
Scottish mycologist Roy Watling described sequestrate (truffle-like) or secotioid versions of P. semilanceata he found growing in association with regular fruit bodies. These versions had elongated caps, long and wide at the base, with the inward curved margins closely hugging the stipe from the development of membranous flanges. Their gills were narrow, closely crowded together, and anastomosed (fused together in a vein-like network). The color of the gills was sepia with a brownish vinaceous (red wine-colored) cast, and a white margin. The stipes of the fruit bodies were long by thick, with about of stipe length covered by the extended cap. The thick-walled ellipsoid spores were 12.5–13.5 by 6.5–7 μm. Despite the significant differences in morphology, molecular analysis showed the secotioid version to be the same species as the typical morphotype.
Similar species
There are several other Psilocybe species that may be confused with P. semilanceata due to similarities in physical appearance. P. strictipes is a slender grassland species that is differentiated macroscopically from P. semilanceata by the lack of a prominent papilla. P. mexicana, commonly known as the "Mexican liberty cap", is also similar in appearance, but is found in manure-rich soil in subtropical grasslands in Mexico. It has somewhat smaller spores than P. semilanceata, typically 8–9.9 by 5.5–7.7 μm. Another lookalike species is P. samuiensis, found in Thailand, where it grows in well-manured clay-like soils or among paddy fields. This mushroom can be distinguished from P. semilanceata by its smaller cap, up to in diameter, and its rhomboid-shaped spores. P. pelliculosa is physically similar to such a degree that it may be indistinguishable in the field. It differs from P. semilanceata by virtue of its smaller spores, measuring 9–13 by 5–7 μm.
P. semilanceata has also been confused with the toxic muscarine-containing species Inocybe geophylla, a whitish mushroom with a silky cap, yellowish-brown to pale grayish gills, and a dull yellowish-brown spore print.
Ecology and habitat
Psilocybe semilanceata fruits solitarily or in groups on rich and acidic soil, typically in grasslands, such as meadows, pastures, or lawns. It is often found in pastures that have been fertilized with sheep or cow dung, although it does not typically grow directly on the dung.
P. semilanceata, like all other species of the genus Psilocybe, is a saprobic fungus, meaning it obtains nutrients by breaking down organic matter. The mushroom is also associated with sedges in moist areas of fields, and it is thought to live on the decaying root remains.
Like some other grassland psilocybin mushroom species such as P. mexicana, P. tampanensis and Conocybe cyanopus, P. semilanceata may form sclerotia, a dormant form of the fungus, which affords it some protection from wildfires and other natural disasters.
Laboratory tests have shown P. semilanceata to suppress the growth of the soil-borne water mold Phytophthora cinnamomi, a virulent plant pathogen that causes the disease root rot. When grown in dual culture with other saprobic fungi isolated from the rhizosphere of grasses from its habitat, P. semilanceata significantly suppresses their growth. This antifungal activity, which can be traced at least partly to two phenolic compounds it secretes, helps it compete successfully with other fungal species in the intense competition for nutrients provided by decaying plant matter. Using standard antimicrobial susceptibility tests, Psilocybe semilanceata was shown to strongly inhibit the growth of the human pathogen methicillin-resistant Staphylococcus aureus (MRSA). The source of the antimicrobial activity is unknown.
Distribution
Psilocybe authority Gastón Guzmán, in his 1983 monograph on psilocybin mushrooms, considered Psilocybe semilanceata the world's most widespread psilocybin mushroom species, as it has been reported in 18 countries.
In Europe, P. semilanceata has a widespread distribution, and is found in Austria, Belarus, Belgium, Bulgaria, the Channel Islands, the Czech Republic, Denmark, Estonia, the Faroe Islands, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, the Netherlands, Norway, Poland, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom and Ukraine. It is generally agreed that the species is native to Europe; Watling has demonstrated that there exists little difference between specimens collected from Spain and Scotland, at both the morphological and genetic level.
The mushroom also has a widespread distribution in North America. In Canada it has been collected from British Columbia, New Brunswick, Newfoundland, Nova Scotia, Prince Edward Island, Ontario and Quebec. In the United States, it is most common in the Pacific Northwest, west of the Cascade Mountains, where it fruits abundantly in autumn and early winter; fruiting has also been reported to occur infrequently during spring months. Charles Horton Peck reported the mushroom to occur in New York in the early 20th century, and consequently, much literature published since then has reported the species to be present in the eastern United States. Guzmán later examined Peck's herbarium specimen, and in his comprehensive 1983 monograph on Psilocybe, concluded that Peck had misidentified it as the species now known as Panaeolina foenisecii. P. semilanceata is much less common in South America, where it has been recorded in Chile. It is also known in Australia (where it may be an introduced species) and New Zealand, where it grows in high-altitude grasslands. In 2000, it was reported from Golaghat, in the Indian state of Assam. In 2017, it was reported from Charsadda, in the Pakistani province of Khyber Pakhtunkhwa.
Psychoactive use
The first reliably documented report of Psilocybe semilanceata intoxication involved a British family in 1799, who prepared a meal with mushrooms they had picked in London's Green Park. According to the chemist Augustus Everard Brande, the father and his four children experienced typical symptoms associated with ingestion, including pupil dilation, spontaneous laughter and delirium. The identification of the species responsible was made possible by James Sowerby's 1803 book Coloured Figures of English Fungi or Mushrooms, which included a description of the fungus, then known as Agaricus glutinosus (originally described by William Curtis in 1780). According to German mycologist Jochen Gartz, the description of the species is "fully compatible with current knowledge about Psilocybe semilanceata."
In the early 1960s, the Swiss scientist Albert Hofmann—known for the synthesis of the psychedelic drug LSD—chemically analyzed P. semilanceata fruit bodies collected in Switzerland and France by the botanist Roger Heim. Using the technique of paper chromatography, Hofmann confirmed the presence of 0.25% (by weight) psilocybin in dried samples. Their 1963 publication was the first report of psilocybin in a European mushroom species; previously, it had been known only in Psilocybe species native to Mexico, Asia and North America. This finding was confirmed in the late 1960s with specimens from Scotland and England, Czechoslovakia (1973), Germany (1977), Norway (1978), and Belgium and Finland (1984). In 1965, forensic characterization of psilocybin-containing mushrooms seized from college students in British Columbia identified P. semilanceata—the first recorded case of intentional recreational use of the mushroom in Canada. The presence of the psilocybin analog baeocystin was confirmed in 1977. Several studies published since then support the idea that the variability of psilocybin content in P. semilanceata is low, regardless of country of origin.
Properties
Several studies have quantified the amounts of hallucinogenic compounds found in the fruit bodies of Psilocybe semilanceata. In 1993, Gartz reported an average of 1% psilocybin (expressed as a percentage of the dry weight of the fruit bodies), ranging from a minimum of 0.2% to a maximum of 2.37%, making it one of the most potent species (though significantly less potent than Panaeolus cyanescens). In an earlier analysis, Tjakko Stijve and Thom Kuyper (1985) found a high concentration in a single specimen (1.7%) in addition to a relatively high concentration of baeocystin (0.36%). Smaller specimens tend to have the highest percent concentrations of psilocybin, but the absolute amount is highest in larger mushrooms. A Finnish study assayed psilocybin concentrations in old herbarium specimens, and concluded that although psilocybin concentration decreased linearly over time, it was relatively stable; they were able to detect the chemical in specimens that were 115 years old. Michael Beug and Jeremy Bigwood, analyzing specimens from the Pacific Northwest region of the United States, reported psilocybin concentrations ranging from 0.62% to 1.28%, averaging 1.0 ±0.2%. They concluded that the species was one of the most potent, as well as the most constant in psilocybin levels. In a 1996 publication, Paul Stamets defined a "potency rating scale" based on the total content of psychoactive compounds (including psilocybin, psilocin, and baeocystin) in 12 species of Psilocybe mushrooms. Although there are certain caveats with this technique, such as the erroneous assumption that these compounds contribute equally to psychoactive properties, it serves as a rough comparison of potency between species. Despite its small size, Psilocybe semilanceata is considered a "moderately active to extremely potent" hallucinogenic mushroom (meaning the combined percentage of psychoactive compounds is typically between 0.25% and more than 2%), and of the 12 mushrooms compared, only 3 were more potent: P. azurescens, P. baeocystis, and P. bohemica. However, this data has become obsolete over the years as more potent cultivars have been discovered for numerous species, especially Panaeolus cyanescens, which holds the current record for the most potent mushrooms described in published research. According to Gartz (1995), P. semilanceata is Europe's most popular psychoactive species.
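As a rough arithmetic aid, a content expressed as a percentage of dry weight converts directly to milligrams per gram of dried material. A minimal Python sketch using the figures quoted above (the function name is illustrative):

```python
# Converting a psilocybin content given as a percent of dry weight into
# milligrams per gram of dried mushroom; percentages from the studies above.
def pct_dry_weight_to_mg_per_g(pct):
    return pct / 100.0 * 1000.0

for pct in (0.2, 1.0, 2.37):
    print(f"{pct:>5.2f}% -> {pct_dry_weight_to_mg_per_g(pct):.1f} mg/g")
```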
Several reports have been published in the literature documenting the effects of consumption of P. semilanceata. Typical symptoms include visual distortions of color, depth and form, progressing to visual hallucinations. The effects are similar to the experience following consumption of LSD, although milder. Common side effects of mushroom ingestion include pupil dilation, increased heart rate, unpleasant mood, and overresponsive reflexes. As is typical of the symptoms associated with psilocybin mushroom ingestion, "the effect on mood in particular is dependent on the subject's pre-exposure personality traits", and "identical doses of psilocybin may have widely differing effects in different individuals." Although most cases of intoxication resolve without incident, there have been isolated cases with severe consequences, especially after higher dosages or persistent use. In one case reported in Poland in 1998, an 18-year-old man developed Wolff–Parkinson–White syndrome, arrhythmia, and suffered myocardial infarction after ingesting P. semilanceata frequently over the period of a month. The cardiac damage and myocardial infarction was suggested to be a result of either coronary vasoconstriction, or because of platelet hyperaggregation and occlusion of small coronary arteries.
Danger of misidentification
One danger of attempting to consume hallucinogenic or other wild mushrooms, especially for novice mushroom hunters, is the possibility of misidentification with toxic species. In one noted case, an otherwise healthy young Austrian man mistook the poisonous Cortinarius rubellus for P. semilanceata. As a result, he suffered end-stage kidney failure, and required a kidney transplant. In another instance, a young man developed cardiac abnormalities similar to those seen in Takotsubo cardiomyopathy, characterized by a sudden temporary weakening of the myocardium. A polymerase chain reaction-based test to specifically identify P. semilanceata was reported by Polish scientists in 2007. Poisonous Psathyrella species can easily be misidentified as liberty caps.
Legal status
The legal status of psilocybin mushrooms varies worldwide. Psilocybin and psilocin are listed as Class A (United Kingdom) or Schedule I (US) drugs under the United Nations 1971 Convention on Psychotropic Substances. The possession and use of psilocybin mushrooms, including P. semilanceata, is therefore prohibited by extension. Although many European countries remained open to the use and possession of hallucinogenic mushrooms after the US ban, starting in the 2000s there was a tightening of laws and enforcement. In the Netherlands, where the drug was once routinely sold in licensed cannabis coffee shops and smart shops, laws were instituted in October 2008 to prohibit the possession or sale of psychedelic mushrooms; it was the final European country to do so.
They are legal in Jamaica and Brazil and decriminalised in Portugal. In the United States, the city of Denver, Colorado, voted in May 2019 to decriminalize the use and possession of psilocybin mushrooms. In November 2020, voters passed Oregon Ballot Measure 109, making Oregon the first state to both decriminalize psilocybin and legalize it for therapeutic use. Ann Arbor, Michigan, and its surrounding county have decriminalized psilocybin mushrooms, so possession, sale and use are no longer prosecuted within the county. In 2021, the city councils of Somerville, Northampton, and Cambridge, Massachusetts, and Seattle, Washington, voted for decriminalization.
Sweden
The Riksdag added Psilocybe semilanceata to the Narcotic Drugs Punishments Act under Swedish schedule I ("substances, plant materials and fungi which normally do not have medical use") as of 1 October 1997, published by the Medical Products Agency (MPA) in regulation LVFS 1997:12, listed as Psilocybe semilanceata (toppslätskivling).
See also
List of Psilocybe species
Mushroom hunting
References
Cited texts
Entheogens
Fungi described in 1838
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of New Zealand
Fungi of North America
Fungi of South America
Fungi of Sweden
Fungi of Finland
semilanceata
Psychedelic tryptamine carriers
Psychoactive fungi
Taxa named by Elias Magnus Fries
Fungi of Iceland
Fungus species | Psilocybe semilanceata | Biology | 5,126 |
2,110,221 | https://en.wikipedia.org/wiki/Tie%20%28engineering%29 | A tie, strap, tie rod, eyebar, guy-wire, suspension cable, or wire rope is an example of a linear structural component designed to resist tension. It is the opposite of a strut or column, which is designed to resist compression. Ties may be made of any tension-resisting material.
Application in wood construction
In wood-frame construction ties are generally made of galvanized steel.
Wood framing ties generally have holes allowing them to be fastened to the wood structure by nails or screws. The number and type of nails are specific to the tie and its use. The manufacturer generally specifies information as to the connection method for each of their products. Among the most common wood framing ties used is the hurricane tie or seismic tie used in the framing of wooden structures where wind uplift or seismic overturning is a concern.
Hurricane tie
A hurricane tie (also known as hurricane clip or strip) is used to help make a structure (specifically wooden structures) more resistant to high winds (such as in hurricanes), resisting uplift, racking, overturning, and sliding.
Each of the crucial connections in a structure, that would otherwise fail under the pressures of high winds, have a corresponding type of tie, generally made of galvanized or stainless steel, and intended to resist hurricane-force and other strong winds.
"Hurricane clip" has two meanings in building construction:
A connecting tie that provides a continuous structural load transfer path from the top of a building to its foundation, helping to protect buildings from damage resulting from high wind. These devices are primarily used in areas affected by high winds including hurricanes but are generally suitable for any area that may be impacted by windstorm damage. They are also known as hurricane ties or strips;
A tie which is attached to roof tiles to keep them from blowing off a roof. These devices are also known as wind clips and hurricane side clips.
Seismic tie
Seismic ties are used to securely fix cabinets, bookcases, desks, appliances, machinery & equipment to walls and/or floors to constrain their movement during earthquakes.
Girder tiedown
Girder tiedowns are top-mount, face-mount, sloped/skewed, or variable-pitch hangers for dimensional lumber, engineered wood I-joists, structural composite lumber, and masonry walls. They give added strength to meet load requirements greater than wood-to-wood connections alone can carry.
Joist hanger or corner bracket
Joist hangers are used to prevent floor joists, which support the flooring systems in homes and buildings framed with lumber, from dropping and twisting and thereby creating an uneven walking surface, known as floor sagging. When laying a wooden subfloor, it is important to apply adhesive to the joists on which the subfloor will rest, to help prevent creaking, lateral movement, and separation of the joists and subfloor. Using screws instead of nails is also highly recommended to prevent creaking and other problems. The subfloor is not load-bearing in residential construction. Although steel joist hangers are recommended over a ledger for supporting floor joists, because of house settling and nail separation, they are not required by code in most municipalities. However, toe nailing and end nailing are nowhere near as effective as hangers for supporting flooring systems.
Twist strap
Twist straps provide a tension connection between two wood members. They resist uplift at the heel of a truss economically. When the strengthening is being done from the inside, the ideal connector to use is one that connects rafters or trusses directly to wall studs. This can only be done where the rafter or trusses are immediately above or immediately to the side of studs below. In that case a twist strap connector can be used.
Floor span connector
A floor span connector connects the wall studs of two adjacent floors in a light-frame building. The connector has a first attachment tab, a seat member, a diagonally slanted support leg, and a second attachment tab, all substantially planar. Connectors are intended to be used in pairs, with the paired connectors joined by an elongated tie member that pierces the sill plates of the intervening floor structure.
Angle tie
Sometimes referred to as an angle brace, the angle tie is used to prevent displacement of building elements due to thrust. It is a brace or tie across an interior angle of a wooden frame, forming the hypotenuse and securing the two side pieces together.
Z-clip
Similar to a French cleat, a Z-clip allows for the installation of wall panels without screwing into the front of the panels. The clips provide a secure mount for wall panels, partitions, frames, cabinets, and more. Once installed, the clips wedge together to lock panels in place; panels can be disengaged by lifting them free.
Plate
Rafter tie (and tie-beams)
Rafter ties are designed to tie together the bottoms of opposing rafters on a roof, to resist the outward thrust where the roof meets the house ceiling and walls. This helps keep walls from spreading due to the weight of the roof and anything on it, notably wet snow. In many or most homes, the ceiling joists also serve as the rafter ties. When the walls spread, the roof ridge will sag. A sagging ridge is one clue that the home may lack adequate rafter ties. Rafter ties form the bottom chord of a simple triangular roof truss. They resist the out-thrust of a triangle that is trying to flatten under the roof's own weight or snow load. They are placed in the bottom one-third of the roof height. Rafter ties are always required unless the roof has a structural (self-supporting) ridge or is built using engineered trusses. A lack of rafter ties is a serious structural issue in a conventionally framed roof.
A wooden beam serving this purpose is known as a tie-beam, and a roof incorporating tie-beams is known as a tie-beam roof.
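The magnitude of the out-thrust can be illustrated with elementary statics. A minimal Python sketch treating each rafter as a two-force member under a single ridge load, ignoring distributed loads and member weight; the function name and load value are illustrative. It shows why shallow-pitched roofs put much larger tension on their rafter ties:

```python
import math

# Horizontal thrust a rafter tie must resist for a symmetric gable
# carrying load W at the ridge: the vertical reaction W/2 at each wall
# comes with a horizontal component of (W/2) / tan(pitch).
def rafter_tie_tension(ridge_load_n, pitch_deg):
    return (ridge_load_n / 2.0) / math.tan(math.radians(pitch_deg))

for pitch in (45.0, 30.0, 15.0):
    print(f"{pitch:>4.0f} deg pitch -> {rafter_tie_tension(10_000, pitch):,.0f} N")
```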
See also
Framing (construction)
Timber framing
List of structural elements
Tie rod
References
Fasteners | Tie (engineering) | Engineering | 1,239 |
68,245,955 | https://en.wikipedia.org/wiki/Cantor%27s%20isomorphism%20theorem | In order theory and model theory, branches of mathematics, Cantor's isomorphism theorem states that every two countable dense unbounded linear orders are order-isomorphic. For instance, Minkowski's question-mark function produces an isomorphism (a one-to-one order-preserving correspondence) between the numerical ordering of the rational numbers and the numerical ordering of the dyadic rationals.
The theorem is named after Georg Cantor, who first published it in 1895, using it to characterize the (uncountable) ordering on the real numbers. It can be proved by a back-and-forth method that is also sometimes attributed to Cantor but was actually published later, by Felix Hausdorff. The same back-and-forth method also proves that countable dense unbounded orders are highly symmetric, and can be applied to other kinds of structures. However, Cantor's original proof only used the "going forth" half of this method. In terms of model theory, the isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical, meaning that it has only one countable model, up to logical equivalence.
One application of Cantor's isomorphism theorem involves temporal logic, a method for using logic to reason about time. In this application, the theorem implies that it is sufficient to use intervals of rational numbers to model intervals of time: using irrational numbers for this purpose will not lead to any increase in logical power.
Statement and examples
Cantor's isomorphism theorem is stated using the following concepts:
A linear order or total order is defined by a set of elements and a comparison operation that gives an ordering to each pair of distinct elements and obeys the transitive law. The familiar numeric orderings on the integers, rational numbers, and real numbers are all examples of linear orders.
Unboundedness means that the ordering does not contain a minimum or maximum element. This is different from the concept of a bounded set in a metric space. For instance, the open interval (0,1) is unbounded as an ordered set, even though it is bounded as a subset of the real numbers, because neither its infimum 0 nor its supremum 1 belong to the interval. The integers, rationals, and reals are also unbounded in this sense.
An ordering is dense when every pair of elements has another element between them. This is different from being a topologically dense set within the real numbers. The rational numbers and real numbers are dense in this sense, as the arithmetic mean of any two numbers belongs to the same set and lies between them, but the integers are not dense, because there is no other integer between any two consecutive integers.
The integers and rational numbers both form countable sets, but the real numbers do not, by a different result of Cantor, his proof that the real numbers are uncountable.
Two linear orders are order-isomorphic when there exists a one-to-one correspondence between them that preserves their ordering. For instance, the integers and the even numbers are order-isomorphic, under a bijection that multiplies each integer by two.
With these definitions in hand, Cantor's isomorphism theorem states that every two unbounded countable dense linear orders are order-isomorphic.
Within the rational numbers, certain subsets are also countable, unbounded, and dense. The rational numbers in the open unit interval are an example. Another example is the set of dyadic rational numbers, the numbers that can be expressed as a fraction with an integer numerator and a power of two as the denominator. By Cantor's isomorphism theorem, the dyadic rational numbers are order-isomorphic to the whole set of rational numbers. In this example, an explicit order isomorphism is provided by Minkowski's question-mark function. Another example of a countable unbounded dense linear order is given by the set of real algebraic numbers, the real roots of polynomials with integer coefficients. In this case, they are a superset of the rational numbers, but are again countable. It is also possible to apply the theorem to other linear orders whose elements are not defined as numbers. For instance, the binary strings that end in a 1, in their lexicographic order, form another isomorphic ordering.
Proofs
One proof of Cantor's isomorphism theorem, in some sources called "the standard proof", uses the back-and-forth method. This proof builds up an isomorphism between any two given orders, using a greedy algorithm, in an ordering given by a countable enumeration of the two orderings. In more detail, the proof maintains two order-isomorphic finite subsets $A$ and $B$ of the two given orders, initially empty. It repeatedly increases the sizes of $A$ and $B$ by adding a new element from one order, the first missing element in its enumeration, and matching it with an order-equivalent element of the other order, proven to exist using the density and lack of endpoints of the order. The two orderings switch roles at each step: the proof finds the first missing element of the first order, adds it to $A$, matches it with an order-equivalent element of the second order, and adds that element to $B$; then it finds the first missing element of the second order, adds it to $B$, matches it with an element of the first order, and adds it to $A$, etc. Every element of each ordering is eventually matched with an order-equivalent element of the other ordering, so the two orderings are isomorphic. A sketch of this construction appears below.
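The following is a minimal Python sketch of the greedy back-and-forth construction, matching two enumerations of the rationals against each other; the enumeration scheme, the eight-step cutoff, and the helper names (enumerate_rationals, find_match) are illustrative choices, not part of the theorem.

```python
from fractions import Fraction
from itertools import count

def enumerate_rationals():
    """Yield every rational exactly once (illustrative, not efficient)."""
    seen = set()
    for d in count(1):
        for n in range(-d * d, d * d + 1):
            q = Fraction(n, d)
            if q not in seen:
                seen.add(q)
                yield q

def find_match(x, pairs, side):
    """Pick an image for x respecting the order of the finite map so far."""
    lows = [p[1 - side] for p in pairs if p[side] < x]
    highs = [p[1 - side] for p in pairs if p[side] > x]
    if lows and highs:
        return (max(lows) + min(highs)) / 2   # density: a midpoint exists
    if lows:
        return max(lows) + 1                  # no upper bound
    if highs:
        return min(highs) - 1                 # no lower bound
    return Fraction(0)                        # first element maps to 0

pairs = []                        # finite partial isomorphism, grown greedily
first, second = enumerate_rationals(), enumerate_rationals()
for step in range(8):
    side = step % 2               # 0: take from first order, 1: from second
    x = next(first if side == 0 else second)
    if all(p[side] != x for p in pairs):
        y = find_match(x, pairs, side)
        pairs.append((x, y) if side == 0 else (y, x))
print(sorted(pairs))              # an order-preserving partial matching
```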
Although the back-and-forth method has also been attributed to Cantor, Cantor's original publication of this theorem in 1895–1897 used a different proof. In an investigation of the history of this theorem by logician Charles L. Silver, the earliest instance of the back-and-forth proof found by Silver was in a 1914 textbook by Felix Hausdorff.
Instead of building up order-isomorphic subsets $A$ and $B$ by going "back and forth" between the enumeration for the first order and the enumeration for the second order, Cantor's original proof only uses the "going forth" half of the back-and-forth method. It repeatedly augments the two finite sets $A$ and $B$ by adding to $A$ the first missing element of the first order's enumeration, and adding to $B$ the order-equivalent element that is first in the second order's enumeration. This naturally finds an equivalence between the first ordering and a subset of the second ordering, and Cantor then argues that the entire second ordering is included.
The back-and-forth proof has been formalized as a computer-verified proof using Coq, an interactive theorem prover. This formalization process led to a strengthened result: when two computably enumerable linear orders have a computable comparison predicate, and computable functions representing their density and unboundedness properties, then the isomorphism between them is also computable.
Model theory
One way of describing Cantor's isomorphism theorem uses the language of model theory. The first-order theory of unbounded dense linear orders consists of sentences in mathematical logic concerning variables that represent the elements of an order, with a binary relation used as the comparison operation of the ordering. Here, a sentence means a well-formed formula that has no free variables. These sentences include both axioms, formulating in logical terms the requirements of a dense linear order, and all other sentences that can be proven as logical consequences from those axioms. The axioms of this system can be expressed formally as first-order sentences, as sketched below.
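One standard way of writing these axioms in first-order logic is the following sketch; exact formulations vary slightly between sources:

```latex
% A standard axiomatization of unbounded dense linear orders (DLO):
\begin{align*}
&\forall x\, \neg(x < x) && \text{(irreflexivity)}\\
&\forall x\, \forall y\, \forall z\, \bigl(x < y \wedge y < z \rightarrow x < z\bigr) && \text{(transitivity)}\\
&\forall x\, \forall y\, \bigl(x < y \vee x = y \vee y < x\bigr) && \text{(totality)}\\
&\forall x\, \forall y\, \bigl(x < y \rightarrow \exists z\, (x < z \wedge z < y)\bigr) && \text{(density)}\\
&\forall x\, \exists y\, (y < x) \quad\text{and}\quad \forall x\, \exists y\, (x < y) && \text{(unboundedness)}
\end{align*}
```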
A model of this theory is any system of elements and a comparison relation that obeys all of the axioms; it is a countable model when the system of elements forms a countable set. For instance, the usual comparison relation on the rational numbers is a countable model of this theory. Cantor's isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical: it has only one countable model, up to logical equivalence. However, it is not categorical for higher cardinalities: for any higher cardinality, there are multiple inequivalent dense unbounded linear orders with the same cardinality.
A method of quantifier elimination in the first-order theory of unbounded dense linear orders can be used to prove that it is a complete theory. This means that every logical sentence in the language of this theory is either a theorem, that is, provable as a logical consequence of the axioms, or the negation of a theorem. This is closely related to being categorical (a sentence is a theorem if it is true of the unique countable model; see the Łoś–Vaught test), but there can exist multiple distinct models that have the same complete theory. In particular, both the ordering on the rational numbers and the ordering on the real numbers are models of the same theory, even though they are different models. Quantifier elimination can also be used in an algorithm for deciding whether a given sentence is a theorem.
Related results
The same back-and-forth method used to prove Cantor's isomorphism theorem also proves that countable dense linear orders are highly symmetric. Their symmetries are called order automorphisms, and consist of order-preserving bijections from the whole linear order to itself. By the back-and-forth method, every countable dense linear order has order automorphisms that map any finite set of points to any other finite set of points of the same size. This can also be proven directly for the ordering on the rationals, by constructing a piecewise linear order automorphism with breakpoints at the given points (as in the sketch below). This equivalence of all finite sets of points of equal size is summarized by saying that the group of symmetries of a countable dense linear order is "highly homogeneous". However, there is no order automorphism that maps an ordered pair of points to its reverse, so these symmetries do not form a 2-transitive group.
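A minimal Python sketch of such a piecewise linear automorphism of the rationals, exact over Q via the fractions module; the function name and the slope-1 extension beyond the outermost breakpoints are illustrative choices. Because the coefficients are rational, the map sends rationals to rationals and is an order-preserving bijection:

```python
from fractions import Fraction

def piecewise_linear_automorphism(src, dst):
    """Order automorphism of Q sending sorted breakpoints src[i] -> dst[i].

    Linear on each segment between breakpoints, slope 1 outside them.
    """
    assert sorted(src) == list(src) and sorted(dst) == list(dst)

    def f(x):
        x = Fraction(x)
        if x <= src[0]:
            return x + (dst[0] - src[0])       # translate below the range
        if x >= src[-1]:
            return x + (dst[-1] - src[-1])     # translate above the range
        for (x0, y0), (x1, y1) in zip(zip(src, dst), zip(src[1:], dst[1:])):
            if x0 <= x <= x1:                   # interpolate within a segment
                return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

    return f

f = piecewise_linear_automorphism([Fraction(0), Fraction(1)],
                                  [Fraction(-5), Fraction(7)])
print(f(Fraction(1, 2)))   # 1: the midpoint 1/2 maps to the midpoint 1
```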
The isomorphism theorem can be extended to colorings of an unbounded dense countable linear ordering, with a finite or countable set of colors, such that each color is dense, in the sense that a point of that color exists between any other two points of the whole ordering. The subsets of points with each color partition the order into a family of unbounded dense countable linear orderings. Any partition of an unbounded dense countable linear ordering into subsets, with the property that each subset is unbounded (within the whole set, not just in itself) and dense (again, within the whole set), comes from a coloring in this way. Each two colorings with the same number of colors are order-isomorphic, under any permutation of their colors. An example is the partition of the rational numbers into the dyadic rationals and their complement; these two sets are dense in each other, and their union has an order isomorphism to any other pair of unbounded linear orders that are countable and dense in each other. Unlike Cantor's isomorphism theorem, the proof needs the full back-and-forth argument, and not just the "going forth" half.
Cantor used the isomorphism theorem to characterize the ordering of the real numbers, an uncountable set. Unlike the rational numbers, the real numbers are Dedekind-complete, meaning that every subset of the reals that has a finite upper bound has a real least upper bound. They contain the rational numbers, which are dense in the real numbers. By applying the isomorphism theorem, Cantor proved that whenever a linear ordering has the same properties of being Dedekind-complete and containing a countable dense unbounded subset, it must be order-isomorphic to the real numbers. Suslin's problem asks whether orders having certain other properties of the order on the real numbers, including unboundedness, density, and completeness, must be order-isomorphic to the reals; the truth of this statement is independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC).
Although uncountable unbounded dense orderings may not be order-isomorphic, it follows from the back-and-forth method that any two such orderings are elementarily equivalent. Another consequence of Cantor's proof is that every finite or countable linear order can be embedded into the rationals, or into any unbounded dense ordering. Calling this a "well known" result of Cantor, Wacław Sierpiński proved an analogous result for higher cardinality: assuming the continuum hypothesis, there exists a linear ordering of cardinality $\aleph_1$ into which all other linear orderings of cardinality $\aleph_1$ can be embedded. Baumgartner's axiom, formulated by James Earl Baumgartner in 1973 to study the continuum hypothesis, concerns $\aleph_1$-dense sets of real numbers, unbounded sets with the property that every two elements are separated by exactly $\aleph_1$ other elements. It states that each two such sets are order-isomorphic, providing in this way another higher-cardinality analogue of Cantor's isomorphism theorem ($\aleph_1$ is defined as the cardinality of the set of all countable ordinals). Baumgartner's axiom is consistent with ZFC and the negation of the continuum hypothesis; it is implied by the proper forcing axiom, but independent of Martin's axiom.
In temporal logic, various formalizations of the concept of an interval of time can be shown to be equivalent to defining an interval by a pair of distinct elements of a dense unbounded linear order. This connection implies that these theories are also countably categorical, and can be uniquely modeled by intervals of rational numbers.
Sierpiński's theorem stating that any two countable metric spaces without isolated points are homeomorphic can be seen as a topological analogue of Cantor's isomorphism theorem, and can be proved using a similar back-and-forth argument.
References
Model theory
Order theory
Georg Cantor
Theorems in the foundations of mathematics | Cantor's isomorphism theorem | Mathematics | 2,805 |
32,751,862 | https://en.wikipedia.org/wiki/Collinder%20135 | Collinder 135, sometimes known as the Pi Puppis Cluster, is an open cluster in the constellation Puppis.
It consists of six stars brighter than 6th magnitude, and a widespread population of fainter stars. It lies in the southern celestial hemisphere near a rich star field. The main component is the star Pi Puppis, which gives the cluster its common name; it is an orange supergiant with a visual magnitude of 2.71. Two of the 5th magnitude stars are variables: NV Puppis is a Gamma Cassiopeiae variable, while NW Puppis is a Beta Cephei variable.
References
External links
Observing Cr 135
Open clusters
Puppis | Collinder 135 | Astronomy | 136 |
3,862,103 | https://en.wikipedia.org/wiki/Winbond | Winbond Electronics Corporation () is a Taiwan-based corporation founded in 1987. It produces semiconductors and several types of integrated circuits (ICs) including dynamic random-access memory, static random-access memory, serial flash, microcontrollers, and Super I/O chips.
Winbond is the largest brand-name IC supplier in Taiwan and one of the biggest suppliers of semiconductors worldwide.
History
Winbond was established in 1987 in Hsinchu Science Park in Taiwan. Its founder came from the Industrial Technology Research Institute. From 1987 to 1988, J.J. Pan and Partners designed and constructed a fabrication plant known as the IC Wafer Fab I Plant. This facility would produce 6-inch wafers. It was designed and constructed in 14 months. Later, from 1989 to 1992, J.J. Pan and Partners built a second fab for Winbond, called the IC Wafer Fab II Plant.
In 1992 Winbond joined the Precision RISC Organization and licensed HP's PA-RISC architecture to design and manufacture chips for X terminals and printers.
Winbond acquired affiliated chipset maker Symphony Laboratories, of San Jose, California, in October 1995.
Winbond was affected by power cuts caused by the 1999 Jiji earthquake forcing the company to pause manufacturing. By 2002 Winbond had 4,000 employees. In 2004 Winbond was said to have a "continuous-learning culture", having 1,200 training programs for its employees. In August 2004, Infineon announced a deal with Winbond to build a factory to make DRAM.
The computer IC, consumer electronics IC, and logic product foundry divisions of Winbond were spun off as Nuvoton Technology Corporation on 1 July 2008.
In 2010 Winbond was manufacturing DDR2 DRAM using technology licensed from Qimonda.
In 2019 Karamba Security partnered with Winbond to make secure embedded flash products. In 2023 Winbond joined the Universal Chiplet Interconnect Express Consortium.
See also
List of semiconductor fabrication plants
List of companies of Taiwan
References
1987 establishments in Taiwan
Computer companies of Taiwan
Companies based in Taichung
Computer hardware companies
Computer memory companies
Companies established in 1987
Electronics companies of Taiwan
Taiwanese brands
Companies listed on the Taiwan Stock Exchange
Semiconductor companies of Taiwan | Winbond | Technology | 463 |
51,147,390 | https://en.wikipedia.org/wiki/Kazhdan%E2%80%93Margulis%20theorem | In Lie theory, an area of mathematics, the Kazhdan–Margulis theorem is a statement asserting that a discrete subgroup in semisimple Lie groups cannot be too dense in the group. More precisely, in any such Lie group there is a uniform neighbourhood of the identity element such that every lattice in the group has a conjugate whose intersection with this neighbourhood contains only the identity. This result was proven in the 1960s by David Kazhdan and Grigory Margulis.
Statement and remarks
The formal statement of the Kazhdan–Margulis theorem is as follows.
Let G be a semisimple Lie group: there exists an open neighbourhood U of the identity e in G such that for any discrete subgroup Γ ⊂ G there is an element g ∈ G satisfying gΓg⁻¹ ∩ U = {e}.
Note that in general Lie groups this statement is far from being true; in particular, in a nilpotent Lie group, for any neighbourhood of the identity there exists a lattice in the group which is generated by its intersection with the neighbourhood: for example, in ℝ^n, the lattice εℤ^n satisfies this property for ε small enough.
Proof
The main technical result of Kazhdan–Margulis, which is interesting in its own right and from which the better-known statement above follows immediately, is the following.
Given a semisimple Lie group G without compact factors endowed with a norm |·|, there exist c > 0, a neighbourhood Ω of the identity e in G, and a compact subset E ⊂ G such that, for any discrete subgroup Γ ⊂ G, there exists a g ∈ E such that |gγg⁻¹| ≥ c·|γ| for all γ ∈ Γ.
The neighbourhood is obtained as a Zassenhaus neighbourhood of the identity in G: the theorem then follows by standard Lie-theoretic arguments.
There also exist other proofs. There is one proof which is more geometric in nature and which can give more information, and there is a third proof, relying on the notion of invariant random subgroups, which is considerably shorter.
Applications
Selberg's hypothesis
One of the motivations of Kazhdan–Margulis was to prove the following statement, known at the time as Selberg's hypothesis (recall that a lattice is called uniform if its quotient space is compact):
A lattice in a semisimple Lie group is non-uniform if and only if it contains a unipotent element.
This result follows from the more technical version of the Kazhdan–Margulis theorem and the fact that only unipotent elements can be conjugated arbitrarily close (for a given element) to the identity.
Volumes of locally symmetric spaces
A corollary of the theorem is that the locally symmetric spaces and orbifolds associated to lattices in a semisimple Lie group cannot have arbitrarily small volume (given a normalisation for the Haar measure).
For hyperbolic surfaces this is due to Siegel, and there is an explicit lower bound of π/21 for the smallest covolume of a quotient of the hyperbolic plane by a lattice in PSL2(ℝ) (see Hurwitz's automorphisms theorem). For hyperbolic three-manifolds the lattice of minimal volume is known and its covolume is about 0.0390. In higher dimensions the problem of finding the lattice of minimal volume is still open, though it has been solved when restricting to the subclass of arithmetic groups.
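The constant π/21 can be checked with the Gauss–Bonnet theorem for orbifolds, using the standard fact that the minimizer is the (2,3,7) triangle orbifold (a worked sketch, not part of the original text):

$$\mu(\Gamma \backslash \mathbb{H}^2) = -2\pi\,\chi_{\mathrm{orb}}, \qquad \chi_{\mathrm{orb}} = 2 - \left(1-\tfrac{1}{2}\right) - \left(1-\tfrac{1}{3}\right) - \left(1-\tfrac{1}{7}\right) = -\tfrac{1}{42},$$

$$\mu = \frac{2\pi}{42} = \frac{\pi}{21} \approx 0.1496.$$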
Wang's finiteness theorem
Together with local rigidity and finite generation of lattices the Kazhdan-Margulis theorem is an important ingredient in the proof of Wang's finiteness theorem.
If G is a simple Lie group not locally isomorphic to SL2(ℝ) or SL2(ℂ), with a fixed Haar measure, and v > 0, then there are only finitely many lattices in G of covolume less than v.
See also
Margulis lemma
Notes
References
Algebraic groups
Geometric group theory
Lie groups | Kazhdan–Margulis theorem | Physics,Mathematics | 753 |
2,898,453 | https://en.wikipedia.org/wiki/Calcium%20hypochlorite | Calcium hypochlorite is an inorganic compound with chemical formula Ca(ClO)2, also written as Ca(OCl)2. It is a white solid, although commercial samples appear yellow. It strongly smells of chlorine, owing to its slow decomposition in moist air. This compound is relatively stable as a solid and solution and has greater available chlorine than sodium hypochlorite. "Pure" samples have 99.2% active chlorine. Given common industrial purity, an active chlorine content of 65-70% is typical. It is the main active ingredient of commercial products called bleaching powder, used for water treatment and as a bleaching agent.
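The quoted 99.2% figure for pure material can be reproduced from the formula mass, since on acidification each Ca(ClO)2 releases two equivalents of Cl2 (a quick check, assuming standard atomic masses Ca 40.08, Cl 35.45, O 16.00):

$$\frac{2\,M(\mathrm{Cl_2})}{M(\mathrm{Ca(ClO)_2})} = \frac{2 \times 70.90}{142.98} \approx 0.992 = 99.2\%.$$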
History
Charles Tennant and Charles Macintosh developed an industrial process in the late 18th century for the manufacture of chloride of lime, patenting it in 1799. Tennant's process is essentially still used today, and became of military importance during World War I, because calcium hypochlorite was the active ingredient in trench disinfectant.
Uses
Sanitation
Calcium hypochlorite is commonly used to sanitize public swimming pools and disinfect drinking water. Generally the commercial substances are sold with a purity of 65% to 73% with other chemicals present, such as calcium chloride and calcium carbonate, resulting from the manufacturing process. In solution, calcium hypochlorite could be used as a general purpose sanitizer, but due to calcium residue (making the water harder), sodium hypochlorite (bleach) is usually preferred.
Organic chemistry
Calcium hypochlorite is a general oxidizing agent and therefore finds some use in organic chemistry. For instance the compound is used to cleave glycols, α-hydroxy carboxylic acids and keto acids to yield fragmented aldehydes or carboxylic acids. Calcium hypochlorite can also be used in the haloform reaction to manufacture chloroform.
Calcium hypochlorite can be used to oxidize thiol and sulfide byproducts in organic synthesis and thereby reduce their odour and make them safe to dispose of. The reagent used in organic chemistry is similar to the sanitizer at ~70% purity.
Production
Calcium hypochlorite is produced industrially by reaction of moist slaked calcium hydroxide with chlorine gas. The one-step reaction is shown below:

2 Cl2 + 2 Ca(OH)2 -> Ca(ClO)2 + CaCl2 + 2 H2O
Industrial setups allow for the reaction to be conducted in stages to give various compositions, each producing different ratios of calcium hypochlorite, unconverted lime, and calcium chloride. In one process, the chloride-rich first-stage water is discarded, while the solid precipitate is dissolved in a mixture of water and lye for another round of chlorination to reach the target purity. Commercial calcium hypochlorite consists of anhydrous Ca(ClO)2, dibasic calcium hypochlorite (also written as Ca(ClO)2·2Ca(OH)2), and dibasic calcium chloride (also written as CaCl2·2Ca(OH)2).
Reactions
Calcium hypochlorite reacts rapidly with acids, producing calcium chloride, chlorine gas, and water; for example, with hydrochloric acid:

Ca(ClO)2 + 4 HCl -> CaCl2 + 2 Cl2 + 2 H2O
Safety
It is a strong oxidizing agent, as it contains the hypochlorite ion, in which chlorine is in the +1 oxidation state (Cl+1).
Calcium hypochlorite should not be stored wet and hot, or near any acid, organic materials, or metals. The unhydrated form is safer to handle.
See also
Calcium hydroxychloride
Sodium hypochlorite
Winchlor
References
External links
Chemical Land
Antiseptics
Bleaches
Hypochlorites
Calcium compounds
Oxidizing agents
Household chemicals | Calcium hypochlorite | Chemistry | 755 |
57,418,849 | https://en.wikipedia.org/wiki/Probe%20tip | A probe tip is an instrument used in scanning probe microscopes (SPMs) to scan the surface of a sample and make nano-scale images of surfaces and structures. The probe tip is mounted on the end of a cantilever and can be as sharp as a single atom. In microscopy, probe tip geometry (length, width, shape, aspect ratio, and tip apex radius) and the composition (material properties) of both the tip and the surface being probed directly affect resolution and imaging quality. Tip size and shape are extremely important in monitoring and detecting interactions between surfaces. SPMs can precisely measure electrostatic forces, magnetic forces, chemical bonding, Van der Waals forces, and capillary forces. SPMs can also reveal the morphology and topography of a surface.
The use of probe-based tools began with the invention of scanning tunneling microscopy (STM) and atomic force microscopy (AFM), collectively called scanning probe microscopy (SPM), by Gerd Binnig and Heinrich Rohrer at the IBM Zurich research laboratory in 1982. These instruments opened a new era for probing the nano-scale world of individual atoms and molecules, as well as for studying surface science, due to their unprecedented capability to characterize the mechanical, chemical, magnetic, and optical functionalities of various samples at nanometer-scale resolution in a vacuum, ambient, or fluid environment.
The increasing demand for sub-nanometer probe tips is attributable to their robustness and versatility. Applications of sub-nanometer probe tips exist in the fields of nanolithography, nanoelectronics, biosensor, electrochemistry, semiconductor, micromachining and biological studies.
History and development
Increasingly sharp probe tips have been of interest to researchers for applications in the material, life, and biological sciences, as they can map surface structure and material properties at molecular or atomic dimensions. The history of the probe tip can be traced back to 1859 with a predecessor of the modern gramophone, called the phonautograph. During the later development of the gramophone, the hog's hair used in the phonautograph was replaced with a needle used to reproduce sound. In 1940, a pantograph was built utilizing a shielded probe and adjustable tip. A stylus was free moving, allowing it to slide vertically in contact with the paper. In 1948, a circuit was employed in the probe tip to measure peak voltage, creating what may be considered the first scanning tunneling microscope (STM). The fabrication of electrochemically etched sharp tungsten, copper, nickel and molybdenum tips was reported by Muller in 1937. A revolution in sharp tips then occurred, producing a variety of tips with different shapes, sizes, and aspect ratios. These were composed of tungsten wire, silicon, diamond and carbon nanotubes, combined with Si-based circuit technologies. This allowed the production of tips for numerous applications in the broad spectrum of nanotechnological fields.
Following the development of STM, atomic force microscopy (AFM) was developed by Gerd Binnig, Calvin F. Quate, and Christoph Gerber in 1986. Their instrument used a broken piece of diamond as the tip with a hand-cut gold foil cantilever. Focused ion and electron beam techniques for the fabrication of strong, stable, reproducible Si3N4 pyramidal tips with 1.0 μm length and 0.1 μm diameter were reported by Russell in 1992. Significant advancement also came through the introduction of micro-fabrication methods for the creation of precise conical or pyramidal silicon and silicon nitride tips. Numerous research experiments were conducted to explore fabrication of comparatively less expensive and more robust tungsten tips, focusing on a need to attain less than 50 nm radius of curvature.
A new era in the field of fabrication of probe tips was reached when the carbon nanotube, an approximately 1 nm cylindrical shell of graphene, was introduced. The use of single wall carbon nanotubes makes the tips more flexible and less vulnerable to breaking or crushing during imaging. Probe tips made from carbon nano-tubes can be used to obtain high-resolution images of both soft and weakly adsorbed biomolecules like DNA on surfaces with molecular resolution.
Multifunctional hydrogel nano-probe techniques also advanced tip fabrication and resulted in increased applicability for inorganic and biological samples in both air and liquid. The biggest advantage of this mechanical method is that the tip can be made in different shapes, such as hemispherical, embedded spherical, pyramidal, and distorted pyramidal, with diameters ranging from 10 nm to 1000 nm. This covers applications including topography or functional imaging, force spectroscopy on soft matter, and biological, chemical and physical sensors. Table 1 summarizes various methods for fabricating probe tips, and the associated materials and applications.
Tunneling current and force measurement principle
The tip itself does not image by any single working principle; depending on the instrumentation, mode of application, and the nature of the sample under investigation, the probe tip may follow different principles to image the surface of the sample. For example, when a tip is integrated with STM, it measures the tunneling current that arises from the interaction between the sample and the tip. In AFM, the short-ranged force-induced deflection of the cantilever is measured as the tip raster-scans across the surface. A conductive tip is essential for STM instrumentation, whereas AFM can use either a conductive or a non-conductive probe tip. Although probe tips are used in various techniques with different principles, their use in STM and AFM is discussed in detail below.
Conductive probe tip
As the name implies, STM utilizes the principle of tunneling charge transfer from tip to surface or vice versa, thereby recording the current response. This concept originates from the particle-in-a-box problem: if the potential well confining a particle is finite, the electron may be found outside of the potential well, in a classically forbidden region. This phenomenon is called tunneling.
The expression derived from the Schrödinger equation for the transmission (charge transfer) probability through a barrier of height U and width d is approximately:

T ≈ exp(−2κd), with κ = √(2m(U − E)) / ħ

where
ħ is the reduced Planck constant
m is the electron mass
E is the energy of the tunneling electron
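To make the exponential distance dependence concrete, here is a minimal numeric sketch (the 4.5 eV barrier and the gap values are typical assumptions, not taken from the text):

```python
import math

HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electronvolt

def transmission(barrier_ev, width_m):
    """WKB-style transmission probability T ~ exp(-2*kappa*d) for an
    electron tunneling through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * width_m)

# Typical metal work function ~4.5 eV; tip-sample gaps of a few angstroms
for d_angstrom in (3.0, 4.0, 5.0):
    t = transmission(4.5, d_angstrom * 1e-10)
    print(f"d = {d_angstrom:.1f} A  ->  T = {t:.2e}")
# Each additional angstrom of gap cuts T (and hence the tunneling
# current) by roughly an order of magnitude, which is what gives STM
# its extreme vertical sensitivity.
```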
Non-conductive probe tip
Non-conductive nanoscale tips are widely used for AFM measurements. For a non-conducting tip, surface forces acting on the tip/cantilever are responsible for the deflection or attraction of the tip. These attractive or repulsive forces are used to probe surface topology, chemical specifications, and magnetic and electronic properties. The distance-dependent forces between the substrate surface and the tip are responsible for imaging in AFM. These interactions include van der Waals forces, capillary forces, electrostatic forces, Casimir forces, and solvation forces. One unique repulsive force is the Pauli exclusion repulsive force, which is responsible for single-atom imaging, as reported in the references and shown in Figures 10 and 11 (contact region in Fig. 1).
Fabrication methods
Tip fabrication techniques fall into two broad classifications, mechanical and physicochemical. In the early stage of the development of probe tips, mechanical procedures were popular because of the ease of fabrication.
Mechanical methods
Reported mechanical methods for fabricating tips include cutting, grinding, and pulling; an example would be cutting a wire at certain angles with a razor blade, wire cutter, or scissors. Another mechanical method for tip preparation is fragmentation of bulk pieces into small pointy pieces. Grinding a metal wire or rod into a sharp tip was also used. These mechanical procedures usually leave rugged surfaces, with many tiny asperities protruding from the apex, which can give atomic resolution on flat surfaces. However, the irregular shape and large macroscopic radius of curvature result in poor reproducibility and decreased stability, especially for probing rough surfaces. Another main disadvantage of making probes by this method is that it creates many mini-tips, which produce multiple signals and yield imaging errors. Cutting, grinding and pulling procedures can only be adapted for metallic tips like W, Ag, Pt, Ir, Pt-Ir and gold. Non-metallic tips cannot be fabricated by these methods.
In contrast, a sophisticated mechanical method for tip fabrication is based on the hydro-gel method. This method is based on a bottom-up strategy to make probe tips by a molecular self-assembly process. A cantilever is formed in a mould by curing the pre-polymer solution, then it is brought into contact with the mould of the tip which also contains the pre-polymer solution. The polymer is cured with ultraviolet light which helps to provide a firm attachment of the cantilever to the probe. This fabrication method is shown in Fig. 2.
Physicochemical procedures
Physicochemical procedures are the fabrication methods of choice; they yield extremely sharp and symmetric tips, with better reproducibility than mechanically fabricated tips. Among physicochemical methods, electrochemical etching is one of the most popular. Etching is a procedure of two or more steps, in which "zone electropolishing" is the second step, further sharpening the tip in a very controlled manner. Other physicochemical methods include chemical vapor deposition and electron beam deposition onto pre-existing tips. Tips can also be fabricated by field ion microscopy and ion milling. In field ion microscopy, consecutive field evaporation of single atoms yields a specific atomic configuration at the probe tip, which gives very high resolution.
Fabrication through etching
Electrochemical etching is one of the most widely accepted metallic probe tip fabrication methods. Three commonly used electrochemical etching methods for tungsten tip fabrication are the single lamella drop-off method, the double lamella drop-off method, and the submerged method. Various cone-shaped tips can be fabricated by this method through minor changes in the experimental setup. A DC potential is applied between the tip and a metallic electrode (usually W wire) immersed in solution (Figure 3 a-c); basic solutions (2 M KOH or 2 M NaOH) are usually used, with electrochemical reactions occurring at the cathode and anode. The overall etching process is as follows:
Anode:
W (s) + 8OH- -> WO4^2- + 4H2O + 6e- (E = 1.05 V)
Cathode:
6H2O + 6e- -> 3H2 (g) + 6OH- (E = -2.48 V)
Overall:
W (s) + 2OH- + 2H2O (l) -> WO4^2- + 3H2 (g) (E = -1.43 V)
Here, all the potentials are reported vs. SHE.
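As a quick consistency check of the quoted values (half-reaction potentials add for the overall cell):

$$E_{\mathrm{cell}} = E_{\mathrm{anode}} + E_{\mathrm{cathode}} = 1.05\ \mathrm{V} + (-2.48\ \mathrm{V}) = -1.43\ \mathrm{V},$$

with the negative sign indicating that the applied DC bias is needed to drive the etch, as described above.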
The schematics of the fabrication method of probe tip production through the electrochemical etching method is shown in Fig. 3.
In the electrochemical etching process, W is etched at the liquid, solid, and air interface; this is due to surface tension, as shown in Fig. 3. Etching is called static if the W wire is kept stationary. Once the tip is etched, the lower part falls due to the lower tensile strength than the weight of the lower part of the wire. The irregular shape is produced by the shifting of the meniscus. However, slow etching rates can produce regular tips when the current flows slowly through the electrochemical cells. Dynamic etching involves slowly pulling up the wire from the solution, or sometimes the wire is moved up and down (oscillating wire) producing smooth tips.
Submerged method
In this method, a metal wire is vertically etched, reducing the diameter from 0.25 mm to ~20 nm. A schematic diagram of probe tip fabrication by the submerged electrochemical etching method is illustrated in Fig. 4. These tips can be used for high-quality STM images.
Lamella method
In the double lamella method, the lower part of the metal is etched away, and the upper part of the tip is not etched further. Further etching of the upper part of the wire is prevented by covering it with a polymer coating. This method is usually limited to laboratory fabrication. The double lamella method schematic is shown in Fig. 5.
Single atom tip preparation
Transition metals like Cu, Au and Ag adsorb single molecules linearly on their surface due to weak van der Waals forces. This linear projection of single molecules allows interactions of the terminal atom of the tip with the atoms of the substrate, resulting in the Pauli repulsion used for single molecule or atom mapping studies. Gaseous deposition on the tip is carried out in an ultrahigh vacuum (5 x 10−8 mbar) chamber at low temperature (10 K). Depositions of Xe, Kr, NO, CH4 or CO on the tip have been successfully prepared and used for imaging studies. However, these tip preparations rely on the attachment of single atoms or molecules to the tip, and the resulting atomic structure of the tip is not known exactly. Attaching single molecules to metal surfaces is very tedious and requires great skill; as such, this method is not widely used.
Chemical vapor deposition (CVD)
Sharp tips used in SPM are fragile and prone to wear under high working loads. Diamond is considered the best option to address this issue. Diamond tips for SPMs can be fabricated by fracturing, grinding and polishing bulk diamond, which results in a considerable loss of diamond. An alternative is depositing a thin diamond film on silicon tips by CVD, in which diamond is deposited directly on silicon or W cantilevers. In this method, the flow of methane and hydrogen gas is controlled to maintain an internal pressure of 40 Torr inside the chamber. CH4 and H2 dissociate at 2100 °C with the help of a Ta filament, and nucleation sites are created on the tip of the cantilever. Once CVD is complete, the flow of CH4 is stopped and the chamber is cooled under the flow of H2. A schematic diagram of a CVD setup used for diamond tip fabrication for AFM applications is shown in Fig. 6.
Reactive ion etching (RIE) fabrication
A groove or structure is made on a substrate to form a template. The desired material is then deposited in that template. Once the tip is formed, the template is etched off, leaving the tip and cantilever. Fig. 7 illustrates diamond tip fabrication on silicon wafers using this method.
Focused ion beam (FIB) milling
FIB milling is a sharpening method for probe tips in SPM. A blunt tip is first fabricated by other etching methods, such as CVD, or the use of a pyramid mold for pyramidal tips. This tip is then sharpened by FIB milling as shown in Fig. 8. The diameter of the focused ion beam, which directly affects the tip's final diameter, is controlled through a programmable aperture.
Glue
This method is used to attach carbon nanotubes to a cantilever or blunt tip. A strong adhesive (such as soft acrylic glue) is used to bind the CNT to the silicon cantilever. CNTs are robust and stiff, increase the durability of probe tips, and can be used in both contact and tapping mode.
Cleaning procedures
Electrochemically etched tips are usually covered with contaminants on their surfaces which cannot be removed simply by rinsing in water, acetone or ethanol. Some oxide layers on metallic tips, especially on tungsten, need to be removed by post-fabrication treatment.
Annealing
To clean sharp W tips, it is highly desirable to remove the contaminant and the oxide layer. In this method, the tip is heated in a UHV chamber at elevated temperature, which desorbs the contaminant layer. The reaction details are shown below.
2WO3 + W → 3WO2 ↑
WO2 → W (sublimation at 1075K)
At elevated temperature, trioxides of W are converted to WO2, which sublimates around 1075 K, leaving a clean metallic W surface behind. An additional advantage of annealing is the healing of crystallographic defects produced by fabrication; the process also smoothens the tip surface.
HF chemical cleaning
In the HF cleaning method, a freshly prepared tip is dipped in 15% hydrofluoric acid for 10 to 30 seconds, which dissolves the oxides of W.
Ion milling
In this method, argon ions are directed at the tip surface to remove the contaminant layer by sputtering. The tip is rotated in a flux of argon ions at a certain angle, in a way that allows the beam to target the apex. The bombardment of ions at the tip depletes the contaminants and also results in a reduction of the radius of the tip. The bombardment time needs to be finely tuned with respect to the shape of the tip. Sometimes, short annealing is required after ion milling.
Self-sputtering
This method is very similar to ion milling, but in this procedure, the UHV chamber is filled with neon at a pressure of 10−4 mbar. When a negative voltage is applied on the tip, a strong electric field (produced by tip under negative potential) will ionize the neon gas, and these positively charged ions are accelerated back to the tip, where they cause sputtering. The sputtering removes contaminants and some atoms from the tip which, like ion milling, reduces the apex radius. By changing the field strength, one can tune the radius of the tip to 20 nm.
Coating
The surface of silicon-based tips cannot be easily controlled because they usually carry silanol groups. The Si surface is hydrophilic and can be contaminated easily by the environment. Another disadvantage of Si tips is the wear of the tip. It is important to coat the Si tip to prevent tip deterioration, and the coating may also enhance image quality. To coat a tip, an adhesion layer is first deposited (usually a thin chromium or titanium layer, about 5 nm), and then gold is deposited by vapor deposition (40-100 nm or less). Sometimes, the coating layer reduces the tunnelling-current detection capability of probe tips.
Characterization
The most important aspect of a probe tip is imaging surfaces efficiently at nanometre dimensions. Some concerns involving the credibility of the imaging or measurement of the sample arise when the shape of the tip is not determined accurately. For example, when an unknown tip is used to measure a linewidth pattern or another high-aspect-ratio feature of a surface, some confusion may remain when determining the contributions of the tip and of the sample to the acquired image. Consequently, it is important to fully and accurately characterize the tips. Probe tips can be characterized for their shape, size, sharpness, bluntness, aspect ratio, radius of curvature, geometry and composition using many advanced instrumental techniques, for example, electron field emission measurement, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and scanning tunnelling spectroscopy, as well as the more easily accessible optical microscope. In some cases, optical microscopy cannot provide exact measurements for small tips at the nanoscale, due to the resolution limitation of optical microscopy.
Electron field emission current measurement
In the electron field emission current measurement method, a high voltage is applied between the tip and another electrode, and the field emission current is then measured employing Fowler-Nordheim curves. A large field-emission current may indicate that the tip is sharp, and a low field-emission current indicates that the tip is blunt, molten or mechanically damaged. A minimum voltage is essential to facilitate the release of electrons from the surface of the tip, which in turn is used indirectly to obtain the tip curvature. Although this method has several advantages, a disadvantage is that the high electric field required for producing a strong electric force can melt the apex of the tip, or might change the crystallographic nature of the tip.
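A sketch of how such data are read (a simplified Fowler-Nordheim form; the prefactors, the 4.5 eV work function and the field-enhancement model field = V/(5r) are assumptions for illustration, not from the text). In Fowler-Nordheim coordinates, ln(I/V^2) versus 1/V is a straight line whose slope scales with the tip radius, so a blunter tip gives a steeper slope:

```python
import numpy as np

def fn_current(voltage, radius_m, work_fn_ev=4.5, a=1e-6, b=6.83e9):
    """Simplified Fowler-Nordheim emission current.
    The local field at the apex is modeled as F = V / (k * r), k ~ 5."""
    field = voltage / (5.0 * radius_m)                    # V/m
    return a * field**2 * np.exp(-b * work_fn_ev**1.5 / field)

voltages = np.linspace(200.0, 1000.0, 5)
for r in (10e-9, 50e-9):                                  # sharp vs blunt tip
    current = fn_current(voltages, r)
    slope = np.polyfit(1.0 / voltages, np.log(current / voltages**2), 1)[0]
    print(f"r = {r * 1e9:.0f} nm -> FN-plot slope = {slope:.3e}")
# The blunter tip gives the steeper (more negative) slope, which is how
# field-emission data are used to infer the tip radius.
```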
Scanning electron microscopy and transmission electron microscopy
The size and shape of the tip can be obtained by scanning electron microscopy (SEM) and transmission electron microscopy (TEM) measurements. In addition, TEM images are helpful to detect any layer of insulating material on the surface of the tip, as well as to estimate the thickness of the layer. These oxides form gradually on the surface of the tip soon after fabrication, due to oxidation of the metallic tip by the O2 present in the surrounding atmosphere. SEM has a resolution limit of below 4 nm, so TEM may be needed, which can in principle and in practice resolve even single atoms. Tip grains down to 1-3 nm, thin polycrystalline oxides, and carbon or graphite layers at the tip apex are routinely measured using TEM. The orientation of the tip crystal, which is the angle between the tip plane in the single crystal and the tip normal, can also be estimated.
Optical microscopy
In the past, optical microscopes were the only method used to investigate whether the tip is bent, through imaging at the microscale. This is because the resolution limit of an optical microscope is about 200 nm. Imaging software, including ImageJ, allows determination of the curvature and aspect ratio of the tip. One drawback of this method is that the rendered image of the tip carries uncertainty at nanoscale dimensions. This problem can be resolved by taking images of the tip multiple times, followed by combining them into one image with a confocal microscope, with some fluorescent material coating the tip. It is also a time-consuming process, due to the necessity of monitoring the wear, damage or degradation of the tip by collision with the surface after each scan.
Scanning tunneling spectroscopy
Scanning tunneling spectroscopy (STS) is the spectroscopic form of STM. Spectroscopic current-voltage data are obtained to analyze the existence of any oxides or impurities on the tip, by monitoring the linearity of the curve, which for a clean tip represents a metallic tunnel junction. For an oxidized or impure tip, the curve is non-linear and has a gap-like shape around zero bias voltage, whereas the opposite is observed for a sharp, pure, un-oxidized tip.
Auger electron spectroscopy, X-ray photoelectron spectroscopy
In Auger electron spectroscopy (AES), any oxides present on the tip surface are sputtered away during depth analysis with an argon ion beam generated by a differentially pumped ion gun, followed by comparing the sputtering rate of the oxide with experimental sputtering yields. These Auger measurements may estimate the nature of the oxides arising from surface contamination. The composition can also be revealed and, in some cases, the thickness of the oxide layer, down to 1-3 nm, can be estimated. X-ray photoelectron spectroscopy performs a similar characterization of the chemical and surface composition, providing information on the binding energy of the surface elements.
Overall, the aforementioned characterization methods for tips can be categorized into three major classes. They are as follows:
Imaging the tip with a microscope: an image of the tip is taken with a microscopy technique other than scanning probe microscopy (SPM) itself, although images taken with SPM techniques, e.g. scanning tunnelling microscopy (STM) and atomic force microscopy (AFM), have also been reported.
Using a known tip characterizer: the shape of the tip is deduced by imaging a sample of known geometry, known as a tip characterizer.
The blind method: a tip characterizer of either known or unknown geometry is used.
Applications
Probe tips have a wide variety of applications in different fields of science and technology. One of the major areas where probe tips are used is SPM, i.e., STM and AFM. For example, carbon nanotube tips in conjunction with AFM provide an excellent tool for surface characterization in the nanometer realm. CNT tips are also used in tapping-mode scanning force microscopy (SFM), a technique in which the tip, mounted on a cantilever driven near its resonant frequency, taps the surface. CNT probe tips fabricated using the CVD technique can be used for imaging of biological macromolecules and semiconductor and chemical structures. For example, it is possible to obtain an intermittent-contact AFM image of IgM macromolecules with excellent resolution using a single CNT tip. Individual CNT tips can be used for high-resolution imaging of protein molecules.
In another application, multiwall carbon nanotube (MWCNT) and single wall carbon nanotube (SWCNT) tips were used to image amyloid β (1-40) derived protofibrils and fibrils by tapping-mode AFM. Functionalized probes can be used in chemical force microscopy (CFM) to measure intermolecular forces and map chemical functionality. Functionalized SWCNT probes can be used for chemically sensitive imaging with high lateral resolution and to study binding energies in chemical and biological systems. Probe tips that have been functionalized with either hydrophobic or hydrophilic molecules can be used to measure the adhesive interaction between hydrophobic-hydrophobic, hydrophobic-hydrophilic, and hydrophilic-hydrophilic molecules. From these adhesive interactions, the friction image of a patterned sample surface can be obtained. Probe tips used in force microscopy can provide imaging of the structure and dynamics of adsorbates at the nanometer scale. Self-assembled functionalized organic thiols on the surface of Au-coated Si3N4 probe tips have been used to study the interaction between molecular groups. Again, carbon nanotube probe tips in conjunction with AFM can be used for probing crevices that occur in microelectronic circuits with improved lateral resolution. Functionality-modified probe tips have been used to measure the binding force between single protein-ligand pairs. Probe tips have been used in tapping mode to provide information about the elastic properties of materials. Probe tips are also used in mass spectrometry: enzymatically active probe tips have been used for the enzymatic degradation of analytes, and they have also been used as devices to introduce samples into the mass spectrometer. For example, trypsin-activated gold (Au/trypsin) probe tips can be used for the peptide mapping of hen egg lysozyme.
Atomically sharp probe tips can be used for imaging a single atom in a molecule. An example of visualizing single atoms in a water cluster can be seen in Fig. 10. By visualizing single atoms in molecules present on a surface, scientists can determine bond lengths, bond orders and discrepancies, if any, in conjugation, determinations that were previously thought to be impossible in experimental work. Fig. 9 shows the experimentally determined bond orders in a polyaromatic compound, a measurement that was thought to be very hard in the past.
References
Scanning probe microscopy
Scientific instruments | Probe tip | Chemistry,Materials_science,Technology,Engineering | 5,487 |
3,846,517 | https://en.wikipedia.org/wiki/NGC%207380 | NGC 7380 is a young open cluster of stars in the northern circumpolar constellation of Cepheus, discovered by Caroline Herschel in 1787. The surrounding emission nebulosity is known colloquially as the Wizard Nebula, which spans an angle of . German-born astronomer William Herschel included his sister's discovery in his catalog, and labelled it H VIII.77. The nebula is known as S 142 in the 1959 Sharpless catalog (Sh2-142). It is extremely difficult to observe visually, usually requiring very dark skies and an O-III filter. The NGC 7380 complex is located at a distance of approximately from the Sun, in the Perseus Arm of the Milky Way.
The cluster spans ~ with an elongated shape and an extended tail. Age estimates range from 4 to 11.9 million years. At the center of the cluster lies DH Cephei, a close, double-lined spectroscopic binary system consisting of two massive O-type stars. This pair are the primary ionizing source for the surrounding H II region, and are driving out the surrounding gas and dust while triggering star formation in the neighboring region. Of the variable stars that have been identified in the cluster, 14 have been identified as pre-main sequence stars while 17 are main sequence stars that are primarily B-type variables.
Gallery
References
External links
South Common Observatory - Images of the Wizard Nebula
NASA APOD mentioning the Wizard Nebula
SEDS – NGC 7380
7380
Open clusters
H II regions
Cepheus (constellation)
Astronomical objects discovered in 1787 | NGC 7380 | Astronomy | 314 |
17,002,524 | https://en.wikipedia.org/wiki/Temperature-responsive%20polymer | Temperature-responsive polymers or thermoresponsive polymers are polymers that exhibit drastic and discontinuous changes in their physical properties with temperature. The term is commonly used when the property concerned is solubility in a given solvent, but it may also be used when other properties are affected. Thermoresponsive polymers belong to the class of stimuli-responsive materials, in contrast to temperature-sensitive (for short, thermosensitive) materials, which change their properties continuously with environmental conditions.
In a stricter sense, thermoresponsive polymers display a miscibility gap in their temperature-composition diagram. Depending on whether the miscibility gap is found at high or low temperatures, either an upper critical solution temperature (UCST) or a lower critical solution temperature (LCST) exists.
Research mainly focuses on polymers that show thermoresponsivity in aqueous solution. Promising areas of application are tissue engineering, liquid chromatography, drug delivery and bioseparation. Only a few commercial applications exist, for example, cell culture plates coated with an LCST-polymer.
History
The theory of thermoresponsive polymer (similarly, microgels) begins in the 1940s with work from Flory and Huggins who both independently produced similar theoretical expectations for polymer in solution with varying temperature.
The effects of external stimuli on particular polymers were investigated in the 1960s by Heskins and Guillet. They established 32 °C as the lower critical solution temperature (LCST) for poly(N-isopropylacrylamide).
Coil-globule transition
Thermoresponsive polymer chains in solution adopt an expanded coil conformation. At the phase separation temperature they collapse to form compact globuli. This process can be observed directly by methods of static and dynamic light scattering. The drop in viscosity can be indirectly observed. When mechanisms which reduce surface tension are absent, the globules aggregate, subsequently causing turbidity and the formation of visible particles.
Phase diagrams of thermoresponsive polymers
The phase separation temperature (and hence, the cloud point) is dependent on polymer concentration. Therefore, temperature-composition diagrams are used to display thermoresponsive behavior over a wide range of concentrations. Phases separate into a polymer-poor and a polymer-rich phase. In strictly binary mixtures the composition of the coexisting phases can be determined by drawing tie-lines. However, since polymers display a molar mass distribution this straightforward approach may be insufficient.
During the process of phase separation the polymer-rich phase can vitrify before equilibrium is reached. This depends on the glass transition temperature for each individual composition. It is convenient to add the glass transition curve to the phase diagram, although it is not a true equilibrium curve. The intersection of the glass transition curve with the cloud point curve is called the Berghmans point. In the case of UCST polymers, above the Berghmans point the phases separate into two liquid phases, below this point into a liquid polymer-poor phase and a vitrified polymer-rich phase. For LCST polymers the inverse behavior is observed.
Thermodynamics
Polymers dissolve in a solvent when the Gibbs energy of the system decreases, i.e., the change of Gibbs energy (ΔG) is negative. From the known Legendre transformation of the Gibbs–Helmholtz equation it follows that ΔG is determined by the enthalpy of mixing (ΔH) and entropy of mixing (ΔS): ΔG = ΔH − TΔS.
Without interactions between the compounds there would be no enthalpy of mixing and the entropy of mixing would be ideal. The ideal entropy of mixing of multiple pure compounds is always positive (the term −T·ΔS is negative) and ΔG would be negative for all compositions, causing complete miscibility. Therefore, the fact that miscibility gaps are observed can only be explained by interaction. In the case of polymer solutions, polymer-polymer, solvent-solvent and polymer-solvent interactions have to be taken into account. A model for the phenomenological description of polymer phase diagrams was developed by Flory and Huggins (see Flory–Huggins solution theory). The resulting equation for the change of Gibbs energy (per lattice site) consists of a term for the entropy of mixing for polymers and an interaction parameter that describes the sum of all interactions:

ΔGmix/RT = (φ1/m1)·ln φ1 + (φ2/m2)·ln φ2 + χ·φ1·φ2

where
R = universal gas constant
m = number of occupied lattice sites per molecule (for polymer solutions m1 is approximately equal to the degree of polymerization and m2=1)
φ = volume fraction of the polymer and the solvent, respectively
χ = interaction parameter
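A minimal numeric sketch of what this equation implies (parameter values are illustrative assumptions only): the mixture is locally unstable, and hence shows a miscibility gap, wherever the curvature of ΔGmix with respect to composition turns negative.

```python
import numpy as np

def d2g_mix(phi, m1=100, m2=1, chi=0.5):
    """Second derivative in phi of the Flory-Huggins mixing free energy
    per lattice site (units of RT): 1/(m1*phi) + 1/(m2*(1-phi)) - 2*chi."""
    return 1.0 / (m1 * phi) + 1.0 / (m2 * (1.0 - phi)) - 2.0 * chi

phi = np.linspace(1e-3, 1.0 - 1e-3, 999)
for chi in (0.3, 0.7):
    gap = bool((d2g_mix(phi, chi=chi) < 0).any())
    print(f"chi = {chi}: miscibility gap -> {gap}")
# For m1 = 100 the critical value is chi_c = (1 + 1/sqrt(m1))**2 / 2 ~ 0.605;
# once the temperature-dependent chi(T) crosses it, a miscibility gap opens
# (on cooling for UCST systems, on heating for LCST systems).
```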
A consequence of the Flory-Huggins theory is, for instance, that the UCST (if it exists) increases and shifts into the solvent-rich region when the molar mass of the polymer increases. Whether a polymer shows LCST and/or UCST behavior can be derived from the temperature-dependence of the interaction parameter (see figure). The interaction parameter not only comprises enthalpic contributions but also the non-ideal entropy of mixing, which again consists of many individual contributions (e.g., the strong hydrophobic effect in aqueous solutions). For these reasons, classical Flory-Huggins theory cannot provide much insight into the molecular origin of miscibility gaps.
Applications
Bioseparation
Thermoresponsive polymers can be functionalized with moieties that bind to specific biomolecules. The polymer-biomolecule conjugate can be precipitated from solution by a small change of temperature. Isolation may be achieved by filtration or centrifugation.
Thermoresponsive surfaces
Tissue engineering
For some polymers it was demonstrated that thermoresponsive behavior can be transferred to surfaces. The surface is either coated with a polymer film or the polymer chains are bound covalently to the surface.
This provides a way to control the wetting properties of a surface by small temperature changes. The described behavior can be exploited in tissue engineering since the adhesion of cells is strongly dependent on the hydrophilicity/hydrophobicity. This way, it is possible to detach cells from a cell culture dish by only small changes in temperature, without the need to additionally use enzymes (see figure). Respective commercial products are already available.
Chromatography
Thermoresponsive polymers can be used as the stationary phase in liquid chromatography. Here, the polarity of the stationary phase can be varied by temperature changes, altering the power of separation without changing the column or solvent composition. Thermally related benefits of gas chromatography can now be applied to classes of compounds that are restricted to liquid chromatography due to their thermolability. In place of solvent gradient elution, thermoresponsive polymers allow the use of temperature gradients under purely aqueous isocratic conditions. The versatility of the system is controlled not only by changing temperature, but also by adding modifying moieties that allow for a choice of enhanced hydrophobic interaction, or by introducing the prospect of electrostatic interaction. These developments have already brought major improvements to the fields of hydrophobic interaction chromatography, size exclusion chromatography, ion exchange chromatography, and affinity chromatography separations, as well as pseudo-solid phase extractions ("pseudo" because of phase transitions).
Thermoresponsive gels
Covalently linked gels
Three-dimensional covalently linked polymer networks are insoluble in all solvents, they merely swell in good solvents. Thermoresponsive polymer gels show a discontinuous change of the degree of swelling with temperature. At the volume phase transition temperature (VPTT) the degree of swelling changes drastically. Researchers try to exploit this behavior for temperature-induced drug delivery. In the swollen state, previously incorporated drugs are released easily by diffusion. More sophisticated "catch and release" techniques have been elaborated in combination with lithography and molecular imprinting.
Physical gels
In physical gels, unlike covalently linked gels, the polymer chains are not covalently linked together. This means that the gel can re-dissolve in a good solvent under some conditions. Thermoresponsive physical gels, sometimes also called thermoresponsive injectable gels, have been used in tissue engineering. This involves mixing the thermoresponsive polymer in solution with the cells at room temperature and then injecting the solution into the body. Due to the temperature increase (to body temperature) the polymer forms a physical gel, within which the cells are encapsulated. Tailoring the temperature at which the polymer solution gels can be challenging, because it depends on many factors, like the polymer composition, architecture, as well as the molar mass.
Thermoreversible materials
Some thermoreversible gels are used in biomedicine. For instance, hydrogels made of proteins are used as scaffolds in knee replacement. In baking, thermoreversible glazes such as pectin are prized for their ability to set and then reset after melting, and are used in nappage and other processes to ensure a smooth final surface for a presented dish. In manufacturing, thermoplastic elastomers can be set into a shape and then reset to their original shape through thermal reversibility, unlike one-way thermoset elastomers.
Characterization of thermoresponsive polymer solutions
Cloud point
Experimentally, the phase separation can be followed by turbidimetry. There is no universal approach for determining the cloud point suitable for all systems. It is often defined as the temperature at the onset of cloudiness, the temperature at the inflection point of the transmittance curve, or the temperature at a defined transmittance (e.g., 50%). The cloud point can be affected by many structural parameters of the polymer, like the hydrophobic content, architecture and even the molar mass.
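As a sketch of the 50%-transmittance convention (the data and function names below are hypothetical, for illustration only):

```python
import numpy as np

def cloud_point(temps_c, transmittance, threshold=50.0):
    """Cloud point taken as the temperature at a defined transmittance
    (here 50%), found by linear interpolation on a heating curve."""
    t = np.asarray(temps_c, dtype=float)
    tr = np.asarray(transmittance, dtype=float)
    below = np.where(tr < threshold)[0]
    if below.size == 0 or below[0] == 0:
        return None                      # no transition inside this range
    i = below[0]
    # interpolate between the two points bracketing the threshold
    frac = (threshold - tr[i - 1]) / (tr[i] - tr[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Synthetic heating curve of an LCST polymer solution (% transmittance)
temps = [28, 29, 30, 31, 32, 33, 34]
trans = [98, 97, 95, 80, 35, 8, 3]
print(f"cloud point ~ {cloud_point(temps, trans):.1f} deg C")   # ~31.7
```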
Hysteresis
The cloud points upon cooling and heating of a thermoresponsive polymer solution do not coincide because the process of equilibration takes time. The temperature interval between the cloud points upon cooling and heating is called hysteresis. The cloud points are dependent on the cooling and heating rates, and hysteresis decreases with lower rates. There are indications that hysteresis is influenced by the temperature, viscosity, glass transition temperature and the ability to form additional intra- and inter-molecular hydrogen bonds in the phase separated state.
Other properties
Another important property for potential applications is the extent of phase separation, represented by the difference in polymer content in the two phases after phase separation. For most applications, phase separation in pure polymer and pure solvent would be desirable although it is practically impossible. The extent of phase separation in a given temperature interval depends on the particular polymer-solvent phase diagram.
Example: From the phase diagram of polystyrene (molar mass 43,600 g/mol) in the solvent cyclohexane it follows that at a total polymer concentration of 10%, cooling from 25 to 20 °C causes phase separation into a polymer-poor phase with 1% polymer and a polymer-rich phase with 30% polymer content.
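Assuming mass-based concentrations and the lever rule (a standard tie-line construction; the labels below are mine, not from the text), the relative amounts of the two phases in this example follow directly:

$$w_{\mathrm{rich}} = \frac{c_0 - c_{\mathrm{poor}}}{c_{\mathrm{rich}} - c_{\mathrm{poor}}} = \frac{10 - 1}{30 - 1} \approx 0.31,$$

so roughly 31% of the solution ends up in the polymer-rich phase and the remaining 69% in the polymer-poor phase.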
Also desirable for many applications is a sharp phase transition, which is reflected by a sudden drop in transmittance. The sharpness of the phase transition is related to the extent of phase separation, but additionally relies on whether all polymer chains present exhibit the same cloud point. This depends on the polymer endgroups, dispersity, or, in the case of copolymers, varying copolymer compositions. As a result of phase separation, thermoresponsive polymer systems can form well-defined self-assembled nanostructures with a number of different practical applications, such as in drug and gene delivery, tissue engineering, etc. In order to establish the required properties for applications, a rigorous characterization of the phase separation phenomenon can be carried out by different spectroscopic and calorimetric methods, including nuclear magnetic resonance (NMR), dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), infrared spectroscopy (IR), Raman spectroscopy, and differential scanning calorimetry (DSC).
Examples of thermoresponsive polymers
Thermoresponsivity in organic solvents
Due to the low entropy of mixing, miscibility gaps are often observed for polymer solutions. Many polymers are known that show UCST or LCST behavior in organic solvents. Examples for organic polymer solutions with UCST are polystyrene in cyclohexane, polyethylene in diphenylether or polymethylmethacrylate in acetonitrile. An LCST is observed for, e.g., polypropylene in n-hexane, polystyrene in butylacetate or polymethylmethacrylate in 2-propanone.
Thermoresponsivity in water
Polymer solutions that show thermoresponsivity in water are especially important since water as a solvent is cheap, safe and biologically relevant. Current research efforts focus on water-based applications like drug delivery systems, tissue engineering, and bioseparation (see the section Applications). Numerous polymers with an LCST in water are known. The most studied polymer is poly(N-isopropylacrylamide). Further examples are poly[2-(dimethylamino)ethyl methacrylate] (pDMAEMA), hydroxypropylcellulose, poly-2-isopropyl-2-oxazoline and polyvinyl methyl ether.
Some industrially relevant polymers show LCST as well as UCST behavior, where the UCST is found outside the 0-to-100 °C region and can only be observed under extreme experimental conditions. Examples are polyethylene oxide, polyvinylmethylether and polyhydroxyethylmethacrylate. There are also polymers that exhibit UCST behavior between 0 and 100 °C. However, there are large differences concerning the ionic strength at which UCST behavior is detected. Some zwitterionic polymers show UCST behavior in pure water and also in salt-containing water or even at higher salt concentrations. By contrast, polyacrylic acid displays UCST behavior solely at high ionic strength. Examples of polymers that show UCST behavior in pure water as well as under physiological conditions are poly(N-acryloylglycinamide), ureido-functionalized polymers, copolymers of N-vinylimidazole and 1-vinyl-2-(hydroxylmethyl)imidazole, and copolymers of acrylamide and acrylonitrile. Polymers whose UCST relies on non-ionic interactions are very sensitive to ionic contamination. Small amounts of ionic groups may suppress phase separation in pure water.
The UCST is dependent on the molecular mass of the polymer. For the LCST this is not necessarily the case, as shown for poly(N-isopropylacrylamide).
Schizophrenic behavior of UCST-LCST diblock copolymers
A more complex scenario can be found in the case of diblock copolymers that feature two orthogonally thermoresponsive blocks, i.e., a UCST-type and an LCST-type block. By applying a temperature stimulus, the individual polymer blocks show different phase transitions; e.g., with increasing temperature, the UCST-type block features an insoluble-soluble transition, while the LCST-type block undergoes a soluble-insoluble transition. The order of the individual phase transitions depends on the relative positions of the UCST and LCST. Thus, upon temperature change the roles of the soluble and insoluble polymer blocks are reversed, and this structural inversion is typically called 'schizophrenic' in the literature. Besides the fundamental interest in the mechanism of this behavior, such block copolymers have been proposed for application in smart emulsification, drug delivery, and rheology control. Schizophrenic diblock copolymers have also been applied as thin films for potential use as sensors, smart coatings or nanoswitches, and in soft robotics.
References
Polymer material properties
Smart materials
Temperature | Temperature-responsive polymer | Physics,Chemistry,Materials_science,Engineering | 3,377 |
14,116,882 | https://en.wikipedia.org/wiki/RAS%20p21%20protein%20activator%201 | RAS p21 protein activator 1 or RasGAP (Ras GTPase activating protein), also known as RASA1, is a 120-kDa cytosolic human protein that provides two principal activities:
Inactivation of Ras from its active GTP-bound form to its inactive GDP-bound form by enhancing the endogenous GTPase activity of Ras, via its C-terminal GAP domain
Mitogenic signal transmission towards downstream interacting partners through its N-terminal SH2-SH3-SH2 domains
The protein encoded by this gene is located in the cytoplasm and is part of the GAP1 family of GTPase-activating proteins. The gene product stimulates the GTPase activity of normal RAS p21 but not its oncogenic counterpart. Acting as a suppressor of RAS function, the protein enhances the weak intrinsic GTPase activity of RAS proteins resulting in the inactive GDP-bound form of RAS, thereby allowing control of cellular proliferation and differentiation. Mutations leading to changes in the binding sites of either protein are associated with basal cell carcinomas. Alternative splicing results in two isoforms where the shorter isoform, lacking the N-terminal hydrophobic region but retaining the same activity, appears to be abundantly expressed in placental but not adult tissues.
Domains
RasGAP contains one SH3 domain and two SH2 domains, a PH domain, a C2 domain, and a GAP domain.
Interactions
RAS p21 protein activator 1 has been shown to interact with:
ANXA6,
CAV2,
DNAJA3,
DOK1,
EPHB2,
EPHB3,
GNB2L1
HCK,
HRAS,
HTT,
IGF1R,
KHDRBS1,
NCK1,
PDGFRB,
PTK2B,
SOCS3, and
Src.
The mRNA can interact with Mir-132 microRNA; this process is linked to angiogenesis.
Disease database
RASA1 gene variant database
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Capillary Malformation-Arteriovenous Malformation Syndrome and RASA1-Related Parkes Weber Syndrome
OMIM entries in RASA1 related disorders
Proteins | RAS p21 protein activator 1 | Chemistry | 465 |
17,175,853 | https://en.wikipedia.org/wiki/EJBCA | EJBCA (formerly: Enterprise JavaBeans Certificate Authority) is a free software public key infrastructure (PKI) certificate authority software package maintained and sponsored by the Swedish for-profit company PrimeKey Solutions AB, which holds the copyright to most of the codebase and is a subsidiary of the United States-based Keyfactor Inc. The project's source code is available under the terms of the GNU Lesser General Public License (LGPL). The EJBCA software package is used to install a privately operated certificate authority, validation authority and registration authority. This is in contrast to commercial certificate, validation and/or registration authorities that are operated by a trusted third party. Since its inception EJBCA has been used as certificate authority software for different use cases, including eGovernment, endpoint management, research, energy, eIDAS, telecom, networking, and usage in SMEs.
See also
Public key infrastructure
References
Further reading
Research and application of EJBCA based on J2EE; Liyi Zhang, Qihua Liu and Min Xu; IFIP International Federation for Information Processing Volume 251/2008;
External links
Public key infrastructure
Cryptographic software
Free security software
Software using the GNU Lesser General Public License
Products introduced in 2001
Java enterprise platform
Java platform software | EJBCA | Mathematics | 261 |
36,921,185 | https://en.wikipedia.org/wiki/HD%20105382 | HD 105382 (also known as V863 Centauri) is a star in the constellation Centaurus. Its apparent magnitude is 4.47, making it visible to the naked eye under good observing conditions. From parallax measurements, it is located 130 parsecs (440 light years) from the Sun.
In 1992, Luis A. Balona et al. announced their discovery that HD 105382 is a variable star. It was given its variable star designation, V863 Centauri, in 1993. HD 105382's apparent magnitude varies with an amplitude of 0.012 over a period of 1.295 days. It had previously been classified as a Be star, which would explain the variability as stellar pulsations, but this classification was probably due to accidental observation of the nearby Be star δ Centauri. A 2004 study showed that the 1.295-day period is actually the rotation period of the star, and that the variability is caused by a non-homogeneous distribution of elements on the stellar surface. In particular, HD 105382 is a helium-weak chemically peculiar star with a helium abundance varying between 0.5% and 15% of the solar abundance, and a silicon abundance varying between 0.00044% and 0.0069% of the solar value. Regions with more helium appear to coincide with the regions with less silicon, and vice versa. This peculiar abundance pattern is probably related to HD 105382's magnetic field, which has a polar strength of 2.3 kG.
From astrometric measurements by the Hipparcos spacecraft, HD 105382 is identified as a probable astrometric binary. It is only 267" away from δ Centauri, and both stars appear to be at the same distance from Earth and share the same motion through space, so they may be related. In total, this may be a system of five stars. It is a member of the Lower Centaurus–Crux (LCC) subgroup of the Scorpius–Centaurus association.
References
Centaurus
B-type giants
Variable stars
Centauri, V863
Durchmusterung objects
105382
059173
4618
Helium-weak stars
Astrometric binaries
Lower Centaurus Crux | HD 105382 | Astronomy | 461 |
75,576,823 | https://en.wikipedia.org/wiki/Artificial%20planet | An artificial planet (also known as a planetary replica or a replica planet) is a proposed stellar megastructure. Its defining characteristic is that it has sufficient mass to generate its own gravity field, strong enough to prevent its atmosphere from escaping, although the term has sometimes been used to describe other types of megastructures that have self-sufficient ecosystems. The concept can be found in many works of science fiction.
In science
Replica planet
Mark Hempsell suggested that an artificial replica planet could be created in the Solar System as preparation for future space colonization, probably in the habitable zone between the orbits of Venus and Mars. It could evolve from the construction of a smaller space habitat. It would serve a similar purpose to other large-scale megastructures intended as living spaces (such as the O'Neill cylinder) or to the colonization (or terraforming) of existing planets. Unlike a space habitat, the artificial planet would be large enough to create its own gravity field that would prevent its atmosphere from escaping, and the atmosphere would also serve to protect the world from radiation and meteorites (a back-of-the-envelope version of this retention criterion is sketched below). However, an artificial planet would have a much worse ratio of invested mass to usable surface area.
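The "sufficient mass" criterion can be made quantitative by comparing a body's escape velocity with the thermal speed of atmospheric gas molecules. The sketch below uses Earth's mass and radius as the benchmark; the six-times rule of thumb is a common approximation, not a figure from the article.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23  # Boltzmann constant, J/K

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * mass_kg / radius_m)

def thermal_speed(temp_k: float, molecule_mass_kg: float) -> float:
    # Mean speed of a Maxwell-Boltzmann gas.
    return math.sqrt(8 * K_B * temp_k / (math.pi * molecule_mass_kg))

# Earth-like test body (values are Earth's, used as the benchmark).
m_planet = 5.97e24    # kg
r_planet = 6.371e6    # m
m_n2 = 28 * 1.66e-27  # kg, one molecule of N2

v_esc = escape_velocity(m_planet, r_planet)   # ~11.2 km/s
v_th = thermal_speed(288.0, m_n2)             # ~470 m/s at 288 K

# Rule of thumb: a gas is retained over geological time if the escape
# velocity exceeds roughly six times the mean thermal speed.
print(f"escape velocity: {v_esc / 1000:.1f} km/s, N2 thermal speed: {v_th:.0f} m/s")
print("retains N2:", v_esc > 6 * v_th)
```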
Material for artificial planet construction could be extracted from stars or gas giants or from asteroid mining. A sufficiently advanced civilization could use those resources to mass-produce artificial planets using a stellar factory that itself would likely be the size of a large planet.
Construction of an artificial planet has been described as scientifically plausible, but it would likely take thousands of years and be highly expensive. It has also been suggested that such an endeavour would be more challenging than terraforming existing planets, although both ideas are mostly speculative at this point in human history.
Other
The term artificial planet has also been used to describe other types of megastructures, such as large spherical space stations. D. R. Glover defined an artificial planet as "a self-sufficient, independent ecosystem in space", noting that the size of such an entity is less relevant and that it could be much smaller than what is traditionally defined as a planet. Glover sees the development of such a station as a precursor to the development of ships capable of interstellar travel.
Paul Birch has used this term to describe a concept of a supramundane planet. Such a structure would resemble the concept of a Dyson sphere, as the habitable surface would exist on the inner side, but it would be built around a massive stellar body, such as a giant planet or a black hole.
In fiction and popular culture
The concept of the artificial planet can be found in many works of science fiction. An artificial planet is the main setting of several science fiction series, such as Philip José Farmer's Riverworld series (1971–1983), Jack L. Chalker's Well World series (1977–2000) and Paul J. McAuley's Confluence trilogy (1997–1999). Iain Banks' novel Matter (2008) is set on a shellworld (an artificial planet with several habitable layers).
The concept of artificial planets is also found in, among others, The Hitchhiker’s Guide to the Galaxy franchise created by Douglas Adams, where one of the characters is a "planet designer". The Death Star from the Star Wars franchise has been called an artificial planet as well.
In the 2000 film Titan A.E., a groundbreaking scientific project known as "The Titan Project" is designed to create new man-made, habitable planets in space.
See also
Atherton: The House of Power
Dyson sphere
Generation ship
Ringworld
Shellworld
References
Megastructures
Space habitats | Artificial planet | Technology | 734 |
9,454,227 | https://en.wikipedia.org/wiki/River%20of%20Gods | River of Gods is a 2004 science fiction novel by British writer Ian McDonald. It depicts a futuristic India in 2047, a century after its independence from Britain, characterized both by ancient traditions and advanced technologies such as artificial intelligences, robots and nanotechnology. The novel won the British Science Fiction Award in 2004 and was nominated for a Hugo. It was followed by a short story collection called Cyberabad Days in 2009.
Plot introduction
The novel follows a number of different characters' viewpoints on and around the date of 15 August 2047, the centenary of India's partition and independence from the colonial British Raj. This future India has become balkanized into a number of smaller competing states, such as Awadh, Bharat, and Bangla. The global information network is now inhabited by artificial intelligences, phonetically called aeais in the novel, of varying levels of intelligence.
Aeais higher than level 2.5 (able to pass the Turing test and imitate humans) are banned, and their destruction ("excommunication") is the responsibility of "Krishna Cops", like Mr. Nandha. While some pockets of the subcontinent are still steeped in ancient tradition and values, mainstream culture is replete with aeais in TV entertainment and robotic swarms in defense. During such a time, Ranjit Ray steps down from his control of Ray Power, a key energy company, and the responsibility falls on his son Vishram Ray. The playboy Vishram is struggling to make it on his own as a stand-up comedian in Scotland when he is flown back to Varanasi to assume his role at Ray Power, for which he finds himself terribly ill-equipped but eventually surprisingly effective. He learns that his company is working on harvesting zero-point energy from other universes, and sees the particle collider built by his father with the help of Odeco, a clandestine investment firm.
After a prolonged drought, a severe water shortage threatens to jeopardize the peace between the subcontinental states. To avert this crisis, governments are melting glaciers and modifying natural systems. To take advantage of the unrest, a Hindu fundamentalist leader named N.K. Jeevanji organises a "rath yatra" on a spectacular juggernaut. He starts releasing key information to the press via Najia Askarzadah, an ambitious Swedish-Afghan reporter with a desire to be part of history as it is being made.
Lisa Durnau notices an apocalyptic crisis brewing in Alterre, a simulated evolution of earth created by AI scientist Thomas Lull, who is currently hiding in a South Indian coastal village. While Lisa is sent into space to investigate an asteroid, Thomas Lull runs into Aj, a girl with mysterious powers that allow her to see into people's lives, pasts and futures. He decides to follow her and protect her during her quest to find her own true identity, but it is soon revealed that Aj's powers extend beyond mere mortals, when she brings a robot army to a halt with the raise of a hand.
Tal is a beautiful nute (of neutral gender) involved in the designing team of India's greatest "soapi", Town & Country, some of the main stars of which are not human actors, but aeais. Tal falls prey to a conspiracy that compromises the career of Shaheen Badoor Khan, Private Secretary to the Prime Minister Sajida Rana, leading to her assassination and the fall of the government. All this leads to riots and popular fury against Muslims and transsexuals across Varanasi.
Lisa Durnau discovers that at the center of the mysterious asteroid is an 8-billion-year-old grey sphere, possibly a black hole remnant, or an alien artifact from another civilization. This "Tabernacle" communicates a message to the scientists, and this leads Lisa to India to find Thomas Lull, who alone can explain this phenomenon.
Awards and nominations
British Science Fiction Association Best Novel winner, 2004
Arthur C. Clarke Award Best Novel nominee (2005)
Hugo Awards, Best Novel nominee (2005)
Release details
January 2004: United Kingdom. Simon & Schuster. (paperback).
June 2004: United Kingdom. Simon & Schuster. (hardcover).
April 2005: United Kingdom. Simon & Schuster. (paperback).
March 2006: United States. Prometheus Books (hardcover).
References
2004 British novels
2004 science fiction novels
Fiction set in 2047
British science fiction novels
Novels about artificial intelligence
Fiction about asteroids
Fiction about nanotechnology
Novels by Ian McDonald
Novels set in India
Postcyberpunk novels
Religion in science fiction
Novels about robots
Fiction about water scarcity
Novels with transgender themes
Simon & Schuster books
Novels set in the 2040s | River of Gods | Materials_science | 964 |
29,825,840 | https://en.wikipedia.org/wiki/C6H12N2O4 |
The molecular formula C6H12N2O4 (molar mass: 176.17 g/mol, exact mass: 176.0797 u) may refer to:
DMDNB
Ethylenediaminediacetic acid (EDDA) | C6H12N2O4 | Chemistry | 71 |
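As a quick check on the molar mass quoted above for C6H12N2O4, one can sum standard atomic weights; a minimal sketch follows, where the atomic weights are the commonly tabulated values.

```python
# Standard atomic weights (g/mol); commonly tabulated values.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Composition of C6H12N2O4.
composition = {"C": 6, "H": 12, "N": 2, "O": 4}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # ~176.17 g/mol, matching the value above
```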
2,252,847 | https://en.wikipedia.org/wiki/Ammonia%20%28data%20page%29 | This page provides supplementary chemical data on ammonia.
Structure and properties
Thermodynamic properties
Vapor–liquid equilibrium data
Table data (above) obtained from CRC Handbook of Chemistry and Physics, 44th ed. The (s) notation indicates the equilibrium temperature of vapor over solid; otherwise, the temperature is the equilibrium of vapor over liquid.
Vapor-pressure formula for ammonia:
log10P = A – B / (T − C),
where P is pressure in kPa, and T is temperature in kelvins;
A = 6.67956, B = 1002.711, C = 25.215 for T = 190 K through 333 K.
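A minimal sketch evaluating this formula with the constants above (the boiling-point check value of about 239.8 K is an assumption used only to sanity-test the output):

```python
# Vapor-pressure formula for ammonia, from the constants above:
# log10(P) = A - B / (T - C), with P in kPa and T in kelvins,
# fitted for T = 190 K through 333 K.
A, B, C = 6.67956, 1002.711, 25.215

def ammonia_vapor_pressure_kpa(t_kelvin: float) -> float:
    """Return the equilibrium vapor pressure of ammonia in kPa."""
    if not 190.0 <= t_kelvin <= 333.0:
        raise ValueError("formula is fitted for 190 K through 333 K")
    return 10 ** (A - B / (t_kelvin - C))

# Near ammonia's normal boiling point (~239.8 K) the formula should
# return roughly atmospheric pressure; it gives ~101.6 kPa.
print(f"{ammonia_vapor_pressure_kpa(239.8):.1f} kPa")
```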
Heat capacity of liquid and vapor
Spectral data
Regulatory data
Safety data sheet
The handling of this chemical may incur notable safety precautions... It is highly recommended that you seek the Safety Data Sheet (SDS) for this chemical from a reliable source and follow its directions.
SIRI
Science Stuff (Ammonia Solution)
References
External links
Phase diagram for ammonia
IR spectrum (from NIST)
Chemical data pages
Chemical data pages cleanup | Ammonia (data page) | Chemistry | 214 |
1,342,299 | https://en.wikipedia.org/wiki/Diseases%20of%20affluence | Diseases of affluence, previously called diseases of rich people, is a term sometimes given to selected diseases and other health conditions which are commonly thought to be a result of increasing wealth in a society. Also referred to as the "Western disease" paradigm, these diseases are in contrast to "diseases of poverty", which largely result from and contribute to human impoverishment. These diseases of affluence have vastly increased in prevalence since the end of World War II.
Examples of diseases of affluence include mostly chronic non-communicable diseases (NCDs) and other physical health conditions for which personal lifestyles and societal conditions associated with economic development are believed to be an important risk factor—such as type 2 diabetes, asthma, coronary heart disease, cerebrovascular disease, peripheral vascular disease, obesity, hypertension, cancer, alcoholism, gout, and some types of allergy. They may also be considered to include depression and other mental health conditions associated with increased social isolation and lower levels of psychological well-being observed in many developed countries. Many of these conditions are interrelated, for example obesity is thought to be a partial cause of many other illnesses.
In contrast, the diseases of poverty have tended to be largely infectious diseases, or the result of poor living conditions. These include tuberculosis, malaria, and intestinal diseases. Increasingly, research is finding that diseases thought to be diseases of affluence, such as obesity and cardiovascular disease, also appear widely among the poor; coupled with infectious diseases, they further increase global health inequalities.
Diseases of affluence have become more prevalent in developing countries as diseases of poverty decline, longevity increases, and lifestyles change. In 2008, nearly 80% of deaths due to NCDs—including heart disease, strokes, chronic lung diseases, cancers and diabetes—occurred in low- and middle-income countries.
Main instances
According to the World Health Organization (WHO), the top 10 causes of death in 2019 were:
Ischemic heart diseases
Stroke
Chronic obstructive pulmonary disease
Lower respiratory infections
Neonatal conditions
Trachea, bronchus, lung cancers
Alzheimer's disease and other dementias
Diarrheal diseases
Diabetes
Kidney diseases
Seven of the main causes of death are non-communicable diseases. In 2019, WHO reported 55.4 million deaths worldwide, and more than half (55%) were due to the top causes of death previously mentioned.
Causes
Factors associated with the increase of these conditions and illnesses appear to be direct results of technological advances. They include:
Less strenuous physical exercise, often through increased use of motor vehicles
Irregular exercise as a result of office jobs involving no physical labor.
Easy accessibility in society to large amounts of low-cost food (relative to the much-lower caloric food availability in a subsistence economy)
More food generally, with much less physical exertion expended to obtain a moderate amount of food
Higher consumption of vegetable oils and high sugar-containing foods
Higher consumption of meat and dairy products
Higher consumption of refined grains and products made of such, like white bread and white rice.
A notable historical example is that of Beriberi, a thiamin deficiency syndrome which was long known as a disease of the wealthy in east Asia: Brown rice and other cereal grains are a good source of thiamin, while white rice is not. Because of the labor and waste involved, white rice was long seen as a luxury, meaning a thiamin-deficient diet was something only the rich could afford. Eventually, however, the development of motorized rice-polishing equipment brought luxury—and disease—to the masses.
More foods which are processed, cooked, and commercially provided (rather than seasonal, fresh foods prepared locally at the time of eating)
Prolonged periods of little activity
Greater use of alcohol and tobacco
Longer lifespans
Reduced exposure to infectious agents throughout life (this can result in a more idle and inexperienced immune system, compared to that of an individual who experienced relatively frequent exposure to certain pathogens over their lifetime)
Increased cleanliness. The hygiene hypothesis postulates that children of affluent families are now exposed to fewer antigens than has been normal in the past, giving rise to increased prevalence of allergy and autoimmune diseases.
Diabetes mellitus
Diabetes is a chronic metabolic disease characterized by elevated blood glucose levels. Type 2 diabetes, the most common form, is caused by resistance to insulin or insufficient insulin production, and is seen most commonly in adults. Type 1 diabetes, or juvenile diabetes, is diagnosed mostly in children and is due to little or no insulin production by the pancreas.
According to WHO, the number of adults with diabetes has nearly quadrupled since 1980, reaching 422 million. The global prevalence of diabetes increased from 4.7% in 1980 to 8.5% in 2014. Diabetes is a major cause of blindness, kidney failure, heart attack, stroke and lower limb amputation.
Prevalence in countries of affluence
The Centers for Disease Control and Prevention (CDC) released a report in 2015 indicating that more than 100 million Americans have diabetes or pre-diabetes. Diabetes was the seventh leading cause of death in the United States in 2015. In developed countries like the United States, the risk for diabetes is concentrated in people with low socioeconomic status (SES), defined by a person's education and income level. The prevalence of diabetes varies by education level: of those diagnosed with diabetes, 12.6% of adults had less than a high school education, 9.5% had a high school education, and 7.2% had more than a high school education.
Differences in diabetes prevalence are seen in the population and ethnic groups in the US. Diabetes is more common in non-Hispanic whites who are less educated and have a lower income. It is also more common in less educated Hispanics. The highest prevalence of diabetes is seen in the southeast, southern and Appalachian portion of the United States. In the United States the prevalence of diabetes is increasing in children and adolescents. In 2015, 25 million people were diagnosed with diabetes, of which 193,000 were children. The total direct and indirect cost of diagnosed diabetes in US in 2012 was $245 billion.
In 2009, the Canadian Diabetes Association (CDA) estimated that diagnosed diabetes would increase from 1.3 million cases in 2000 to 2.5 million in 2010 and 3.7 million in 2020. Diabetes was the 7th leading cause of death in Canada in 2015. As in the United States, diabetes in Canada is more prevalent among people of low socioeconomic status.
According to the International Diabetes Federation, more than 58 million people are diagnosed with diabetes in the European Union Region (EUR), and this is projected to rise to 66.7 million by 2045. As in other affluent countries such as the United States and Canada, diabetes is more prevalent in the poorer parts of Europe, such as Central and Eastern Europe.
In Australia, according to self-reported data, 1 in 7 adults, or approximately 1.2 million people, had diabetes in 2014–2015. People living in remote or socioeconomically disadvantaged areas were 4 times more likely to develop type 2 diabetes than non-Indigenous Australians. Australia incurred $20.8 million in direct costs for hospitalization, medication, and out-patient treatment of diabetes. In 2015, $1.2 billion was lost from Australia's Gross Domestic Product (GDP) due to diabetes.
In these countries of affluence, diabetes is prevalent in low socioeconomic groups because of an abundance of unhealthy food choices, energy-dense food, and decreased physical activity. More affluent people are typically more educated and have tools to counter unhealthy foods, such as access to healthy food, physical trainers, and parks and fitness centers.
Risk factors
Obesity and being overweight are among the main risk factors for type 2 diabetes. Other risk factors include lack of physical activity, genetic predisposition, being over 45 years old, tobacco use, high blood pressure and high cholesterol. In the United States, the prevalence of obesity in 2015–2016 was 39.8% in adults and 18.5% in children and adolescents. In Australia in 2014–2015, 2 out of 3 adults (63%) were overweight or obese, and 2 out of 3 adults did little or no exercise. According to the World Health Organization, Europe had the 2nd highest proportion of overweight or obese people in 2014, behind the Americas.
In developing countries
According to WHO, the prevalence of diabetes is rising faster in middle- and low-income countries. Over the next 25 years, the number of people with diabetes in developing countries is expected to increase by over 150%. Diabetes is typically seen in people above retirement age in developed countries, but in developing countries people aged 35–64 are most affected. Although diabetes is considered a disease of affluence affecting developed countries, there is more loss of life and premature death among people with diabetes in developing countries. Asia accounts for 60% of the world's diabetic population. In 1980 less than 1% of Chinese adults were affected by diabetes, but by 2008 the prevalence was 10%. It is predicted that by 2030 diabetes may affect 79.4 million people in India, 42.3 million in China and 30.3 million in the United States.
These changes are the result of rapid economic development in developing nations, which has changed lifestyles and food habits, leading to over-nutrition, increased intake of fast food, weight gain, and insulin resistance. Compared to the West, obesity in Asia is low. India has a very low prevalence of obesity but a very high prevalence of diabetes, suggesting that diabetes may occur at a lower BMI in Indians than in Europeans. Smoking increases the risk for diabetes by 45%, and in developing countries around 50–60% of adult males are regular smokers. In developing countries, diabetes is more commonly seen in urbanized areas: in countries like India, Bangladesh, Nepal, Bhutan and Sri Lanka, the prevalence of diabetes in rural populations is one quarter that of urban populations.
Cardiovascular disease
Cardiovascular disease refers to diseases of the heart and blood vessels. Conditions and diseases associated with heart disease include stroke, coronary heart disease, congenital heart disease, heart failure, peripheral vascular disease, and cardiomyopathy. Cardiovascular disease is the world's biggest killer: 17.5 million people die from it each year, equal to 31% of all deaths. Heart disease and stroke cause 80% of these deaths.
Risk factors
High blood pressure is the leading risk factor for cardiovascular disease and has contributed to 12% of the cardiovascular related deaths worldwide. Other significant risk factors for heart disease include high cholesterol and smoking. 47% of all Americans have one of these three risk factors. Lifestyle choices, such as poor diet and physical inactivity, and excessive alcohol use can also contribute to cardiovascular disease. Medical conditions, like diabetes and obesity can also be risk factors.
Prevalence in countries of affluence
In the United States, 610,000 people die every year from heart disease, equal to 1 in 4 deaths; it is the leading cause of death there for both men and women. In Canada, heart disease is the second leading cause of death, causing 51,000 deaths in 2014. In Australia, heart disease is also the leading cause of death; 29% of deaths in 2015 had an underlying cause of heart disease. Heart disease causes one in four premature deaths in the United Kingdom, and in 2015 it caused 26% of all deaths in that country.
People of lower socio-economic status are more likely to have cardiovascular disease than those of higher socio-economic status. This inequality gap has arisen in developed countries because people of lower socio-economic status more often face the risk factors of tobacco and alcohol use, obesity, and a sedentary lifestyle. Further social and environmental factors such as poverty, pollution, family history, housing and employment contribute to this inequality gap and to the risk of having a health condition caused by cardiovascular disease. The inequality gap between higher- and lower-income populations continues to grow in countries such as Canada, despite the availability of health care for everyone.
Alzheimer's disease and other dementias
Dementia is a chronic syndrome characterized by deterioration in thought processes beyond what is expected from normal aging. It affects a person's memory, thinking, orientation, comprehension, behavior and ability to perform everyday activities. There are many different forms of dementia, and different forms can co-exist. Alzheimer's disease is the most common, contributing 60–70% of dementia cases. Young-onset dementia, which occurs in individuals before the age of 65, contributes 9% of total cases. Dementia is a major cause of disability and dependency among old people.
Worldwide, there are 50 million people with dementia, and 10 million new cases are reported every year. The total number of people with dementia is projected to reach 82 million by 2030 and 152 million by 2050.
Prevalence in countries of affluence
According to the CDC, Alzheimer's disease is the 6th leading cause of death in U.S. adults and the 5th leading cause of death in adults over the age of 65. In 2014, 5 million Americans above the age of 65 were diagnosed with Alzheimer's; this number is predicted to triple by the year 2060, reaching up to 14 million. Dementia and Alzheimer's have been shown to go unreported on death certificates, leading to under-representation of the actual mortality caused by these diseases. Between 2000 and 2015, mortality due to cardiovascular diseases decreased by 11%, whereas deaths from Alzheimer's increased by 123%. 1 in 3 people over the age of 65 die from Alzheimer's or other forms of dementia. Furthermore, 200,000 individuals have been affected by young-onset dementia. In the United States, Alzheimer's affects more women than men, and it is twice as common in African-Americans and Hispanics as in whites. As the number of older Americans increases rapidly, the number of new cases of Alzheimer's will rise too.
East Asia has the most people living with dementia (9.8 million), followed by Western Europe (7.5 million), South Asia (5.1 million) and North America (4.8 million). In 2016, the prevalence of Alzheimer's in Europe was 5.05%; as in the United States, it is more prevalent in women than in men. In the European Union, Finland has the highest mortality due to dementia among both men and women. In Canada, over half a million people are living with dementia, and 25,000 new cases are diagnosed every year. It is projected that by 2031 the number will rise by 66% to 937,000.
Dementia is the second leading cause of death in Australia, and in 2016 it was the leading cause of death in females. In 2018, 436,366 people in Australia were living with dementia: 3 in 10 people over the age of 85 and 1 in 10 people over the age of 65. It is the single greatest cause of disability in older Australians. Rates of dementia are higher for Indigenous people: among people from the Northern Territory and Western Australia, the prevalence of dementia is 26 times higher in the 45–69 age group and about 20 times greater in the 60–69 age group.
Risk factors in countries of affluence
The risk factors for developing dementia or Alzheimer's include age, family history, genetic factors, environmental factors, brain injury, viral infections, neurotoxic chemicals, and various immunological and hormonal disorders.
A research study has found an association between a country's affluence and hygiene conditions and the prevalence of Alzheimer's in its population. According to the hygiene hypothesis, affluent countries with more urbanized and industrialized areas have better hygiene, better sanitation, clean water and improved access to antibiotics. This reduces exposure to the friendly bacteria, viruses and other microorganisms that help stimulate the immune system. Decreased microbial exposure leads to a poorly developed immune system, which exposes the brain to the inflammation seen in Alzheimer's disease.
Countries like the UK and France, which have access to clean drinking water and improved sanitation facilities and have a high GDP, show a 9% increase in Alzheimer's disease compared with countries like Kenya and Cambodia. Likewise, countries like the UK and Australia, where three quarters of the population lives in urban areas, have a 10% higher Alzheimer's rate than countries like Bangladesh and Nepal, where less than one tenth of the population lives in urban areas.
Alzheimer's risk changes with the environment: individuals from the same ethnic background have a lower risk when living in an area of low sanitation than when living in an area of high sanitation. An African-American in the U.S. has a higher risk of developing Alzheimer's than a person living in Nigeria. Immigrant populations exhibit Alzheimer's disease rates intermediate between those of their home country and their adopted country; moving from a country of high sanitation to a country of low sanitation reduces the risk associated with the disease.
Mental illness
People who face poverty are at greater risk of mental illness and also have less access to treatment. The stressful events they face, unsafe living conditions and poor physical health lead to the cycle of poverty and mental illness that is seen all over the world. According to the World Health Organization, 76–85% of people living in lower- and middle-income countries are not treated for their mental illness; in higher-income countries, 35–50% of people with mental illness do not receive treatment. In higher-income countries, an estimated 90% of deaths by suicide are linked to substance use disorders and mental illness; in lower- to middle-income countries, this number is lower.
Prevalence of mental illness
One in four people have experienced mental illness at some point in their lives, and approximately 450 million people in the world currently have a mental illness. In the U.S., approximately one in five adults has a mental illness, or 44.7 million people. In 2016, it was estimated that 268 million people in the world had depression.
Anxiety disorders, such as generalized anxiety, Obsessive Compulsive Disorder, and Post Traumatic Stress Disorder affected 275 million people worldwide in 2016. The global proportion of people affected by anxiety disorders is between 2.5 and 6.5%. Australia, Brazil, Argentina, Iran, the United States, and a number of countries in Western Europe appear to have a higher prevalence of anxiety disorders.
Cancer
Cancer is a generic term for a large group of diseases characterized by the rapid creation of abnormal cells that grow beyond their usual boundaries. These cells can invade adjoining parts of the body and spread to other organs, causing metastases, which are a major cause of death. According to WHO, cancer is the second leading cause of death globally: one in six deaths worldwide is due to cancer, a total of 9.6 million deaths in 2018. Tracheal, bronchus, and lung cancer is the leading form of cancer death across most high- and middle-income countries.
Prevalence in countries of affluence
In the United States, 1,735,350 new cases of cancer will be diagnosed in 2018. The most common forms are cancers of the breast, lung and bronchus, prostate, colon and rectum, melanoma of the skin, non-Hodgkin's lymphoma, and renal, thyroid and liver cancers. Cancer mortality is higher among men than women, and African-Americans have the highest risk of cancer mortality. Cancer is also the leading cause of death in Australia, where the most common cancers are prostate, breast, colorectal, melanoma and lung cancer; these account for 60% of the cancer cases diagnosed in Australia.
Europe contains only one eighth of the world's population but has around one quarter of the global cancer cases, with 3.7 million new cases each year. Lung, breast, stomach, liver and colon cancers are the most common in Europe, and the incidence of different cancers varies across countries.
About one in two Canadians will develop cancer in their lifetime, and one in four will die of the disease. In 2017, 206,200 new cases of cancer were diagnosed. Lung, colorectal, breast, and prostate cancer accounted for about half of all cancer diagnoses and deaths.
Risk factors
The high prevalence of cancer in high-income countries is attributed to lifestyle factors like obesity, smoking, physical inactivity, diet and alcohol intake. Around 40% of cancers can be prevented by modifying these factors.
Allergies/autoimmune diseases
The rate of allergies has risen in industrialized nations around the world over the past 50 years. Public health measures such as sterilized milk, use of antibiotics and improved food production have contributed to a decrease in infections in developed countries. A proposed causal relationship, known as the "hygiene hypothesis", holds that autoimmune disorders and allergies are more common in developed countries with fewer infections. Rates of allergies in developing countries are assumed to be lower than in developed countries, but that assumption may not be accurate due to limited data on prevalence. Research has found a 10% increase in asthma in countries such as Peru, Costa Rica, and Brazil.
See also
Affluenza: "placing a high value on money, possessions, appearances (physical and social) and fame" may increase risk of mental illnesses
Nutrition
Social determinants of health
The China Study: 2005 book on the relationship between the consumption of animal products and selected illnesses
Urbanization
Westernization
References
Further reading
Epidemiology
Human diseases and disorders
Medical conditions related to obesity
Social problems in medicine | Diseases of affluence | Environmental_science | 4,556 |