| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
1,512,119 | https://en.wikipedia.org/wiki/Prime%20constant | The prime constant is the real number $\rho$ whose $n$th binary digit is 1 if $n$ is prime and 0 if $n$ is composite or 1.
In other words, $\rho$ is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is,
$$\rho = \sum_{p\ \mathrm{prime}} \frac{1}{2^{p}} = \sum_{n=1}^{\infty} \frac{\chi_{\mathbb{P}}(n)}{2^{n}},$$
where $p$ indicates a prime and $\chi_{\mathbb{P}}$ is the characteristic function of the set $\mathbb{P}$ of prime numbers.
The beginning of the decimal expansion of ρ is: $\rho = 0.414682509851111660248109622\ldots$
The beginning of the binary expansion is: $\rho = 0.011010100010100010100010000\ldots_2$
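To make the definition concrete, here is a minimal Python sketch (not part of the original article) that sums $2^{-p}$ over the primes below a small bound using exact rational arithmetic, and prints the leading decimal value together with the first binary digits:

```python
from fractions import Fraction

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Partial sum of 2^-p over primes p < 64; the omitted tail is below 2^-66.
rho = sum(Fraction(1, 2**p) for p in range(2, 64) if is_prime(p))
print(float(rho))  # 0.41468250985111166

# First 27 binary digits: digit n is 1 exactly when n is prime.
print("0." + "".join(str(int(is_prime(n))) for n in range(1, 28)))
# -> 0.011010100010100010100010000
```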
Irrationality
The number $\rho$ is irrational.
Proof by contradiction
Suppose $\rho$ were rational.
Denote the $k$th digit of the binary expansion of $\rho$ by $r_k$. Then, since $\rho$ is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers $N$ and $k$ such that
$$r_n = r_{n+ik}$$
for all $n > N$ and all $i \in \mathbb{N}$.
Since there are an infinite number of primes, we may choose a prime $p > N$. By definition we see that $r_p = 1$. As noted, we have $r_p = r_{p+ik}$ for all $i \in \mathbb{N}$. Now consider the case $i = p$. We have $r_{p + p \cdot k} = r_{p(k+1)} = 0$, since $p(k+1)$ is composite because $k + 1 \geq 2$. Since $r_p \neq r_{p(k+1)}$, we see that $\rho$ is irrational.
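The contradiction can also be checked mechanically. The following sketch (an illustration only; the candidate values of N and k are arbitrary choices, not from the article) picks a supposed period and exhibits the prime index whose digit breaks it:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def digit(n: int) -> int:
    """n-th binary digit of the prime constant: 1 iff n is prime."""
    return 1 if is_prime(n) else 0

N, k = 100, 7  # supposed preperiod and period (arbitrary illustrative values)
p = next(n for n in range(N + 1, 10 * N) if is_prime(n))  # a prime p > N
# Periodicity would force digit(p) == digit(p + p*k) == digit(p*(k+1)),
# but p*(k+1) is composite, so the two digits differ:
print(digit(p), digit(p * (k + 1)))  # -> 1 0
```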
References
External links
Irrational numbers
Prime numbers
Articles containing proofs
Mathematical constants | Prime constant | Mathematics | 201 |
74,814,186 | https://en.wikipedia.org/wiki/Living%20Indus%20Initiative | The Living Indus is an umbrella initiative by the Ministry of Climate Change, Government of Pakistan, and the United Nations in Pakistan.
The original Living Indus Initiative document was developed by a team led by Dr. Adil Najam as its Lead Author. The initiative serves as an overarching program and rallying call to action that seeks to spearhead and unify various efforts aimed at revitalizing the ecological well-being of the Indus River within Pakistan's borders. It emerged as a direct response to Pakistan's heightened susceptibility to the adverse effects of climate change.
Background
The Indus River flows down from the Himalayas, through Indian- and Pakistani-administered Kashmir, Gilgit-Baltistan, and Khyber Pakhtunkhwa, then south-by-southwest through the length of Pakistan before emptying into the Arabian Sea near Karachi.
Ninety percent of Pakistan's people and more than three-quarters of its economy reside in the Indus basin. More than 80% of Pakistan's arable land is irrigated by its waters.
The Indus Basin faces devastating challenges due to environmental degradation, unsustainable population growth, rapid urbanization and industrialization, the unregulated utilization of resources, inefficient water use, and poverty. The Indus and its ecosystems are under pressure from a seemingly inexorable changing climate, temperature fluctuations, and disrupted rainfall patterns, while efforts to adapt to and mitigate these effects are still at an early stage.
The Indus has supported a civilization for thousands of years, but with the current state of the management of the basin and the impact of climate change on the monsoon and the glacial melt, it might not be able to sustain Pakistan for another 100 years.
Description
Living Indus is an umbrella initiative and a call to action to lead and consolidate initiatives to restore the ecological health of the Indus within the boundaries of Pakistan. The initiatives have been incorporated into a 'Living Indus' prospectus jointly developed by the Government of Pakistan and the United Nations. Initiated in 2021 and endorsed by all governments, the initiative is expected to continue receiving support.
The scale of the initiatives requires the adoption of collective and innovative approaches by all stakeholders, including the government, the private sector, and the UN, toward mobilizing resources. The response of Living Indus is one of building resilience and adaptation to the threats the Indus faces from the impacts of both human use and climate change over the next few decades.
A number of specific interventions under the Initiative are now operational, including the 'Recharge Pakistan' project led by the Ministry of Climate Change, Government of Pakistan and WWF-Pakistan.
Interventions
Extensive consultations with the government, led by the Chief Ministers of all the provinces, the public sector, private sector, experts, and civil society led to a ‘living’ menu of 25 preliminary interventions. These interventions are in line with global best practices, focusing on green infrastructure and nature-based approaches driven by the community.
The Ministry of Climate Change and Environmental Coordination (MoCC&EC), Government of Pakistan has highlighted eight priority interventions out of the 25. Implementation plans are being prepared for these.
World Restoration Flagship
Designated as a World Restoration Flagship by the UN Environment Programme, the Living Indus Initiative embodies the principles of the UN Decade on Ecosystem Restoration. This accolade acknowledges its exemplary contributions to large-scale ecosystem restoration and its alignment with global restoration objectives.
Inger Andersen, executive director of the UN Environment Programme, stated:
References
Environmental organisations based in Pakistan
Climate change organizations
Water organizations
Nature conservation organizations
Ecosystems
Indus basin
Nature conservation in Pakistan | Living Indus Initiative | Biology | 720 |
12,020,830 | https://en.wikipedia.org/wiki/Spizellomycetales | Spizellomycetales is an order of fungi in the Chytridiomycetes. Spizellomycetalean chytrids are essentially ubiquitous zoospore-producing fungi found in soils where they decompose pollen. Recently they have also been found in dung and harsh alpine environments, greatly expanding the range of habitats where one can expect to find these fungi.
Role in the environment
Spizellomycetalean chytrids have beneficial roles in the soil for nutrient recycling and as parasites of organisms that attack plants, such as nematodes and oospores of downy mildews. On the other hand, they also have detrimental roles as parasites of arbuscular mycorrhizae, symbiotic fungi that help plants gain essential nutrients. Culture isolation studies and molecular characterization of these fungi have demonstrated a great deal of undescribed diversity within the Spizellomycetales, even for isolates collected within the same geographic location. Thus, these understudied fungi await greater exploration.
Taxonomy
The order includes the following genera:
Family Caulochytriaceae Subramanian 1974
Genus Caulochytrium Voos & Olive 1968
Family Powellomycetaceae Simmons 2011
Genus Fimicolochytrium Simmons & Longcore 2012
Genus Geranomyces D.R. Simmons 2011
Genus Powellomyces Longcore, D.J.S. Barr & Désauln. 1995
Genus Thoreauomyces Simmons & Longcore 2012
Family Spizellomycetaceae Barr 1980
Genus Brevicalcar Letcher & M.J. Powell 2017
Genus Bulbomyces Letcher & M.J. Powell 2017
Genus Gaertneriomyces D.J.S. Barr 1980
Genus Gallinipes Letcher & M.J. Powell 2017
Genus Kochiomyces D.J.S. Barr 1980
Genus Spizellomyces D.J.S. Barr 1980
Genus Triparticalcar D.J.S. Barr 1980
See also
Rozella
References
External links
Chytrid Fungi Online: by the University of Alabama
Chytridiomycota
Fungus orders | Spizellomycetales | Biology | 441 |
45,390,961 | https://en.wikipedia.org/wiki/Four-engined%20jet%20aircraft | A four-engined jet, sometimes called a quadjet, is a jet aircraft powered by four engines. The presence of four engines offers increased power and redundancy, allowing such aircraft to be used as airliners, freighters, and military aircraft. Many of the first purpose-built jet airliners had four engines, among which stands the De Havilland Comet, the world's first commercial jetliner. In the decades following their introduction, their use has gradually declined due to a variety of factors, including the approval of twin-engine jets to fly farther from diversion airports as reliability increased, and an increased emphasis on fuel efficiency.
Design
Podded engines
The engines of a 4-engined aircraft are most commonly found in pods hanging from pylons underneath the wings. This can be observed in the Airbus A340, Airbus A380, and Boeing 747. Many military airlifters also feature this design, including the Antonov An-124, Boeing C-17 Globemaster, and Lockheed C-5 Galaxy. In this location, the engines can act as a relieving load and reduce the structural weight of the wing by 15%. They are also in a more accessible location for maintenance or replacement. However, disadvantages include a higher risk of the engines ingesting foreign objects as they have a lower ground clearance, and a larger yawing moment during an engine failure. The supersonic airliner Concorde had its engines mounted in rectangular pods conformal to the underside of the wing, without any pylons. The omission of pylons reduces drag and eliminates the risk of them being overstressed.
The four podded engines can also be attached to the rear fuselage, necessitating a T-tail. This reduces cabin noise and frees up more space on the wings for high-lift devices and fuel storage. The airflow over the wings is also undisturbed due to the absence of pylons. However, the rear-mounted engines shift the centre of gravity aft, and are located further from the fuel supply. The Ilyushin Il-62 and Vickers VC10 both have their four engines mounted in this configuration.
Buried engines
Jet aircraft can also be designed with engines buried within the aircraft structure. The de Havilland Comet incorporated four turbojets buried inside its wing roots, the most common location for buried engines. This design reduces both drag and the risk of ingesting foreign objects, but increases the difficulty of maintenance and complicates the wing structure. The Northrop Grumman B-2 Spirit stealth strategic bomber has all four turbofans buried within its wing (as a flying wing, the wing is the main structural component). This reduces the heat signature of the engines by concealing the fans and minimizing the exhaust signature.
Other
The Hawker Siddeley Trident 3B not only has two engines in external nacelles on the rear fuselage, but also another two engines mounted vertically in the tail. The aircraft was initially designed as a trijet; the Trident 3B added a fourth engine because additional power was required by the stretched fuselage, increased wing chord, and raised gross weight.
Advantages and drawbacks
Advantages
A major advantage of having four engines is the redundancy offered, leading to increased safety. A single engine failure is much less significant, as the three remaining engines can usually provide sufficient power to comfortably reach a diversion airport or continue the journey, depending on factors such as the severity of the malfunction, altitude, fuel load, and weather conditions. With the increased reliability of jet engines, engine failure rates can be as low as 1 in-flight shutdown per 100,000 engine-hours, reducing the significance of this advantage.
During a single-engine failure, the amount of power lost with four engines is proportionately lower than with three or two engines. This is because three of the four engines will still be functioning, constituting a 25% reduction in thrust, compared to 33% for trijets and 50% for twinjets. This can be observed in the following example involving the Boeing 747-400 quadjet, McDonnell Douglas MD-11 trijet, and Boeing 767-300ER twinjet. With all engines operative at maximum takeoff weight, all three aircraft have power-to-weight ratios of approximately 1 to 3.4. Following the failure of one engine, the power-to-weight ratio drops to 1 to 4.7 (747-400), 1 to 5.5 (MD-11), and 1 to 6.6 (767-300ER). The Boeing 747-400 experiences the least degradation in performance, making it safer during an engine failure.
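The arithmetic behind those ratios can be sketched as follows (an illustration only; the 1-to-3.4 baseline is the rounded figure quoted above, so the computed values differ slightly from the per-type figures in the text):

```python
# Thrust-to-weight after one engine failure: losing 1 of n engines
# leaves (n - 1)/n of the total thrust.
baseline = 1 / 3.4  # approximate all-engines thrust-to-weight ratio from the text

for name, n_engines in [("747-400 quadjet", 4),
                        ("MD-11 trijet", 3),
                        ("767-300ER twinjet", 2)]:
    remaining = baseline * (n_engines - 1) / n_engines
    print(f"{name}: 1 to {1 / remaining:.1f}")
# -> 1 to 4.5, 1 to 5.1, 1 to 6.8 (close to the quoted 4.7, 5.5, and 6.6)
```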
Fitting an aircraft with four engines also increases power, enabling more passengers, heavier payloads, and increased performance. This was especially important for early jet airliners, as the low-bypass turbofans and turbojets of the time were much weaker than modern high-bypass turbofans. The Pratt & Whitney JT3D from 1958 had a thrust output of roughly 76 kN (17,000 lbf), while modern engines like the General Electric GE90 can produce over 490 kN (110,000 lbf) of thrust, making this advantage less significant nowadays as larger airliners no longer necessarily need four engines.
The largest four-engined jet airliners are distinguished by having the highest passenger capacities—the Airbus A380 can carry up to 853 passengers in a single-class layout. This allows them to satisfy demand on extremely busy routes, and when filled with passengers to distribute the cost, they can be very profitable.
Drawbacks
Four smaller engines consume more fuel than two larger ones, driving up operational costs. Specifically, the Boeing 747 quadjet consumes 2.5 litres (0.66 US gallons) more fuel per kilogram (2.2 pounds) of payload than the Boeing 787 twinjet. With jet fuel making up a large part of total costs, this makes quadjets less attractive to airlines, and many are shifting their attention towards more efficient aircraft types.
Four engines also incur higher maintenance costs, as each one needs to undergo regular inspections and servicing. Approximately half of the airliner maintenance costs are derived from routine engine maintenance, so the additional expense in maintaining four engines is significant.
The ability of a very large aircraft to carry a large number of passengers can be a drawback when the seats are not filled. This is an emerging trend, particularly because the airline industry has been transitioning from a spoke-hub model to a point-to-point model. In the spoke-hub model, passengers are moved from smaller outlying points and concentrated at large hubs. This introduces a need for high-capacity aircraft. Conversely, the point-to-point model transports passengers directly from origin to destination, spreading them out across different routes and requiring fewer seats on the servicing aircraft. Especially with the recent appearance of low-cost carriers which operate many point-to-point flights, it is more difficult to fill the seats of the largest airliners. For this reason, the wide-body fleets of these airlines are dominated by lower capacity, long range twinjets such as the A330 and 787.
History
Early history
Prior to the Jet Age, airliners were powered by piston engines. Engine failures were relatively common, so providing redundancy with four engines was important for long range flights. This need extended into the beginning of the Jet Age, and combined with the limited thrust available from early jet engines, it was most practical to design large jet airliners with four engines. The first commercial jet aircraft was the four-engined De Havilland Comet, which first flew in 1949. However, due to a series of fatal metal fatigue accidents between 1953 and 1954, the Comet was grounded. This greatly tarnished its reputation and it was the later airliners that truly benefited from the subsequent improvements. In 1958, Boeing introduced the 707 and a year later, Douglas rolled out its DC-8, both types also with four engines. Both were very successful and the 707 in particular is credited with advancing the Jet Age. The large airliners flourished during this period, frequently operating on both domestic and international routes.
Later decades and gradual decline
By the 1960s it became apparent that having four engines reduced fuel efficiency. This was not an issue for long-haul routes which carried 300 or more passengers for 8 to 12 hours, allowing for a high cost-to-passenger-mile ratio. On the other hand, the large four-engined types were less suited for frequent short-haul services, which demanded multiple take-offs and landings daily, costing more fuel while also typically carrying fewer passengers per flight. This prompted the development of large trijets and twinjets. Due to limitations in engine technology, twinjets of this era were small and had relatively short range. The FAA's 60-Minute Rule also prevented them from flying farther than 60 minutes away from diversion airports due to their lower engine redundancy. Trijets represented a compromise between fuel efficiency and redundancy. In 1969, Boeing launched the 747. Nicknamed the "Jumbo Jet", it was the first wide-body airliner, able to carry significantly more passengers than any other aircraft. Its capacity and performance were unmatched, even after the launch of wide-body trijet competition in the form of the McDonnell Douglas DC-10 and Lockheed L-1011 TriStar.
Within its own category in commercial aviation, the supersonic airliner Concorde began service in 1976. Its four Rolls-Royce/Snecma Olympus 593 turbojets allowed it to cruise at twice the speed of sound. At the time of inception it was regarded as the future of air transportation. However, in large part due to high operating costs and noise issues, Concorde never achieved the predicted level of success.
When the BAe 146 was introduced in 1983, it was unusual because it was a four-engined short range regional airliner. Its design ultimately enabled quieter operation and short take off and landing capabilities.
In the 1980s, the increased reliability and available power of jet engines enabled twinjets to safely fly on one engine. This prompted the introduction of ETOPS ratings for twinjets, allowing them to circumvent the 60-Minute Rule and fly on transoceanic routes previously serviced by four-engined types. The redundancy brought by four engines was no longer necessary, and quadjets could no longer compete with the lower fuel consumption and maintenance costs of twinjets with higher-powered engines. All but the largest four-engined types, such as the Boeing 747, became uneconomical, and this led to the retirement of the ageing 707 and DC-8 fleets from passenger service. Nonetheless, Boeing rolled out the 747-400 in 1989, which combined high capacity (over 300 passengers) with long range, a combination still unmatched by twinjets at the time, making it the most commercially successful 747 variant. Airbus, after ending collaboration talks with McDonnell Douglas, which went on to produce the MD-11 long-range trijet, instead launched the A340 quadjet in 1993 as a long-range counterpart to the A330 twinjet; the initial variants of the two shared the same fuselage and wing.
Between the 1970s and the 1990s, twinjets, trijets, and quadjets shared engines of similar output: the DC-10, MD-11, Boeing 767, Airbus A300, A310, and A330, and Boeing 747 all had variants powered by the widespread General Electric CF6, so at the time additional engines were needed for larger capacities and longer range. The major advantages of three and four engines became much less significant when the twin-engined Boeing 777 was introduced in 1995, equipped with the purpose-designed General Electric GE90 engine developed from further advancements in high-bypass turbofan technology. The original 777-200 could seat upwards of 300 passengers, a significant increase over existing twinjets such as the 767, which could typically seat only 200-300 passengers. The subsequent development of the 777-300ER pushed the passenger capacity to just under 400, approaching the 747 and superseding the A340, while being more efficient and incurring lower engine maintenance costs. Airbus, not seeing much success with the updated A340-500/600 variants powered by the Rolls-Royce Trent 500, opted for the all-new A350 XWB to compete against the 777 and the upcoming 787 Dreamliner.
By the early 2000s, the only remaining advantage of the largest types was their ability to carry more passengers than the largest twinjets. In the years following the September 11 attacks, the increase in fuel prices and decline in the aviation industry heightened the need to minimise operating costs and expenditures. Production of the 747-400 passenger variant ceased by 2005 and deliveries of the A340 dropped to 11 per year, as they faced competition from more efficient and comparably capable twinjets.
Current status
The use of four engines was invigorated in 2005 when Airbus introduced the A380, currently the world's largest airliner. It was designed for routes with ultra-high demand, typically seating 575 passengers across two full-length decks. However, as of 2018, Airbus had fulfilled only a quarter of its initial projected figure of 1,200 sales over two decades. This can be attributed to a modern trend towards point-to-point travel using smaller but highly efficient twinjets such as the Airbus A350 and Boeing 787, as opposed to a spoke-hub model which favours massive aircraft such as the A380. The largest operator of the A380, Emirates, profits from its fleet because its primary hub is situated at Dubai International Airport, where many long-haul routes have their stopovers. This makes it easier for Emirates to fill the seats of its A380s. After Emirates reduced its last order in February 2019, Airbus announced that A380 production would end in 2021.
As engine power continued to grow and capacity demands decreased, twinjet aircraft such as the Boeing 787 Dreamliner and Airbus A350 XWB ate into the markets that traditionally demanded four engines.
In response to the A380, Boeing introduced the 747-8 in 2011 as a successor to the 747-400. The 747-8I passenger variant has received only around 50 orders, while the 747-8F freighter variant has been more successful with over 100 orders. The 747-8F remains unmatched in range and payload, making it an attractive option for cargo carriers.
After the Airbus A380 ended production, the Boeing 747-8 also went out of production, with the last delivery taking place on January 31, 2023, meaning that no double-deck passenger jets remain in production. Boeing attributed the retirement of 747 fleets to the effects of the COVID-19 pandemic.
Types currently in production
Airliners
Ilyushin Il-96 (Limited)
Military
Ilyushin Il-76
Kawasaki P-1
Xi'an Y-20
References
Quadjets
Aerospace technologies
Aircraft configurations | Four-engined jet aircraft | Engineering | 3,060 |
47,254,886 | https://en.wikipedia.org/wiki/Yulia%20Sister | Yulia Sister (born September 12, 1936, in Chișinău, Bessarabia, Romania) is a Soviet Moldavian and Israeli analytical chemist engaged in chemical research using polarography and chromatography, a science historian, and a researcher of Russian Jewry in Israel, France, and other countries. She holds the position of Director General of the Research Centre for Russian Jews abroad and in Israel.
Biography
Early childhood and schools
Yulia Sister was born in 1936 in Chișinău (Russian: Kishinev), at the time part of the Kingdom of Romania, a city which later became the capital of the Moldavian SSR and since 1991 has been the capital of Moldova. Her parents and paternal grandparents were also born in the city. The grandparents lived through and survived the pogrom of 1903.
David Iosifovich, Yulia's father, was a doctor educated at Charles University in Prague. He used to tell his daughter about his student years, the Bessarabian association of fellow-countrymen in Prague, and his meetings with famous people. Yulia's mother, Yevgenia (Bathsheba) Moiseevna, copied children's verses for her by hand, and Yulia learned to read quite early. Among the first poems was "What Is Good and What Is Bad" by Mayakovsky.
Yulia's grandparents stuck to traditions and spoke Yiddish, and her grandfather Yosef (Iosif) even wrote Yiddish poetry. But Yulia could hardly remember them. Her grandfather Moshe (Moisei) died before she was born; her paternal grandparents lost their lives in the Kishinev Ghetto during the Holocaust, and her grandmother Sarah died in evacuation during World War II.
During the Second World War, Bessarabia was reclaimed and occupied by the Soviet Union in June 1940. A year later, in July 1941, it was reconquered by Germany and Romania, and in August 1944 it was reoccupied by the Soviet Union. In her memoirs Yulia recalled the day when the Red Army entered Kishinev. She also remembered the German bombing of the city and the air raids on the roads by which her family escaped to the east from the Nazis.
At the beginning of the war David Sister and his family were evacuated to the left bank of the Volga River, where he was appointed chief physician at the district hospital and a consultant to the nearby military hospital. The hospital was located in the open steppe between two villages; on the other side of the Volga lay Stalingrad. The family lived there for a few years. There were no other children in the neighborhood and Yulia had no friends to play with, but she was fascinated by the local nature and made observations of plants and animals. The inhabitants of the hospital could hear the cannonade from the other bank, and during the Battle of Stalingrad it became particularly strong.
In 1944 Yulia's family moved to Kirovograd where she, after a year's delay, was enrolled in the first grade of primary education. A year later the family returned to its native city of Kishinev. Despite severe post-war shortages and difficulties, the Sister family succeeded in restoring their home, which included a huge library. Among the family's friends and guests were writers, actors, musicians, and scientists, and Yulia grew up in an atmosphere of thirst for knowledge.
Between 1945 and 1954 Yulia Sister studied at the School for Girls Number 2 in Kishinev. Chemistry was taught very passionately by a teacher who loved the subject and was able to convey her enthusiasm to the students. On the advice of her teacher, Sister participated in the chemistry enrichment program for school children carried out by a professor at the University of Kishinev.
Education and research career
Yulia Sister entered the Department of Chemistry of the University of Kishinev in the fall of 1954. When asked by the professor who interviewed applicants to the Department why she had chosen it, she explained that thanks to her school teacher she had fallen in love with chemistry. At the University Yulia was involved in various campus activities and served as an editor of the faculty newspaper "Chemist". From her second year at the university she was a member of the student scientific society and was engaged in the research of compounds called heteropolyacids. In 1959 Sister successfully defended her Master's thesis, "Precipitation chromatography of heteropolyacids", and graduated with honors from the University of Kishinev.
Upon completing her studies, Sister was assigned to the laboratory of analytical chemistry headed by Professor Yuri Lyalikov. The laboratory was part of the Institute of Chemistry at the Moldavian branch of the Academy of Sciences of the USSR, which became the Academy of Sciences of Moldova in 1961. Working in this laboratory allowed the young chemist to begin her research with new polarographic methods. In order to carry out analysis of organic compounds by means of alternating-current (ac) polarography, Yulia built a polarograph with her own hands and recorded the first polarograms. Sister was the first in Moldova (with Y. S. Lyalikov) to apply the methods of ac polarography and second-harmonic ac polarography to the analysis of organic compounds. Later, together with the physicist Vil Senkevich, she assembled an automatic device; only afterwards did serial production of polarographs begin in the USSR. In the early 1960s Yulia published her first research articles. In 1967 she received her Ph.D. from the Institute of Chemistry of the Moldavian Academy of Sciences.
Through 25 years of research at the Institute of Chemistry, Sister dealt with a wide range of topics. Her ecology-oriented research included analysis of pesticides in environmental samples, food items, and biological media. She participated in research and analysis of suspensions and was involved in analyzing new organic compounds. Sister made a substantial contribution to the development of methods such as second-harmonic ac polarography, difference polarography with magnetic recording, and chromatopolarography. For about 20 years Yulia Sister served as a consultant on the use of the polarographic method in biology at the Department of Human and Animal Physiology of the University of Kishinev.
In 1984 Yulia Sister was invited to work at the Institute of Technology and Development, where she soon headed the laboratory of physical and chemical methods. The Institute was affiliated with a research and production association in Ialoveni (formerly Kutuzov). Sister and her laboratory used a variety of research methods, among them high-performance liquid chromatography, at that time a new approach in the laboratories of the country. She also contributed as a board member of the Moldavian branch of the Mendeleev Chemical Society and led the program "Young Chemist" in the Moldavian Republic. Many of her students, the former young chemists, later became scientists and managers of respectable companies.
New activities and challenges
Yulia Sister and her family repatriated to Israel in 1990. In 1992–1993 she served as a senior researcher of the Department of Inorganic and Analytical Chemistry at the Hebrew University of Jerusalem, and then she was engaged in the topics related to the analysis of biological objects at the Tel Aviv University. During these years, along with her career in chemistry, Yulia Sister became deeply interested in the study of Russian-Jewish culture.
In 1991 Sister began to write for the Shorter Jewish Encyclopedia (SJE) as a non-staff editor. She served as a research fellow covering the field of history of science and wrote about 90 articles for the encyclopedia. Yulia is the author of the articles "Chemistry" (jointly with P. Smorodnitsky), "Veniamin Levich", "Frederick Reines", "Moise Haissinsky", "Yuri Golfand" and many others.
Yulia Sister's activities in the House of Scientists and Experts of Rehovot started in 1991. Within this forum she organizes lectures, seminars and scientific conferences. She leads the scientific seminars of the House of Scientists that are regularly held at the Weizmann Institute of Science. In 2008, and then in 2014 she organized conferences devoted to the Bilu movement and to the First Aliyah. She also maintains friendly contacts with foreign colleagues, such as the Club of Russian-speaking scientists of Massachusetts.
In 1997 Mikhail Parkhomovsky initiated the creation of the Research Center for Russian Jewry Abroad, which aimed to collect and publish information on Jews who emigrated from the Russian Empire, Soviet Union, or post-Soviet states and made a contribution to world civilization. Parkhomovsky became the Scientific Director and Chief Editor, and Yulia Sister became Director General of the Center. In 2012 the Center changed its name to the Research Centre for Russian Jews abroad and in Israel (Erzi). The collection, processing, and publication of materials related to Russian Jewry are organized by Sister. By 2015 the Center had published about 30 volumes of collections, including books devoted to Jews in England, France, the U.S., Israel, and other countries. In addition to her executive functions, Sister is a frequent editor and author of the Center's collective monographs. She is the editor of the 17th volume ("Let Us Build the Walls of Jerusalem. Book 3"), a coeditor of the 11th volume ("Let Us Build the Walls of Jerusalem. Book 1") and of the monograph "Israel, Russian Roots", and a participant in the editing of the 10th volume.
Sister's activities include the organization of seminars and conferences; the following examples are a small sampling of the events she has organized as Director General of the Center. In 1999 she was the coordinator of the conference dedicated to the 50th anniversary of the Weizmann Institute in Rehovot. Together with Prof. Aron Cherniak she published a detailed report on the conference and some of its materials in the 8th volume of the "Russian Jewry Abroad" series. In 2003 Sister led a conference in Kiryat Ekron, in which she presented the contribution of the Russian Aliyah to Israeli science, culture, and education. More than 200 scientists from all over the country participated in the Center's tenth-anniversary conference in 2007. The 2012 conference was devoted to the 130th anniversary of the First Aliyah, and the event was covered by the House of Scientists of Rehovot.
Yulia Sister lives with her family in Kiryat Ekron. Her husband, Boris (Bezalel) Iosifovich Gendler, is a physician with extensive experience in medical practice and education. After his repatriation from Kishinev, Bezalel Gendler worked as a doctor in one of the Israeli hospitals and published several articles, some of them in collaboration with Yulia.
Selected publications
Chemistry
Yulia Sister is the author or co-author of more than 200 scientific publications.
History of science in Erzi publications
History of science in other publications
Russian Jewry in Israel
Other publications
References
External links
Research Centre for Russian Jews abroad and in Israel (Erzi)
Selected publications of Yulia Sister
Analytical chemists
Moldovan chemists
Israeli chemists
Israeli women chemists
Moldovan women scientists
Israeli women scientists
20th-century women scientists
Moldova State University alumni
Scientists from Chișinău
1936 births
Living people | Yulia Sister | Chemistry | 2,278 |
27,478,537 | https://en.wikipedia.org/wiki/Leggett%20inequality | In physics, the Leggett inequalities, named for Anthony James Leggett, who derived them, are a related pair of mathematical expressions concerning the correlations of properties of entangled particles. (As published by Leggett, the inequalities were exemplified in terms of relative angles of elliptical and linear polarizations.)
Inequalities
They are fulfilled by a large class of physical theories based on particular non-local and realistic assumptions that may be considered plausible or intuitive according to common physical reasoning.
The Leggett inequalities are violated by quantum mechanical theory. The results of experimental tests in 2007 and 2010 have shown agreement with quantum mechanics rather than the Leggett inequalities. Given that experimental tests of Bell's inequalities have ruled out local realism in quantum mechanics, the violation of Leggett's inequalities is considered to have falsified realism in quantum mechanics, where "realism" means the notion that physical systems possess complete sets of definite values for various parameters prior to, and independent of, measurement.
See also
CHSH inequality
Leggett–Garg inequality
References
External links
"The Reality Tests", Joshua Roebke, SEED, June 2008.
"A quantum renaissance", Markus Aspelmeyer and Anton Zeilinger, Physics World, July 2008.
"Quantum theory survives latest challenge", Kate McAlpine, Physics World, December 2010.
Equations of physics
Quantum information science
Quantum measurement
Physics theorems
Inequalities | Leggett inequality | Physics,Mathematics | 315 |
14,324,232 | https://en.wikipedia.org/wiki/RAR-related%20orphan%20receptor%20gamma | RAR-related orphan receptor gamma (RORγ) is a protein that in humans is encoded by the (RAR-related orphan receptor C) gene. RORγ is a member of the nuclear receptor family of transcription factors. It is mainly expressed in immune cells (Th17 cells) and it also regulates circadian rhythms. It may be involved in the progression of certain types of cancer.
Gene expression
Two isoforms are produced from the same RORC gene, probably by selection of alternative promoters.
RORγ (also referred to as RORγ1) – produced from an mRNA containing exons 1 to 11.
RORγt (also known as RORγ2) – produced from an mRNA identical to that of RORγ, except that the two 5'-most exons are replaced by an alternative exon, located downstream in the gene. This causes a different, shorter N-terminus.
RORγ
The mRNA of the first isoform, RORγ, is expressed in many tissues, including thymus, lung, liver, kidney, muscle, and brown fat. While RORγ mRNA is abundantly expressed, attempts to detect RORγ protein have not been successful; it is therefore not clear whether RORγ protein is actually expressed. Consistent with this, the main phenotypes identified in RORγ-/- knockout mice (where neither isoform is expressed) are those associated with RORγt immune system function, and an isoform-specific RORγt knockout displayed a phenotype identical to the RORγ-/- knockout. On the other hand, circadian phenotypes of RORγ-/- mice in tissues where the RORγt isoform is expressed only in minute amounts argue for the expression of a functional RORγ isoform. The failure to detect the protein in previous studies may be due to the high-amplitude circadian rhythm of expression of this isoform in some tissues.
The mRNA is expressed in various peripheral tissues, either in a circadian fashion (e.g., in the liver and kidney) or constitutively (e.g., in the muscle).
In contrast to other ROR genes, the RORC gene is not expressed in the central nervous system.
RORγt
The second isoform, RORγt, is expressed in various immune cells, most prominently immature CD4+/CD8+ thymocytes, T helper 17 (Th17) cells, and type 3 innate lymphoid cells (ILC3s). Mice lacking RORγt are devoid of lymph nodes and Peyer's patches due to the lack of lymphoid tissue inducer (LTi) cells, a subpopulation of ILC3s and important drivers of lymphoid organogenesis. RORγt inhibitors are under development for the treatment of autoimmune diseases such as psoriasis and rheumatoid arthritis.
Function
The RORγ protein is a DNA-binding transcription factor and is a member of the NR1 subfamily of nuclear receptors. Although the specific functions of this nuclear receptor have not been fully characterized yet, some roles emerge from the literature on the mouse gene.
Circadian rhythms
The RORγ isoform appears to be involved in the regulation of circadian rhythms. This protein can bind to and activate the promoter of the ARNTL (BMAL1) gene, a transcription factor central to the generation of physiological circadian rhythms. Also, since the levels of RORγ are rhythmic in some tissues (liver, kidney), it has been proposed to impose a circadian pattern of expression on a number of clock-controlled genes, for example the cell cycle regulator p21. Conversely, it has also been demonstrated that RORγt+ enteric ILC3s themselves are under circadian control, being entrained by light that is sensed by the suprachiasmatic nucleus.
Importantly, the deletion of ARNTL in ILC3s using a RORc promoter disrupted enteric defence, reinforcing the role of clock machinery in the control of RORγt.
Whilst ILC3s themselves oscillate in a circadian manner and exhibit diurnal variations in the expression of clock genes, it remains unclear exactly how the central clock relays these signals to the RORγt+ ILC3s in the gut.
Immune regulation
RORγt is the most studied of the two isoforms. Its best understood functionality is in the immune system. The transcription factor is essential for lymphoid organogenesis in the embryo, in particular lymph nodes and Peyer's patches, but not the spleen. It is essential for the specific immune cells responsible for embryonic lymphoid formation, the lymphoid tissue inducer (LTi) cells. Within these cells, retinoic acid induces expression of RORC. Consequently, removing the metabolic precursor of retinoic acid, vitamin A, from the diet of pregnant mice resulted in lower embryonic LTi cell differentiation, leading to smaller lymph nodes in the adult offspring and ultimately a lower capability to clear a virus. RORγt also plays an important regulatory role in thymopoiesis, reducing apoptosis of thymocytes and promoting thymocyte differentiation into pro-inflammatory T helper 17 (Th17) cells. It also plays a role in inhibiting apoptosis of undifferentiated T cells and promoting their differentiation into Th17 cells, possibly by downregulating the expression of Fas ligand and IL2, respectively.
Despite the pro-inflammatory role of RORγt in the thymus, it is expressed in a Treg cell subpopulation in the colon, and is induced by symbiotic microflora. Abrogation of the gene's activity generally increases type 2 cytokines and may make mice more vulnerable to oxazolone-induced colitis.
Cancer
RORγ is expressed in certain subsets of cancer stem cells (EpCAM+/MSI2+) in pancreatic cancer and shows a strong correlation with tumor stage and lymph node invasion. Amplification of the RORC gene has also been observed in other malignancies such as lung, breast and neuroendocrine prostate cancer.
Ligands
Intermediates within the cholesterol biosynthesis pathway have been shown to activate RORγt. Various oxysterols have also been claimed to activate RORγ, but with lower potency than the cholesterol intermediates.
As a drug target
As antagonism of the RORγ receptor may have therapeutic applications in the treatment of inflammatory diseases, a number of synthetic RORγ receptor antagonists have been developed.
Agonists may allow the immune system to combat cancer. LYC-55716 is an oral, selective RORγ (RORgamma) agonist in clinical trials on patients with solid tumors.
See also
RAR-related orphan receptor
References
External links
Intracellular receptors
Transcription factors | RAR-related orphan receptor gamma | Chemistry,Biology | 1,465 |
64,035,607 | https://en.wikipedia.org/wiki/Uridine%20diphosphate%20N-acetylgalactosamine | Uridine diphosphate N-acetylgalactosamine or UDP-GalNAc is a nucleotide sugar composed of uridine diphosphate (UDP) and N-acetyl galactosamine (GalNAc). It is used by glycosyltransferases to transfer N-acetylgalactosamine residues to substrates. UDP-GalNAc is an important building block for the production of glycoproteins and glycolipids in the body. It also serves as a precursor for the synthesis of mucin-type O-glycans, which are important components of mucus and play important roles in biological processes such as cell signaling, immune defense, and lubrication of the digestive tract.
See also
Galactosamine
Galactose
Globoside
(N-Acetylglucosamine) GlcNAc
References
Acetamides
Hexosamines
Membrane biology | Uridine diphosphate N-acetylgalactosamine | Chemistry | 203 |
18,675,609 | https://en.wikipedia.org/wiki/Light-emitting%20electrochemical%20cell | A light-emitting electrochemical cell (LEC or LEEC) is a solid-state device that generates light from an electric current (electroluminescence). LECs are usually composed of two metal electrodes connected by (e.g. sandwiching) an organic semiconductor containing mobile ions. Aside from the mobile ions, their structure is very similar to that of an organic light-emitting diode (OLED).
LECs have most of the advantages of OLEDs, as well as additional ones:
The device is less dependent on the difference in work function of the electrodes. Consequently, the electrodes can be made of the same material (e.g. gold), and the device can still be operated at low voltages.
Recently developed materials such as graphene or a blend of carbon nanotubes and polymers have been used as electrodes, eliminating the need for using indium tin oxide for a transparent electrode.
The thickness of the active electroluminescent layer is not critical for the device to operate. This means that:
LECs can be printed with relatively inexpensive printing processes (where control over film thicknesses can be difficult).
In a planar device configuration, internal device operation can be observed directly.
There are two distinct types of LECs: those based on ionic transition metal complexes (iTMCs) and those based on light-emitting polymers (LEPs). iTMC devices are often more efficient than their LEP-based counterparts because the emission mechanism is phosphorescent rather than fluorescent.
While electroluminescence had been seen previously in similar devices, the invention of the polymer LEC is attributed to Pei et al. Since then, numerous research groups and a few companies have worked on improving and commercializing the devices.
In 2012 the first intrinsically stretchable LEC, using an elastomeric emissive material at room temperature, was reported. Dispersing an ionic transition metal complex into an elastomeric matrix enables the fabrication of intrinsically stretchable light-emitting devices that possess large emission areas (~175 mm²) and tolerate linear strains of up to 27% and repetitive cycles of 15% strain. This work demonstrated the suitability of this approach for new applications in conformable lighting that require uniform, diffuse light emission over large areas.
In 2012 fabrication of organic light-emitting electrochemical cells (LECs) using a roll-to-roll compatible process under ambient conditions was reported.
In 2017, a new design approach developed by a team of Swedish researchers promised to deliver substantially higher efficiency: 99.2 cd A⁻¹ at a luminance of 1,910 cd m⁻².
See also
Electrochemical cell
Electrochemiluminescence
Light-emitting diode
Organic light-emitting diode
Photoelectrolysis
References
Display technology
Molecular electronics
Conductive polymers | Light-emitting electrochemical cell | Chemistry,Materials_science,Engineering | 573 |
13,255,208 | https://en.wikipedia.org/wiki/Affect%20display | Affect displays are the verbal and non-verbal displays of affect (emotion). These displays can be through facial expressions, gestures and body language, volume and tone of voice, laughing, crying, etc. Affect displays can be altered or faked so one may appear one way, when they feel another (e.g., smiling when sad). Affect can be conscious or non-conscious and can be discreet or obvious. The display of positive emotions, such as smiling, laughing, etc., is termed "positive affect", while the displays of more negative emotions, such as crying and tense gestures, is respectively termed "negative affect".
Affect is important in psychology as well as in communication, mostly when it comes to interpersonal communication and non-verbal communication. In both psychology and communication, there are a multitude of theories that explain affect and its impact on humans and quality of life.
Theoretical perspective
Affect can be taken to indicate an instinctual reaction to stimulation occurring before the typical cognitive processes considered necessary for the formation of a more complex emotion. Robert B. Zajonc asserts that this reaction to stimuli is primary for human beings and is the dominant reaction for lower organisms. Zajonc suggests affective reactions can occur without extensive perceptual and cognitive encoding, and can be made sooner and with greater confidence than cognitive judgments.
Lazarus on the other hand considers affect to be post-cognitive. That is, affect is elicited only after a certain amount of cognitive processing of information has been accomplished. In this view, an affective reaction, such as liking, disliking, evaluation, or the experience of pleasure or displeasure, is based on a prior cognitive process in which a variety of content discriminations are made and features are identified, examined for their value, and weighted for their contributions.
A divergence from a narrow reinforcement model for emotion allows for other perspectives on how affect influences emotional development. Thus, temperament, cognitive development, socialization patterns, and the idiosyncrasies of one's family or subculture are mutually interactive in non-linear ways. As an example, the temperament of a highly reactive, low self-soothing infant may "disproportionately" affect the process of emotion regulation in the early months of life.
Non-conscious affect and perception
In relation to perception, a type of non-conscious affect may be separate from the cognitive processing of environmental stimuli. A monohierarchy of perception, affect and cognition considers the roles of arousal, attentional tendencies, affective primacy, evolutionary constraints, and covert perception within the sensing and processing of preferences and discrimination. Emotions are complex chains of events triggered by certain stimuli. There is no way to completely describe an emotion by knowing only some of its components. Verbal reports of feelings are often inaccurate because people may not know exactly what they feel, or they may feel several different emotions at the same time. There are also situations that arise in which individuals attempt to hide their feelings, and there are some who believe that public and private events seldom coincide exactly, and that words for feelings are generally more ambiguous than are words for objects or events.
Affective responses, on the other hand, are more basic and may be less problematic in terms of assessment. Brewin has proposed two experiential processes that frame non-cognitive relations between various affective experiences: those that are prewired dispositions (i.e., non-conscious processes), able to "select from the total stimulus array those stimuli that are causally relevant, using such criteria as perceptual salience, spatiotemporal cues, and predictive value in relation to data stored in memory", and those that are automatic (i.e., subconscious processes), characterized as "rapid, relatively inflexible and difficult to modify... (requiring) minimal attention to occur and... (capable of being) activated without intention or awareness" (1989, p. 381).
Arousal
Arousal is a basic physiological response to the presentation of stimuli. When this occurs, a non-conscious affective process takes the form of two control mechanisms; one mobilization, and the other immobilization. Within the human brain, the amygdala regulates an instinctual reaction initiating this arousal process, either freezing the individual or accelerating mobilization.
The arousal response is illustrated in studies focused on reward systems that control food-seeking behavior. Researchers focused on learning processes and modulatory processes that are present while encoding and retrieving goal values. When an organism seeks food, the anticipation of reward based on environmental events becomes another influence on food seeking that is separate from the reward of food itself. Therefore, earning the reward and anticipating the reward are separate processes and both create an excitatory influence of reward-related cues. Both processes are dissociated at the level of the amygdala and are functionally integrated within larger neural systems.
Affect and mood
Mood, like emotion, is an affective state. However, an emotion tends to have a clear focus (i.e., a self-evident cause), while mood tends to be more unfocused and diffused. Mood, according to Batson, Shaw, and Oleson (1992), involves tone and intensity and a structured set of beliefs about general expectations of a future experience of pleasure or pain, or of positive or negative affect in the future. Unlike instant reactions that produce affect or emotion, and that change with expectations of future pleasure or pain, moods, being diffused and unfocused, and thus harder to cope with, can last for days, weeks, months, or even years. Moods are hypothetical constructs depicting an individual's emotional state. Researchers typically infer the existence of moods from a variety of behavioral referents.
Positive affect and negative affect represent independent domains of emotion in the general population, and positive affect is strongly linked to social interaction. Positive and negative daily events show independent relationships to subjective well-being, and positive affect is strongly linked to social activity. Recent research suggests that "high functional support is related to higher levels of positive affect". The exact process through which social support is linked to positive affect remains unclear. The process could derive from predictable, regularized social interaction, from leisure activities where the focus is on relaxation and positive mood, or from the enjoyment of shared activities.
Gender
Research has indicated many differences in affective displays due to gender. Gender, as opposed to sex, is one's self-perception of being masculine or feminine (i.e., a male can perceive himself to be more feminine or a female can perceive herself to be more masculine). It can also be argued, however, that hormones (typically determined by sex) greatly affect affective displays and mood.
Affect and child development
According to studies done in the late 1980s and early 1990s, infants within their first year of life not only begin to recognize affect displays but also begin mimicking them and developing empathy. A 2011 study followed up on these earlier findings by measuring the arousal, via pupil dilation, of fifteen 6- to 12-month-old infants looking at both positive and negative displays. Results showed that when presented with negative affect, an infant's pupil will dilate and stay dilated for a longer period of time than with neutral affect. When presented with positive affect, however, the pupil dilation is much larger but stays dilated for a shorter amount of time. While this study does not prove an infant's ability to empathize with others, it does show that infants recognize and acknowledge both positive and negative displays of emotion.
In the early 2000s, over a period of about seven years, a study followed about 200 children whose mothers had "a history of juvenile-onset unipolar depressive disorder", that is, mothers who had experienced depression as children themselves. In cases of unipolar depression, a person generally displays more negative affect and less positive affect than a person without depression; they are more likely to show when they are sad or upset than when they are excited or happy. This study, published in 2010, found that the children of mothers with unipolar depression had lower levels of positive affect than the control group. Even as the children grew older, while their negative affect stayed about the same, they still showed consistently lower positive affect. The study suggests that "Reduced PA [positive affect] may be one source of developmental vulnerability to familial depression...", meaning that while having family members with depression increases the risk of children developing depression, reduced positive affect increases this risk further. Knowing this aspect of depression might also help prevent the onset of depression in young children well into adulthood.
Disorders and physical disabilities
There are some diseases, physical disabilities, and mental health disorders that can change the way a person's affect displays are conveyed. Reduced affect occurs when a person's emotions cannot be properly conveyed or displayed physically. There is no actual change in how intensely they truly feel emotions; there is simply a disparity between the emotions felt and how intensely they are conveyed. These disorders can greatly affect a person's quality of life, depending on how severe the disability is.
Flat, blunted and restricted affect
These are symptoms in which an affected person feels an emotion but does not or cannot display it. Flat affect is the most severe, with very little to absolutely no show of emotion; restricted and blunted affect are, respectively, less severe. Disorders involving these reduced affect displays most commonly include schizophrenia, post-traumatic stress disorder, depression, autism, and traumatic brain injury. One study has shown that people with schizophrenia who experience flat affect can also have difficulty perceiving the emotions of a healthy individual.
Facial paralysis and surgery
People who have facial deformities or paralysis may also be physically incapable of displaying emotions. This is beginning to be corrected through "facial reanimation surgery", which has proved not only to improve a patient's affect displays but also to better their psychological health. There are multiple types of surgery that can help correct facial paralysis. Some of the more common approaches include repairing the actual nerve damage, specifically any damage to the hypoglossal nerve; facial grafts, where nerves taken from a donor's leg are transplanted into the patient's face; or, if the damage is muscular rather than neural, transferring muscle into the patient's face.
Strategic display
Emotions can be displayed in order to elicit desired behaviors from others.
People have been known to display positive emotions in various settings. Service workers often engage in emotional labor, striving to maintain positive emotional expressions despite difficult working conditions or rude customers, in order to conform to organizational rules. Such strategic displays are not always effective: if they are detected, lower customer satisfaction results.
Perhaps the most notable attempt to feign negative emotion could be seen with Nixon's madman theory. Nixon's administration attempted to make the leaders of other countries think Nixon was mad, and that his behavior was irrational and volatile. Fearing an unpredictable American response, leaders of hostile Communist Bloc nations would avoid provoking the United States. This diplomatic strategy was not ultimately successful.
The effectiveness of the strategic display depends on the ability of the expresser to remain undetected. It may be a risky strategy since if detected, the person's original intent could be discovered, undermining the future relationship with the target.
According to the appraisal theory of emotions, the experience of an emotion is preceded by an evaluation of an object of significance to that individual. When individuals are seen to display emotions, it serves as a signal to others of an event important to that individual. Thus, deliberately altering the emotion displayed toward an object can be used to make the targets of the strategic display think and behave in ways that benefit the original expresser. For example, people attempt to hide their expressions during a poker game in order to avoid giving away information to the other players, i.e., to keep a poker face.
See also
Affect (psychology)
Affect theory
Affective
Affective spectrum
Deception
Discrete emotions theory
Display rules
Emotion
Emotional contagion
Emotional labor
Empathy
Facial communication
Interpersonal deception theory
Psychopathy
Self-awareness
Self-deception
Sincerity
Silvan Tomkins
References
Emotion
Evolutionary psychology | Affect display | Biology | 2,510 |
348,969 | https://en.wikipedia.org/wiki/Diagonal%20lemma | In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem.
It is named in reference to Cantor's diagonal argument in set and number theory.
Background
Let $\mathbb{N}$ be the set of natural numbers. A first-order theory $T$ in the language of arithmetic represents the computable function $f : \mathbb{N} \to \mathbb{N}$ if there exists a "graph" formula $\mathcal{G}_f(x, y)$ in the language of $T$, that is, a formula such that for each $n \in \mathbb{N}$

$\vdash_T (\forall y)[\mathcal{G}_f(°n, y) \leftrightarrow y = °f(n)]$.

Here $°n$ is the numeral corresponding to the natural number $n$, which is defined to be the $n$th successor of the presumed first numeral $°0$ in $T$.

The diagonal lemma also requires a systematic way of assigning to every formula $\theta$ a natural number $\#\theta$ (also written as $\#(\theta)$) called its Gödel number. Formulas can then be represented within $T$ by the numerals corresponding to their Gödel numbers. For example, $\theta$ is represented by $°\#\theta$.
The diagonal lemma applies to theories capable of representing all primitive recursive functions. Such theories include first-order Peano arithmetic and the weaker Robinson arithmetic, and even to a much weaker theory known as R. A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all computable functions, but all the theories mentioned have that capacity, as well.
Statement of the lemma
Let $T$ be a first-order theory in the language of arithmetic, capable of representing all computable functions, and let $F(y)$ be a formula in $T$ with one free variable $y$. Then there is a sentence $\psi$ such that $\psi \leftrightarrow F(°\#\psi)$ is provable in $T$.

Intuitively, $\psi$ is a self-referential sentence: $\psi$ says that $\psi$ has the property $F$. The sentence $\psi$ can also be viewed as a fixed point of the operation that assigns, to the equivalence class of a given sentence $\theta$, the equivalence class of the sentence $F(°\#\theta)$ (a sentence's equivalence class is the set of all sentences to which it is provably equivalent in the theory $T$). The sentence $\psi$ constructed in the proof is not literally the same as $F(°\#\psi)$, but is provably equivalent to it in the theory $T$.
Proof
Let $f : \mathbb{N} \to \mathbb{N}$ be the function defined by:

$f(\#\theta) = \#[\theta(°\#\theta)]$

for each formula $\theta(x)$ with only one free variable $x$ in theory $T$, and $f(n) = 0$ otherwise. Here $\#\theta$ denotes the Gödel number of formula $\theta$. The function $f$ is computable (which is ultimately an assumption about the Gödel numbering scheme), so there is a formula $\mathcal{G}_f(x, y)$ representing $f$ in $T$. Namely

$\vdash_T (\forall y)\{\mathcal{G}_f(°\#\theta, y) \leftrightarrow [y = °f(\#\theta)]\}$

which is to say

$\vdash_T (\forall y)\{\mathcal{G}_f(°\#\theta, y) \leftrightarrow [y = °\#(\theta(°\#\theta))]\}$

Now, given an arbitrary formula $F(y)$ with one free variable $y$, define the formula $\mathcal{B}(z)$ as:

$\mathcal{B}(z) := (\forall y)[\mathcal{G}_f(z, y) \rightarrow F(y)]$

Then, for all formulas $\theta(x)$ with one free variable:

$\vdash_T \mathcal{B}(°\#\theta) \leftrightarrow F(°f(\#\theta))$

which is to say

$\vdash_T \mathcal{B}(°\#\theta) \leftrightarrow F(°\#[\theta(°\#\theta)])$

Now substitute $\theta$ with $\mathcal{B}$, and define the sentence $\psi$ as:

$\psi := \mathcal{B}(°\#\mathcal{B})$

Then the previous line can be rewritten as

$\vdash_T \psi \leftrightarrow F(°\#\psi)$

which is the desired result.
(The same argument in different terms is given in [Raatikainen (2015a)].)
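The diagonal construction has a close programming analogue in quines. As an informal illustration only (not part of the article's proof), the following Python sketch uses source-code quotation in place of Gödel numbering: diag plays the role of the function $f$, the string B plays the role of the formula $\mathcal{B}(z)$, and the sample property (that a sentence's text is longer than ten characters) is invented for the example:

```python
def diag(theta: str) -> str:
    """Toy analogue of f: map (the text of) a formula theta with free
    slot {x} to theta applied to a quotation of its own text."""
    return theta.replace("{x}", repr(theta))

# B plays the role of B(z); the property checked here is the made-up
# predicate "the quoted sentence is more than 10 characters long".
B = "len(diag({x})) > 10"

# psi := B applied to its own quotation -- the fixed point of the lemma.
psi = diag(B)

print(psi)        # len(diag('len(diag({x})) > 10')) > 10
print(eval(psi))  # True: psi asserts the property of psi's own text
```

Evaluating psi makes the inner diag call reconstruct the text of psi itself, mirroring the provable equivalence $\psi \leftrightarrow F(°\#\psi)$.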
History
The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument. The terms "diagonal lemma" or "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article.
Rudolf Carnap (1934) was the first to prove the general self-referential lemma, which says that for any formula F in a theory T satisfying certain conditions, there exists a formula ψ such that ψ ↔ F(°#(ψ)) is provable in T. Carnap's work was phrased in alternate language, as the concept of computable functions was not yet developed in 1934. Mendelson (1997, p. 204) believes that Carnap was the first to state that something like the diagonal lemma was implicit in Gödel's reasoning. Gödel was aware of Carnap's work by 1937.
The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar.
See also
Indirect self-reference
List of fixed point theorems
Primitive recursive arithmetic
Self-reference
Self-referential paradoxes
Notes
References
George Boolos and Richard Jeffrey, 1989. Computability and Logic, 3rd ed. Cambridge University Press.
Rudolf Carnap, 1934. Logische Syntax der Sprache. (English translation: 2003. The Logical Syntax of Language. Open Court Publishing.)
Haim Gaifman, 2006. 'Naming and Diagonalization: From Cantor to Gödel to Kleene'. Logic Journal of the IGPL, 14: 709–728.
Hinman, Peter, 2005. Fundamentals of Mathematical Logic. A K Peters.
Mendelson, Elliott, 1997. Introduction to Mathematical Logic, 4th ed. Chapman & Hall.
Panu Raatikainen, 2015a. The Diagonalization Lemma. In Stanford Encyclopedia of Philosophy, ed. Zalta. Supplement to Raatikainen (2015b).
Panu Raatikainen, 2015b. Gödel's Incompleteness Theorems. In Stanford Encyclopedia of Philosophy, ed. Zalta.
Raymond Smullyan, 1991. Gödel's Incompleteness Theorems. Oxford Univ. Press.
Raymond Smullyan, 1994. Diagonalization and Self-Reference. Oxford Univ. Press.
Alfred Tarski, tr. J. H. Woodger, 1983. "The Concept of Truth in Formalized Languages". English translation of Tarski's 1936 article. In A. Tarski, ed. J. Corcoran, 1983, Logic, Semantics, Metamathematics, Hackett.
Mathematical logic
Lemmas
Articles containing proofs | Diagonal lemma | Mathematics | 1,144 |
26,450,277 | https://en.wikipedia.org/wiki/C20H23NO3 | {{DISPLAYTITLE:C20H23NO3}}
The molecular formula C20H23NO3 (molar mass: 325.408 g/mol) may refer to:
6,14-Endoethenotetrahydrooripavine
Enpiperate
N-Methyl-3-piperidyl benzilate
Nalodeine
Molecular formulas | C20H23NO3 | Physics,Chemistry | 78 |
24,150,859 | https://en.wikipedia.org/wiki/C13H17NO | {{DISPLAYTITLE:C13H17NO}}
The molecular formula C13H17NO (molar mass: 203.28 g/mol, exact mass: 203.131014) may refer to:
Crotamiton
Deschloroketamine
5-EAPB
5-MBPB
6-EAPB
α-Pyrrolidinopropiophenone
N-Phenethyl-4-piperidinone | C13H17NO | Chemistry | 99 |
40,659,694 | https://en.wikipedia.org/wiki/IC2MP | The IC2MP (Institute of Chemistry of Poitiers: Materials and Natural Resources) is a multidisciplinary French joint research unit of the University of Poitiers (France) and the CNRS.
Laboratory
The IC2MP is a research laboratory working mainly in chemistry and geology. It was created on January 1, 2012. Its fields of expertise include the study of materials (clays, catalysts, etc.), natural environments (waters, soils, etc.) and reactions (natural or induced). The major application fields relate to catalysis, synthesis, decontamination, and the use of natural resources.
The IC2MP resulted from the merger of four laboratories of the University of Poitiers (SFA and ENSI Poitiers) and the CNRS, located on the Poitiers university campus:
HydrASA: Hydrogeology, Clays, Soils, Alteration, located in the south of the campus;
LACCO: Laboratory Catalysis in Organic Chemistry.
LCME: Laboratory Chemistry and Microbiology of Water. Created in 1974 under the name Laboratory of Chemistry of Water and Nuisances, it was renamed the Laboratory of Water and Environment Chemistry in 1996, then became the LCME in 2008.
SRSN : Synthesis and Reactivity of Natural Substances.
The institute IC2MP includes five research teams:
Water, organic geochemistry, health;
HydrASA - Hydrogeology, clay soils and alterations;
SAMCat - From the active site to the catalysis material;
Catalysis and unconventional media;
Organic synthesis.
The scientific team "Water, organic geochemistry, health", together with the ENSIP and the APTEN, initiated the 'Water Information Day' (Journées Information Eaux), a biennial event held around the start of the academic year.
Research
In 2008, the discovery of more than 250 well-preserved fossils in Gabon provided, for the first time, proof of the existence of multicellular organisms that lived 2.1 billion years ago, a crucial advance in the understanding of the origin of life. Until then, the earliest known forms of complex (multicellular) life dated back only 600 million years. These works were published in 2010 in the journal Nature. This major and unexpected discovery, led by the team of Professor Abderrazak El Albani, gave rise to a cross-disciplinary research project within the IC2MP aimed at studying primitive organic materials and paleo-environments.
Cancer: a new therapeutic molecule that attacks only tumorous cells. In 2012, a study led by researchers of the IC2MP, supported by the Cancéropôle Grand Ouest, resulted in the completion of a therapeutic targeting system programmed to transport a powerful anti-cancer agent to tumorous cells without affecting the healthy tissue usually damaged by classical therapies. The validity of this concept was demonstrated in mice in the context of the treatment of a solid tumor. This study, which represents a new hope in the fight against cancer, was published in the international edition of Angewandte Chemie as a "Very Important Paper" (VIP). It was directed by Sébastien Papot, lecturer at the University of Poitiers and leader of the research group "Programmed Molecular Systems", part of the research team Organic Synthesis.
See also
Institut Pprime, a laboratory of physics in Poitiers
References
External links
Official website (in English)
Chemical research institutes
University of Poitiers
French National Centre for Scientific Research
Research institutes established in 2012
Universities and colleges in Poitiers
French UMR | IC2MP | Chemistry | 746 |
20,420 | https://en.wikipedia.org/wiki/Multimedia | Multimedia refers to the integration of multiple forms of content, such as text, audio, images, video, and interactive elements into a single digital platform or application. This integration allows for a more immersive and engaging experience compared to traditional single-medium content. Multimedia is utilized in various fields, including education, entertainment, communication, game design, and digital art, reflecting its broad impact on modern technology and media.
Multimedia encompasses various types of content, each serving different purposes:
Text - Fundamental to multimedia, providing context and information.
Audio - Includes music, sound effects, and voiceovers that enhance the experience. Recent developments include spatial audio and advanced sound design.
Images - Static visual content, such as photographs and illustrations. Advances include high-resolution and 3D imaging technologies.
Video - Moving images that convey dynamic content. High-definition (HD), 4K, and 360-degree video are recent innovations enhancing viewer engagement.
Animation - the technique of creating moving images from still pictures, often used in films, television, and video games to bring characters and stories to life.
Multimedia can be recorded for playback on computers, laptops, smartphones, and other electronic devices. In the early years of multimedia, the term "rich media" was synonymous with interactive multimedia. Over time, hypermedia extensions brought multimedia to the World Wide Web, and streaming services became more common.
Terminology
The term multimedia was coined by singer and artist Bob Goldstein (later 'Bobb Goldsteinn') to promote the July 1966 opening of his "Lightworks at L'Oursin" show in Southampton, New York, Long Island. Goldstein was perhaps aware of an American artist named Dick Higgins, who had two years previously discussed a new approach to art-making he called "intermedia".
On August 10, 1966, Richard Albarino of Variety borrowed the terminology, reporting: "Brainchild of song scribe-comic Bob ('Washington Square') Goldstein, the 'Lightworks' is the latest multi-media music-cum-visuals to debut as discothèque fare." Two years later, in 1968, the term "multimedia" was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer—one of Goldstein's producers at L'Oursin.
In the intervening forty years, the word has taken on different meanings. In the late 1970s, the term referred to presentations consisting of multi-projector slide shows timed to an audio track. However, by the 1990s, 'multimedia' had taken on its current meaning.
In the 1993 first edition of Multimedia: Making It Work, Tay Vaughan declared, "Multimedia is any combination of text, graphic art, sound, animation, and video that is delivered by computer. When you allow the user – the viewer of the project – to control what and when these elements are delivered, it is interactive multimedia. When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia." This book contained the Tempra Show software. This was a later, rebranded version of the 1985 DOS multimedia software VirtulVideo Producer, about which the Smithsonian declared, "It is one of the first, if not the first, multi-media authoring systems on the market."
The German language society Gesellschaft für deutsche Sprache recognized the word's significance and ubiquitousness in the 1990s by awarding it the title of German 'Word of the Year' in 1995. The institute summed up its rationale by stating, "[Multimedia] has become a central word in the wonderful new media world".
In common usage, multimedia refers to the usage of multiple media of communication, including video, still images, animation, audio, and text, in such a way that they can be accessed interactively. Video, still images, animation, audio, and written text are the building blocks on which multimedia takes shape. In the 1990s, some computers were called "multimedia computers" because they represented advances in graphical and audio quality, such as the Amiga 1000, which could produce 4096 colors (12-bit color), outputs for TVs and VCRs, and four-voice stereo audio. Changes in removable storage technology during this time were also important, as the standard CD-ROM can hold, on average, 700 megabytes of data, while the maximum amount of data a 3.5-inch floppy disk can hold is 2.8 megabytes, with an average of 1.44 megabytes. Greater storage allowed for larger digital media files and therefore more complex multimedia.
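As a quick check of the figures above (a throwaway calculation, not tied to any particular multimedia API):

```python
# 12 bits per pixel allow 2**12 distinct colors.
print(2 ** 12)            # 4096

# A 700 MB CD-ROM holds about 486 times as much as a 1.44 MB floppy disk.
print(round(700 / 1.44))  # 486
```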
The term "video," if not used exclusively to describe motion photography, is ambiguous in multimedia terminology. Video is often used to describe the file format, delivery format, or presentation format instead of "footage," which is used to distinguish motion photography from "animation" of rendered motion imagery. Multiple forms of information content are often not considered modern forms of presentation, such as audio or video. Likewise, single forms of information content with single methods of information processing (e.g., non-interactive audio) are often called multimedia, perhaps to distinguish static media from active media. In the fine arts, for example, Leda Luss Luyken's ModulArt brings two key elements of musical composition and film into the world of painting: variation of a theme and movement of and within a picture, making ModulArt an interactive multimedia form of art. Performing arts may also be considered multimedia, considering that performers and props are multiple forms of both content and media.
In modern times, a multimedia device can be referred to as an electronic device, such as a smartphone, a video game system, or a computer. Each and every one of these devices has a main function but also has other uses beyond their intended purpose, such as reading, writing, recording video and audio, listening to music, and playing video games. This has led them to be called "multimedia devices." While previous media was always local, many are now handled through web-based solutions, particularly streaming.
Major characteristics
Multimedia presentations are presentations featuring multiple types of media, which can include text, graphics, audio, video, and animations. These different types of media convey information to their target audience and communicate with them effectively. Videos are a strong visual component of multimedia presentations because they can serve as visual aids for the presenter's ideas. They are commonly used in education and many other industries, as they help audiences retain chunks of information in a limited amount of time and can be stored easily. Another example is charts and graphs, with which presenters can show their audience trends in the data associated with their research; this gives the audience a visual sense of a company's capabilities and performance. Audio also helps people understand the message being presented, as most modern videos are combined with audio to increase their effectiveness, while animations simplify things from the presenter's perspective. These technological methods allow efficient communication and understanding across a wide range of audiences (with an even wider range of abilities) throughout different fields.
Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, simulator, virtual reality, or augmented reality.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example, to make it easier and faster to convey information. Or in entertainment or art, combine an array of artistic insights that include elements from different art forms to engage, inspire, or captivate an audience.
Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples of these range from multiple forms of content on websites, like photo galleries with both images (pictures) and titles (text) user-updated, to simulations whose coefficients, events, illustrations, animations, or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may also enhance the multimedia experience.
Categorization
Multimedia may be broadly divided into linear and non-linear categories:
Linear content progresses without navigational control for the viewer, who watches the piece from beginning to end, relying on higher levels of emotional and sensory stimulation based on what is being shown, as with a cinema presentation;
Non-linear content uses interactivity to control progress, as with a video game or self-paced computer-based training, so that what is presented next depends on how the user interacts within the simulated world. Hypermedia is an example of non-linear content.
Multimedia presentations can be live or recorded:
A recorded presentation may allow interactivity via a navigation system;
A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.
Usage/Application
Multimedia finds its application in various areas, including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research, and spatial temporal applications. Several examples are as follows:
Creative industries
Creative industries use multimedia for various purposes, ranging from fine arts, entertainment, commercial art, and journalism to media and software services provided for any of the industries listed below. An individual multimedia designer may cover the whole spectrum throughout their career, with requests for their skills ranging from technical to analytical to creative. Multimedia, and more strikingly in the modern day its interactivity, forms the foundation of most creative endeavors that take place online. Microsoft is one of the biggest computer companies in the world, and a core part of its success relies on the ability of multimedia designers to optimize the user experience of interacting with its products.
Commercial uses
Marketing and commercial practices increasingly rely on interactive multimedia, allowing for more sophisticated tactics and increased customer retention. Advertising companies heavily utilize social media, online interfaces, and television to promote products, while ads and websites that utilize pop-ups need shorter, more concise methods to be as efficient and pleasing to potential customers as possible. These platforms can be used by commercial businesses to specifically target their desired audience with a message, advertisement, or promotion. External and internal office communications are often developed by hired creative service firms to display information in a variety of situations. This can range from providing more engaging presentations to educating trainees or new workers on a company's policies or processes. Commercial multimedia developers may also be hired to design for governmental services or nonprofit service applications, usually in the form of campaign websites and commercials aimed at the general public. Data mining within multimedia platforms also allows advertisers to understand the demographics of their target audience quickly and efficiently and to adjust their marketing techniques accordingly. Recently developed techniques include digital billboards, often placed on the side of buildings and wrapped around the edge or corner; clips can then be added at differing angles to create a three-dimensional optical illusion, which is more likely to draw the attention of an observer.
Entertainment and fine arts
Multimedia is heavily used in the entertainment industry, especially to develop special effects in movies and animations (VFX, 3D animation, etc.). Multimedia games are a popular pastime and are software programs available either as CD-ROMs or online. Video games are considered multimedia, as they meld animation, audio, and interactivity to give the player an immersive experience. While video games can vary in terms of animation style or audio type, the element of interactivity makes them a striking example of interactive multimedia. Interactive multimedia refers to multimedia applications that allow users to actively participate instead of just sitting by as passive recipients of information.
In the arts, there are multimedia artists who blend techniques using different media that in some way incorporate interaction with the viewer. Another approach entails the creation of multimedia that can be displayed in a traditional fine arts arena, such as an art gallery. Video has become an intrinsic part of many concerts and theatrical productions in the modern era and has spawned content creation opportunities for many media professionals. Although multimedia display material may be volatile, the survivability of the content is as strong as any traditional medium.
Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference books like encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats.
Learning theory in the past decade has expanded dramatically because of the introduction of multimedia. Several lines of research have evolved, e.g., cognitive load and multimedia learning.
From multimedia learning (MML) theory, David Roberts has developed a large-group lecture practice using PowerPoint, based on the use of full-slide images in conjunction with a reduction of visible text (all text can be placed in the 'notes view' section of PowerPoint). The method has been applied and evaluated in nine disciplines. In each experiment, students' engagement and active learning have been approximately 66% greater than with the same material delivered using bullet points, text, and speech, corroborating a range of theories presented by multimedia learning scholars such as Sweller and Mayer. The idea of media convergence is also becoming a major factor in education, particularly higher education. Defined as separate technologies such as voice (and telephony features), data (and productivity applications), and video that now share resources and interact with each other, media convergence is rapidly changing the curriculum in universities all over the world. Higher education has been implementing the use of social media applications such as Twitter, YouTube, and Facebook to increase student collaboration and to develop new processes for conveying information to students.
Educational technology
Multimedia provides students with an alternate means of acquiring knowledge designed to enhance teaching and learning through various media and platforms. In the 1960s, technology began to expand into classrooms through devices such as screens and telewriters. This technology allows students to learn at their own pace and gives teachers the ability to observe the individual needs of each student. The capacity for multimedia to be used in multi-disciplinary settings is structured around the idea of creating a hands-on learning environment through the use of technology. Lessons can be tailored to the subject matter as well as personalized to the students' varying levels of knowledge on the topic. Learning content can be managed through activities that utilize and take advantage of multimedia platforms. This kind of usage of modern multimedia encourages interactive communication between students and teachers and opens feedback channels, introducing an active learning process, especially with the prevalence of new media and social media. Technology has impacted multimedia as it is largely associated with the use of computers or other electronic devices and digital media due to its capabilities concerning research, communication, problem-solving through simulations, and feedback opportunities. The innovation of technology in education through the use of multimedia allows for diversification among classrooms to enhance the overall learning experience for students.
Within education, video games, specifically fast-paced action games, are able to play a big role in improving cognitive abilities involving attention, task switching, and resistance to distractors. Research also shows that, though video games may take time away from schoolwork, implementing games into the school curriculum has an increased probability of moving attention from games to curricular goals.
Social work
Multimedia is a robust education methodology within the social work context. The five different types of multimedia that support the education process are narrative media, interactive media, communicative media, adaptive media, and productive media. Contrary to long-standing belief, multimedia technology in social work education existed before the prevalence of the internet, taking the form of images, audio, and video incorporated into the curriculum.
First introduced to social work education by Seabury & Maple in 1993, multimedia technology is utilized to teach social work practice skills, including interviewing, crisis intervention, and group work. In comparison with conventional teaching methods, including face-to-face courses, multimedia education shortens transportation time, increases knowledge and confidence in a richer and more authentic context for learning, generates interaction between online users, and enhances understanding of conceptual materials for novice students.
In an attempt to examine the impact of multimedia technology on students' studies, A. Elizabeth Cauble & Linda P. Thurston conducted research in which Building Family Foundations (BFF), an interactive multimedia training platform, was utilized to assess social work students' reactions to multimedia technology on variables of knowledge, attitudes, and self-efficacy. The results showed that respondents exhibited a substantial increase in academic knowledge, confidence, and attitude. Multimedia also benefits students because it brings experts online, fits students' schedules, and allows students to choose courses that suit them.
Mayer's Cognitive Theory of Multimedia Learning suggests that "people learn more from words and pictures than from words alone." According to Mayer and other scholars, multimedia technology stimulates people's brains by implementing visual and auditory effects and thereby assists online users to learn efficiently. Researchers suggest that when users establish dual channels while learning, they tend to understand and memorize better. The mixed literature of this theory is still present in the fields of multimedia and social work.
Language communication
With the spread and development of the English language around the world, multimedia has become an important way of communicating between different people and cultures. Multimedia technology creates a platform where language can be taught. The traditional form of teaching English as a Second Language in classrooms has drastically changed with the prevalence of technology, making it easier for students to obtain language-learning skills. Multimedia motivates students to learn more languages through audio, visual, and animation support. It also helps create English contexts, since an important aspect of learning a language is developing grammar, vocabulary, and knowledge of pragmatics and genres. In addition, cultural connections in terms of forms, contexts, meanings, and ideologies have to be constructed. By improving thought patterns, multimedia develops students' communicative competence by improving their capacity to understand the language. One study, carried out by Izquierdo, Simard and Pulido, examined the correlation between "Multimedia Instruction (MI) and learners' second language (L2)" and its effects on learning behavior. Their findings, based on Gardner's theory of the "socio-educational model of learner motivation and attitudes," show that there is easier access to language-learning materials as well as increased motivation with MI, along with the use of computer-assisted language learning.
Journalism
Newspaper companies all over the world are trying to embrace the new phenomenon by implementing its practices in their work. While some have been slow to come around, other major newspapers like The New York Times, USA Today, and The Washington Post are setting a precedent for the positioning of the newspaper industry in a globalized world. To keep up with the changing world of multimedia, journalistic practices are adopting and utilizing different multimedia functions through the inclusion of visuals such as varying audio, video, text, etc. in their writings.
News reporting is not limited to traditional media outlets. Freelance journalists can use different new media to produce multimedia pieces for their news stories. It engages global audiences and tells stories with technology, which develops new communication techniques for both media producers and consumers. The Common Language Project, later renamed The Seattle Globalist, is an example of this type of multimedia journalism production.
Multimedia reporters who are mobile (usually driving around a community with cameras, audio and video recorders, and laptop computers) are often referred to as mojos, or mobile journalists.
Multimedia Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often done as a collaboration between creative professionals and software engineers. Multimedia helps expand the teaching practices found in engineering, allowing for more innovative methods that not only educate future engineers but also broaden the understanding of where multimedia can be used in specialized engineering careers such as software engineering.
Multimedia is also allowing major car manufacturers, such as Ford and General Motors, to expand the design and safety standards of their cars. By using a game engine and virtual reality glasses, these companies are able to test the safety features and the design of the car before a prototype is even made. Building a car virtually reduces the time it takes to produce new vehicles, cutting down on the time needed to test designs and allowing the designers to make changes in real time. It also reduces expenses since, with a virtual car, making real-world prototypes is no longer needed.
Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia. One well-known example of this being applied is the movie Interstellar, in which executive producer Kip Thorne helped create one of the most realistic depictions of a black hole in film. The visual effects team under Paul Franklin took Kip Thorne's mathematical data and applied it in their own visual effects engine, "Double Negative Gravitational Renderer" (a.k.a. "Gargantua"), to create a "real" black hole used in the final cut. The visual effects team later published a black hole study.
Medicine
Medical professionals and students have a wide variety of ways to learn new techniques and procedures through interactive media, online courses, and lectures. The methods of conveying information to students have drastically evolved with the help of multimedia. From the 1800s to today, lessons have commonly been taught using chalkboards. Projected aids, such as the epidiascope and slide projectors, were introduced into classrooms around the 1960s. With the growing use of computers, the medical field has begun to incorporate new devices and procedures to assist in teaching students, performing procedures, and analyzing patient data, as well as presenting that data in a meaningful way to patients.
Virtual reality
Virtual reality is a technology that creates a simulated environment, often using computer-generated imagery or a combination of real and virtual content, to immerse users in an interactive and lifelike experience. The aim of virtual reality is to make users feel as if they are physically present in a different environment, even though they typically remain in the real world. Virtual reality finds applications across various fields, including gaming, education, healthcare, training, and entertainment. In gaming, users can be transported to fantastical worlds, experiencing games in a more immersive way. In education, VR can provide realistic simulations for training purposes, allowing users to practice skills in a risk-free environment. Healthcare professionals use VR for therapeutic purposes and medical training. The U.S. Air Force has demonstrated the use of VR in training programs for new pilots, simulating the piloting of an aircraft; this allows new pilots to learn in a safe environment and get comfortable before entering a real aircraft.
Head-mounted display (HMD): Users wear a headset that covers their eyes and ears, providing visual and auditory stimuli. These headsets are equipped with screens that display the virtual environment, and some may also have built-in speakers or headphones for audio.
Motion tracking: Sensors track the user's movements, allowing them to interact with the virtual world. This can include head movements, hand gestures, and sometimes even full-body movements, enhancing the sense of immersion.
Input devices: Controllers or other input devices are used to interact with the virtual environment. These devices can simulate hands or tools, enabling users to manipulate objects or navigate within the virtual space.
Computer processing: Powerful computers or gaming consoles are often required to generate and render the complex graphics and simulations needed for a convincing virtual experience.
Augmented reality
Augmented reality overlays digital content onto the real world using media such as audio, animation, and text. Augmented reality became widely popular only in the 21st century; however, earlier devices anticipated it, such as the Sega Genesis Activator controller of 1992, which let users literally stand in an octagon and control in-game movement with physical movement, or, stretching back even further, the R.O.B. NES robot of 1985, which, with its array of accessories, could also give users the sensation of holding a firearm. These multimedia input devices are among the earliest augmented reality devices, allowing users to input commands to facilitate a different user experience. A more modern example of augmented reality is Pokémon GO, a mobile game released on July 6, 2016, which allows users to see Pokémon in a real-world environment.
See also
Animation
Artmedia
Audio
Audiovisual
Computer
Images
Internet
Kraftwerk
Multi-image
Multimedia cartography
Multimedia Messaging Service
Multimedia search
New media art
Non-linear media
Postliterate society
Social media
Text
Transmedia storytelling
Universal multimedia access
Video
Video Game
Virtual reality
Web documentary
References
External links
History of Multimedia from the University of Calgary
Multimedia in Answers.com
Communication design
Design
Film and video terminology
Film production
1960s neologisms | Multimedia | Technology,Engineering | 5,098 |
38,637,989 | https://en.wikipedia.org/wiki/Kepler-37c | Kepler-37c is an exoplanet discovered by the Kepler space telescope in February 2013. It has an orbital period of 21 days and is located 209 light-years away in the constellation Lyra.
Host star
The planet orbits a G-type star similar to the Sun, named Kepler-37, which hosts a total of four planets. The star has a mass of 0.80 solar masses and a radius of 0.79 solar radii. It has a temperature of 5417 K and is 5.66 billion years old. In comparison, the Sun is 4.6 billion years old and has a temperature of 5778 K.
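Treating the star as a blackbody, the quoted radius and temperature imply, via the Stefan–Boltzmann relation, a luminosity of roughly half the Sun's (a back-of-the-envelope estimate made here for illustration, not a figure from the discovery papers):

$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4} = (0.79)^{2}\left(\frac{5417}{5778}\right)^{4} \approx 0.48$$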
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 9.71. Therefore, it is too dim to be seen with the naked eye.
See also
List of planets discovered by the Kepler spacecraft
References
Exoplanets discovered in 2013
37c
Lyra
Terrestrial planets
Transiting exoplanets
Kepler-37 | Kepler-37c | Astronomy | 204 |
1,564,592 | https://en.wikipedia.org/wiki/List%20of%20Firefox%20features | Mozilla Firefox has features which distinguish it from other web browsers, such as Google Chrome, Safari, and Microsoft Edge.
Major differences
To avoid interface bloat, to ship a relatively small core customizable to meet individual users' needs, and to allow for corporate or institutional extensions that meet varying policies, Firefox relies on a robust extension system that lets users modify the browser according to their requirements, instead of providing all features in the standard distribution.
While Opera and Google Chrome do the same, extensions for these are fewer in number as of late 2013. Internet Explorer also has an extension system but it is less widely supported than that of others. Developers supporting multiple browsers almost always support Firefox, and in many instances exclusively. As Opera has a policy of deliberately including more features in the core as they prove useful, the market for extensions is relatively unstable but also there is less need for them. The sheer number of extensions is not a good guide to the capabilities of a browser.
Protocol support and the difficulty of adding new link-type protocols also vary widely, not only across these browsers but across versions of them. Opera has historically been the most robust and consistent about supporting cutting-edge protocols such as file-sharing eDonkey links or bitcoin transactions. These can be difficult to support in Firefox without relying on small, unknown developers, which defeats the privacy purpose of such protocols. Instructions for supporting new link protocols vary widely across operating systems and Firefox versions, and are generally beyond end users who lack systems-administration experience and the ability to follow exact, detailed instructions for typing in configuration strings.
Web technologies support
Firefox supports most basic Web standards including HTML, XML, XHTML, CSS (with extensions), JavaScript, DOM, MathML, SVG, XSLT and XPath. Firefox's standards support and growing popularity have been credited as one reason Internet Explorer 7 was to be released with improved standards support.
Since Web standards are often in contradiction with Internet Explorer's behaviors, Firefox, like other browsers, has a quirks mode. This mode attempts to mimic Internet Explorer's quirks modes, which equates to using obsolete rendering standards dating back to Internet Explorer 5, or alternately newer peculiarities introduced in IE 6 or 7. However, it is not completely compatible. Because of the differing rendering, PC World notes that a minority of pages do not work in Firefox, however Internet Explorer 7's quirks mode does not either.
CNET notes that Firefox does not support ActiveX controls by default, which can also cause webpages to be missing features or to not work at all in Firefox. Mozilla made the decision to not support ActiveX due to potential security vulnerabilities, its proprietary nature and its lack of cross-platform compatibility. There are methods of using ActiveX in Firefox such as via third-party plugins but they do not work in all versions of Firefox or on all platforms.
Beginning on December 8, 2006, Firefox nightly builds passed the Acid2 CSS standards-compliance test, meaning that all future releases of Firefox 3 would pass the test.
Firefox also implements a proprietary protocol from Google called "safebrowsing", which is not an open standard.
Cross-platform support
Mozilla Firefox runs on a range of platforms, generally those whose OS versions were in common use at the time of release. In 2004, version 1 supported older operating systems such as Windows 95 and Mac OS X 10.1; by 2008, version 3 required at least OS X 10.4, and support for Windows 98 had ended.
Various releases available on the primary distribution site can support the following operating systems, although not always the latest Firefox version.
Various versions of Microsoft Windows, including 98, 98SE, ME, NT 4.0, 2000, XP, Server 2003, Vista, 7, 8 and 10.
OS X
Linux-based operating systems using X.Org Server or XFree86
Builds for Solaris (x86 and SPARC), contributed by the Sun Beijing Desktop Team, are available on the Mozilla web site.
Mozilla Firefox 1.x installation on Windows 95 requires a few additional steps.
Since Firefox is open source and Mozilla actively develops a platform independent abstraction for its graphical front end, it can also be compiled and run on a variety of other architectures and operating systems. Thus, Firefox is also available for many other systems. This includes OS/2, AIX, and FreeBSD. Builds for Windows XP Professional x64 Edition are also available. Mozilla Firefox is also the browser of choice for a good number of smaller operating systems, such as SkyOS and ZETA.
Firefox uses the same profile format on the different platforms, so a profile may be used on multiple platforms, if all of the platforms can access the same profile; this includes, for example, profiles stored on an NTFS (via FUSE) or FAT32 partition accessible from both Windows and Linux, or on a USB flash drive. This is useful for users who dual-boot their machines. However, it may cause a few problems, especially with extensions.
Security
Firefox is free-libre software, and thus in particular its source code is visible to everyone. This allows anyone to review the code for security vulnerabilities. It also allowed the U.S. Department of Homeland Security to give funding for the automated tool Coverity to be run against Firefox code.
Additionally, Mozilla has a security bug bounty system - anyone who reports a valid critical security bug receives a $3000 (US) cash reward for each report and a Mozilla T-shirt. With effect from December 15, 2010, Mozilla added Web Applications to its Security Bug Bounty Program.
Tabbed browsing
Firefox supports tabbed browsing, which allows users to open several pages in one window. This feature was carried over from the Mozilla Application Suite, which in turn had borrowed the feature from the popular MultiZilla extension for Mozilla.
Firefox also permits the "homepage" to be a list of URLs delimited with vertical bars (|), which are automatically opened in separate tabs, rather than a single page.
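For example (the URLs are arbitrary illustrations), a home page value of

```
https://www.mozilla.org|https://en.wikipedia.org
```

opens the two pages in separate tabs at startup.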
Firefox 2 supports more tabbed browsing features, including a "tab overflow" solution that keeps the user's tabs easily accessible when they would otherwise become illegible, a "session store" which lets the user keep the opened tabs across the restarts, and an "undo close tab" feature.
Tabbed browsing lets users keep multiple pages open within a single window, which many users find convenient. Tabs are kept easily accessible, and tabs that are no longer in use can be closed to keep browsing manageable.
Pop-up blocking
Firefox also includes integrated customizable pop-up blocking. Firefox was given this feature early in beta development, and it was a major comparative selling point of the browser until Internet Explorer gained the capability in the Windows XP SP2 release of August 25, 2004. Firefox's pop-up blocking can be turned off entirely to allow pop-ups from all sites. Firefox's pop-up blocking can be inconvenient at times — it prevents JavaScript-based links from opening a new window while a page is loading unless the site is added to a "safe list" found in the options menu.
In many cases, it is possible to view the pop-up's URL by clicking the dialog that appears when one is blocked. This makes it easier to decide if the pop-up should be displayed.
Private browsing
Private browsing was introduced in Firefox 3.5, which released on June 30, 2009. This feature lets users browse the Internet without leaving any traces in the browsing history.
Download manager
An integrated customizable download manager is also included. Downloads can be opened automatically depending on the file type, or saved directly to a disk. By default, Firefox downloads all files to a user's desktop on Mac and Windows or to the user's home directory on Linux, but it can be configured to prompt for a specific download location. Version 3.0 added support for cross-session resuming (stopping a download and resuming it after closing the browser). From within the download manager, a user can view the source URL from which a download originated as well as the location to which a file was downloaded.
Live bookmarks
From 2004, live bookmarks allowed users to dynamically monitor changes to their favorite news sources, using RSS or Atom feeds. Instead of treating RSS-feeds as HTML pages as most news aggregators do, Firefox treated them as bookmarks and automatically updated them in real-time with a link to the appropriate source. In December 2018, version 64.0 of Firefox removed live bookmarks and web feeds, with Mozilla suggesting its replacement by add-ons or other software with news aggregator functionality like Mozilla Thunderbird.
Other features
Find as you type
Firefox also has an incremental find feature known as "Find as you type", invoked by pressing Ctrl+F. With this feature enabled, a user can simply begin typing a word while viewing a web page, and Firefox automatically searches for it and highlights the first instance found. As the user types more of the word, Firefox refines its search. Also, if the user's exact query does not appear anywhere on the page, the "Find" box turns red. Ctrl+G can be pressed to go to the next found match.
Alternatively the slash (/) key can be used instead to invoke the "quick search". The "quick search", in contrast to the normal search, lacks search controls and is wholly controlled by the keyboard. In this mode highlighted links can be followed by pressing the enter key. The "quick search" has an alternate mode which is invoked by pressing the apostrophe (') key, in this mode only links are matched.
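As an illustration of the incremental behavior described above, here is a toy Python model of find-as-you-type (a simplification invented for the example; it models only the match position after each keystroke, not link navigation or highlighting):

```python
def find_as_you_type(page: str, keystrokes: str):
    """Toy model of incremental find: after each keystroke, locate the
    first occurrence of the query typed so far (case-insensitively)."""
    query = ""
    lowered = page.lower()
    for ch in keystrokes:
        query += ch
        pos = lowered.find(query.lower())
        # pos == -1 corresponds to the find box turning red (no match)
        yield query, pos

for query, pos in find_as_you_type("The quick brown fox", "quiz"):
    print(query, pos)  # "q", "qu", "qui" match at 4; "quiz" yields -1
```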
Mycroft Web Search
A built-in Mycroft Web search function features extensible search-engine listing; by default, Firefox includes plugins for Google and Yahoo!, and also includes plugins for looking up a word on dictionary.com and browsing through Amazon.com listings. Other popular Mycroft search engines include Wikipedia, eBay, and IMDb.
Smart Bookmarks
Smart Bookmarks (also known as smart keywords) can be used to quickly search for information on specific Web sites. A smart keyword is defined by the user and can be associated with any bookmark; it can then be used in the address bar as a shortcut to quickly reach the site or, if the smart keyword is linked to a search box, to search the site. For example, "imdb" is a pre-defined smart keyword: to search for information about the movie 'Firefox' on IMDb, jump to the location bar with the Ctrl+L shortcut, type "imdb Firefox" and press the Enter key, or simply type "imdb" to go to the front page instead.
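As an illustration of the substitution involved, here is a minimal Python sketch of keyword expansion. The bookmark table and the bare-keyword fallback are simplifications invented for the example; only the %s placeholder convention reflects Firefox's actual behavior:

```python
from urllib.parse import quote_plus

# Toy model of smart-keyword expansion: the text typed after the keyword
# is substituted into the bookmark's URL in place of the %s placeholder.
BOOKMARKS = {
    "imdb": "https://www.imdb.com/find?q=%s",
    "wp": "https://en.wikipedia.org/wiki/Special:Search?search=%s",
}

def expand(location_bar_input: str) -> str:
    keyword, _, query = location_bar_input.partition(" ")
    template = BOOKMARKS.get(keyword)
    if template is None:
        return location_bar_input          # not a keyword: treat as a URL
    if not query:
        return template.replace("%s", "")  # bare keyword: go to the site
    return template.replace("%s", quote_plus(query))

print(expand("imdb Firefox"))  # https://www.imdb.com/find?q=Firefox
```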
Chrome
The Chrome packages within Firefox control and implement the Firefox user interface.
Version 2.0 and above
Enhanced search capabilities
Search term suggestions will now appear as users type in the integrated search box when using the Google, Yahoo! or Answers.com search engines. A new search engine manager makes it easier to add, remove and re-order search engines, and users will be alerted when Firefox encounters a website that offers new search engines that the user may wish to install.
Microsummaries
Support for Microsummaries was added in version 2.0. Microsummaries are short summaries of web pages that are used to convey more information than page titles. Microsummaries are regularly updated to reflect content changes in web pages so that viewers of the web page will want to revisit the web page after updates. Microsummaries can either be provided by the page, or be generated by the processing of an XSLT stylesheet against the page. In the latter case, the XSLT stylesheet and the page that the microsummary applies to are provided by a microsummary generator. Support for Microsummaries was removed as of Firefox 6.
Live Titles
When a website offers a microsummary (a regularly updated summary of the most important information on a Web page), users can create a bookmark with a "Live Title". Compact enough to fit in the space available to a bookmark label, they provide more useful information about pages than static page titles, and are regularly updated with the latest information. There are several websites that can be bookmarked with Live Titles, and even more add-ons to generate Live Titles for other popular websites. Support for Live Titles was removed as of Firefox 6.
Session Restore
The Session Restore feature restores windows, tabs, text typed in forms, and in-progress downloads from the last user session. It will be activated automatically when installing an application update or extension, and users will be asked if they want to resume their previous session after a system crash.
Inline spell checker
A built-in spell checker enables users to quickly check the spelling of text entered into Web forms without having to use a separate application.
Usability in version 2
Firefox 2 was designed for the average user, hiding advanced configuration options and letting features that do not require user interaction work automatically, an approach praised by Jim Rapoza of eWEEK. Firefox also won the UK Usability Professionals' Association's 2005 award for "Best software application".
Version 3.0 and above
Star button
Quickly add bookmarks from the location bar with a single click; a second click lets the user file and tag them.
Version 5.0 and above
Style Inspector
Firefox 10 added the CSS Style Inspector to the Page Inspector, which allows users to examine a site's structure and edit the CSS without leaving the browser.
Firefox 10 added support for CSS 3D Transforms and for anti-aliasing in the WebGL standard for hardware-accelerated 3D graphics. These updates mean that complex site and Web app animations will render more smoothly in Firefox, and that developers can animate 2D objects into 3D without plug-ins.
3D Page Inspector
Firefox 11, released January 2012, introduced a tiltable three-dimensional visualization of the Document Object Model (DOM), where more nested elements protrude further from the page surface. This feature was removed with version 47.
Firefox 57 and above
Electrolysis and WebExtensions
On August 21, 2015, Firefox developers announced that, as part of planned changes to Firefox's internal operations (including a new multi-process architecture codenamed "Multiprocess Firefox" or "Electrolysis", stylized "e10s", and introduced to some users in version 48), Firefox would adopt a new extension architecture known as WebExtensions. WebExtensions uses HTML and JavaScript APIs, is designed to be similar to the extension API used by Google Chrome, and runs within a multi-process environment, but does not enable the same level of access to the browser. XPCOM and XUL add-ons are no longer supported as of Firefox 57.
HTTPS-only mode
Firefox 83 introduced HTTPS-only mode, a security enhancing mode that once enabled forces all connections to websites to use HTTPS.
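Conceptually, the mode upgrades page requests before they are sent. Below is a minimal Python sketch of the scheme rewrite (an illustration only; the real feature also shows a warning page when a site does not support HTTPS):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Toy model of HTTPS-only mode: rewrite plain-HTTP URLs to HTTPS."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://example.com/page"))   # https://example.com/page
print(upgrade_to_https("https://example.com/page"))  # unchanged
```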
Picture in Picture
Released on December 3, 2019, Firefox 71 is the first Firefox release to include Picture-in-picture. At first a Windows only feature, with Mac and Linux support introduced in Firefox 72, picture-in-picture allows users to place a video from a webpage into a small separate window that's viewable regardless of which tab the user is in—including from outside the browser.
Credit Card Auto-Fill
Firefox 81 allowed users in the US to save, manage, and auto-fill credit card information. Support for more countries have been added since the release. As of 2023, the list of supported countries is: Austria, Belgium, Canada, France, Germany, Italy, Poland, Spain, the U.K. and the U.S.
Automatic Local Translation of Webpages
Automatic translation of web content, performed entirely locally on device, was introduced to users in Firefox 118. This feature is a joint effort between Mozilla, University of Edinburgh, Charles University, University of Sheffield, and the University of Tartu under the name Project Bergamot. Project Bergamot was funded by the European Union’s Horizon 2020 research and innovation programme.
Tags
Users can associate one or more descriptive tags with a bookmark, and later find the page again by typing a tag in the location bar or by searching in the Library.
Smart Location Bar
Firefox 3 includes a "Smart Location Bar". While most other browsers, such as Internet Explorer, will search through history for matching web sites as the user types a URL into the location bar, the Smart Location Bar will also search through bookmarks for a page with a matching URL. Additionally, Firefox's Smart Location Bar will also search through page titles, allowing the user to type in a relevant keyword, instead of a URL, to find the desired page. Firefox uses frecency and other heuristics to predict which history and bookmark matches the user is most likely to select.
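To make the ranking idea concrete, the sketch below computes a toy frecency-style score in Python. The bucket cutoffs and weights are invented for illustration and are not Mozilla's actual parameters; the point is only that visit frequency is weighted by visit recency:

```python
from datetime import datetime, timedelta

# Illustrative recency buckets: more recent visits earn a higher weight.
RECENCY_BUCKETS = [
    (timedelta(days=4), 100),
    (timedelta(days=14), 70),
    (timedelta(days=31), 50),
    (timedelta(days=90), 30),
]

def frecency(visit_times: list, now: datetime) -> float:
    """Toy frecency: sum a recency-dependent weight over all visits."""
    score = 0.0
    for visited in visit_times:
        age = now - visited
        weight = next((w for cutoff, w in RECENCY_BUCKETS if age <= cutoff), 10)
        score += weight
    return score

now = datetime.now()
often_but_old = [now - timedelta(days=80)] * 5   # 5 visits, ~3 months ago
once_but_fresh = [now - timedelta(hours=2)]      # 1 visit, 2 hours ago
print(frecency(often_but_old, now))   # 150.0: frequency can outweigh recency
print(frecency(once_but_fresh, now))  # 100.0
```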
Library
View, organize and search through bookmarks, tags and browsing history using the new Library window. Create or restore full backups of this data at any time with a few clicks.
Smart Bookmark Folders
Users can quickly access their most visited bookmarks from the toolbar, or recently bookmarked and tagged pages from the bookmark menu. Smart Bookmark Folders can be created by saving a search query in the Library.
Full page zoom
From the View menu and via keyboard shortcuts, the new zooming feature lets users zoom in and out of entire pages, scaling the layout, text and images, or optionally only the text size. Zoom settings will be remembered for each site.
Text selection improvements
In addition to being able to double-click and drag to select text by words; or triple-click and drag to select text by paragraph, Ctrl (Cmd on Mac) can be held down to retain the previous selection and extend it instead of replacing it when doing another selection.
Web-based protocol handlers
Web applications, such as a user's favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will first have to enable this by registering as handlers with Firefox).
Add-ons and extensions
There are six types of add-ons in Firefox: extensions, themes, language packs, plugins, social features and apps. Firefox add-ons may be obtained from the Mozilla Add-ons web site or from other sources.
Extensions
Firefox users can add features and change functionality in Firefox by installing extensions. Extension functionality is varied, including features such as mouse gestures, advertisement blocking, and enhanced tabbed browsing.
Features that the Firefox developers believe will be used by only a small number of its users are not included in Firefox, but are instead left to be implemented as extensions. Many Mozilla Suite features, such as IRC chat (ChatZilla) and the calendar, have been recreated as Firefox extensions. Extensions are also sometimes a testing ground for features that are eventually integrated into the main codebase. For example, MultiZilla was an extension that provided tabbed browsing when Mozilla lacked that feature.
While extensions provide a high level of customizability, PC World notes the difficulty a casual user would have in finding and installing extensions, as compared to having their features available by default.
Most extensions are not created or supported by Mozilla, and malicious extensions have been created. Mozilla provides a repository of extensions that have been reviewed by volunteers and are believed not to contain malware. Since extensions are mostly created by third parties, they do not necessarily go through the same level of testing as official Mozilla products, and they may have bugs or vulnerabilities. Like applications on Android and iOS, Firefox extensions use a permission model: before installing an extension, the user must agree to the permissions it requests, such as access to all webpages or the ability to manage downloads; an extension with no special permissions can only be activated manually to interact with the current page. Since 2019, Firefox and Chromium-based browsers (Google Chrome, Edge, Opera, Vivaldi) have shared the same extension format, the WebExtensions API, which means that a web extension developed for Google Chrome can in most cases be used in Firefox, and vice versa.
Themes
Firefox also supports a variety of themes for changing its appearance. Prior to the release of Firefox 57, themes were simply packages of CSS and image files. From Firefox 57 onwards, themes consist solely of color modifications applied through CSS. Many themes can be downloaded from the Mozilla Update web site.
Language packs
Language packs are dictionaries for spell checking of input fields.
Plugins
Firefox historically supported plugins based on the Netscape Plugin Application Programming Interface (NPAPI), i.e. Netscape-style plugins; Opera and Internet Explorer 3.0 through 5.0 also supported NPAPI.
On June 30, 2004, the Mozilla Foundation, in partnership with Adobe, Apple, Macromedia, Opera, and Sun Microsystems, announced a series of changes to web browser plugins. The then-new API allowed web developers to offer richer web browsing experiences, helping to maintain innovation and standards. The then-new plugin technologies were implemented in later versions of the Mozilla applications.
Mozilla Firefox 1.5 and later versions include the Java Embedding Plugin, which allows Mac OS X users to run Java applets with the then-latest 1.4 and 5.0 versions of Java (the default Java software shipped by Apple was not compatible with any browser except its own Safari).
Apps
After the release of Firefox OS, which was based on a stack of web technologies, Mozilla added a feature to install mobile apps on PCs using Firefox as the base.
Customizability
Beyond the use of add-ons, Firefox has additional customization features:
The position of toolbars and interface elements is customizable.
User stylesheets can change the style of webpages and Firefox's user interface.
Font colours are customizable.
A number of internal configuration options are not accessible in a conventional manner through Firefox's preference dialogs, although they are exposed through its about:config interface.
References
External links
Firefox Features at Mozilla.com
Microsummaries - MozillaWiki
Mozilla Firefox
Firefox | List of Firefox features | Technology | 4,820 |
24,509,230 | https://en.wikipedia.org/wiki/Gymnopilus%20novoguineensis | Gymnopilus novoguineensis is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus novoguineensis at Index Fungorum
novoguineensis
Fungi of North America
Fungus species | Gymnopilus novoguineensis | Biology | 59 |
44,358,770 | https://en.wikipedia.org/wiki/AM%2050 | AM 50 is a Czech automatically launched assault bridge used by combat engineers for crossing narrow obstacles such as rivers, canals, and ditches. It is mounted on heavy vehicles (such as the unarmored or lightly armored Tatra 815 8×8 truck), and can bridge a gap up to .
The AM 50 was developed for the Czechoslovak Army in the late 1960s. Today it is in use in the Czech Republic, Slovakia, India and Pakistan.
See also
Armoured vehicle-launched bridge
Bailey bridge
References
Military bridging equipment
Armoured vehicle-launched bridges
Armoured fighting vehicles of Czechoslovakia
Year of introduction missing | AM 50 | Engineering | 122 |
17,993,511 | https://en.wikipedia.org/wiki/Tin%28IV%29%20bromide | Tin(IV) bromide is the chemical compound SnBr4. It is a colourless low melting solid.
Structure
SnBr4 is a crystalline solid. The compound crystallises in a monoclinic crystal system with molecular SnBr4 units that have distorted tetrahedral geometry, with mean Sn–Br bond lengths of 242.3 pm.
Preparation
SnBr4 can be prepared by reaction of the elements at standard temperature and pressure (STP):
Sn + 2 Br2 → SnBr4
Dissolution in solvents
In aqueous solution, SnBr4 dissolves to give a series of octahedral (six-ligated) bromo-aquo complexes, including cis and trans isomers.
Reactions
SnBr4 forms 1:1 and 1:2 complexes with ligands; for example, with trimethylphosphine it forms SnBr4·P(CH3)3 and SnBr4·2P(CH3)3.
References
Bromides
Metal halides
Tin(IV) compounds | Tin(IV) bromide | Chemistry | 224 |
69,993,032 | https://en.wikipedia.org/wiki/Sir%20Michael%20Uren%20Hub | The Sir Michael Uren Hub is a 13-storey building on the north side of the elevated A40 Westway in London, designed by Allies and Morrison for the purpose of Imperial College's biomedical engineering research. It contains a 160-seat auditorium, social space, cleanrooms, and futuristic outpatients. It is named for engineer Sir Michael Uren and built using his engineered cement substitute, ground granulated blast furnace slag (GGBS).
It houses the School of Public Health's Environmental Research Group, the Musculoskeletal Laboratory (MSk lab) and the National Heart and Lung Institute (NHLI).
Location
The Sir Michael Uren Hub is situated on Wood Lane, Shepherd's Bush, London. To its north is a 34-storey tower, to its east is an incubator building, and to its south is an elevated section of the A40 Westway.
History
In 2014 Imperial College London announced that it was to build a biomedical engineering centre supported by a £40 million donation from Sir Michael Uren and his foundation, at Imperial West, the College's 25-acre research and innovation campus in White City, west London. The purpose was to house Imperial's biomedical and healthcare researchers, engineers, scientists and clinicians, along with spin-out companies, in one building.
Work on the site began in January 2017. It officially opened in December 2020.
Design
The 13-storey Hub was designed by architects Allies and Morrison, and the project was managed by Turner & Townsend, with mechanical and engineering consultants Buro Happold. Autodesk Revit provided the CAD software, and the building was inspected by Bureau Veritas. ISG Ltd was the contractor.
Structural features
The building has a triangular base and covers 18,150 square metres. It has two long sides, covered in 1,300 GGBS-containing four-metre-high vertical precast concrete fins, of which there are nine types. GGBS, a by-product of iron-making blast furnaces, was developed by Uren's company as a substitute for cement that produces a fraction of the carbon emissions. The fins act to shade the building from the sun.
It contains a 160-seat auditorium, social space, cleanrooms, and a futuristic outpatient facility. It houses the School of Public Health's Environmental Research Group led by Frank Kelly, the MSk Lab led by Justin Cobb and Alison McGregor, the Dementia Research Institute, the Centre for Cardiovascular Bioengineering, and 20 companies. Members of the National Heart and Lung Institute (NHLI) occupy space on the ninth floor.
It has space for functions and exhibitions in the main entrance, adjacent to the ground floor cafe. The auditorium and its foyer on the lower ground floor can be accessed via the main entrance, and the research floors above can be accessed via secure entry. A discreet second entrance near a vehicle drop-off and pick-up point serves the clinical facility. There are toilets on all floors.
Gallery
References
External links
Research institutes in London
Engineering research institutes
2020 establishments in England
Laboratories in the United Kingdom
High-tech architecture
Modern architecture in the United Kingdom | Sir Michael Uren Hub | Engineering | 639 |
7,163,297 | https://en.wikipedia.org/wiki/Oxygen-17 | Oxygen-17 (17O) is a low-abundance, natural, stable isotope of oxygen (0.0373% in seawater; approximately twice as abundant as deuterium).
As the only stable isotope of oxygen possessing a nuclear spin (+5/2) and a favorable characteristic of field-independent relaxation in liquid water, 17O enables NMR studies of oxidative metabolic pathways through compounds containing 17O (i.e. metabolically produced H217O water by oxidative phosphorylation in mitochondria) at high magnetic fields.
Water used as a nuclear reactor coolant is subjected to intense neutron flux. Natural water starts out with 373 ppm of 17O; heavy water starts out incidentally enriched to about 550 ppm of oxygen-17. The neutron flux slowly converts 16O in the cooling water to 17O by neutron capture, increasing its concentration. The neutron flux also slowly converts 17O (a reaction with a much greater cross section) in the cooling water to carbon-14, an undesirable product that can escape to the environment:
17O(n,α)14C
Some tritium removal facilities make a point of replacing the oxygen of the water with natural oxygen (mostly 16O) to give the added benefit of reducing 14C production.
History
The isotope was first hypothesized and subsequently imaged by Patrick Blackett in Rutherford's lab in 1925.
It was a product of the first man-made transmutation, of 14N by 4He2+, conducted by Frederick Soddy and Ernest Rutherford in 1917–1919. Its natural abundance in Earth's atmosphere was later detected in 1929 by Giauque and Johnston in absorption spectra.
References
Environmental isotopes
Isotopes of oxygen | Oxygen-17 | Chemistry | 360 |
22,751,476 | https://en.wikipedia.org/wiki/Meredith%20effect | The Meredith effect is a phenomenon whereby the aerodynamic drag produced by a cooling radiator may be offset by careful design of the cooling duct such that useful thrust is produced by the expansion of the hot air in the duct. The effect was discovered in the 1930s and became more important as the speeds of piston-engined aircraft increased over the next decade.
The Meredith effect occurs when air flowing through a duct is heated by a heat-exchanger or radiator containing a hot working fluid. Typically the fluid is a coolant carrying waste heat from an internal combustion engine.
The duct must be travelling at a significant speed with respect to the air for the effect to occur. Air flowing into the duct meets drag resistance from the radiator surface and is compressed due to the ram air effect. As the air flows through the radiator it is heated, raising its temperature slightly and increasing its volume. The hot, pressurised air then exits through the exhaust duct, which is shaped to be convergent, i.e. to narrow towards the rear. This accelerates the air backwards, and the reaction of this acceleration against the installation provides a small forward thrust. The air expands and decreases in temperature as it passes along the duct, before emerging to join the external air flow. Thus, the three processes of an open Brayton cycle are achieved: compression, heat addition at constant pressure, and expansion. The thrust obtainable depends upon the pressure ratio between the inside and outside of the duct and the temperature of the coolant. The higher boiling point of ethylene glycol compared to water allows the air to attain a higher temperature, increasing the specific thrust.
If the generated thrust is less than the aerodynamic drag of the ducting and radiator, then the arrangement serves to reduce the net aerodynamic drag of the radiator installation. If the generated thrust exceeds the aerodynamic drag of the installation, then the entire assemblage contributes a net forward thrust to the vehicle.
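The balance between net drag and net thrust can be sketched with a simple momentum budget. The Python snippet below is only an illustrative estimate under assumed numbers (mass flow, flight speed, exhaust velocity are invented for the example); it ignores the radiator's pressure drag and other installation losses, so it is not a validated duct model.

```python
# Minimal sketch of the Meredith-effect momentum balance.
# All numeric values are illustrative assumptions, not data for any real aircraft.

def net_duct_force(m_dot, v_flight, v_exit):
    """Net axial force on the duct from momentum flux, in newtons.

    m_dot    : air mass flow through the radiator duct (kg/s)
    v_flight : free-stream (flight) speed, i.e. intake velocity (m/s)
    v_exit   : exhaust velocity after heating and the convergent nozzle (m/s)
    """
    momentum_drag = m_dot * v_flight  # momentum surrendered by the captured air
    gross_thrust = m_dot * v_exit     # momentum of the heated, accelerated jet
    return gross_thrust - momentum_drag

# Heating raises the exhaust velocity; once v_exit exceeds v_flight,
# the installation produces net thrust rather than net drag.
print(net_duct_force(m_dot=3.0, v_flight=180.0, v_exit=150.0))  # -90.0 N: net drag
print(net_duct_force(m_dot=3.0, v_flight=180.0, v_exit=195.0))  # +45.0 N: net thrust
```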
The Meredith effect inspired the early American work on the aero-thermodynamic duct or ramjet, due to the similarity of their principles of operation. In more recent times the phenomenon has been utilised in racing cars by mounting the engine cooling radiators in tunnels.
History
F. W. Meredith was a British engineer working at the Royal Aircraft Establishment (RAE), Farnborough. Reflecting on the principles of liquid cooling, he realized that what was conventionally regarded as waste heat, to be transferred to the atmosphere by a coolant in a radiator, need not be lost. The heat adds energy to the airflow and, with careful design, this may be used to generate thrust. The work was published in 1936.
The phenomenon became known as the "Meredith effect" and was quickly adopted by the designers of prototype fighter aircraft then under development, including the Supermarine Spitfire and Hawker Hurricane whose Rolls-Royce PV-12 engine, later named the Merlin, was cooled by ethylene glycol. An early example of a Meredith effect radiator was incorporated in the design of the Spitfire for the first flight of the prototype on 5 March 1936.
Many engineers did not understand the operating principles of the effect. A common mistake was the idea that the air-cooled radial engine would benefit most, because its fins ran hotter than the radiator of a liquid-cooled engine, with the mistake persisting even as late as 1949.
See also
Brayton cycle
References
1930s aircraft piston engines
Aircraft aerodynamics
Aerospace engineering | Meredith effect | Engineering | 705 |
3,367,262 | https://en.wikipedia.org/wiki/Circular%20convolution | Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. Periodic convolution arises, for example, in the context of the discrete-time Fourier transform (DTFT). In particular, the DTFT of the product of two discrete sequences is the periodic convolution of the DTFTs of the individual sequences. And each DTFT is a periodic summation of a continuous Fourier transform function (see ). Although DTFTs are usually continuous functions of frequency, the concepts of periodic and circular convolution are also directly applicable to discrete sequences of data. In that context, circular convolution plays an important role in maximizing the efficiency of a certain kind of common filtering operation.
Definitions
The periodic convolution of two T-periodic functions, $h_T(t)$ and $x_T(t)$, can be defined as:

$(h_T * x_T)(t) \triangleq \int_{t_0}^{t_0+T} h_T(\tau)\, x_T(t - \tau)\, d\tau,$

where $t_0$ is an arbitrary parameter. An alternative definition, in terms of the notation of normal linear or aperiodic convolution, follows from expressing $h_T(t)$ and $x_T(t)$ as periodic summations of aperiodic components $h$ and $x$, i.e.:

$h_T(t) \triangleq \sum_{k=-\infty}^{\infty} h(t - kT), \qquad x_T(t) \triangleq \sum_{k=-\infty}^{\infty} x(t - kT)$

Then:

$(h_T * x_T)(t) = \int_{-\infty}^{\infty} h(\tau)\, x_T(t - \tau)\, d\tau \triangleq (h * x_T)(t)$

Both forms can be called periodic convolution. The term circular convolution arises from the important special case of constraining the non-zero portions of both $h$ and $x$ to the interval $[0, T]$. Then the periodic summation becomes a periodic extension, which can also be expressed as a circular function:

$x_T(t) = x(t \bmod T)$ (for any real number $t$)

And the limits of integration reduce to the length of function $h$:

$(h * x_T)(t) = \int_0^T h(\tau)\, x\big((t - \tau) \bmod T\big)\, d\tau$
Discrete sequences
Similarly, for discrete sequences, and a parameter N, we can write a circular convolution of aperiodic functions $h$ and $x$ as:

$(h * x_N)[n] \triangleq \sum_{m=-\infty}^{\infty} h[m] \cdot x_N[n - m] = \sum_{m=-\infty}^{\infty} h[m] \cdot x\big[(n - m) \bmod N\big]$
This function is N-periodic. It has at most N unique values. For the special case that the non-zero extent of both x and h are ≤ N, it is reducible to matrix multiplication where the kernel of the integral transform is a circulant matrix.
Example
A case of great practical interest is illustrated in the figure. The duration of the x sequence is N (or less), and the duration of the h sequence is significantly less. Then many of the values of the circular convolution are identical to values of x∗h, which is actually the desired result when the h sequence is a finite impulse response (FIR) filter. Furthermore, the circular convolution is very efficient to compute, using a fast Fourier transform (FFT) algorithm and the circular convolution theorem.
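A small NumPy sketch makes the relationship concrete. The code below (illustrative, with arbitrary random signal values) evaluates the N-point circular convolution both by the direct modular-index sum defined above and via the circular convolution theorem (pointwise multiplication of FFTs), then checks that the unaliased outputs match the linear convolution.

```python
import numpy as np

def circular_convolution_direct(x, h, N):
    """Direct evaluation of (h * x_N)[n] = sum_m h[m] * x[(n - m) mod N]."""
    xp = np.zeros(N); xp[:len(x)] = x   # zero-pad both inputs to length N
    hp = np.zeros(N); hp[:len(h)] = h
    y = np.zeros(N)
    for n in range(N):
        for m in range(N):
            y[n] += hp[m] * xp[(n - m) % N]
    return y

rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)   # the "long" input block
h = rng.standard_normal(4)   # a short FIR filter

direct = circular_convolution_direct(x, h, N)
via_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real
assert np.allclose(direct, via_fft)   # circular convolution theorem

# Because len(x) + len(h) - 1 > N here, the first len(h)-1 outputs are
# time-aliased; the remaining outputs equal the linear convolution x*h.
assert np.allclose(direct[len(h) - 1:], np.convolve(x, h)[len(h) - 1:N])
```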
There are also methods for dealing with an x sequence that is longer than a practical value for N. The sequence is divided into segments (blocks) and processed piecewise. Then the filtered segments are carefully pieced back together. Edge effects are eliminated by overlapping either the input blocks or the output blocks. To help explain and compare the methods, we discuss them both in the context of an h sequence of length 201 and an FFT size of N = 1024.
Overlapping input blocks
This method uses a block size equal to the FFT size (1024). We describe it first in terms of normal or linear convolution. When a normal convolution is performed on each block, there are start-up and decay transients at the block edges, due to the filter latency (200-samples). Only 824 of the convolution outputs are unaffected by edge effects. The others are discarded, or simply not computed. That would cause gaps in the output if the input blocks are contiguous. The gaps are avoided by overlapping the input blocks by 200 samples. In a sense, 200 elements from each input block are "saved" and carried over to the next block. This method is referred to as overlap-save, although the method we describe next requires a similar "save" with the output samples.
When an FFT is used to compute the 824 unaffected DFT samples, we don't have the option of not computing the affected samples, but the leading and trailing edge-effects are overlapped and added because of circular convolution. Consequently, the 1024-point inverse FFT (IFFT) output contains only 200 samples of edge effects (which are discarded) and the 824 unaffected samples (which are kept). To illustrate this, the fourth frame of the figure at right depicts a block that has been periodically (or "circularly") extended, and the fifth frame depicts the individual components of a linear convolution performed on the entire sequence. The edge effects are where the contributions from the extended blocks overlap the contributions from the original block. The last frame is the composite output, and the section colored green represents the unaffected portion.
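The block arithmetic just described can be sketched as follows. This is an illustrative overlap-save implementation using this article's example sizes (a 201-tap filter and 1024-point FFTs); the function and variable names are my own, and it is not production DSP code.

```python
import numpy as np

def overlap_save(x, h, N=1024):
    """Filter a long sequence x with FIR filter h using N-point FFTs (overlap-save)."""
    M = len(h)                      # 201 in the example above
    step = N - (M - 1)              # 824 uncorrupted outputs per block
    H = np.fft.fft(h, N)
    # Prepend M-1 zeros (the "saved" samples for the first block); pad the tail.
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(N)])
    out = []
    for start in range(0, len(x) + M - 1, step):
        block = xp[start:start + N]
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[M - 1:])       # discard the M-1 edge-affected samples
    return np.concatenate(out)[:len(x) + M - 1]

rng = np.random.default_rng(1)
h = rng.standard_normal(201)
x = rng.standard_normal(5000)
assert np.allclose(overlap_save(x, h), np.convolve(x, h))  # matches linear convolution
```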
Overlapping output blocks
This method is known as overlap-add. In our example, it uses contiguous input blocks of size 824 and pads each one with 200 zero-valued samples. Then it overlaps and adds the 1024-element output blocks. Nothing is discarded, but 200 values of each output block must be "saved" for the addition with the next block. Both methods advance only 824 samples per 1024-point IFFT, but overlap-save avoids the initial zero-padding and final addition.
See also
Convolution theorem
Circulant matrix
Discrete Hilbert transform
Page citations
References
Further reading
Functional analysis
Image processing
Binary operations | Circular convolution | Mathematics | 1,095 |
1,206,615 | https://en.wikipedia.org/wiki/Micro%20pitting | Micro pitting is a fatigue failure of the surface of a material commonly seen in rolling bearings and gears.
It is also known as grey staining, micro spalling or frosting.
Pitting and micropitting
The difference between pitting and micropitting is the size of the pits left by surface fatigue. Pits formed by micropitting are approximately 10–20 μm deep, and to the unaided eye micropitting appears dull, etched or stained, with patches of gray. Normal pitting creates larger and more visible pits. Micropits originate from the local contact of asperities brought about by inadequate lubrication.
Causes
In a normal bearing the surfaces are separated by a layer of oil, this is known as elastohydrodynamic (EHD) lubrication. If the thickness of the EHD film is of the same order of magnitude as the surface roughness, the surface topography is able to interact and cause micro pitting. A thin EHD film may be caused by excess load or temperature, a lower oil viscosity than is required, low speed or water in the oil. Water in the oil can make micro pitting worse by causing hydrogen embrittlement of the surface. Micro pitting occurs only under poor EHD lubrication conditions and although it can affect all types of gears, it can be particularly troublesome in heavily loaded gears with hardened teeth.
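Tribology texts commonly condense this film-versus-roughness comparison into the specific film thickness, or lambda ratio, λ = h_min / σ, where σ is the combined RMS roughness of the two surfaces; values of λ near or below 1 indicate the asperity contact that drives micropitting. The article does not state this formula explicitly, so the sketch below is a standard-textbook illustration with assumed numbers.

```python
import math

def lambda_ratio(h_min_um, rq1_um, rq2_um):
    """Specific film thickness: EHD film thickness over combined RMS roughness."""
    sigma = math.hypot(rq1_um, rq2_um)   # sqrt(Rq1**2 + Rq2**2)
    return h_min_um / sigma

# Assumed values: 0.15 um minimum film; 0.25 um and 0.30 um RMS roughnesses.
lam = lambda_ratio(0.15, 0.25, 0.30)
print(f"lambda = {lam:.2f}")  # ~0.38: film thinner than roughness -> micropitting risk
```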
A surface with a deep scratch might break exactly at the scratch when stress is applied, and surface roughness can be thought of as a composite of many very small scratches, so high surface roughness decreases the durability of heavily stressed parts. To get a good overview of the surface, an areal scan (see surface metrology) gives more information than a measurement along a single profile (profilometer). Surface roughness can be quantified using ISO 25178.
See also
Pitting corrosion
Corrosion
References
Corrosion
Materials degradation | Micro pitting | Chemistry,Materials_science,Engineering | 395 |
17,740,035 | https://en.wikipedia.org/wiki/MOA-2007-BLG-192Lb | MOA-2007-BLG-192Lb, occasionally shortened to MOA-192 b, is an extrasolar planet approximately 7,000 light-years away in the constellation of Sagittarius. The planet was discovered orbiting the low-mass star MOA-2007-BLG-192L. It was found when it caused a gravitational microlensing event on May 24, 2007, which was detected as part of the MOA-II microlensing survey at the Mount John University Observatory in New Zealand.
The mass of the planet is not well-known; it is estimated at anywhere between 2.75 and 105 Earth masses, although a narrower range within that interval is considered more likely. The mass range also means that the planet's classification varies, from a super-Earth to a sub-Saturn. It is located 2.02 astronomical units from its host star.
Host star
MOA-2007-BLG-192L is a red dwarf star, one of the smallest and least massive types of stars, as well as one of the most numerous in the Milky Way. It was initially estimated to have a mass of 6% the mass of the Sun, which would probably be too low to sustain nuclear fusion at its core, making it a dimly glowing brown dwarf. However, this mass was based on an erroneous parallax, and a further analysis suggests a higher mass, which would make it a red dwarf.
Both MOA-2007-BLG-192L and its planet are located at a distance of approximately 7,000 light-years from Earth, in the direction of the constellation Sagittarius.
Notes
References
External links
Sagittarius (constellation)
Super-Earths
Terrestrial planets
Exoplanets discovered in 2008
Exoplanets detected by microlensing | MOA-2007-BLG-192Lb | Astronomy | 364 |
14,164,235 | https://en.wikipedia.org/wiki/MTA1 | Metastasis-associated protein MTA1 is a protein that in humans is encoded by the MTA1 gene. MTA1 is the founding member of the MTA family of genes. MTA1 is primarily localized in the nucleus but also found to be distributed in the extra-nuclear compartments. MTA1 is a component of several chromatin remodeling complexes including the nucleosome remodeling and deacetylation complex (NuRD). MTA1 regulates gene expression by functioning as a coregulator to integrate DNA-interacting factors to gene activity. MTA1 participates in physiological functions in the normal and cancer cells. MTA1 is one of the most upregulated proteins in human cancer and associates with cancer progression, aggressive phenotypes, and poor prognosis of cancer patients.
Discovery
MTA1 was first cloned by Toh, Pencil and Nicolson in 1994 as a differentially expressed gene in a highly metastatic rat breast cancer cell line. The role of MTA1 in chromatin remodeling was deduced from the presence of MTA1 polypeptides in the NuRD complex. The first direct target of the MTA1-NuRD complex was ERα. MTA2 was initially recognized as an MTA1-like 1 gene, named MTA1-L1, as a randomly selected clone from a large-scale sequencing effort of human cDNAs by Takashi Tokino's laboratory. MTA2's suspected role in chromatin remodeling was inferred from the prevalence of MTA2 polypeptides in the NuRD complex in a proteomic study.
Gene and spliced variants
MTA1 is 715/703 amino acids long, encoded by one of three genes of the MTA family, and is localized on chromosome 14q32 in humans and on chromosome 12F in mice. There are 21 exons spread over a region of about 51 kb in human MTA1. Alternative splicing from 21 exons generates 20 transcripts, ranging from 416 bp to 2.9 kb long. However, open reading frames are present only in eight spliced transcripts, which code for six proteins and two polypeptides; the remaining transcripts are non-coding long RNAs, some of which retain intron sequences. Murine Mta1 contains three protein-coding transcripts and three non-coding RNA transcripts. Among human MTA1 variants, only two spliced variants are characterized: the ZG29p variant, derived from the C-terminal portion of MTA1, with 251 amino acids and a molecular weight of 29 kDa; and the MTA1s variant, generated by alternative splicing of a middle exon followed by a frame shift, with 430 amino acids and a molecular weight of 47 kDa.
Protein domains
The conserved domains of MTA1 include a BAH (Bromo-Adjacent Homology), an ELM2 (egl-27 and MTA1 homology), a SANT (SWI, ADA2, N-CoR, TFIIIB-B) and a GATA-like zinc finger. The C-terminal divergent region of MTA1 has an Src homology 3-binding domain, acidic regions, and nuclear localization signals. The presence of these domains revealed the role of MTA1 in interactions with modified or unmodified histone and non-histone proteins, chromatin remodeling, and modulation of gene transcription. MTA1 undergoes multiple post-translation modifications: acetylation on lysine 626, ubiquitination on lysine 182 and lysine 626, sumoylation on lysine 509, and methylation on lysine 532. The structural insights of MTA1 domains are deduced from studies involving complexes with HDAC1 or RbAp48 subunits of the NuRD complexes. The MTA1s variant is an N-terminal portion of MTA1 without nuclear localization sequence but contains a novel sequence of 33 amino acids in its C-terminal region. The novel sequence harbors a nuclear receptor binding motif LXXLL which confers MTA1 with an ability to interact with estrogen receptor alpha or other type I nuclear receptors. The ZG29p variant represents the c-terminal MTA1 with two proline-rich SH3 binding sites.
Regulation
Expression of MTA1 is influenced by transcriptional and non-transcriptional mechanisms. MTA1 expression is regulated by growth factors, growth factor receptors, oncogenes, environmental stress, ionizing radiation, inflammation, and hypoxia. The transcription of MTA1 is stimulated by transcription factors including c-Myc, SP1, CUTL1 homeodomain, NF-κB, HSF1, HIF-1a, and the Clock/BMAL1 complex, and is inhibited by p53. Non-genomic mechanisms of MTA1 expression include post-transcriptional regulation, such as ubiquitination by the RING-finger ubiquitin-protein ligase COP1, interaction with the tumor suppressor ARF, or micro-RNAs such as miR-30c, miR-661 and miR-125a-3p.
Targets
Functions of MTA1 are regulated by its post-translational modifications, modulating the roles of effector molecules, interacting with other regulatory proteins and chromatin remodeling machinery, and modulating the expression of target genes via interacting with the components of the NuRD complex including HDACs.
MTA1 suppresses transcription of breast cancer type 1 susceptibility gene, PTEN, p21WAF, guanine nucleotide-binding protein G(i) subunit alpha-2, SMAD family member 7, nuclear receptor subfamily 4 group A member 1, and homeobox protein SIX3, and represses BCL11B as well as E-cadherin expression.
MTA1 is a dual coregulator, as it stimulates the transcription of Stat3, breast cancer-amplified sequence 3, FosB, paired box gene 5, transglutaminase 2, myeloid differentiation primary response 88, the tumor suppressor p14/p19ARF, tyrosine hydroxylase, the clock gene CRY1, SUMO2, and, via release of their transcriptional inhibition by the homeodomain protein Six3, Wnt1 and rhodopsin.
MTA1 interacts with ERα and coregulatory factors such as MAT1, MICoA, and LMO4, which inhibits ER transactivation activity. MTA1 also deacetylates its target proteins, such as p53 and HIF, and modulates their transactivation functions. Furthermore, MTA1 could potentially modulate the expression of target genes through the microRNA network, as MTA1 knockdown results in modulation of miR-210, miR-125b, miR-194, miR-103, and miR-500.
Cellular functions
MTA1 modulates the expression of target genes due to its ability to act as a corepressor or coactivator. MTA1 targets and/or effector pathways regulate cellular functions in both normal and cancer cells. Physiological functions of MTA1 include: its role in the brain due to MTA1 interactions with DJ1 and endophilin-3; regulation of rhodopsin expression in the mouse eye; modification of circadian rhythm due to MTA1 interactions with the CLOCK-BMAL1 complex and stimulation of Cry transcription; heart development, due to the MTA1-FOG2 interaction; mammary gland development, as MTA1 depletion leads to ductal hypobranching; spermatogenesis; immunomodulation, due to differential effects on the expression of cytokines in resting and activated macrophages; liver regeneration following hepatic injury; differentiation of mesenchymal stem cells along the osteogenic axis; and a role as a component of the DNA-damage response. In cancer cells, MTA1 and its downstream effectors regulate genes and/or pathways with roles in transformation, invasion, survival, angiogenesis, epithelial-to-mesenchymal transition, metastasis, DNA damage response, and hormone-independence of breast cancer.
Notes
References
External links
Transcription factors | MTA1 | Chemistry,Biology | 1,701 |
1,505,909 | https://en.wikipedia.org/wiki/Radio%20silence | In telecommunications, radio silence or emissions control (EMCON) is a status in which all fixed or mobile radio stations in an area are asked to stop transmitting for safety or security reasons.
The term "radio station" may include anything capable of transmitting a radio signal. A single ship, aircraft, or spacecraft, or a group of them, may also maintain radio silence.
Amateur radio Wilderness Protocol
The Wilderness Protocol recommends that those stations able to do so monitor the primary (and secondary, if possible) frequency for 5 minutes every three hours starting at 7 AM local time; stations with sufficient resources may also monitor for 5 minutes starting at the top of every hour, or even continuously.
The Wilderness Protocol is now included in both the ARRL ARES Field Resources Manual and the ARES Emergency Resources Manual. Per the manual, the protocol is:
The Wilderness protocol (see page 101, August 1995 QST) calls for hams in the wilderness to announce their presence on, and to monitor, the national calling frequencies for five minutes beginning at the top of the hour, every three hours from 7 AM to 7 PM while in the back country. A ham in a remote location may be able to relay emergency information through another wilderness ham who has better access to a repeater. National calling frequencies: 52.525, 146.52, 223.50, 446.00, 1294.50 MHz.
Priority transmissions should begin with the LITZ (Long Interval Tone Zero or Long Time Zero) DTMF signal for at least 5 seconds. CQ like calls (to see who is out there) should not take place until after 4 minutes after the hour.
Maritime mobile service
Distress calls
Radio silence can be used in nautical and aeronautical communications to allow faint distress calls to be heard (see Mayday). In the maritime service, the controlling station can order other stations to stop transmitting with the proword "Seelonce Seelonce Seelonce". (The word is an approximation of the French pronunciation of the word silence, "See-LAWNCE".) Once the need for radio silence is finished, the controlling station lifts radio silence with the prowords "Seelonce FINI". Disobeying a Seelonce Mayday order constitutes a serious criminal offence in most countries. The aviation equivalent of Seelonce Mayday is the phrase or command "Stop Transmitting - Distress (or Mayday)". "Distress traffic ended" is the phrase used when the emergency is over. Again, disobeying such an order is extremely dangerous and is therefore a criminal offence in most countries.
Silent periods
Up until the procedure was replaced by the Global Maritime Distress and Safety System (August 1, 2013 in the U.S.), maritime radio stations were required to observe radio silence on 500 kHz (radiotelegraph) for the three minutes between 15 and 18 minutes past the top of each hour, and for the three minutes between 45 and 48 minutes past the top of the hour; and were also required to observe radio silence on 2182 kHz (upper-sideband radiotelephony) for the first three minutes of each hour (H+00 to H+03) and for the three minutes following the bottom of the hour (H+30 to H+33).
For 2182 kHz, this is still a legal requirement, according to 47 CFR 80.304 - Watch requirement during silence periods.
Military
An order for Radio silence is generally issued by the military where any radio transmission may reveal troop positions, either audibly from the sound of talking, or by radio direction finding. In extreme scenarios Electronic Silence ('Emissions Control' or EMCON) may also be put into place as a defence against interception.
In the British Army, the imposition and lifting of radio silence will be given in orders or ordered by control using 'Battle Code' (BATCO). Control is the only authority to impose or lift radio silence either fully or selectively. The lifting of radio silence can only be ordered on the authority of the HQ that imposed it in the first place. During periods of radio silence a station may, with justifiable cause, transmit a message. This is known as Breaking Radio Silence. The necessary replies are permitted but radio silence is automatically re-imposed afterwards. The breaking station transmits its message using BATCO to break radio silence.
The command for imposing radio silence is:
Hello all stations, this is 0. Impose radio silence. Over.
Other countermeasures are also applied to protect secrets against enemy signals intelligence.
Electronic emissions can be used to plot a line of bearing to an intercepted signal, and if more than one receiver detects it, triangulation can estimate its location. Radio direction finding (RDF) was critically important during the Battle of Britain, and it reached a high state of maturity in early 1943, when United States institutions aided British research and development under the pressures of the continuing Battle of the Atlantic in World War II, particularly in locating U-boats. One key breakthrough was marrying MIT/Raytheon-developed CRT technology with pairs of RDF antennas, giving a differentially derived instant bearing useful in tactical situations and enabling escorts to run down the bearing to an intercept. The U-boat command required its wolfpacks to check in by radio at least once daily, allowing the new hunter-killer groups to localize U-boats tactically from April onward. This led to dramatic swings in the fortunes of war between March 1943, when the U-boats sank over 300 Allied ships, and "Black May", when the Allies sank at least 44 U-boats, none of which had orders to exercise EMCON/radio silence.
Other uses
Radio silence can be maintained for other purposes, such as for highly sensitive radio astronomy. Radio silence can also occur for spacecraft whose antenna is temporarily pointed away from Earth in order to perform observations, or there is insufficient power to operate the radio transmitter, or during re-entry when the hot plasma surrounding the spacecraft blocks radio signals.
In the USA, CONELRAD and EBS (which are now discontinued), and EAS (which is currently active) are also ways of maintaining radio silence, mainly in broadcasting, in the event of an attack.
Examples of radio silence orders
Radio silence helped hide the Japanese attack on Pearl Harbor in World War II; the attackers had used AM radio station KGU in Honolulu as a homing signal.
On June 2, 1942, during World War II, a nine-minute air-raid alert was issued; at 9:22 pm, a radio silence order was applied to all radio stations from Mexico to Canada.
In January 1965, the Syrian Armed Forces observed a period of radio silence that allowed them to detect the transmissions of Mossad spy Eli Cohen, who was relaying espionage information to Israel.
See also
Dead air
Guard band
Mapimí Silent Zone
Radio quiet zone
CONELRAD
References
Military communications
Radio communications
Spacecraft communication
Emergency Alert System
Civil defense
Silence | Radio silence | Engineering | 1,386 |
43,249,202 | https://en.wikipedia.org/wiki/Pencil%20Code | The Pencil Code is a high-order finite-difference code for solving partial differential equations, written in Fortran 95. The code is designed for efficient computation with massive parallelization. Due to its modular structure, it can be used for a large variety of physical setups like hydro- and magnetohydrodynamics relevant for, e.g., astrophysics, geophysics, cosmology, turbulence, and combustion. Many such setups are available as ready-to-run samples. Pencil Code is free software released under the GNU GPL v2.
Methods
The computational scheme is finite-difference and non-conservative; the time integration is implemented by an explicit scheme. Due to the usage of the vector potential, the magnetic field is intrinsically divergence free. High-order (4th, 6th, and 10th order, as well as single-sided or upwind) derivatives are available to resolve strong variations on the grid scale. With a set of automated tests, the functionality of the code is validated on a daily basis. MPI is used for parallelization, but the code can also be run non-parallel on a simple PC. There are modules for different time-integration schemes (e.g. three-step Runge–Kutta), treatment of shocks, embedded particle dynamics, chemistry, massive parallel I/O, etc.
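As an illustration of what "high-order finite differences" means in practice, the following Python sketch applies the standard sixth-order central-difference stencil for a first derivative on a periodic grid. It is a generic textbook stencil, not code taken from the Pencil Code itself.

```python
import numpy as np

def ddx_6th(f, dx):
    """Sixth-order central difference of a periodic 1-D field f."""
    # Standard stencil: f'[i] ~ (-f[i-3] + 9 f[i-2] - 45 f[i-1]
    #                            + 45 f[i+1] - 9 f[i+2] + f[i+3]) / (60 dx)
    return (-np.roll(f, 3) + 9 * np.roll(f, 2) - 45 * np.roll(f, 1)
            + 45 * np.roll(f, -1) - 9 * np.roll(f, -2) + np.roll(f, -3)) / (60 * dx)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx_6th(np.sin(x), dx) - np.cos(x)))
print(err)  # ~1e-8 on 64 points; halving dx shrinks the error by roughly 2**6
```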
Applications
The Pencil Code has mainly been applied to describe compressible turbulence and resistive magnetohydrodynamics. Applications include studies of planet formation, the solar dynamo, mono-chromatic radiative transfer, the coronal heating problem, debris disks, turbulent combustion of solid fuels, and others.
History
The Pencil Code development was started in 2001 by Axel Brandenburg and Wolfgang Dobler during the 'Helmholtz Summer School' at the Helmholtz Research Centre for Geosciences in Potsdam. It was initially used for MHD turbulence simulations. Development has been continued by a team of about ten code owners and around 90 additional developers who have extended the code for their scientific research, and it is used by further researchers from various branches of science. The code repository was hosted at NORDITA until 2008 and was then moved to Google Developers. In April 2015 the code was migrated to GitHub. Since June 2018 the Pencil Code has supported the HDF5 data format.
References
External links
with SVN and GIT repository
GitHub page with issue tracker
Magnetohydrodynamics
Computational fluid dynamics
Simulation software
Parallel computing
Fortran software
Free astronomy software | Pencil Code | Physics,Chemistry | 508 |
1,593,142 | https://en.wikipedia.org/wiki/List%20of%20Usenet%20newsreaders | Usenet is a worldwide, distributed discussion system that uses the Network News Transfer Protocol (NNTP). Programs called newsreaders are used to read and post messages (called articles or posts, and collectively termed news) to one or more newsgroups. Users must have access to a news server to use a newsreader. This is a list of such newsreaders.
Types of clients
Text newsreader – designed primarily for reading/posting text posts; unable to download binary attachments
Traditional newsreader – a newsreader with text support that can also handle binary attachments, though sometimes less efficiently than more specialized clients
Binary grabber/plucker – designed specifically for easy and efficient downloading of multi-part binary post attachments; limited or nonexistent reading/posting ability. These generally offer multi-server and multi-connection support. Most now support NZBs, and several either support or plan to support automatic Par2 processing. Some additionally support video and audio streaming.
NZB downloader – binary grabber client without header support – cannot browse groups or read/post text messages; can only load 3rd-party NZBs to download binary post attachments. Some incorporate an interface for accessing selected NZB search websites.
Binary posting client – designed specifically and exclusively for posting multi-part binary files
Combination client – Jack-of-all-trades supporting text reading/posting, as well as multi-segment binary downloading and automatic Par2 processing
Web-Based Client - Client designed for access through a web browser and does not require any additional software to access Usenet.
Active
Commercial software
BinTube
Forté Agent
NewsBin
NewsLeecher
Novell GroupWise
Postbox
Turnpike
Usenet Explorer
Freeware
GrabIt
Free/Open-source software
Claws Mail is a GTK+-based email and news client for Linux, BSD, Solaris, and Windows.
GNOME Evolution
Gnus, is an email and news client, and feed reader for GNU Emacs.
Mozilla Thunderbird is a free and open-source cross-platform email client, news client, RSS and chat client developed by the Mozilla Foundation.
Pan a full-featured text and binary NNTP and Usenet client for Linux, FreeBSD, NetBSD, OpenBSD, OpenSolaris, and Windows.
SeaMonkey Mail & Newsgroups
Sylpheed
X Python Newsreader
Text-based
Alpine
Gnus (Emacs based)
Line Mode Browser
Lynx (has limited Usenet support)
Mutt (3rd party patches)
rn
Slrn
tin
Web-based
Easynews
Narkive
Nemo
Newsgrouper
novaBBS
See Web-based Usenet for details.
Discontinued
Commercial software
Lotus Notes
Netscape Communicator (superseded by Mozilla)
Windows Mail – replaced Outlook Express for Windows Vista – terminated by Windows 7
Windows Live Mail – replaced Outlook Express for Windows XP; optional for Windows XP, Windows Vista, and Windows 7
Freeware
Opera Mail
Xnews – MS Windows
MT NewsWatcher – Mac OS X Universal Binary
Free/Open Source
Arachne (with aranews.apm package)
Arena
Argo
Beonex Communicator
KNode (may be embedded in Kontact)
Mozilla Mail & Newsgroups (renamed to SeaMonkey)
Spotnet
Shareware
Unison – Mac OS X
Text-based
Agora (email server)
Pine
Web-based
Google Groups – discontinued on February 22, 2024
See also
Comparison of Usenet newsreaders
List of newsgroups
References
External links
Usenet newsreaders
List | List of Usenet newsreaders | Technology | 736 |
57,971,303 | https://en.wikipedia.org/wiki/Server-side%20request%20forgery | Server-side request forgery (SSRF) is a type of computer security exploit where an attacker abuses the functionality of a server causing it to access or manipulate information in the realm of that server that would otherwise not be directly accessible to the attacker.
Similar to cross-site request forgery, which uses a web client (for example, a web browser) within the domain as a proxy for attacks, an SSRF attack uses a vulnerable server within the domain as a proxy.
If a parameter of a URL is vulnerable to this attack, it is possible an attacker can devise ways to interact with the server directly (via localhost) or with the backend servers that are not accessible by the external users. An attacker can practically scan the entire network and retrieve sensitive information.
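A minimal sketch of the vulnerable pattern follows (illustrative Flask code; the endpoint and parameter names are hypothetical). Because the server fetches whatever URL the client supplies, a request such as ?url=http://localhost:8080/admin or ?url=http://169.254.169.254/ is made from the server's own network position.

```python
# Illustrative SSRF-vulnerable endpoint -- for explanation only, do not deploy.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url")        # attacker-controlled value
    resp = requests.get(url, timeout=5)  # the server fetches it blindly (the flaw)
    return resp.text                     # "basic" SSRF: the response is echoed back
```

Typical mitigations include an allow-list of permitted hosts and schemes, and resolving and validating the destination address before connecting, precisely because the request originates from the server rather than the client.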
Types
Basic
In this type of attack the response is displayed to the attacker. The server fetches the URL requested by the attacker and sends the response back to the attacker.
Blind
In this type of attack the response is not sent back to the attacker. Therefore, the attacker has to devise ways to confirm this vulnerability.
References
Computer security exploits
Internet security
Deception
Security breaches | Server-side request forgery | Technology | 234 |
71,881,524 | https://en.wikipedia.org/wiki/BE%20Ursae%20Majoris | BE Ursae Majoris is a binary star system in the northern circumpolar constellation of Ursa Major, abbreviated BE UMa. The two components are an unusual M-class dwarf star and a subdwarf O star, borderline white dwarf. It is classified as a detached Algol variable and ranges in brightness from an apparent visual magnitude of 14.8 down to 17.8. This is too faint to be visible to the naked eye. The distance to this system is approximately 4,600 light years based on parallax measurements.
The variability of SVS 1424 was announced in 1964 by N. E. Kurochkin of the Sternberg Astronomical Institute; it was found to have a period of 2.291 days while ranging in brightness from magnitude 14.1 down to 15.6. After being assigned the variable star designation BE UMa, it was discovered to be a source of hot ultraviolet emission with a helium-rich spectrum by D. H. Ferguson and associates in 1981. B. Margon and associates found variability of spectral features on a time scale as short as a few hours. They interpreted this as a detached binary system consisting of a compact, high-temperature white dwarf and a cool red dwarf star. The outer layers of the cooler star are being ionized by radiation from the hotter component, and the changing orientation of this heated region over the course of an orbit creates a sinusoidal variability of about 1.5 magnitudes.
In 1982, a deep eclipse was discovered in the light curve by H. Ando and associates. This put a strong limit on the possible models for the system, which indicated that the compact component is a hot O-type subdwarf. D. Crampton and associates in 1983 found that the temperature and radius of the cool component suggested that it is an evolved subgiant star. At present, no mass transfer is taking place, but the system appears to be evolving into a cataclysmic variable as the subdwarf cools to become a normal white dwarf.
In 1995, J. Liebert and associates discovered that the system is surrounded by a planetary nebula, which was likely shed when the present-day subdwarf was leaving the asymptotic giant branch stage. The two components would have shared a common envelope as little as 10,000 years ago. As a result, rather than being a subgiant, the cool component has not yet reached the thermal equilibrium of a late dwarf star. The pair have a circular orbit with a period of 2.2911658 days. The orbital plane is inclined with respect to the line of sight from the Earth.
References
Further reading
M-type main-sequence stars
O-type subdwarfs
White dwarfs
Algol variables
Ursa Major
Ursae Majoris, BE | BE Ursae Majoris | Astronomy | 582 |
13,612,789 | https://en.wikipedia.org/wiki/N%2CN%27-Diisopropylcarbodiimide | {{DISPLAYTITLE:N,N'''-Diisopropylcarbodiimide}}
{{chembox
| Verifiedfields = changed
| Watchedfields = changed
| verifiedrevid = 424839902
| Name =
| ImageFile = N,N'-methanediylidenebis(propan-2-amine) 200.svg
| ImageSize = 220
| ImageFile1 = N,N'-Diisopropylcarbodiimide molecule ball.png
| ImageSize1 = 240
| ImageAlt1 = Ball-and-stick model of the N,N'-diisopropylcarbodiimide molecule
| PIN = N,N-Di(propan-2-yl)methanediimine
| OtherNames = Diisopropylmethanediimine, DIC
|Section1=
|Section2=
|Section3=
N,N'-Diisopropylcarbodiimide (DIC) is a carbodiimide used in peptide synthesis. As a liquid, it is easier to handle than the commonly used N,N'-dicyclohexylcarbodiimide, a waxy solid. In addition, N,N'-diisopropylurea, its byproduct in many chemical reactions, is soluble in most organic solvents, a property that facilitates work-up.
Safety
In vivo dermal sensitization studies according to OECD 429 confirmed that DIC is a strong skin sensitizer, showing a response at 0.20 wt% in the Local Lymph Node Assay (LLNA) and placing it in Globally Harmonized System of Classification and Labelling of Chemicals (GHS) Dermal Sensitization Category 1A. Thermal hazard analysis by differential scanning calorimetry (DSC) shows DIC poses minimal explosion risk.
References
Peptide coupling reagents
Carbodiimides
Reagents for biochemistry
Biochemistry
Biochemistry methods
Isopropylamino compounds | N,N'-Diisopropylcarbodiimide | Chemistry,Biology | 426 |
44,383,980 | https://en.wikipedia.org/wiki/Nu1%20Coronae%20Borealis | {{DISPLAYTITLE:Nu1 Coronae Borealis}}
Nu1 Coronae Borealis is a solitary, red-hued star located in the northern constellation of Corona Borealis. It is faintly visible to the naked eye, having an apparent visual magnitude of 5.20. Based upon its annual parallax shift, it is located roughly 650 light years from the Sun. At its distance, the visual magnitude is diminished by an extinction of 0.1 due to interstellar dust. This object is drifting closer with a radial velocity of −13 km/s.
This is an evolved red giant star with a stellar classification of M2 III. It is a variable star of uncertain type, showing a change in brightness with an amplitude of 0.0114 magnitude and a frequency of 0.22675 cycles per day, or 4.41 days/cycle. It has about 81 times the Sun's radius and is radiating nearly 1,300 times the Sun's luminosity from its photosphere at an effective temperature of 3,828 K.
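The quoted luminosity follows from the radius and effective temperature via the Stefan–Boltzmann law, L = 4πR²σT⁴, which in solar units reduces to L/L☉ = (R/R☉)²(T/T☉)⁴. The quick Python check below uses the article's figures and a nominal solar effective temperature of 5,772 K (an assumed standard value, not stated in the article).

```python
# Check L = 4*pi*R^2 * sigma * T^4 against the quoted ~1,300 L_sun.
R_over_Rsun = 81.0   # radius from the article, in solar radii
T = 3828.0           # effective temperature from the article, in K
T_sun = 5772.0       # assumed nominal solar effective temperature, in K

# In solar units the constants cancel: L/L_sun = (R/R_sun)**2 * (T/T_sun)**4
L_over_Lsun = R_over_Rsun**2 * (T / T_sun)**4
print(f"{L_over_Lsun:.0f} L_sun")  # ~1270, consistent with "nearly 1,300"
```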
References
Corona Borealis, Nu1
Corona Borealis
Corona Borealis, Nu1
Durchmusterung objects
Coronae Borealis, 20
147749
080197
6107 | Nu1 Coronae Borealis | Astronomy | 255 |
7,097,296 | https://en.wikipedia.org/wiki/DMSMS | Diminishing manufacturing sources and material shortages (DMSMS) or diminishing manufacturing sources (DMS) is defined as: "The loss or impending loss of manufacturers of items or suppliers of items or raw materials." DMSMS and obsolescence are terms that are often used interchangeably. However, obsolescence refers to a lack of availability due to statutory or process changes and new designs, whereas DMSMS is a lack of sources or materials.
Impact
Although DMSMS is not strictly limited to electronic systems, much of the effort regarding DMSMS deals with electronic components that have a relatively short lifetime.
Causes
Primary components
DMSMS is a multifaceted problem because there are at least three main components that need to be considered. First, a primary concern is the ongoing improvement in technology. As new products are designed, the technology that was used in their predecessors becomes outdated, making it more difficult to repair the equipment. Second, the mechanical parts may be harder to acquire because fewer are produced as the demand for these parts decreases. Third, the materials required to manufacture a piece of equipment may no longer be readily available.
Product life cycle
It is widely accepted that all electronic devices are subject to the product life cycle. As products evolve into updated versions, they require parts and technology distinct from their predecessors. However, the earlier versions of the product often still need to be maintained throughout their life cycle. As the new product becomes predominant, there are fewer parts available to fix the earlier versions and the technology becomes outdated.
According to EIA-724, there are 6 distinct phases of a product's life cycle: Introduction, Growth, Maturity, Saturation, Decline, and Phase-Out. Although the terms "Introduction", "Growth", and "Decline" are generally accepted without much explanation, the terms "Maturity", "Saturation", and "Phase-Out" are less obvious.
"Maturity" in this case refers to state in the product's life cycle where sales of the product first reach its sales peak and begins to level off. Having survived the Introduction and Growth phases, products in this phase have a low probability of being discontinued.
"Saturation" refers to a state in the product's life cycle where sales have leveled off and, towards the end of this phase, first begin to decline. The term "Saturation" is confusing to many and can be explained in reference to its equivalent in chemistry where a substance can no longer be dissolved in a liquid. A product can be said to have "saturated" its market. The decline at the end of the Saturation phase gives the first indications of the products end of life.
"Phase-out" refers to the final stages of a product's decline ending in the product being altogether discontinued by the supplier.
Mitigation
DMSMS is managed through various risk mitigation efforts, both during the manufacturing of a product as well as later in the products life cycle. DMSMS is a hot topic in military supply where the usable lifetime of an electronic system may far exceed the availability of the components used to produce that system.
Devices in phases 5 and 6 of a product's life cycle require caution on the part of designers and product support engineers to assure that system components are indeed available at the time of production.
Some examples of the signs and symptoms of a DMSMS issue are:
Notification of a part that will be discontinued in the future.
A system that uses a unique part that can only be produced by a single manufacturer.
Dwindling of parts for a system, but no replacements over time.
Planning in a new system design that does not consider future obsolescence problems.
A parts list that contains an end-of-life cycle part before a system has gone into production.
The core methodology for DMSMS analysis has been to make direct contact with the supplier of an item. Direct contact takes the form of phone, e-mail or other communication with a competent supplier representative. This is essential in the management of commercial off-the-shelf products and assemblies. The main items of concern in a DMSMS analysis are:
Is the item an active product?
Is the item a good seller (generates good revenue for the company)?
Is the item slated for obsolescence for any reason (e.g. replaced by a newer version)?
Monitoring
Other methodologies involve subscription to data services which monitor parts lists, known as a Bill of Materials (BOM), for activity on any one part in the user's list. Often both the classic methodology and the data subscription methodology will be used in conjunction to provide a more complete assessment of a part's availability and lifetime.
Lifetime buy
One strategy used to combat DMS is to buy additional inventory during the production run of a system or part, in quantities sufficient to cover the expected number of failures. This strategy is known as a lifetime buy. An example of this is the many 30- and 40-year-old railway locomotives being run by small operators in the United Kingdom. These operators will often buy more locomotives than they actually require, and keep a number of them stored as a source of spare parts.
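One common way to size such a buy is to treat failures as a Poisson process: multiply the fleet size, per-unit annual failure rate, and remaining support years to get the expected spares demand, then buy to a service-level quantile. The sketch below is a simplified illustration with assumed numbers, not an official DMSMS costing method.

```python
from scipy.stats import poisson

def lifetime_buy_qty(units, annual_failure_rate, support_years, service_level=0.95):
    """Spares to stock so that P(demand <= qty) >= service_level under a Poisson model."""
    expected_demand = units * annual_failure_rate * support_years
    return int(poisson.ppf(service_level, expected_demand))

# Assumed inputs: 400 fielded units, 2% failures per unit per year, 15 support years left.
qty = lifetime_buy_qty(400, 0.02, 15)
print(qty)  # expected demand is 120; roughly 138 spares give 95% confidence
```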
Take action
It is important to use a DMSMS risk management plan to ensure parts are available when they are needed. Long-range planning must occur for every key piece of equipment, establishing when and what parts will be replaced or redesigned, and trying to foresee potential equipment problems, including the replacement of obsolete parts and equipment. New methods of design engineering allow for the open exchange of parts as technology changes. There are also companies that provide assistance and consulting on DMSMS through seminars and workshops, audits, and the implementation of effective DMSMS processes.
See also
Cannibalization (parts)
Military surplus
Obsolescence
Product life cycle management
Stockout
Supply chain management
References
Further reading
Bjoern Bartels, Ulrich Ermel, Peter Sandborn and Michael G. Pecht: Strategies to the Prediction, Mitigation and Management of Product Obsolescence, 1st. Ed., John Wiley & Sons, Inc., Hoboken, New Jersey, 2012, , online available at google books.
External links
DMSMS Knowledge Sharing Portal
Electronics manufacturing
Obsolescence
Product management
Scarcity | DMSMS | Engineering | 1,269 |
25,266,303 | https://en.wikipedia.org/wiki/Voges%E2%80%93Proskauer%20test | Voges–Proskauer or VP is a test used to detect acetoin in a bacterial broth culture. The test is performed by adding alpha-naphthol and potassium hydroxide to the Voges-Proskauer broth, which is a glucose-phosphate broth that has been inoculated with bacteria. A cherry red color indicates a positive result, while a yellow-brown color indicates a negative result.
The test depends on the digestion of glucose to acetylmethylcarbinol. In the presence of oxygen and strong base, the acetylmethylcarbinol is oxidized to diacetyl, which then reacts with guanidine compounds commonly found in the peptone medium of the broth. Alpha-naphthol acts as a color enhancer, but the color change to red can occur without it.
Procedure: First, add the alpha-naphthol; then, add the potassium hydroxide. A reversal in the order of the reagents being added may result in a weak-positive or false-negative reaction.
VP is one of the four tests of the IMViC series, which tests for evidence of an enteric bacterium. The other three tests include: the indole test [I], the methyl red test [M], and the citrate test [C].
VP positive organisms include Enterobacter, Klebsiella, Serratia marcescens, Hafnia alvei, Vibrio cholerae biotype El Tor, and Vibrio alginolyticus.
VP negative organisms include Citrobacter sp., Shigella, Yersinia, Edwardsiella, Salmonella, Vibrio furnissii, Vibrio fluvialis, Vibrio vulnificus, and Vibrio parahaemolyticus.
History
The reaction was developed by Daniel Wilhelm Otto Voges and Bernhard Proskauer, German bacteriologists in 1898 at the Institute for Infectious Diseases.
References
External links
Voges–Proskauer reaction at Merriam–Webster Online
Voges–Proskauer (VP) Test- Principle, Reagents, Procedure and Result
Biochemistry detection reactions
Microbiology techniques
Bacteriology | Voges–Proskauer test | Chemistry,Biology | 463 |
48,166,528 | https://en.wikipedia.org/wiki/Lipo-oxytocin-1 | Lipo-oxytocin-1 (LOT-1) is a synthetic peptide and derivative of oxytocin that acts as an agonist of the oxytocin receptor. The lipidation strategy was applied to oxytocin to create a new peptide with improved pharmacokinetics. LOT-1 consists of oxytocin conjugated with two palmitoyl groups. After adjusting for molecular weight (LOT-1 is about 1.5 times the weight of oxytocin), oxytocin and LOT-1 are equipotent. In addition, LOT-1 appears to have a significantly longer duration of effect relative to that of oxytocin. It has yet to be determined whether LOT-1 possesses improved blood–brain barrier permeability relative to oxytocin.
See also
Carbetocin
Demoxytocin
Merotocin
Palmitoylation
References
Oxytocin receptor agonists
Nonapeptides | Lipo-oxytocin-1 | Chemistry | 205 |
1,754,202 | https://en.wikipedia.org/wiki/Holoplankton | Holoplankton are organisms that are planktic (they live in the water column and cannot swim against a current) for their entire life cycle. Holoplankton can be contrasted with meroplankton, which are planktic organisms that spend part of their life cycle in the benthic zone. Examples of holoplankton include some diatoms, radiolarians, some dinoflagellates, foraminifera, amphipods, krill, copepods, and salps, as well as some gastropod mollusk species. Holoplankton dwell in the pelagic zone as opposed to the benthic zone. Holoplankton include both phytoplankton and zooplankton and vary in size. The most common plankton are protists.
Reproduction
Holoplankton have unique traits that make reproduction in the water column possible. Both sexual and asexual reproduction are used depending on the type of plankton. Some invertebrate holoplankton release sperm into the water column which are then taken up by the females for fertilization. Other species release both sperm and egg to increase the likelihood of fertilization. Environmental, mechanical, or chemical cues can all trigger this release.
Diatoms are single celled phytoplankton that can occur as individuals or as long chains. They can reproduce sexually and asexually.
Diatoms are important oxygen producers and are usually the first step in the food chain.
Copepods are small holoplanktonic crustaceans that swim using their hind legs and antennae.
Defenses
Because of their small size and sluggish swimming abilities, holoplanktonic species have made certain specialized adaptations and in some cases are equipped with special defenses. Adaptations include flat bodies, lateral spines, oil droplets, floats filled with gases, sheaths made of gel-like substances, and ion replacement.
Zooplankton have adapted by developing transparent bodies, bright colors, bad tastes and cyclomorphosis (seasonal changes in body shape). When predators release chemical cues into the water, cyclomorphosis allows holoplankton to enlarge their spines and protective shields. Studies have shown that, although small in size, certain gelatinous zooplankton are rich in protein and lipid. Many holoplankton seem to have few visible defense mechanisms, so it is hypothesized that chemical defenses may be at work. Pelagic cnidarians (jellyfish and related species) have nematocysts on their tentacles that eject a coiled microscopic thread very rapidly. These threads penetrate the surface of their target and release a series of complicated, biologically advanced venoms. Their stings can be very dangerous, due in part to the number of vital systems affected.
See also
Plankton
Meroplankton
Sources
Australian Museum Online
References
Aquatic ecology
Planktology | Holoplankton | Biology | 632 |
3,521,243 | https://en.wikipedia.org/wiki/Gun%20carriage | A gun carriage is a frame or a mount that supports the gun barrel of an artillery piece, allowing it to be maneuvered and fired. These platforms often had wheels so that the artillery pieces could be moved more easily. Gun carriages are also used on ships to facilitate the movement and aiming of large cannons and guns, and to bear the coffin in state and military funeral processions.
Early guns
The earliest guns were laid directly onto the ground, with earth being piled up under the muzzle end of the barrel to increase the elevation. As the size of guns increased, they began to be attached to heavy wooden frames or beds that were held down by stakes. These began to be replaced by wheeled carriages in the early 16th century.
Smoothbore gun carriages
From the 16th to the mid-19th century, the main form of artillery remained the smoothbore cannon. By this time, the trunnion (a short axle protruding from either side of the gun barrel) had been developed, with the result that the barrel could be held in two recesses in the carriage and secured with an iron band, the "capsquare". This simplified elevation, which was achieved by raising or lowering the breech of the gun by means of a wedge called a quoin or later by a steel screw. During this time, the design of gun carriages evolved only slowly, with the trend being towards lighter carriages carrying barrels that were able to throw a heavier projectile. There were two main categories of gun carriages:
Naval or garrison carriages
These were designed for use aboard a ship or within a fortification and consisted of two large wooden slabs called "cheeks" held apart by bracing pieces called "transoms". The trunnions of the gun barrel sat on the top of the cheeks; the rearward part of each cheek was stepped so that the breech could be lifted by iron levers called "handspikes". Because these guns were not required to travel about, they were only provided with four small solid wooden wheels called "trucks", whose main function was to roll backwards with the recoil of the gun and then allow it to be moved forward into a firing position after reloading. Traversing the gun was achieved by levering the rear of the carriage sideways with handspikes. An improvement on this arrangement started at the end of the 18th century with the introduction of the traversing carriage, initially in fortifications but later on ships as well. This consisted of a stout wooden (and later iron) beam on which the entire gun carriage was mounted. The beam was fitted to a pivot at the centre, and to one or more trucks or "racers" at the front; the racers ran along a semi-circular iron track set in the floor called a "race". This allowed the gun to be swung in an arc over a parapet. Alternatively, the pivot could be fitted to the front of the beam and the racers at the rear, allowing the gun to fire through an embrasure. The traversing beam sloped upwards towards the rear, allowing the gun and its carriage to recoil up the slope.
Field carriages
These were designed to allow guns to be deployed on the battlefield and were provided with a pair of large wheels similar to those used on carts or wagons. The cheeks of field carriages were much narrower than those on the naval carriage and the rear end, called a "trail", rested on the ground. When the gun needed to be moved any distance, the trail could be lifted onto a second separate axle called a limber, which could then be towed by a team of horses or oxen. Limbers had been invented in France in about 1550. An innovation from the mid-18th century was the invention of the "block trail", which replaced the heavy cheeks and transoms of the "double-bracket" carriage with a single wooden spar reinforced with iron.
Modern gun carriages
The First World War is often considered the dawn of modern artillery because, like repeating firearms, the majority of barrels were rifled, the projectiles were conical, the guns were breech loaded and many used fixed ammunition or separate loading charges and projectiles.
Some of the features of modern carriages are listed below and illustrated in the photo gallery:
Box trail – A box trail is a type of field carriage that is rectangular in shape and consists of a ladder frame often with decking. The goal was strength and stability. Box trail carriages on howitzers often had an open area near the breech to permit the high angles of fire necessary for indirect fire. On larger guns, there often was a ramp to hold ready rounds to make reloading easier. A problem with a box trail carriage is that it often limits easy access to the breech, so the barrel needs to be lowered to load and then raised for each shot which reduces the rate of fire. At the end of WW I box trail carriages became less common. Ease of loading and rate of fire were improved by providing better access to the breech.
Pole trail – A pole trail was sometimes used with early horse-drawn light artillery. The single trail resembled a pipe and was meant to be strong, light, easy to maneuver and easy to work around. After the First World War, pole trails became less common because light horse-drawn artillery was in decline. Some guns received new carriages to increase traverse, elevation and to make them suitable for motor traction.
Split trail – A split trail carriage, invented by Joseph-Albert Deport in 1907-1908, has two trails which can be spread to provide greater stability. However, another reason for this design is to provide greater angles of elevation and traverse. Since the carriage is stationary, traverse and elevation are controlled by separate hand wheels. Another advantage of a split trail is easier access to the breech for reloading at different angles. Many guns produced since the First World War have used the split trail configuration.
Outriggers – Since the First World War many anti-aircraft guns have had collapsible two, three, and four outrigger carriages with leveling jacks to provide stability, high-angle fire, and 360° traverse. The three outrigger carriages tend to have two detachable wheels for transport, while four-outrigger carriages have four. The four-outrigger versions are often referred to as cruciform carriages because when their outriggers are deployed they form a cross.
Gun shields – Not all modern guns have shields. Before World War I, shields were intended to provide gun crews with protection at shorter ranges from the recently invented repeating rifles and shrapnel shells when they were engaged in direct fire. During the First World War on the Western Front, machine guns and fast-firing light field guns firing shrapnel shells made massed infantry or cavalry attacks over open ground too costly, so both sides sought to protect their men and artillery behind trenches and fortifications. Since the fighting was being conducted from behind fortifications, shields became less important and were sometimes done away with to save weight. After the First World War, shields were mostly used on small-caliber direct-fire guns. Guns larger than 120 mm usually had the advantage of range and were indirect-fire weapons, so shields were sometimes omitted.
Recoil mechanism – Early guns had no mechanism to absorb recoil and the gun had to be repositioned after each shot. Later, ramps were used and the gun would roll up the ramp and gravity would roll the gun back into position. The majority of field guns produced since 1900 have had some sort of mechanical recoil mechanism. These can be broken down into two related subsystems: one absorbs the recoil while the other returns the gun to firing position. The part which absorbs the recoil is most often a hydraulic shock absorber, while the part that returns the gun to firing position is a pneumatic recuperator. This is usually shortened to hydro-pneumatic in technical documents. Another option is a hydraulic shock absorber and a spring recuperator. This is usually shortened to hydro-spring in technical documents. These systems can be identified as cylinders on top or below the gun barrel. The recoil system can either be integral with the barrel or the carriage. Some guns designed before recoil mechanisms became integrated on the gun carriage could be attached to an external shock absorber which was a spring/rubber tether that attached to an eyelet on the base of the gun carriage and was attached to a ground anchor at the other end. Guns which could use external shock absorbers include the De Bange 155 mm cannon and Canon de 120 mm modèle 1878.
Recoil spade or ground spade – The purpose of a spade is to anchor the carriage and stop it from rolling backward when the gun is fired. Spades are normally located at the end of the carriage and their shape resembles a plow or shovel. Some artillery pieces designed before hydro-pneumatic recoil mechanism became common used recoil spades with coil springs or rubber shock absorbers such as the 76 mm gun M1900 and Obusier de 120 mm C mle 1897 Schneider-Canet.
Elevation and range – As the First World War progressed, range and elevation became more important. Range was important because each side wanted to reduce their artillery losses in a war of attrition, and one of the best ways to do that was by outranging the enemy's artillery. At first, elevation was fairly easy to accomplish because both sides just propped up their guns on ramps or earthen embankments to increase the range of their shells. Existing carriages were also modified to get higher angles of elevation. Since both sides were dug in, the most effective way to attack the enemy was vertically through indirect fire to drop high-explosive shells into the trenches. At the end of the First World War, most guns had higher angles of elevation and greater range. There was also a trend towards lighter guns firing larger projectiles since the light field guns of World War I were not effective firing light projectiles with limited explosive yield.
Equilibrators – Equilibrators are devices that balance a barrel when its center of mass is not aligned with trunnions. They store potential energy when the barrel (its center of mass) is lowered, and release it when it needs to be raised by counteracting the downward force of weight by upwards pressure of compressed elastic elements similar to those employed in recoil mechanisms: hydropneumatic, pneumatic, coil springs or torsion bars. Equilibrators can vary in number and orientation, but they are often paired vertically or inclined on either side of the barrel near the trunnions and/or the breech. This feature became increasingly common in guns and howitzers with relatively long barrels after WWI.
Motor traction – The majority of guns used during the First World War were horse-drawn. Even in the Second World War, many guns were still horse-drawn. However, towards the end of the First World War, some were converted from horse traction to motor traction. The conversion process often involved replacing wooden spoked wheels with metal wheels with either solid rubber or pneumatic tires. This simple conversion was sufficient as long as the towing vehicle wasn't very fast, like a Holt tractor. But as towing vehicles became faster, the axles needed to be sprung in order to withstand the punishment of towing. Most carriages produced since the First World War have been sprung with leaf springs or torsion bars and have used rubber tires.
Limbers and caissons – Although neither limbers nor caissons are new inventions, many guns have used them. A limber is a two-wheeled cart that attaches to the trail of the gun for towing. This also often serves as a tool and ammunition wagon for the gun crew. Originally, limbers were used with horse-drawn artillery, but they can also be used with motor traction. Since the end of the Second World War, the use of limbers has declined with the increase in capabilities of motor traction. Often when viewing weight specifications for guns there will be one for travel and one for combat: normally the specification for travel will be larger because of the limber with supplies.
Large guns and multiple loads – Light guns could be transported in one piece but larger guns often broke down into multiple loads on trailers for towing. This was particularly true before motor traction because the whole gun was often too heavy for a single horse team to tow, so large guns were broken down into multiple wagon loads with each load being towed by a single horse team. Even after motor traction, many large guns had the option of being broken down into two or three wagon loads for towing, while others have the ability to detach the gun barrel from the recoil mechanism so that it can be pulled back to lie on top of the trail while being towed. This was done because long guns were barrel-heavy and could tip while being towed. During the Second World War, heavy artillery was increasingly mounted on tank chassis in order to improve mobility, and this trend has continued with today's self-propelled artillery. The advantage is this artillery can go anywhere a tank can and can be ready for action in minutes without any assembly.
Mountain or pack artillery – Some guns have the ability to be broken down into multiple loads for carrying by pack animals such as mules or by teams of men. Each load is small enough and light enough for a pack animal or man to carry. This is an advantage in mountainous terrain because there may not be roads or the terrain is too rough for towing. The parts are normally constructed with multiple joints and held together by pins. This lightness and portability has led pack guns to be used in a number of roles such as heliborne or airborne operations. Mountain guns were common during the First and Second World War, but have been largely replaced since then with rockets, mortars and recoilless guns.
State and military funerals
Gun carriages have been used to carry the coffin of fallen soldiers and officers at military funerals, and of holders of high office with a military connection in state funerals, to their final resting place. The practice has its origins in war and appears in the nineteenth century in the Queen's Regulations of the British Army.
In the United Kingdom, in a state funeral, the Royal Navy State Funeral Gun Carriage bearing the coffin is drawn by sailors from the Royal Navy rather than horses. (This tradition dates from the funeral of Queen Victoria; the horses drawing the gun carriage bolted, so ratings from the Royal Navy hauled it to the Royal Chapel at Windsor.) This distinguishing feature is not invariable, however, as shown by the use of naval ratings rather than horses at the ceremonial funeral for Lord Mountbatten in 1979, which was one of a number of features on that occasion which emphasized Mountbatten's lifelong links with the Royal Navy. In state funerals in the United States, a caisson (a two-wheeled ammunition wagon), is used in place of a gun carriage. At the 2015 state funeral of Lee Kuan Yew in Singapore, the coffin was mounted on a 25-pounder gun towed by a Land Rover.
Gallery
See also
Limbers and caissons
Gun turret
References
Bibliography
Artillery
Artillery components
Artillery operation
Carriages and mountings | Gun carriage | Technology | 3,084 |
70,299,095 | https://en.wikipedia.org/wiki/Peter%20Nellist | Peter David Nellist is a British physicist and materials scientist, currently a professor in the Department of Materials at the University of Oxford. He is noted for pioneering new techniques in high-resolution electron microscopy.
Early life and career
Nellist gained his B.A. (1991), M.A. (1995) and Ph.D (1996) from St John's College, Cambridge, and studied at the Cavendish Laboratory with John Rodenburg, before taking up post-doctoral research at Oak Ridge National Laboratory (ORNL) in Tennessee with ex-Cavendish researcher Stephen Pennycook. Eighteen months later, Nellist returned to Cambridge on a Royal Society University Research Fellowship, which he transferred to the University of Birmingham. He left academia for four years to work for another ex-Cambridge microscopy pioneer, Ondrej Krivanek, at Nion, his newly formed company in Seattle. Nellist then returned to Trinity College Dublin and finally to the University of Oxford, where he became Joint Head of the Department of Materials in 2019.
Scientific research
Nellist's research focuses on scanning transmission electron microscopy and its use in materials science. In particular, he is noted for work on electron ptychography, quantitative image interpretation, and the development of corrective electron microscope lenses, which he describes as "like spectacles for a microscope".
In the mid-1990s, working with John Rodenburg at the Cavendish Laboratory in Cambridge, he helped to devise new ways of improving the resolution of both scanning electron microscopes and transmission electron microscopes.
In 1998, working with Stephen Pennycook of ORNL, he recorded "the highest resolution microscope images ever made of crystal structures". Six years later, Nellist, Pennycook, and colleagues at ORNL produced the first images of atoms in a crystal on sub-Angstrom scales by using a new technique to correct the optical aberrations in a scanning transmission electron microscope.
Achievements and awards
Nellist has won many awards, including the 2007 Burton Medal from the Microscopy Society of America for "an exceptional contribution to microscopy", the 2013 Ernst Ruska Prize from the German Electron Microscopy Society for the development of confocal electron microscopy, the 2013 Birks Award from the Microbeam Analysis Society, and the 2016 and 2020 European Microscopy Society prizes for best published paper in materials science. He was elected a Fellow of the Royal Society in 2020. He is the vice-president of the Royal Microscopical Society (of which he was also made an Honorary Fellow in 2020) and a board member of the European Microscopy Society.
Selected publications
Books
Scientific papers
References
External links
Seeing is Believing: How observing atoms in the electron microscope helps develop tomorrow's materials: A schools outreach talk by Peter Nellist explaining his work on electron microscopy.
Living people
Year of birth missing (living people)
Alumni of St John's College, Cambridge
Alumni of the University of Birmingham
Fellows of the Royal Society
British materials scientists
Microscopists
English physicists | Peter Nellist | Chemistry | 599 |
50,600,571 | https://en.wikipedia.org/wiki/%CE%91-Hederin | α-Hederin (alpha-hederin) is a water-soluble pentacyclic triterpenoid saponin found in the seeds of Nigella sativa and leaves of Hedera helix.
Anticancer studies
α-Hederin and also its derivative, kalopanaxsaponin-I, have been studied for their anticancer activities. α-Hederin has been shown to enhance the cytotoxicity of an established chemotherapeutic agent, 5-fluorouracil, in an animal model of colon carcinoma.
See also
Hederagenin
Thymoquinone
References
Triterpene glycosides
Saponins | Α-Hederin | Chemistry | 149 |
603,351 | https://en.wikipedia.org/wiki/Project%20Daedalus | Project Daedalus (named after Daedalus, the Greek mythological designer who crafted wings for human flight) was a study conducted between 1973 and 1978 by the British Interplanetary Society to design a plausible uncrewed interstellar probe. Intended mainly as a scientific probe, the design criteria specified that the spacecraft had to use existing or near-future technology and had to be able to reach its destination within a human lifetime. Alan Bond led a team of scientists and engineers who proposed using a fusion rocket to reach Barnard's Star 5.9 light years away. The trip was estimated to take 50 years, but the design was required to be flexible enough that it could be sent to any other target star.
All the papers produced by the study are available in a BIS book, Project Daedalus: Demonstrating the Engineering Feasibility of Interstellar Travel.
Concept
Daedalus would be constructed in Earth orbit and have an initial mass of 54,000 tonnes including 50,000 tonnes of fuel and 500 tonnes of scientific payload. Daedalus was to be a two-stage spacecraft. The first stage would operate for two years, taking the spacecraft to 7.1% of light speed (0.071 c), and then after it was jettisoned, the second stage would fire for 1.8 years, taking the spacecraft up to about 12% of light speed (0.12 c), before being shut down for a 46-year cruise period. Due to the extreme temperature range of operation required, from near absolute zero to 1600 K, the engine bells and support structure would be made of molybdenum alloyed with titanium, zirconium, and carbon, which retains strength even at cryogenic temperatures. A major stimulus for the project was Friedwardt Winterberg's inertial confinement fusion drive concept, for which he received the Hermann Oberth gold medal award.
This velocity is well beyond the capabilities of chemical rockets, or even the type of nuclear pulse propulsion studied during Project Orion. According to Dr. Tony Martin, controlled-fusion engines and nuclear-electric systems have very low thrust, and the equipment needed to convert nuclear energy into electricity has a large mass, resulting in an acceleration so small that a century would be needed to achieve the desired speed. Thermodynamic nuclear engines of the NERVA type require a great quantity of fuel; photon rockets would have to generate power at a rate of 3 W per kilogram of vehicle mass and would require mirrors with an absorptivity of less than 1 part in 10⁶; and an interstellar ramjet faces the tenuous interstellar medium, with a density of about 1 atom/cm³, the large diameter of the required funnel, and the high power required for its electric field. Thus the only suitable propulsion method for the project was thermonuclear pulse propulsion.
Daedalus would be propelled by a fusion rocket using pellets of a deuterium/helium-3 mix that would be ignited in the reaction chamber by inertial confinement using electron beams. The electron beam system would be powered by a set of induction coils trapping energy from the plasma exhaust stream. 250 pellets would be detonated per second, and the resulting plasma would be directed by a magnetic nozzle. The computed burn-up fraction for the fusion fuels was 0.175 and 0.133 producing exhaust velocities of 10,600 km/s and 9,210 km/s respectively. Due to scarcity of helium-3 on Earth, it was to be mined from the atmosphere of Jupiter by large hot-air balloon supported robotic factories over a 20-year period, or from a less distant source, such as the Moon.
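The quoted figures can be cross-checked with the ideal (Tsiolkovsky) rocket equation. The sketch below computes the initial-to-final mass ratio each stage would need to deliver its share of the velocity; the stage velocities and exhaust velocities are those quoted above, and the calculation ignores the small relativistic corrections at these speeds.

    import math

    C = 299_792_458  # speed of light, m/s

    def mass_ratio(delta_v, v_exhaust):
        """Initial/final mass ratio from the Tsiolkovsky rocket equation."""
        return math.exp(delta_v / v_exhaust)

    # First stage: 0 to 0.071 c with a 10,600 km/s exhaust velocity.
    r1 = mass_ratio(0.071 * C, 10_600e3)
    # Second stage: 0.071 c to 0.12 c with a 9,210 km/s exhaust velocity.
    r2 = mass_ratio((0.12 - 0.071) * C, 9_210e3)
    print(f"stage 1 mass ratio: {r1:.1f}")  # ~7.4
    print(f"stage 2 mass ratio: {r2:.1f}")  # ~4.9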
The second stage would have two 5-metre optical telescopes and two 20-metre radio telescopes. About 25 years after launch, these telescopes would begin examining the area around Barnard's Star to learn more about any accompanying planets. This information would be sent back to Earth, using the 40-metre diameter second-stage engine bell as a communications dish, and targets of interest would be selected. Since the spacecraft would not decelerate upon reaching Barnard's Star, Daedalus would carry 18 autonomous sub-probes that would be launched between 7.2 and 1.8 years before the main craft entered the target system. These sub-probes would be propelled by nuclear-powered ion drives and would carry cameras, spectrometers, and other sensory equipment. The sub-probes would fly past their targets, still travelling at 12% of the speed of light, and transmit their findings back to the Daedalus second-stage mothership for relay to Earth.
The ship's payload bay containing its sub-probes, telescopes, and other equipment would be protected from the interstellar medium during transit by a beryllium disc, up to 7 mm thick, weighing up to 50 tonnes. This erosion shield would be made from beryllium due to its lightness and high latent heat of vaporisation. Larger obstacles that might be encountered while passing through the target system would be dispersed by an artificially generated cloud of particles, ejected by support vehicles called dust bugs about 200 km ahead of the vehicle. The spacecraft would carry a number of robot wardens capable of autonomously repairing damage or malfunctions.
Specifications
Overall length: 190 metres
Payload mass: 450 tonnes
Variants
A quantitative engineering analysis of a self-replicating variation on Project Daedalus was published in 1980 by Robert Freitas. The non-replicating design was modified to include all subsystems necessary for self-replication: the probe would deliver a seed factory, with a mass of about 443 metric tons, to a distant site; the seed factory would replicate many copies of itself on-site to increase its total manufacturing capacity; and the resulting automated industrial complex would then construct new probes, each with a seed factory on board, over a 1,000-year period. Each REPRO would weigh over 10 million tons due to the extra fuel needed to decelerate from 12% of lightspeed.
Another possibility is to equip the Daedalus with a magnetic sail similar to the magnetic scoop on a Bussard ramjet to use the destination star heliosphere as a brake, making carrying deceleration fuel unnecessary, allowing a much more in-depth study of the star system chosen.
See also
Breakthrough Starshot
Project Icarus
Project Longshot
Enzmann starship
Further reading
References
External links
Project Daedalus, The Encyclopedia of Astrobiology Astronomy and Spaceflight
Starship Daedalus
Project Daedalus – Origins
The Daedalus Starship
Renderings of the Daedalus Starship to scale
Project Daedalus
Project Daedalus: The Propulsion System Part 1; Theoretical considerations and calculations. 2. Review of Advanced Propulsion Systems
Title: Project Daedalus. Authors: Bond, A.; Martin, A. R. Publication: Journal of the British Interplanetary Society Supplement, p. S5-S7 Publication Date: 00/1978 Origin: ARI ARI Keywords: Miscellanea, Philosophical Aspects, Extraterrestrial Life Comment: A&AA ID. AAA021.015.025 Bibliographic Code: 1978JBIS...31S...5B
British Interplanetary Society: Project Daedalus, video rendering by Hazegrayart
Hypothetical spacecraft
Interstellar travel
Nuclear spacecraft propulsion | Project Daedalus | Astronomy,Technology | 1,506 |
36,763,756 | https://en.wikipedia.org/wiki/PanCam | The PanCam (Panoramic Camera) assembly is a set of two wide angle cameras for multi-spectral stereoscopic panoramic imaging, and a high resolution camera for colour imaging that has been designed to search for textural information or shapes that can be related to the presence of microorganisms on Mars. This camera assembly is part of the science payload on board the European Space Agency Rosalind Franklin rover, tasked to search for biosignatures and biomarkers on Mars. The rover is planned to be launched in August–October 2022 and land on Mars in spring 2023.
Overview
This instrument will provide stereo multispectral images of the nearby terrain. PanCam is the "eyes" of the rover and its primary navigation system. PanCam will also provide the geological context of the sites being explored, help support the selection of the best sites to carry out exobiology studies, and assist in some aspects of atmospheric studies. This system will also monitor the sample from the drill before it is crushed inside the rover, where the analytical instruments will perform a detailed chemical analysis.
The Principal Investigator is Professor Andrew Coates of the Mullard Space Science Laboratory, University College London in the United Kingdom.
Description
PanCam design includes the following major components:
Wide Angle Camera (WAC) pair, for multispectral stereoscopic panoramic imaging, using a miniaturized filter wheel. Both cameras have a focus range from 1 m to infinity.
High Resolution Camera (HRC) for high-resolution color images. It has a focus range from 0.98 m to infinity, and it uses a 1 megapixel (1024 × 1024) STAR1000 radiation resistant detector. Its active focus capability allows for an eight-fold better resolution than the WACs.
PanCam Interface Unit and DC-DC converter (PIU and DCDC) to provide a single electronic interface.
PanCam Optical Bench (OB) to house PanCam and provide protection.
See also
Astrobiology
Life on Mars
Planetary habitability
References
ExoMars
Mars imagers
Astrobiology | PanCam | Astronomy,Biology | 427 |
2,633,476 | https://en.wikipedia.org/wiki/M%C3%A9lanie%20%28rocket%29 | Mélanie is a French solid rocket motor, 16 cm in diameter, initially used as first stage of the Monica rocket.
There are two versions, Mélanie and "2Mélanie" (exact name unknown):
The first version was used on Monica I, II and IVA; while the improved "2Melanie", with twice the propellant, was used on Monica III, IVB and V.
Melanie was later used in several ATEF and ONERA rockets. In the ONERA rockets, such as Daniel, Antarès and Berenice, Melanie was placed inside a 22 cm diameter cylindrical housing. This version delivered a total impulse of 48 kN·s with about 22 kilograms of propellant.
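From the quoted total impulse and propellant mass, the motor's effective exhaust velocity and specific impulse follow directly; the quick check below assumes the full 22 kg of propellant is burned.

    G0 = 9.80665  # standard gravity, m/s^2

    total_impulse = 48_000.0  # N·s, as quoted for the housed version
    propellant_mass = 22.0    # kg, approximate

    v_eff = total_impulse / propellant_mass  # effective exhaust velocity, m/s
    isp = v_eff / G0                         # specific impulse, s
    print(f"effective exhaust velocity: {v_eff:.0f} m/s")  # ~2182 m/s
    print(f"specific impulse: {isp:.0f} s")                # ~222 s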
See also
Bèrènice
Antarès (OPd-56-39-22D)
Veronique (rocket)
French space program
References
Rockets and missiles
Sounding rockets of France | Mélanie (rocket) | Astronomy | 180 |
48,609,032 | https://en.wikipedia.org/wiki/Snow%20dance | A snow dance is a ceremony that is performed with the hopes of bringing snow in the winter months. This ceremony is often performed with the goal of avoiding school or work the next day. Specific snow dance ceremonies vary from person to person, but commonly include sleeping with silverware under one's pillow, flushing ice cubes down a toilet, or wearing pajamas inside out and backwards. Snow dancing is often performed outside in sunny or rainy conditions as the participating dancer would want it to snow that day or week, rather than be rainy or sunny. Considered by many to be an urban legend, the Snow Dance is often referred to in jest.
Origin of tradition
Rituals to invoke a desired weather pattern have historically been performed by Native American, Chinese, Slavic, and Romanian descendants and are still done today. These rituals have been executed and admired for purposes ranging from religious beliefs to celebration ceremonies.
Examples of such rituals include Rain Dance Ceremonies, Rainmaking, Sun Dance ceremonies, and other weather modification rituals.
While in contemporary society the Snow Dance is done typically in hope of no school, snow dances have historically been performed for the turn of a season and/or a change in the hunting season.
Rain dances are commonly performed during a drought.
The hope behind the dance: Snow days
The snow dance is primarily performed in hope of a snow day. A snow day is when school districts deem roads too dangerous for teachers and students to drive to school, and school is cancelled. Adults may also take a personal snow day from work if they are unwilling to risk driving on the snow. Typical snow day activities include sledding, building snowmen, making snow igloos, leaving footprint trails, making snow angels, and shoveling the driveway and sidewalks. Those who stay inside on a snow day may instead choose to do any number of things to stave off boredom, including craft-making, drinking hot chocolate, and baking cookies.
Similar rituals
Aside from the Snow Dance, there have been many other rituals and superstitions people have carried out in hopes of snowfall and a possible no school day. Typically, these practices are performed by the younger generation, however all ages may participate.
One of the more well-known and popular traditions to wish for snow include wearing one's pajamas inside out when going to bed the night before potential snowfall. This is sometimes also accompanied by wearing the pajamas backwards.
Another superstition is flushing ice cubes down the toilet. Most prefer ice cubes rather than crushed ice. There are many other traditions that include ice cubes such as throwing them into a pond or lake in hopes of the water freezing over, throwing shaved ice cubes at a tree, and tossing ice into one's front yard.
Other snow rituals include walking backwards to bed, placing a snowball or a white crayon in the freezer, leaving a spoon under one's pillow while sleeping, running around the kitchen table five times, and chanting “I want it to snow” three times in a row. All of these superstitions are to help influence the temperature to drop and for snow to fall.
As opposed to other weather rituals
The Snow Dance, which is meant to summon snow, contrasts the Native American tradition of ushering in rain through what is called the rain dance.
Although the Snow Dance is typically performed to invite a type of weather appreciated mainly for its beauty and the fun to be had in it, the success and failure of the rain dance is a matter of survival. Due to prolonged periods of drought in the southwestern United States, the tradition formed as an appeal to nature to send rain, enabling a plentiful harvest. Tribes known to do the rain dance include the Pueblo, Navajo, Hopi, and Mojave. The dance continues to be performed on many reservations in the United States.
Unique features of the rain dance include the participation of both men and women in the choreography, the outfits worn by the dancers, and the lack of drum accompaniment. White feathered masks worn by the men represent wind to blow in the rain, while the blue color of turquoise jewels and moccasins symbolize rain. Women do not wear shoes, but other than that are completely covered in ritual clothing. Rhythm for the dance is kept solely by the sound of feet pounding the ground as the dance is performed.
Other cultural groups, past and present, also perform or have performed weather rituals. For example, the blood sacrifice poured over the statue of Tezcatlipoca, the sun god of the ancient Aztecs, was believed to be the source of the god's strength to cause the sun to rise each morning. The ancient Inca Empire, prone to volcanic eruptions, earthquakes and floods due to its location in the Andes mountains, also presented sacrifices. These sacrifices, meant to appease the gods, included the sacrifice of children and of prisoners of war. Unlike the rain dance or blood sacrifice, however, these rites were performed in response to calamity rather than to usher in a specific weather pattern.
Figures
Similar to characters such as Santa Claus, Befana, and Hajji Firuz, who symbolize a particular holiday, there are also figures that resemble the bringing of winter and snow that many cultures associate with the weather. Such characters include Heikki Lunta, Jack Frost, Yuki-onna, Old Man Winter, Snow Queen, and many more. Oftentimes these characters are popular figures in a culture's history, religion, mythology, folklore, and/or literature.
Heikki Lunta, created by David Riutta, is a snow god from Atlantic Mine, Michigan. This Finnish-American character originated from a song titled "Heikki Lunta Snowdance Song", which was written in 1970 to summon snowfall for the Range Snowmobile Club's snowmobile race. According to the town, the song produced an abundance of snow before the race, prompting a second song called "Heikki Lunta Go Away".
A more recent character personifying snow is Queen Elsa from Walt Disney’s movie Frozen. Elsa, who is based on the main character in Hans Christian Andersen’s The Snow Queen, has the powers to create and manipulate ice and snow. The movie brought in more than $1.2 billion in ticket sales. The booming popularity created a Frozen phenomenon resulting in movie fans blaming snowfall on the animated character of Queen Elsa.
Jack Frost is a common figure associated with frost, snow, sleet, and other weather-related extremities. The exact origin of Jack Frost remains unclear, with suggestions of Norse and Anglo-Saxon roots. Jack is mentioned in the holiday song "The Christmas Song" and has a presence in movies and wintertime specials, including the movies "Jack Frost" and "Rise of the Guardians" and the television series "A Touch of Frost".
Characters are major components in contemporary children's books, television shows, songs, movies, and oral tradition.
Reported Snow Dance success
In January 2012, members of the Southern Ute Indian tribe met at Vail Village, in Vail, Colorado, with the hopes of bringing more snow to the nearby mountain. While the members of the tribe began their snow dance ritual, hundreds of people gathered to observe. It was reported that, as the ritual progressed, the snowfall increased. According to the Vail Mountain Marketing Director, the Southern Ute Indian Tribe had also performed a snow dance when the mountain originally opened during 1962. The success of the first snow dance ritual resulted in the second call on the tribe years later.
Similarly, in 2014, a snow dance ritual was performed at the Sugar Pine State Park in Lake Tahoe by the Eagle Wings Native American dancers. According to the executive director of the Sierra State Parks Foundation, snow began to fall, unexpectedly, for only a brief period after the ritual dance was performed.
See also
Old Man Winter
Urban legend
Elsa
Jack Frost
Rain dance
Superstition
References
Snow
Ritual dances
Weather lore | Snow dance | Physics | 1,605 |
378,157 | https://en.wikipedia.org/wiki/69%20%28number%29 | 69 (sixty-nine) is the natural number following 68 and preceding 70. An odd number and a composite number, 69 is divisible by 1, 3, 23 and 69.
The number and its pictograph give its name to the sexual position of the same name. The association of the number with this sex position has resulted in it being associated in meme culture with sex. People knowledgeable of the meme may respond "nice" in response to the appearance of the number, whether intentionally an innuendo or not.
In mathematics
69 is a semiprime because it is a natural number that is the product of exactly two prime numbers (3 and 23), and it is an interprime between the numbers of 67 and 71. 69 is not divisible by any square number other than 1, making it a square-free integer. 69 is a Blum integer since the two factors of 69 are both Gaussian primes, and an Ulam number—an integer that is the sum of two distinct previously occurring Ulam numbers in a sequence. 69 is a deficient number because the sum of its proper divisors (which excludes itself) is less than itself. As an integer for which the arithmetic mean average of its positive divisors is also an integer, 69 is an arithmetic number. 69 is a congruent number—a positive integer that is the area of a right triangle with three rational number sides—and an amenable number. 69 can be expressed as the sum of consecutive positive integers in multiple ways, making it a polite number. 69 is a lucky number because it is a natural number that remains after repeatedly removing every nth number in a sequence of natural numbers, starting from 1.
In decimal, 69 is the only natural number whose square (4761) and cube (328509) use every digit from 0–9 exactly once. It is also the largest number whose factorial is less than a googol: on many handheld scientific and graphing calculators, 69! (approximately 1.711224524 × 10⁹⁸) is the highest factorial that can be calculated due to memory limitations. In binary, 69 is written 1000101; 69 in decimal is equal to 105 in octal, while 105 in decimal is equal to 69 in hexadecimal (this same property applies to all numbers from 64 to 69). In other bases, 69 equates to 2120 in ternary (base 3), 153 in senary (base 6), and 59 in duodecimal (base 12).
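These properties are easy to verify programmatically; the following quick check confirms the digit property and the factorial bound.

    import math

    n = 69
    digits = str(n**2) + str(n**3)  # "4761" + "328509"
    assert sorted(digits) == list("0123456789")  # every digit 0-9 exactly once

    googol = 10**100
    assert math.factorial(69) < googol < math.factorial(70)
    print(f"69! = {math.factorial(69):.6e}")  # ~1.711225e+98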
Visually, in Arabic numerals, 69 is a strobogrammatic number because it looks the same when viewed right-side up and upside down. 69 is a centered tetrahedral number, a figurate number that represents a pyramid with a triangular base and all other points arranged in layers above the base, forming a tetrahedron shape. 69 is also a pernicious number, because its binary representation contains a prime number of 1s, and an odious number, as it is a positive integer with an odd number of 1s in its binary expansion.
In culture
69ing is a sex position wherein each partner aligns themselves to simultaneously achieve oral sex with each other. In reference to this sex act, the number 69 itself has become an Internet meme as an inherently funny number in which users will respond to any occurrence of the number with the word "nice" to draw specific attention to it. This means to humorously imply that the reference to the sex position was intentional. Because of its association with the sex position and resulting meme, 69 has been named "the sex number".
See also
96 (number) – 69 reversed
Explanatory footnotes
References
Integers
Internet memes | 69 (number) | Mathematics | 773 |
4,070,825 | https://en.wikipedia.org/wiki/Confluency | In cell culture biology, confluence refers to the percentage of the surface of a culture dish that is covered by adherent cells. For example, 50 percent confluence means roughly half of the surface is covered, while 100 percent confluence means the surface is completely covered by the cells, and no more room is left for the cells to grow as a monolayer. The cell number refers simply to the number of cells in a given region.
Impact on research
Many cell lines exhibit differences in growth rate or gene expression depending on the degree of confluence. Cells are typically passaged before becoming fully confluent in order to maintain their proliferation phenotype. Some cell types, such as immortalized cells, are not limited by contact inhibition and may continue to divide and form layers on top of the parent cells. To achieve optimal and consistent results, experiments are usually performed using cells at a particular confluence, depending on the cell type. Extracellular export of cell-free material is also dependent on cell confluence.
Estimation
Rule of thumb
Comparing the amount of space covered by cells with unoccupied space using the naked eye can provide a rough estimate of confluency.
Hemocytometer
A hemocytometer can be used to count cells, giving the cell number.
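The standard hemocytometer arithmetic converts the average count over the 1 mm × 1 mm squares (each holding 0.1 µL under the coverslip) into a concentration. A minimal sketch follows; the counts and dilution factor are made-up example values.

    def cells_per_ml(square_counts, dilution_factor=1.0):
        """Concentration from counts of 1 mm x 1 mm hemocytometer squares.

        Each large square holds 0.1 uL, so the average count is scaled
        by 1e4 (and by any dilution) to give cells per mL.
        """
        average = sum(square_counts) / len(square_counts)
        return average * dilution_factor * 1e4

    # e.g. four corner squares counted on a 1:2 diluted sample:
    print(cells_per_ml([52, 48, 55, 49], dilution_factor=2))  # 1020000.0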
References
Cell culture | Confluency | Biology | 260 |
469,609 | https://en.wikipedia.org/wiki/Desiccation | Desiccation is the state of extreme dryness, or the process of extreme drying. A desiccant is a hygroscopic (attracts and holds water) substance that induces or sustains such a state in its local vicinity in a moderately sealed container. The word desiccation comes from the Latin dēsiccāre, "to dry thoroughly".
Industry
Desiccation is widely employed in the oil and gas industry. Oil and natural gas are obtained in a hydrated state, but the water content leads to corrosion or is incompatible with downstream processing. Removal of water is achieved by cryogenic condensation, absorption into glycols, and adsorption onto desiccants such as silica gel.
Laboratory
A desiccator is a heavy glass or plastic container, now somewhat antiquated, used in practical chemistry for drying or keeping small amounts of materials very dry. The material is placed on a shelf, and a drying agent or desiccant, such as dry silica gel or anhydrous sodium hydroxide, is placed below the shelf.
Often some sort of humidity indicator is included in the desiccator to show, by color changes, the level of humidity. These indicators are in the form of indicator plugs or indicator cards. The active chemical is cobalt chloride (CoCl2). Anhydrous cobalt chloride is blue. When it bonds with two water molecules, (CoCl2•2H2O), it turns purple. Further hydration results in the pink hexaaquacobalt(II) chloride complex [Co(H2O)6]2+.
Biology and ecology
In biology and ecology, desiccation refers to the drying out of a living organism, such as when aquatic animals are taken out of water, slugs are exposed to salt, or when plants are exposed to sunlight or drought. Ecologists frequently study and assess various organisms' susceptibility to desiccation. For example, one study found that the Caenorhabditis elegans dauer larva is a true anhydrobiote that can withstand extreme desiccation, an ability founded in the metabolism of trehalose.
DNA damage and repair
Several bacterial species have been shown to accumulate DNA damage upon desiccation. Deinococcus radiodurans is extremely resistant to ionizing radiation. The functions necessary to survive ionizing radiation are also necessary to survive prolonged desiccation. Radiation resistance is considered to be an incidental consequence of the organism's evolutionary adaptation to dehydration, a common physiological stress in nature. The chromosomal DNA from desiccated D. radiodurans revealed increased DNA double-strand breaks. DNA double-strand breaks are repaired principally by a RecA-dependent recombination process that requires the presence of two genome copies. By this process D. radiodurans can survive thousands of double-strand breaks per cell.
Mycobacterium smegmatis mutant strains that are deficient in the ability to repair double-strand breaks by the non-homologous end joining (NHEJ) pathway are more sensitive to prolonged desiccation during stationary phase than wild-type strains. NHEJ appears to be the preferred pathway for repairing double-strand breaks caused by desiccation during the stationary phase. NHEJ can repair double-strand breaks even when only one chromosome is present in a cell.
Upon exposure to extreme dryness, Bacillus subtilis endospores acquire DNA-double strand breaks and DNA-protein crosslinks.
Broadcasting
In broadcast engineering, a desiccator may be used to pressurize the feedline of a high-power transmitter. Because it carries a large amount of energy from the transmitter to the antenna, the feedline must have low dielectric losses. Because it must also be lightweight so as not to overload the radio tower, air is often used as the dielectric. Since moisture can condense in these lines, desiccated air or nitrogen gas is pumped in. This pressure also keeps water or other dampness from coming into the line at any point along its length.
See also
Deposition (phase transition)
List of desiccants
Hygroscopy
Mummy
References
Broadcast engineering
Chemical processes
Patterned grounds | Desiccation | Chemistry,Engineering | 871 |
1,592,061 | https://en.wikipedia.org/wiki/Isophthalic%20acid | Isophthalic acid is an organic compound with the formula C6H4(CO2H)2. This colorless solid is an isomer of phthalic acid and terephthalic acid. The main industrial uses of purified isophthalic acid (PIA) are for the production of polyethylene terephthalate (PET) resin and for the production of unsaturated polyester resin (UPR) and other types of coating resins.
Isophthalic acid is one of three isomers of benzenedicarboxylic acid, the others being phthalic acid and terephthalic acid.
Crystalline isophthalic acid is built up from molecules connected by hydrogen bonds, forming infinite chains.
Preparation
Isophthalic acid is produced on the billion kilogram per year scale by oxidizing meta-xylene using oxygen. The process employs a cobalt-manganese catalyst. The world's largest producer of isophthalic acid is Lotte Chemical Corporation.
In the laboratory, chromic acid can be used as the oxidant. It also arises by fusing potassium meta-sulfobenzoate, or meta-bromobenzoate with potassium formate (terephthalic acid is also formed in the last case).
The barium salt, as its hexahydrate, is very soluble in water (a distinction between phthalic and terephthalic acids). Uvitic acid, 5-methylisophthalic acid, is obtained by oxidizing mesitylene or by condensing pyroracemic acid with baryta water.
Applications
Aromatic dicarboxylic acids are used as precursors (in the form of acyl chlorides) to commercially important polymers, e.g. the fire-resistant material Nomex. Mixed with terephthalic acid, isophthalic acid is used in the production of PET resins for drink plastic bottles and food packaging. The high-performance polymer polybenzimidazole is produced from isophthalic acid. Also, the acid is used as an important input to produce insulation materials.
References
Note: reference 2 refers to the ortho isomer. Accurate cites for the meta isomer not available.
External links
International Chemical Safety Card 0500
Dicarboxylic acids
Monomers
Benzoic acids | Isophthalic acid | Chemistry,Materials_science | 499 |
44,545,880 | https://en.wikipedia.org/wiki/Acousto-electric%20effect | The acousto-electric effect is a nonlinear phenomenon in which a propagating acoustic wave generates an electric current in a piezoelectric semiconductor. The generated electric current is proportional to the intensity of the acoustic wave and to the value of its electron-induced attenuation. The effect was theoretically predicted in 1953 by Parmenter. Its first experimental observation was reported in 1957 by Weinreich and White.
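One common quantitative statement of this proportionality is the Weinreich relation, j = −μαI/v_s, which links the acoustoelectric current density j to the carrier mobility μ, the electron-induced attenuation coefficient α, the acoustic intensity I and the sound velocity v_s. The sketch below evaluates it for a set of made-up, order-of-magnitude parameter values.

    def acoustoelectric_current_density(mobility, attenuation, intensity, v_sound):
        """Weinreich relation: j = -mu * alpha * I / v_s (SI units)."""
        return -mobility * attenuation * intensity / v_sound

    # Illustrative, made-up values for a piezoelectric semiconductor:
    mu = 0.03      # carrier mobility, m^2/(V*s)
    alpha = 500.0  # electron-induced attenuation coefficient, 1/m
    I = 1e4        # acoustic intensity, W/m^2
    v_s = 3_500.0  # sound velocity, m/s

    print(acoustoelectric_current_density(mu, alpha, I, v_s))  # about -42.9 A/m^2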
Valley acoustoelectric effect
Two varieties of the original acousto-electric effect, called the valley acoustoelectric effect and the valley acoustoelectric Hall effect, were theoretically predicted in 2019 by Kalameitsev, Kovalev, and Savenko. These effects also represent nonlinear phenomena of generation of electric current in two-dimensional materials, such as transition metal dichalcogenide monolayers or graphene, located on a piezoelectric substrate by a propagating acoustic wave. The generated electric currents are proportional to the intensity of the acoustic wave, and their directions are perpendicular to the acoustic wave vector.
See also
Physical acoustics
Semiconductors
Piezoelectricity
Elastic waves
References
Acoustics
Waves
Semiconductors | Acousto-electric effect | Physics,Chemistry,Materials_science,Engineering | 235 |
70,478,462 | https://en.wikipedia.org/wiki/Hybrid%20rocket%20fuel%20regression | Hybrid rocket fuel regression refers to the process by which the fuel grain of a hybrid-propellant rocket is converted from a solid to a gas that is combusted. It encompasses the regression rate, the distance that the fuel surface recedes over a given time, as well as the burn area, the surface area that is being eroded at a given moment.
Because the quantity of fuel being burned is important for the effectiveness of combustion in the engine, the regression rate plays a fundamental role in the design and firing of a hybrid engine. Unfortunately, hybrid fuel grains tend to have extremely slow regression, requiring very long combustion chambers or complex port designs that result in excess mass. Regression rate has also proven quite difficult to predict, with advanced models still providing significant error when applied at various scales and with differing fuels. Recent research has centered around the development of more accurate models coupled with research into techniques for increasing regression rate.
Regression rate
In contrast to solid rocket motors, hybrids exhibit significant dependence on the size of the port and low dependence on chamber pressure under normal conditions. Because they are dominated by thermodynamic forces, models typically emerge via a heat transfer calculation. Marxman provided the first attempt at an a priori model of hybrid regression, basing the rate on a heat transfer equilibrium calculation and assuming unity for the Prandtl and Lewis numbers. He eventually developed the equation

$$\dot{r} = \frac{0.036\,G^{0.8}}{\rho_f}\left(\frac{\mu}{x}\right)^{0.2}\left(\frac{u_e}{u_c}\,\frac{\Delta h}{h_v}\right)^{0.23}$$

using $G$ for instantaneous local mass flux, $x$ for distance along the port, $\rho_f$ for density of the fuel, $\mu$ for viscosity of the main-stream gas flow, $u_e/u_c$ for the velocity ratio between gas in the main stream and gas at the flame, and the final ratio comparing the enthalpy difference from flame to fuel surface ($\Delta h$) to the effective heat of vaporization ($h_v$) of the fuel. Though the model showed large errors when used to predict regression rate for an annular port, the strong dependence on flux was a key finding. Unfortunately, many components of the equation are extremely difficult to determine, so most engineers focused on developing models based on testing, fitting the regression rate to a power function by effectively combining most of the terms into one coefficient that is assumed constant throughout the burn. It was typically simplified into a basic equation by considering the average regression over time for a test, fitting coefficients $a$, $n$, and $m$ based on regression testing:

$$\dot{r} = a\,G^{n}\,x^{m}$$

where $G$ is the mass flux of propellant and $x$ is the distance along the fuel grain. Though Marxman's initial math indicates that $n = 0.8$ and $m = -0.2$, data typically ranges from 0.5 to 0.8 for $n$ and usually shows less dependence than predicted on $x$. By averaging the regression out over the length of the fuel grain, the commonly used space-time averaged regression equation is created (also typically using $G_{ox}$, the flux of oxidizer, for the flux term instead of $G$, the flux of oxidizer and fuel):

$$\bar{\dot{r}} = a\,G_{ox}^{n}$$

Many alternative equations for regression rate have been derived, usually constructed by reconsidering the assumptions made by Marxman but using the same diffusion-limited calculation approach. A model published by Karabeyoglu, for example, provides a more accurate approach by considering variation in the Prandtl number, accounting for entrance effects in the Reynolds number, and moving the flame sheet location to the stoichiometric location.
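As a sketch of how the fitted coefficients $a$ and $n$ of the space-time averaged power law are typically extracted, the following performs a least-squares fit in log-log space; the four data points are invented for illustration and not taken from any published test series.

```python
import numpy as np

# Fit r_dot = a * G_ox**n to hypothetical hot-fire test data by a
# straight-line fit in log-log space: log(r_dot) = log(a) + n*log(G_ox).
G_ox  = np.array([50.0, 80.0, 120.0, 200.0])   # oxidizer mass flux, kg/(m^2 s)
r_dot = np.array([0.60, 0.85, 1.10, 1.55])     # averaged regression rate, mm/s

n, log_a = np.polyfit(np.log(G_ox), np.log(r_dot), 1)   # slope, intercept
a = np.exp(log_a)
print(f"a = {a:.4f}, n = {n:.3f}")   # n typically lands in the 0.5-0.8 range
```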
Similar concepts can be seen in an extension by Whitmore, where the Prandtl number is approximated as 0.8 and the skin friction coefficient is recalculated to consider blowing and the flow development along the grain length.
Both improved formulas appear to show a better relationship with tested data.
Regression enhancements
Liquifying fuels
The simplest technique for increasing the regression rate is to use a different fuel. Solids with lower molecular masses tend to have lower viscosities, a quality which generally correlates with a decrease in the required energy for gasification. Taken to the extreme, a new phenomenon actually emerges, where a melt layer at the surface of the fuel allows droplets to be entrained as oxidizer flows past. At the flux levels commonly seen in hybrid rocketry, this entrainment actually accounts for the largest portion of regression (dominating vaporization).
The concept was originally discovered during a brief research period in which AFRL and Orbital Technologies Corporation (ORBITEC) tested several cryogenic fuels in an effort to increase specific impulse. Using solidified pentane, they found regression rates vastly increased over traditional hybrid fuels. Several tests with paraffin also foreshadowed modern liquifying rocket technology, with the Peregrine rocket among others leading the way for further development.
The alternative regression method does introduce some other issues, mainly a reduction in combustion efficiency. Because of the large particle size, the entrained droplets may not be fully consumed before flowing out of the nozzle and leaving the engine. Indeed, paraffin has a tendency to slough off large fragments, greatly reducing combustion efficiency and potentially contributing to combustion instability.
Complex geometry
Although it is much harder to predict, complex grain geometries offer another technique for increasing regression rate and burn area in order to greatly increase fuel flow.
Using non-circular port cross sections increases the area exposed to the oxidizer to be gasified, especially at the start of the burn. However, as the fuel continues to regress it will begin to round out the shape because regression generally occurs normal to the fuel’s surface, and corners tend to regress faster. Generally, this will cause the O/F ratio to shift away from stoichiometric.
Some of the first attempts at complex geometries were wagon wheel designs developed by the United Technology Center. Though they massively increase fuel flow, wagon wheels require that a significant portion of fuel is left behind, or the structure could break apart.
More recently, helical designs have been used to create a centripetal component of flow, reducing blowing and providing greater friction between the oxidizer and fuel in order to increase convection. Analysis at the University of Utah concluded that regression rates generally increased by at least a factor of two, up to even a factor of four. In general, helical regression rate is modeled by several multiplicative adjustments to the skin friction coefficient and to the blowing coefficient.
Burn area
The burn area refers to the surface exposed to the heat of the combustion chamber, and it is just as pivotal to the regression of the rocket as the regression rate itself, since the volume flow rate of fuel is usually given by the regression rate multiplied by the burn area. Depending on the complexity of the grain geometry, it can also be quite difficult to calculate. In its simplest form, a tube-shaped fuel grain has a burn area of $\pi D L$ (the port circumference multiplied by the grain length), added to the exposed area on both ends. However, a star-shaped fuel grain could require the use of CAD or other geometric software to determine the surface area, particularly as the surface regresses along the normals, often creating highly irregular geometry.
In fact, the process is even slightly more complicated because corners protruding into the combustion chamber will regress more quickly than their circular counterparts, since they are exposed to heat on both sides. To model the problem, Bath developed a technique of iteratively blurring pixels and removing those that fall below a certain threshold of brightness. Using the image processing to generate a table of surface area outputs for a given volume, it can easily be implemented into a model for regression of the fuel grain over time.
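A minimal sketch of that blur-and-threshold idea on an invented square port is below; the grid resolution, blur kernel, threshold, and step count are arbitrary demonstration choices rather than the settings of Bath's actual tool.

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge padding."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

grid = np.zeros((200, 200))
grid[90:110, 90:110] = 1.0            # initial square port (1 = void, 0 = fuel)

for step in range(30):                # each step ~ one regression increment
    grid = (blur(grid) > 0.4).astype(float)

# fuel cells touching the port approximate the 2D burn perimeter
boundary = np.count_nonzero((blur(grid) > 0) & (grid == 0))
print(f"port cells: {int(grid.sum())}, boundary cells: {boundary}")
```

Because the blur spreads intensity isotropically before the threshold is applied, sharp corners of the port round off as the region grows, mirroring the faster regression of protruding corners described above.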
Unfortunately, most models still require an empirical factor that depends on variations in fuel and oxidizer flow paths for different port geometries. In the case of the image blurring model, predictions of regression are also dependent on the settings used in the image processing program.
Models of burn area based on 2D cross sections lose another component of accuracy because they assume regression in the radial direction. For a helical grain, for example, the burn area predicted by Bath's model would be incorrect.
Regression testing
Because of the lack of accurate prediction methods, each system should generally be tested in full configuration to accurately determine the regression rate before flight. Typically, data points for several identical grains tested under different flux conditions are fitted to the space-time averaged power function. Initially, methods for fitting the power function were often left ambiguous in publications due to variation in the possible calculations for average mass flux, making it difficult to compare findings. A now commonly-referenced study by Karabeyoglu indicates that the easiest measurement, the port diameter average, also provides the most accurate results.
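A minimal sketch of the port-diameter-average reduction for a single cylindrical-port test follows; all input numbers are hypothetical.

```python
import math

d_initial = 0.030   # initial port diameter, m
d_final   = 0.046   # final port diameter (e.g. inferred from mass loss), m
t_burn    = 8.0     # burn time, s
m_dot_ox  = 0.25    # oxidizer mass flow rate, kg/s

# space-time averaged regression rate
r_dot_avg = (d_final - d_initial) / (2.0 * t_burn)

# oxidizer flux evaluated at the average port diameter
d_avg    = 0.5 * (d_initial + d_final)
G_ox_avg = m_dot_ox / (math.pi * d_avg ** 2 / 4.0)

print(f"r_dot = {r_dot_avg * 1e3:.3f} mm/s at G_ox = {G_ox_avg:.1f} kg/(m^2 s)")
```

Each test reduced this way yields one ($G_{ox}$, $\dot{r}$) point; a set of such points is what gets fitted to the power function.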
References
Hybrid-propellant rockets
Rocket engines | Hybrid rocket fuel regression | Technology | 1,712 |
1,869,857 | https://en.wikipedia.org/wiki/Accessory%20pigment | Accessory pigments are light-absorbing compounds, found in photosynthetic organisms, that work in conjunction with chlorophyll a. They include other forms of this pigment, such as chlorophyll b in green algal and vascular ("higher") plant antennae, while other algae may contain chlorophyll c or d. In addition, there are many non-chlorophyll accessory pigments, such as carotenoids or phycobiliproteins, which also absorb light and transfer that light energy to photosystem chlorophyll. Some of these accessory pigments, in particular the carotenoids, also serve to absorb and dissipate excess light energy, or work as antioxidants. The large, physically associated group of chlorophylls and other accessory pigments is sometimes referred to as a pigment bed.
The different chlorophyll and non-chlorophyll pigments associated with the photosystems all have different absorption spectra, either because the spectra of the different chlorophyll pigments are modified by their local protein environment or because the accessory pigments have intrinsic structural differences. The result is that, in vivo, a composite absorption spectrum of all these pigments is broadened and flattened such that a wider range of visible and infrared radiation is absorbed by plants and algae. Most photosynthetic organisms do not absorb green light well, thus most remaining light under leaf canopies in forests or under water with abundant plankton is green, a spectral effect called the "green window". Organisms such as some cyanobacteria and red algae contain accessory phycobiliproteins that absorb green light reaching these habitats.
In aquatic ecosystems, it is likely that the absorption spectrum of water, along with gilvin and tripton (dissolved and particulate organic matter, respectively), determines phototrophic niche differentiation. The six shoulders in the light absorption of water between wavelengths 400 and 1100 nm correspond to troughs in the collective absorption of at least twenty diverse species of phototrophic bacteria. Another effect is due to the overall trend for water to absorb low frequencies, while gilvin and tripton absorb higher ones. This is why open ocean appears blue and supports yellow species such as Prochlorococcus, which contains divinyl-chlorophyll a and b. Synechococcus, colored red with phycoerythrin, is adapted to coastal bodies, while phycocyanin allows Cyanobacteria to thrive in darker inland waters.
See also
Action spectrum
References
Photosynthesis | Accessory pigment | Chemistry,Biology | 545 |
7,331,527 | https://en.wikipedia.org/wiki/Dispensary | A dispensary is an office in a school, hospital, industrial plant, or other organization that dispenses medications, medical supplies, and in some cases even medical and dental treatment. In a traditional dispensary set-up, a pharmacist dispenses medication per the prescription or order form. The English term originated from the medieval Latin noun and is cognate with the Latin verb dispensare, 'to distribute'.
The term also refers to legal cannabis dispensaries.
The term also dates from the Victorian era: in 1862 it was used in the folk song "Blaydon Races", which differentiated a dispensary from a doctor's surgery and an infirmary. The advent of huge industrial plants in the late 19th and early 20th centuries, such as large steel mills, created a demand for in-house first responder services, including firefighting, emergency medical services, and even primary care that were closer to the point of need, under closer company control, and in many cases better capitalized than any services that the surrounding town could provide. In such contexts, company doctors and nurses were regularly on duty or on call.
Electronic dispensaries are designed to ensure efficient and consistent dispensing of excipient and active ingredients in a secure data environment with full audit traceability. A standard dispensary system consists of a range of modules such as manual dispensing, supervisory, bulk dispensing, recipe management and interfacing with external systems. Such a system might dispense much more than just medical-related products, such as alcohol, tobacco or vitamins and minerals.
Primary care (Kenya)
In Kenya, a dispensary is a small outpatient health facility, usually managed by a registered nurse. It provides the most basic primary healthcare services to rural communities, e.g. childhood immunization, family planning, wound dressing and management of common ailments like colds, diarrhea and simple malaria. The nurses report to the nursing officer at the health center, where they refer patients with complicated diseases to be managed by clinical officers.
Primary care (India)
In India, a dispensary refers to a small setup with basic medical facilities where a doctor can provide a primary level of care.
It does not have a hospitalization facility and is generally owned by a single doctor.
In remote areas of India where hospital facilities are not available, dispensaries will be available.
Tuberculosis (Turkey)
In Turkey, the term dispensary is almost always used in reference to tuberculosis dispensaries () established across the country under a programme to eliminate tuberculosis initiated in 1923, the same year the country was founded. Although more than a hundred such dispensaries continue to operate as of 2023, they had been largely supplanted by hospitals by the end of the 20th century with increased access to healthcare.
Alcohol (USA)
The term dispensary in the United States was used to refer to government agencies that sell alcoholic beverages, particularly in the states of Idaho and South Carolina.
Cannabis
North America
In Arizona, British Columbia, California, Colorado, Connecticut, Illinois, Maine, Massachusetts, Oregon, Michigan, New Jersey, New Mexico, New York, Rhode Island, Ontario, Quebec, and Washington, medical cannabis is sold in specially designated stores called cannabis dispensaries or "compassion clubs". These clubs are for members or patients only, unless legal cannabis has already passed in the state or province in question. In Canada dispensaries are far less abundant than in the USA; most Canadian dispensaries are in British Columbia and Ontario.
Uruguay
In 2013 Uruguay became the first country to legalize marijuana cultivation, sale and consumption. The government is building a network of dispensaries that are meant to help track marijuana sales and consumption. The move was meant to decrease the role of the criminal world in its distribution and sale.
See also
References
Pharmacy | Dispensary | Chemistry | 820 |
30,876,676 | https://en.wikipedia.org/wiki/File%20Retrieval%20and%20Editing%20System | The File Retrieval and Editing SyStem, or FRESS, was a hypertext system developed at Brown University starting in 1968 by Andries van Dam and his students, including Bob Wallace. It was the first hypertext system to run on readily available commercial hardware and OS. It is also possibly the first computer-based system to have had an "undo" feature for quickly correcting small editing or navigational mistakes.
Features
FRESS was a continuation of work done on van Dam's previous hypertext system, HES, developed the previous year. FRESS ran on an IBM 360-series mainframe running VM/CMS. It improved on HES's capabilities in many ways, inspired by Douglas Engelbart's NLS. FRESS implemented one of the first virtual terminal interfaces, in order to provide device-independence. It could run on various terminals from dumb typewriters up to the Imlac PDS-1 graphical minicomputer. On the PDS-1, it supported multi-window WYSIWYG editing and graphics display. The PDS-1 used a light pen, not a mouse, and the light pen could be "clicked" using a foot-pedal.
FRESS allowed multiple users to collaborate on a set of documents, which could be of arbitrary size, and (unlike prior systems) were not laid out in lines until the moment of display. FRESS users could insert a marker at any location within a text document and link the marked selection to any other point either in the same document or a different document. This was much like the World Wide Web of today, but without the need for the anchor hyperlinks that HTML requires. Links were also bi-directional, unlike in today's web.
FRESS had two types of links: tags and "jumps". Tags were links to information such as references or footnotes, while "jumps" were links that could take the user through many separate but related documents. FRESS also had the ability to assign keywords to links or text blocks to assist with navigation. Keywords could be used to select which sections to display or print, which links would be available to the user, and so on. Multiple "spaces" were also automatically maintained, including an automatic table of contents and indexes to keywords, document structures, and so on. Users could view a visualization of the "structure space" of the texts and cross-reference links, and could directly rearrange the structure space, and automatically update the links to match.
FRESS was essentially a text-based system, and editing links was a fairly complex task unless the user had access to a PDS-1 terminal, in which case each end could be selected with the light pen and a link created with a couple of keystrokes. FRESS provided no method for knowing where the user was within a collection of documents.
Usage
FRESS was used as educational technology for several classes at Brown, probably being the first hypertext system used in education. Most notably it was used for teaching an introduction to poetry in 1975 and 1976. In those days it was difficult to convince faculty in the humanities that computers could be useful in their teaching or work, or to convince the people funding the computer center that writing was an appropriate use of the expensive computers of the time. But English Professor Robert Scholes and two teaching assistants worked with the FRESS team to run a small experiment funded by the National Endowment for the Humanities. They saw hypertext as an attractive new way to present poetry, which is often highly reflexive and full of allusions and references to other works. They also wanted to help students directly interact with the course material, and engage with other students and instructors to collectively add meaning to it. There was only a single Imlac terminal, which students signed up for time on, so only 12 students per course could use FRESS. The students in the section which read and commented on the material via FRESS wrote about three times as much as students in control groups, and seemed to benefit from the use of the system, but given the small number of students in the study, the uncertainty in the results is high. A short film was made to document the project, which was rediscovered and shown as part of NEH's 50th anniversary celebration.
FRESS was for many years the word processor of choice at Brown and a small number of other sites. It was used for typesetting many books, including those by Roderick Chisholm, Robert Coover and Rosmarie Waldrop. For example, in the Preface to Person and Object Chisholm writes "The book would not have been completed without the epoch-making File Retrieval and Editing System..."
Through the diligent work of Alan Hecht, FRESS survived a major OS upgrade around 1978. Around the same time Jonathan Prusky wrote thorough user documentation for the system as well, in The FRESS Resource Manual. Although support had to be withdrawn a few years later for lack of resources, FRESS, while rarely used, still runs on the current Brown mainframe.
For the ACM Hypertext '89 conference, David Durand reverse-engineered the PDS-1 terminal and created an emulator for the Apple Macintosh. He and Steven DeRose, the last FRESS project director, recovered the old poetry class databases and gave live demos on this and a few later occasions.
Documentary Film
Andries van Dam: Hypertext: an Educational Experiment in English and Computer Science at Brown University. Brown University, Providence, RI, U.S. 1974, Run time 15:16, , Full Movie on the Internet Archive
References
External links
Video documenting FRESS in use at Brown University poetry class, 1976
from the Cyberart Database
File Retrieval and Editing System by Steven DeRose
A Half-Century of Hypertext at Brown: A Symposium, Brown University Department of Computer Science, 23 May 2019
Brown University
Hypertext
History of human–computer interaction
Computer-related introductions in 1968 | File Retrieval and Editing System | Technology | 1,216 |
240,568 | https://en.wikipedia.org/wiki/Stress%E2%80%93strain%20curve | In engineering and materials science, a stress–strain curve for a material gives the relationship between stress and strain. It is obtained by gradually applying load to a test coupon and measuring the deformation, from which the stress and strain can be determined (see tensile testing). These curves reveal many of the properties of a material, such as the Young's modulus, the yield strength and the ultimate tensile strength.
Definition
Generally speaking, curves that represent the relationship between stress and strain in any form of deformation can be regarded as stress–strain curves. The stress and strain can be normal, shear, or a mixture, and can also be uniaxial, biaxial, or multiaxial, and can even change with time. The form of deformation can be compression, stretching, torsion, rotation, and so on. If not mentioned otherwise, stress–strain curve typically refers to the relationship between axial normal stress and axial normal strain of materials measured in a tension test.
Stages
A schematic diagram for the stress–strain curve of low carbon steel at room temperature is shown in figure 1. There are several stages showing different behaviors, which suggests different mechanical properties. Note that a material can lack one or more of the stages shown in figure 1, or have totally different stages.
Linear elastic region
The first stage is the linear elastic region. The stress is proportional to the strain, that is, obeys the general Hooke's law, and the slope is Young's modulus. In this region, the material undergoes only elastic deformation. The end of the stage is the initiation point of plastic deformation. The stress component of this point is defined as yield strength (or upper yield point, UYP for short).
Strain hardening region
The second stage is the strain hardening region. This region starts as the stress goes beyond the yielding point, reaching a maximum at the ultimate strength point, which is the maximal stress that can be sustained and is called the ultimate tensile strength (UTS). In this region, the stress mainly increases as the material elongates, except that for some materials such as steel, there is a nearly flat region at the beginning. The stress of the flat region is defined as the lower yield point (LYP) and results from the formation and propagation of Lüders bands. Explicitly, heterogeneous plastic deformation forms bands at the upper yield strength, and these deformation-carrying bands spread along the sample at the lower yield strength. After the sample is again uniformly deformed, the increase of stress with the progress of extension results from work strengthening, that is, dense dislocations induced by plastic deformation hamper the further motion of dislocations. To overcome these obstacles, a higher resolved shear stress must be applied. As the strain accumulates, work strengthening intensifies, until the stress reaches the ultimate tensile strength.
Necking region
The third stage is the necking region. Beyond the tensile strength, a neck forms where the local cross-sectional area becomes significantly smaller than the average. The necking deformation is heterogeneous and reinforces itself as the stress concentrates at the reduced section. Such positive feedback leads to rapid development of the neck and then to fracture. Note that although the pulling force is decreasing, the work strengthening is still progressing: the true stress keeps growing, but the engineering stress decreases because the shrinking section area is not taken into account. This region ends in fracture, after which the percent elongation and the reduction in cross-sectional area can be calculated.
Classification
Some common characteristics among the stress–strain curves can be distinguished with various groups of materials and, on this basis, to divide materials into two broad categories; namely, the ductile materials and the brittle materials.
Ductile materials
Ductile materials, including structural steel and many other metals, are characterized by their ability to yield at normal temperatures. For example, low-carbon steel generally exhibits a very linear stress–strain relationship up to a well-defined yield point. The linear portion of the curve is the elastic region, and the slope of this region is the modulus of elasticity or Young's modulus. Plastic flow initiates at the upper yield point and continues at the lower yield point.
The appearance of the upper yield point is associated with the pinning of dislocations in the system. Permanent deformation occurs once dislocations are forced to move past pinning points. Initially, this permanent deformation is non-uniformly distributed along the sample. During this process, dislocations escape from Cottrell atmospheres within the material. The resulting slip bands appear at the lower yield point and propagate along the gauge length, at constant stress, until the Lüders strain is reached, and deformation becomes uniform.
Beyond the Lüders strain, the stress increases due to strain hardening until it reaches the ultimate tensile stress. During this stage, the cross-sectional area decreases uniformly along the gauge length, due to the incompressibility of plastic flow (not because of the Poisson effect, which is an elastic phenomenon). Then a process of necking begins, which ends in a 'cup and cone' fracture characteristic of ductile materials.
The appearance of necking in ductile materials is associated with geometrical instability in the system. Due to the natural inhomogeneity of the material, it is common to find some regions with small inclusions or porosity, within the material or on its surface, where strain will concentrate, leading to a local reduction in cross-sectional area. For strain less than the ultimate tensile strain, the increase of work-hardening rate in this region will be greater than the area reduction rate, thereby making this region harder to deform than others, so that the instability will be removed, i.e. the material increases in homogeneity before reaching the ultimate strain. However, beyond this, the work hardening rate will decrease, such that a region with smaller area is weaker than nearby regions, so reduction in area will concentrate in this region and the neck will become more and more pronounced until fracture. After the neck has formed in the material, further plastic deformation is concentrated in the neck while the remainder of the material undergoes elastic contraction owing to the decrease in tensile force.
The stress–strain curve for a ductile material can be approximated using the Ramberg–Osgood equation. This equation is straightforward to implement, and only requires the material's yield strength, ultimate strength, elastic modulus, and percent elongation.
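As an illustration only — assuming the common 0.2%-offset form of the Ramberg–Osgood equation, $\epsilon = \sigma/E + 0.002\,(\sigma/\sigma_y)^n$, with round placeholder constants rather than properties of a specific alloy:

```python
def ramberg_osgood_strain(stress, E, yield_strength, n):
    """Total strain = elastic part + 0.2%-offset plastic part."""
    return stress / E + 0.002 * (stress / yield_strength) ** n

# generic structural-steel-like placeholders: E = 200 GPa, yield = 250 MPa
for s in (100e6, 200e6, 250e6):                       # stress in Pa
    eps = ramberg_osgood_strain(s, E=200e9, yield_strength=250e6, n=15)
    print(f"sigma = {s / 1e6:5.0f} MPa -> strain = {eps:.5f}")
```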
Toughness
Materials that are both strong and ductile are classified as tough. Toughness is a material property defined as the area under the stress-strain curve.
Toughness can be determined by integrating the stress–strain curve. It is the energy of mechanical deformation per unit volume prior to fracture. The explicit mathematical description is:

$$\frac{\text{energy}}{\text{volume}} = \int_0^{\epsilon_f} \sigma\, d\epsilon$$

where $\epsilon$ is strain, $\epsilon_f$ is the strain upon failure, and $\sigma$ is stress.
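A minimal numerical sketch: integrating an invented stress–strain data set with the trapezoidal rule to obtain toughness as energy per unit volume.

```python
import numpy as np

strain = np.array([0.000, 0.001, 0.002, 0.010, 0.050, 0.120, 0.180])
stress = np.array([0.0,   200e6, 350e6, 400e6, 480e6, 510e6, 450e6])  # Pa

# trapezoidal rule: sum of 0.5*(sigma_i + sigma_(i+1))*(eps_(i+1) - eps_i)
toughness = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
print(f"toughness ~ {toughness / 1e6:.1f} MJ/m^3")   # J/m^3 -> MJ/m^3
```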
Brittle materials
Brittle materials, which include cast iron, glass, and stone, are characterized by the fact that rupture occurs without any noticeable prior change in the rate of elongation, sometimes they fracture before yielding.
Brittle materials such as concrete or carbon fiber do not have a well-defined yield point, and do not strain-harden. Therefore, the ultimate strength and breaking strength are the same. Typical brittle materials like glass do not show any plastic deformation but fail while the deformation is elastic. One of the characteristics of a brittle failure is that the two broken parts can be reassembled to produce the same shape as the original component as there will not be a neck formation like in the case of ductile materials. A typical stress–strain curve for a brittle material will be linear. For some materials, such as concrete, tensile strength is negligible compared to the compressive strength and it is assumed to be zero for many engineering applications. Glass fibers have a tensile strength greater than that of steel, but bulk glass usually does not. This is because of the stress intensity factor associated with defects in the material. As the size of the sample gets larger, the expected size of the largest defect also grows.
See also
Elastomers
Plane strain compression test
Strength of materials
Stress–strain index
Tensometer
Universal testing machine
References
Elasticity (physics)
Structural analysis | Stress–strain curve | Physics,Materials_science,Engineering | 1,683 |
22,759,890 | https://en.wikipedia.org/wiki/EDTMP | EDTMP or ethylenediamine tetra(methylene phosphonic acid) is a phosphonic acid. It has chelating and anti corrosion properties. EDTMP is the phosphonate analog of EDTA. It is classified as a nitrogenous organic polyphosphonic acid.
Properties and applications
EDTMP is normally delivered as its sodium salt, which exhibits good solubility in water.
EDTMP is used in water treatment as an antiscaling and anticorrosion agent; its corrosion inhibition is 3–5 times better than that of inorganic polyphosphate. It can degrade to aminomethylphosphonic acid. It shows excellent scale inhibition at temperatures up to 200 °C. It functions by chelating many metal ions.
The anti-cancer drug Samarium (153Sm) lexidronam is also derived from EDTMP.
References
Phosphonic acids
Chelating agents
Tertiary amines
Ethyleneamines
Corrosion inhibitors
Water treatment
Hexadentate ligands | EDTMP | Chemistry,Engineering,Environmental_science | 206 |
3,125,205 | https://en.wikipedia.org/wiki/Hierarchical%20state%20routing | Hierarchical state routing (HSR), proposed in Scalable Routing Strategies for Ad Hoc Wireless Networks by Iwata et al. (1999), is a typical example of a hierarchical routing protocol.
HSR maintains a hierarchical topology, where elected clusterheads at the lowest level become members of the next higher level. On the higher level, superclusters are formed, and so on. Nodes which want to communicate to a node outside of their cluster ask their clusterhead to forward their packet to the next level, until a clusterhead of the other node is in the same cluster. The packet then travels down to the destination node.
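A toy sketch of that up-then-down forwarding, with hierarchical addresses written as clusterhead paths from the top level down (names invented; HSR defines the hierarchy itself, not this representation):

```python
def route(src_addr, dst_addr):
    """Hop sequence: up through src's clusterheads to the deepest level
    shared with dst, then down dst's clusterhead path to the destination."""
    common = 0                              # length of shared address prefix
    for a, b in zip(src_addr, dst_addr):
        if a != b:
            break
        common += 1
    up   = list(reversed(src_addr[common:-1]))   # climb own clusterheads
    down = list(dst_addr[common:])               # descend dst's clusterheads
    return up + down

src = ("top", "clusterA", "node1")
dst = ("top", "clusterB", "node7")
print(route(src, dst))   # ['clusterA', 'clusterB', 'node7']
```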
Furthermore, HSR proposes to cluster nodes in a logical way instead of a geographical way: members of the same company or of the same battlegroup are clustered together, assuming they will communicate frequently within the logical cluster.
HSR does not specify how a cluster is to be formed.
Routing algorithms | Hierarchical state routing | Technology | 189 |
2,436,892 | https://en.wikipedia.org/wiki/Mitrula%20paludosa | Mitrula paludosa (syn. Mitrula phalloides), the swamp beacon (US) or bog beacon, (UK) is a species of fungus. It is inedible.
Habitat
These mushrooms are found in swamps and bogs across North America in the cooler climates of south-eastern Canada, New England south to the Mason–Dixon line, and much of the mid-western United States. Also present in Europe from the British Isles to Eastern Europe.
On the West Coast of the United States, the similar-looking Mitrula elegans occurs.
Identification
Many related species of Mitrula look identical without microscopic study. The cap or club is yellow with a white stalk (possibly with some pink coloration). It is around 2–3 mm wide, and up to 4 cm tall.
References
External links
Images of the bog beacon in the UK
Bog beacon locations in Northern Ireland
Photographs with many European language translations of the name
Helotiales
Fungi of Europe
Fungi of North America
Inedible fungi
Fungus species
Taxa named by Elias Magnus Fries | Mitrula paludosa | Biology | 215 |
21,327,215 | https://en.wikipedia.org/wiki/K-factor%20%28fire%20protection%29 | In fire protection engineering, the K-factor formula is used to calculate the volumetric flow rate from a nozzle. Spray nozzles can for example be fire sprinklers or water mist nozzles, hose reel nozzles, water monitors and deluge fire system nozzles.
Calculation
K-factors are usually calculated in metric units internationally.
Metric units
Using metric units, the volumetric flow rate of a nozzle is given by $q = K\sqrt{p}$, where $q$ is the flow rate in litres per minute (l/min), $p$ is the pressure at the nozzle in bar, and $K$ is the K-factor, given in units of $\mathrm{l\,min^{-1}\,bar^{-0.5}}$.
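A trivial sketch of that relation (the K80 head and the 1.4 bar operating pressure are illustrative values):

```python
import math

def flow_rate(k_factor, pressure_bar):
    """q in l/min from q = K * sqrt(p), K in l/(min*bar^0.5), p in bar."""
    return k_factor * math.sqrt(pressure_bar)

print(f"q = {flow_rate(80.0, 1.4):.1f} l/min")   # ~94.7 l/min
```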
US customary units
K-Factors have also previously been calculated and published using the United States customary units of pound per square inch (psi) and gallon per minute (gpm). Within the United States, US measurements are still often used instead of metric.
Unit confusion
Care should be exercised not to intermix K-factors from metric and Imperial/US units, as the resulting factors are not equivalent or interchangeable. In case of mix-ups, results can be catastrophic.
References
Fire protection | K-factor (fire protection) | Engineering | 232 |
45,318,796 | https://en.wikipedia.org/wiki/Russian%20Geometric%20Kernel | Russian Geometric Kernel (also known as RGK) is a proprietary geometric modeling kernel developed by several Russian software companies, most notably Top Systems and LEDAS, and supervised by STANKIN (State Technology University). It was written in C++.
History
The kernel was developed in 2011–2013 under the supervision of “Stankin” Moscow State Technical University within the framework of the project for “Developing Licensed Home 3D-Kernel”, funded by the Ministry of Industry and Trade of the Russian Federation.
The kernel was reportedly completed by 2013, with no further news on it available as of the end of 2016.
Architecture
RGK models geometry using boundary representation (B-rep), though other descriptions are used when necessary. For instance, to optimize the speed of the kernel's functions, and to ensure precise storage and computation of the model, canonical objects and NURBS curves and surfaces are used. To solve tasks associated with complex operations (such as hole-covering surfaces, N-sided patches, and blending surfaces in complex cases), special types of curves and surfaces are used by the kernel.
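As an illustration of the NURBS machinery mentioned above — not RGK code — the following evaluates a rational Bézier curve, the simplest special case of a NURBS curve, by running de Casteljau's algorithm on homogeneous (weighted) control points:

```python
def rational_bezier(ctrl_pts, weights, t):
    """Point on a 2D rational Bezier curve at parameter t in [0, 1]."""
    # lift control points to homogeneous coordinates (w*x, w*y, w)
    pts = [(w * x, w * y, w) for (x, y), w in zip(ctrl_pts, weights)]
    while len(pts) > 1:                       # de Casteljau reduction
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    X, Y, W = pts[0]
    return X / W, Y / W                       # project back to 2D

# classic identity: a quarter circle as a rational quadratic Bezier
pts, wts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)], [1.0, 2 ** -0.5, 1.0]
x, y = rational_bezier(pts, wts, 0.5)
print(x, y, x * x + y * y)                    # point lies on the unit circle
```

The exact representation of the circular arc here is one reason kernels keep canonical objects and rational forms side by side.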
Low-level and high-level operations
Kernel functions can be grouped under another criterion: low-level and high-level ones. The low-level operations include constructing curves and surfaces (canonical objects, NURBS, offset curves and surfaces, and so on), projecting points and curves on surfaces, intersecting and extending curves and surfaces, modifying topology (including Euler operations), and so on. Low-level operations enable application developers to modify kernel data in a most flexible manner, practically operating in manual mode.
High-level operations include operations that are standard for body generation, and Boolean operations on bodies (union, subtract, and intersect). It can be used with solid and surface bodies, and with combinations of the two.
Platforms
The geometric kernel supports 32- and 64-bit architectures and the Windows and Linux platforms. It can be compiled with any C++ compiler that implements the C++11 standard.
References
External links
Official RGK web page
Computer-aided design software
Computer-aided engineering software
3D graphics software
Computer-aided design | Russian Geometric Kernel | Engineering | 443 |
75,250,402 | https://en.wikipedia.org/wiki/Resigratinib | Resigratinib (KIN-3248) is an experimental anticancer medication which acts as a fibroblast growth factor receptor inhibitor (FGFRi) and is in early stage human clinical trials.
See also
Enbezotinib
Pralsetinib
Rebecsinib
Selpercatinib
Zeteletinib
References
Enzyme inhibitors
Experimental monoclonal antibodies
Benzimidazoles
Pyrazolecarboxamides
Pyrrolidines
Enols
Methoxy compounds
Secondary amines
Fluoroarenes
Cyclopropyl compounds | Resigratinib | Chemistry | 119 |
42,591,967 | https://en.wikipedia.org/wiki/Monolith%20%28catalyst%20support%29 | Monolithic catalyst supports are extruded structures that are the core of many catalytic converters, most diesel particulate filters, and some catalytic reactors. Most catalytic converters are used for vehicle emissions control. Stationary catalytic converters can reduce air pollution from fossil fuel power stations.
Properties
Monoliths for automotive catalytic converters are made of a ceramic that contains a large proportion of synthetic cordierite, 2MgO•2Al2O3•5SiO2, which has a low coefficient of thermal expansion.
Each monolith contains thousands of parallel channels or holes, which are defined by many thin walls, in a honeycomb structure. The channels can be square, hexagonal, round, or other shapes. The hole density may be from 30 to 200 per cm2, and the separating walls can be 0.05 to 0.3 mm. The many small holes have a much larger surface area than one large hole. High surface area facilitates catalytic reaction or filtration. The open spaces in the cross-sectional area are 72 to 87% of the frontal area, so resistance to the flow of gases through the holes is low, which minimizes energy consumed forcing gases through the structure.
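The quoted open-area percentages follow directly from the cell geometry. A minimal sketch for square cells, assuming the standard pitch relation (pitch = 1/√N) and OFA = ((pitch − wall)/pitch)², with inputs chosen to fall inside the ranges above:

```python
import math

def open_frontal_area(cells_per_cm2, wall_mm):
    """Geometric open frontal area fraction of a square-cell monolith."""
    pitch_mm = 10.0 / math.sqrt(cells_per_cm2)     # cell pitch in mm
    return ((pitch_mm - wall_mm) / pitch_mm) ** 2

for n, w in [(62, 0.16), (93, 0.11)]:              # ~400 and ~600 cpsi analogues
    print(f"{n} cells/cm^2, {w} mm walls -> OFA = {open_frontal_area(n, w):.0%}")
```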
The monolith is a substrate that supports a catalyst. After the monolith is complete, a washcoat is applied that deposits oxides and catalyst(s) (most commonly platinum, palladium, and/or rhodium) on the walls of the holes.
Alternative structures include corrugated metal and a packed bed of coated pellets or other shapes.
Uses
Diesel particulate filters (DPF)
Catalytic incineration
Catalyst support for chemical processes
Vehicle emissions control
References
Catalysis | Monolith (catalyst support) | Chemistry | 344 |
521,753 | https://en.wikipedia.org/wiki/Flashover | A flashover is the near-simultaneous ignition of most of the directly exposed combustible material in an enclosed area. When certain organic materials are heated, they undergo thermal decomposition and release flammable gases. Flashover occurs when the majority of the exposed surfaces in a space are heated to their autoignition temperature and emit flammable gases (see also flash point). Flashover normally occurs at or for ordinary combustibles and an incident heat flux at floor level of .
An example of flashover is the ignition of a piece of furniture in a domestic room. The fire involving the initial piece of furniture can produce a layer of hot smoke, which spreads across the ceiling in the room. The hot buoyant smoke layer grows in depth, as it is bounded by the walls of the room. The radiated heat from this layer heats the surfaces of the directly exposed combustible materials in the room, causing them to give off flammable gases, via pyrolysis. When the temperatures of the evolved gases become high enough, these gases will ignite throughout their extent.
Types
A lean flashover (sometimes called rollover) is the ignition of the gas layer under the ceiling, leading to total involvement of the compartment. The air–fuel ratio is at the bottom region of the flammability range (i.e. lean).
A rich flashover occurs when the flammable gases are ignited while at the upper region of the flammability range (i.e. rich). This can happen in rooms where the fire subsided because of lack of oxygen. The ignition source can be a smouldering object, or the stirring up of embers by the air track. Such an event is known as backdraft.
A delayed flashover occurs when the colder gray smoke cloud ignites after congregating outside of its room of origin. This results in a volatile situation, and if the ignition occurs at the ideal mixture, the result can be a violent smoke gas explosion. This is referred to as smoke explosion or fire gas ignition depending on the severity of the combustion process.
A hot rich flashover occurs when the hot smoke with flammable gas ratio above the upper limit of flammability range and temperature higher than the ignition temperature leaves the compartment. Upon dilution with air it can spontaneously ignite, and the resultant flame can propagate back into the compartment, resulting in an event similar to a rich flashover. The common definition of this process is known as auto-ignition, which is another form of fire gas ignition.
Dangers
Flashover is one of the most feared phenomena among firefighters. Firefighters are taught to recognize the signs of imminent rollovers and flashovers and to avoid backdrafts. For example, there are certain routines for opening closed doors to buildings and compartments on fire, known as door entry procedures, ensuring fire crew safety where possible.
Indicators
The following are some of the signs that firefighters are looking for when they attempt to determine whether a flashover is likely to occur.
Fast dark smoke.
The neutral plane is moving down towards the floor. In this situation, a flashover is plausible.
All directly exposed combustible materials are showing signs of pyrolysis.
"Rollover" or tongues of fire appear (known as "angel fingers" to firefighters) as gases reach their auto-ignition temperatures.
There is a rapid build-up (or "spike") in temperature due to the compound effect of rapidly burning (i.e., deflagrating) gases and the thermal cycle they produce. This is generally the best indication of a flashover.
The fire is in a ventilated compartment, so there is no shortage of oxygen in the room.
Firefighters memorize a chant to help remember these during training: "Thick dark smoke, high heat, rollover, free burning."
The colour of the smoke is often considered as well, but there is no connection between the colour of the smoke and the risk of flashovers. Traditionally, black, dense smoke was considered particularly dangerous, but history shows this to be an unreliable indicator. For example, there was a fire in a rubber mattress factory in London in 1975 which produced white smoke. The white smoke was not considered dangerous, so firefighters decided to ventilate, which caused a smoke explosion and killed two firefighters. The white smoke from the pyrolysis of the rubber turned out to be extremely flammable.
See also
Air Canada Flight 797
Saudia Flight 163
Backdraft
Charleston Sofa Super Store fire
Firestorm
Burning of Parliament (1834) (flashover seen 20 miles (32 km) away)
Kilbirnie Street fire (1972)
King's Cross fire (1987) (flashover happened in escalator shaft)
MGM Grand fire (1980)
Stardust fire (1981)
Ufa train disaster (1989) (caused by massive gas leak in the open air, triggered by sparks from trains' brakes)
References
External links
Living Room Flashover Video
Rapid Fire Progress & Flashover related fire development
Realistic hot fire training to deal safely with flashover and backdraft
Flashover / Backdraft training
Presentation and video of a flashover in a living room (Forschungsstelle für Brandschutztechnik (Karlsruhe Institute of Technology - KIT))
Flashover training Croatia
French site about structural firefighting
Flashover during house fire in Baltimore, MD. Video taken January 2010
Flashover Slow Motion
Combustion
Fire protection
Firefighting
Thermodynamics
Types of fire | Flashover | Physics,Chemistry,Mathematics,Engineering | 1,120 |
41,443,291 | https://en.wikipedia.org/wiki/Swill%20milk%20scandal | The swill milk scandal was a major adulterated food scandal in the state of New York in the 1850s. The New York Times reported an estimate that in one year, 8,000 infants died from swill milk.
Name
Swill milk referred to milk from cows fed swill which was residual mash from nearby distilleries. The milk was whitened with plaster of Paris, thickened with starch and eggs, and hued with molasses.
After the extraction of alcohol from the macerated grain, the residual mash still contains nutrients. Therefore, keeping cows stabled near distilleries and feeding them with swill was an economic advantage.
History
As the population of New York City exploded in the antebellum period, a time when safe drinking water was scarce, the demand for milk soared. But as the city expanded and real estate prices climbed, the meadows necessary to raise hay-fed cattle moved farther from its markets. The cost of bringing fresh milk to customers in the city became prohibitive and threatened to restrict its supply to relatively wealthy inhabitants. For the same sanitary reasons that made milk popular, Americans consumed alcohol at the highest per capita rates in US history, and New York City was home to a large number of distilleries. Distilleries in London had experimented with feeding the waste product of their industry—the fermented mash of rye, barley, and wheat commonly referred to as "swill"—to cattle with some success, and New York City distillers soon followed suit. The milk from swill-fed cows, produced in dense urban areas and often priced as low as 6 cents per quart, was affordable to most of New York City's poorest residents.
The New York Academy of Medicine carried out an examination and established the connection of swill milk with the increased infant mortality in the city. The topic of swill milk was also well exposed in pamphlets and caricatures of the time.
In May 1858, Frank Leslie's Illustrated Newspaper did a landmark exposé of the distillery-dairies of Manhattan and Brooklyn that marketed so-called swill milk that came from cows fed on distillery waste and then adulterated with water, eggs, flour, and other ingredients that increased the volume and masked the adulteration. Swill milk dairies were noted for their filthy conditions and overpowering stench both caused by the close confinement of hundreds (sometimes thousands) of cows in narrow stalls where, once farmers tied them, they would stay for the rest of their lives, often standing in their own manure, covered with flies and sores, and suffering from a range of virulent diseases. These cows were fed boiling distillery waste, often leaving the cows with rotting teeth and other maladies. The milk drawn from the cows was routinely adulterated with water, rotten eggs, flour, burnt sugar, and other adulterants with the finished product then marketed falsely as "pure country milk" or "Orange County Milk".
In an editorial published at the height of the scandal, the New York Times described swill milk as a "bluish, white compound of true milk, pus and dirty water, which, on standing, deposits a yellowish, brown sediment that is manufactured in the stables attached to large distilleries by running the refuse distillery slops through the udders of dying cows and over the unwashed hands of milkers..."
Frank Leslie's exposé caused widespread public outrage that strongly pressured local politicians to punish and regulate the distillery-dairies, which were the subject of formal complaints as a "swill milk nuisance". The Tammany Hall politician Alderman Michael Tuomey, known as "Butcher Mike", defended the distillers vigorously throughout the scandal—in fact, he was put in charge of the Board of Health investigation. Frank Leslie's Illustrated Newspaper staked out distillery owner Bradish Johnson's mansion at 21st and Broadway and reported that amid the investigation, Tuomey was observed making late-night visits. Tuomey assumed a central role in the ensuing investigations and, with fellow Aldermen E. Harrison Reed and William Tucker, shielded the dairies and turned the hearings into one-sided exercises designed to make dairy critics and established health authorities look ridiculous, even going to the extent of arguing that swill milk was as good or better for children than regular milk. With Reed and others, Tuomey successfully blocked any serious inquiry into the dairies and stymied calls for reform. The Board of Health exonerated the distillers, but public outcry led to the passage of the first food safety laws in the form of milk regulations in 1862. Tuomey became known for his attempts to block the new regulations, and earned the new moniker "Swill Milk" Tuomey. In addition to Tuomey's efforts, Robert Milham Hartley, a social reformist, aided in restoring milk's image as a nutritious and safe-to-drink beverage. During the mid to late nineteenth century, Hartley utilized Biblical references in his essays to appeal to the urban community. He asserted that universal milk consumption could help alleviate society's "sins", poverty, and alcohol consumption.
See also
2008 Chinese milk scandal
References
1850s in New York (state)
Food safety in the United States
Adulteration
Dairy industry
History of New York City
Infant mortality | Swill milk scandal | Chemistry | 1,102 |
52,883,696 | https://en.wikipedia.org/wiki/Elisabethatriene | Elisabethatriene is a bicyclic compound found in the marine octocoral Pseudopterogorgia elisabethae. Its stereochemistry is identical to the stereochemistry of elisabethatrienol.
References
Diterpenes
Vinylidene compounds
Polyenes | Elisabethatriene | Chemistry | 56 |
60,271,424 | https://en.wikipedia.org/wiki/Long%20Ambients%202 | Long Ambients 2 (printed as Long Ambients Two on the cover) is the sixteenth studio album by American electronic musician, songwriter, and producer Moby, released on March 15, 2019. It is the sequel to his previous ambient album, Long Ambients 1: Calm. Sleep. (2016).
Background
Long Ambients 2 is the follow-up to Long Ambients 1: Calm. Sleep. (2016) and offers more than 200 minutes of ambient music to put a listener to sleep or to meditate. Moby had struggled to find music that helped him sleep better, so he decided to compose some himself. He intended the music to also help people calm down, reduce anxiety, and aid in their own sleep issues. "LA15" is an extended version/remix of "LA8" from Long Ambients 1, while "LA16" is an extended version of "LA10". The other four pieces are entirely new to this album. Moby explained, "Most of the music in my life I’ve made with an audience in mind, but this long ambient music I originally just made for myself."
Release
The album was released on March 15, 2019 to commemorate World Sleep Day. It was available exclusively on Calm, a meditation app, for the first thirty days before it was released on other streaming and music download platforms.
Track listing
Chart performance
See also
Sleep, album by Max Richter created to fit a full night's sleep
References
External links
2019 albums
Moby albums
Ambient albums by American artists
Albums free for download by copyright owner
Self-released albums
Sleep
Sequel albums | Long Ambients 2 | Biology | 324 |
2,321,740 | https://en.wikipedia.org/wiki/Cytokine%20storm | A cytokine storm, also called hypercytokinemia, is a pathological reaction in humans and other animals in which the innate immune system causes an uncontrolled and excessive release of pro-inflammatory signaling molecules called cytokines. Cytokines are a normal part of the body's immune response to infection, but their sudden release in large quantities may cause multisystem organ failure and death.
Cytokine storms may be caused by infectious or non-infectious etiologies, especially viral respiratory infections such as H1N1 influenza, H5N1 influenza, SARS-CoV-1, SARS-CoV-2, Influenza B, and parainfluenza virus. Other causative agents include the Epstein-Barr virus, cytomegalovirus, group A streptococcus, and non-infectious conditions such as graft-versus-host disease. The viruses can invade lung epithelial cells and alveolar macrophages to produce viral nucleic acid, which stimulates the infected cells to release cytokines and chemokines, activating macrophages, dendritic cells, and others.
Cytokine storm syndrome is a diverse set of conditions that can result in a cytokine storm. Cytokine storm syndromes include familial hemophagocytic lymphohistiocytosis, Epstein-Barr virus–associated hemophagocytic lymphohistiocytosis, systemic or non-systemic juvenile idiopathic arthritis–associated macrophage activation syndrome, NLRC4 macrophage activation syndrome, cytokine release syndrome and sepsis.
Cytokine storms versus cytokine release syndrome
The term "cytokine storm" is often loosely used interchangeably with cytokine release syndrome (CRS) but is more precisely a differentiable syndrome that may represent a severe episode of cytokine release syndrome or a component of another disease entity, such as macrophage activation syndrome. When occurring as a result of a therapy, CRS symptoms may be delayed until days or weeks after treatment. Immediate-onset (fulminant) CRS appears to be a cytokine storm.
Research
Nicotinamide (a form of vitamin B3) is a potent inhibitor of proinflammatory cytokines. Low blood plasma levels of trigonelline (a metabolite of vitamin B3) have been suggested as a prognostic marker for death from SARS-CoV-2 infection, which is thought to result from the inflammatory phase and cytokine storm.
Magnesium decreases inflammatory cytokine production by modulation of the immune system.
History
The first reference to the term cytokine storm in the published medical literature appears to be by James Ferrara in 1993 during a discussion of graft vs. host disease, a condition in which the role of excessive and self-perpetuating cytokine release had already been under discussion for many years. The term next appeared in a discussion of pancreatitis in 2002. In 2003, it was first used in reference to a reaction to an infection.
It is believed that cytokine storms were responsible for the disproportionate number of healthy young adult deaths during the 1918 influenza pandemic, which killed an estimated 50 million people worldwide. In this case, a healthy immune system may have been a liability rather than an asset. Preliminary research results from Taiwan also indicated this as the probable reason for many deaths during the SARS epidemic in 2003. Human deaths from the bird flu H5N1 usually involve cytokine storms as well. Cytokine storm has also been implicated in hantavirus pulmonary syndrome.
In 2006, a study at Northwick Park Hospital in England resulted in all 6 of the volunteers given the drug theralizumab becoming critically ill, with multiple organ failure, high fever, and a systemic inflammatory response. Parexel, a company conducting trials for pharmaceutical companies, claimed that theralizumab could cause a cytokine storm—the dangerous reaction the men experienced.
Relationship to COVID-19
During the COVID-19 pandemic, some doctors have attributed many deaths to cytokine storms. A cytokine storm can cause the severe symptoms of acute respiratory distress syndrome (ARDS), which has a high mortality rate in COVID-19 patients. SARS-CoV-2 activates the immune system resulting in a release of a large number of cytokines, including IL-6, which can increase vascular permeability and cause a migration of fluid and blood cells into the alveoli leading to such consequent symptoms as dyspnea and respiratory failure. In an autopsy study from Karolinska Hospital, 29 pleural effusions of deceased COVID-19 patients were analyzed. Out of 184 protein markers, 20 markers were raised significantly in COVID-19 deceased patients. A group of markers showed over-stimulation of the immune system, including ADA, BTC, CA12, CAPG, CD40, CDCP1, CXCL9, ENTPD2, Flt3L, IL-6, IL-8, LRP1, OSM, PD-L1, PTN, STX8, and VEGFA; furthermore, DPP6 and EDIL3 indicated damage to arterial and cardiovascular organs. The higher mortality has been linked to the effects of ARDS aggravation and the tissue damage that can result in organ-failure and/or death.
ARDS was shown to be the cause of mortality in 70% of COVID-19 deaths. A cytokine plasma level analysis showed that in cases of severe SARS-CoV-2 infection, the levels of many interleukins and cytokines are highly elevated, indicating evidence of a cytokine storm. Additionally, postmortem examination of patients with COVID-19 has shown a large accumulation of inflammatory cells in lung tissues including macrophages and T-helper cells.
Early recognition of a cytokine storm in COVID-19 patients is crucial to ensure the best outcome for recovery, allowing treatment with a variety of biological agents that target the cytokines to reduce their levels. Meta-analysis suggests clear patterns distinguishing patients with or without severe disease. Possible predictors of severe and fatal cases may include lymphopenia, thrombocytopenia and high levels of ferritin, D-dimer, aspartate aminotransferase, lactate dehydrogenase, C-reactive protein, neutrophils, procalcitonin and creatinine as well as interleukin-6 (IL-6). Ferritin and IL-6 are considered to be possible immunological biomarkers for severe and fatal cases of COVID-19. Ferritin and C-reactive protein may be possible screening tools for early diagnosis of systemic inflammatory response syndrome in cases of COVID-19.
Due to the increased levels of cytokines and interferons in patients with severe COVID-19, both have been investigated as potential targets for SARS-CoV-2 therapy. An animal study found that mice producing an early strong interferon response to SARS-CoV-2 were likely to live, but in other cases the disease progressed to a highly morbid overactive immune system. The high mortality rate of COVID-19 in older populations has been attributed to the impact of age on interferon responses.
Short-term use of dexamethasone, a synthetic corticosteroid, has been demonstrated to reduce the severity of inflammation and lung damage induced by a cytokine storm by inhibiting the severe cytokine storm or the hyperinflammatory phase in patients with COVID-19.
Clinical trials continue to identify causes of cytokine storms in COVID-19 cases. One such cause is a delayed type I interferon response that leads to an accumulation of pathogenic monocytes. High viremia is also associated with an exacerbated type I interferon response and a worse prognosis. Diabetes, hypertension, and cardiovascular disease are all risk factors for cytokine storms in COVID-19 patients.
References
Immunology
Endocrine system | Cytokine storm | Chemistry,Biology | 1,721 |
6,667,807 | https://en.wikipedia.org/wiki/LMMS | LMMS (formerly Linux MultiMedia Studio) is a digital audio workstation application program. It allows music to be produced by arranging samples, synthesizing sounds, entering notes via computer keyboard or mouse (or other pointing device) or by playing on a MIDI keyboard, and combining the features of trackers and sequencers. It is free and open source software, written in Qt and released under GPL-2.0-or-later.
System requirements
LMMS is available for multiple operating systems, including Linux, OpenBSD, macOS, and Windows. It requires a 1.5 GHz CPU, 1 GB of RAM and a two-channel sound card.
Program features
LMMS accepts soundfonts and GUS patches, and it supports the Linux Audio Developer's Simple Plugin API (LADSPA) and LV2 (only master branch, since 24.05.2020). It can use VST plug-ins on Win32, Win64, or Wine32. The nightly versions support LinuxVST. Currently the macOS port doesn't support them.
It can import Musical Instrument Digital Interface (MIDI) and Hydrogen files and can read and write customized presets and themes.
Audio can be exported in the WAV, FLAC, Ogg and MP3 file formats.
Projects can be saved in the compressed MMPZ file format or the uncompressed MMP file format.
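If, as is commonly described, the MMP format is plain XML and the MMPZ container is that XML compressed with Qt's qCompress (a 4-byte big-endian length prefix followed by a zlib stream), a project can be unpacked with a few lines of scripting. Both format details are assumptions here, not claims from this article, and the file name is hypothetical.

```python
# Minimal sketch: read an LMMS project as XML text.
# Assumption: .mmpz = qCompress output, i.e. 4-byte length prefix + zlib stream.
import zlib

def load_project_xml(path: str) -> str:
    with open(path, "rb") as fh:
        data = fh.read()
    if path.endswith(".mmpz"):
        data = zlib.decompress(data[4:])  # skip the qCompress length prefix
    return data.decode("utf-8")

# Hypothetical usage:
# print(load_project_xml("song.mmpz")[:200])
```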
Editors
Song Editor – for arranging instruments, samples, groups of notes, automation, and more
Beat+Bassline Editor – for quickly sequencing rhythms
FX Mixer – for sending multiple audio inputs through groups of effects and routing them to other mixer channels; an unlimited number of channels is supported
Piano Roll – edit patterns and melodies
Automation Editor – automate almost any knob or widget over the course of the song
Audio plug-ins
LMMS includes a variety of audio plug-ins that can be drag-and-dropped onto instrument tracks in the Song Editor and Beat+Bassline Editor.
Synthesizer plugins:
BitInvader – wavetable-lookup synthesis
FreeBoy – emulator of Game Boy audio processing unit (APU)
Kicker – bass drum synthesizer
LB302 – imitation of the Roland TB-303
Mallets – tuneful percussion synthesizer
Monstro – 3-oscillator synthesizer with modulation matrix
Nescaline – NES-like synthesizer
OpulenZ – 2-operator FM synthesizer
Organic – organ-like synthesizer
Sf2 Player – a Fluidsynth-based Soundfont player
SID – emulator of the Commodore 64 chips
TripleOscillator – 3-oscillator synthesizer with 5 modulation modes: MIX, SYNC, PM, FM, and AM
Vibed – vibrating string modeler
Watsyn – 4-oscillator wavetable synthesizer
Xpressive – mathematical expression parser synthesizer (only in alpha)
ZynAddSubFX – embedded version of the ZynAddSubFX software synthesizer
Other plugins:
AudioFileProcessor (AFP) – basic sampler with trimming and looping capabilities
SlicerT – slicer with tempo detection (only in nightly)
VeSTige – interface for VST plugins
Standards
Musical Instrument Digital Interface (MIDI)
SoundFont (SF2)
Virtual Studio Technology (VST)
Linux Audio Developer's Simple Plugin API (LADSPA)
LV2 (only master branch, since 24.05.2020)
Gravis Ultrasound (GUS) patches (PatMan)
JACK Audio Connection Kit (JACK)
ZynAddSubFX
Audio output examples
See also
List of music software
List of Linux audio software
Comparison of free software for audio
Multitrack recording
Comparison of multitrack recording software
References
External links
LMMS website
2004 software
Audio editing software for Linux
Audio editing software that uses Qt
Linux
Digital audio editors for Linux
Digital audio workstation software
Free audio editors
Free educational software
Free music software
Free software programmed in C++
Linux software
Open source software synthesizers
Software drum machines | LMMS | Engineering | 794 |
8,417,294 | https://en.wikipedia.org/wiki/ClearSpeed | ClearSpeed Technology Ltd was a semiconductor company, formed in 2002 to develop enhanced SIMD processors for use in high-performance computing and embedded systems. Based in Bristol, UK, the company sold its processors from 2005. Its final 192-core CSX700 processor was released in 2008, but a lack of sales forced the company to downsize, and it subsequently delisted from the London Stock Exchange.
Products
The CSX700 processor consists of two processing arrays, each with 96 processing elements. The processing elements each contain a 32/64-bit floating point multiplier, a 32/64-bit floating point adder, 6 KB of SRAM, an integer arithmetic logic unit, and a 16-bit integer multiply–accumulate unit. It currently sells its CSX700 processor on a PCI Express expansion card with 2 GB of memory, called the Advance e710. The card is supplied with the ClearSpeed Software Development Kit and application libraries.
Related multi-core architectures include Ambric, PicoChip, Cell BE, Texas Memory Systems, and GPGPU stream processors such as AMD FireStream and Nvidia Tesla. ClearSpeed competes with AMD and Nvidia in the hardware acceleration market, where computationally intensive applications offload tasks to the accelerator. As of 2009, only the ClearSpeed e710 performs 64-bit arithmetic at its peak computational rate.
History
In November 2003 ClearSpeed demonstrated the CS301, with 64 processing elements running at 200 MHz and a peak of 25.6 GFLOPS in single precision (FP32).
In June 2005 ClearSpeed demonstrated the CSX600, with 96 processing elements running at 210 MHz, capable of 40 GFLOPS.
In September 2005 John Gustafson joined ClearSpeed as CTO of high performance computing.
In November 2005 ClearSpeed made its first significant sale of CSX600 processors to the Tokyo Institute of Technology using X620 Advance cards.
In November 2006 ClearSpeed X620 Advance cards helped place the Tsubame cluster 7th in the TOP500 list of supercomputers. The cards continue to be used in 2009.
In February 2007 ClearSpeed raised £20 million in a share placing on the AIM market.
In September 2007 ClearSpeed licensed its next generation processor to BAE Systems for inclusion in satellite systems.
In June 2008 ClearSpeed released the CSX700, combining two CSX600 devices with a PCI Express x16 interface and ECC on all memories, using a lower power 90 nm process. The device delivers 96 GFLOPS for 9 watts with 192 processing elements running at 250 MHz. The device was also released on the Advance e710 card at the same time.
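The quoted figures are consistent with each processing element retiring one floating-point add and one multiply per cycle, matching the adder/multiplier pairing described in the Products section. The sanity check below rests on that per-cycle assumption, which is ours rather than a claim from ClearSpeed documentation.

```python
# Peak GFLOPS = PEs * clock (MHz) * FLOPs per cycle / 1000.
# Assumes 2 FLOPs per PE per cycle (one add + one multiply).
def peak_gflops(pes: int, clock_mhz: float, flops_per_cycle: int = 2) -> float:
    return pes * clock_mhz * flops_per_cycle / 1000.0

print(peak_gflops(192, 250))  # CSX700 -> 96.0
print(peak_gflops(96, 210))   # CSX600 -> ~40.3
print(peak_gflops(64, 200))   # CS301  -> 25.6
```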
In February 2009 ClearSpeed announced a cost-cutting programme following poor financial results for 2008.
In July 2009 ClearSpeed delisted from the London Stock Exchange and returned £6.9 million to its shareholders.
In August 2009 ClearSpeed made its most significant sale through high performance and heterogeneous compute specialists PetaPath.
References
External links
ClearSpeed official site (link appears to be defunct)
https://books.google.com/books?id=bAeFGuNtGOAC&pg=PA105&dq=ClearSpeed
Coprocessors
Embedded systems
SIMD computing
Supercomputers | ClearSpeed | Technology,Engineering | 680 |
6,108,552 | https://en.wikipedia.org/wiki/Ordinal%20notation | In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties:
the subset of natural numbers is a recursive set
the induced well-ordering on the subset of natural numbers is a recursive relation
There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (whose notations are called ordinal diagrams), and Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but is not as well behaved as the other systems described here.
Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contradict each other.
A simplified example using a pairing function
As usual, we must start off with a constant symbol for zero, "0", which we may consider to be a function of arity zero. This is necessary because there are no smaller ordinals in terms of which zero can be described. The most obvious next step would be to define a unary function, "S", which takes an ordinal to the smallest ordinal greater than it; in other words, S is the successor function. In combination with zero, successor allows one to name any natural number.
The third function might be defined as one that maps each ordinal to the smallest ordinal that cannot yet be described with the above two functions and previous values of this function. This would map β to ω·β except when β is a fixed point of that function plus a finite number, in which case one uses ω·(β+1).
The fourth function would map α to ω^ω·α except when α is a fixed point of that plus a finite number, in which case one uses ω^ω·(α+1).
ξ-notation
One could continue in this way, but it would give us an infinite number of functions. So instead let us merge the unary functions into a binary function. By transfinite recursion on α, we can use transfinite recursion on β to define ξ(α,β) = the smallest ordinal γ such that α < γ and β < γ and γ is not the value of ξ for any smaller α or for the same α with a smaller β.
Thus, define ξ-notations as follows:
"0" is a ξ-notation for zero.
If "A" and "B" are replaced by ξ-notations for α and β in "ξAB", then the result is a ξ-notation for ξ(α,β).
There are no other ξ-notations.
The function ξ is defined for all pairs of ordinals and is one-to-one. It always gives values larger than its arguments and its range is all ordinals other than 0 and the epsilon numbers (the ordinals ε with ε = ω^ε).
One has ξ(α, β) < ξ(γ, δ) if and only if either (α = γ and β < δ) or (α < γ and β < ξ(γ, δ)) or (α > γ and ξ(α, β) ≤ δ).
With this definition, the first few ξ-notations are:
"0" for 0. "ξ00" for 1. "ξ0ξ00" for ξ(0,1)=2. "ξξ000" for ξ(1,0)=ω. "ξ0ξ0ξ00" for 3. "ξ0ξξ000" for ω+1. "ξξ00ξ00" for ω·2. "ξξ0ξ000" for ω^ω. "ξξξ0000" for ξ(ω,0) = ω^(ω^ω).
In general, ξ(0,β) = β+1, while ξ(1+α,β) = ω^(ω^α)·(β+k) for k = 0 or 1 or 2 depending on special situations:
k = 2 if α is an epsilon number and β is finite.
Otherwise, k = 1 if β is a multiple of ω^(ω^α + 1) plus a finite number.
Otherwise, k = 0.
The ξ-notations can be used to name any ordinal less than ε0 with an alphabet of only two symbols ("0" and "ξ"). If these notations are extended by adding functions that enumerate epsilon numbers, then they will be able to name any ordinal less than the first epsilon number that cannot be named by the added functions. This last property, adding symbols within an initial segment of the ordinals gives names within that segment, is called repleteness (after Solomon Feferman).
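The comparison rule stated above is effectively computable, which is what makes the ξ-notations a usable notation system. The sketch below is our illustration rather than part of the original construction: a notation is modelled as None for "0" and as a pair (A, B) for "ξAB"; since ξ is one-to-one, structural equality of notations coincides with equality of the denoted ordinals, and the recursion terminates because every call strictly decreases the combined size of its arguments.

```python
# Ordering on ξ-notations, following the rule:
# ξ(α,β) < ξ(γ,δ) iff (α=γ and β<δ) or (α<γ and β<ξ(γ,δ)) or (α>γ and ξ(α,β)≤δ).
def lt(x, y):
    """True iff the ordinal denoted by x is less than that denoted by y."""
    if x == y:
        return False
    if x is None:              # "0" denotes the least ordinal
        return True
    if y is None:
        return False
    a, b = x                   # x denotes ξ(α, β)
    c, d = y                   # y denotes ξ(γ, δ)
    if a == c:                 # α = γ: compare β with δ
        return lt(b, d)
    if lt(a, c):               # α < γ: need β < ξ(γ, δ)
        return lt(b, y)
    return lt(x, d) or x == d  # α > γ: need ξ(α, β) ≤ δ

ZERO = None
ONE = (ZERO, ZERO)             # "ξ00" denotes 1
OMEGA = (ONE, ZERO)            # "ξξ000" denotes ω
print(lt(ONE, OMEGA), lt(OMEGA, ONE))  # True False
```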
List
There are many different systems for ordinal notation introduced by various authors. It is often quite hard to convert between the different systems.
Cantor
"Exponential polynomials" in 0 and ω gives a system of ordinal notation for ordinals less than ε0. There are many equivalent ways to write these; instead of exponential polynomials, one can use rooted trees, or nested parentheses, or the system described above.
Veblen
The 2-variable Veblen functions can be used to give a system of ordinal notation for ordinals less than the Feferman–Schütte ordinal. The Veblen functions in a finite or transfinite number of variables give systems of ordinal notations for ordinals less than the small and large Veblen ordinals.
Ackermann
Ackermann (1951) described a system of ordinal notation rather weaker than the system described earlier by Veblen. The limit of his system is sometimes called the Ackermann ordinal.
Bachmann
Bachmann (1950) introduced the key idea of using uncountable ordinals to produce new countable ordinals. His original system was rather cumbersome to use, as it required choosing a special sequence converging to each ordinal. Later systems of notation introduced by Feferman and others avoided this complication.
Takeuti (ordinal diagrams)
Takeuti (1957) described a system of ordinal notation known as "ordinal diagrams", whose limit is the Takeuti–Feferman–Buchholz ordinal. The system was later simplified by Feferman.
Feferman's θ functions
Feferman introduced theta functions, described as follows. For an ordinal α, θ_α is a function mapping ordinals to ordinals. Often θ_α(β) is written as θαβ. The set C(α, β) is defined by induction on α to be the set of ordinals that can be generated from 0, ω_1, ω_2, ..., ω_ω, together with the ordinals less than β, by the operations of ordinal addition and the functions θ_ξ for ξ < α. And the function θ_γ is defined to be the function enumerating the ordinals δ with δ ∉ C(γ, δ). The problem with this system is that ordinal notations and collapsing functions are not identical, and therefore this function does not qualify as an ordinal notation. An associated ordinal notation is not known.
Buchholz
Buchholz (1986) described the following system of ordinal notation as a simplification of Feferman's theta functions. Define:
Ω_ξ = ω_ξ if ξ > 0, Ω_0 = 1
The functions ψ_v(α), for α an ordinal and v an ordinal at most ω, are defined by induction on α as follows:
ψ_v(α) is the smallest ordinal not in C_v(α),
where C_v(α) is the smallest set such that
C_v(α) contains all ordinals less than Ω_v,
C_v(α) is closed under ordinal addition, and
C_v(α) is closed under the functions ψ_u (for u ≤ ω) applied to arguments less than α.
This system has about the same strength as Feferman's system, as θ_(ε_(Ω_v+1))(0) = ψ_v(ε_(Ω_v+1)) for v ≤ ω. Yet, while this system is powerful, it does not qualify as an ordinal notation. Buchholz did create an associated ordinal notation, yet it is complicated: the definition is in the main article.
Kleene's O
Kleene (1938) described a system of notation for all recursive ordinals (those less than the Church–Kleene ordinal). Unfortunately, unlike the other systems described above, there is in general no effective way to tell whether some natural number represents an ordinal, or whether two numbers represent the same ordinal. However, one can effectively find notations that represent the ordinal sum, product, and power (see ordinal arithmetic) of any two given notations in Kleene's O; and given any notation for an ordinal, there is a recursively enumerable set of notations that contains one element for each smaller ordinal and is effectively ordered. Kleene's O denotes a canonical (and very non-computable) set of notations. It uses a subset of the natural numbers instead of finite strings of symbols, and is not recursive, therefore once again not qualifying as an ordinal notation.
List of limits of various ordinal notations and collapsing functions
See also
Large countable ordinals
Ordinal arithmetic
Ordinal analysis
References
English translation by Martin Dowd (2019).
"Constructive Ordinal Notation Systems" by Fredrick Gass
"Hyperarithmetical Index Sets In Recursion Theory" by Steffen Lempp
Hilbert Levitz, Transfinite Ordinals and Their Notations: For The Uninitiated, expository article, 1999 (8 pages, in PostScript)
Ordinal numbers
Proof theory
Mathematical notation | Ordinal notation | Mathematics | 2,321 |
22,041,787 | https://en.wikipedia.org/wiki/Nation.1 | Nation.1 was a project to create what was described as an "online country" – a conceptual country based on the Internet. It was to be owned, populated and governed by the children of the world. Its borders were defined by the age of its citizens, as opposed to geography or ethnicity. The central goal of Nation.1 was to empower young people with a voice and representation in world affairs.
By March 2002, Nation.1 merged with TakingITGlobal, an online network of young people, and encouraged their current "online citizens" to transfer their accounts to the new platform.
History
The Nation.1 concept began as a challenge by Nicholas Negroponte to a group of young delegates of the 2B1 conference at the MIT Media Lab in 1997. Several of these delegates had attended the GII Junior Summit in 1995, and others were part of the Generation Why Project in Olympia, Washington.
Nation.1 was announced publicly in a 1997 teleconference to the United Nations Committee on the Rights of the Child, in a series of speeches at the Massachusetts State House and in an article in Wired magazine. The project was further developed by delegates at the Junior Summit 1998 conference, during which it adopted the use of Swatch Internet Time. In the following years Nation.1 was incorporated as a non-profit organization for youth empowerment.
In 1999 they started two petitions: one called "No War", asking world leaders to halt armed conflict; and another called "Stop Child Labor". Their website presented a section for users to join discussions about the "Crisis in Kosovo". They also released an online document Declaration of Nation1 Organization.
The Nation.1 foundation, its executive director and its assets merged officially with TakingITGlobal in 2001.
Although Nation.1 is not directly related to the One Laptop per Child project, both projects were informed by the 2B1 conference, the GII Junior Summit '95 and the Junior Summit '98.
The Nation.1 project explored a variety of translation and governance technologies as well as a variety of concepts key to the construction of a country. New approaches to youth empowerment and autonomous self-government through the use of decentralized internet voting systems were discussed, as were topics relating to citizenship, education, economic exchange, trust, identification, and a usable international definition of what constitutes childhood.
On this last question, an arbitrary boundary of 25 years of age and under was eventually established, although the variables of this boundary were widely discussed. John Perry Barlow put in a request that the young at heart be admitted as ambassadors from the adult world, or at least be granted temporary visas.
Technology
Alan Kay recommended using a wiki in the formation of Nation1, three years before the founding of Wikipedia. His advice led to the use of a wiki to help form the founding texts of the nation. A database-driven website and email mailing lists were used as the initial communication system, while the central committee laid plans for multi-lingual chat system and a distributed decision-making system called the Democracy Engine.
Their online platform evolved during Nation.1's existence: by 1998, just one year after creation, it presented discussion forums and a mailing list users could subscribe to, as well as a call for translators. Early proposed website interfaces also presented a voting area.
In July 2001, prior to the merger with TakingITGlobal, their website presented a web chat that "automatically translates 'live' between the world's major languages".
Key people
John Perry Barlow
Brandon Bruce
Terah DeJong
Marco D'Alimonte
Maitreyi Doshi
Talena Foster
Alan Kay
Lauren Keane
Hayley Goodwin
Ragni Marea Kidvai
Kanetaka Maki
Nick Moraitis
Nicholas Negroponte
Dimitri Negroponte
Thomas O'Duffy
Laetitia Garriott de Cayeux
Ryan Powell
Warren Sack
David Sontag
Tannie Kwong
Emily Kumpel
Gerald Tan
Nusrah Wali
Ian Wojtowicz
Jacob Wolfsheimer
See also
A Declaration of the Independence of Cyberspace
Nationalism
2B1
Notes
External links
Nation.1's website archive
Information and communication technologies for development
1997 introductions
Organizations established in 1997
2001 mergers and acquisitions | Nation.1 | Technology | 852 |
74,393,422 | https://en.wikipedia.org/wiki/Camptobasidiaceae | The Camptobasidiaceae are a family of fungi in the subdivision Pucciniomycotina. The family currently comprises two genera, one of which (Camptobasidium) contains an aquatic, hyphal species with auricularioid (laterally septate) basidia. The other genus contains species currently known only from their yeast states.
References
Basidiomycota families
Pucciniomycotina
Yeasts
Taxa described in 1996
Taxa named by Royall T. Moore | Camptobasidiaceae | Biology | 102 |
52,144,005 | https://en.wikipedia.org/wiki/Jelly-falls | Jelly-falls are marine carbon cycling events whereby gelatinous zooplankton, primarily cnidarians, sink to the seafloor and enhance carbon and nitrogen fluxes via rapidly sinking particulate organic matter. These events provide nutrition to benthic megafauna and bacteria. Jelly-falls have been implicated as a major “gelatinous pathway” for the sequestration of labile biogenic carbon through the biological pump. These events are common in protected areas with high levels of primary production and water quality suitable to support cnidarian species. These areas include estuaries, and several studies have been conducted in the fjords of Norway.
Initiation
Jelly-falls are primarily made up of the decaying corpses of Cnidaria and Thaliacea (Pyrosomida, Doliolida, and Salpida). Several circumstances can trigger the death of gelatinous organisms and cause them to sink. These include high levels of primary production that can clog the organisms' feeding apparatuses, sudden temperature changes, starvation when an old bloom runs out of food, damage to the jellies' bodies by predators, and parasitism. In general, however, jelly-falls are linked to jelly-blooms and primary production, with over 75% of the jelly-falls in subpolar and temperate regions occurring after spring blooms, and over 25% of the jelly-falls in the tropics occurring after upwelling events.
As global climates shift toward warmer and more acidic oceans, conditions that disadvantage less resilient species, jelly populations are likely to grow. Eutrophic areas and dead zones can become jelly hot spots with substantial blooms. As the climate changes and ocean waters warm, jelly blooms become more prolific and the transport of jelly-carbon to the lower ocean increases. With a possible slowing of the classic biological pump, the transport of carbon and nutrients to the deep sea through jelly-falls may become more and more important to the deep ocean.
Decomposition
The decomposition process starts after death and can proceed in the water column as the gelatinous organisms sink. Decay happens faster in the tropics than in temperate and subpolar waters as a result of warmer temperatures. In the tropics, a jelly-fall may take less than 2 days to decay in warmer surface water, but as many as 25 days at depths below 1,000 m. However, lone gelatinous organisms may spend less time on the sea floor: one study found that jellies could be decomposed by scavengers in the Norwegian deep sea in under two and a half hours.
Decomposition of jelly-falls is largely aided by these kinds of scavengers. In general, echinoderms, such as sea stars, have emerged as the primary consumer of jelly-falls, followed by crustaceans and fish. However, which scavengers find their way to jelly-falls is highly reliant on each ecosystem. For example, in an experiment in the Norwegian deep sea, hagfish were the first scavengers to find the traps of decaying jellies, followed by squat lobsters, and finally decapod shrimp. Photographs taken off the coast of Norway on natural jelly-falls also revealed caridean shrimp feeding on jelly carcasses.
With increased populations, and with blooms becoming more common where conditions are favorable and other filter feeders are scarce, the biological pump in environments where jellies are present will be supplied increasingly by jelly-falls. This could push habitats with established biological pumps out of equilibrium, since the presence of jellies alters both the food web and the amount of carbon deposited into the sediment.
Finally, decomposition is aided by the microbial community. In a case study on the Black Sea, the number of bacteria increased in the presence of jelly-falls, and the bacteria were shown to preferentially use nitrogen released from decaying jelly carcasses while mostly leaving carbon. In a 2016 study by Andrew Sweetman based on core samples of the sediment in Norwegian fjords, the presence of jelly-falls was found to significantly alter the biochemical processes of these benthic communities. Bacteria consume jelly carcasses rapidly, removing opportunities for bottom-feeding macrofauna to acquire sustenance, which has impacts traveling up the trophic levels. In addition, when scavengers are excluded, jelly-falls develop a white layer of bacteria over the decaying carcasses and release a black sulfide residue over the surrounding area. This high level of microbial activity requires a lot of oxygen, which can lead zones around jelly-falls to become hypoxic and inhospitable to larger scavengers.
Research challenges
Researching jelly-falls relies on direct observational data such as video, photography, or benthic trawls. A complication with trawling for jelly-falls is that the gelatinous carcasses easily fall apart, so opportunistic photography, videography, and chemical analysis have been the primary methods of monitoring. This means that jelly-falls are not always observed in the time period in which they exist. Because jelly-falls can be fully processed and degraded within a number of hours by scavengers, and because some jelly-falls will not sink below 500 m in tropical and subtropical waters, the importance and prevalence of jelly-falls may be underestimated.
See also
Biological pump
Jellyfish
Pyrosoma atlanticum
Whale fall
Deep sea community
Dead zone
References
Aquatic ecology
Biological oceanography
Chemical oceanography
Biogeochemistry | Jelly-falls | Chemistry,Biology,Environmental_science | 1,169 |
18,531 | https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s%20rule | L'Hôpital's rule or L'Hospital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to de l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are defined on an open interval I and differentiable on I \ {c} for a (possibly infinite) accumulation point c of I, if lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞, and g'(x) ≠ 0 for all x in I \ {c}, and lim_{x→c} f'(x)/g'(x) exists, then
lim_{x→c} f(x)/g(x) = lim_{x→c} f'(x)/g'(x).
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be directly evaluated by continuity.
History
Guillaume de l'Hôpital (also written l'Hospital) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus. However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli.
General form
The general form of L'Hôpital's rule covers many cases. Let c and L be extended real numbers: real numbers, or positive or negative infinity. Let I be an open interval containing c (for a two-sided limit) or an open interval with endpoint c (for a one-sided limit, or a limit at infinity if c is infinite). On I \ {c}, the real-valued functions f and g are assumed differentiable with g'(x) ≠ 0. It is also assumed that lim_{x→c} f'(x)/g'(x) = L, a finite or infinite limit.
If either lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or lim_{x→c} |f(x)| = lim_{x→c} |g(x)| = ∞, then lim_{x→c} f(x)/g(x) = L. Although we have written x → c throughout, the limits may also be one-sided limits (x → c⁺ or x → c⁻), when c is a finite endpoint of I.
In the second case, the hypothesis that f diverges to infinity is not necessary; in fact, it is sufficient that lim_{x→c} |g(x)| = ∞.
The hypothesis that g'(x) ≠ 0 appears most commonly in the literature, but some authors sidestep it by adding other hypotheses which imply g'(x) ≠ 0. For example, one may require in the definition of the limit that the function f'(x)/g'(x) must be defined everywhere on an interval I \ {c}. Another method is to require that both f and g be differentiable everywhere on an interval containing c.
Necessity of conditions: Counterexamples
All four conditions for L'Hôpital's rule are necessary:
Indeterminacy of form: lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞;
Differentiability of functions: f and g are differentiable on an open interval I except possibly at the limit point c in I;
Non-zero derivative of denominator: g'(x) ≠ 0 for all x in I with x ≠ c;
Existence of limit of the quotient of the derivatives: lim_{x→c} f'(x)/g'(x) exists.
Where one of the above conditions is not satisfied, L'Hôpital's rule is not valid in general, and its conclusion may be false in certain cases.
1. Form is not indeterminate
The necessity of the first condition can be seen by considering the counterexample where the functions are f(x) = x + 1 and g(x) = 2x + 1 and the limit is x → 1.
The first condition is not satisfied for this counterexample because lim_{x→1} f(x) = 2 ≠ 0 and lim_{x→1} g(x) = 3 ≠ 0. This means that the form is not indeterminate.
The second and third conditions are satisfied by f and g: both are differentiable everywhere with g'(x) = 2 ≠ 0. The fourth condition is also satisfied, with lim_{x→1} f'(x)/g'(x) = 1/2.
But the conclusion fails, since lim_{x→1} f(x)/g(x) = 2/3 ≠ 1/2.
2. Differentiability of functions
Differentiability of functions is a requirement because if a function is not differentiable, then the derivative of the function is not guaranteed to exist at each point in I. The fact that I is an open interval is carried over from the hypothesis of Cauchy's mean value theorem. The notable exception, that the functions may fail to be differentiable at c itself, exists because L'Hôpital's rule only requires the derivative to exist as the function approaches c; the derivative does not need to be taken at c.
For example, let f(x) = x², g(x) = |x|, and c = 0. In this case, g is not differentiable at c. However, since g is differentiable everywhere except 0, the quotient of derivatives f'(x)/g'(x) = 2x/sgn(x) = 2|x| is defined near 0 and lim_{x→0} f'(x)/g'(x) = 0 still exists. Thus, since lim_{x→0} f(x)/g(x) is of the form 0/0
and lim_{x→0} f'(x)/g'(x) exists, L'Hôpital's rule still holds, giving lim_{x→0} x²/|x| = 0.
3. Derivative of denominator is zero
The necessity of the condition that g'(x) ≠ 0 near c can be seen by the following counterexample due to Otto Stolz. Let f(x) = x + sin x · cos x and g(x) = f(x)·e^(sin x). Then there is no limit for f(x)/g(x) as x → ∞, since f(x)/g(x) = e^(−sin x) oscillates. However,
f'(x)/g'(x) = 2 cos x / ((2 cos x + f(x))·e^(sin x)),
which tends to 0 as x → ∞, although it is undefined at infinitely many points (wherever g'(x) = 0). Further examples of this type were found by Ralph P. Boas Jr.
4. Limit of derivatives does not exist
The requirement that the limit lim_{x→c} f'(x)/g'(x) exists is essential; if it does not exist, the other limit lim_{x→c} f(x)/g(x) may nevertheless exist. Indeed, as x approaches c, the functions f' or g' may exhibit many oscillations of small amplitude but steep slope, which do not affect lim_{x→c} f(x)/g(x) but do prevent the convergence of f'(x)/g'(x).
For example, if f(x) = x + sin x, g(x) = x and c = ∞, then f'(x)/g'(x) = 1 + cos x, which does not approach a limit since cosine oscillates infinitely between −1 and 1. But the ratio of the original functions does approach a limit, since the amplitude of the oscillations of sin x becomes small relative to x: lim_{x→∞} f(x)/g(x) = lim_{x→∞} (x + sin x)/x = lim_{x→∞} (1 + (sin x)/x) = 1.
In a case such as this, all that can be concluded is that
liminf_{x→c} f'(x)/g'(x) ≤ liminf_{x→c} f(x)/g(x) ≤ limsup_{x→c} f(x)/g(x) ≤ limsup_{x→c} f'(x)/g'(x),
so that if the limit of f(x)/g(x) exists, then it must lie between the inferior and superior limits of f'(x)/g'(x). (In the example, 1 does indeed lie between 0 and 2.)
Note also that by the contrapositive form of the rule, if lim_{x→c} f(x)/g(x) does not exist, then lim_{x→c} f'(x)/g'(x) also does not exist.
Examples
In the following computations, we indicate each application of L'Hôpital's rule by the symbol "=ᴴ".
Here is a basic example involving the exponential function, which involves the indeterminate form 0/0 at x = 0:
lim_{x→0} (e^x − 1)/(x² + x) =ᴴ lim_{x→0} e^x/(2x + 1) = 1/1 = 1.
This is a more elaborate example involving 0/0. Applying L'Hôpital's rule a single time still results in an indeterminate form. In this case, the limit may be evaluated by applying the rule three times:
lim_{x→0} (2 sin x − sin 2x)/(x − sin x) =ᴴ lim_{x→0} (2 cos x − 2 cos 2x)/(1 − cos x) =ᴴ lim_{x→0} (−2 sin x + 4 sin 2x)/(sin x) =ᴴ lim_{x→0} (−2 cos x + 8 cos 2x)/(cos x) = (−2 + 8)/1 = 6.
Here is an example involving ∞/∞:
lim_{x→∞} x^n/e^x =ᴴ lim_{x→∞} n·x^(n−1)/e^x = n·lim_{x→∞} x^(n−1)/e^x.
Repeatedly apply L'Hôpital's rule until the exponent is zero (if n is an integer) or negative (if n is fractional) to conclude that the limit is zero.
Here is an example involving the indeterminate form 0·∞ (see below), which is rewritten as the form ∞/∞:
lim_{x→0⁺} x·ln x = lim_{x→0⁺} (ln x)/(1/x) =ᴴ lim_{x→0⁺} (1/x)/(−1/x²) = lim_{x→0⁺} (−x) = 0.
Here is an example involving the mortgage repayment formula and 0/0. Let P be the principal (loan amount), r the interest rate per period and n the number of periods. When r is zero, the repayment amount per period is P/n (since only principal is being repaid); this is consistent with the formula for non-zero interest rates, payment = P·r·(1+r)^n/((1+r)^n − 1), whose limit as r → 0 is P/n.
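Written out with the standard annuity formula for the per-period payment, a single application of the rule with respect to r confirms the claim (the formula itself is standard finance, assumed here rather than quoted from a source):

```latex
\lim_{r \to 0} \frac{P\,r\,(1+r)^{n}}{(1+r)^{n}-1}
  \;\overset{\mathrm{H}}{=}\;
\lim_{r \to 0} \frac{P(1+r)^{n} + P\,r\,n(1+r)^{n-1}}{n(1+r)^{n-1}}
  = \frac{P \cdot 1 + 0}{n \cdot 1}
  = \frac{P}{n}.
```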
One can also use L'Hôpital's rule to prove the following theorem. If f is twice-differentiable in a neighborhood of x and its second derivative is continuous on this neighborhood, then
lim_{h→0} (f(x+h) + f(x−h) − 2f(x))/h² = f''(x).
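A sketch of the argument: one application of the rule with respect to h (the quotient is of the form 0/0) reduces the claim to the symmetric difference quotient of f', which converges to f''(x) by the assumed continuity of the second derivative:

```latex
\lim_{h \to 0} \frac{f(x+h) + f(x-h) - 2f(x)}{h^{2}}
  \;\overset{\mathrm{H}}{=}\;
\lim_{h \to 0} \frac{f'(x+h) - f'(x-h)}{2h}
  = f''(x).
```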
Sometimes L'Hôpital's rule is invoked in a tricky way: suppose f(x) + f'(x) converges as x → ∞ and that e^x·f(x) converges to positive or negative infinity. Then:
lim_{x→∞} f(x) = lim_{x→∞} (e^x·f(x))/(e^x) =ᴴ lim_{x→∞} (e^x·(f(x) + f'(x)))/(e^x) = lim_{x→∞} (f(x) + f'(x)),
and so, lim_{x→∞} f(x) exists and lim_{x→∞} f'(x) = 0. (This result remains true without the added hypothesis that e^x·f(x) converges to positive or negative infinity, but the justification is then incomplete.)
Complications
Sometimes L'Hôpital's rule does not reduce to an obvious limit in a finite number of steps, unless some intermediate simplifications are applied. Examples include the following:
Two applications can lead to a return to the original expression that was to be evaluated:
lim_{x→∞} (e^x + e^(−x))/(e^x − e^(−x)) =ᴴ lim_{x→∞} (e^x − e^(−x))/(e^x + e^(−x)) =ᴴ lim_{x→∞} (e^x + e^(−x))/(e^x − e^(−x)).
This situation can be dealt with by substituting t = e^x and noting that t goes to infinity as x goes to infinity; with this substitution, this problem can be solved with a single application of the rule:
lim_{x→∞} (e^x + e^(−x))/(e^x − e^(−x)) = lim_{t→∞} (t + 1/t)/(t − 1/t) =ᴴ lim_{t→∞} (1 − 1/t²)/(1 + 1/t²) = 1.
Alternatively, the numerator and denominator can both be multiplied by e^x, at which point L'Hôpital's rule can immediately be applied successfully:
lim_{x→∞} (e^x + e^(−x))/(e^x − e^(−x)) = lim_{x→∞} (e^(2x) + 1)/(e^(2x) − 1) =ᴴ lim_{x→∞} (2e^(2x))/(2e^(2x)) = 1.
An arbitrarily large number of applications may never lead to an answer even without repeating:
lim_{x→∞} (x^(1/2) + x^(−1/2))/(x^(1/2) − x^(−1/2)) =ᴴ lim_{x→∞} ((1/2)x^(−1/2) − (1/2)x^(−3/2))/((1/2)x^(−1/2) + (1/2)x^(−3/2)) =ᴴ ⋯
This situation too can be dealt with by a transformation of variables, in this case u = √x:
lim_{x→∞} (x^(1/2) + x^(−1/2))/(x^(1/2) − x^(−1/2)) = lim_{u→∞} (u + 1/u)/(u − 1/u) =ᴴ lim_{u→∞} (1 − 1/u²)/(1 + 1/u²) = 1.
Again, an alternative approach is to multiply numerator and denominator by x^(1/2) before applying L'Hôpital's rule:
lim_{x→∞} (x^(1/2) + x^(−1/2))/(x^(1/2) − x^(−1/2)) = lim_{x→∞} (x + 1)/(x − 1) =ᴴ lim_{x→∞} 1/1 = 1.
A common logical fallacy is to use L'Hôpital's rule to prove the value of a derivative by computing the limit of a difference quotient. Since applying l'Hôpital requires knowing the relevant derivatives, this amounts to circular reasoning or begging the question, assuming what is to be proved. For example, consider the proof of the derivative formula for powers of x:
lim_{h→0} ((x+h)^n − x^n)/h = n·x^(n−1).
Applying L'Hôpital's rule and finding the derivatives with respect to h yields
lim_{h→0} n·(x+h)^(n−1)/1 = n·x^(n−1),
as expected, but this computation requires the use of the very formula that is being proven. Similarly, to prove lim_{x→0} (sin x)/x = 1, applying L'Hôpital requires knowing the derivative of sin x at x = 0, which amounts to calculating lim_{h→0} (sin h)/h in the first place; a valid proof requires a different method such as the squeeze theorem.
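The circularity in the second example becomes visible once the derivative that the rule would require is written out from the definition: it is exactly the limit one set out to compute,

```latex
\left.\frac{d}{dx}\sin x\right|_{x=0}
  = \lim_{h \to 0} \frac{\sin(0+h) - \sin 0}{h}
  = \lim_{h \to 0} \frac{\sin h}{h}.
```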
Other indeterminate forms
Other indeterminate forms, such as 1^∞, 0^0, ∞^0, 0·∞, and ∞ − ∞, can sometimes be evaluated using L'Hôpital's rule. We again indicate applications of L'Hôpital's rule by "=ᴴ".
For example, to evaluate a limit involving ∞ − ∞, convert the difference of two functions to a quotient:
lim_{x→1} (x/(x − 1) − 1/ln x) = lim_{x→1} (x·ln x − x + 1)/((x − 1)·ln x) =ᴴ lim_{x→1} (ln x)/(ln x + (x − 1)/x) = lim_{x→1} (x·ln x)/(x·ln x + x − 1) =ᴴ lim_{x→1} (1 + ln x)/(1 + ln x + 1) = 1/2.
L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down". Here is an example involving the indeterminate form 0^0:
lim_{x→0⁺} x^x = lim_{x→0⁺} e^(x·ln x) = e^(lim_{x→0⁺} x·ln x).
It is valid to move the limit inside the exponential function because this function is continuous. Now the exponent x has been "moved down". The limit lim_{x→0⁺} x·ln x is of the indeterminate form 0·(−∞) dealt with in an example above: L'Hôpital may be used to determine that lim_{x→0⁺} x·ln x = 0.
Thus lim_{x→0⁺} x^x = e^0 = 1.
The following table lists the most common indeterminate forms and the transformations which precede applying l'Hôpital's rule:
0/0: apply the rule directly to f(x)/g(x)
∞/∞: apply the rule directly to f(x)/g(x)
0·∞: rewrite f(x)·g(x) as f(x)/(1/g(x)) or g(x)/(1/f(x))
∞ − ∞: combine f(x) − g(x) into a single quotient, for example over a common denominator
0^0, 1^∞, ∞^0: rewrite f(x)^g(x) as e^(g(x)·ln f(x)) and evaluate the exponent
Stolz–Cesàro theorem
The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives.
Geometric interpretation: parametric curve and velocity vector
Consider the parametric curve in the xy-plane with coordinates given by the continuous functions g(t) and f(t), the locus of points (g(t), f(t)), and suppose f(c) = g(c) = 0. The slope of the secant from the origin to the point (g(t), f(t)) is the ratio f(t)/g(t), and the slope of the curve at the origin is its limit as t → c. The tangent to the curve at the point (g(t), f(t)) is the velocity vector (g'(t), f'(t)), with slope f'(t)/g'(t). L'Hôpital's rule then states that the slope of the curve at the origin (t = c) is the limit of the tangent slope at points approaching the origin, provided that this is defined.
Proof of L'Hôpital's rule
Special case
The proof of L'Hôpital's rule is simple in the case where f and g are continuously differentiable at the point c and where a finite limit is found after the first round of differentiation. This is only a special case of L'Hôpital's rule, because it only applies to functions satisfying stronger conditions than required by the general rule. However, many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), so this special case covers most applications.
Suppose that f and g are continuously differentiable at a real number c, that f(c) = g(c) = 0, and that g'(c) ≠ 0. Then
lim_{x→c} f(x)/g(x) = lim_{x→c} (f(x) − f(c))/(g(x) − g(c)) = lim_{x→c} ((f(x) − f(c))/(x − c)) / ((g(x) − g(c))/(x − c)) = f'(c)/g'(c) = lim_{x→c} f'(x)/g'(x).
This follows from the difference-quotient definition of the derivative. The last equality follows from the continuity of the derivatives at c. The limit in the conclusion is not indeterminate because g'(c) ≠ 0.
The proof of a more general version of L'Hôpital's rule is given below.
General proof
The following proof is due to Taylor (1952), where a unified proof for the 0/0 and ∞/∞ indeterminate forms is given. Taylor notes that different proofs may be found in the earlier literature.
Let f and g be functions satisfying the hypotheses in the General form section. Let I be the open interval in the hypothesis with endpoint c. Considering that g'(x) ≠ 0 on this interval and g is continuous, I can be chosen smaller so that g is nonzero on I.
For each x in the interval, define m(x) = inf f'(ξ)/g'(ξ) and M(x) = sup f'(ξ)/g'(ξ) as ξ ranges over all values between x and c. (The symbols inf and sup denote the infimum and supremum.)
From the differentiability of f and g on I, Cauchy's mean value theorem ensures that for any two distinct points x and y in I there exists a ξ between x and y such that (f(x) − f(y))/(g(x) − g(y)) = f'(ξ)/g'(ξ). Consequently, m(x) ≤ (f(x) − f(y))/(g(x) − g(y)) ≤ M(x) for all choices of distinct x and y in the interval. The value g(x) − g(y) is always nonzero for distinct x and y in the interval, for if it were not, the mean value theorem would imply the existence of a p between x and y such that g'(p) = 0.
The definition of m(x) and M(x) will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, m(x) and M(x) will establish bounds on the ratio f(x)/g(x).
Case 1: lim_{x→c} f(x) = lim_{x→c} g(x) = 0
For any x in the interval, and any point y between x and c,
m(x) ≤ (f(x) − f(y))/(g(x) − g(y)) ≤ M(x),
and therefore as y approaches c, f(y) and g(y) become zero, and so
m(x) ≤ f(x)/g(x) ≤ M(x).
Case 2: lim_{x→c} |g(x)| = ∞
For every x in the interval, define S_x = {y : y lies between x and c}. For every point y between x and c,
m(x) ≤ (f(y) − f(x))/(g(y) − g(x)) ≤ M(x).
As y approaches c, both f(x)/g(y) and g(x)/g(y) become zero, and therefore
m(x) ≤ liminf_{y→c, y∈S_x} f(y)/g(y) ≤ limsup_{y→c, y∈S_x} f(y)/g(y) ≤ M(x).
The limit superior and limit inferior are necessary since the existence of the limit of f(y)/g(y) has not yet been established.
It is also the case that
lim_{x→c} m(x) = lim_{x→c} M(x) = L,
because lim_{x→c} f'(x)/g'(x) = L, and m(x) and M(x) are respectively the infimum and the supremum of f'(ξ)/g'(ξ) over the values ξ between x and c, which shrink to c.
In case 1, the squeeze theorem establishes that lim_{x→c} f(x)/g(x) exists and is equal to L. In case 2, the squeeze theorem again asserts that liminf_{y→c} f(y)/g(y) = limsup_{y→c} f(y)/g(y) = L, and so the limit lim_{x→c} f(x)/g(x) exists and is equal to L. This is the result that was to be proven.
In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of L'Hôpital's rule, then no additional assumption is needed about the limit of f(x): It could even be the case that the limit of f(x) does not exist. In this case, L'Hopital's theorem is actually a consequence of Cesàro–Stolz.
In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, then L'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero.
Corollary
A simple but very useful consequence of L'Hôpital's rule is that the derivative of a function cannot have a removable discontinuity. That is, suppose that f is continuous at a, and that f'(x) exists for all x in some open interval containing a, except perhaps for x = a. Suppose, moreover, that lim_{x→a} f'(x) exists. Then f'(a) also exists and
f'(a) = lim_{x→a} f'(x).
In particular, f' is also continuous at a.
Thus, if the derivative of a function is discontinuous at a point, the discontinuity must be essential: it cannot be removable.
Proof
Consider the functions h(x) = f(x) − f(a) and k(x) = x − a. The continuity of f at a tells us that lim_{x→a} h(x) = 0. Moreover, lim_{x→a} k(x) = 0, since a polynomial function is always continuous everywhere. Applying L'Hôpital's rule shows that f'(a) := lim_{x→a} (f(x) − f(a))/(x − a) = lim_{x→a} h(x)/k(x) = lim_{x→a} h'(x)/k'(x) = lim_{x→a} f'(x).
See also
L'Hôpital controversy
Notes
References
Sources
Articles containing proofs
Theorems in calculus
Theorems in real analysis
Limits (mathematics) | L'Hôpital's rule | Mathematics | 3,107 |
73,687,434 | https://en.wikipedia.org/wiki/Hydrography%20of%20the%20Biella%20region | The hydrography of the Biella region, that is, the distribution of surface water in the province of Biella, Italy, falls almost entirely in the two basins of the Cervo and Sessera rivers, both tributaries of the Sesia. Some areas of the southwestern Biella region, on the other hand, are tributaries of the Dora Baltea; the largest natural body of water in the province, Lake Viverone, is also located in this area. In addition to the natural bodies of water, there are several irrigation canals in the plains, built mainly to support rice cultivation, and some reservoirs built in the foothills. In addition to irrigation, surface water is also used in the Biella area to serve the region's numerous industries and for potable water, because the area is densely inhabited and groundwater capture is insufficient. The streams of the Biella region can also be subject to ruinous floods, which have repeatedly caused damage to property and people over time.
Paleogeography
Before the formation of the ridge and more generally of the Ivrea Morainic Amphitheatre, deposited at the outlet of the Aosta Valley during the last glaciations, the hydrographic network of the area that today constitutes the Province of Biella must have been completely different from the present one.
Paleogeographical research (in particular by geologists Francesco Carraro and Franco Gianotti) in fact shows how in ancient times the Cervo, after leaving the alpine valley of the same name, headed decisively southward to flow into the Dora Baltea roughly where Verrone stands today. Instead, the ancient course of the Dora was shifted markedly further northeast than it is today. In this reconstruction, the Oropa and Elvo also probably went directly into the Dora, running parallel to the Cervo for a long stretch of plain.
The small basin of the Viona would also have been a first-order tributary of the Dora Baltea but its course, instead of diverting eastward, would instead have taken it to join the Dora not too far from Ivrea.
Nevertheless, the deposition of the enormous moraine apparatus of the Serra and of sedimentary beds to the east of it changed this configuration and gradually diverted the course of the Cervo eastward, eventually leading it to flow into the Sesia. Sediments transported by the Balteo Glacier also barred the way towards the Dora to the current right tributaries of the Cervo itself, thus also conveying their waters towards the Sesia basin. This complex realignment of the hydrographic network left as a remnant many of the sedimentary deposits that can be found up to an altitude of 800 m a.s.l. at the foot of the Biellese mountains as well as the Baragge, which would represent what remains of the lowland areas present in that ancient past.
The deep incision in the rocky substrate produced by the ancient course of the Cervo River and preserved below the present sedimentary blanket is of considerable importance today because of the aquifers it contains, which can potentially be exploited for potable water use.
Streams
Among the main streams in the province of Biella, 14 belong to the Cervo basin while only 2 flow into the Sessera. Their hierarchy can be highlighted with a tree view, where streams are shown as secondary branches of a tree whose stem represents the main stream. For example, the Chiebbia is a tributary of the Quargnasca, which in turn flows into the Strona di Mosso, a tributary of the Cervo.
In the following list, parentheses indicate whether the stream flows into its main watercourse from the right (r) or the left (l), and its length in km.
Sessera Basin
Sessera (l; 35,46 km)
Ponzone (r; 10,00 km)
Strona di Postua (l; 14,13 km)
Cervo Basin
Cervo (l; 65,48 km)
Oropa (r; 13,49 km)
Strona di Mosso (l; 26,81 km)
Quargnasca (r; 12,54 km)
Chiebbia (r; 12,11 km)
Ostola (l; 23,51 km)
Bisingana (r; 14,70 km)
Guarabione (r; 20,47 km)
Rovasenda (r; 37,83 km)
Marchiazza (r; 34,67 km)
Elvo (l; 58,46 km)
Ingagna (r; 18,50 km)
Viona (r; 16,50 km)
Oremo (l; 16,21 km)
Olobbia (r; 15,22 km)
Hydrological regime
The streams in the Biellese area almost all have a typically pre-Alpine regime with autumn and spring floods and marked summer and winter low flows. The water supply provided by melting winter snow accumulation (often abundant in the upper Cervo or Sessera valleys) is exhausted relatively quickly during the spring due to the lower elevation reached by the Biellese Alps compared to the other Piedmont mountain ranges. In the event of heavy rainfall, the streams in the area are subject to massive floods that in the past have caused very serious damage to people and buildings.
The water regime of the lowland section of these streams is then altered, both quantitatively and as a distribution of flow rates over time, by the withdrawal operated by irrigation canals and the return of residual water from irrigation, particularly those served by spring and summer flooding of rice fields.
Flood events
Precipitation data for the Biella area show that the prevailing rainfall regime is that of the outer, plain-facing Alpine belt, in which rainfall is more abundant than in the inner parts of the Alpine chain. In the Sesia and Cervo basins, the highest average annual rainfall in the Po Valley region (even more than 2,000 mm/year) and the highest rainfall intensities are reached. All this leads to the formation of floods with flow rates that can become massive due to the low permeability of the soils located at the head of the valleys.
Sometimes these flood waves lead to the overflow of streams resulting in the flooding of surrounding areas. Among many such events, the most catastrophic one in the last 100 years occurred in 1968; the most extensive damage was located on the Strona riverbank in Mosso. On Saturday, November 2, 1968, 180.6 mm of rain fell in Trivero, and the following day as much as 305.6 mm. The impact on the territory of these values, already very high in themselves, was then aggravated by the fact that the rain, instead of being evenly distributed over the two days, was concentrated in the night between Saturday and Sunday. In Valstrona alone, the flood caused 58 deaths and more than 100 injuries. Hundreds of homes were totally destroyed or severely damaged, and the same fate was suffered by numerous industrial plants. As a result, many companies had to resort to lay-offs, which came to affect some 13,000 workers.
Even outside the Strona Valley, the flooding torrents swept away bridges and roads and flooded vast expanses of territory; landslides and mudslides involved much of the Biella region and in particular the Sessera basin. Here, near the village of Ponzone (Trivero), a huge landslide poured over the local factories a large part of a hillside (Il Trucco), and carnage was avoided only because the factories were closed due to the public holiday.
The following table shows the flow rates of some of the waterways measured on November 2, 1968; for the gauging stations listed, these were still, as of 2002, the highest figures in the available time series.
The 1968 event was not an isolated one, and numerous other flood waves have caused flood events in the Biella area over time. Among the most notable ones in the last century are:
May 1923: a flood of the Cervo River caused severe damage in Piedicavallo and Rosazza;
November 1951: widespread flooding throughout the Biella area and particularly in the Cervo Valley;
October and November 1976: numerous disruptions caused in particular by the flooding of Olobbia, Elvo, Oremo and Quargnasca;
September 1981: flooding in the upper Cervo Valley with damage caused mostly by the minor hydrographic network;
April 1986: a large landslide blocked the former SS 232 near Valle Mosso;
September 1993: after 36 hours of bad weather, a flood of the Cervo River caused the bypass bridge to collapse. There were no casualties thanks to a roadman who noticed the impending disaster and had the bridge closed half an hour before the collapse.
June 2002: heavy rainfall caused landslides and overflowing in the western Biella basins; the most serious damage occurred in the upper Cervo valley, while in the Oropa basin a landslide destroyed a long section of the access road to the Rosazza Tunnel.
October 2020: the flood event mainly affected the Strona di Mosso Valley and Sessera Valley, with very heavy damage especially to the road system.
Lakes
In the hilly and mountainous part of the Biellese region there are numerous lakes, generally of small to medium size with the exception of Lake Viverone. The latter, with a surface area of almost 6 km², is the third largest lake in Piedmont and an important tourist hub, with numerous accommodation and recreational facilities located on its shores. The lake is located on the border with the Province of Turin (about 1/6 of its surface area falls within the municipality of Azeglio); a public boat line connects the main towns along the coast.
Lakes in the Biella region can be grouped according to the event responsible for their formation into lakes of glacial origin, intermorainic lakes and artificial lakes. For the lakes in each of these groups, the elevation of the body of water, the outfall (where it is present and is sufficiently significant), and, when available, the area of the lake are given in parentheses; the three lists are not exhaustive and do not include many small lakes of exclusively local importance.
Lakes of glacial origin
They are located in the mountainous belt of the province and occupy the small basins left by the cirque glaciers present in the Biellese Alps during the glacial episodes of the Middle and Upper Pleistocene. These are small lakes located at altitudes above 1,800 m above sea level, often the destination of hiking routes.
The main lakes of this type are:
Lago della Vecchia (1858 m, Cervo, 0.06 km²)
Mucrone Lake (1894 m, Oropa, 0.02 km²)
Sessera or Monte Bo lakes (1956 m, Sessera)
Lago della Lace (1920 m, Rio della Lace - Elvo)
Lake Pasci or Mombarone (2058 m, Viona)
Intermorainic lakes
They are located in depressions included between the moraine ridges abandoned during the episodes of advance and retreat of the huge glacier that ran through the valley of the Dora Baltea in the Pleistocene. They are sometimes devoid of an outfall and, when one exists, it is activated only in case of exceptional floods. Remains of pile-dwelling villages dating back to prehistoric times have been found around these lakes. In particular, the remains of a Bronze Age village and two ancient monoxylous pirogues, i.e., dugout boats built from a single tree trunk, have been found near Lake Bertignano; the boats are preserved in the Museum of Antiquities in Turin. Investigations by the Piedmont Archaeological Superintendency have also identified important prehistoric sites on the shores of Lake Viverone.
Such bodies of water present in the Biella region are:
Lake Viverone (230 m, 5.72 km²)
Lake Bertignano (377 m, 0.09 km²)
Lake Prè (690 m)
Lake Bosi or Roppolo (375 m)
Artificial lakes
They are formed by dams that bar a watercourse; most of these reservoirs were created in the hillside to store the water needed to irrigate the rice fields in the plains below during the summer. However, the water from some of these reservoirs today has a mixed use and is also used for drinking, energy or industrial purposes. In particular, the Ponte Vittorio reservoir, whose construction dates back to 1953, was created to serve the numerous industrial users in the Strona di Mosso Valley.
The main artificial reservoirs in the province are:
Lake Ingagna or Mongrando (365 m, Ingagna, 0.42 km²)
Lago delle Piane or Masserano (325 m, Ostola, 0.43 km²)
Lake of Ponte Vittorio or Camandona (692 m, Strona di Mosso, 0.04 km²)
Lake Ravasanella (325 m, Riale Ravasanella - Rovasenda, 0.31 km²)
Lago delle Mischie or delle Miste (904 m, Sessera, 0.13 km²)
Canals
The Biella plain is crossed by numerous irrigation ditches and canals, such as the ditch of Buronzo, which originates from a branch on the hydrographic left of the Cervo near Castelletto Cervo and returns to it further downstream, now in the province of Vercelli. These canals were generally built to serve rice-growing, which is mainly concentrated in the southeastern part of the province. The area on the hydrographic right of the Cervo is served by various canals including the Roggia Massa di Serravalle, which originates in Cerrione from the Elvo and, after crossing the municipality of Salussola, flows into the Roggia Marchesa. The latter originates instead from the Cervo, flowing into the stream itself further downstream, not far from Villanova Biellese. The Roggia Madama and Roggia Molinara also depart from the Cerrione area, which then descend into the plain below, dispersing into numerous minor branches. Finally, in the southern area of Biella, the Vanoni Canal passes through, an important derivation of the Depretis Canal completed in 1958 to distribute water from the Dora Baltea in the lower part of the rice-growing district.
In some cases it is difficult to distinguish between a natural watercourse and a canal as is the case, for example, with the Roggia Drumma and Ottina, which collect water from the Candelo and Benna moorlands and then flow down to the Cervo in a semi-natural pattern. For centuries through this system of canals there has been a transfer to the Biellese and Vercellese areas of water resources from the Dora Baltea, the flow of which remains substantial even during the summer thanks to the contribution provided by the snowfields and glaciers of the Valdaostan Alps.
Various artificial canals are also present in the hilly area of the province, although with much smaller flows and dimensions than those in the plain. Among these, the Roggia del Piano and the Roggia del Piazzo, two mixed-use derivations of the Oropa Stream that served the city of Biella, have had considerable historical-urban importance in the past. The Roggia del Piazzo in particular is one of the oldest public works in the Biella area; in fact, it was born together with the district of the same name around 1160 and the cost of its construction was shared between Bishop Uguccione and the city of Biella.
In addition to the canals built for irrigation purposes, there are a number of derivations for hydroelectric purposes in Biella, concentrated particularly in the Sessera Valley and feeding small power plants that serve local industrial plants.
Watersheds
The Biellese territory is almost entirely included in the two basins of the Cervo and the Sessera, both tributaries on the orographic right of the Sesia River. The most downstream part of both basins falls within the province of Vercelli, however, and the confluence of many of its main tributaries into the Cervo also occurs in the Vercelli plain. Some areas of southwestern Biella, however, encroach into the Dora Baltea basin; this occurs in the Serra area and in the plains around Cavaglià.
Data on the watersheds of Biella's main streams are taken from the Water Protection Plan adopted by the Piedmont Region.
While the average flow rate expressed in m³/s measures the total volume of water leaving the basin per unit time, the specific discharge expressed in L/(s·km²) measures the surface runoff produced per unit time by each km² of the same catchment. The latter figure is greater in mountain basins (e.g., those of the Oropa or the Strona di Postua) than in hill basins (e.g., the Olobbia) due to a number of factors, including steeper slopes, which favour runoff at the expense of water infiltration into the soil, and lower evapotranspiration losses from vegetation. For much the same reasons, higher runoff coefficients are also found in the mountainous areas which, among other things, receive the highest annual water inflows (here expressed in mm) in this part of Piedmont.
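The two figures are linked by a simple area normalisation. In the expression below, written here only for illustration (the symbols Q, A and q are introduced for clarity and do not appear in the cited plan), Q is the average flow rate in m³/s, A the catchment area in km², and q the specific discharge:

q\;[\mathrm{L/(s\cdot km^2)}] \;=\; \frac{1000\,Q\;[\mathrm{m^3/s}]}{A\;[\mathrm{km^2}]}

For example, a hypothetical basin with Q = 4 m³/s and A = 100 km² would yield q = 40 L/(s·km²).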
Solid transport
Solid transport is the ability of a watercourse to transport solid materials downstream. This transport can occur by various modes among which, in the climatic situation of the Biella region, suspension in water and rolling/dragging near the bottom largely predominate. In order to arrive at a quantitative estimate of the solid transport of a watercourse, it is necessary to know both the average amount of sediment produced by the upland portion of the basin of the watercourse itself and the solid transport capacity of the river channel.
The former depends on the lithological and tectonic characteristics of the area under consideration (rock erodibility, slope gradient, presence of structural fractures, etc.) as well as the local climate and, in particular, the amount and type of atmospheric precipitation.
For the Elvo and Cervo mountain basins, which together cover an area of 426 km², the average annual precipitation is 1,580 mm and the amount of sediment produced has been estimated at 35,600 m³ annually, or 0.08 mm/year of specific erosion. This figure is rather low compared with the average specific erosion over the entire Po mountain basin (28,440 km²), which is estimated at 1.2 mm/year. The amount of sediment originating from the Biellese mountain basins is also evidently medium-low relative to their surface area: while the territory considered represents 4.94% of the Po Valley mountain area, the sediment it produces is only 3.21% of the total.
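The 0.08 mm/year figure can be checked by dividing the sediment volume by the basin area, converted to consistent units:

\frac{35\,600\ \mathrm{m^3/yr}}{426\ \mathrm{km^2}} \;=\; \frac{35\,600\ \mathrm{m^3/yr}}{426\times10^{6}\ \mathrm{m^2}} \;\approx\; 8.4\times10^{-5}\ \mathrm{m/yr} \;\approx\; 0.08\ \mathrm{mm/yr}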
On the other hand, as far as estimating the solid transport capacity of river courses is concerned, it is necessary to consider not only the magnitude and duration of the watercourse's flow rates, but also the physical characteristics of its riverbed (slope, width, roughness, tortuosity, grain size of materials).
Estimates of annual solid transport capacity are available for the two main streams of the Biella area and, for comparison, for the Sesia River.
By comparing the amount of sediment produced by a basin with the solid transport capacity of the river channel, rough predictions can be made as to whether deposition of the transported materials or erosion will prevail, the latter bringing a deepening of the riverbed, the abandonment of secondary branches, and a narrowing of the stream's wandering belt. From the available data, it appears that the overall situation on the Elvo is fairly close to equilibrium, while along the Cervo channel erosion tends to prevail over deposition.
Uses of water
Industry and hydropower
In the past, water availability favored the concentration of the textile industry near Biella's streams. Notable examples of industrial archaeology, such as the Biella factories located near the confluence of Cervo and Oropa or the Wheel Factory on the banks of the Ponzone Creek, remain as evidence of this historical phase.
Water withdrawals for industrial use are still numerous and tend to be concentrated in the hilly stretches of the river courses and at their outlets onto the plain, as in the middle valley of the Strona di Mosso or in the lower Cervo valley.
The discontinuity of the flows of Biella's streams, on the other hand, confines the hydroelectric use of its waters to limited mountainous areas such as the Sessera Valley; derivations are generally operated by private utilities that use the energy produced for the operation of their industrial plants.
Irrigation
To alleviate at least part of the summer water shortage, a number of dams were built in the second half of the twentieth century to retain spring rainfall in a series of artificial lakes. This water is then gradually released over the summer, allowing irrigation of the rice-growing area below.
The main provider of this type of service is the Consorzio di Bonifica della Baraggia (Ovest Sesia).
Potable water
The two main users of Biella's water for potable purposes are CORDAR spa Biella Servizi, the company that manages, among other things, Biella's water supply, and Servizio Idrico Integrato S.p.a. (SII), which manages the water services of several municipalities in the Vercelli and Biella areas, including Borgosesia. They are joined by two other managers of integrated water services of supra-municipal importance, CORDAR Valsesia spa and Comuni Riuniti spa. In some municipalities, the presence of these utilities coexists with that of locally important aqueducts operated directly by local citizens; about ten municipalities still lack an integrated water service. While the water supply of CORDAR spa Biella Servizi relies mainly on springs and wells, a good part of the drinking water put on the network by Servizio Idrico Integrato S.p.a. comes instead from two artificial lakes, Lago delle Piane and Lake Ingagna. Some of this water, however, is not used directly by SII but is sold to CORDAR spa Biella Servizi.
Also in the context of water use for human consumption, two important plants for collecting and bottling mineral water are worth mentioning, both located on the southern slopes of the Colma di Mombarone. These are Fonte Caudana, located in the municipality of Donato and operated by Alpe Guizza S.p.a. (a brand of Acqua Minerale San Benedetto S.p.a.) and Acqua Lauretana. This second mineral water, which qualifies as the lightest in Europe, is instead collected and bottled in the municipality of Graglia by Società Lauretana S.p.a.
Other uses
The streams draining the Bessa area are known, like other Piedmontese watercourses such as the Orco, for the presence of flecks of gold in their bed sands. In antiquity, gold panning had very considerable economic importance and led to the grandiose mining works carried out by the ancient Romans on the Bessa plateau, whose gold-bearing sediments were washed thanks to the diversion of the waters of the nearby Viona and Olobbia streams. These streams are still known among avid gold prospectors for the flecks of metal in their alluvial deposits, so much so that in August 2009 the municipality of Zubiena hosted the World Gold Prospecting Championships.
The waters of Lake Viverone and those of the mountain section of some streams including the Cervo and Sessera are used during the summer for bathing.
The streams in the Biellese region are not navigable by motor boats, although there is a public lake transport line on Lake Viverone. During periods when the flow is sufficiently high, however, it is possible to descend some streams by canoe or raft, including the Sessera, the Strona di Postua, and the Cervo.
Many Biella streams, including lowland ones, are also valued fishing grounds.
Environmental status
Water quality monitoring in Biella is based on a network of 24 sampling stations (in 2006) managed by ARPA-Piemonte. The environmental status of streams in the Biellese region is generally satisfactory in the upland section but worsens considerably in the more anthropized hillside areas due to emissions from civil and industrial sources. The impact of pollutant factors tends to be more severe in streams with low flows and poor self-depurative capacity. Even in the lowlands, due to water shortages caused by withdrawal for agricultural use and the presence of pesticide residues, the environmental status of Biella's streams generally remains unsatisfactory.
Out of the total of 24 points surveyed in 2006, 4 were in S.A.C.A. class 4 (POOR) and only one was in class 5 (VERY POOR). The situation of Lake Viverone is also rather problematic mainly due to its slow water exchange. The lake became almost completely swimmable again in 2008 after several years during which this use had been prohibited due to pollution. For the time being, water quality remains rather poor, although a cleanup project currently being implemented could lead to a gradual improvement in quality.
The main sewage treatment plant in the Biella area is the one operated by CORDAR spa Biella Servizi, located at Spolina, near Cossato. This plant has a capacity of 520,000 population equivalent (p.e.); in contrast, none of the many other sewage treatment plants in the Biellese area exceeds a capacity of 100,000 p.e.
See also
Province of Biella
Biellese Alps
Hydrography
Drainage basin
Ivrea Morainic Amphitheatre
Notes
References
Bibliography
Rivers of the Province of Biella
Biella
Hydrography | Hydrography of the Biella region | Environmental_science | 5,429 |
1,481,284 | https://en.wikipedia.org/wiki/Swiki | Swiki (Squeak wiki) is wiki software written in Squeak.
It was formerly used by the Georgia Institute of Technology's College of Computing, but its use was discontinued in 2011 following a student complaint about privacy. Swiki comes bundled with its own web server.
A swiki installation consists of the Virtual Machine (VM) file (usually squeak.exe), an image file (usually squeak.image), and a set of files and folders with templates and the virtual wikis. One swiki installation allows a large number of virtual wikis to be created through the admin interface using a web browser. The image file and associated templates and virtual wikis can be run on any OS as long as the VM for that OS is used.
The VM and image file are the only binary files. All of the swiki templates and pages are stored as text files using XML tags. Each new virtual swiki goes in its own folder, and each page in the virtual swiki is a numbered XML file: the first page is 1.xml, the second is 2.xml, and so on. The history of each page is kept in a separate XML file that uses the file extension "old", e.g., 1.old, 2.old.
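This numbering convention makes the page store straightforward to walk programmatically. The following Python sketch pairs each page file with its history file; the folder path and the helper name are illustrative assumptions, not part of Swiki itself:

from pathlib import Path

def list_pages(wiki_dir):
    # Pair each numbered page file (1.xml, 2.xml, ...) in a virtual
    # swiki folder with its history file (same number, .old), if any.
    wiki = Path(wiki_dir)
    pages = sorted((p for p in wiki.glob("*.xml") if p.stem.isdigit()),
                   key=lambda p: int(p.stem))
    return [(p, p.with_suffix(".old") if p.with_suffix(".old").exists() else None)
            for p in pages]

# Hypothetical path to one virtual wiki inside an installation:
for page, history in list_pages("swiki/myproject"):
    print(page.name, "history:", history.name if history else "none")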
See also
Comparison of wiki software
References
External links
Smalltalk programming language family
Wiki software | Swiki | Technology | 284 |
4,627,911 | https://en.wikipedia.org/wiki/Chestnut-backed%20chickadee | The chestnut-backed chickadee (Poecile rufescens) is a small passerine bird in the tit family, Paridae, native to western North America.
Taxonomy
In the early 20th century, Joseph Grinnell hypothesized that the chestnut-backed chickadee diverged from the boreal chickadee (Poecile hudsonicus) because both species inhabited similar coniferous forest environments. Grinnell noted that the main differences between the boreal chickadee and the chestnut-backed were in the shade and tone of their respective brown coloration. He drew parallels between the varied chickadee characteristics, drawing on the fact that some bird species become smaller and more vibrantly brown as their habitat becomes more humid. Modern molecular phylogenetic studies have confirmed that the chestnut-backed chickadee is sister to the boreal chickadee. More recent research regarding the population distribution of the chestnut-backed chickadee suggests that its genetic fragmentation from the boreal chickadee was due to the changing glacial landscapes of the Pleistocene era. After this species divergence, the chestnut-backed chickadee migrated south to inhabit its present range.
Subspecies
There are three subspecies, with the flanks being grayer and less rufous further south:
Poecile rufescens rufescens (Townsend, 1837). Nominate subspecies; Alaska south to northwest California. Broad rufous band on flanks.
Poecile rufescens neglectus (Ridgway, 1879). Coastal central California (Marin County). Narrow rufous band on flanks.
Poecile rufescens barlowi (Grinnell, 1900). Coastal southwestern California (south of San Francisco Bay). Almost no rufous color on flanks.
In addition to these three subspecies, research on the geographical range of chestnut-backed chickadees suggests that there are also four "genetically distinct" groups of chestnut-backed chickadee in North America. Besides the populations in Alaska and coastal North America, there are separate populations inhabiting the Queen Charlotte Islands and British Columbia. In fact, the chestnut-backed chickadee is the only species of chickadee that resides on the British Columbia islands.
Distribution and habitat
It is found in the Pacific Northwest of the United States and western Canada, from southeastern Alaska to southwestern California. Its geographical range hugs the humid, foggy coasts. It is a permanent resident within its range, with some seasonal movements as feeding flocks move short distances in search of food. These chickadees usually move to lower elevations in the same area upon onset of winter and move back up to higher elevations in late summer. Its habitat is low elevation coniferous and mixed coniferous forests, consisting mainly of Douglas fir, western hemlock, and western redcedar. This environment provides plenty of shade and constant, cool temperature. In fact, the abundance of Douglas fir trees can be a helpful indicator for the population of chestnut-backed chickadees in the region. In the San Francisco Bay Area, this bird has readily adapted to suburban settings, prompting expansion farther inland.
Description
It is a small chickadee. The head is dark blackish-brown with white cheeks, the mantle is bright rufous-brown, and the wing feathers are dark gray with paler fringes. The underparts are white to pale grayish-white, with rufous or pale gray flanks. It is often considered the most vibrant of all chickadees.
Chickadees are able to use nocturnal hypothermia to regulate energy expenditure, allowing them to survive harsh winters where other bird species not utilizing thermal regulation would not be able to. Some estimates put the energy conserved while using nocturnal hypothermia all the way up to 32%.
Diet and foraging
Chestnut-backed chickadees feed largely on insects and other invertebrates gleaned from foliage (especially that of the Douglas fir). They often move through the forest in mixed feeding flocks, and can be spotted in large groups with bushtits, warblers, red-breasted nuthatches, and kinglets. Chestnut-backed chickadees also eat seeds and plant matter, especially those of conifers, and fruit. They will visit bird feeders, including hummingbird feeders, and are especially fond of suet.
Mating and nesting
Chestnut-backed chickadees mate monogamously and can stay with the same partner for years. They are cavity-nesters, preferring tree-stump holes and nest boxes, usually taking over an abandoned woodpecker hole but sometimes excavating on their own. During nesting season, the female spends about a week building the nest on her own. She builds the under layers of the nest from moss and tree bark, with a layer of fur on top; fur and hair make up about half of the nest material, most commonly from deer, rabbits, and coyotes. The adults also keep a layer of fur about a centimeter thick that is used to cover the eggs whenever they leave the nest. The female lays 5–8 (sometimes 9) eggs per clutch, laying about one egg each morning. Weasels are the main predators of chestnut-backed chickadee eggs. The incubation period is about two weeks, and the chicks fledge the nest about three weeks after hatching.
Gallery
References
External links
Miller, K. (2001). Animal Diversity Web: Parus rufescens. Retrieved 2006-NOV-21.
Chestnut-backed chickadee at BirdHouses101.com
Chestnut-backed chickadee species account – Cornell Lab of Ornithology
Tool use by chestnut-backed chickadee
Native birds of Alaska
Native birds of the Northwestern United States
Native birds of the West Coast of the United States
Birds of the Sierra Nevada (United States)
Tool-using animals
chestnut-backed chickadee | Chestnut-backed chickadee | Biology | 1,248 |
25,998,383 | https://en.wikipedia.org/wiki/Sycophancy | In modern English, sycophant denotes an "insincere flatterer" and is used to refer to someone practising sycophancy (i.e., insincere flattery to gain advantage). The word has its origin in the legal system of Classical Athens. Most legal cases of the time were brought by private litigants as there was no police force and only a limited number of officially appointed public prosecutors. By the fifth century BC this practice had given rise to abuse by "sycophants": litigants who brought unjustified prosecutions. The word retains the same meaning ('slanderer') in Modern Greek, French (where it also can mean 'informer'), and Italian. In modern English, the meaning of the word has shifted to its present usage.
Etymology
The origin of the Ancient Greek word (συκοφάντης, sykophántēs) is a matter of debate, but the term disparages the unjustified accuser who has in some way perverted the legal system.
The original etymology of the word (σῦκον sûkon, 'fig', and φαίνω phaínō, 'to show'), literally "revealer of figs", has been the subject of extensive scholarly speculation and conjecture. Plutarch appears to be the first to have suggested that the source of the term was in laws forbidding the exportation of figs, and that those who leveled the accusation against another of illegally exporting figs were therefore called sycophants. Athenaeus provided a similar explanation. Blackstone's Commentaries repeats this story, but adds an additional take: that there were laws making it a capital offense to break into a garden and steal figs, and that the law was so odious that informers were given the name sycophants.
A different explanation of the origin of the term by Shadwell was that the sycophant refers to the manner in which figs are harvested, by shaking the tree and revealing the fruit hidden among the leaves. The sycophant, by making false accusations, makes the accused yield up their fruit. The Encyclopædia Britannica Eleventh Edition listed these and other explanations, including that the making of false accusations was an insult to the accused in the nature of "showing the fig", an "obscene gesture of phallic significance" or, alternatively that the false charges were often so insubstantial as to not amount to the worth of a fig.
Generally, scholars have dismissed these explanations as inventions, long after the original meaning had been lost. Danielle Allen suggests that the term was "slightly obscene", connoting a kind of perversion, and may have had a web of meanings derived from the symbolism of figs in ancient Greek culture, ranging from the improper display of one's "figs" by being overly aggressive in pursuing a prosecution, the unseemly revealing of the private matters of those accused of wrongdoing, to the inappropriate timing of harvesting figs when they are unripe.
In Athenian culture
The traditional view is that the opprobrium against sycophants was attached to the bringing of an unjustified complaint, hoping either to obtain the payment for a successful case, or to blackmail the defendant into paying a bribe to drop the case. Other scholars have suggested that the sycophant, rather than being disparaged for being motivated by profit, was instead viewed as a vexatious litigant who was over-eager to prosecute, who had no personal stake in the underlying dispute, and who brought up old charges unrelated to himself long after the event. Sycophants included those who profited by using their position as citizens. For instance, one could hire a sycophant to bring a charge against one's enemies, or to take a wide variety of actions of an official nature with the authorities, including introducing decrees, acting as an advocate or a witness, bribing ecclesiastical or civil authorities and juries, or other questionable things with which one did not want to be personally associated. Sycophants were viewed as uncontrolled and parasitic, lacking proper regard for truth or for justice in a matter, using their education and skill to destroy opponents for profit in matters where they had no stake, lacking even the convictions of politicians, and having no sense of serving the public good.
Orations
The charge of sycophancy against a litigant was a serious matter, and the authors of two surviving orations, "Against the Grain Dealers" (author Lysias) and "Against Leocrates" (author Lycurgus), defend themselves against charges that they are sycophants because they are prosecuting cases as private citizens in circumstances where they have no personal stake in the underlying dispute. In each instance, the lack of personal involvement appears to have been the crux of the accusation of sycophancy against them, the merits of the cases being separate matters from whether they had a right to bring them.
Measures to suppress sycophants
Efforts were made to discourage or suppress sycophants, including imposing fines on litigants who failed to obtain at least one fifth of the jury's votes, or for abandoning a case after it had begun (as would occur if the sycophant was bribed to drop the matter), and authorizing the prosecution of men for being sycophants. Statutes of limitation were specifically adopted in an attempt to prevent sycophancy.
Satire
Sycophants are better illustrated through the satirical works of Aristophanes. In The Acharnians, a Megarian attempting to sell his daughters is confronted by a sycophant who accuses him of illegally attempting to sell foreign goods; and a Boeotian purchases a sycophant as a typical Athenian product that he cannot obtain at home. A sycophant appears as a character in The Birds. One of his lost plays had, as its principal theme, an attack against a sycophant. In Wealth, the character, Sycophant, defends his role as a necessity in supporting the laws and preventing wrongdoing.
Modern Greece
In daily use, the term refers to someone who deliberately spreads lies about a person in order to harm that person's reputation or otherwise insult their honor, i.e. a slanderer; the corresponding noun and verb mean 'slander' and 'to slander' (συκοφαντία, συκοφαντώ).
In legal terms, Article 362 of the Greek Penal Code defines defamation (δυσφήμηση) as committed by whoever "in any way claims or spreads about someone else a fact that could harm that person's honor or reputation", whereas slanderous defamation (συκοφαντική δυσφήμηση) occurs when the alleged fact is false and the person who claims or spreads it knows this. The first case is punishable with up to two years' imprisonment or a fine, whereas slanderous defamation is punishable with at least three months' imprisonment and a fine.
Shift in meaning in modern English
The word sycophant entered the English and French languages in the mid-16th century, and originally had the same meaning in English and French (sycophante) as in Greek: a false accuser. Today, in Greek and French it retains the original meaning.
The meaning in English has changed over time, however, and came to mean an insincere flatterer. The common thread in the older and current meanings is that the sycophant is in both instances portrayed as a kind of parasite, speaking falsely and insincerely in the accusation or the flattery for gain. The Greek plays often combined in one single character the elements of the parasite and the sycophant, and the natural similarities of the two closely related types led to the shift in the meaning of the word. The sycophant in both meanings can also be viewed as two sides of the same coin: the same person currying one's favor by insincere flattery is also spreading false tales and accusations behind one's back.
In Renaissance English, the word was used in both senses and meanings, that of the Greek informer, and the current sense of a "flattering parasite", with both being cast as enemies—not only of those they wrong, but also of the person or state that they ostensibly serve.
Related expressions
Sycophancy is insincere flattery given to gain advantage from a superior. A user of sycophancy is referred to as a sycophant or a “yes-man.”
Alternative phrases are often used, such as "bootlicker", "brown-noser", "toady", and "suck-up".
See also
Further reading
References
External links
Etymologies
Classical Athens
Greek words and phrases
Interpersonal relationships
Human behavior
Harassment and bullying
Ancient Greece | Sycophancy | Biology | 1,793 |
13,542,841 | https://en.wikipedia.org/wiki/Rat%20Rock%20%28Central%20Park%29 | Rat Rock, also known as Umpire Rock, is an outcrop of Manhattan schist which protrudes from the bedrock in Central Park, Manhattan, New York City. It is named after the rats that used to swarm there at night. It is located near the southwest corner of the park, south of the Heckscher Ballfields near the alignments of 62nd Street and Seventh Avenue. Its distinct east, west, and north faces each present different climbing challenges. The rock bears striations caused by glaciation.
Boulderers congregate there, sometimes as many as fifty per day. Some are regulars such as Yukihiko Ikumori, a gardener from the West Village who is known as the spiritual godfather of the rock. Others are just passing through, such as tourists and visitors who learn about the climbing spot from the Internet and word of mouth. Experienced climbers such as Ikumori often show neophytes good routes and techniques. More experienced outsiders may be disappointed as the quality of the stone is poor, the setting is gloomy and the climbs present so little challenge that it has been called "one of America's most pathetic boulders".
The park police formerly ticketed climbers who climbed more than a few feet up the rock. The City Climbers Club approached the park authorities and, by working to provide safety features such as wood chips around the base, they were able to legalize climbing there.
References
External links
Bouldering in Central Park
RatRock @ ClimbNYC.com
Central Park
Climbing areas of the United States
Stones | Rat Rock (Central Park) | Physics | 325 |
39,773,873 | https://en.wikipedia.org/wiki/Fourth%20Industrial%20Revolution | "Fourth Industrial Revolution", "4IR", or "Industry 4.0", is a neologism describing rapid technological advancement in the 21st century. It follows the Third Industrial Revolution (the "Information Age"). The term was popularised in 2016 by Klaus Schwab, the World Economic Forum founder and executive chairman, who asserts that these developments represent a significant shift in industrial capitalism.
A part of this phase of industrial change is the joining of technologies such as artificial intelligence and gene editing with advanced robotics, blurring the lines between the physical, digital, and biological worlds.
Throughout this, fundamental shifts are taking place in how the global production and supply network operates through ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention.
It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society (i.e. a metaverse) that changes the ways humans experience and know the world around them. It posits that we have created, and are entering, an augmented social reality that goes beyond the natural senses and industrial abilities of humans alone. The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, where creativity and imagination become the primary drivers of economic value.
History
The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government. Klaus Schwab, executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published by Foreign Affairs. "Mastering the Fourth Industrial Revolution" was the 2016 theme of the World Economic Forum Annual Meeting, in Davos-Klosters, Switzerland.
On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. This was also subject and title of Schwab's 2016 book. Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems), and emphasises advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralised consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles.
In The Great Reset proposal by the WEF, The Fourth Industrial Revolution is included as a strategic intelligence in the solution to rebuild the economy sustainably following the COVID-19 pandemic.
First Industrial Revolution
The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power. The implementation of new technologies took a long time, so the period to which it refers spans roughly 1760 to 1820, or 1840, in Europe and the United States. Its effects were felt in textile manufacturing, which was the first sector to adopt such changes, as well as in the iron industry, agriculture, and mining; it also had societal effects, with an ever stronger middle class.
Second Industrial Revolution
The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from installations of extensive railroad and telegraph networks, which allowed for faster transfer of people and ideas, as well as electricity. Increasing electrification allowed for factories to develop the modern production line.
Third Industrial Revolution
The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes.
A book titled The Third Industrial Revolution, by Jeremy Rifkin, was published in 2011, which focused on the intersection of digital communications technology and renewable energy. It was made into a 2017 documentary by Vice Media.
Characteristics
In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes which include cyber-physical systems (CPS), Internet of Things (IoT), cloud computing, cognitive computing, and artificial intelligence.
Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks.
The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; and improvements in transferring digital instructions to the physical world including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; improvements to and uptake of Off-Grid / Stand-Alone Renewable Energy Systems: solar, wind, wave, hydroelectric and the electric batteries (lithium-ion renewable energy storage systems (ESS) and EV).
It also emphasizes decentralized decisions – the ability of cyber physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals, are tasks delegated to a higher level.
Distinctiveness
Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution. This is due to the following characteristics:
Velocity — exponential speed at which incumbent industries are affected and displaced
Scope and systems impact – the large amount of sectors and firms that are affected
Paradigm shift in technology policy – new policies designed for this new way of doing things; an example is Singapore's formal recognition of Industry 4.0 in its innovation policies.
Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and policy transition varies across countries, and the definition of Industry 4.0 is not harmonised. One of the best-known critics is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution". However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development".
Components
The application of the Fourth Industrial Revolution operates through:
Mobile devices
Location detection technologies (electronic identification)
Advanced human-machine interfaces
Authentication and fraud detection
Smart sensors
Big data analytics and advanced processes
Multilevel customer interaction and customer profiling
Augmented reality/wearables
On-demand availability of computer system resources
Data visualisation
Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy.
The value created in Industry 4.0 relies on electronic identification: for smart manufacturing to be classified as on the development path of Industry 4.0, rather than mere digitisation, a set of such technologies must be incorporated in the manufacturing process.
Trends
Smart factory
The Fourth Industrial Revolution fosters "smart factories": production environments in which facilities and logistics systems are organised with minimal human intervention.
The technical foundations on which smart factories are based are cyber-physical systems that communicate with each other using the Internet of Things and Services. An important part of this process is the exchange of data between the product and the production line. This enables a much more efficient connection of the supply chain and better organisation within any production environment.
Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralised decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, both internally and across the organisational services offered and used by participants in the value chain.
Artificial intelligence
Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom". Models like GPT-4o can engage in verbal and textual discussions and analyze images.
AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times. AI could also significantly accelerate, or even automate software development.
Some experts believe that AI alone could be as transformative as an industrial revolution. Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can), making large investments in data centers and GPUs to train more capable AI models.
Robotics
Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed to do narrow tasks, failing when encountering new situations. Modern humanoid robots, however, are typically based on machine learning, in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train and more versatile.
Predictive maintenance
Industry 4.0 facilitates predictive maintenance through the use of advanced technologies, including IoT sensors. Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or is damaged. For example, a company in Los Angeles could see that a piece of equipment in Singapore is running at an abnormal speed or temperature, and then decide whether or not it needs to be repaired.
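One common way to implement such a remote check is to flag telemetry values that deviate strongly from recent behaviour. The Python sketch below shows a minimal rolling-statistics version; the function name, window, threshold and fabricated readings are illustrative assumptions, not any particular vendor's implementation:

from collections import deque
from statistics import mean, stdev

def flag_abnormal(readings, window=60, k=3.0):
    # Yield (index, value) for readings more than k standard deviations
    # away from the rolling mean of the preceding `window` samples.
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > k * sigma:
                yield i, value
        history.append(value)

# Fabricated temperature telemetry with one injected fault:
temps = [70.0 + 0.1 * (i % 5) for i in range(200)]
temps[150] = 95.0
print(list(flag_abnormal(temps)))  # -> [(150, 95.0)]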
3D printing
The Fourth Industrial Revolution is said to have extensive dependency on 3D printing technology. Some advantages of 3D printing for industry are that 3D printing can print many geometric structures, as well as simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can also decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs and help the company towards the adoption of a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing it locally, therefore reducing supplier dependence and reducing the supply lead time.
Smart sensors
Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories.
Smart sensors are devices, which generate the data and allow further functionality from self-monitoring and self-configuration to condition monitoring of complex processes.
With the capability of wireless communication, they reduce installation effort to a great extent and help realise a dense array of sensors.
The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."
However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Battery power places an additional limit on these functionalities. One example of the integration of smart sensors in electronic devices is the smart watch, where sensors receive data from the movement of the user, process the data and, as a result, tell the user how many steps they have walked in a day, also converting the data into calories burned.
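A toy version of that step-and-calorie pipeline can be sketched in a few lines of Python; the threshold and the kilocalorie-per-step constant below are rough illustrative assumptions, not the calibrated, weight- and pace-aware models that real devices use:

def count_steps(accel_magnitudes, threshold=1.2):
    # Count a step at each upward crossing of an acceleration-magnitude
    # threshold (in g) -- a crude stand-in for real wearable filtering.
    steps, above = 0, False
    for a in accel_magnitudes:
        if a > threshold and not above:
            steps += 1
        above = a > threshold
    return steps

def calories_burned(steps, kcal_per_step=0.04):
    # 0.04 kcal/step is a rough average, not a device constant.
    return steps * kcal_per_step

trace = [1.0, 1.3, 1.0, 1.4, 0.9, 1.3, 1.0]  # fabricated sensor trace
s = count_steps(trace)
print(s, "steps,", calories_burned(s), "kcal")  # -> 3 steps, ~0.12 kcal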
Agriculture and food industries
Smart sensors in these two fields are still in the testing stage. These innovative connected sensors collect, interpret and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation). Based on this scientific data, the objective is to enable real-time monitoring via a smartphone, with a range of advice that optimises plot management in terms of results, time and costs. On the farm, these sensors can be used to detect crop stages, recommend inputs and treatments at the right time, and control the level of irrigation.
The food industry demands ever more security and transparency, and full documentation is required. This new technology is used as a tracking system as well as for the collection of human data and product data.
Accelerated transition to the knowledge economy
A knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence. Industry 4.0 aids the transition to a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources.
Challenges
Challenges in implementation of Industry 4.0:
Economic
High economic cost
Business model adaptation
Unclear economic benefits/excessive investment
Significant economic change driven by automation and technological advancement, leading to both job displacement and the creation of new roles, and necessitating widespread workforce reskilling and systemic adaptation
Social
Privacy concerns
Surveillance and distrust
General reluctance to change by stakeholders
Threat of redundancy of the corporate IT department
Loss of many jobs to automatic processes and IT-controlled processes, especially for blue-collar workers
Increased risk of gender inequalities in professions with job roles most susceptible to replacement with AI
Political
Lack of regulation, standards and forms of certifications
Unclear legal issues and data security
Organizational
IT security issues, which are greatly aggravated by the inherent need to open up previously closed production shops
Reliability and stability needed for critical machine-to-machine communication (M2M), including very short and stable latency times
Need to maintain the integrity of production processes
Need to avoid any IT snags, as those would cause expensive production outages
Need to protect industrial know-how (contained also in the control files for the industrial automation gear)
Lack of adequate skill-sets to expedite the transition towards Industry 4.0
Low top management commitment
Insufficient qualification of employees
Country applications
Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example,
Australia
Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.
Germany
The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government and specifically relates to that project policy, rather than a wider notion of a Fourth Industrial Revolution of 4IR, which promotes the computerisation of manufacturing. The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair. Renowned German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term. In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013 at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering.
As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded. For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4.
The discussion of how the shift to Industry 4.0, especially digitisation, will affect the labour market is being discussed in Germany under the topic of Work 4.0.
The federal government in Germany through its ministries of the BMBF and BMWi, is a leader in the development of the I4.0 policy. Through the publishing of set objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap between German enterprise's collaboration and knowledge of these set policies. The biggest challenge which SMEs in Germany are currently facing regarding digital transformation of their manufacturing processes is ensuring that there is a concrete IT and application landscape to support further digital transformation efforts.
The characteristics of the German government's Industry 4.0 strategy involve the strong customisation of products under the conditions of highly flexible (mass-) production. The required automation technology is improved by the introduction of methods of self-optimization, self-configuration, self-diagnosis, cognition and intelligent support of workers in their increasingly complex work. The largest project in Industry 4.0 as of July 2013 was the German Federal Ministry of Education and Research (BMBF) leading-edge cluster "Intelligent Technical Systems Ostwestfalen-Lippe (it's OWL)". Another major project is the BMBF project RES-COM, as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries". In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic.
Estonia
In Estonia, the digital transformation dubbed the 4th Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 started with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged to the digital era while skipping analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations.
According to the goals set in the Estonia's Digital Agenda 2030, next leaps in the country's digital transformation will be switching to event based and proactive services, both in private and business environment, as well as developing a green, AI-powered and human-centric digital government.
Indonesia
Another example is Making Indonesia 4.0, with a focus on improving industrial performance.
India
India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers around leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0.
Japan
Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective resolution of societal challenges through a closely interconnected system spanning both the digital realm and the physical world. The concept was introduced in 2019 in the 5th Science and Technology Basic Plan of the Japanese government as a blueprint for a forthcoming societal framework.
Malaysia
Malaysia's national policy on Industry 4.0 is known as Industry4WRD. Launched in 2018, key initiatives in this policy include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries.
South Africa
South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with backgrounds in academia, industry and government. South Africa has also established an Interministerial Committee on Industry 4.0.
South Korea
The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017. The Republic of Korea's I-Korea strategy (2017) is focusing on new growth engines that include AI, drones and autonomous cars, in line with the government's innovation-driven economic policy.
Uganda
Uganda adopted its own National 4IR Strategy in October 2020 with emphasis on e-governance, urban management (smart cities), health care, education, agriculture and the digital economy; to support local businesses, the government was contemplating introducing a local start-ups bill in 2020 which would require all accounting officers to exhaust the local market prior to procuring digital solutions from abroad.
United Kingdom
In a 2019 policy paper titled "Regulation for the Fourth Industrial Revolution", the UK's Department for Business, Energy & Industrial Strategy outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings.
United States
In 2019, the Department of Homeland Security published a paper called 'The Industrial Internet of things (IIOT): Opportunities, Risks, Mitigation'. The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation; hence, their implementation, growth and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of IIoT but also the associated risks, and suggests some key areas where risk mitigation is possible. To increase coordination between public, private, law enforcement, academic and other stakeholders, the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC).
Industry applications
The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified. One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project.
The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, and self-organised coordination that reacts to unexpected changes in production. In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0.
Industry 4.0 inspired Innovation 4.0, a move toward digitisation for academia and research and development. In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer aided materials science, where robotic formulation, data capture and modelling are being integrated into development practices.
Criticism
With the consistent development of automation of everyday tasks, some see value in the exact opposite of automation, whereby self-made products are valued more than those whose production involved automation. This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke.
Another problem that is expected to accelerate with the growth of 4IR is the prevalence of mental disorders, a known issue among high-tech operators. The 4IR has also sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability.
Future
Industry 5.0
Industry 5.0 has been proposed as a strategy to create a paradigm shift for an industrial landscape in which the primary focus should no longer be on increasing efficiency but on promoting the well-being of society and sustainability of the economy and industrial production.
See also
Computer-integrated manufacturing
Cyber manufacturing
Digital modelling and fabrication
Industrial control system
Intelligent maintenance systems
Lights-out manufacturing
List of emerging technologies
Machine to machine
Nondestructive Evaluation 4.0
Simulation software
Technological singularity
Technological unemployment
The War on Normal People
Work 4.0
World Economic Forum 2016
Digitization
Transhumanism
AI boom
References
Sources
2015 neologisms
21st century
Industrial automation
Industrial computing
Internet of things
Technology forecasting
Big data
Industrial Revolution
Fourth Industrial Revolution
Knowledge economy | Fourth Industrial Revolution | Technology,Engineering | 4,843 |
22,218,484 | https://en.wikipedia.org/wiki/Omega%20hydroxy%20acid | Omega hydroxy acids (ω-hydroxy acids) are a class of naturally occurring straight-chain aliphatic organic acids n carbon atoms long with a carboxyl group at position 1 (the starting point for the family of carboxylic acids), and a hydroxyl at terminal position n where n > 3. They are a subclass of hydroxycarboxylic acids. The C16 and C18 omega hydroxy acids 16-hydroxy palmitic acid and 18-hydroxy stearic acid are key monomers of cutin in the plant cuticle. The polymer cutin is formed by interesterification of omega hydroxy acids and derivatives of them that are substituted in mid-chain, such as 10,16-dihydroxy palmitic acid. Only the epidermal cells of plants synthesize cutin.
Omega hydroxy fatty acids also occur in animals. Cytochrome P450 (CYP450) microsome ω-hydroxylases such as CYP4A11, CYP4A22, CYP4F2, and CYP4F3 in humans, Cyp4a10 and Cyp4a12 in mice, and Cyp4a1, Cyp4a2, Cyp4a3, and Cyp4a8 in rats metabolize arachidonic acid and many arachidonic acid metabolites to their corresponding omega hydroxyl products. This metabolism of arachidonic acid produces 20-hydroxyarachidonic acid (i.e. 20-hydroxyeicosatetraeonic acid or 20-HETE), a bioactive product involved in various physiological and pathological processes; and this metabolism of certain bioactive arachidonic acid metabolites such as leukotriene B4 and 5-hydroxyicosatetraenoic acid produces 20-hydroxylated products which are 100- to 1,000-fold weaker than, and therefore represents the inactivation of, their respective precursors.
List
The definition for "omega" includes number of carbons (C#) greater or equal to three. Lower numbers are included here to match the formula pattern CnH2nO3.
See also
Alpha hydroxy acid
Beta hydroxy acid
References
Hydroxy acids
Plant physiology
| Omega hydroxy acid | Biology | 494 |
60,413,544 | https://en.wikipedia.org/wiki/Neuroprivacy | Neuroprivacy, or "brain privacy," is a concept which refers to the rights people have regarding the imaging, extraction and analysis of neural data from their brains. This concept is highly related to fields like neuroethics, neurosecurity, and neurolaw, and has become increasingly relevant with the development and advancement of various neuroimaging technologies. Neuroprivacy is an aspect of neuroethics specifically regarding the use of neural information in legal cases, neuromarketing, surveillance and other external purposes, as well as corresponding social and ethical implications.
History
Neuroethical concepts such as neuroprivacy developed initially in the 2000s, after the initial invention and development of neuroimaging techniques such as positron emission tomography (PET), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI). As neuroimaging became highly studied and popularized in the 1990s, it also started entering the commercial market as entrepreneurs sought to market the practical applications of neuroscience, such as neuromarketing, neuroenhancement and lie detection. Neuroprivacy consists of the privacy issues raised by both neuroscience research and applied uses of neuroimaging techniques. The relevance of neuroprivacy debate increased significantly after the 9/11 terrorist attacks, which led to a push for increased neuroimaging in the context of information/threat detection and surveillance.
Neuroanalysis techniques
Brain fingerprinting
Brain fingerprinting is a controversial and unproven EEG technique that relies on identifying the P300 event-related potential, which is correlated with recognition of some stimulus. The purpose of this technique is to determine if a person has incriminating information or memory. In its current state, brain fingerprinting is only able to determine the existence of information, and is unable to provide any specific details about that information. Its creator, Dr. Lawrence Farwell, claims brain fingerprinting is highly reliable and nearly impossible to fool, but some studies dispute its reliability and its claimed resistance to countermeasures. Some possible countermeasures include thinking of something else instead of processing the real stimuli, mental suppression of recognition, or simply not cooperating with the test. There have been concerns over the potential use of memory-dampening drugs such as propranolol to beat brain fingerprinting. However, some studies have shown that propranolol actually dampens the emotional arousal associated with a memory instead of the memory itself, which could even improve recollection of the memory.
A comparable EEG technique is brain electrical oscillation signature profiling (BEOS), which is very similar to brain fingerprinting in that it detects the presence of specific information or memories. Despite a significant lack of scientific studies confirming the validity of BEOS profiling, this technique has been used in India to provide evidence for criminal investigations.
Evaluation and prediction of mental and moral faculties
Current neuroimaging technology has been able to detect neural correlates of human attributes such as memory and morality. Neurodata can be used to diagnose and predict behavioral disorders and patterns such as psychopathy and antisocial behavior, both of which are factors in calculating likelihood of future criminal behavior. This ability to evaluate mental proficiencies, biases and faculties could be relevant to government or corporate entities for the purposes of surveillance or neuromarketing, especially if neurodata can be collected without the subjects' knowledge or consent. Using neurodata to predict future behaviors and actions could help create or inform preventive measures to treat people before problems happen; however, this raises ethical issues as to how society defines "moral" or "acceptable" behavior.
Lie detection
It is possible to use neuroimaging as a form of lie detection. By assuming deception requires an increase of cognitive processes to develop an alternate story, the difference in mental states between telling the truth or lying should be noticeable. However, this relies on assumptions that have yet to be conclusively determined, and as such neurological lie detection is not yet reliable or fully understood. This is in contrast to the standard polygraph, which relies on analyzing biological mechanisms that are well understood but still not necessarily reliable.
Applications of personal neurodata
Legal evidence
The legal systems of most countries generally do not accept neuroimaging data as permissible evidence, with some exceptions. India has allowed BEOS tests as legal evidence, and an Italian court of appeals used neuroimaging evidence in a 2009 case, becoming the first European court to do so. Canadian and US courts have been more cautious in permitting neuroimaging data as legal evidence. One of the reasons legal systems have been slow to adopt neuroimaging data as an accepted form of evidence is the possible error and misinterpretation that could result from such a new technology; courts in the US typically follow the Daubert standard for evidence evaluation, set by the Daubert v. Merrell Dow Pharmaceuticals, Inc. Supreme Court case, which established that the validity of scientific evidence must be determined by the trial judge. The Daubert standard serves as a safeguard for the reliability of scientific evidence, and requires a significant amount of testing before any neuroimaging technique can be considered as evidence. While brain fingerprinting was technically accepted in the Harrington v. Iowa case, the judge specifically stated that the EEG evidence was not to be presented to a jury, and so the evidence did not set a significant precedent.
Surveillance and security
Neurological surveillance is relevant to governmental, corporate, academic and technological entities, as the improvement of technology increases the amount of information that can be extrapolated from neuroimaging. Surveillance with current neuroimaging technology is considered difficult, given how fMRI data is difficult to collect and interpret even in laboratory settings; fMRI studies generally require subjects to be motionless and cooperative. However, as technology improves it may be possible to overcome these requirements.
Hypothetically, there are benefits in using neuroscience in the context of surveillance and security. However, there is debate over whether doing so would violate neuroprivacy to an unacceptable extent.
Neuromarketing
Neurodata is valuable to advertising and marketing entities because of its potential to identify how and why people react to different stimuli, in order to better influence consumers. This ability to examine reactions and perceptions from the brain directly creates new ethical debates, such as how to define the acceptable limits of mental manipulation and how to avoid targeting vulnerable or receptive demographics. In a sense, these are not entirely new debates but added dimensions to previously existing discussions.
Controversy and debate
Scientific arguments
The main scientific arguments regarding neuroprivacy revolve around the limits of the current understanding of neurodata. Many of the arguments against using neuroimaging in legal, surveillance and other contexts are based on the lack of a solid scientific basis, meaning the potential for error and misinterpretation is too high. Brain fingerprinting, one of the most popularized forms of neuroanalysis, has been promoted by its creator, Lawrence Farwell, despite a lack of scientific agreement on its reliability. Currently, there is even a lack of scientific understanding as to what can be interpreted from neurodata, which makes limiting and categorizing different types of neurodata difficult, further complicating neuroprivacy. Another complication is that neurodata is highly personal and essentially inseparable from the subject, making it extremely sensitive and difficult to anonymize.
Another issue is the conflation of scientific knowledge with beliefs regarding the relations between philosophical, neural and societal constructs. Popularization of and overconfidence in scientific techniques may lead to assumptions or misinterpretations of what neurodata actually describe, when in reality there are limits to what can be interpreted from correlations between neural activity and semantic meaning.
Legal arguments
There are various legal arguments as to how neuroprivacy is covered under current protections and rights and how future laws should be implemented to define and protect neuroprivacy, as neuroscience has the potential to significantly change the legal status quo. The legal definition of neuroprivacy has yet to be properly established, but there appears to be a general consensus that a legal and ethical foundation for neuroprivacy rights should be established before neuroimaging becomes widely accepted across legal, corporate and security contexts. As neuroprivacy constitutes an international issue, an international consensus may be required to establish the necessary legal and ethical foundation.
Bringing neuroscience into legal contexts has been argued to have certain benefits. Current types of legal testimony, such as eyewitness testimony and polygraph testing, have significant flaws that may be possibly currently overlooked due to historical and traditional precedents. Neuroscience could potentially solve some of these issues by directly examining the brain, given scientific confidence in the neuroimaging techniques. However, this raises questions concerning balancing legal usages of neuroscience with neuroprivacy protections.
In the US, there are certain existing rights that could be interpreted to protect neuroprivacy. The Fifth Amendment, which protects citizens from self-incrimination, could be interpreted as protecting citizens from being incriminated by their own brains. However, the current interpretation is that the Fifth Amendment protects citizens from self-incriminating testimony; if neuroimaging constitutes physical evidence instead of testimony, the Fifth Amendment may not protect against neuroimaging evidence. The Ninth and Fourteenth Amendments help protect unspecified rights and fair procedures, which may or may not include neuroprivacy to some extent.
One interpretation of neuroimaging evidence is categorizing it as forensic evidence rather than scientific expert testimony; detecting memories and information of a crime could be compared to collecting forensic residue from a crime scene. This distinction would make it categorically different than a polygraph test, and increase its legal permissibility in Canadian and US legal systems.
Ethical arguments
Some general ethical concerns regarding neuroprivacy revolve around personal rights and control over personal information. As technology improves, it is possible that collecting neurodata without consent or knowledge will be easier or more common in the future. One argument is that the collection of neurodata is a violation of both personal property and intellectual property, as the collection of neurodata involves scanning both the body and the analysis of thought.
One of the main ethical controversies regarding neuroprivacy is related to the issue of free will and the mind-body problem. A possible concern is the unknown extent to which neurodata can predict actions and thoughts; it is not currently known whether the physical activity of the brain is conclusively or solely responsible for thoughts and actions. Examining the brain as a way to prevent crimes or disorders before they manifest raises the question of whether it is possible for people to exercise their agency despite their neurological condition. Even using neurodata to treat certain disorders and diseases preemptively raises questions about identity, agency and how society defines morality.
Popular culture
In the television show Westworld, hats are used as neuroimaging devices that record experiences and data without the consent or knowledge of the users. This data is mainly used for research for neuromarketing and commercial pursuits, namely the pursuit of immortality.
In the Dark Forest novel by Liu Cixin, one of the projects developed to ensure the survival of humanity involved extensive human brain mapping to develop ways to improve cognition. This project was eventually used to imprint human brains with "mental seals", artificially implanted unshakeable beliefs in a person's psyche.
In the Harry Potter series by J. K. Rowling, brain privacy can be invaded by the use of Legilimens, which involves the extraction of the contents of the mind such as thoughts and emotions. One way to increase neuroprivacy in the Harry Potter world is by practicing Occlumency, which involves defending the mind against Legilimens and other forms of mental invasion.
See also
Brain-reading
Thought identification
References
Neuroscience
Privacy | Neuroprivacy | Biology | 2,491 |
1,368,572 | https://en.wikipedia.org/wiki/Heinrich%20Tietze | Heinrich Franz Friedrich Tietze (August 31, 1880 – February 17, 1964) was an Austrian mathematician, famous for the Tietze extension theorem on functions from topological spaces to the real numbers. He also developed the Tietze transformations for group presentations, and was the first to pose the group isomorphism problem. Tietze's graph is also named after him; it describes the boundaries of a subdivision of the Möbius strip into six mutually-adjacent regions, found by Tietze as part of an extension of the four color theorem to non-orientable surfaces.
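As an illustration of how Tietze transformations work (a standard textbook example, not drawn from Tietze's own papers), a redundant generator can be added to and removed from a presentation of the infinite cyclic group:

```latex
% Presentations of the infinite cyclic group Z related by Tietze moves:
\langle a \mid \ \rangle
\quad\longrightarrow\quad
\langle a, b \mid b = a^3 \rangle              % add a generator with a defining relation
\quad\longrightarrow\quad
\langle a, b \mid b = a^3,\ b^2 = a^6 \rangle  % add a relation derivable from the others
% Reversing the two moves recovers the one-generator presentation; Tietze's
% theorem states that two finite presentations define isomorphic groups if and
% only if they are related by a finite sequence of such transformations.
```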
Education and career
Tietze was the son of Emil Tietze and the grandson of Franz Ritter von Hauer, both of whom were Austrian geologists.
He was born in Schleinz, Austria-Hungary, and studied mathematics at the Technische Hochschule in Vienna beginning in 1898. After additional studies in Munich, he returned to Vienna, completing his doctorate in 1904 and his habilitation in 1908.
From 1910 until 1918 Tietze taught mathematics in Brno, and was promoted to ordinary professor in 1913. He served in the Austrian army during World War I, and then returned to Brno, but in 1919 he took a position at the University of Erlangen, and then in 1925 moved again to the University of Munich, where he remained for the rest of his career. One of his doctoral students was Georg Aumann. Tietze retired in 1950, and died in Munich, West Germany.
Awards and honors
Tietze was a fellow of the Bavarian Academy of Sciences and a fellow of the Austrian Academy of Sciences.
Publications
Über die mit Lineal und Zirkel und die mit dem rechten Zeichenwinkel lösbaren Konstruktionsaufgaben, Mathematische Zeitschrift vol.46, 1940
with Leopold Vietoris: Beziehungen zwischen den verschiedenen Zweigen der Topologie, Enzyklopädie der Mathematischen Wissenschaften 1929
Über die Anzahl der stabilen Ruhelagen eines Würfels, Elemente der Mathematik vol.3, 1948
Über die topologische Invarianten mehrdimensionaler Mannigfaltigkeiten, Monatshefte für Mathematik und Physik, vol. 19, 1908, p.1-118
Über Simony Knoten und Simony Ketten mit vorgeschriebenen singulären Primzahlen für die Figur und für ihr Spiegelbild, Mathematische Zeitschrift vol.49, 1943, p.351 (Knot theory)
References
External links
Austrian mathematicians
1880 births
1964 deaths
Group theorists
Members of the Austrian Academy of Sciences
Mathematicians from Austria-Hungary
Academic staff of the University of Erlangen-Nuremberg
Academic staff of the Ludwig Maximilian University of Munich
TU Wien alumni
Topologists | Heinrich Tietze | Mathematics | 595 |
74,014,351 | https://en.wikipedia.org/wiki/Clandestinotrema%20portoricense | Clandestinotrema portoricense is a rare species of corticolous (bark-dwelling) crustose lichen in the family Graphidaceae. Found in Puerto Rico, it was described as a new species in 2014. It is characterised by its white, slightly shiny thallus that can span several centimetres in diameter, and its rounded ascomata, which are immersed in the thallus. Unlike most of its genus counterparts, C. portoricense possesses septate (partitioned) spores and a carbonised (blackened) and , effectively distinguishing it from similar species.
Taxonomy
Clandestinotrema portoricense was first formally described by lichenologists Joel Mercado-Díaz, Robert Lücking, and Sittiporn Parnmen. The holotype, the initial specimen that serves as the basis for its description, was discovered by the first author in Canóvanas, Puerto Rico. The species name, portoricense, pays homage to the island of Puerto Rico, the locale of its discovery.
Description
The thallus, or body, of Clandestinotrema portoricense can span up to several centimetres in diameter. The thallus, which can be either thinly epiperidermal or partially endoperidermal, is white, slightly shiny, and smooth to uneven in texture. No prothallus is present in this species. The lichen's photobiont, responsible for photosynthesis, is a green alga from the genus Trentepohlia, with cells that are rounded to irregular in outline, grouped together, and yellowish-green in colour.
What makes this species unique are the ascomata – reproductive structures where spores are produced – that are rounded and immersed with a lateral . The ascospores are 3-septate to often somewhat submuriform, with an additional, longitudinal septum in the upper segment. They are hyaline, with diamond-shaped .
No substances were detected in this species using thin-layer chromatography.
Similar species
While most species of Clandestinotrema have regularly (sub-)muriform ascospores, C. portoricense stands out due to its seemingly 3-septate ascospores that may form an additional, longitudinal septum in the thicker proximal segment. This characteristic differentiates it from the other species in the genus, such as C. analorenae, C. maculatum, and C. protoalbum, all of which have regularly 3-septate ascospores. Apart from its unique ascospore septation, Clandestinotrema portoricense also differs in the of the and , providing further distinguishing features.
Habitat and distribution
This lichen species was discovered in the shaded understory of a Palo Colorado forest in El Yunque National Forest, Puerto Rico, specifically on the living trunk of an unidentified tree.
References
Graphidaceae
Lichen species
Lichens described in 2014
Lichens of the Caribbean
Taxa named by Robert Lücking
Species known from a single specimen | Clandestinotrema portoricense | Biology | 609 |
1,710,040 | https://en.wikipedia.org/wiki/Indentation%20hardness | Indentation hardness tests are used in mechanical engineering to determine the hardness of a material to deformation. Several such tests exist, wherein the examined material is indented until an impression is formed; these tests can be performed on a macroscopic or microscopic scale.
When testing metals, indentation hardness correlates roughly linearly with tensile strength, but it is an imperfect correlation often limited to small ranges of strength and hardness for each indentation geometry. This relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers.
Material hardness
Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties for materials, for instance, of thin films, cannot be done using conventional uniaxial tensile testing. As a result, techniques testing material "hardness" by indenting a material with a very small impression have been developed to attempt to estimate these properties.
Hardness measurements quantify the resistance of a material to plastic deformation. Indentation hardness tests compose the majority of processes used to determine material hardness, and can be divided into three classes: macro, micro and nanoindentation tests. Microindentation tests typically have forces less than 2 N. Hardness, however, cannot be considered a fundamental material property. Classical hardness testing usually creates a number which can be used to provide a relative idea of material properties. As such, hardness can only offer a comparative idea of the material's resistance to plastic deformation, since different hardness techniques have different scales.
The equation based definition of hardness is the pressure applied over the contact area between the indenter and the material being tested. As a result hardness values are typically reported in units of pressure, although this is only a "true" pressure if the indenter and surface interface is perfectly flat.
Instrumented indentation
Instrumented indentation presses a sharp tip into the surface of a material to obtain a force-displacement curve. The results provide a wealth of information about the mechanical behavior of the material, including its hardness, elastic modulus, and plastic deformation. One key requirement of an instrumented indentation test is that the force on, or displacement of, the tip be controlled and measured simultaneously throughout the indentation cycle. Current technology can realize accurate force control over a wide range, so hardness can be characterized at many different length scales and for materials ranging from hard ceramics to soft polymers.
The earliest work was done by Bulychev, Alekhin, and Shorshorov in the 1970s, who showed that the Young's modulus of a material can be determined from the slope of a force vs. displacement indentation curve as:

$$S = \frac{dP}{dh} = \frac{2}{\sqrt{\pi}} E_r \sqrt{A}$$

where
$S$: material stiffness, which is the slope of the curve,
$A$: the tip-sample contact area,
$E_r$: the reduced modulus, defined as:

$$\frac{1}{E_r} = \frac{1-\nu^2}{E} + \frac{1-\nu_i^2}{E_i}$$

where $E$ and $\nu$ are the Young's modulus and Poisson's ratio of the sample, and $E_i$ and $\nu_i$ are those of the indenter. Since typically $E_i \gg E$, the second term can often be ignored.
The most critical information, hardness, can be calculated by:

$$H = \frac{P_{\max}}{A}$$

where $P_{\max}$ is the maximum applied load and $A$ is the projected contact area.
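A minimal Python sketch of these relations (illustrative only: the function names and example numbers are assumptions, roughly representative of fused silica, and a full analysis such as the Oliver–Pharr method would also derive the contact area from the unloading curve and tip shape):

```python
import math

def reduced_modulus(E, nu, E_i=1141e9, nu_i=0.07):
    """Reduced modulus E_r from sample (E, nu) and indenter (E_i, nu_i) constants.

    Defaults are values commonly quoted for a diamond indenter.
    """
    return 1.0 / ((1 - nu**2) / E + (1 - nu_i**2) / E_i)

def modulus_from_stiffness(S, A):
    """Invert S = (2/sqrt(pi)) * E_r * sqrt(A) for the reduced modulus E_r."""
    return S * math.sqrt(math.pi) / (2.0 * math.sqrt(A))

def hardness(P_max, A):
    """Indentation hardness H = P_max / A, in units of pressure."""
    return P_max / A

# Example: unloading stiffness 8e4 N/m, contact area 1 um^2, peak load 9 mN
S, A, P_max = 8e4, 1e-12, 9e-3
print(f"E_r = {modulus_from_stiffness(S, A) / 1e9:.1f} GPa")   # ~70.9 GPa
print(f"H   = {hardness(P_max, A) / 1e9:.2f} GPa")             # 9.00 GPa
# Reduced modulus for nominal silicon values (E ~170 GPa, nu ~0.28):
print(f"E_r(Si) = {reduced_modulus(170e9, 0.28) / 1e9:.0f} GPa")  # ~159 GPa
```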
Commonly used indentation techniques, as well as detailed calculation of each different method, are discussed as follows.
Macroindentation tests
The term "macroindentation" is applied to tests with a larger test load, such as 1 kgf or more. There are various macroindentation tests, including:
Vickers hardness test (HV), which has one of the widest scales. Widely used to test hardness of all kinds of metal materials (steel, nonferrous metals, tinsel, cemented carbide, sheet metal, etc.); surface layer / coating (Carburization, nitriding, decarburization layer, surface hardening layer, galvanized coating, etc.).
Brinell hardness test (HB); the BHN and HBW designations are widely used
Knoop hardness test (HK), for measurement over small areas, widely used to test glass or ceramic material.
Janka hardness test, for wood
Meyer hardness test
Rockwell hardness test (HR), principally used in the USA. HRA, HRB and HRC scales are most widely used.
Shore hardness test, for polymers, widely used in the rubber industry.
Barcol hardness test, for composite materials.
There is, in general, no simple relationship between the results of different hardness tests. Though there are practical conversion tables for hard steels, for example, some materials show qualitatively different behaviors under the various measurement methods. The Vickers and Brinell hardness scales, however, correlate well over a wide range, with Brinell producing overestimated values only at high loads.
Indentation procedures can, however, be used to extract genuine stress-strain relationships. Certain criteria need to be met if reliable results are to be obtained. These include the need to deform a relatively large volume, and hence to use large loads. The methodologies involved are often grouped under the term Indentation plastometry, which is described in a separate article.
Microindentation tests
The term "microhardness" has been widely employed in the literature to describe the hardness testing of materials with low applied loads. A more precise term is "microindentation hardness testing." In microindentation hardness testing, a diamond indenter of specific geometry is impressed into the surface of the test specimen using a known applied force (commonly called a "load" or "test load") of 1 to 1000 gf. Microindentation tests typically have forces of 2 N (roughly 200 gf) and produce indentations of about 50 μm. Due to their specificity, microhardness testing can be used to observe changes in hardness on the microscopic scale. Unfortunately, it is difficult to standardize microhardness measurements; it has been found that the microhardness of almost any material is higher than its macrohardness. Additionally, microhardness values vary with load and work-hardening effects of materials. The two most commonly used microhardness tests are tests that also can be applied with heavier loads as macroindentation tests:
Vickers hardness test (HV)
Knoop hardness test (HK)
In microindentation testing, the hardness number is based on measurements made of the indent formed in the surface of the test specimen. The hardness number is based on the applied force divided by the surface area of the indent itself, giving hardness units in kgf/mm2. Microindentation hardness testing can be done using Vickers as well as Knoop indenters. For the Vickers test, both the diagonals are measured and the average value is used to compute the Vickers pyramid number. In the Knoop test, only the longer diagonal is measured, and the Knoop hardness is calculated based on the projected area of the indent divided by the applied force, also giving test units in kgf/mm2.
The Vickers microindentation test is carried out in a similar manner to the Vickers macroindentation tests, using the same pyramid. The Knoop test uses an elongated pyramid to indent material samples. This elongated pyramid creates a shallow impression, which is beneficial for measuring the hardness of brittle materials or thin components. Both the Knoop and Vickers indenters require polishing of the surface to achieve accurate results.
Scratch tests at low loads, such as the Bierbaum microcharacter test, performed with either 3 gf or 9 gf loads, preceded the development of microhardness testers using traditional indenters. In 1925, Smith and Sandland of the UK developed an indentation test that employed a square-based pyramidal indenter made from diamond. They chose the pyramidal shape with an angle of 136° between opposite faces in order to obtain hardness numbers that would be as close as possible to Brinell hardness numbers for the specimen. The Vickers test has a great advantage of using one hardness scale to test all materials. The first reference to the Vickers indenter with low loads was made in the annual report of the National Physical Laboratory in 1932. Lips and Sack describes the first Vickers tester using low loads in 1936.
There is some disagreement in the literature regarding the load range applicable to microhardness testing. ASTM Specification E384, for example, states that the load range for microhardness testing is 1 to 1000 gf. For loads of 1 kgf and below, the Vickers hardness (HV) is calculated with the following equation, wherein load (L) is in grams-force and the mean of two diagonals (d) is in millimeters:

$$HV = \frac{1.8544 \times L}{1000 \times d^2}$$
For any given load, the hardness increases rapidly at low diagonal lengths, with the effect becoming more pronounced as the load decreases. Thus at low loads, small measurement errors will produce large hardness deviations, and one should always use the highest possible load in any test.
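A minimal sketch of this calculation in Python (the helper name is hypothetical; inputs are in grams-force and millimeters, matching the equation above):

```python
def vickers_hardness(load_gf: float, d1_mm: float, d2_mm: float) -> float:
    """Vickers hardness number (kgf/mm^2) from load and the two indent diagonals."""
    d = (d1_mm + d2_mm) / 2.0  # mean diagonal length in mm
    return 1.8544 * load_gf / (1000.0 * d**2)

# Example: 500 gf load producing diagonals of 61 and 59 micrometres
print(f"{vickers_hardness(500, 0.061, 0.059):.0f} HV")  # ~258 HV
```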
Nanoindentation tests
Sources of error
The main sources of error with indentation tests are poor technique, poor calibration of the equipment, and the strain hardening effect of the process. However, it has been experimentally determined through "strainless hardness tests" that the effect is minimal with smaller indentations.
Surface finish of the part and the indenter do not have an effect on the hardness measurement, as long as the indentation is large compared to the surface roughness. This proves to be useful when measuring the hardness of practical surfaces. It is also helpful when leaving a shallow indentation, because a finely etched indenter leaves a much easier-to-read indentation than a smooth one.
The indentation that is left after the indenter and load are removed is known to "recover", or spring back slightly. This effect is properly known as shallowing. For spherical indenters the indentation is known to stay symmetrical and spherical, but with a larger radius. For very hard materials the radius can be three times as large as the indenter's radius. This effect is attributed to the release of elastic stresses. Because of this effect the diameter and depth of the indentation do contain errors. The error from the change in diameter is known to be only a few percent, with the error for the depth being greater.
Another effect the load has on the indentation is the piling-up or sinking-in of the surrounding material. If the metal is work hardened it has a tendency to pile up and form a "crater". If the metal is annealed it will sink in around the indentation. Both of these effects add to the error of the hardness measurement.
Relation to yield stress
When hardness, $H$, is defined as the mean contact pressure (load divided by projected contact area), the yield stress, $\sigma_y$, of many materials is proportional to the hardness through a constant known as the constraint factor, $C$:

$$H = C \sigma_y$$

where $C$ is approximately 3 for many metals.
The hardness differs from the uni-axial compressive yield stress of the material because different compressive failure modes apply. A uni-axial test only constrains the material in one dimension, which allows the material to fail as a result of shear. Indentation hardness, on the other hand, is constrained in three dimensions, which prevents shear from dominating the failure.
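As a worked example under the common approximation $C \approx 3$ for metals (a typical value, not a universal constant), a Vickers number can be converted to an estimated yield stress:

```python
def yield_stress_from_hardness(H_MPa: float, C: float = 3.0) -> float:
    """Estimate uniaxial yield stress (MPa) from hardness via H = C * sigma_y."""
    return H_MPa / C

# 150 HV = 150 kgf/mm^2; 1 kgf/mm^2 = 9.80665 MPa
H_MPa = 150 * 9.80665
print(f"sigma_y ~ {yield_stress_from_hardness(H_MPa):.0f} MPa")  # ~490 MPa
```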
See also
Leeb rebound hardness test
Meyer's law
References
External links
"Pinball Tester Reveals Hardness." Popular Mechanics, November 1945, p. 75.
Bibliography
.
Hardness tests
Physical quantities | Indentation hardness | Physics,Materials_science,Mathematics | 2,321 |
39,801,199 | https://en.wikipedia.org/wiki/Nettenchelys%20erroriensis | Nettenchelys erroriensis is an eel in the family Nettastomatidae (duckbill/witch eels). It was described by Emma Stanislavovna Karmovskaya in 1994. It is a marine, deep water-dwelling eel which is known from Error Seamount (from which its species epithet is derived), in the western Indian Ocean. It dwells at a depth range of . Females can reach a maximum total length of .
References
erroriensis
Fish described in 1994
Species known from a single specimen
Fauna of Socotra | Nettenchelys erroriensis | Biology | 112 |
71,664,230 | https://en.wikipedia.org/wiki/Glock%20switch | A Glock switch (sometimes called a button or a giggle switch) is a small device that can be attached to the rear of the slide of a Glock handgun, changing the semi-automatic pistol into a selective-fire machine pistol capable of fully automatic fire. As a type of auto sear, it functions by applying force to the trigger bar to prevent it from limiting fire to one round of ammunition per trigger pull. This device by itself, regardless of whether it is installed on a slide, is classified by the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) as a machine gun, making possession of the device illegal in the United States under most circumstances.
Operation
A Glock switch functions by applying force to a semi-automatic pistol's trigger bar to prevent it from limiting fire to one round of ammunition per trigger pull. Normally, in a semiautomatic pistol, after firing, the trigger bar catches the firing pin until the trigger is released, but when depressed by the switch it does not catch. A Glock switch thus converts the weapon into a machine pistol capable of automatic fire. The device is roughly the size of a United States quarter, and when installed on the rear of the slide on a Glock pistol (replacing the slide cover plate), adds a selective fire switch; flipping the switch sets the weapon to full automatic mode, which is capable of firing as many rounds per minute as the short-recoil action allows.
History
A patent for the Glock auto-sear was filed in 1996 and approved in 1998, with its invention credited to Venezuelan Jorge A. Leon, who claims to have invented the device in 1987.
The first reported appearance of Glock switches in the United States occurred in 2002 when an Argentinian was arrested for sending Glock switches among other illegal firearms to the United States, with 16 later being recovered by the ATF in 2003.
A handgun with a Glock switch attached fits the definition of a machine gun under United States federal law. The 1986 Firearm Owners Protection Act made new machine guns illegal for civilians to own, banning "possession and transfer of new automatic firearms and parts that fire bullets without stopping once the trigger is depressed", with the exception of machine guns manufactured prior to May 19, 1986. The penalties for possession of an unregistered machine gun in the United States are up to a $250,000 fine and prison sentences of up to 10 years.
In 2019, the ATF recovered thousands of the devices, which had been imported from China. In 2021 and 2022, people began manufacturing the switch devices with 3D printers. In March 2022, a Vice News investigation found that federal prosecutions involving conversion devices had been rising since 2017, and that advances in low-cost 3D printers and global commerce on the internet had made the devices available for as little as US$20. In 2022, federal authorities documented a dramatic rise in the prevalence of Glock switches.
Legality
According to the ATF, "A Glock Switch is a part which was designed and intended for use in converting a semi-automatic Glock pistol into a machine gun; therefore, it is a 'machine gun' as defined in 26 U.S.C. 5845(b)."
See also
Hell-fire trigger
Bump stock
Gun politics in the United States
September 2024 Birmingham shooting
References
External links
Video More 'Glock Switches' confiscated in Tennessee
Video Penny-sized 'Glock switch' turns handgun into automatic weapon
Firearm actions
Gun politics in the United States
Firearm components | Glock switch | Technology | 726 |
605,486 | https://en.wikipedia.org/wiki/Chequamegon%E2%80%93Nicolet%20National%20Forest | The Chequamegon–Nicolet National Forest (the q is silent) is a U.S. National Forest in northern Wisconsin in the United States. Due to logging in the early part of the 20th century, very little old-growth forest remains. Some of the trees there were planted by the Civilian Conservation Corps in the 1930s. The national forest's trees and vegetation are part of the North Woods ecoregion that prevails throughout the upper Great Lakes region.
Legally two separate national forests—the Chequamegon National Forest and the Nicolet National Forest—the areas were established by presidential proclamations in 1933 and have been managed as one unit since 1998.
The Chequamegon National Forest comprises three units in the north-central part of the state totaling . In descending order of forestland area, it is located in parts of Bayfield, Ashland, Price, Sawyer, Taylor, and Vilas counties. Forest headquarters are in Park Falls. There are local ranger district offices in Glidden, Hayward, Medford, Park Falls, and Washburn. Moquah Barrens Research Natural Area is located within the Chequamegon. Lying within the Chequamegon are two officially designated wilderness areas of the National Wilderness Preservation System. These are the Porcupine Lake Wilderness and the Rainbow Lake Wilderness.
The Nicolet National Forest covers of northeastern Wisconsin. It is located in parts of Forest, Oconto, Florence, Vilas, Langlade, and Oneida counties. The forest headquarters are in Rhinelander. There are local ranger district offices in Eagle River, Florence, Lakewood, and Laona. Bose Lake Hemlock Hardwoods and the Franklin Lake Campground are located in the Nicolet. Lying within the Nicolet are three wildernesses—the Blackjack Springs Wilderness, the Headwaters Wilderness, and the Whisker Lake Wilderness.
Flora, fauna, and funga
Remote areas of uplands, bogs, wetlands, muskegs, rivers, streams, pine savannas, meadows and many glacial lakes are found throughout these forests. Native tree species include Acer saccharum (sugar maple), Acer rubrum (red maple), and Acer spicatum (mountain maple), white, red, and black oaks, aspen, beech, basswood, sumac, and paper, yellow, and river birch. Coniferous trees, including red, white, and jack pine, white spruce and balsam fir are abundant due to a dense second growth. Eastern hemlock are also present as this is the westernmost limit of its distribution. Tamarack/black spruce bogs, cedar swamps and alder thickets are common. Blueberries, raspberries, blackberries, cranberries, serviceberries, ferns, mosses, cattails, and mushrooms also grow here, as well as many more shrubs and wildflowers.
White-tailed deer are numerous and are hit by motorists on roads in northern Wisconsin year-round. Black bears, foxes, raccoons, rabbits, beavers, river otters, squirrels, chipmunks, pheasants, grouse and wild turkeys are popular game in the woods. Elk have been reintroduced, wolves have returned to the state on their own, and there have been sightings of moose and pine marten. Bird species include northern cardinal, blue jay, Canada jay, common raven, boreal and black-capped chickadees, black-backed and pileated woodpeckers, red-winged blackbirds, owls, ducks, common loons, bald eagles, evening grosbeaks, red and white-winged crossbills and many species of thrushes, sparrows and warblers. Brook trout, rainbow trout, and brown trout are found in many miles of excellent streams. Walleye, smallmouth and largemouth bass, crappie, northern pike, and many species of panfish make the area's lakes famous for freshwater fishing. A record-setting muskellunge, Wisconsin's state fish, was caught in these waters. The beauty, heritage, and recreational opportunities of these forests draw thousands of tourists to the Chequamegon–Nicolet area every year.
These national forests are best known for recreation, including camping, hiking, fishing, cross country skiing, and snowmobiling.
Clam Lake in Chequamegon National Forest was also home to one of the two extremely low frequency antennae in the United States.
Gallery
See also
List of national forests of the United States
Lake Namakagon
References
External links
History on official website
National forests of Wisconsin
Parks in Wisconsin
Old-growth forests
Civilian Conservation Corps in Wisconsin
Protected areas of Ashland County, Wisconsin
Protected areas of Bayfield County, Wisconsin
Protected areas of Price County, Wisconsin
Protected areas of Sawyer County, Wisconsin
Protected areas of Taylor County, Wisconsin
Protected areas of Vilas County, Wisconsin
Protected areas of Florence County, Wisconsin
Protected areas of Forest County, Wisconsin
Protected areas of Oconto County, Wisconsin
Protected areas of Langlade County, Wisconsin
Protected areas of Oneida County, Wisconsin
1933 establishments in Wisconsin
Protected areas established in 1933 | Chequamegon–Nicolet National Forest | Biology | 1,040 |
59,361,584 | https://en.wikipedia.org/wiki/Bor-ming%20Jahn | Bor-ming Jahn (; 24 August 1940 – 1 December 2016) was a Taiwanese-French geochemist.
Life and career
Born in Miaoli, Taiwan, on 24 August 1940, Jahn graduated from Hsinchu Senior High School and attended National Taiwan University, where, in 1963, he earned a bachelor's degree in geology. He obtained a master's degree in geochemistry from Brown University in 1967, and completed a Ph.D. at the University of Minnesota in 1972.
After postdoctoral work and further research at NASA and the Lunar Science Institute, Jahn moved to France and joined the University of Rennes I faculty in 1976. Jahn acquired French nationality in May 1980. In 2003, he returned to Taiwan, serving as distinguished research fellow affiliated with the Institute of Earth Sciences, Academia Sinica from August 2004 to 2010. He left Academia Sinica to take an appointment at NTU, as distinguished chair professor of the department of geosciences. Between 2006 and 2016, Jahn was chief editor of the Journal of Asian Earth Sciences.
Over the course of his career, Jahn was granted fellowship by the Mineralogical Society of America and Geological Society of America in 2004, followed by the Geochemical Society and European Association of Geochemistry in 2006. In 2012, Jahn was elected a member of Academia Sinica. The next year, the French government named Jahn a chevalier of the ordre des Palmes Académiques. In 2016, the Geological Society of America awarded Jahn honorary fellow status. He died on 1 December 2016, at the Taipei Veterans General Hospital.
References
1940 births
2016 deaths
21st-century Taiwanese scientists
Taiwanese geochemists
Taiwanese emigrants to France
Naturalized citizens of France
French geochemists
National Taiwan University alumni
Academic staff of the National Taiwan University
Academic staff of the University of Rennes
Members of Academia Sinica
Fellows of the Geological Society of America
Academic journal editors
Chevaliers of the Ordre des Palmes Académiques
People from Miaoli County
20th-century Taiwanese scientists
20th-century geologists
21st-century geologists
20th-century French chemists
21st-century French chemists
Brown University alumni | Bor-ming Jahn | Chemistry | 437 |
5,305,778 | https://en.wikipedia.org/wiki/Monopotassium%20glutamate | Monopotassium glutamate (MPG) is the compound with formula KC5H8NO4. It is a potassium salt of glutamic acid.
It has the E number E622 and is used in foods as a flavor enhancer. It is a non-sodium MSG alternative.
See also
Monoammonium glutamate
Glutamates
Potassium compounds
Flavor enhancers
E-number additives | Monopotassium glutamate | Chemistry | 90 |
9,908,503 | https://en.wikipedia.org/wiki/Core%20%28graph%20theory%29 | In the mathematical field of graph theory, a core is a notion that describes behavior of a graph with respect to graph homomorphisms.
Definition
A graph G is a core if every homomorphism f : G → G is an isomorphism, that is, a bijection of the vertices of G.
A core of a graph G is a graph C such that
there exists a homomorphism from G to C,
there exists a homomorphism from C to G, and
C is minimal with this property.
Two graphs are said to be homomorphism equivalent or hom-equivalent if they have isomorphic cores.
Examples
Any complete graph is a core.
A cycle of odd length is a core.
A graph G is a core if and only if the core of G is equal to G.
Every two cycles of even length, and more generally every two bipartite graphs with at least one edge, are hom-equivalent. The core of each of these graphs is the two-vertex complete graph K2.
By the Beckman–Quarles theorem, the infinite unit distance graph on all points of the Euclidean plane or of any higher-dimensional Euclidean space is a core.
Properties
Every finite graph has a core, which is determined uniquely, up to isomorphism. The core of a graph G is always an induced subgraph of G. If there are homomorphisms G → H and H → G, then the graphs G and H are necessarily homomorphically equivalent.
Computational complexity
It is NP-complete to test whether a graph has a homomorphism to a proper subgraph, and co-NP-complete to test whether a graph is its own core (i.e. whether no such homomorphism exists).
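Given this hardness, small instances can still be checked by brute force. The following Python sketch (illustrative only; exponential time, usable only for tiny graphs) tests whether a finite graph is its own core by searching for a non-injective endomorphism, which for finite graphs exists exactly when some endomorphism is not an automorphism:

```python
from itertools import product

def is_homomorphism(edges, mapping):
    """Check that mapping (dict vertex -> vertex) sends every edge to an edge."""
    return all((mapping[u], mapping[v]) in edges or (mapping[v], mapping[u]) in edges
               for u, v in edges)

def is_core(vertices, edges):
    """True if the graph has no homomorphism onto fewer of its own vertices."""
    for image in product(vertices, repeat=len(vertices)):
        mapping = dict(zip(vertices, image))
        if len(set(image)) < len(vertices) and is_homomorphism(edges, mapping):
            return False  # found a non-injective endomorphism
    return True

# A 4-cycle is bipartite and maps onto a single edge, so it is not a core;
# a triangle (K3) is a core.
c4 = ([0, 1, 2, 3], {(0, 1), (1, 2), (2, 3), (3, 0)})
k3 = ([0, 1, 2], {(0, 1), (1, 2), (0, 2)})
print(is_core(*c4))  # False
print(is_core(*k3))  # True
```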
References
Godsil, Chris, and Royle, Gordon. Algebraic Graph Theory. Graduate Texts in Mathematics, Vol. 207. Springer-Verlag, New York, 2001. Chapter 6 section 2.
.
.
Graph theory objects | Core (graph theory) | Mathematics | 355 |
6,792,457 | https://en.wikipedia.org/wiki/Girl%20%28Chinese%20constellation%29 | The Girl mansion (女宿, pinyin: Nǚ Xiù) is one of the Twenty-eight mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise.
Asterisms
Notes
Chinese constellations | Girl (Chinese constellation) | Astronomy | 50 |
62,009,127 | https://en.wikipedia.org/wiki/Patented%20track%20crane | A patented track crane is a crane with a bottom flange of hardened steel and a raised tread to improve rolling.
History
In 1867, William Louden was issued a patent for a hay carrier. Rerolled from old car rails, this system handled loads of approximately and was suspended by hairpin-shaped hanger rods nailed to the exposed barn rafters. There were a few industrial applications of this product during World War I, but Louden Machinery did not pursue the industrial applications after the war. Earl T. Bennington, an electric motor salesman, had installed some of the Louden systems during World War I. Realizing the sales potential of motor propelled systems, he convinced Cleveland Electric Tramrail to enter the industry. Two years later, a line of underhung cranes and monorails were developed and marketed by Cleveland Electric Tramrail. The company's rapid success in the industry caused Louden to re-enter the market they had created and previously abandoned.
In 1925, two Louden executives, J. P. Lawrence and Frank Harris, resigned from the company to form American Monorail Company. From 1923 to 1948, these three companies (Louden, Tramrail, and American) held a virtual oligopoly in the market of underhung cranes and monorails. In 1947, Spencer and Morris, Cleveland Tramrail's Southern California representative, was acquired by the Whiting Corporation. S&M had been Cleveland Tramrail's representative for 23 years, but had begun to manufacture equipment identical to Cleveland Tramrail's during World War II. Soon after, in 1950, Spanmaster was created as a product of Angelus Engineering Corporation in South Gate, California.
In the late 1920s, Vern G. Ellen Company was formed as a dealer and installer of American Monorail Company Equipment. After the death of Ellen in 1957, the company was purchased by Frank Griswold, who ran the company in its purchased form until 1958, when he lost access to the American Monorail product line. On May 1, 1959, the Twin City Monorail Company was formed. In 1968, the assets of Twin City Monorail were sold to Dyson-Kissner Corporation, which operated Twin City Monorail until 1971, when they were acquired by Robbins & Myers. They were later purchased in March 1982 by Lague Enterprises, Inc. (LEI). In 1990, TC/American Monorail was formed by the merger of Twin City Monorail and American Monorail under the ownership of LEI. In October 1990, Spanmaster, a division of the Jervis B. Webb Company, was acquired and became part of TC/American Monorail.
Characteristics
Patented track rails are engineered specifically for overhead cranes and monorails. Unlike a symmetrical structural rail, the material in a patented track rail is placed where it is most effective allowing for a significant reduction in weight. The rails are engineered to be twice as strong as typical A-36 structural beams and have a hardened, raised tread track, providing a longer life and reduced wear on the wheels. Utilizing patented track rails also significantly eases the installation process. The rails are inspected and straightened in factories, which reduces the need to manipulate the beams during installation and startup. In most cases, there is no welding involved in the installation process. All splices are joined with bolted splice joints. Also, rails are cut with a slight taper on the ends, which allows for tight joints at the bottom of a splice allowing for a smooth transition between beams. Patented track rails were also designed specifically to be supported from the building. Not requiring a duplicate structure or columns allows for increased flexibility when maneuvering material.
Applications
Due to the strength, versatility, reliability, and prolonged life of the patented track rail, there are many applications where patented track rails are preferred over structural beams.
References
Further reading
ANSI MH27.2-2017 - Enclosed Track Underhung Cranes and Monorail Systems
ASME B30.11 - Monorail and Underhung Cranes
ASME B30.16 - Overhead Hoists (Underhung)
ASME B30.20 - Below-the Hook Lifting Devices
ASME HST-1 - Performance Standard for Electric Chain Hoist
ASME HST-2 - Performance Standard for Hand Chain Manually Operated Chain Hoists
ASME HST-4 - Performance Standard for Overhead Electric Wire Hoists
ASME HST-5 - Performance Standard for Air Chain Hoists
ASME HST-6 - Performance Standard for Air Wire Rope Hoists
ANSI Z535.4 - Product Safety Signs and Labels
NFPA 70 - National Electric Code
AISC - AISC manual of Steel Construction: Load and Resistance Factor Design
AISC - AISC manual of Steel Construction: Allowable Stress Design
NEMA ICS 6 - Industrial Controls and Systems: Enclosures
ANSI/AWS D1.1 - Structural Welding Code-Steel
ANSI/AWS D14.1 - Specification for Welding of Industrial and Mill Cranes and other Material Handling Equipment
ASTM E2349-05 - Safety Requirements in Metal Casting Operations: Sand Preparation, Molding and Core Making; Melting and Pouring, and Cleaning & Finishing
Machinery
Cranes by type | Patented track crane | Physics,Technology,Engineering | 1,053 |
18,966,340 | https://en.wikipedia.org/wiki/Main%20sequence%20turnoff | The turnoff point for a star refers to the point on the Hertzsprung–Russell diagram where it leaves the main sequence after its main fuel is exhausted; this point is known as the main sequence turnoff.
By plotting the turnoff points of individual stars in a star cluster one can estimate the cluster's age.
Stars with no turnoff point
Red dwarfs, also referred to as class M stars, are stars of . They have sufficient mass to sustain hydrogen-to-helium fusion via the proton-proton chain reaction, but they do not have sufficient mass to create the temperatures and pressures necessary to fuse helium into carbon, nitrogen or oxygen (see CNO cycle). However, all their hydrogen is available for fusion, and their low temperature and pressure mean a lifetime measured in trillions of years. For example, the lifespan of a star of 0.1 solar masses is six trillion years. This lifespan greatly exceeds the current age of the universe, and therefore all red dwarfs are still main sequence stars. Even though they are extremely long lived, these stars will eventually run out of fuel. Once all the available hydrogen has been fused, stellar nucleosynthesis stops, and the remaining helium slowly cools by radiation. Gravity contracts the star until electron degeneracy pressure compensates, and it goes off the main sequence, i.e. becomes a white dwarf.
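These lifetimes follow from the steep mass-luminosity relation. A rough Python sketch using the common textbook scaling t ≈ 10 Gyr (M/Msun)^(-2.5) (an approximation; fully convective red dwarfs can burn nearly all of their hydrogen and so live even longer than this scaling suggests, consistent with the six-trillion-year figure above):

```python
def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime in Gyr: t ~ 10 Gyr * (M/Msun)**-2.5."""
    return 10.0 * mass_solar ** -2.5

for m in (2.0, 1.0, 0.5, 0.1):
    print(f"M = {m} Msun -> ~{main_sequence_lifetime_gyr(m):,.0f} Gyr")
# M = 2.0 -> ~2 Gyr, M = 1.0 -> ~10 Gyr, M = 0.5 -> ~57 Gyr, M = 0.1 -> ~3,162 Gyr
```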
References
Stellar evolution | Main sequence turnoff | Physics,Astronomy | 272 |
29,277,420 | https://en.wikipedia.org/wiki/SAFSTOR | SAFSTOR is a nuclear decommissioning method in which a nuclear power plant or facility governed by the United States Nuclear Regulatory Commission, is "placed and maintained in a condition that allows the facility to be safely stored and subsequently decontaminated (deferred decontamination) to levels that permit release for unrestricted use".
During SAFSTOR, the de-fuelled plant is monitored before being completely decontaminated and dismantled to a condition where a nuclear licence is no longer required. The decommissioning must be completed within 60 years of the plant ceasing operations.
During the storage interval, some of the radioactive contaminants of the reactor and power plant will decay, which will reduce the quantity of radioactive material to be removed during the final decontamination phase.
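As a single-nuclide illustration of why deferral reduces the decontamination burden (real plant inventories contain many nuclides with very different half-lives), consider cobalt-60, a major activation product in reactor steel with a half-life of about 5.27 years:

```python
import math

def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radionuclide's activity remaining after time t."""
    return math.exp(-math.log(2) * t_years / half_life_years)

# Co-60 (half-life ~5.27 y) over a 50-year SAFSTOR dormancy period
print(f"{remaining_fraction(50, 5.27):.2e}")  # ~1.4e-03, a roughly 700-fold reduction
```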
Levels
Different sub-levels of SAFSTOR are recognized, which vary in the type of activity and monitoring required.
In "hot/cold standby" the plant is kept in operating condition but not actively delivering power; monitoring and maintenance is similar to that during a long outage. This may be the first step to allow planning of further shutdown and decommissioning.
In "custodial SAFSTOR" systems such as radiation monitoring and ventilation are kept in operation, along with continuous site security and maintenance. Minimal initial decontamination is done.
"Passive SAFSTOR" requires a more thorough initial clean up, but allows only intermittent inspection of the site and shutdown of active systems such as radiation monitoring.
"Hardened SAFSTOR" prevents intrusion on contaminated parts of the plant by substantial barriers.
All varieties of SAFSTOR require positive action to decontaminate the site at the end of the storage period.
Alternative options
The other options set by the NRC are nuclear decommissioning (DECON), which is the immediate dismantling of the plant and remediation of the site, and nuclear entombment (ENTOMB), which is the enclosure of contaminated parts of the plant in a permanent layer of concrete. Mixtures of options may be used; for example, immediate removal of steam turbine components and condensers, and SAFSTOR for the more heavily radioactive containment vessel. Because the NRC requires decommissioning to be completed within 60 years, ENTOMB is not usually chosen, since not all activity will have decayed to an unregulated background level in that time.
Decommissioning options for a retired nuclear plant may be chosen based on availability of decommissioning funds, operation of other reactors at the same site, or availability of waste disposal facilities. In 2004, 11 reactors were planned for DECON and 9 for SAFSTOR. In 2008, 14 shutdown commercial power reactors were planned for or had completed DECON, 11 were in SAFSTOR, 3 were in ENTOMB and Three Mile Island unit 2 was defuelled and will be decontaminated when Unit 1 ceases operation.
See also
Environmental effects of nuclear power
Nuclear power debate
References
Nuclear power in the United States
Power | SAFSTOR | Technology | 624 |
60,513,764 | https://en.wikipedia.org/wiki/Digital%20media%20in%20education | Digital media in education refers to an individual's ability to access, analyze, evaluate, and create media content and communication in various forms. This includes the use of multiple digital software applications, devices, and platforms as tools for learning. The integration of digital media in education has increased over time, rivaling books as a primary means of communication and gradually transforming traditional educational practices.
History
20th century
Technological advances, including the invention of the Internet in the late 20th century, introduced the possibility of incorporating technology into education. In the early 1900s, the overhead projector was used as an educational tool, along with on-air classes available via radio. The first use of computers in classrooms occurred in 1950, when a flight simulation program was used to train pilots at the Massachusetts Institute of Technology. However, access to computers remained extremely limited. In 1964, researchers John Kemeny and Thomas Kurtz developed a new computer language called BASIC, which was easier to learn and popularized time-sharing, enabling multiple students to use a computer simultaneously. By the 1980s, schools began to show more interest in computers as companies released mass-market devices to the public. Networking further facilitated the connection of computers into a single communication system, which was both more efficient and cost-effective than previous stand-alone machines, prompting widespread adoption in schools.
By 1999, 99% of public school teachers in the United States reported access to at least one computer in their schools, and 84% had access to a computer in their classroom. The release of the World Wide Web to the public in the early 1990s simplified internet navigation and sparked further interest in educational settings. Computers were initially integrated into school curricula for tasks such as word processing, spreadsheet creation, and data organization. By the late 1990s, the Internet became a research tool, functioning as a vast library resource.
The World Wide Web also led to the development of learning management systems, which allowed educators to create online teaching environments for content storage, student activities, discussions, and assignments. Advances in digital compression and high-speed Internet made video creation and distribution more affordable, contributing to the rise of systems designed for recording lectures. These systems were often incorporated into learning management platforms, supporting the growth of fully online courses.
21st century
By 2002, the Massachusetts Institute of Technology began offering recorded lectures to the public, marking a significant step toward accessible online education. The creation of YouTube in 2005 further revolutionized educational content distribution. Many educators started uploading lectures and instructional videos, with platforms like Khan Academy, which began posting on YouTube in 2006, helping to establish the site as a valuable educational tool. In 2007, Apple launched iTunesU, another platform for sharing educational resources and videos. Meanwhile, learning management systems gained popularity, with Blackboard and Canvas becoming two of the most widely used platforms after Canvas's release in 2008. That same year saw the introduction of the first Massive Open Online Course (MOOC), which offered webinars and expert posts accessible to anyone.
As technology evolved, traditional projectors were gradually replaced by interactive whiteboards, which enabled teachers to integrate digital tools more effectively in their classrooms. By 2009, 97% of U.S. classrooms had at least one computer, and 93% had Internet access.
The COVID-19 pandemic, which forced schools across the world to close, significantly impacted education as schools shifted to distance education. Students attended classes remotely using devices such as laptops, phones, and tablets, utilizing digital platforms as tools for creating at-home learning environments.
Some schools faced challenges in adapting assessments and exams to the new learning environment. In a study by Eddie M. Mulenga and José M. Marbán on Zambian students during the pandemic, students struggled to adapt to online learning in subjects like mathematics, as they were unprepared for the unfamiliar digital platforms. Similar issues were observed among students in Romania, where the transition to virtual learning presented significant obstacles in engagement and adaptation.
References
education
Educational technology | Digital media in education | Technology | 798 |
17,372,184 | https://en.wikipedia.org/wiki/Reference%20designator | A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board. The reference designator usually consists of one or two letters followed by a number, e.g. C3, D1, R4, U15. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B. The IEEE 315 standard contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays.
Industrial electrical installations often use reference designators according to IEC 81346.
History
IEEE 200-1975, "Standard Reference Designations for Electrical and Electronics Parts and Equipments", was a standard that defined reference designation systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975.
This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry.
To replace IEEE 200-1975, ASME, a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard and IEEE 315-1975 together provide the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures.
Definition
ASME Y14.44-2008 and IEEE 315-1975 define how to reference and annotate components of electronic devices.
It breaks down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcations are called assemblies and always have the class letter "A" as a prefix followed by a sequential number starting with 1. Any number of sub-assemblies may be defined until finally reaching the component. Note that IEEE 315-1975 defines separate class designation letters for separable assemblies (class designation 'A') and inseparable assemblies (class designation 'U'). Inseparable assemblies—i.e., "items which are ordinarily replaced as a single item of supply"—are typically treated as components in this referencing scheme.
Examples:
1A12A2R3 - Unit 1, Assembly 12, Sub-assembly 2, Resistor 3
1A12A2U3 - Unit 1, Assembly 12, Sub-assembly 2, Inseparable Assembly 3
Especially valuable is the method of referencing and annotating cables and their connectors within and outside assemblies.
Examples:
1A1A44J5 - Unit 1, Assembly 1, Sub-Assembly 44, Jack 5 (J5 is a connector on a box referenced as A44)
1A1A45J333 - Unit 1, Assembly 1, Sub-Assembly 45, Jack 333 (J333 is a connector on a box referenced as A45)
A cable connecting these two might be:
1A1W35 - Unit 1, Assembly 1, Cable 35 (a cable designated W35 within assembly A1)
Connectors on this cable would be designated:
1A1W35P1
1A1W35P2
ASME Y14.44-2008 continues the convention of Plug P and Jack J when assigning references for electrical connectors in assemblies where a J (or jack) is the more fixed and P (or plug) is the less fixed of a connector pair, without regard to the gender of the connector contacts.
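The hierarchical construction above lends itself to simple mechanical parsing. The following minimal Python sketch (a hypothetical illustration, not part of ASME Y14.44 or IEEE 315) splits a full reference designator into its optional leading unit number and its successive class-letter/number levels:

```python
import re

# Pattern for one hierarchy level: one or more class letters followed
# by a sequence number, e.g. "A12", "R3", "W35", "P2".
LEVEL_RE = re.compile(r"([A-Z]+)(\d+)")

def parse_designator(designator):
    """Split e.g. '1A12A2R3' into (unit, [(class_letters, number), ...])."""
    unit_match = re.match(r"\d+", designator)  # optional leading unit number
    unit = int(unit_match.group()) if unit_match else None
    rest = designator[unit_match.end():] if unit_match else designator
    levels = [(cls, int(num)) for cls, num in LEVEL_RE.findall(rest)]
    return unit, levels

print(parse_designator("1A12A2R3"))   # (1, [('A', 12), ('A', 2), ('R', 3)])
print(parse_designator("1A1W35P2"))   # (1, [('A', 1), ('W', 35), ('P', 2)])
```

Reading the second result back against the conventions above: unit 1, assembly 1, cable W35, plug P2.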
The construction of reference designators is covered by IEEE 200-1975/ANSI Y32.16-1975 (replaced by ASME Y14.44-2008) and IEEE 315-1975.
Designators
Commonly used designators, which do not necessarily comply with the standards, include R (resistor), C (capacitor), L (inductor), D (diode), Q (transistor), U (integrated circuit or inseparable assembly), K (relay), S (switch), F (fuse), T (transformer), J (jack, the more fixed connector), P (plug, the less fixed connector), Y (crystal or oscillator), TP (test point), and BT (battery). In modern use, designators are often shortened, because shorter designators require less space on silkscreens.
Other designators
See also
Circuit diagram
Electronic symbol
References
Further reading
AS 1103.2-1982 - "Diagrams charts and tables for electrotechnology, Part 2: Item Designation" (Superseded by AS 3702-1989.)
AS 3702-1989 - "Item designation in electrotechnology". (Equivalent to IEC 60750 Edition 1.0, 1983.)
IEC 113 (Superseded by IEC 750, i.e. IEC 60750.)
IEC 750-1983 (AS 3702 is equivalent, but provides extra information.)
IEEE 315-1975 / ANSI Standard Y32.2. Annex F: "Cross reference list of Class Designation Letters" compares IEC 113-2:1971 to the IEEE/ANSI standard. * AS 1102 and IEC 60617 for "Graphical Symbols for Electrotechnology".
Electronic engineering | Reference designator | Technology,Engineering | 1,019 |
8,187,204 | https://en.wikipedia.org/wiki/Orally%20disintegrating%20tablet | An orally disintegrating tablet or orally dissolving tablet (ODT) is a drug dosage form available for a limited range of over-the-counter (OTC) and prescription medications. ODTs differ from traditional tablets in that they are designed to be dissolved on the tongue rather than swallowed whole. The ODT serves as an alternative dosage form for patients who experience dysphagia (difficulty in swallowing) or for patients whose compliance is a known issue, since an easier-to-take dosage form helps ensure that medication is taken. Common among all age groups, dysphagia is observed in about 35% of the general population, as well as up to 60% of the elderly institutionalized population and 18–22% of all patients in long-term care facilities.
ODTs may have a faster onset of effect than tablets or capsules, and have the convenience of a tablet that can be taken without water. During the last decade, ODTs have become available in a variety of therapeutic markets, both OTC and by prescription.
History
Tablets designed to dissolve on the buccal (cheek) mucous membrane were a precursor to the ODT. This dosage form was intended for drugs that yield low bioavailability through the digestive tract but are inconvenient to administer parenterally, such as steroids and narcotic analgesics. Absorption through the cheek allows the drug to bypass the digestive tract for rapid systemic distribution. Not all ODTs have buccal absorption and many have similar absorption and bioavailability to standard oral dosage forms with the primary route remaining GI absorption. However, a fast disintegration time and a small tablet weight can enhance absorption in the buccal area. The first ODTs disintegrated through effervescence rather than dissolution, and were designed to make taking vitamins more pleasant for children. This method was adapted to pharmaceutical use with the invention of microparticles containing a drug, which would be released upon effervescence of the tablet and swallowed by the patient. Dissolution became more effective than effervescence through improved manufacturing processes and ingredients (such as the addition of mannitol to increase binding and decrease dissolution time). Catalent Pharma Solutions (formerly Scherer DDS) in the U.K., Cima Labs and Fuisz Technologies (whose founder Richard Fuisz went on to pioneer orally soluble films, a separate but related dosage form) in the U.S. and Takeda Pharmaceutical Company in Japan led the development of ODTs.
The first ODT form of a drug to get approval from the U.S. Food and Drug Administration (FDA) was a Zydis ODT formulation of Claritin (loratadine) in December 1996. It was followed by a Zydis ODT formulation of Klonopin (clonazepam) in December 1997, and a Zydis ODT formulation of Maxalt (rizatriptan) in June 1998. The regulatory condition for meeting the definition of an orally disintegrating tablet is USP method 701 for disintegration. FDA guidance issued in December 2008 states that ODT drugs should disintegrate in less than 30 seconds. This practice is under review by the FDA, as the fast disintegration time of ODTs makes the disintegration test too rigorous for some of the ODT formulations that are commercially available.
Manufacturing/packaging
The processes used to manufacture orally disintegrating tablets include loose compression tabletting, a process not very different from the method used for traditional tablets, and lyophilization. In loose compression, ODTs are compressed at much lower forces (4–20 kN) than traditional tablets. However, since ODTs are compressed at very low forces so that they remain soft enough to disintegrate rapidly in the mouth, material sticking to the die walls can be a challenge. Typically, as in most tablet blends, lubricants such as magnesium stearate are added to the blend to reduce the amount of material that may stick to the die wall. Other differences may include the use of disintegrating aids, such as crospovidone, and binding agents that improve mouth feel, such as microcrystalline cellulose. Primarily, ODTs contain some form of sugar such as mannitol, which typically serves as the major diluent and is also the primary contributor to the smooth and creamy mouth feel of most ODTs. Lyophilized ODT formulations may use proprietary technologies but can produce a tablet with a faster disintegration rate; for example, the Zydis ODT typically dissolves in the mouth in less than 5 seconds without water, and lyophilized (freeze-dried) ODTs in general dissolve in the mouth within a few seconds, depending on the molecule and strength.
ODTs are available in HDPE bottles (Parcopa) or individually sealed in blister packs to protect the tablets from damage, moisture, and oxidation. Because ODTs are soft in nature, successfully packaging an ODT in a bottle is difficult. However, CIMA Labs markets its Durasolv ODT as suitable for packaging in bottles for commercial sale, while CIMA's Orasolv is marketed for blisters only. Zydis ODT tablets manufactured by Catalent Pharma Solutions and lyophilized freeze-dried tablets manufactured by Galien-LPS are delivered in blister packs. The differences between the two CIMA products are proprietary; however, the primary difference is believed to be the use of microcrystalline cellulose (MCC), such as Avicel PH101, in the Durasolv product. MCC serves multiple purposes in an ODT, but in the case of CIMA's products it acts as a binder, increasing the internal strength of the tablet and making it more robust for packaging in bottles.
ODTs currently or previously available
Advantages of ODTs
Ved Parkash et al. note the following advantages of ODTs:
they are easy to consume and as such are convenient for such patients as "the elderly, stroke victims, bedridden patients, patients affected by kidney failure, and people who refuse to swallow, such as pediatric, geriatric, and psychiatric patients";
increased bioavailability (rapid absorption) due to pregastric absorption;
they do not require water to consume and are thus suitable for "disabled, bedridden patients, and for travelers and busy people who do not always have access to water";
good mouth feel;
improved safety due to low risk of choking or suffocation during oral administration.
Disadvantages of ODTs
Ved Parkash et al. list the following disadvantages of ODTs:
unpleasant taste;
cost-intensive production process;
lack of physical resistance in standard blister packs;
limited ability to incorporate higher concentrations of active drug.
ODTs under development
See also
Phagophobia - fear of swallowing
Pnigophobia - fear of choking
Sugar alcohol - a family of chemicals common in ODTs to enhance the mouth feel of the tablet as it disintegrates
References
Food and Drug Administration
Drug delivery devices
Dosage forms | Orally disintegrating tablet | Chemistry | 1,538 |
16,044,423 | https://en.wikipedia.org/wiki/Limp%20binding | Limp binding is a bookbinding method in which the book has flexible cloth, leather, vellum, or (rarely) paper sides. When the sides of the book are made of vellum, the bookbinding method is also known as limp vellum.
The cover is made with a single piece of vellum or alternative material, folded around the textblock, the front and back covers being folded double. The quires are sewn onto sewing supports such as cords or alum-tawed thongs and the tips of the sewing supports would be laced into the cover. The thongs could also be used at the fore edge of the covers to create a closure or tie.
In limp binding the covering material is not stiffened by thick boards, although paste-downs, if used, provide some stiffness; some limp bindings are only adhered to the back of the book. Some limp vellum bindings had yapp edges that flop over to protect the textblock.
Usage
Limp vellum bindings for commonplace books were being produced at least as early as the 14th century and probably earlier, but the style did not become common until the 16th and 17th centuries. Its usage subsequently declined until "revived by the private presses near the end of the 19th century". From about 1775 to 1825, limp leather was commonly used for pocket books, but by the 1880s limp bindings came to be largely restricted to devotional books, diaries, and sentimental verse, sometimes with yapp edges. Yapp edges are bent edges on a limp binding that project beyond the textblock to reduce damage. They are often found in editions of the Bible.
References
Bibliography
External links
an online exhibit of the form with an essay on its history
from the University of Texas at Austin School of Information
Bookbinding
Book design
Hides (skin) | Limp binding | Engineering | 365 |
16,959,545 | https://en.wikipedia.org/wiki/IBM%205151 | The IBM 5151 is a 12" transistor–transistor logic (TTL) monochrome monitor, shipped with the original IBM Personal Computer for use with the IBM Monochrome Display Adapter. A few other cards were designed to work with it, such as the Hercules Graphics Card.
The monitor has an 11.5-inch CRT (measured diagonally) with 90-degree deflection, etched to reduce glare, with a resolution of 350 horizontal lines and a 50 Hz refresh rate. It uses TTL digital inputs through a 9-pin D-shell connector and can display at least three brightness levels, depending on the combination of signals on pins 6 and 7. It also plugs into the female AC port on the IBM PC power supply, and thus has no power switch of its own.
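As a rough illustration of how two TTL lines yield several brightness levels, the following Python sketch enumerates the four video/intensity combinations. The pin roles (pin 7 = video, pin 6 = intensity) follow commonly published MDA pinouts; the shade names are illustrative assumptions rather than measured values:

```python
# Hypothetical sketch: brightness selection from the two TTL signals.
# Pin 7 carries the video signal and pin 6 the intensity signal on the
# MDA-style interface; the shade names below are assumptions.
BRIGHTNESS = {
    (0, 0): "black (pixel off)",
    (0, 1): "dim (intensity only; not visible on all monitors)",
    (1, 0): "normal green",
    (1, 1): "bright green (intensified)",
}

for (video, intensity), shade in sorted(BRIGHTNESS.items()):
    print(f"video(pin 7)={video}, intensity(pin 6)={intensity} -> {shade}")
```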
The IBM 5151 uses the P39 phosphor type, producing a bright green monochrome image intended for displaying high-resolution text. This phosphor has high persistence, which decreases display flicker but causes smearing when the image changes.
Specifications
References
External links
Picture of IBM 5151 display in operation (PC Shell shown on screen)
5151
5151 | IBM 5151 | Technology | 245 |